Recently there has been much discussion about whether various social media companies should be broken up or regulated for antitrust violations, and about whether they have the right, or the responsibility, to monitor and censor the content of their users’ posts and videos.
Let’s address the first issue: antitrust, or monopolistic behaviour. In terms of market share, there are nearly 14 billion user accounts on social media platforms globally. With a global population of just under 8 billion, and internet usage by just over half of it (around 4 billion users), a lot of people clearly have accounts on multiple platforms. At the end of the day, though, these companies don’t make money directly from having users or from people making posts.
They sell advertising. The three largest internet advertising companies are Facebook, Google, and Amazon; combined, they receive 68% of total internet advertising dollars. If investigators were to find evidence that they were conspiring to set advertising rates, that would be grounds to file an antitrust case, but to date I am not aware of any such allegations. Realistically speaking, although an outsized portion of the market is held by just a few companies, they appear to be acting in competition with each other.
There are, however, several specific practices that could catch regulators’ eyes. Google’s YouTube has engaged in an active campaign of demonetizing content providers’ videos, meaning that it removes advertising from them. This practice has raised eyebrows and may well draw regulatory scrutiny, depending upon what investigators find. Has YouTube been using demonetization to favour certain providers over others? Has YouTube benefited financially by doing this?
These are questions that will be asked, and it is possible that government regulators will impose an external review process to oversee demonetization. Nobody disputes that YouTube should have the right to demonetize videos that violate its policies, but those policies appear to change frequently and to be enforced inconsistently, leaving content producers confused and frustrated.
Google’s search engine has also changed in recent years in ways likely to attract regulators’ attention, with preferential search results given not just to paying customers but also to Google’s own newly offered services that compete with others. Sites like MapQuest, TripAdvisor, and Yelp, among others, have lost traffic to Google’s own services, which now show up first in related searches. Google’s risk of being broken up into separate companies is higher than that of many of the other social media companies, though it might instead simply be subjected to closer oversight and regulation.
The real show, though, is not about market share or anti-competitive behaviour within search algorithms. What people are talking about most is the plague of fake news and the fact-checking it has prompted in certain political circles. The question being asked is this: can the government step in and control how social media sites like Facebook, YouTube, and Twitter choose to censor, or not censor, their users’ content?
This is a very difficult question to address. The obvious answer is that neither Section 2 of Canada’s Charter of Rights and Freedoms nor the First Amendment to the American Constitution (the guarantors of freedom of expression in their respective countries) can be bent to the task of preventing a private corporation from censoring content posted on its website.
The concept of freedom of speech is limited to preventing the government from stopping us from saying something; it is not a hammer to be used to force one private person to support the position of another. And yet this is where things get tricky. When we say freedom of speech, there is a raft of fine print attached to that right. Our speech is not actually unlimited: slanderous or libellous statements about another person, hate speech, and statements that might cause public harm (the classic example being crying “fire!” in a crowded theatre) are already not protected by either of these documents.
I don’t envy Mark Zuckerberg’s position in this situation. In 2016, Facebook allowed its advertising and page content to be mostly a wild west, cracking down only on the most egregious statements made on its platform. In the resulting electoral-interference scandal, Facebook received substantial criticism for not tracking and preventing the abuse. Since then, several platforms, including Twitter and Facebook, have tried to implement practices to reduce the flood of false news stories and questionable advertising.
Now they are receiving criticism for either removing some posts or posting fact-checking entries below questionable stories. Do we want private corporations to be responsible for checking the validity of the information that is being spread on their platforms? Should they be allowed to? The issue is that much of our social dialogue has been shifted to social media and as such might be considered part of the fabric of our society.
Should the government become involved in reviewing such content, or would that be a violation of our guarantees of free speech? Should Facebook and Twitter be able to restrict or comment on speech as they feel? Should the government provide oversight over an open review process that is managed by the companies, or perhaps a third-party agency that could be created to help in this issue? What about the differences in legal requirements in different countries?
A post or article that is legal in the US might be illegal in Germany, for example. Expecting a company like Facebook to evaluate and censor every article and post made to determine if they meet each country’s laws and social mores would be overwhelming.
The question is complex and will take a lot of careful consideration. Unfortunately, the political pressure being applied appears to be very one-sided, structured for the most part around the desires of a single individual. Foreign nations or entities influencing elections in any country is a very serious issue and should be investigated.
Should this be left to each country’s justice system, or should we expect a private company to police it? Do we need a Canadian Facebook, a US Facebook, and so on, so that people can only see things posted in their own country? Would such a balkanization of the social media platforms benefit users? Would it harm our ability to share interesting information? Many of these questions have not been asked about the current scenario, but they should be.
The reality is that we live on a planet with 195 different nations, each with its own laws; even member states of the EU have domestic laws that differ between nations. We are left with only two real solutions. The first is for the social media companies to form an industry association that works with each country to tailor an industry policy the companies can all follow. The second is for each country to update its laws to reflect the novel nature of an industry and form of communication that did not exist when any of these documents were drafted.
The likelihood that all 195 nations would agree on a common set of rules is slight, so I think that the inevitable result will be that the social media companies will be forced to tailor their sites to the laws of each country, as they already do for nations like China.
Éamonn Brosnan is a research associate with the Frontier Centre for Public Policy.