Social Media Censorship on the Rise in the U.S. and Beyond

All forms of media in the United States have long had a complicated relationship with censorship: efforts to block specific, generally controversial, material from public consumption. Whether it is Facebook’s attempts to halt the spread of false information or television networks’ attempts to protect viewers, any measure that blocks certain voices and perspectives from the general public is vulnerable to criticism. Decisions by regulators to censor online material face such disapproval mainly because many perceive them as suppressing a First Amendment right of all Americans: freedom of speech. Given the profound effect online interactions have on the current state of global affairs, the world’s governments and social media giants alike have begun to make difficult decisions about which forms of online expression should remain acceptable. Efforts to combat the spread of dangerous sentiments, however, are subject to close scrutiny due to the limits they place on people’s ability to express themselves.

Recently, U.S. President Donald Trump has taken to Twitter, vowing to “monitor the censorship” that has become a favoured tactic of social media giants in their fight against misinformation, hate speech, and other content deemed potentially harmful. In the weeks following these remarks, the White House launched a website encouraging Americans to report instances in which social platforms’ policies have shown “political bias” and thereby hindered individuals’ ability to voice their beliefs. Trump has long criticized major social media sites, frequently alleging that they are unfairly biased against those promoting a conservative ideology. A change in content regulation practices announced by Facebook, America’s largest social media corporation, prompted Trump’s most recent remarks on the state of U.S. media censorship. While the change did not overtly target Trump or his party, it did affect those championing right-wing viewpoints above all others, as the accounts of controversial far-right figures and organizations have been the targets of recent bans.

The controversial announcement which prompted Trump’s comments revealed that Facebook would more strictly police “dangerous” content across both Facebook and Instagram. As a result, Facebook permanently banned several prominent far-right figures and organizations from its platforms. The corporation stated that the decision was made because the banned accounts promoted hate and violence, not because of the ideological views they expressed. Trump, however, expressed his dismay, perceiving the move as an indication that “it’s getting worse and worse for conservatives on social media.” Other prominent conservative figures have voiced comparable concerns, signalling that many within that community see the new regulations as targeting them.

Far-right figure Milo Yiannopoulos was one of the users Facebook labelled as “dangerous.” 

Regardless of who the move toward censorship has ultimately affected, it is important to consider Facebook’s reasoning. The decision to ban accounts promoting “violence” and “dangerous ideas” reflects the corporation’s responsibility to monitor its platforms’ content in light of the modern global political context. Whatever its overt purpose, the network, like other forms of public media, is an incredibly powerful tool for the widespread dispersal of information and ideas, from the accepted to the extreme. By virtue of its ubiquity and scope alone, social media fosters much hateful and violent discussion. Perpetrators of recent terror attacks, such as the March 2019 shootings in Christchurch, New Zealand, have used the platform to advance their goals; using Facebook as a host, the perpetrator of that attack was able to broadcast the horrific act around the world in real time. Banning people and organizations that normalize and promote the ideologies behind such acts is one way for Facebook and its peers to protect both the general public and themselves from future attacks and their ramifications. Taking a public stance against dangerous ideas also helps the company divert blame from itself, even when its product has provided a stage for the resulting violence.

New Zealand PM Jacinda Ardern has been a vocal supporter of enhanced social media regulation.

Swift public action is essential in these instances; when social media giants are thrust into the public eye because of their role (however unwilling) in the execution of violent acts, questions inevitably arise about the companies and what they could have done differently. In the aftermath of the Christchurch attacks, New Zealand Prime Minister Jacinda Ardern was vocal in urging social media platforms to adopt stronger measures against extremism, such as those recently announced by Facebook. Such solutions, however, are not always left to the private sector to deliberate and implement: governments often limit internet communications during uncertain moments or periods of strife, because the immediacy and boldness of online discourse can easily make troubling sentiments more prevalent and accepted. Following the April 2019 bombings in Sri Lanka, the country blocked several social media platforms, including Facebook, in an attempt to dispel the tensions that the violence prompted. The temporary ban was lifted after about a week, only to be reinstated on May 13, as Facebook was being used by some in the country to organize anti-Muslim riots and perpetuate unrest. In Europe as well, governments are increasingly pursuing stricter regulatory policies that limit the content accessible via social media and aim to police dangerous and violent messages.

Finding ways to regulate content appropriately is a daunting task. Facebook’s new, stricter policies attempt to curtail the potential use of the network to foster dangerous beliefs and practices. While the company’s intentions may have been well-founded, it is important to take into account the views of those across the political spectrum who have voiced opposition to the increasingly policed nature of the media. The issue is particularly difficult to parse: the measures recently taken by Facebook and by governments around the world are framed by those taking them as necessary steps towards curtailing misinformation, violence, and terrorism. Conversely, many of those championing Trump’s side of the issue see them as curtailments of freedom of speech and even as deliberate political attempts to drown out certain voices. The situation in the United States is particularly notable, as it reverses the dynamics of media regulation seen around the globe. In America, the government refuses to join other nations in pursuing further limitations on the platforms; rather, social media corporations such as Facebook are themselves adopting policies that regulate the content which can be dispersed through them. This has prompted criticism from prominent political figures, including the president, who argue that the stricter regulation violates freedom of speech.

Facebook has recently taken increasingly strict measures against content it judges to be dangerous. 

Whether Facebook’s recent bans truly represent an orchestrated attempt to undermine those expressing certain political views, as some figures have alleged, or a genuine step in the company’s efforts to nurture a safer online environment, the dangers inherent to such regulation must be addressed. There is a marked difference between decisions to monitor and disallow specifically-outlined content deemed to be dangerous, and decisions taken by some governments to block social media use entirely, allegedly for the common good. However, distinctions between the two are often difficult to define and enforce. In 2018, nearly twenty countries adopted legislation restricting social media under the guise of curtailing the spread of “fake news.” While these laws were certainly effective in limiting such content, they also undoubtedly curbed citizens’ ability to express themselves and limited the power and range of dissenting voices. In Chad, social media has been blocked for over a year, limiting citizens’ ability to protest recent constitutional changes. The political culture in places like Europe, the United States, and New Zealand would likely not allow censorship to be taken to such an extreme, but as content becomes more strictly monitored, more and more avenues of expression are being curtailed. This danger cannot easily be avoided; it is important that those imposing regulations remain cognisant of the risks inherent to overly-strict censorship and make an effort to define a stopping point which aligns with their reasoning for the censorship itself.

The problematic nature of the normalization of censorship has commonly been grounds for criticism of the steps recently taken globally which limit online content. After all, should anyone, or any algorithm, have the power to decide which ideas are acceptable for public consumption and which are dangerous? Of course, terrorism, extremism, and violent attacks should not be enabled, condoned, or supported, either directly or indirectly. However, the universal nature of the internet makes its use for malevolent purposes inevitable. In terms of censorship, then, how far is too far?

Edited by Hannah Judelson-Kelly