Radicalization and Reproductive Rights: Revising Section 230 in a Post-Roe America
Within just four weeks of Roe v. Wade’s overturn, Google searches for “medication abortion pills” increased by 70 per cent. One week after the Supreme Court’s ruling, “Is abortion legal in Texas?” became the highest trending Google search question in Arkansas, Texas’ neighbouring state, where abortion is prohibited. In a post-Roe America, the digital realm has become a refuge for women seeking abortions, reproductive justice groups, and abortion care providers.
Under Section 230 of the Communications Decency Act, tech companies like Facebook and Instagram cannot be held liable for user-generated content, provided that the content stays within the bounds of federal criminal law. Within the scope of Section 230, tech companies are generally free to decide what kinds of content to remove from their platforms, though they cannot promote or host content that federal law deems illegal. Google, for example, is legally obligated to remove images of child sexual abuse and instances of copyright infringement from its platforms. Crucially, however, Section 230 shields tech companies from liability for infractions of state criminal laws.
Since Roe’s overturn, reproductive justice groups have taken advantage of the freedoms offered by Section 230, turning to online platforms to escape vicious state abortion laws like those in Arkansas. Crowdfunding sites have cropped up across the web to source donations for abortion services, and community-oriented groups sharing abortion resources have sprung up on Reddit and Facebook. Since Section 230 immunizes tech companies against violations of state laws, platforms like GoFundMe can escape criminal liability in the Deep South for abortion crowdfunding efforts that appear on their sites.
Section 230’s future as a safeguard of abortion rights, however, remains tenuous. In recent years, the provision has come under fire from both ends of the political spectrum, with Democrats denouncing Section 230 for enabling the spread of extremist right-wing content and Republicans criticizing it for empowering tech companies to censor user content. In a 2020 statement, Justice Clarence Thomas proposed that the Justices reconsider the correct interpretation of Section 230 should an appropriate case arise. In line with Justice Thomas’ suggestion, the Supreme Court agreed on October 3 to hear two cases invoking Section 230, the outcome of which could lead to the breakdown of the law as we know it.
Section 230 in hot water
The Court’s first case, Gonzalez v. Google LLC, will examine Google’s alleged role in aiding and abetting the November 2015 Paris attacks carried out by the terrorist organization ISIS. According to the plaintiff, whose daughter, Nohemi Gonzalez, died in the attacks, Google cannot escape liability by appealing to Section 230. Through its video recommendation systems, Gonzalez argues, Google’s subsidiary YouTube played an active role in promoting ISIS recruitment videos to its users. By promoting terrorist content on the platform, Gonzalez alleges, Google helped facilitate the radicalization of young militants in violation of the Anti-Terrorism Act, a federal law that empowers individuals to sue any person who “aids and abets” terrorist acts. After the Ninth Circuit Court of Appeals ruled that Section 230 shielded Google from liability, Gonzalez brought his case to the Supreme Court.
The second case, Twitter, Inc. v. Taamneh, is Twitter’s appeal concerning its liability for terrorist content uploaded to its site. As in Gonzalez, the family of Nawras Alassaf, who was killed by ISIS in the 2017 Istanbul nightclub shooting, has accused Twitter, Google, and Facebook of aiding and abetting the attack. According to Alassaf’s relatives, these companies contributed to the growth of ISIS and failed to take a “meaningful” role in curbing terrorist activity on their sites. Here, the Ninth Circuit Court of Appeals – the same court that declined to hold Google liable for the 2015 attack – decided that Twitter, Google, and Facebook could be held legally responsible for aiding and abetting the nightclub shooting because all three companies provided services to ISIS supporters. Following this ruling, Twitter petitioned the Supreme Court to review the appellate court’s decision, arguing that it had enforced anti-terrorist policies by suspending accounts associated with ISIS supporters and removing terrorist content.
In both cases, the Supreme Court’s rulings will have profound consequences for Section 230. In a court filing, Google suggested that a negative judicial outcome in the Gonzalez case – in other words, a ruling that YouTube should be held liable for the attacks – would render Section 230 a “dead letter,” effectively nullifying its importance as a liability shield for online platforms. In a post-Roe America, this could have dangerous implications for reproductive rights.
The pros and perils of revising Section 230
As it currently stands, Section 230 prevents web platforms from being sued or charged for hosting content that violates state abortion laws. If, say, an Arkansan were to start a fundraiser on GoFundMe for an upcoming abortion, any criminal proceedings launched by state prosecutors against GoFundMe would be swiftly dismissed, allowing the fundraising to continue. In a world where Section 230 is invalidated, tech companies could face strings of lawsuits and criminal charges for hosting content that breaches state abortion laws. Rather than allocate resources to fighting those suits, companies would likely choose to comply with state abortion laws, placing abortion activists in precarious positions and enabling state crackdowns on abortion to seep into cyberspace.
At the same time, cases like Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh demonstrate that current moderation methods have failed to regulate harmful user content. In 2016, one year prior to the Istanbul shooting, Twitter reported that it had suspended 125,000 accounts linked to ISIS militants. However, these moderation methods failed to stamp out ISIS’ digital presence entirely. Throughout 2016, ISIS recruitment videos continued to proliferate on the platform, culminating in the Istanbul attack the following year. Tech companies like Twitter have also come to rely primarily on recommendation algorithms to organize users’ feeds. With these algorithms in place, the more a user engages with certain types of content, the more that content will appear on their feed. In turn, these algorithmic systems could fuel the radicalization of young militants by increasing the likelihood of their engagement with terrorist content.
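The engagement feedback loop described above can be sketched as a toy simulation. This is a simple Pólya-urn-style model written for illustration only: the category names, starting weights, and proportional-weighting scheme are assumptions for the sketch, not how YouTube or Twitter actually rank content.

```python
# Toy model of an engagement-driven recommender (illustrative only;
# real platform ranking systems are far more complex and proprietary).
from collections import Counter
import random

random.seed(0)  # deterministic run for reproducibility

CATEGORIES = ["news", "sports", "extremist"]

def recommend(engagement: Counter) -> str:
    """Pick a category with probability proportional to past engagement."""
    total = sum(engagement.values())
    weights = [engagement[c] / total for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights, k=1)[0]

# A user starts with near-uniform interests, tilted by one extra click.
engagement = Counter({"news": 10, "sports": 10, "extremist": 11})

for _ in range(1000):
    shown = recommend(engagement)
    engagement[shown] += 1  # each view feeds back into future weights

print(engagement.most_common())
```

Because every recommendation is drawn in proportion to past views, early clicks compound: the loop tends to lock in whichever interests a user starts with, which is the rich-get-richer dynamic the article attributes to recommendation systems.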
Adopting a revised interpretation of Section 230 will be a delicate balancing act. In both Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, the Supreme Court will need to specify the role that algorithmic systems play in corporate liability and to set clearer guidelines for companies on content moderation. In its current state, Section 230 neither accounts for the role of content recommendation systems in disseminating dangerous content nor specifies the methods that companies must use to remove harmful content from their platforms. At the same time, pro-choice Justices will need to bear in mind the repercussions of invalidating Section 230 entirely. Without any revision, tech companies will have little incentive to take on a greater role in removing dangerous content from their platforms. Yet if Section 230 were struck down entirely, abortion rights groups would likely lose the digital freedoms that have enabled them to survive in the aftermath of Roe’s overturn.
Edited by Erika Mackenzie.