An unfavorable ruling against Google in a closely watched Supreme Court case concerning YouTube’s recommendation engine could have profound unintended consequences for much of the wider internet, the search giant argued in a legal filing Thursday.
Google, which owns YouTube, is defending a high-profile lawsuit over whether algorithmically generated YouTube recommendations are covered by Big Tech’s signature liability shield, Section 230 of the Communications Decency Act.
Section 230 broadly protects technology platforms from lawsuits over companies’ content moderation decisions. But a Supreme Court ruling holding that algorithmic recommendations don’t qualify for that protection could “threaten the core functions of the internet,” Google wrote in its brief.
“Websites like Google and Etsy rely on algorithms to sift through mountains of user-created content and display content that is likely to be relevant to any user,” the company wrote. “If plaintiffs could evade [Section 230] by targeting how websites sort content or by trying to hold users liable for liking or sharing articles, it would turn the internet into a disorganized mess and a minefield for lawsuits.”
Under such a ruling, websites would have to choose between deliberately over-moderating, scrubbing virtually anything that might be perceived as objectionable, or doing no moderation at all to avoid the risk of liability, Google argued.
The case stems from allegations that Google violated a US anti-terrorism law with its content algorithms by recommending pro-ISIS YouTube videos to users. The plaintiffs in the case are the family of Nohemi Gonzalez, who was killed in an ISIS attack in Paris in 2015.
In the filing, Google said “YouTube abhors terrorism” and cited its “increasingly effective actions” to limit the spread of terrorist content on its platform, before arguing that the company cannot be sued for recommending the videos because of Section 230’s liability shield.
The case, Gonzalez v. Google, is seen as a bellwether for content moderation, and one of the first Supreme Court cases to consider Section 230 since its adoption in 1996. Multiple Supreme Court justices have expressed interest in the law, which has been widely interpreted by the courts, defended by the tech industry and sharply criticized by politicians in both parties.
The Biden administration argued in a legal brief last month that Section 230 protections should not extend to recommendation algorithms. President Joe Biden has long called for changes to Section 230, saying technology platforms need to take more responsibility for the content that appears on their websites. As recently as Tuesday, Biden published an op-ed in the Wall Street Journal urging Congress to change Section 230.
But in a blog post Thursday, Google General Counsel Halimah DeLaine Prado argued that narrowing Section 230 would increase the threat of lawsuits against online services and small businesses, chilling speech and economic activity on the internet.
“Services may become less useful and less reliable — as efforts to root out scams, fraud, conspiracies, malware, violence, harassment and more are stifled,” DeLaine Prado wrote.