To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue
June 26, 2023
Supreme Court of the United States
J Roberts, CJ; C Thomas, E Kagan, S Alito, KB Jackson, S Sotomayor, N Gorsuch, B Kavanaugh and AC Barrett, SCJJ
January 19, 2023
Reported by Faith Wanjiku and Bonface Nyamweya
Criminal law- computer and cybercrimes- publication of illegal or prohibited content- where Congress enacted section 230 of the Communications Decency Act in order to protect internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content- where the petitioners sought to hold the respondent liable for the harms caused by the Islamic State of Iraq and Syria’s (ISIS) videos, on the ground that YouTube had disseminated that content and, through its recommendation algorithms, made it easier for users to find and consume that content- whether to be entitled to immunity, a provider of an interactive computer service ought to have contributed to the creation or development of the content at issue- whether a platform was responsible for developing particular information when it merely provided a generally available means by which third parties could post information of their own independent choosing online- Communications Decency Act, 47 U.S.C. § 230.
Statutes-statutory provisions-interpretation- interactive computer service- an interactive computer service included an access software provider, defined to include a provider of software or enabling tools that filtered, screened, allowed, or disallowed content, picked, chose, analyzed, or digested content, or transmitted, received, displayed, forwarded, cached, searched, subset, organized, reorganized, or translated content- Communications Decency Act, 47 U.S.C. § 230(f)(2), (4).
Statutes-statutory provisions-interpretation-information content provider- an information content provider was defined as any entity that was responsible, in whole or in part, for the creation or development of information- Communications Decency Act, 47 U.S.C. § 230(f)(3).
Words and phrases- create-definition of- to bring into existence- Merriam-Webster’s Collegiate Dictionary 293 (11th ed. 2003); The Oxford English Dictionary Online.
Words and phrases- develop-definition of- to expand by a process of growth- to bring (something) to a fuller or more advanced state; to improve, extend- Merriam-Webster’s Collegiate Dictionary 293 (11th ed. 2003); The Oxford English Dictionary Online.
Congress enacted section 230 of the Communications Decency Act in order to protect internet platforms’ ability to publish and present user-generated content in real time, and to encourage them to screen and remove illegal or offensive content. Petitioners contended that YouTube assisted ISIS in spreading its message by allowing users to see ISIS videos posted by users, and by automatically presenting on-screen videos that were similar to those that the user had previously watched. Petitioners thus sought to hold the respondent liable for the harms caused by ISIS’s videos, on the ground that YouTube had disseminated that content and, through its recommendation algorithms, made it easier for users to find and consume that content. Petitioners’ claims therefore treated YouTube as the publisher of content that it was not responsible for creating or developing. The fact that YouTube used targeted recommendations to present content did not change that conclusion; those recommendations displayed already-finalized content in response to user inputs and curated YouTube’s voluminous content in much the same way as the early methods used by 1990s-era platforms.
Issues
i. Whether to be entitled to immunity, a provider of an interactive computer service ought to have contributed to the creation or development of the content at issue.
ii. Whether a platform was responsible for developing particular information when it merely provided a generally available means by which third parties could post information of their own independent choosing online.
iii. What was the meaning of the term ‘create’ in relation to content moderation envisaged in section 230?
Relevant provisions of the law
Communications Decency Act, 47 U.S.C. § 230
Section 230—Protection for private blocking and screening of offensive material
It is the policy of the United States—
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
(4) to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
(5) to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
(d) Obligations of interactive computer service
A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.
- Congress enacted section 230 of the Communications Decency Act (section 230) in response to the New York Supreme Court’s decision in Stratton Oakmont, Inc. v Prodigy Services Company (Stratton Oakmont). There, the New York Supreme Court held that the online platform Prodigy could be held liable for defamation based on an anonymous user’s posting of defamatory statements on one of Prodigy’s online bulletin boards. The court reasoned that Prodigy should be subject to liability because it had made a conscious choice to exercise editorial control over the user-generated content posted on its site by removing or editing some offensive content. Because Prodigy removed some content, the site could be held responsible for its failure to remove all problematic content, including the defamatory statement from its site.
- The court distinguished an earlier decision that had refused to impose defamation liability on another message board website, CompuServe, on the ground that CompuServe had not attempted to moderate the content on its site. The Stratton Oakmont decision thus penalized an internet platform for engaging in less-than-perfect content moderation—that is, for failing in its attempt to remove every piece of potentially unlawful content from its site.
- To impose liability on an Internet service because it had made decisions concerning which content to present and which to remove, even if those decisions were imperfect, was backward. Congress therefore sought to encourage Internet service providers to engage in content moderation, recognizing that there was no way that Internet services would be able to perfectly screen all information that was going to be coming in to them from all manner of sources.
- In drafting section 230, Congress took into account the ways in which Internet platforms of the time presented, moderated, and curated content in order to make their websites useful to, and safe for, users. Many of the major Internet platforms engaged in content curation that was a precursor to the targeted recommendations that were then employed by YouTube and other contemporary platforms.
- Prodigy, the website at issue in the Stratton Oakmont decision, provided a salient example of early content moderation and curation. Prodigy often categorized its message boards by topic, allowing a user to choose to read a message board dedicated to a subject of interest. For example, Prodigy’s Money Talk board, which presented the alleged defamatory content in Stratton Oakmont, was the most widely read financial computer bulletin board in the United States in 1995, and members posted statements on it regarding stocks, investments and other financial matters.
- Beyond organizing its content by subject, Prodigy employed a stringent editorial policy that relied on editors to screen potential messages by making subjective determinations as to whether and to what extent particular messages would be posted. Prodigy used prescreening technology to automatically review all potential message board posts for offensive language, similar to the systems in place today. Prodigy provided curated, topic-specific boards where users could post and read messages, and it held itself out as exercising editorial control over those messages and when, how and where they appeared on its platform.
- Other Internet platforms similarly exercised discretion concerning whether and how they presented user-generated content, and some also attempted to tailor displayed content to particular users. For instance, early search engine Lycos experimented with various ranking systems for organizing search results, presenting users with curated results and relevant information depending on the query. Other platforms, including Amazon, created recommendation systems for their wares, helping customers find precisely what they needed based on their past purchases. And still others, such as WebConnect and DoubleClick, deployed user-targeted advertisements, allowing businesses to home in on potential customers.
- The wide variety of content presentation and moderation technologies in use and development at the time informed Congress’s consideration of section 230. Congress sought in section 230 to afford platforms leeway to engage in the moderation and curation activities that were prevalent at the time, and to encourage the development of new technologies for content moderation by both platforms and users. Congress was well aware that, in view of the then-exponential growth in Internet usage, the challenge of moderating user-generated content was only going to increase. Platforms would need to experiment with new technologies that would be capable of screening and organizing increasingly voluminous amounts of real-time third-party content.
- Congress sought to encourage that evolution by enacting a technology-agnostic immunity provision that would protect Internet platforms from liability for failing to perfectly screen unlawful content. Section 230 furthered that purpose through immunity and preemption provisions. Section 230(c)(1), the immunity provision, stated that no provider or user of an interactive computer service should be treated as the publisher or speaker of any information provided by another information content provider.
- An interactive computer service’s immunity under section 230(c)(1) turned on whether the content that was the subject of the lawsuit was provided by another, rather than the platform itself; and the plaintiff’s claim sought to treat the platform as the publisher or speaker of the content in question. The provision did not distinguish among technological methods that providers used to moderate and present content, thereby allowing for innovation and evolution over time.
- Congress declared that section 230 was intended to encourage the development of technologies which maximized user control over what information was received and to remove disincentives for the development and utilization of blocking and filtering technologies. And it broadly defined the interactive computer services eligible for immunity, to include platforms that provided software or tools that filtered, chose, and displayed content, among other things.
- Section 230 also protected platforms’ leeway to use and develop new forms of content presentation and moderation by ensuring that they were not subject to varying state-law rules requiring or encouraging them to remove or retain content or to display it in certain ways. Section 230 preempted any inconsistent state laws while allowing consistent state laws to remain.
- Targeted recommendations were one such innovation in content presentation. A platform that offered targeted recommendations displayed particular content to users by using algorithms that were designed to analyze user data and predict what the user might want to see. Targeted recommendations were ubiquitous across the Internet and existed in fields from social media to commerce. For instance, video discovery platforms like YouTube and Vimeo presented particular videos to a user based on the videos that the user had previously watched and other data. Similarly, e-commerce platforms such as Amazon and Etsy recommended products to users based on their preferences. A host of online advertisers relied on targeted recommendations to reach consumers efficiently.
- Targeted recommendations were a direct descendant of the curation and presentation methods that were extant when section 230 was enacted, even if the technology used to decide which recommendations to make had advanced significantly. Where earlier Internet platforms catered to user tastes by, for instance, arranging content by subject or by manually deciding what content to prioritize, the sheer volume of user-generated content on recent platforms made those methods impracticable. Without more targeted content recommendations, users would have no efficient way of navigating among innumerable pieces of information to find the content in which they were most interested.
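As an illustrative aside, the kind of targeted recommendation described above can be reduced to a deliberately simplified sketch. The function, tags and titles below are hypothetical and do not reflect YouTube’s actual system, which relies on far richer user data and machine-learned models; the structural point is the same, however: the algorithm only ranks and displays already-finalized third-party content in response to user inputs, without altering the content itself.

```python
# Hypothetical sketch of a targeted recommendation: rank a catalog of
# already-finalized third-party items by overlap with the topics of
# items the user has previously viewed. Illustrative only.
from collections import Counter

def recommend(watch_history, catalog, top_n=3):
    """Rank catalog items by how many topic tags they share with
    the user's watch history; the items themselves are untouched."""
    # Build a simple interest profile from the user's past viewing.
    profile = Counter(tag for video in watch_history for tag in video["tags"])

    def score(item):
        # Sum the user's interest weight for each of the item's tags.
        return sum(profile[tag] for tag in item["tags"])

    ranked = sorted(catalog, key=score, reverse=True)
    return [item["title"] for item in ranked[:top_n]]

catalog = [
    {"title": "Stock picks", "tags": ["finance", "stocks"]},
    {"title": "Cat videos", "tags": ["pets"]},
    {"title": "Market news", "tags": ["finance", "news"]},
]
history = [{"title": "Investing 101", "tags": ["finance"]}]

print(recommend(history, catalog, top_n=2))  # finance items rank first
```

The sketch mirrors the earlier curation methods the opinion invokes: like Prodigy’s topic boards, it merely decides which third-party content to surface for a given user, in response to that user’s own inputs.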
- Section 230(c)(1)’s immunity provision did not turn on the particular methods of content presentation used by an Internet platform; hence, immunity was available, or not, on the same terms for all methods of content presentation. Section 230(f)(2) expressly referenced platforms that used targeted recommendations in its definition of an interactive computer service eligible for liability protection under section 230(c)(1). An interactive computer service included an access software provider, defined to include a provider of software or enabling tools that filtered, screened, allowed, or disallowed content, picked, chose, analyzed, or digested content, or transmitted, received, displayed, forwarded, cached, searched, subset, organized, reorganized, or translated content. Interactive computer services that engaged in targeted recommendations were doing just that—analyzing, picking, and screening content for display to users. They were therefore plainly eligible for immunity if they met the other prerequisites set forth in section 230.
- In section 230, Congress sought to protect online platforms for their content moderation and presentation efforts. At the same time, that protection had meaningful limits such that Congress did not intend to insulate Internet platforms from liability for claims that were based on a platform’s own unlawful content, or that were based on its actions that went beyond publishing third-party content and did not depend on the publishing of any such content.
- Section 230(c)(1)’s text reflected those principles. Under that provision, a provider of an interactive computer service such as YouTube was immune from suit when the content at issue was provided by another information content provider (when the platform was not responsible, in whole or in part, for the creation or development of the allegedly illegal content), and when the claim sought to treat the provider of an interactive computer service as the publisher or speaker of that content. Both requirements had to be satisfied for the platform to be entitled to section 230 immunity. Construing those requirements according to their plain meaning ensured that Internet platforms had adequate leeway to experiment with moderating and presenting content provided by others, while also appropriately limiting immunity to those suits that sought to impose liability on platforms for publicly communicating content provided by others.
- Section 230’s first significant limitation on immunity was that a platform was immune only with respect to information provided by another information content provider. An information content provider was defined as any entity that was responsible, in whole or in part, for the creation or development of information. To be entitled to the immunity provided in section 230(c)(1), then, the platform ought not be wholly or partially responsible for the creation or development of the information in question.
- A platform created information when it brought that information into existence. And a platform developed information when it brought the information to a fuller or more advanced state by altering or transforming its substance. Two aspects of the statutory context confirmed that conclusion. First, the object of the preposition following development was information. That was the same information that was covered by the protection from liability in subsection (c)(1). Development therefore referred to transforming the information itself into a more advanced state. Second, development and creation both clearly connoted actions that affected the information’s substance.
- To be entitled to section 230’s protections, the platform could not be wholly or partially responsible for the creation or development of the information in question. The term responsible implied that, as a factual matter, the platform was a cause of the creation or development of the information. In the context of section 230, the term also implied legal responsibility, that is, complicity or culpability.
- That connotation arose from the fact that the information for which the platform was responsible was information that gave rise to potential liability. Thus, to be responsible for the creation or development of the information was to contribute to the development of aspects of the information that allegedly caused injury.
- The plain meaning of creation or development supported Congress’s objective of clearly allocating liability in a way that promoted innovation in content moderation. A platform was not responsible for developing particular information when it merely provided a generally available means by which third parties could post information of their own independent choosing online. A broader understanding of development, under which merely providing such means counted as developing the information, would defeat the purposes of section 230 by swallowing up every bit of the immunity that the section otherwise provided.
- Where a platform had actually contributed to the creation or development of illegal content, even in part, the platform would not be immune. Actions such as designing a tool that specifically induced the creation of illegal user-generated content, or required users to input illegal content, would render a platform responsible for developing the illegal content. That ensured that section 230 immunity did not enable platforms to contribute to illegal content with impunity.
- Section 230’s text and structure made it clear that the immunity conferred extended beyond common-law claims for defamation. The statute on its face applied equally to any cause of action not specifically exempted from section 230’s reach in section 230(e) (e.g., federal criminal laws, intellectual property laws, communications privacy laws). Although the term publisher had sometimes been used by courts as a term of art in defamation cases, the term had its ordinary meaning—i.e., one who made information public. That conclusion followed from the fact that the term speaker, which appeared together with publisher in section 230(c)(1), was not a term of art in common-law defamation.
- Had Congress intended to limit immunity to defamation claims, it could have said so explicitly but it did not. It would hardly have made sense for Congress to limit immunity to defamation claims, as the objectives stated in the statutory preamble would be undermined if all claims other than defamation could be used to hold platforms liable for illegal content produced by others. Section 230(e), which clarified the specific causes of action to which section 230 did not extend, would have been unnecessary if the statute provided immunity only against defamation claims.
- Section 230 provided immunity from claims premised on a platform’s publication of allegedly harmful content that had been created and developed wholly by third parties. Although that immunity extended well beyond defamation claims, section 230 did not offer blanket protection to all online platforms against any claim, or broadly immunize platforms simply because they could be considered publishers in the abstract.
- The respondent was entitled to section 230’s protection from liability. The petitioners contended that YouTube assisted ISIS in spreading its message by allowing users to see videos that users had posted and by automatically presenting on-screen videos that were similar to those that the user had previously watched. The petitioners thus sought to hold the respondent liable for the harms caused by ISIS’s videos, on the ground that YouTube had presented that content to the public, i.e., it had disseminated that content and, through its recommendation algorithms, made it easier for users to find and consume that content. The petitioners’ claims therefore treated YouTube as the publisher of content that it was not responsible for creating or developing.
- The respondent was not responsible, in whole or in part, for the creation or development of the content. The respondent did not have any hand in creating the ISIS videos, nor did it develop that content by altering it or transforming it in any way. The allegations in the complaint, moreover, established that the respondent did not require, or even encourage, the illegal content in a way that would render it responsible for developing that content.
- The petitioners alleged that YouTube recommended ISIS videos to users based upon the content and what was known about the viewer. Yet the record established that the recommendation algorithms did not treat ISIS-created content differently than any other third-party created content. That is, the recommendations did not pick and choose ISIS content in particular. Instead, like a search engine, YouTube’s recommendation algorithm worked to deliver content in response to user inputs.
- When a platform’s recommendation algorithm merely responded to user preferences by pairing users with the types of content they sought, the algorithm functioned in a way that was not meaningfully different from the many curatorial decisions that platforms had always made in deciding how to present third party content. Since the days of Prodigy and CompuServe, platforms had sought to arrange the voluminous content on their sites in a way that was useful to users and responsive to user interests. In so doing, platforms did not develop the user-generated content within the meaning of section 230(f)(3), because decisions about how to present already-finalized content did not transform or alter the content itself in any way.
- Any time a platform engaged in content moderation or decided how to present user content, it necessarily made decisions about what content its users might or might not wish to see. In that sweeping sense, all content moderation decisions could be said to implicitly convey a message. The government’s reasoning therefore suggested that any content moderation or presentation decision could be deemed an implicit recommendation. But the very purpose of section 230 was to protect those decisions, even when they were imperfect.
- Under the government’s logic, the mere presence of a particular piece of content on the platform would also send an implicit message, created by the platform itself, that the platform had decided that the user would like to see the content. And when a platform’s content moderation was less than perfect—when it failed to take down some harmful content—the platform could then be said to send the message that users would like to see that harmful content. Accepting the government’s reasoning therefore would subject platforms to liability for all of their decisions to present or not present particular third-party content—the very actions that Congress intended to protect.
- Imposing liability on YouTube for targeted recommendations of unlawful third-party content would, in practice, require YouTube to monitor the content posted by third parties and alter the mix of content it displayed—thus confirming that petitioners’ claims treated YouTube as a publisher of others’ content and were precisely the sort of claims that Congress sought to foreclose in enacting section 230.
- Section 230 protected targeted recommendations to the same extent that it protected other forms of content curation and presentation. Any other interpretation would subvert section 230’s purpose of encouraging innovation in content moderation and presentation. The real-time transmission of user-generated content that section 230 fostered had become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users, section 230’s protection was even more important than when the statute was enacted.
Petition dismissed. Court of Appeals ruling affirmed.
Relevance to the Kenyan jurisprudence
Article 33 of the Constitution of Kenya, 2010 provides for the freedom of expression, while article 34 provides for the freedom of the media. More particularly, article 34(1), (2) and (3) states that:
1. Freedom and independence of electronic, print and all other types of media is guaranteed, but does not extend to any expression specified in Article 33.
2. The State shall not-
a. exercise control over or interfere with any person engaged in broadcasting, the production or circulation of any publication or the dissemination of information by any medium; or
b. penalise any person for any opinion or view or the content of any broadcast, publication or dissemination.
3. Broadcasting and other electronic media have freedom of establishment, subject only to licensing procedures that-
a. are necessary to regulate the airwaves and other forms of signal distribution; and
b. are independent of control by government, political interests or commercial interests.
The Computer Misuse and Cybercrimes Act of 2018 provides for offences relating to computer systems; to enable timely and effective detection, prohibition, prevention, response, investigation and prosecution of computer and cybercrimes; to facilitate international co-operation in dealing with computer and cybercrime matters; and for connected purposes.
The court in Bloggers Association of Kenya (BAKE) v Attorney General & 3 others; Article 19 East Africa & another (Interested Parties) eKLR noted that:
In considering the proportionality of the limitation to the freedom of expression in section 22 of the Act, dealing with false publication, and considering the peculiar characteristics inherent to the web, the court considers the impact of the restriction not only from the point of view of the private citizen directly affected by the measure, but also from the perspective of its impact on the public at large.
Moreover, the court in Law Society of Kenya v Bloggers Association of Kenya & 6 others eKLR stressed that:
We need not over emphasise the importance of the said Act and particularly the sections complained of. This is so because it is necessary to strike a fine balance between freedom of speech and information as espoused in the constitution on the one hand, and also the need to disseminate correct information and also combat cybercrime on the other.
This case is important to the Kenyan jurisprudence as it discusses the protection for private blocking and screening of offensive material on the internet by providers, when it holds that, to be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue.