
By: Natalie Rajha
Apr 5, 2023
As the digital landscape evolves, we must ensure the law keeps pace with it.
Section 230(c) of the Communications Decency Act of 1996 shields internet service providers and online platforms like Twitter and Facebook from liability for content published on or through their services by other people. Enacted at a time when the Internet was expanding rapidly and lawmakers worried about stifling service providers and innovators, Section 230(c) aimed to protect providers from liability for user-generated content and to encourage broad public participation online. More than 25 years later, the Internet, online providers, and online media have expanded exponentially. With the transformation of social media and the growth of online content that is both beneficial and harmful to our society, Section 230(c) must be closely analyzed to determine whether it should be altered to better fit the complexities of today’s online environment.
Platforms such as Twitter, Facebook, and Instagram have become powerful influencers in global conversation, serving as sources of social interaction and information for people worldwide. While much good has come from this expansion, the rise in misinformation, cyberbullying, and harmful content has led many to call for revisions to Section 230(c), particularly its shielding of these companies from liability for content posted by third parties. However, another important part of Section 230(c) provides that online service providers cannot be held legally responsible for removing content they deem inappropriate or harmful. This means that although service providers are not liable for content published by users, they can also remove or restrict content they consider objectionable, so long as they act in good faith. While the law appears to encourage free speech by third parties while also permitting moderation of particular content, one should ask whether it still fits the digital age we are in.
During pivotal moments in this country, false information and harmful conversations can sow distrust and cause real damage. Today, much of this information travels through online media. Critics argue that these platforms use algorithms to promote content that generates more engagement, and they accuse the platforms of profiting from misinformation and other harmful material while remaining immune from legal consequences. Currently, Section 230(c) protects these companies and their algorithm-based recommendations. This fuels the argument that platforms should bear more accountability for the material they allow on their sites, which would encourage closer monitoring of content and algorithms. For Section 230(c), this could mean changing the level of liability these companies face to ensure that content posted online does not promote obscene or harmful behavior; for instance, holding platforms accountable if their algorithms continue to promote harmful content over a designated period. Any such change, however, would require (a) clear guidelines within the law to determine what is unacceptable and (b) monitoring to ensure these sites act responsibly.
Even if Section 230(c) of the Communications Decency Act of 1996 is altered, many unintended consequences may follow, potentially leaving the online environment worse than before. A central goal of Section 230(c) was to encourage the growth of the Internet, so protecting platform providers from liability was essential. If platforms are required to take more responsibility for what third parties post on their sites, over-censorship may result, and the creation of new platforms may be discouraged. Moreover, smaller companies may face greater financial burdens in complying with more rigorous content moderation standards. Beyond the effect on companies, revisions to Section 230(c) would also have consequences for the broader public, particularly for the First Amendment right to free speech, as a restrictive approach to content moderation could stifle expression and limit the exchange of ideas. The line that separates free speech from speech that needs to be moderated can be very thin, as content that merely expresses an opinion may at times also be deemed objectionable.
As the digital landscape evolves, we must ensure the law keeps pace with it. Section 230(c) needs to be reexamined in a manner that balances free expression against provider liability while also supporting continued innovation online. By striking the right balance among provider accountability, content moderation, and freedom of speech, Section 230(c) can shape the Internet and ensure that it remains a space for open dialogue, even as new challenges like algorithmic influence and misinformation emerge.