4 Ways Section 230 Reform May Redefine Content Moderation For Businesses Big And Small

Published: September 29, 2021
Impacts of Section 230 Reform on Content Moderation Practices

The internet has evolved drastically since the public first accessed the world wide web roughly three decades ago. It has achieved tremendous success in giving the world quick access to information, far more success than anyone expected.

As of January 2021, more than 4 billion of the world’s population of over 7 billion use the internet. That’s about 60% of humanity speaking, sharing, and publishing freely online.

Such a massive number of users, producing content of different forms daily, has brought massive business opportunities and benefits. User-generated content has proven effective in promoting brand awareness, expanding customer bases, and gaining market insights. However, it is also a potential source of harm. A single negative comment can spread rapidly and cause a significant dent in a brand’s reputation.

Section 230: A Fortress Under Fire

Thanks to a 26-word provision, online platforms currently have the freedom to host and moderate user-generated content without being held liable for what their users post. Section 230(c)(1) states:

 “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But with a barrage of criticism from across the political spectrum, things might soon change. The possible reform of Section 230 might signal the end of the internet as we know it.

Some of the Proposed Reforms to Section 230 and Their Impact on Businesses

Claims of misinformation and unfair censorship against tech giants, such as Facebook and Twitter, have prompted both lawmakers and the courts to consider revamping Section 230. Here are some of the options currently being explored:

1. Total repeal of Section 230

This is perhaps the worst-case scenario. A complete revocation of the law would expose companies to legal complaints every time a third-party user posts something on their platforms. Once a repeal takes effect, a single user-generated comment, whether restricted or allowed, could cost the hosting site millions of dollars in lawsuits.

While both small firms and tech giants would be significantly affected if Section 230 were repealed, the latter have the upper hand since they have more financial resources to cover their bases.

With an enormous number of people of varying views, backgrounds, and intentions using the internet daily, moderating content would undoubtedly be costly. Aside from legal fees, businesses would have to invest in an additional workforce to monitor and remove objectionable content promptly.

The most economically sound solution for small and big firms alike is to engage the services of external content moderation providers. Doing so will equip them with the vigilance and speed required by the law, all while continuing to enjoy the benefits brought by user-generated content.

2. Introduction of size-based carve-outs

In this scenario, only tech superpowers like Amazon and Twitter would lose the immunity provided by Section 230. This approach would promote the growth of smaller firms, which was the original intent behind Section 230.

Most large tech firms are prepared for this scenario. Facebook, for instance, employs a combination of human moderators and artificial intelligence to moderate user-generated content.

However, if size-based carve-outs are established, the better approach is to increase reliance on human moderators. That way, companies can avoid inadvertently censoring legal and harmless content, a mistake that algorithm-powered bots are prone to making.

Take, for example, what happened at Facebook. There have been instances when the social media giant’s automated systems removed ads by mistake, costing the advertisers money. One affected advertiser shared that their ad carried a disclaimer saying it “wasn’t open to those trying to sell adult content.” Yet Facebook’s AI flagged and took down the ad for violating the company’s policy on “nudity/sexual activity.”
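To see why this kind of mistake happens, consider a minimal sketch of context-blind keyword matching, the simplest form of automated filtering. This is a hypothetical toy, not Facebook’s actual system; the banned-term list and ad copy are invented for illustration.

```python
# Toy illustration of context-blind keyword matching. This is NOT
# Facebook's actual system; the banned-term list and ad copy below are
# hypothetical examples invented for this sketch.

BANNED_TERMS = {"adult content", "nudity"}

def naive_flag(text: str) -> bool:
    """Flag text that merely contains a banned term, ignoring intent."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

# The ad explicitly refuses adult content, yet substring matching still
# trips on the phrase and the ad gets taken down.
ad_copy = "Writers wanted. This gig is not open to those trying to sell adult content."
print(naive_flag(ad_copy))  # True -> harmless ad flagged and removed
```

The filter sees only the presence of a phrase, not the intent behind it, which is exactly the gap human reviewers fill.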

While AIs are useful for moderating a high volume of content, they are still no match for a human’s capacity to understand nuance and identify cultural context. Human moderators remain companies’ best line of defense.

Heavy reliance on artificial intelligence also makes hosting platforms susceptible to intentional attacks orchestrated by unscrupulous groups or individuals. Some may game the system, deliberately avoiding the words banned by algorithm-powered tools while still causing harm.
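A similarly minimal sketch shows how easily such filters can be gamed; the banned term here is a hypothetical placeholder.

```python
# Toy sketch of how bad actors can slip past a simple keyword filter by
# altering banned words. The banned term is a hypothetical placeholder.

BANNED_TERMS = {"counterfeit"}

def naive_flag(text: str) -> bool:
    """Flag text containing an exact banned term."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

print(naive_flag("Buy counterfeit watches here"))       # True  -> caught
print(naive_flag("Buy c0unterfeit watches here"))       # False -> evades the filter
print(naive_flag("Buy c o u n t e r f e i t watches"))  # False -> evades the filter
```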

As mentioned, the best approach for businesses is to continue using both bots and human moderators, but with an increased reliance on the latter.

3. Clarification of the scope of the immunity

Many of the criticisms leveled at Section 230 focus on the vagueness of some of its provisions. Section 230(c)(2) shields providers and users from liability for:

“Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” 

Critics point out that the subjective terms “good faith” and “otherwise objectionable” have given platforms cover to censor content based on their own leanings. As such, some lawmakers are proposing requirements that would define what constitutes “good faith.” Others argue that removing the “otherwise objectionable” clause from the provision would help clear the confusion surrounding the law.

If either or both of these proposals move forward, hosting platforms would no longer enjoy protection when they make moderation decisions based on their own views. Again, this possibility could pose a significant challenge for companies that rely heavily on automation to moderate user-generated content.

AI systems follow pre-programmed rules and statistical models, but deciding what counts as lewd or harassing requires critical judgment. If an automated system mistakenly classifies educational material as obscene and removes it, the hosting platform could face legal action.
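One common way to balance automation and judgment, sketched below, is to let software act only on clear-cut cases and route everything borderline to a human moderator. The classifier, post IDs, and thresholds here are hypothetical assumptions, not recommendations.

```python
# A minimal human-in-the-loop routing sketch. It assumes a hypothetical
# classifier that returns a violation probability between 0 and 1; the
# thresholds below are illustrative only.

AUTO_REMOVE_THRESHOLD = 0.95   # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.50  # borderline cases go to a human moderator

def route(post_id: str, violation_score: float) -> str:
    """Decide what happens to a post based on the classifier's confidence."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return f"{post_id}: removed automatically"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return f"{post_id}: queued for human review"
    return f"{post_id}: left up"

print(route("post-101", 0.98))  # clear-cut violation
print(route("post-102", 0.70))  # ambiguous, e.g. educational material
print(route("post-103", 0.10))  # clearly harmless
```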

The best approach for affected companies is to strengthen their content moderation workforce. A lot of businesses are already hiring offshore contractors to mitigate the costs of protecting their brands. Some critics point out that this practice is also challenging since cultural contexts vary. What constitutes profanity in one country may be acceptable in another. However, compared to the alternative of making automated tools the sole guardians of content, this is still the more sensible option. 

The key is to find a capable outsourcing partner that can provide ongoing training to keep moderators aligned with the established legal guidelines. It also helps if the BPO has an onshore branch to anchor decisions in the local cultural context.

4. Mandatory transparency reports and timely responses to notice-and-takedown requests

Some critics of Section 230 argue that big tech platforms have used the law to silence parties that do not share their beliefs or political affiliations.

Far from how they started, companies like YouTube and Twitter have grown tremendously and come to dominate the tech industry. They are no longer small players hosting online forums but are, in fact, gatekeepers instrumental in shaping public opinion. For this reason, some lawmakers have proposed revising Section 230 to demand more accountability and transparency from hosting platforms.

If this proposal moves forward, companies would have to publish transparency reports that include their guidelines for moderating content. Likewise, they would be required to promptly take down published content that violates their policies.

Small and mid-sized players would be the most affected by these requirements. Because a timely response would be mandatory, they would need enough people to screen content in advance or in real time. If they fail to take down flagged content or respond to a notice on time, they could face legal charges.
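For illustration, here is a minimal sketch of how a platform might track whether a takedown notice is still within its response window. The 24-hour target is an assumption made for this example; the proposals discussed here do not specify an exact deadline.

```python
# Toy sketch of tracking response deadlines for takedown notices. The
# 24-hour window is an assumption for illustration only.

from datetime import datetime, timedelta, timezone

RESPONSE_WINDOW = timedelta(hours=24)  # assumed internal service-level target

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """Return True if the response window for a notice has lapsed."""
    return now - received_at > RESPONSE_WINDOW

received = datetime(2021, 9, 28, 9, 0, tzinfo=timezone.utc)
check_time = datetime(2021, 9, 29, 10, 0, tzinfo=timezone.utc)
print(is_overdue(received, check_time))  # True -> notice needs escalation
```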

Given the vast resources required, smaller companies may be tempted to limit their platforms’ features and restrict the uploading or posting of user-generated content. That move, however, erodes their ability to compete with larger tech firms and puts them in a vulnerable spot, potentially leading to eventual buyouts.

The better option is to partner with a BPO that can strengthen a company’s content moderation efforts without costing a fortune. By leveraging the contractor’s technology and access to a global workforce, smaller firms can maintain a competitive advantage while guarding their online properties.

Stay vigilant. Stay informed on the best content moderation practices for your business.

While Section 230 reform remains a topic of debate, the best thing businesses can do at this point is be proactive. Now more than ever, sound content moderation practices are essential.

Companies must leverage every available avenue to increase vigilance and speed in guarding their online platforms. After all, with or without Section 230 reform, it always pays to protect your name.

If you want to know more about content moderation, SuperStaff is ready to help. With the digital landscape changing by the minute, you need to partner with a content moderation service provider capable of safeguarding your digital properties in real time.
