June 17, 2020

Justice Department Issues Recommendations for Section 230 Reform: Platform or Publisher?

The United States Department of Justice
Wednesday, June 17, 2020

Reforms Strike Balance of Protecting Citizens While Preserving Online Innovation and Free Speech

The Department of Justice released today a set of reform proposals to update the outdated immunity for online platforms under Section 230 of the Communications Decency Act of 1996. Responding to bipartisan concerns about the scope of 230 immunity, the department identified a set of concrete reform proposals to provide stronger incentives for online platforms to address illicit material on their services while continuing to foster innovation and free speech. The department’s findings are available here.

“When it comes to issues of public safety, the government is the one who must act on behalf of society at large. Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do,” said Attorney General William P. Barr. “Taken together, these reforms will ensure that Section 230 immunity incentivizes online platforms to be responsible actors. These reforms are targeted at platforms to make certain they are appropriately addressing illegal and exploitive content while continuing to preserve a vibrant, open, and competitive internet. These twin objectives of giving online platforms the freedom to grow and innovate while encouraging them to moderate content responsibly were the core objectives of Section 230 at the outset. The Department’s proposal aims to realize these objectives more fully and clearly in order for Section 230 to better serve the interests of the American people.”

The department's review of Section 230 over the last ten months arose in the context of its broader review of market-leading online platforms and their practices, which was announced in July 2019. The department held a large public workshop and expert roundtable in February 2020, as well as dozens of listening sessions with industry, thought leaders, and policymakers, to gain a better understanding of the uses and problems surrounding Section 230.

Section 230 was originally enacted to protect developing technology by providing that online platforms were not liable for the third-party content on their services or for their removal of such content in certain circumstances. This immunity was meant to nurture emerging internet businesses and to overrule a judicial precedent that rendered online platforms liable for all third-party content on their services if they restricted some harmful content.

However, the combination of 25 years of drastic technological changes and an expansive statutory interpretation left online platforms unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability. Following the completion of its review, the Department of Justice determined that Section 230 is ripe for reform and identified and developed four categories of wide-ranging recommendations.

Incentivizing Online Platforms to Address Illicit Content

The first category of recommendations is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation claims. These reforms include a carve-out for bad actors who purposefully facilitate or solicit content that violates federal criminal law or are willfully blind to criminal content on their own services. Additionally, the department recommends a case-specific carve-out where a platform has actual knowledge that content violated federal criminal law and does not act on it within a reasonable time, or where a platform was provided with a court judgment that the content is unlawful, and does not take appropriate action.

Promoting Open Discourse and Greater Transparency

A second category of proposed reforms is intended to clarify the text and revive the original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users. One of these recommended reforms is to provide a statutory definition of “good faith” to clarify its original purpose. The new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.

Clarifying Federal Government Enforcement Capabilities

The third category of recommendations would increase the ability of the government to protect citizens from unlawful conduct, by making it clear that Section 230 does not apply to civil enforcement actions brought by the federal government.

Promoting Competition

A fourth category of reform is to make clear that federal antitrust claims are not, and were never intended to be, covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

All of the outlets I listed report FAKE NEWS and HATE-FILLED NEWS.

CityJournal.org
written by Adam Candeub and Mark Epstein
Monday, May 7, 2018

When the House Judiciary Committee held a hearing on social media censorship late last month, liberal Democratic congressman Ted Lieu transformed into a hardcore libertarian. “This is a stupid and ridiculous hearing,” he said, because “the First Amendment applies to the government, not private companies.” He added that just as the government cannot tell Fox News what content to air, “we can’t tell Facebook what content to filter,” because that would be unconstitutional.

Lieu is incorrect. While the First Amendment generally does not apply to private companies, the Supreme Court has held it “does not disable the government from taking steps to ensure that private interests not restrict . . . the free flow of information and ideas.” But as Senator Ted Cruz points out, Congress actually has the power to deter political censorship by social media companies without using government coercion or taking action that would violate the First Amendment, in letter or spirit. Section 230 of the Communications Decency Act immunizes online platforms for their users’ defamatory, fraudulent, or otherwise unlawful content. Congress granted this extraordinary benefit to facilitate “forum[s] for a true diversity of political discourse.” This exemption from standard libel law is extremely valuable to the companies that enjoy its protection, such as Google, Facebook, and Twitter, but they only got it because it was assumed that they would operate as impartial, open channels of communication—not curators of acceptable opinion.

When questioning Facebook CEO Mark Zuckerberg earlier this month, and in a subsequent op-ed, Cruz reasoned that “in order to be protected by Section 230, companies like Facebook should be ‘neutral public forums.’ On the flip side, they should be considered to be a ‘publisher or speaker’ of user content if they pick and choose what gets published or spoken.” Tech-advocacy organizations and academics cried foul. University of Maryland law professor Danielle Citron argued that Cruz “flips [the] reasoning” of the law by demanding neutral forums. Elliot Harmon of the Electronic Frontier Foundation responded that “one of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users’ posts.”

As Cruz properly understands, Section 230 encourages Internet platforms to moderate “offensive” speech, but the law was not intended to facilitate political censorship. Online platforms should receive immunity only if they maintain viewpoint neutrality, consistent with traditional legal norms for distributors of information. Before the Internet, common law held that newsstands, bookstores, and libraries had no duty to ensure that each book and newspaper they distributed was not defamatory. Courts initially extended this principle to online platforms. Then, in 1995, a federal judge found Prodigy, an early online service, liable for content on its message boards because the company had advertised that it removed obscene posts. The court reasoned that “utilizing technology and the manpower to delete” objectionable content made Prodigy more like a publisher than a library.

Congress responded by enacting Section 230, establishing that platforms could not be held liable as publishers of user-generated content and clarifying that they could not be held liable for removing any content that they believed in good faith to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This provision does not allow platforms to remove whatever they wish, however. Courts have held that “otherwise objectionable” does not mean whatever a social media company objects to, but “must, at a minimum, involve or be similar” to obscenity, violence, or harassment. Political viewpoints, no matter how extreme or unpopular, do not fall under this category.

The Internet Association, which represents Facebook, Google, Twitter, and other major platforms, claims that Section 230 is necessary for these firms to “provide forums and tools for the public to engage in a wide variety of activities that the First Amendment protects.” But rather than facilitate free speech, Silicon Valley now uses Section 230 to justify censorship, leading to a legal and policy muddle. For instance, in response to a lawsuit challenging its speech policies, Google claimed that restricting its right to censor would “impose liability on YouTube as a publisher.” In the same motion, Google argued that its right to restrict political content also derives from its “First Amendment protection for a publisher’s editorial judgments,” which “encompasses the choice of how to present, or even whether to present, particular content.”

The dominant social media companies must choose: if they are neutral platforms, they should have immunity from litigation. If they are publishers making editorial choices, then they should relinquish this valuable exemption. They can’t claim that Section 230 immunity is necessary to protect free speech, while they shape, control, and censor the speech on their platforms. Either the courts or Congress should clarify the matter.

Adam Candeub is a law professor and director of the Intellectual Property, Information & Communications Law Program at Michigan State University. He previously served as an attorney at the Federal Communications Commission. Mark Epstein is an antitrust attorney specializing in the technology sector.
