The United States Department of Justice
Wednesday, June 17, 2020
Reforms Strike Balance of Protecting Citizens While Preserving Online Innovation and Free Speech
The Department of Justice released today a set of reform proposals to update the outdated immunity for online platforms under Section 230 of the Communications Decency Act of 1996. Responding to bipartisan concerns about the scope of 230 immunity, the department identified a set of concrete reform proposals to provide stronger incentives for online platforms to address illicit material on their services while continuing to foster innovation and free speech. The department’s findings are available here.
“When it comes to issues of public safety, the government is the one who must act on behalf of society at large. Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do,” said Attorney General William P. Barr. “Taken together, these reforms will ensure that Section 230 immunity incentivizes online platforms to be responsible actors. These reforms are targeted at platforms to make certain they are appropriately addressing illegal and exploitive content while continuing to preserve a vibrant, open, and competitive internet. These twin objectives of giving online platforms the freedom to grow and innovate while encouraging them to moderate content responsibly were the core objectives of Section 230 at the outset. The Department’s proposal aims to realize these objectives more fully and clearly in order for Section 230 to better serve the interests of the American people.”
The department's review of Section 230 over the last ten months arose in the context of its broader review of market-leading online platforms and their practices, which was announced in July 2019. The department held a large public workshop and expert roundtable in February 2020, as well as dozens of listening sessions with industry, thought leaders, and policy makers, to gain a better understanding of the uses and problems surrounding Section 230.
Section 230 was originally enacted to protect developing technology by providing that online platforms were not liable for the third-party content on their services or for their removal of such content in certain circumstances. This immunity was meant to nurture emerging internet businesses and to overrule a judicial precedent that rendered online platforms liable for all third-party content on their services if they restricted some harmful content.
However, the combination of 25 years of drastic technological changes and an expansive statutory interpretation left online platforms unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability. Following the completion of its review, the Department of Justice determined that Section 230 is ripe for reform and identified and developed four categories of wide-ranging recommendations.
Incentivizing Online Platforms to Address Illicit Content
The first category of recommendations is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation claims. These reforms include a carve-out for bad actors who purposefully facilitate or solicit content that violates federal criminal law or who are willfully blind to criminal content on their own services. Additionally, the department recommends a case-specific carve-out where a platform has actual knowledge that content violates federal criminal law and does not act on it within a reasonable time, or where a platform was provided with a court judgment that the content is unlawful and does not take appropriate action.
Promoting Open Discourse and Greater Transparency
A second category of proposed reforms is intended to clarify the text and revive the original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users. One of these recommended reforms is to provide a statutory definition of “good faith” to clarify its original purpose. The new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.
Clarifying Federal Government Enforcement Capabilities
The third category of recommendations would increase the ability of the government to protect citizens from unlawful conduct, by making it clear that Section 230 does not apply to civil enforcement actions brought by the federal government.
Promoting Competition
A fourth category of reform is to make clear that federal antitrust claims are not, and were never intended to be, covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.
WATCH: @seanmdav rips into NBC News for their attempt to get Google to demonetize the Federalist:— Daily Caller (@DailyCaller) June 17, 2020
"If this was a just world -- there would be accountability for fake journalists who go around trying to destroy their competition for the crime of criticizing them."
NBC has updated the article, a spokesperson tells me. Here's a bit that appears to have been just added: https://t.co/Nl9SmJBnEX— Scott Nover (@ScottNover) June 16, 2020
So... NBC’s article contained inaccurate information? How are you going to hold companies accountable not for the things they post but for the comment section? 🤡🌍— Samantha Jones (@Samantha_J9) June 16, 2020
All I listed report FAKE NEWS and HATEFILLED NEWS. Okay, now do far-Left... that would be all other news outlets ABC, NBC, CBS, KTLA5, WaPo, NPR, LA Times, NY Times, Mother Jones, CNN, MSNBC, Roll Call, CNBC, VOX, Media Matters, Bloomberg, Slate, AP, Democracy Now, The Daily Beast, The Atlantic, AP, Buzzfeed, The Hill, HuffPo.— Global Awareness 101 (@Mononoke__Hime) June 16, 2020
"Google blocked The Federalist from its advertising platform after the NBC News Verification Unit brought the project to its attention."— Watchdog (@LibWatchdog) June 16, 2020
So NBC is now calling up Google and demanding they pull ads from conservative outlets. WTF??? https://t.co/ZllmQsFk0o
The NBC story says they only “brought the project to their attention.”— Watchdog (@LibWatchdog) June 16, 2020
But in a tweet, the reporter behind the story thanked the activists for their “hard work and collaboration” and then tweeted “BlackLivesMatter.”
She isn’t reporting on activists. She IS an activist.
So what got The Federalist booted from Google Ads??— Watchdog (@LibWatchdog) June 16, 2020
They ran a piece criticizing media coverage of the riots?? On that basis you could ban every single conservative outlet!
Wait, wait - you want to treat the @FDRLST comment section, which they don’t curate, as THEIR speech but simultaneously say the content you directly host and modify IS NOT your speech under Section 230? Wow, this is getting really interesting https://t.co/QEtpCtssco— Josh Hawley (@HawleyMO) June 16, 2020
Keep in mind this brouhaha is all about access to @Google’s ad platform. That’s how Google makes its money, through data-targeted ads. And that’s why addressing how Google runs its ad business is key to challenging its power & market concentration— Josh Hawley (@HawleyMO) June 16, 2020
The Federalist wouldn't go along with NBC's lie that the riots are "mostly peaceful," so Google stripped its ad revenue.— Allum Bokhari (@LibertarianBlue) June 16, 2020
America's cities burn. You can see it with your own eyes. But if you don't pretend it isn't happening you'll be destroyed.
Welcome to Tiananmen America. https://t.co/gSyFfzLUkN
The warning was reportedly issued because of material in the Federalist's comments section. Website owners should be aware - if you want ad revenue, Google won't allow you to have free speech.— Allum Bokhari (@LibertarianBlue) June 16, 2020
The Federalist reports the truth.— Global Awareness 101 (@Mononoke__Hime) June 16, 2020
The NBC journalist activist reported The Federalist to Google not because their content violated Google standards but because of one comment from an anonymous person. That comment could have easily been made by the journalist activist herself.
Hate speech is subjective.— Global Awareness 101 (@Mononoke__Hime) June 16, 2020
People getting feelings hurt accuse those who hurt their feelings of hate speech.
The Resistance has been calling anyone who doesn't think like them racist and anyone who challenges them as using hate speech.
Americans leaving Dem party in droves.
They will come for ALL OF YOU.— Jason Howerton (@jason_howerton) June 16, 2020
They will not be satisfied until the only information that exists is their shallow, narrow leftist talking points.
Do not f––ing give these psychopaths an inch. https://t.co/Wqczem7YPv
to be clear they don’t just want conservatives removed from op-Ed pages, they want conservative media ended altogether— Joe Gabriel Simonson (@SaysSimonson) June 16, 2020
Working to get websites you don't like demonetized by Google isn't journalism, it's activism.— Kyle Feldscher (@Kyle_Feldscher) June 16, 2020
Left-wing activists enlist left-wing journalists to help pressure left-wing tech employees to silence their political opponents on the world's biggest platforms. It's egregious and it's becoming more common.— Peter J. Hasson (@peterjhasson) June 16, 2020
The U.S. Senate doesn't consider this matter resolved. Enjoy the colonoscopy. https://t.co/feaCyVgUpq— jon gabriel (@exjon) June 16, 2020
CityJournal.org
written by Adam Candeub and Mark Epstein
Monday, May 7, 2018
When the House Judiciary Committee held a hearing on social media censorship late last month, liberal Democratic congressman Ted Lieu transformed into a hardcore libertarian. “This is a stupid and ridiculous hearing,” he said, because “the First Amendment applies to the government, not private companies.” He added that just as the government cannot tell Fox News what content to air, “we can’t tell Facebook what content to filter,” because that would be unconstitutional.
Lieu is incorrect. While the First Amendment generally does not apply to private companies, the Supreme Court has held it “does not disable the government from taking steps to ensure that private interests not restrict . . . the free flow of information and ideas.” But as Senator Ted Cruz points out, Congress actually has the power to deter political censorship by social media companies without using government coercion or taking action that would violate the First Amendment, in letter or spirit. Section 230 of the Communications Decency Act immunizes online platforms for their users’ defamatory, fraudulent, or otherwise unlawful content. Congress granted this extraordinary benefit to facilitate “forum[s] for a true diversity of political discourse.” This exemption from standard libel law is extremely valuable to the companies that enjoy its protection, such as Google, Facebook, and Twitter, but they only got it because it was assumed that they would operate as impartial, open channels of communication—not curators of acceptable opinion.
When questioning Facebook CEO Mark Zuckerberg earlier this month, and in a subsequent op-ed, Cruz reasoned that “in order to be protected by Section 230, companies like Facebook should be ‘neutral public forums.’ On the flip side, they should be considered to be a ‘publisher or speaker’ of user content if they pick and choose what gets published or spoken.” Tech-advocacy organizations and academics cried foul. University of Maryland law professor Danielle Citron argued that Cruz “flips [the] reasoning” of the law by demanding neutral forums. Elliot Harmon of the Electronic Frontier Foundation responded that “one of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users’ posts.”
As Cruz properly understands, Section 230 encourages Internet platforms to moderate “offensive” speech, but the law was not intended to facilitate political censorship. Online platforms should receive immunity only if they maintain viewpoint neutrality, consistent with traditional legal norms for distributors of information. Before the Internet, common law held that newsstands, bookstores, and libraries had no duty to ensure that each book and newspaper they distributed was not defamatory. Courts initially extended this principle to online platforms. Then, in 1995, a New York state court found Prodigy, an early online service, liable for content on its message boards because the company had advertised that it removed obscene posts. The court reasoned that “utilizing technology and the manpower to delete” objectionable content made Prodigy more like a publisher than a library.
Congress responded by enacting Section 230, establishing that platforms could not be held liable as publishers of user-generated content and clarifying that they could not be held liable for removing any content that they believed in good faith to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This provision does not allow platforms to remove whatever they wish, however. Courts have held that “otherwise objectionable” does not mean whatever a social media company objects to, but “must, at a minimum, involve or be similar” to obscenity, violence, or harassment. Political viewpoints, no matter how extreme or unpopular, do not fall under this category.
The Internet Association, which represents Facebook, Google, Twitter, and other major platforms, claims that Section 230 is necessary for these firms to “provide forums and tools for the public to engage in a wide variety of activities that the First Amendment protects.” But rather than facilitate free speech, Silicon Valley now uses Section 230 to justify censorship, leading to a legal and policy muddle. For instance, in response to a lawsuit challenging its speech policies, Google claimed that restricting its right to censor would “impose liability on YouTube as a publisher.” In the same motion, Google argues that its right to restrict political content also derives from its “First Amendment protection for a publisher’s editorial judgments,” which “encompasses the choice of how to present, or even whether to present, particular content.”
The dominant social media companies must choose: if they are neutral platforms, they should have immunity from litigation. If they are publishers making editorial choices, then they should relinquish this valuable exemption. They can’t claim that Section 230 immunity is necessary to protect free speech, while they shape, control, and censor the speech on their platforms. Either the courts or Congress should clarify the matter.
Adam Candeub is a law professor & director of the Intellectual Property, Information & Communications Law Program at Michigan State University. He previously served as an attorney at the Federal Communications Commission. Mark Epstein is an antitrust attorney specializing in the technology sector.
This says it all about the hypocrisy from the left & #MainstreamMedia. Apparently #COVID19 has a political bias.— LicenseToSellAZ (@Sessal4) June 16, 2020