Governments are making unprecedented demands for online platforms to police speech. Many social media companies are rushing to comply, but in responding to these calls to remove objectionable content, they too often censor valuable speech.
This statement was originally published on eff.org on 12 June 2019.
Over the past year, governments have made unprecedented demands for online platforms to police speech, and many companies are rushing to comply. But in their response to calls to remove objectionable content, social media companies and platforms have all too often censored valuable speech. While it is reasonable for companies to moderate some content, no one wins when companies and governments can censor online speech without transparency, notice, or due process.
This year’s Who Has Your Back report examines major tech companies’ content moderation policies in the midst of massive government pressure to censor. We assess companies’ policies in six categories:
- Transparency in reporting government takedown requests based on legal demands
- Transparency in reporting government takedown requests alleging platform policy violations
- Providing meaningful notice to users of every content takedown and account suspension
- Providing users with an appeals process to dispute takedowns and suspensions
- Transparency regarding the number of appeals
- Public support of the Santa Clara Principles
These categories build on last year’s first-ever censorship edition of Who Has Your Back in an effort to foster improved content moderation best practices across the industry. Even with stricter criteria, we are pleased to see several companies improving from last year to this year.
Only one company – Reddit – earned stars in all six of these categories. And two companies – Apple and GitHub – earned stars in five out of six categories, both falling short only on appeals transparency. We are pleased to report that, of the 16 companies we assess, 12 publicly endorse the Santa Clara Principles on Transparency and Accountability in Content Moderation, indicating increasing industry buy-in to these important standards.
Some content moderation best practices are seeing wider adoption than others. Although providers increasingly offer users the ability to appeal content moderation decisions, they do not as consistently provide users with clear notice and transparency regarding their appeals processes. According to the policies of several providers, users have the ability to appeal all content removals, but they may not receive notification that their content has been removed in the first place. This creates a critical gap in information and context for users trying to navigate takedown and suspension decisions – and for advocates striving to better understand opaque content moderation processes. Moving forward, we will continue to encourage more consistent adoption of the best practices identified in this report and to press for closing these critical information gaps.