This statement was originally published on pen.org.
When PEN America and Meedan asked writers, journalists, and creators about their experiences reporting online abuse to social media platforms over the past three years, we heard, again and again, about the deep frustration, exasperation, and harm caused by the reporting mechanisms themselves:
“I do the reports because I don’t want to not report. That’s even worse. But it feels like shouting into a void. There’s no transparency or accountability.” – Jaclyn Friedman, writer and founder, Women, Action & the Media1
“Reporting is the only recourse that we have when abuse happens. It’s a form of accountability… but when people constantly feel like they are wasting their time, they are just going to stop reporting.” – Azmina Dhrodia, expert on gender, technology, and human rights and former senior policy manager, World Wide Web Foundation2
“The experience of using reporting systems produces further feelings of helplessness… Rather than giving people a sense of agency, it compounds the problem.” – Claudia Lo, senior design and moderation researcher, Wikimedia3
Online abuse is a massive problem.4 According to a 2021 study from the Pew Research Center, nearly half of adults in the U.S. have personally experienced online harassment. The rate of severe harassment – including stalking and sexual harassment – has significantly increased in recent years.5
For journalists, writers, and creators who rely on having an online presence to make a living and make their voices heard, the situation is even worse – especially if they belong to groups already marginalized for their actual or perceived identity. In a 2020 global study of women journalists from UNESCO and the International Center for Journalists, 73 percent of respondents said they experienced online abuse. Twenty percent reported that they had been attacked or abused offline in connection with online abuse. Women journalists from diverse racial and ethnic groups cited their identity as the reason they were disproportionately targeted online.6 According to Amnesty International’s 2018 report, Toxic Twitter: A Toxic Place for Women, Black women were “84 percent more likely than white women to be mentioned in abusive or problematic tweets.”7
Being inundated with hateful slurs, death threats, sexual harassment, and doxing can have dire consequences. On an individual level, online abuse places an enormous strain on mental and physical health. On a systemic level, when creative and media professionals are targeted for what they write and create, it chills free expression and stifles press freedom, deterring participation in public discourse.8 Online abuse is often deployed to stifle dissent. Governments and political parties are increasingly using online attacks, alongside physical attacks and trumped-up legal charges, to intimidate and undermine critical voices, including those of journalists and writers.9
The technology companies that run social media platforms, where so much of online abuse plays out, are failing to protect and support their users. When the Pew Research Center asked people in the U.S. how well social media companies were doing in addressing online harassment on their platforms, nearly 80 percent said that companies were doing an “only fair” or “poor” job.10 According to a 2021 study of online hate and harassment conducted by the Anti-Defamation League and YouGov, 78 percent of Americans specifically want companies to make it easier to report hateful content and behavior, up from 67 percent in 2019.11
Finding product and policy solutions that counter the negative impacts of online abuse without infringing on free expression is challenging, but it’s also doable – with time, resources, and will. In a 2021 report, No Excuse for Abuse, PEN America outlined a series of recommendations that social media platforms could enact to reduce risk, minimize exposure, facilitate response, and deter abusive behavior, while maintaining the space for free and open dialogue. That research made clear that the mechanisms for reporting abusive and threatening content to social media platforms were deeply flawed.12 In this follow-up report, we set out to understand how and why.
On most social media platforms, people can “report” to the company that a piece of content – or an entire account – is violating policies. When a user chooses to report abusive content or accounts, they typically initiate a “reporting flow,” a series of steps they follow to indicate how the content or account violates platform policies. In response, a platform may remove the reported content or account, use other moderation interventions (such as downranking content, issuing a warning, etc.), or take no action at all, depending on the company’s assessment of whether the reported content or account is violative.
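To make the structure of such a flow concrete, here is a minimal sketch in Python of how a generic report and its possible outcomes might be modeled. The category names, fields, and outcomes are illustrative assumptions for the purposes of this discussion, not any specific platform’s implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class PolicyCategory(Enum):
    """Illustrative policy categories a reporter might choose from."""
    HARASSMENT = auto()
    HATE_SPEECH = auto()
    THREAT_OF_VIOLENCE = auto()
    PRIVACY_VIOLATION = auto()  # e.g., doxing
    OTHER = auto()


class ModerationOutcome(Enum):
    """Possible platform responses to a report."""
    REMOVE_CONTENT = auto()
    SUSPEND_ACCOUNT = auto()
    DOWNRANK = auto()      # reduce distribution without removing the content
    WARN_AUTHOR = auto()
    NO_ACTION = auto()


@dataclass
class Report:
    """A single user report as it moves through a reporting flow."""
    reporter_id: str
    target_content_id: str
    category: PolicyCategory
    context: str = ""                            # free-text explanation from the reporter
    outcome: Optional[ModerationOutcome] = None  # filled in after adjudication
```

In these terms, a reporting flow is simply the sequence of screens that collects the fields of such a report; the outcome is what the user may – or, too often, may not – hear back about.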
For users, reporting content that violates platform policies is one of the primary means of defending themselves, protecting their community, and seeking accountability. For platforms, reporting is a critical part of the larger content moderation process.
To identify abusive content, social media companies use a combination of proactive detection – via automation and human moderation – and reactive detection via user reporting, with reports then adjudicated by automated systems or human moderators. The pandemic accelerated platforms’ growing reliance on automation, including the algorithmic detection of harmful language. While automated systems help companies operate at scale and lower costs, they are highly imperfect.13
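As a rough illustration of how proactive and reactive signals can be combined, the sketch below routes a piece of content to automated removal, human review, or no action. The thresholds, function name, and decision labels are hypothetical assumptions, not a description of any platform’s actual pipeline.

```python
AUTO_REMOVE_THRESHOLD = 0.95   # assumed model confidence above which content is removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed model confidence above which content is queued for review


def triage(content_id: str, abuse_score: float, user_report_count: int) -> str:
    """Combine a proactive signal (model score) with a reactive one (user reports).

    Returns a routing decision for a single piece of content; the thresholds
    and decision labels are illustrative assumptions.
    """
    if abuse_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # high-confidence automated removal
    if abuse_score >= HUMAN_REVIEW_THRESHOLD or user_report_count > 0:
        return "human_review"   # ambiguous or user-reported cases go to moderators
    return "no_action"


# Example: a borderline model score plus several user reports is escalated to a person.
print(triage("post_123", abuse_score=0.42, user_report_count=3))  # -> "human_review"
```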
Human moderators are better equipped to take the nuances of language, as well as cultural and sociopolitical context, into account. Relying on human moderation to detect abusive content, however, comes with its own challenges, including scalability, implicit bias, and fluency and cultural competency across languages. Moreover, many human moderators – the majority of whom are located in the Global South – are economically exploited and traumatized by the work.14
Because proactive detection of online abuse, both human and automated, is highly imperfect, reactive user reporting remains a critical part of the larger content moderation process. More effective user reporting, in turn, can also provide the data necessary to better train automated systems. The problem is that when reporting mechanisms do not work properly, the entire content moderation process is undermined, significantly impeding the ability of social media companies to fulfill their duty of care to protect their users and facilitate the open exchange of ideas.
A poorly functioning moderation process threatens free expression in myriad ways. Content moderation interventions that remove or reduce the reach of user content can undermine free expression, especially when weaponized or abused.15 At the same time, harassing accounts that are allowed to operate with impunity can chill the expression of the individuals or communities they target.16
In our research, we found that reporting mechanisms on social media platforms are often profoundly confusing, time-consuming, frustrating, and disappointing. Users frequently do not understand how reporting actually works, including where they are in the process, what to expect after they submit a report, and who will see their report. Additionally, users often do not know if, or why, a decision has been reached regarding their report. They are consistently confused about how platforms define specific harmful tactics and therefore struggle to figure out if a piece of content is violative. Few reporting systems currently take into account coordinated or repeated harassment, leaving users with no choice but to report dozens or even hundreds of abusive comments and messages piecemeal.
On the one hand, the reporting process takes many steps and can feel unduly laborious; on the other, there is rarely an opportunity to provide context or explain why a user may find something abusive. Few platforms offer any kind of accessible or consistent documentation feature, which would allow users to save evidence of online abuse even after it has been deemed violative and removed. And fewer still enable users to ask their allies for help with reporting, which makes it more difficult to reduce exposure to abuse.
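To illustrate what two of these missing features could look like in practice, here is a brief, hypothetical sketch: a single batch report that bundles many abusive items (as in cases of coordinated or repeated harassment) and preserves a local copy of the evidence so it survives removal. All function and field names are assumptions for illustration only.

```python
import json
import time
from pathlib import Path


def file_batch_report(reporter_id: str, content_items: list, context: str,
                      archive_dir: str = "abuse_evidence") -> dict:
    """Bundle many abusive items into one report and keep a local evidence copy.

    Sketches two features discussed above: batch reporting of coordinated or
    repeated harassment, and documentation that persists even after removal.
    """
    Path(archive_dir).mkdir(exist_ok=True)
    filed_at = int(time.time())
    batch = {
        "reporter_id": reporter_id,
        "filed_at": filed_at,
        "context": context,                                   # reporter-supplied explanation
        "item_ids": [item["id"] for item in content_items],   # many items, one report
    }
    # Save the full content locally so evidence persists even if the platform removes it.
    evidence_path = Path(archive_dir) / f"report_{filed_at}.json"
    evidence_path.write_text(json.dumps({"batch": batch, "content": content_items}, indent=2))
    return batch
```

A user – or a trusted ally acting on their behalf – could then submit the returned batch through a single reporting flow rather than dozens of individual ones.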
When the reporting process is confusing, users make mistakes. When the reporting process does not leave any room for the addition of context, moderators may lack the information they need to decide whether content is violative. It’s a lose-lose situation – except perhaps for abusive trolls.
For this report, nonprofit organizations PEN America and Meedan joined forces to understand why reporting mechanisms on platforms are often so difficult and frustrating to use, and how they can be improved. Informed by interviews with nearly two dozen writers, journalists, creators, technologists, and civil society experts, as well as extensive analysis of existing reporting flows on major platforms (Facebook, Instagram, YouTube, Twitter, and TikTok), this report maps out concrete, actionable recommendations for how social media companies can make the reporting process more user-friendly, more effective, and less harmful.
While we discuss the policy implications of our research, our primary goal is to highlight how platform design fails to make existing policies effective in practice. We recognize that reporting mechanisms are only one aspect of content moderation, and that changes to reporting mechanisms alone are not sufficient to mitigate the harms of online abuse. Comprehensive platform policies, consistent and transparent policy enforcement, and sophisticated user-centered features are central to more effectively addressing online abuse and protecting users. And yet reporting remains the first line of defense for millions of users worldwide facing online harassment. If social media platforms fail to revamp reporting and to put more holistic protections in place, public discourse in online spaces will remain less inclusive, less equitable, and less free.