An investigation by 7amleh reveals how Meta is profiting from the dissemination of harmful content on its platforms.
Ed. Note: At the time of publishing this piece, we at IFEX are bearing witness to the atrocious escalation of violence in Palestine and Israel. In this worrisome context, we express our firm solidarity with IFEX members MADA, 7amleh and I'lam, and with our colleagues throughout the region, as the consequences of the conflict spread beyond their borders.
This statement was originally published on 7amleh.org on 22 November 2023.
7amleh, the Arab Center for the Advancement of Social Media, tested Meta's content moderation policy's ability to prohibit hate speech and incitement in paid advertisements. Our investigation revealed a troubling reality: Meta reaps financial benefits from the dissemination of harmful content on its platforms. The investigation was prompted by the discovery that Facebook ran targeted ads calling for the assassination of individuals, and ads calling for the forcible expulsion of Palestinians from the occupied West Bank to Jordan.
7amleh’s investigation highlighted swift approvals for 19 ads featuring hate speech and incitement in Hebrew against Arabs and Palestinians in the context of the war on Gaza. This sheds light on the failure of Meta’s “automated and manual review” enforcement mechanisms aimed at prohibiting inflammatory advertising content on its platforms, and the company’s inadvertent financial gains from the propagation of this rhetoric.
7amleh's investigation tested Meta's Hebrew hostile speech classifiers and its automated advertising standards enforcement mechanisms. It ran a targeted campaign of 19 ads, containing hate speech and incitement to violence in Hebrew, across Meta's platforms. The approved ads included calls to "wipe out Gaza, its women, children and elderly" and other inflammatory phrases that explicitly called for killing Palestinians, burning all of Gaza, deporting people to other countries, and carrying out a second Nakba.
For this investigation, 7amleh also tested the same ads in Arabic, and as with Hebrew, all of the Arabic ads were approved. This follows a troubling pattern of Facebook allowing ad campaigns that target Palestinians in their native language. For example, a Facebook profile called "Migrate Now" recently started posting ads calling on "Arabs in Judea and Samaria" to migrate to Jordan "before it's too late". This coded language is an obvious tool of intimidation and has no place on Meta's platforms. Meta should certainly not be profiting from groups running these types of hateful ads.
After receiving approval for all the ads, 7amleh shared documentation of the investigation with The Intercept. Shortly after journalists at The Intercept reached out to Meta for comment, however, 7amleh received notices that the ads had been retroactively rejected. Though 7amleh hopes that Meta takes this issue seriously, it is important to note that the social media giant has a track record of apologizing for individual problems without taking serious action to create systemic solutions.
The investigation aimed to assess the platform’s ability to identify and prevent the dissemination of harmful content. The alarming speed of approval, within an hour, and the scheduled publication of these ads highlighted significant vulnerabilities. It is important to note that, given the experimental nature of the investigation, 7amleh never intended to actually run the ads, and publication was halted before going live.
The findings underscore the urgency for Meta to address these shortcomings, not just in its classifiers, but also in its content moderation protocols. For years, Palestinian civil society organizations have raised concerns about escalating violations of Palestinian digital rights on Meta's platforms. Last year's independent Human Rights Due Diligence report by Business for Social Responsibility (BSR) underscored issues related to the lack of Hebrew classifiers to counter hate speech and incitement. The proliferation of harmful speech continues to undermine Meta's stated commitment to safeguarding the safety and dignity of all users on its platforms. Meta's admission that its Hebrew hostile speech classifiers were ineffective due to insufficient data exacerbates these concerns. In the company's most recent official Human Rights update, however, Meta asserted that it has launched a functioning Hebrew classifier, so we assume the classifier is operational across all its platforms. This persistent issue emphasizes the immediate need for Meta to address its content moderation deficiencies and protect Palestinian communities from further harm.
7amleh calls on Meta to prevent further exploitation of its platforms for the dissemination of hate speech and incitement to violence. This must be done for Meta to fulfill its responsibility of safeguarding communities from harm. The finding that Meta reaps financial benefits from violent content must be addressed with the utmost seriousness. This goes hand in hand with the call on Meta to "Stop Dehumanizing Palestinians and Silencing Their Voices". The company has an ethical and legal responsibility to prevent hate speech and incitement from circulating across its platforms, as the risk of online incitement translating into real-world harm against Palestinian individuals and communities persists.