IFEX member 7amleh is using AI to promote accountability and strengthen protection from online hate campaigns. The Violence Indicator offers a sobering look at the real-time connection between online incitement and escalating physical violence perpetrated against Palestinians.
Disinformation doesn’t just mislead; it incites. In Palestine, false and inflammatory narratives rapidly turn into hate speech that dehumanises entire communities. That this happens in a context where speaking out means risking arrest, violence, or worse makes the lies even harder to confront. Whether it’s Israeli officials accusing Palestinian journalists, medics, and even UNRWA workers of being members of militant groups, or falsely depicting victims of Israel’s attacks as “crisis actors”, these false narratives spread fast. Repeated often enough, they strip away protections, normalise dehumanisation, and justify violence.
In response, civil society is stepping up, with IFEX members like the Arab Center for the Advancement of Social Media (7amleh) leading the charge to reclaim online spaces.
A cornerstone of 7amleh’s work is the Violence Indicator, an AI-powered language model that scans Hebrew- and Arabic-language content for hate speech and incitement. First deployed in Hebrew before expanding to an Arabic version, the indicator’s data feeds into 7or, a real-time platform with visual dashboards and metrics that researchers and advocates use to demand stronger safeguards.
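7amleh has not published the Violence Indicator’s internals, but the description above maps onto a familiar pattern: run each post through a fine-tuned language-model classifier and forward whatever it flags to a dashboard. The Python sketch below illustrates that pattern using the Hugging Face transformers library; the model name, label scheme, and scan_posts helper are placeholders, not 7amleh’s actual code.

```python
# Minimal sketch of a hate-speech scanning pipeline, assuming a fine-tuned
# transformer classifier. The model name is a placeholder; 7amleh has not
# published the Violence Indicator's actual implementation.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/hebrew-arabic-hate-speech",  # hypothetical model
)

def scan_posts(posts: list[dict]) -> list[dict]:
    """Keep posts the classifier labels as hate speech with high confidence."""
    flagged = []
    for post in posts:
        result = classifier(post["text"], truncation=True)[0]
        # Assumes the placeholder model emits a "HATE" label.
        if result["label"] == "HATE" and result["score"] >= 0.9:
            flagged.append({**post, "score": result["score"]})
    return flagged
```

In a system like this, flagged posts would be aggregated by day, platform, and keyword to drive dashboards such as 7or’s, rather than being republished verbatim.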
“Monitoring hate and disinformation is a first, critical step in countering it,” Nadim Nashif, 7amleh’s Executive Director, told IFEX. “Our tool doesn’t just collect data. It provides evidence, early warning, and a foundation for accountability.”
The hate speech it tracks often peaks during crises, and the indicator’s data lays bare, in real time, how online incitement escalates into physical violence against Palestinians on the ground.
In early 2023, the tool flagged six weeks of coordinated Hebrew-language incitement on X (then Twitter) that preceded the 26 February pogrom in Huwara in the Occupied Palestinian Territories, where Israeli settlers torched homes, cars, and livestock, injured hundreds, and killed one Palestinian. “We saw [Israeli] mobs organising attacks in online communities. The incitement was happening online first, before turning into real-world violence,” said Nashif.
After the 7 October attacks and the start of Israel’s war on Gaza, 7amleh’s Violence Indicator captured millions more Hebrew posts containing hate speech and incitement between October and December 2023, content that stoked violence, enabled war crimes, and shaped real-world policy. Civil society organisations warned that this deluge of hate was not only deepening the toxicity of the online information environment but also manufacturing public consent for atrocities.
“People are being killed, homes are destroyed, and lives are shattered – and a lot of it starts online,” underscored Nashif. “That’s not anecdotal; there’s statistical evidence now.”
Since its launch, the Violence Indicator has become an invaluable resource, providing the evidence-based insights needed to hold tech giants accountable and changing the way organisations like 7amleh engage with them.
“When you sit down with these companies, they’re experts in gaslighting,” Nashif says. “They’ll say, ‘Oh, we’re not aware’ [of the issue] or that they don’t have the data to support the claims. But when you provide a huge dataset [and] show them hundreds of cases, it becomes much harder to deny.”
According to Nashif, platforms have historically over-moderated Arabic content, driven in part by post-9/11 pressure to scrutinise Arab and Muslim voices. While these hostile speech classifiers took an overcautious approach that often unfairly targeted Arabic speakers, Hebrew-language content went largely unchecked, with Meta initially lacking a Hebrew moderation system altogether.
One of the project’s most significant outcomes has been helping pressure Meta into implementing effective Hebrew-language classifiers that automatically flag or remove hate speech, content that had long escaped proper enforcement under the company’s moderation policies.
“In the early days of the war, phrases like ‘Death to Arabs’ were everywhere. Now those keywords are automatically removed or flagged. You can’t even write them anymore,” Nashif says. “It’s not perfect. There’s still a lot of work to do. But compared to where we started, it’s a huge step forward.”
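Nashif’s example points to the simplest layer of such enforcement: normalising text and matching it against a blocklist of known incitement phrases before any heavier classifier runs. The sketch below shows only that keyword layer, with an illustrative blocklist; production systems like Meta’s rely on trained classifiers rather than fixed lists, and this is not their code.

```python
import re

# Illustrative blocklist only; a real classifier learns patterns rather than
# matching fixed strings. "מוות לערבים" is Hebrew for "Death to Arabs",
# the phrase Nashif cites as now being auto-removed.
BLOCKLIST = ["מוות לערבים", "death to arabs"]

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so trivial obfuscation doesn't slip through."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def keyword_flag(text: str) -> bool:
    """Return True if the post contains a blocklisted incitement phrase."""
    cleaned = normalise(text)
    return any(phrase in cleaned for phrase in BLOCKLIST)
```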
Still, the threats are evolving and expanding. Disinformation and hate speech are now weaponised in AI-driven influence campaigns that spin up swarms of fake accounts to suppress independent reporting and legitimise the targeting of journalists, especially in Gaza, where over 200 Palestinian journalists and media workers have been killed by Israeli forces since October 2023.
“We can detect violence or hate, but not always coordination,” Nashif notes. “Now we’re facing something completely different. Using AI, you can generate thousands of highly convincing profiles in seconds to target political opposition, suppress independent media, and manipulate public opinion on a global scale.”
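Coordination detection is a different problem from content classification: it looks at behaviour across accounts rather than at any single post. One widely used heuristic, sketched below in Python, is to flag “copypasta” clusters, where many distinct accounts publish near-identical text within minutes of each other. This is a generic illustration of the technique, not 7amleh’s method, and the thresholds are arbitrary.

```python
# Generic "copypasta" heuristic for surfacing possible coordination: many
# distinct accounts posting near-identical text within a tight time window.
# This is an illustrative technique, not 7amleh's (unpublished) method.
from collections import defaultdict
from datetime import timedelta

def find_copypasta_clusters(posts, window=timedelta(minutes=10), min_accounts=20):
    """posts: dicts with 'text', 'account', and 'time' (a datetime).
    Returns clusters of identical text shared by many accounts at once."""
    by_text = defaultdict(list)
    for p in posts:
        # Normalise whitespace and case so trivial edits don't split a cluster.
        by_text[" ".join(p["text"].lower().split())].append(p)

    clusters = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = sorted(p["time"] for p in group)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window:
            clusters.append({"text": text, "accounts": len(accounts)})
    return clusters
```

AI-generated campaigns of the kind Nashif describes paraphrase rather than copy, which is exactly why he calls them “something completely different”: textual fingerprints alone no longer suffice.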
Even so, 7amleh sees an opportunity. The same AI tools that power disinformation and deepen an already toxic online environment can be repurposed for protection, helping civil society build resilience and adapt to emerging threats. Building that kind of technological capacity within civil society is no longer optional, and 7amleh sees innovation as integral to staying ahead of these evolving challenges.
“We talk a lot about digital transformation and AI, but many organisations are still working with outdated methods and tools,” explained Nashif. “We need to build our own tools. We need to embrace AI, large language models, and data tools; not as luxuries, but as necessities for advocacy.”
By turning online evidence into action, 7amleh’s work offers a vital blueprint for civil society in Palestine and beyond to confront digital violence head-on. In a context where unchecked disinformation and online hate fuel real-world harm, this work is not just urgent, it’s essential to securing safety, justice, and accountability.