Eagerness to regulate new technologies is understandable, but measures may infringe upon civil rights or fail to resolve the issues they were designed to tackle. Brazil and India's 2024 electoral processes exemplify this tension.
This statement was originally published on globalvoices.org on 31 May 2024.
Both Brazil and India have elections in 2024
This story is part of Data Narratives, a Civic Media Observatory project that aims to identify and understand the discourse on data used for governance, control, and policy in El Salvador, Brazil, Turkey, Sudan, and India. Read more about the project here and see our public dataset for the full analysis covered in the text below.
As the baton of the G20 presidency passed from India to Brazil in December 2023, digitization emerged as a focal point for both countries, with Big Tech and Artificial Intelligence at center stage of conversation and deliberation, both for their benefits and for their regulation. But does this enthusiasm for harnessing AI’s benefits for their economies and ensuring control over its emerging uses overshadow something else?
Brazil and India, as two prominent economies from the global majority and strong BRICS members, are also among the countries holding elections in 2024. India is finalizing its national election process, and Brazil will vote for municipal representatives in October. Both nations exhibit a keen interest in pioneering regulatory frameworks for new technologies, notably AI, which may reflect a desire to inspire other global majority countries.
While the eagerness to regulate new technologies is understandable, it can sometimes produce unintended and adverse consequences. Measures adopted without sufficient debate may infringe upon civil rights or fail to resolve the issues they were designed to tackle in the first place. Brazil and India’s 2024 electoral processes exemplify this tension, with both countries introducing enforceable norms, though not formal laws, aimed at governing the use of artificial intelligence (AI) in elections.
Brazil: Concerns for freedom of expression
The Brazilian Superior Electoral Court (“Tribunal Superior Eleitoral” or TSE) issued 12 resolutions at the start of March 2024, introducing new electoral rules that already apply to the upcoming elections. The elections will be held in October, at the municipal level only, to elect mayors and city councilors in the country’s 5,570 municipalities.
The TSE introduced several provisions concerning AI and the country’s platform liability regime in cases of electoral propaganda. According to the court’s official website, notable measures directed at political parties and social media companies include a prohibition of deepfakes, a requirement to disclose AI use in electoral propaganda, and restrictions on using bots for voter engagement. Another provision holds major tech companies liable if, during the electoral period, they fail to promptly remove content deemed to pose electoral risks, such as disinformation (including deepfakes), hate speech, and anti-democratic content.
Even though the specific rules on the use of AI during elections have some positive ramifications, the resolution has raised alarms about freedom of expression among Brazilian civil society. It directly challenges the established platform liability regime in Brazilian law, primarily governed by the Marco Civil da Internet (Civil Rights Framework for the Internet), enacted as Law No. 12.965/2014. At the time of its approval, the law was highly valued by important digital civil rights actors worldwide. Under this framework, as a general rule, platforms enjoy intermediary liability protections: they are shielded from responsibility for user-generated content unless they fail to comply with a court order mandating the removal of specific illegal material, as outlined in Article 19.
However, the TSE’s new provisions could impose a burden on platforms to monitor and filter user-generated content, directly changing the country’s liability regime. Failure to comply could result in legal consequences, which can incentivize platforms to err on the side of caution by overzealously removing potentially legitimate content to avoid liability.
It is not publicly known how the TSE arrived at these rules. Organizations that defend users’ rights in Brazil, such as Coalizão Direitos na Rede (Rights in Network Coalition), suggest that the TSE should discuss the serious consequences of this provision with civil society and experts to find ways to mitigate the undesirable effects the new resolution could have on society, such as the massive takedown of legitimate content.
However, this is not the only recent development casting doubt on the stability of Brazil’s liability regime. Challenges to the constitutionality of Article 19 loom large, with the Federal Supreme Court (STF) poised to address the issue in the coming months. Additionally, Congress deliberated for four years on Bill 2630/2020, colloquially dubbed “the Brazilian DSA” (in reference to the European Digital Services Act), which sought to revamp platform regulation and heighten the responsibilities of major tech players. Even though the bill may seem dead after the events of April 2024, another bill is expected to be presented soon, and the scenario suggests change is on the way.
India: Addressing political bias in AI regulation
In India, what started as a frenzy over an AI deepfake video of an Indian actor released in November 2023 soon became a burning issue, with concerns around the use of AI and deepfakes to spread misinformation during India’s election. Prime Minister Narendra Modi of the Bharatiya Janata Party (BJP) lamented in a public address that deepfakes are an emerging threat that needs to be urgently curbed by developing global AI regulation. However, ongoing research by the Data Governance Observatory found that the narrative of AI and deepfakes being used by anti-national actors and opposition parties was deployed to bolster the urgency of regulating them and to depict the harms arising from such content.
During the state legislature elections in Rajasthan, Madhya Pradesh, and Tamil Nadu, a deepfake video of a BJP political leader, then chief minister of Madhya Pradesh, was circulated. In the video, the BJP leader was seen lauding the Indian National Congress (INC), the BJP’s main opposition. A prime-time news channel’s coverage of the “weaponization of AI and deepfakes during elections” highlighted the video to argue that stringent regulation of AI deepfakes is necessary, presenting it as an illustration of the malicious use of deepfakes by opposition parties like the INC. While never directly stating that opposition parties were behind the video, the coverage juxtaposed the deepfake of the BJP leader with the prime minister’s positive approach to AI, advancing the narrative that deepfakes are a tool of the opposition rather than of the BJP.
While opposition parties were made a token for depicting malicious AI use, the agencies and businesses that produce deepfake videos and the platforms that host them were framed as the cause of the problem. These agencies and platforms do bear liability when they make deepfakes for malicious use or fail to remove AI misinformation, but are they the only source of the AI misinformation threat? The political parties that employ these agencies, and the political leaders who forward misinformation during elections, should also share the responsibility.
To tackle the issue of AI misinformation, the Indian government issued advisories on deepfakes and AI. The deepfake advisory obligates platforms to clearly inform users that posting deepfakes can lead to criminal prosecution under the law. Along with the advisory, the government warned that it would soon develop stricter regulations. After the deepfake advisory, the government also issued two iterations of an AI advisory. The first draft mandated that all AI developers and platforms seek government permission before launching a new AI model in India. This was later changed: the obligation to obtain government permission was removed and replaced with a self-regulatory approach in which platforms must self-label AI-generated content. The government also intends to introduce formal AI regulation in June–July, aiming to harness AI’s economic potential while curbing potential risks and harms.
The analysis from the Data Governance Observatory shows that civil society and business alliances have called the government’s approach reactive. They claim that taking down deepfakes and enforcing restrictions is a stop-gap measure that neither adequately considers the impact on innovation nor recognizes the role of political parties. During elections, political parties are held more accountable, and bodies like the Election Commission of India could pressure them to ensure more transparency around deepfakes. Civil society actors argue that, while deepfakes and AI are just another tool for spreading misinformation, the larger problem for India is a weak media system and a polarized atmosphere. So even when the government issues advisories directing tech platforms to take down deepfakes, the lack of clear consultation on platforms’ capacity to address deepfakes, and on the role of bodies like the Election Commission of India, makes the advisories a band-aid solution.
While the regulation of AI and deepfakes within the country remains messy and lacks appropriate consultation, at international forums like the G20 the government has presented itself as a protector of its citizens and a pioneer among global majority countries for inclusive and transparent digitization with a well-thought-out approach to regulation. While this earns India accolades at international forums, the fact remains that the existing approach lacks the effective and transparent policy consultation needed to address political bias and clarify the role of political parties in dealing with AI use during elections.
What are the next steps?
As the international community focuses on Brazil and India as centers of technological innovation, their half-baked AI and liability regulations carry forward free speech concerns, reinforce pre-existing political bias, and lack appropriate consultation. In India, the media ecosystem has grown increasingly polarized, and, under the current government, dissent and debate on regulations have diminished, a pattern now repeating with AI regulation. Unless stakeholders are appropriately consulted and their perspectives included, the AI regulations may not be effective. Brazil, on the other hand, even under a left-leaning government, is also facing great political polarization, with the digital agenda, especially platform and AI regulation, among the critical points of discussion. These issues are especially problematic for both countries: India is already in its election phase, Brazil will soon enter one, and a politically charged atmosphere will only increase the severity of these concerns.
Amid the lack of consultation, narratives around electoral regulations in the digital realm give only a partial picture of the concerns posed by AI and Big Tech; nevertheless, they hold the power to shape the perceptions of the general public and the international community, which look to India and Brazil as front runners in the new tech regulation race.
The efforts of global majority nations to create their own solutions, appropriate to their realities and not drawn solely from European and US approaches, are positive and deserve praise. However, this cannot come at the expense of the fundamental rights of the people of these countries. Although these regulations are urgent and necessary, it is equally important that they actually tackle the problems they intend to solve. Hastily made rules can create new problems and further complicate the already complex, politically divided landscape in these countries.
Written by Shubhangi Heda and Alice Lana