From blackouts to legislation, from fact-checking to education, Mong Palatino scrutinises some of the most popular responses to the problem of disinformation in his region.
This is part of a series IFEX is producing on regional experiences with the global problem of information disorder, and what people are doing to counter it.
Disinformation may be a global phenomenon, but its impact and the measures used to counter it vary from country to country.
In Asia-Pacific we are experiencing a rapidly changing media environment, and many countries are either in transition or besieged by political turmoil. The growing problem of disinformation clearly exacerbates social tensions and undermines democracy.
Its impact is far-reaching – and some of the proposed solutions are, as well. Maybe too far-reaching. Are measures to address disinformation – or “fake news”, as it is often referred to – negatively impacting freedom of expression and information, and closing civic space?
Several governments have responded with new laws and regulations. Media and civil society groups have launched their own initiatives to tackle the issue. Even tech companies have tweaked their platforms to prevent the spread of so-called ‘fake news’. But many of these efforts to combat disinformation have engendered their own concerns.
In this article I look at some of the most popular tools and practices in dealing with disinformation in the Asia-Pacific region, and ask: Are these solutions working? How are they affecting the lives of ordinary residents?
Communications blackouts
“The human cost of this blackout is immeasurable.”
Disinformation can easily inflame hatred and ignite communal violence. Consider the consequences of hardline Buddhists in Myanmar sharing false claims against the Rohingya minority. Governments often invoke the fear of escalating unrest to justify broad or complete restrictions on communication networks.
Following the deadly bomb attacks in Sri Lanka in April 2019, social media platforms were blocked. The same course of action was deployed in Indonesia during the post-election riots last May.
Sri Lankan authorities said that this was needed to “avoid propagating unverified reports and speculation.” But stopping people from sharing information in general also means that verified information from credible sources can’t be accessed. Guaranteeing the safety of citizens during emergency situations means providing them with constant updates from state agencies and trustworthy institutions about new threats, relief efforts, and life-saving security measures.
In addition to this, as the International Federation of Journalists (IFJ) notes, blocking communication signals creates “unnecessary stress on people and families as they try to contact and confirm the safety of their loved ones.”
The work of the media is also hampered, since journalists can’t reach their sources or file reports on time.
And as in Kashmir, where the internet and even landline connections have been blocked since August, blackouts disrupt the delivery of basic services, such as emergency medical care.
Human rights groups in the region have warned that the “human cost of this blackout is immeasurable”, including its devastating impact on local businesses.
Legislating against ‘fake news’
“This sort of power handed over to any government is just ripe for abuse.”
Governments cite the threats posed by disinformation to justify regulating online content. In many countries, authorities apply existing laws to criminalize disinformation. Thailand, for example, uses the Computer Crime Act to go after those peddling disinformation.
In recent years, a growing number of governments have drafted separate bills and executive orders to fight so-called ‘fake news’. In April 2018, Malaysia passed the ‘Anti-Fake News Act’ a few weeks before the general elections, amid criticisms that it was intended to suppress opposition voices. The country’s new ruling coalition initially pledged to repeal the law, but some legislators are now insisting that reform is the better option.
In May 2019, Singapore’s Parliament voted in favor of the Protection from Online Falsehoods and Manipulation Act (POFMA) which critics described as ‘Orwellian’ – since it essentially grants government ministers the power to decide what is true or false. Independent journalist Kirsten Han told IFEX that it gives the government “a vast amount of power over public discourse without adequate checks and balances.” She warned that “this sort of power handed over to any government is just ripe for abuse.”
Cambodia and the Philippines also have pending bills against disinformation. But, like the laws passed in Malaysia and Singapore, these bills are criticised for their vague definition of ‘fake news’, which can be abused to harass activists, journalists, and opposition members.
Fact-checking collaboration
“Do we really reach those people who are impacted by dis/misinformation?”
Journalists and civil society members perform a crucial role by organising and leading several fact-checking initiatives in the region. These include the Fact Check Initiative Japan, the ‘Fact Check Center’ by independent news website Prachatai in Thailand, the teaming up of media outlet Rappler and non-profit organisation Vera Files with Facebook in the Philippines, and CekFakta in Indonesia.
Wahyu Dhyatmika, editor of Tempo.co and a board member of IFEX member the Alliance of Independent Journalists in Indonesia, underscores the value of collaboration in any campaign against disinformation. “We shared our fact-checked articles and work with Google to enhance the performance of those articles on search engine. We also separately work with Facebook as third-party fact-checkers.”
He added that CekFakta did live fact-checking during Indonesia’s presidential debates – especially important since the two major candidates relied heavily on “buzzers” (individuals paid to create online disinformation operations) to get votes.
It is a reminder that fact-checking is not merely an extension of media reporting, but an essential journalistic duty, since the most malicious forms of disinformation can be traced to the work of political parties and even government centers.
Reflecting on the work of CekFakta, Dhyatmika told IFEX that they want to overcome bigger challenges: “Do we really reach those people who are impacted by dis/misinformation? How can we avoid preaching to the literate only, and really making a difference to people that were influenced by mis/disinformation?”
Self-regulation by the media
“There is an historical overlay of truth-telling – because any attempt to deceive can be quickly exposed and has repercussions for those responsible.”
Politicians often dismiss journalists as purveyors of ‘fake news’ when journalists publish or broadcast critical reports about the government. Worse, their supporters resort to intimidation through doxxing and other forms of online bullying. But what politicians should do – if they have a legitimate issue with a wrong or inaccurate report – is to raise it directly and properly with the media outlets concerned.
Having a working mechanism for redress helps the media counter charges that they are behind a disinformation operation. Many countries in the region have press councils that can process complaints against erring members. The problem is that many politicians prefer to file criminal charges against members of the press rather than resolve their concerns through non-antagonistic means.
Australia provides an example of how self-regulation by the media can minimize the destructive impact of politicians unfairly accusing journalists of enabling disinformation. IFEX member Media, Entertainment & Arts Alliance (MEAA) encourages media listeners, viewers, and readers to raise their complaints directly with media outlets. Aside from this, the Australian Press Council and even the government broadcast regulator maintain their own complaint mechanisms. MEAA notes that “there’s a general practice observed by the media to report truthfully, fairly, accurately; to disclose all essential facts and to correct errors at the earliest opportunity.”
MEAA added that there is an “historical overlay of truth-telling – because any attempt to deceive can be quickly exposed and has repercussions for those responsible.”
Taiwan’s experience
“If there is a trending rumor, if the ministries go out and clarify within one hour, then actually more people hear the clarification first.”
For Taiwan’s digital minister Audrey Tang, disinformation is “information that is intentionally harmful and untrue.” She leads the country’s battle against disinformation without relying on censorship laws, citing the country’s traumatic experience with martial law and the public’s preference to continue the process of democratization.
Responding to a query published on a government portal, she explains the ministry’s proactive approach to countering disinformation: “If there is a trending rumor, if the ministries go out and clarify within one hour, then actually more people hear the clarification first. The second defense, of course, is collaborative checking.”
She emphasized the participation of various sectors in flagging disinformation, adding that her ministry is working with the local internet community so that suspicious information will not be deleted, but rather stored in a spam-like folder. “Once the sender sends another email it will still reach the recipient, it’s not censorship, but it goes to the junk mail folder so it doesn’t waste people’s time by default.”
She also suggests that any public program against disinformation should be guided by a norm-first approach. In other words, establishing a social norm that discourages the sharing of disinformation is less confrontational and punitive than a law, and is more effective at building public support.
Response of tech companies
“Tech companies must also re-think their internal policies to ensure that self-initiated content takedowns are not arbitrary, and users have a right to voice their concerns.”
In the Asia-Pacific region, Facebook has an estimated 577 million users, and Twitter about 94 million. WhatsApp’s biggest market is India, where it has 400 million users. Since 2018, these massively popular global tech companies have acknowledged the role their platforms can play in amplifying hate speech and other forms of disinformation.
One example is Myanmar. Facebook has vowed to address concerns raised by civil society about the company’s slow response to posts that incite race-based violence and religious extremism and enable the persecution of ethnic minority groups like the Rohingya. The tech giant reports that it has already removed hundreds of Myanmar-based accounts linked to powerful political forces, such as the military, for engaging in ‘coordinated inauthentic behavior’.
The protests that brought almost two million people into the streets of Hong Kong over the past three months were inspiring to many – and troubling to others, who launched an active campaign of disinformation and doxxing against the protesters and the journalists covering the rallies. Twitter announced that it removed hundreds of accounts that were spreading disinformation against the protest movement.
But some fear that the approaches taken by powerful tech companies risk chipping away at the rights to freedom of expression and information, and are urging the companies to be more transparent and careful about their actions. For instance, Twitter has been accused of removing accounts that expressed solidarity with Kashmir, based on requests submitted by the Indian government. IFEX member Sflc.in said tech companies “must also re-think their internal policies to ensure that self-initiated content takedowns are not arbitrary and users have a right to voice their concerns.”
Also in India, WhatsApp has agreed to prohibit the practice of adding users to chat groups without their consent. Before this adjustment, civil society groups said, “users are vulnerable to large-scale harassment and privacy violations” because they could be exposed to undesirable content by being added to groups without their approval. WhatsApp has also placed technical restrictions on message forwarding to hinder automated disinformation operations.
Despite these efforts, social media and messaging apps continue to be swamped with disinformation. Authorities highlight this to push for stricter regulation of the internet, but it is also inspiring the media and civil society to adopt other approaches to defeating disinformation.
We’re all in this together: Building critical thinking and resilience
There are different views and strategies on how to address the growing spectre of disinformation, and I’ve reviewed examples across the Asia-Pacific region by governments, the media, civil society, and tech companies. There is no single solution that will effectively end the disorder caused by disinformation. But all stakeholders will need to be united in their goal of improving public awareness and media literacy. This requires sustained dialogue and cooperation which can only take place under conditions where freedom of expression is actively promoted and civil liberties are genuinely protected.
For useful resources on disinformation, check out the UNESCO handbook on teaching and learning about disinformation, the Poynter global database of anti-misinformation actions, and the work of the Journalism Trust Initiative on building better transparency and standards among media practitioners in combatting disinformation.
While accusations of ‘fake news!’ still take up enormous bandwidth, the term is vague at best, and easily manipulated. Following the important work of Claire Wardle and Hossein Derakhshan, we consider three aspects of “information disorder”: disinformation, misinformation, and mal-information. Disinformation is information that is false and deliberately created to harm a person, social group, organisation or country. Misinformation is information that is false, but not created with the intention of causing harm. Mal-information is information that is based on reality, used to inflict harm on a person, social group, organisation or country. We hope this series will help broaden understanding and encourage dialogue about the problem of information disorder as well as the repercussions countermeasures may have on civic space and on our right to freedom of expression and information.