Tech firms and privacy groups say the bill represents a threat to encryption and secure communication. Rights groups are worried that the outsourcing of decisions on illegal content to private platforms could incentivise the over-removal of legitimate content.
This statement was originally published on cpj.org on 27 January 2023.
Does the image [below, in the tweet], depicting the rescue of a child who attempted to reach the U.K. by sea, present the act of immigration in “a positive light”?
It’s an absurd question, of course. It’s journalism – an effort to convey in visual terms the stark truth that tens of thousands of migrants and asylum-seekers try to get into Britain every year, and many lose their lives in the attempt.
Yet this is the question that the U.K. government may be asking social media firms to answer when their users try to upload posts containing video footage of migrant crossings. The proposed online safety legislation is drafted to compel companies like Facebook to control the spread of illegal content within the U.K.
The bill’s implications for immigration reporting aren’t the only thing that should worry journalists. Tech firms and privacy groups say it represents a threat to encryption and secure communication. The bill, which also has supporters in the media industry who back its focus on online abuse, is now in the House of Lords and could be amended further before becoming law, which could happen later this year.
Here’s CPJ’s briefing on what the proposed legislation could mean for press freedom.
Will news coverage of immigration issues really be censored if the bill becomes law?
Michelle Donelan, who heads the U.K.’s Digital, Culture, Media and Sport department, attracted headlines this month when she announced that “posting videos of people crossing the channel which show that activity in a positive light” could be an online offense on grounds that it is abetting the crime of illegal immigration. “Platforms would have to proactively remove that content,” she said in a statement.
The bill explicitly protects journalistic content, but the devil is in the details. Platforms will have to be able to demonstrate to Ofcom, the government-appointed regulator that will enforce the law, that they have preserved press freedom for registered news publishers, U.K.-linked news, and citizens who post for purposes of journalism. However, journalists and other observers have told CPJ they are concerned that the definitions of such outfits and individuals could be written or interpreted in ways that exclude legitimate reporting or boost disinformation.
Rights groups are also wary of how the legislation could be enforced. “The Online Safety Bill still outsources decisions on illegal content to private platforms, essentially privatising the role of law enforcement and incentivising over-removals of legitimate content,” notes rights group Global Partners Digital.
Observers also note that while small companies lack the resources to meet the new law’s requirements, larger companies are more likely to look for technical solutions they can implement across the board, meaning any U.K. restrictions could have far-reaching impact in other countries.
What kind of technical solutions?
Experts say that in order to comply with the requirement to “proactively filter” online information, companies are likely to use artificial intelligence – technology that scans posts for keywords or images before they are published. Upload filters, as they’re known, then block the posts – without the author necessarily being aware of it.
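As a rough illustration only – a hypothetical sketch, not any platform’s actual implementation – the core of a keyword-based upload filter might look like this:

```python
# Hypothetical sketch of a keyword-based upload filter.
# Real platforms use far more complex machine-learning classifiers;
# this only illustrates the pre-publication check itself.

BLOCKED_KEYWORDS = {"example-banned-term"}  # made-up blocklist


def upload_filter(post_text: str) -> bool:
    """Return True if the post may be published, False if it is blocked."""
    words = post_text.lower().split()
    return not any(word in BLOCKED_KEYWORDS for word in words)
```

A post that trips such a filter is simply never published – and, as noted above, the author may never learn why.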
Automated filters make mistakes, and rights groups generally oppose them. If humans are liable to dispute whether video footage of small boats crossing the Channel is “positive” or not, machines will certainly struggle to make that judgment.
Consider this Sky News TikTok post showing migrants rescued at sea. The clear Sky News branding would give it protection as journalistic content. But raw footage of the kind that journalists look for on social media when they’re not at the scene of breaking news is not.
As journalist Diane Taylor wrote in The Guardian, “what if social media companies, fearful of legal action, react by blocking a wide range of Channel-related footage?… Evidence-gathering in investigations into small boat tragedies in the Channel is complex and video footage could be vital.”
Global experts have warned the U.K. government about the bill’s potential to undermine private communication. Why would this impact journalists?
Journalists and their sources face grave physical and legal threats for publishing sensitive information, so it’s vital that they can communicate privately. CPJ’s safety team suggests using services like Signal or WhatsApp that encrypt chats end-to-end – meaning they can only be read by the sender and receiver.
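The “sender and receiver only” property can be shown with a toy one-time-pad example (educational only – real messengers rely on vetted protocols such as the Signal Protocol, not code like this):

```python
# Toy illustration of end-to-end encryption using a one-time pad.
# Educational only: real apps use audited cryptographic protocols.
import secrets


def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    return bytes(b ^ k for b, k in zip(data, key))


# Sender and receiver share a secret key; the relaying server never sees it.
key = secrets.token_bytes(32)
plaintext = b"meet at noon"
ciphertext = xor_bytes(plaintext, key)  # all the server can observe
decrypted = xor_bytes(ciphertext, key)  # only key holders can do this
```

Because the relaying server handles only `ciphertext`, it has no way to read – or scan – what the message says.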
The Online Safety Bill doesn’t ban encryption. However, the content restrictions it lays out apply to private and public communications. Companies can’t, at the moment, scan end-to-end encrypted messages, so it’s not clear how they will meet the requirements to control what people say in them. Unless they break the encryption.
How would breaking encryption work?
Ofcom could only ask platforms to scan private communications relating to child sexual abuse material, Monica Horten of the U.K. Open Rights Group said in a phone interview with CPJ. But, she said, that restriction doesn’t matter too much because the processes involved undermine encryption across the board, not just for targets of a criminal investigation.
One method could be for companies to use the upload filters to intercept and check messages before they are encrypted and sent, she said. “If it’s all fine, it just gets encrypted and goes on its way. If it’s not, it goes to the National Crime Agency. That method is known as client-side scanning.”
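That flow – scanning on the user’s device before encryption – can be sketched as follows (a hypothetical illustration with made-up function names, not a description of any real messenger):

```python
# Hypothetical sketch of client-side scanning.

def matches_watchlist(message: str) -> bool:
    # Stand-in for a hash- or classifier-based check against
    # externally supplied signatures.
    return "flagged-content" in message


def encrypt(message: str) -> str:
    # Placeholder "encryption" for illustration only.
    return message[::-1]


def send_message(message: str) -> str:
    """Scan the plaintext before it is ever encrypted."""
    if matches_watchlist(message):
        return "reported"    # diverted to authorities, never sent
    return encrypt(message)  # otherwise encrypted and sent as normal
```

The key point is that every message is inspected in plaintext before encryption – so the end-to-end guarantee no longer holds for any user, not just targets of an investigation.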
Otherwise, she said, “you’re breaking the encryption somewhere in the middle. You are then creating back doors, and back doors create vulnerabilities which other bad actors can exploit like hackers or hostile states.”
Either way, “instead of having this totally secure messaging system where you know nobody else can get into it, you’ve suddenly got a system where someone else could check into it if they chose to.”
As a legal assessment commissioned by Index on Censorship notes, parts of the bill “amount to state-mandated surveillance because they install the right to impose technologies that would intercept and scan private communications on a mass scale.”
That means the law could open the door to surveillance of journalists and their confidential source communications, not just in the U.K., but worldwide.
Madeline Earp is a consultant technology editor for CPJ. She has edited digital security and rights research for projects including five editions of Freedom House’s Freedom on the Net report, and is a former CPJ Asia researcher.