The UK government has just released its proposal for tackling "online harms", including how content on social media platforms should be monitored and regulated.
This statement was originally published on privacyinternational.org on 8 April 2019.
In particular, the Department for Digital, Culture, Media and Sport (DCMS) and Home Office proposal introduces:
- a mandatory “duty of care” that social media companies are compelled to uphold or face fines. The government describes the duty of care as “requiring companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services”.
- the creation of an independent regulator, with new enforcement powers covering disinformation, violent content, cyberbullying, and more. The regulator would be able to enforce the duty of care and other measures set out in the white paper, issue fines, block access to websites, and impose liability on individual members of companies’ senior management. Other powers would include:
  - pushing social media companies and others to publish annual transparency reports detailing the quantity of “harmful content” and what the company is doing to address it
  - forcing companies to respond more quickly to user complaints
  - issuing codes of practice, which could require companies to minimise the spread of disinformation
Our take
PI welcomes the UK government’s commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so will introduce, rather than reduce, “online harms”. A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls on others to respond to the consultation as well.
Here are our initial suggestions:
- proceed with care: proposals for regulating content on digital media platforms should be evaluated very carefully, given the high risk of negative impacts on expression, privacy, and other human rights. This is a complex challenge, and we support the need for broad consultation before any legislation is put forward in this area.
- do not lose sight of how data exploitation facilitates the harms identified in the report, and ensure any new regulator works closely with others working to tackle these issues.
- assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression and privacy.
- require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online, and enforce such decisions in a manner consistent with human rights norms.
- assess the privacy implications of any demand for “proactive” monitoring of content on digital media platforms.
- ensure that any requirement or expectation of deploying automated decision-making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on solely automated decisions, including profiling, when they significantly affect individuals).
- ensure that company transparency reports include information related to how the content was targeted at users.
- require companies to provide efficient reporting tools, in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be full transparency regarding the complaint and redress mechanisms available and the opportunities for civil society to take action.