In its submission to the UN report on AI and the right to privacy, Privacy International highlights concerns about facial recognition technology and argues for effective laws accompanied by safeguards to ensure AI applications comply with human rights.
This statement was originally published on privacyinternational.org on 9 June 2021.
Privacy International submitted its input to the forthcoming report by the UN High Commissioner for Human Rights (OHCHR) on the right to privacy and artificial intelligence (AI).
In our submission we identify key concerns about AI applications and the right to privacy. In particular, we highlight concerns about facial recognition technologies and the use of AI for social media monitoring (SOCMINT). We document sectors where the use of AI applications has negatively affected the most vulnerable groups in society, such as the use of AI in welfare and in immigration and border control.
The briefing also argues for the adoption of adequate and effective laws accompanied by safeguards to ensure AI applications comply with human rights.
KEY ADVOCACY POINTS
In our briefing, PI suggests that the forthcoming UN report on the right to privacy and AI include the following aspects:
- Reassert that any interference with the right to privacy due to the use of AI technologies should be subject to the overarching principles of legality, necessity and proportionality.
- Establish the need for a human rights-based approach to all AI applications and describe the necessary measures to achieve it (including human rights by design and human rights impact assessments).
- Identify the human rights risks of specific AI applications, due to the technologies employed and/or the context of their use; and describe the circumstances when AI applications should be banned because of human rights concerns.
- Encourage states to adopt or review effective data protection legislation and sectoral laws to address the negative human rights implications of AI applications – at individual, group and society level.
- Note that states have a responsibility to respect and protect human rights from threats arising from the use of AI technologies. On the one hand, state regulation can shape how the private sector develops and applies AI systems and technologies. On the other hand, states have a responsibility to ensure that public sector uses of AI – particularly in health care, welfare, migration, policing, and surveillance – are responsible.
- Define the scope of responsibility of non-state actors, including companies and international organisations, for AI uses, and the need for mechanisms to ensure that they are held accountable.
PI submission to OHCHR Report on AI and Privacy 2021 final.pdf