Review: UN Human Rights Office highlights the risks posed by increased use of AI systems


The UN Human Rights Office recently published the ‘Right to Privacy in the Digital Age’ report, which analyses how AI affects people’s right to privacy and other rights. It notes the risks posed by the increased use of such systems by both government and private agencies. The report also makes some important recommendations for agencies using AI.

The operation of Artificial Intelligence (AI) systems has the potential to facilitate and deepen privacy intrusions and other interference with rights in a variety of ways. These include new applications as well as features of AI systems that expand, intensify, or incentivize interference with the right to privacy, most notably through the increased collection and use of personal data. In this article, we look at the reports by the United Nations Human Rights Office that give an overview of the international legal framework, highlight aspects of AI that facilitate interference with privacy, and provide a set of recommendations for States and businesses to address these challenges.

On 15 September 2021, the UN Human Rights Office published a report titled ‘Right to Privacy in the Digital Age’ that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The 2021 report looks at how states and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged, and analysed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out of date, or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in yet unknown ways. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition.

In a nutshell, the Report pushes for data privacy frameworks that account for the new threats linked to the use of AI. Considering the diversity of AI applications, systems and uses, regulation should be specific enough to address sector-specific issues and to tailor responses to the risks involved.

Legal framework for the right to privacy rooted in International Human Rights Conventions

Article 12 of the Universal Declaration of Human Rights, Article 17 of the International Covenant on Civil and Political Rights and several other international and regional human rights instruments recognize the right to privacy as a fundamental human right. As the UN Human Rights Office highlights, the right to privacy plays a pivotal role in the balance of power between the State and the individual and is a foundational right for a democratic society. The right to privacy applies to everyone. Differences in its protection on the basis of race, colour, sex, language, religion, political or other opinions, national or social origin, property, birth, or other status are inconsistent with the principle of non-discrimination laid down in articles 2 (1) and 3 of the International Covenant on Civil and Political Rights.

The report emphasises that any interference with the right to privacy must not be arbitrary or unlawful. The term “unlawful” means that States may interfere with the right to privacy only on the basis of law and in accordance with that law. The law itself must comply with the provisions, aims and objectives of the International Covenant on Civil and Political Rights and must specify in detail the precise circumstances in which such interference is permissible. The report also emphasises that these protections should meet the minimum standards identified in the UN High Commissioner’s earlier reports on the right to privacy.

These earlier reports provide a clear and universal framework for the promotion and protection of the right to privacy, including in the context of domestic and extraterritorial surveillance, the interception of digital communications and the collection of personal data.

Reported violations by advanced surveillance technologies continue due to lack of adequate national legislation

The Right to Privacy report also highlights that practices in many States have revealed a lack of adequate national legislation and/or enforcement, weak procedural safeguards, and ineffective oversight, all of which have contributed to a lack of accountability for arbitrary or unlawful interference with the right to privacy.

This can be exemplified by the recent Pegasus data leak allegations, which surfaced through a consortium of media organizations around the globe. The reporting suggests widespread and continuing abuse of the software to hack the phones and computers of people conducting legitimate journalistic activities, monitoring human rights, or expressing dissent or political opposition, although the manufacturer insists that the software is intended for use only against criminals and terrorists. According to the consortium’s reporting, the Pegasus malware infects electronic devices, enabling operators of the tool to obtain messages, photos and emails, record calls, and even activate microphones. The leak allegedly contains a list of more than 50,000 phone numbers reportedly belonging to people identified as persons of interest by clients of the company behind Pegasus, including some governments.

The Report notes that courts at the national and regional levels are engaged in examining the legality of electronic surveillance policies and measures.

In this context, India offers a case in point: the Supreme Court of India is likely to set up a technical committee to probe the allegations of snooping on journalists, activists and others using the Pegasus spyware. The controversy erupted in India on 18 July 2021, after 40 Indian journalists, along with political leaders such as Rahul Gandhi, election strategist Prashant Kishore and former Election Commissioner of India Ashok Lavasa, were reported to be on the list of targets.

Opacity and challenges of artificial intelligence systems

The reports state that AI systems typically rely on large data sets, often including personal data. This incentivizes widespread data collection, storage, and processing, and many businesses optimize their services to collect as much data as possible. Data collection happens in intimate, private, and public spaces, and data brokers acquire, merge, analyse and share personal data with countless recipients. The report highlights that these data transactions are largely shielded from public scrutiny and only marginally inhibited by existing legal frameworks.

Apart from exposing people’s private lives to companies and States, these data sets make individuals vulnerable in several other ways. Data breaches have repeatedly exposed the sensitive information of millions of people. The report argues that arrangements enabling government agencies to have direct access to such data sets held by businesses, for example, increase the likelihood of arbitrary or unlawful interference with the right to privacy of the individuals concerned. Moreover, the decision-making processes of many AI systems are opaque. The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as the intentional secrecy of government and private actors, undermines meaningful ways for the public to understand the effects of AI systems on human rights and society. This is often referred to as the “black box” problem: the opacity makes it challenging to meaningfully scrutinize an AI system and can be an obstacle to effective accountability in cases where AI systems cause harm.

In the Indian context, similar arguments regarding accountability came up when the Delhi police, with the help of an automated facial recognition system (AFRS), tracked down suspects involved in the 2020 Delhi riots. Earlier, the Delhi police had also used this technology to compare the details of people involved in violence during the anti-Citizenship (Amendment) Act protests with a data bank of more than two lakh ‘anti-social elements’, according to news reports. The same reports indicate that 16 different facial recognition technology (FRT) systems are currently in active use by various Central and State governments across India for surveillance, security, or identity authentication, and that another 17 are in the process of being installed by different government departments.

While FRT systems have seen rapid deployment by multiple government departments in recent times, there are no specific laws or guidelines regulating the use of this potentially invasive technology.

Recommendations on comprehensive due diligence, transparency, and effective remedies

States are increasingly integrating AI systems into law enforcement, national security, criminal justice, and border management systems. The Report argues that effective protection of the right to privacy and interlinked rights depends on the legal, regulatory, and institutional frameworks established by States.

Broadly, the reports suggest that States and businesses should ensure that comprehensive human rights due diligence is conducted when AI systems are acquired, developed, deployed, and operated, as well as before large data sets about individuals are shared or used, and that such processes are adequately resourced and led. States may also require or otherwise incentivize companies to conduct comprehensive human rights due diligence.

The aim of human rights due diligence processes is to identify, assess, prevent, and mitigate adverse impacts on human rights that an entity may cause, contribute to, or be directly linked to. Meaningful consultations should be carried out with potentially affected rights holders and civil society, and experts with interdisciplinary skills should be involved in impact assessments, including in the development and evaluation of mitigations. The results of human rights impact assessments, the actions taken to address human rights risks, and the outcomes of public consultations should themselves be made public. Moreover, the report notes that situations where there is a close nexus between a State and a technology company require dedicated attention.

In essence, the report makes the following recommendations to States:

  • Fully recognize the need to protect and reinforce all human rights in the development, use and governance of AI as a central objective.
  • Expressly ban AI applications that cannot be operated in compliance with international human rights law, unless and until adequate safeguards to protect human rights are in place.
  • Impose a moratorium on the use of remote biometric recognition technologies in public spaces, at least until the authorities responsible can demonstrate compliance with privacy and data protection standards and the absence of significant accuracy issues and discriminatory impacts.
  • Adopt and effectively enforce, through independent, impartial authorities, data privacy legislation for the public and private sectors as an essential prerequisite for the protection of the right to privacy in the context of AI.
  • Ensure that victims of human rights violations and abuses linked to the use of AI systems have access to effective remedies.
  • Ensure that public-private partnerships in the provision and use of AI technologies are transparent and subject to independent human rights oversight, and do not result in the abdication of government accountability for human rights.

It recommends the following to States and business enterprises:

  • Systematically conduct human rights due diligence throughout the life cycle of the AI systems they design, develop, deploy, sell, obtain, or operate. A key element of their human rights due diligence should be regular, comprehensive human rights impact assessments.
  • Dramatically increase the transparency of their use of AI, including by adequately informing the public and affected individuals and enabling independent and external auditing of automated systems. 
  • Ensure the participation of all relevant stakeholders in decisions on the development, deployment and use of AI, in particular affected individuals and groups.

Featured Image: Right to Privacy in the Digital Age

About Author

Aprajita is driven by her ardent interest in a wide array of unrelated subjects - from public policy to folk music to existential humour. As part of her interdisciplinary education, she has engaged with theoretical ideas as well as field-based practices. By working with government agencies and non-profit organisations on governance and community development projects, she has lived and learned in different parts of the country, and aspires to do the same for the rest of her life.
