AI regulatory frameworks must be based on a comprehensive understanding of systemic racism and grounded in the international human rights framework, Ashwini K P, UN Special Rapporteur on Contemporary Forms of Racism, urged on Tuesday.
“There is a dire need to establish mechanisms to enable individuals and groups who are affected by AI-driven systemic racism and racial discrimination,” she told a session on ‘The Power of Artificial Intelligence and Human Rights: Risks and Opportunities’ at the International Conference on ‘Artificial Intelligence and Human Rights: Opportunities, Risks and Visions for a Better Future’.
The UN official noted that the risks of AI are multifarious. “The rise of artificial intelligence systems and machine learning algorithms has led to the digitisation of data on a massive scale. Algorithms use that data to make decisions and engage in actions across several sectors. However, the data sets on which algorithms are trained are often incomplete or under-represent certain groups of people, including along racial and ethnic lines, resulting in algorithmic bias,” she said.

“Technology is never neutral. It reflects the values and interests of those who influence its design and use, and is fundamentally shaped by the same structures of inequality that operate in society,” she explained, adding that AI produces discriminatory outcomes in critical areas such as employment, law enforcement, healthcare, and education because AI systems learn from historical data, which often contains embedded societal biases.
Turning to the risks posed by AI, Ashwini said the spread of misinformation has had a serious impact on the social fabric of society. “The ability of AI to produce incredibly lifelike text, pictures, and videos has presented information integrity with previously unheard-of difficulties. AI-generated media and deepfakes can be used as weapons to sway public opinion, pose as people or interfere with democratic processes,” she said.
The UN official noted that AI’s benefits are not equitably distributed, and that marginalised communities often lack the infrastructure, digital literacy or localised tools to engage meaningfully with AI technologies. “Technological space has always been occupied by the privileged, resulting in the digital divide. This digital divide reinforces existing social disparities and limits participation in an increasingly automated world,” she said.
Ashwini, who underlined the need for proactive safeguards against the threats that AI poses, said governments must enact enforceable laws that centre human dignity and prohibit discriminatory uses of AI. “Developers should ensure that AI decisions are explainable. Affected individuals must have access to remedies, including appeals and human oversight. AI systems must incorporate data minimisation, encryption and purpose limitation to safeguard user privacy. Systems with unacceptable risks must be banned.”
Chaired by Mohammad Alnsour, OHCHR MENA Section Chief - Geneva, the session was attended by the rapporteur, Nicole Chaaya, Civil Society and Technical Co-operation Unit, OHCHR Syria Office; Reem Alsalem, UN Special Rapporteur on Violence Against Women and Girls; Matthew Hervey, AI and IP expert and Head of Legal and Policy at Human Native AI - UK; Abdel Basset Ben Hassen, Chair of the Board, Arab Institute for Human Rights - Tunis; and Azin Tadjdini, Human Rights Officer, Office of the High Commissioner for Human Rights - Geneva.