While artificial intelligence (AI) is set to inject trillions into the global economy and transform sectors like health and education, experts warned that its rapid rise urgently requires legally binding frameworks to protect human rights, particularly privacy.

Dany Wazen, Digital Transformation Specialist with United Nations Development Programme (UNDP) – Lebanon, highlighted AI’s huge economic potential, saying a 2025 UNDP Human Development Report projected that AI could contribute $15.7tn to the global economy by 2030, with some local economies seeing up to a 26% boost in gross domestic product.

He noted that AI and machine learning specialists are already among the top 10 fastest-growing jobs globally, underscoring the urgent need for workforce adaptation, as 60% of workers worldwide will require formal training by 2027 to meet evolving job demands. Wazen, joined by a panel of international experts, was speaking at a session at the International Conference on “Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future” on Tuesday at the Ritz Carlton Hotel, Doha. The event, concluding Wednesday, brought together global leaders and specialists to tackle AI’s complex impact.

The session, titled "Artificial Intelligence: Concept, Capabilities and Governing Values,” was chaired by Dr Stephen Rainbow, High Commissioner for Human Rights in New Zealand, and Yasmine Hamdar, AI Specialist at UNDP – UAE. According to the UNDP report, AI-powered automation tools have significantly reduced administrative workloads in healthcare by up to 70%, freeing professionals to focus more on patient care. It added that this efficiency is expected to translate into substantial savings, with AI potentially generating up to $150bn in annual savings for the US healthcare system alone by 2026.

In the education sector, Wazen cited strong growth projections, with the global AI in education market expected to skyrocket from $5.18bn in 2024 to $112.3bn by 2034.

He underlined AI tools’ critical role in student success, with 71% of teachers and 65% of students deeming them essential for college and work. Wazen cited real-world applications across various sectors, including improved loan granting and fraud detection in finance, personalised interactions in customer service via chatbots, and optimised traffic flow and logistics in transportation.

However, experts also issued stark warnings, cautioning that the optimistic outlook in this digital age was shadowed by the potential for human rights abuses. Prof Alena Douhan, UN Special Rapporteur on the Negative Impact of Unilateral Coercive Measures on the Enjoyment of Human Rights, raised serious concerns that states are misusing references to “malicious cyber activity” to introduce unilateral sanctions, bypassing established legal norms.

“The problem with unilateral sanctions is that states do not qualify it as a crime. They punish people and companies without due process, without the presumption of innocence ...” she said, noting that this contrasts with the recently adopted United Nations Convention against Cybercrime, which, she pointed out, mandates adherence to domestic law and due process.

Prof Douhan also questioned the concept of state sovereignty in cyberspace, recalling that the UN General Assembly has already recognised threats from the malicious use of cyber technologies by terrorist and extremist groups, and urged adherence to the principle, affirmed in UN resolutions on “the right to privacy in the digital age”, that people have the same rights online as offline.

To safeguard privacy in the AI era, Dr Ana Brian Nougreres, UN Special Rapporteur on the Right to Privacy, called for concrete action, saying: “We must move beyond soft ethical principles and adopt a binding legal framework grounded in international human rights law."

She emphasised that AI systems must respect the principles of legality, necessity, and proportionality, stressing that any privacy interference must have a legal basis. “Privacy should not be an afterthought or optional feature; it must be a core part of the architecture of any AI system,” she said. Developers and companies, she added, must be legally mandated to build systems that minimise data collection, restrict access to sensitive information, and ensure privacy settings are enabled by default, free from deceptive design.

“As AI becomes more powerful and pervasive, the stakes for privacy and other fundamental rights grow exponentially... We need concrete and enforceable measures to ensure that AI technologies serve humanity rather than erode the rights and freedoms that define us.

“Let us ensure that innovation does not come at the cost of human dignity. Let us build an AI future grounded in transparency, accountability and respect for human rights,” Dr Nougreres said.