As many as 84% of Middle East organisations consider artificial intelligence (AI) a top priority, while emerging risks underscore the urgent need for responsible AI (RAI), according to a joint study by Boston Consulting Group (BCG) and MIT Sloan Management Review.
The study found that 38% of Middle East organisations believe they are prepared for AI regulations, and that 25% of entities in the region are RAI leaders while 75% are lagging.
“The AI landscape in the Middle East, both from a technological and a regulatory perspective, has changed dramatically,” said Elias Baltassis, Partner and Director, BCG X.
The rapid adoption of generative AI tools has brought AI to the forefront of conversations in the region, he said, adding that the fundamentals of responsible AI nonetheless remain crucial.
“This year, our research emphasises the pressing need for Middle Eastern organisations to invest in and scale their RAI programmes to address the growing uses and risks of AI in a region that is increasingly embracing digital transformation,” he said.
In the Middle East, the components of RAI programmes encompass broad principles (43%), policies (49%), governance (76%), monitoring (49%), tools and implementation (51%), and change management (43%).
Individual considerations within these RAI programmes include transparency and explainability (62%), social and environmental impact (59%), accountability (57%), fairness (54%), safety, security, and human wellbeing (68%), and data security and privacy (86%).
With 75% of Middle Eastern organisations found to be RAI laggards, the study said there is an urgent need for most organisations in the region to double down on their RAI efforts.
The data suggests that organisations in the region can experience a range of benefits from RAI, including better products/services (43%), brand differentiation (27%), increased customer retention (43%), improved long-term profitability (30%), accelerated innovation (41%), and improved recruiting and retention (16%).
The vast majority (78%) of organisations surveyed globally are highly reliant on third-party AI tools, exposing them to a host of risks, including reputational damage, loss of customer trust, financial loss, regulatory penalties, compliance challenges, and litigation. Yet one-fifth of organisations using third-party AI tools fail to evaluate those risks at all.
The study said only 38% of organisations feel adequately prepared for AI regulations, highlighting the need for more awareness and preparedness. The regulatory landscape is evolving almost as rapidly as AI itself, with many new AI-specific regulations taking effect on a rolling basis.
It highlighted that chief executive officers (CEOs) play a key role in both affirming an organisation’s commitment to AI and sustaining the necessary investments in it.
“Organisations with a CEO who takes a hands-on role in RAI efforts (such as by engaging in RAI-related hiring decisions or product-level discussions, or setting performance targets tied to RAI) report 58% more business benefits than do organisations with a less hands-on CEO, regardless of their leader status,” it said.