The unstoppable rise of artificial intelligence is bringing about upheavals in the tech world and becoming a driving force across the global economy.
The AI frenzy got a fillip when OpenAI transformed how the public think about AI with the launch of its hugely successful chatbot, ChatGPT.
But the complex, rapidly evolving field of AI raises legal, national security and civil rights concerns that can’t be ignored.
Already at work in products as diverse as toothbrushes and drones, systems based on AI have the potential to revolutionise industries from healthcare to logistics. But replacing human judgment with machine learning carries risks.
Even if the ultimate worry — fast-learning AI systems going rogue and trying to destroy humanity — remains in the realm of fiction, there already are concerns that bots doing the work of people can spread misinformation, amplify bias, corrupt the integrity of tests and violate people’s privacy.
Alphabet’s Google, Microsoft, IBM and OpenAI have encouraged US lawmakers to implement federal oversight of AI, which they say is necessary to guarantee safety.
In the US, President Joe Biden’s executive order on AI sets standards on security and privacy protections and builds on voluntary commitments adopted by more than a dozen companies.
Among bills proposed so far, one would prohibit the US government from using an automated system to launch a nuclear weapon without human input; another would require that AI-generated images in political ads be clearly labelled.
At least 25 US states considered AI-related legislation in 2023, and 15 passed laws or resolutions, according to the National Conference of State Legislatures.
The European Parliament has passed a bill setting up the most comprehensive regulation of AI in the Western world.
The legislation would ban the use of AI for detecting emotions in workplaces and schools, as well as limit how it can be used in high-stakes situations like sorting job applications.
It would also place the first restrictions on generative AI tools, which captured the world’s attention last year with the popularity of ChatGPT.
Endorsed by an overwhelming majority in the European Parliament, the new act aims to protect human rights by assigning obligations to AI systems based on their potential risks and levels of impact, without hindering innovation in the field.
In China, a set of 24 government-issued guidelines took effect on August 15, targeting generative AI services, such as ChatGPT, that create images, videos, text and other content.
Under those guidelines, AI-generated content must be properly labelled and respect rules on data privacy and intellectual property.
Leading technology companies including Amazon.com, Alphabet, IBM and Salesforce pledged to follow the Biden administration’s voluntary transparency and security standards, including putting new AI products through internal and external tests before their release.
In September, Congress summoned tech tycoons including Elon Musk and Bill Gates to advise on its efforts to create a regulatory regime.
One concern for companies is the degree to which US rules could apply to the developers of AI products, not just to users of them.
Since American tech companies and specialised American-made microchips are at the forefront of AI innovation, US leaders wield particular sway over how the field is overseen.
Big Tech, despite its global reach, has long faced a widening trust deficit among both users and regulators.
Now, the euphoria over AI has reignited the debate over whether technology is a force for good or for harm.
AI is the subject of regulatory reviews worldwide, with companies and governments negotiating how to mitigate potential harms without stifling innovation.