When Geoffrey Hinton, one of the so-called godfathers of artificial intelligence (AI), urged governments last Wednesday to step in and make sure that machines do not take control of society, lesser mortals could hardly be faulted for wondering what lies ahead. Hinton made headlines in May when he announced that he had quit Google after a decade of work there in order to speak more freely about the dangers of AI, shortly after the release of ChatGPT captured the world's imagination. The latest warning from the highly respected AI scientist, based at the University of Toronto, came in his address to a packed audience at the Collision tech conference in that Canadian city. The conference brought together more than 30,000 startup founders, investors and tech workers, most of them looking to learn how to ride the AI wave rather than to hear a lecture on its dangers.
“Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away,” Hinton said, adding: “Right now, there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it taking over and maybe you want to be more balanced.” Hinton warned that the risks of AI should be taken seriously, despite critics who believe he is overplaying them. “I think it’s important that people understand that this is not science fiction, this is not just fear mongering,” he insisted. “It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”
Hinton expressed concern that AI would deepen inequality, with the massive productivity gains from its deployment flowing to the rich rather than to workers. The AI pioneer also pointed to the danger of fake news created by ChatGPT-style bots and said he hoped that AI-generated content could be marked in a way similar to how central banks watermark cash. “It’s very important to try, for example, to mark everything that is fake as fake. Whether we can do that technically, I don’t know,” he said. The European Union is considering such a technique in its AI Act, legislation that will set the rules for AI in Europe; the act is currently being negotiated by lawmakers and has drawn flak from parts of the industry.
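The article does not say how such marking might work in practice. Purely as an illustration, the sketch below shows one approach proposed in recent research on watermarking machine-generated text: at each step the generator pseudo-randomly favours a secret-keyed "green" subset of the vocabulary, and a detector holding the same key checks whether an unusually high fraction of tokens falls in those subsets. Everything here is hypothetical and simplified: the toy vocabulary, the key, and the function names are invented for the example, and a trivial sampler stands in for a real language model.

```python
import hashlib
import random

# Hypothetical toy setup: a real system would use the model's own tokenizer and vocabulary.
VOCAB = [f"w{i}" for i in range(1000)]
GREEN_FRACTION = 0.5          # fraction of the vocabulary favoured at each step
SECRET_KEY = "demo-key"       # shared secret between generator and detector


def green_list(prev_token: str) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token and the secret key."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])


def generate_watermarked(length: int = 200) -> list[str]:
    """Stand-in for an AI text generator: sample tokens, always preferring the 'green' subset."""
    rng = random.Random(0)
    tokens = ["<start>"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        tokens.append(rng.choice(sorted(greens)))  # a real model would merely bias its probabilities
    return tokens[1:]


def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens in their green list: about 0.5 for ordinary text, much higher if watermarked."""
    hits = 0
    prev = "<start>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)


if __name__ == "__main__":
    marked = generate_watermarked()
    rng = random.Random(1)
    unmarked = [rng.choice(VOCAB) for _ in range(200)]
    print(f"watermarked text green rate: {green_rate(marked):.2f}")   # 1.00 here, every token forced green
    print(f"ordinary text green rate:    {green_rate(unmarked):.2f}")  # close to 0.50
```

The detector needs only the secret key and the text, not the model itself, which is part of the appeal of such schemes; whether they survive paraphrasing or translation is exactly the open question Hinton alludes to.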
According to an open letter signed last Friday by more than 160 executives at companies ranging from Renault to Meta, the proposed EU AI legislation would jeopardise Europe’s competitiveness and technological sovereignty. EU lawmakers agreed to a set of draft rules in June under which systems like ChatGPT would have to disclose AI-generated content, help distinguish so-called deep-fake images from real ones, and ensure safeguards against illegal content.
Since ChatGPT became popular, several open letters have been issued calling for regulation of AI and warning of the “risk of extinction from AI”. Signatories of previous letters included Elon Musk, OpenAI CEO Sam Altman, Geoffrey Hinton and Yoshua Bengio, the last of whom is another of the three so-called “godfathers of AI”. The third, Yann LeCun, who works at Meta, signed the letter challenging the EU regulations. Other signatories included executives from a diverse set of companies such as Spanish telecom company Cellnex, French software company Mirakl and German investment bank Berenberg.
The letter warned that, under the proposed EU rules, technologies like generative AI would become heavily regulated, and companies developing such systems would face high compliance costs and disproportionate liability risks. Such regulation could lead highly innovative companies to move their activities abroad and investors to withdraw their capital from the development of European AI altogether, it said. In short, the AI conundrum is growing more complex by the day.