Every coin has two sides, and so does every topic, situation, invention, or innovation under the sun. The term ‘double-edged sword’ has often been used to describe technology. Debate in this regard has regained momentum ever since OpenAI’s ChatGPT, an Artificial Intelligence (AI) bot that can generate human-like responses to users’ questions, made its debut on November 30, 2022, kicking off a frenzy. Microsoft uses OpenAI’s technology in its Bing chatbot, and Google recently launched its competitor, Bard. In March 2023, an open letter from the Future of Life Institute, signed by tech leaders such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, called for a six-month halt to AI research, spurred by concerns about the technology’s flip side.
But it seems the entire issue has transformed into an ego tussle, with OpenAI co-founder and CEO Sam Altman agreeing “with parts of the open letter” while arguing that it was “missing most technical nuance about where we need the pause.” He made the remarks last Thursday in a video appearance at a Massachusetts Institute of Technology (MIT) event on business and AI. “I think moving with caution and an increasing rigour for safety issues is really important,” Altman continued. “The letter I don’t think was the optimal way to address it.”
Musk, Wozniak, and dozens of academics and industry figures have called for an immediate pause on training “experiments” with systems “more powerful than GPT-4,” OpenAI’s flagship large language model (LLM). Over 25,000 people have signed the letter since then. OpenAI’s GPT technology underpins Microsoft’s Bing AI chatbot and has prompted a flurry of AI investment. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.
Altman told the MIT event: “I also agree as capabilities get more and more serious, that the safety bar has got to increase.” Earlier this year, he acknowledged that AI technology made him a “little bit scared.” Questions about safe and ethical AI use have come up at the White House, on Capitol Hill, and in boardrooms across America. Altman also confirmed at the event that the company is not currently training GPT-5, the presumed successor to its GPT-4 language model.
However, just because OpenAI is not working on GPT-5 does not mean it has stopped expanding the capabilities of GPT-4, or, as Altman was keen to stress, that it has stopped weighing the safety implications of such work. “We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter,” he said, making clear his differences with Musk, Wozniak, and the letter’s other signatories.
Altman’s confirmation that OpenAI is not currently developing GPT-5 will be of little consolation to those worried about AI safety. The company is still expanding the potential of GPT-4 (by connecting it to the internet, for example), and others in the industry are building similarly ambitious tools that let AI systems act on behalf of users. Plenty of work is also no doubt being done to optimise GPT-4, and OpenAI may release GPT-4.5 first, as it did GPT-3.5. Even if the world’s governments could somehow enforce a ban on new AI development, it is clear that society already has its hands full with the systems available today.