OpenAI transformed how the public thinks about artificial intelligence (AI) a year ago with the launch of its hugely successful chatbot, ChatGPT, which currently has 100mn active users.
But the expulsion and swift reinstatement of Sam Altman as the leader of OpenAI, the AI startup he co-founded, was the culmination of a long-brewing clash of diverging worldviews that was masked by the company’s startling success.
On one side were the commercial ambitions of Altman and OpenAI’s major partner, Microsoft.
On the other were board members who had concerns that AI could one day wipe out humanity.
That worry is one driver of the effective altruism movement, which has become an influential force within Silicon Valley and the AI industry.
Effective altruism is a movement that aims to use research and reasoning to solve the most pressing global problems for the benefit of the maximum number of people. It reflects the ideas of Peter Singer, a moral philosopher and professor of bioethics at Princeton University, who argues that people should spend their resources saving as many lives as possible, especially in parts of the world where a life can be saved for a relatively low cost.
Over the past decade, effective altruism has broadened its mission to include preventing scenarios in which humans could go extinct, such as nuclear war and pandemics.
Also on that list: an AI apocalypse.
The notion spawned the field of AI safety, which aims to ensure that the work of building AI does not lead to disastrous outcomes.
AI safety was embraced as an important cause by big-name Silicon Valley figures who believe in effective altruism, including Peter Thiel, Elon Musk and Sam Bankman-Fried, the founder of crypto exchange FTX, who was convicted in early November of a massive fraud.
Founded with a mission to “ensure that AI benefits all of humanity,” OpenAI was supposed to be a counterweight to the profit-driven efforts within labs of technology giants. Members of OpenAI’s governing board had ties, both past and present, to the effective altruism movement.
The startup began as a nonprofit organisation but added a for-profit subsidiary so that it could raise the vast amounts of money it needed to operate the technology that fuels ChatGPT and DALL-E.
OpenAI attracted billions of dollars from Microsoft and, as of October, pinned its value at $86bn. That led to tension between OpenAI’s commercial ambitions — driven by Altman and Microsoft — and worries from some board members about pushing AI development too fast.
For Microsoft, which holds a roughly 49% stake, OpenAI is the key to its AI strategy.
In a wider sense, OpenAI’s turmoil reflects the broader debate over AI, particularly the schism over the pace of commercialisation. That tension has played out in several other arenas.
Google, for instance, has long advocated a slow and cautious approach to AI, and despite more than a decade of investment and research in the field, it looked like a laggard when OpenAI made a splash with ChatGPT.
Artificial intelligence is under regulatory review and consideration worldwide, with the UK recently hosting an international summit to discuss approaches to its safe adoption. Private companies and governmental bodies are negotiating how to mitigate the potential harms without stifling innovation.
The rampant euphoria over AI, as well as Altman’s dramatic ouster and swift return, has reignited the debate over whether technology is a force for good or for harm.
While sustainable profitability is a prerequisite for any industry to thrive, excessive commercialisation needs to be kept in check so that technology can help make the world a better place.