The common man cannot be faulted for being dumbstruck when tech giants such as Google parent Alphabet and ChatGPT maker OpenAI, which are spearheading rapid developments in artificial intelligence (AI), themselves caution the world about the technology’s risks.
It was only a few weeks ago that industry experts and tech leaders warned in an open letter that AI could lead to human extinction and that reducing the risks associated with the technology should be a global priority. Sam Altman, CEO of OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft, were among those who signed the short statement from the Center for AI Safety.
Last Thursday, Alphabet cautioned employees about how they use chatbots, including its own Bard, even as it markets the program around the world, according to a Reuters report. The company has advised employees not to enter its confidential materials into AI chatbots, citing its long-standing policy on safeguarding information.
The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative AI to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers have found that similar AI models can reproduce the data they absorbed during training, creating a leak risk.
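How training data can leak back out is easy to demonstrate at a toy scale. The sketch below uses a deliberately simple character-level Markov model, which works nothing like a production chatbot, but it illustrates the hazard the researchers describe: text absorbed during training, including a secret, can be reproduced verbatim at generation time. The training text and the secret are invented for this example.

    import random
    from collections import defaultdict

    # Toy character-level Markov model -- not how production chatbots
    # work, but enough to show training text being reproduced verbatim.
    training_text = (
        "meeting notes: launch date is internal only. "
        "api_key=SECRET-1234 must never be shared. "
    ) * 20  # repetition makes memorization more likely, as with real models

    ORDER = 8  # number of characters of context

    # "Training": record which character follows each 8-character context.
    model = defaultdict(list)
    for i in range(len(training_text) - ORDER):
        model[training_text[i:i + ORDER]].append(training_text[i + ORDER])

    # "Generation": prompt with a fragment that appeared in training.
    random.seed(0)
    output = "api_key="
    for _ in range(24):
        followers = model.get(output[-ORDER:])
        if not followers:
            break
        output += random.choice(followers)

    print(output)  # "api_key=SECRET-1234 must never b" -- the secret leaks

Real chatbots are vastly larger statistical models, but the principle is the same: whatever enters the training data is, in principle, retrievable.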
Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate. Asked for comment, the company told Reuters that Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.
The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft Corporation are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs.
Google’s caution also reflects what is becoming a security standard for corporations, namely warning personnel about using publicly available chat programs. A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, reportedly has as well.
Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including employees of top US-based companies, conducted by the networking site Fishbowl. By February, Google had told staff testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard in more than 180 countries and 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.
Google told Reuters it has had detailed conversations with Ireland’s Data Protection Commission and is addressing regulators’ questions, after a Politico report on Tuesday that the company was postponing Bard’s EU launch pending more information about the chatbot’s impact on privacy. Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel. A Google privacy notice updated on June 1 also states: “Don’t include confidential or sensitive information in your Bard conversations.”
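In practice, advice like that is increasingly enforced in software rather than left to memory: a thin filter screens each prompt for sensitive patterns before it is allowed to leave the corporate network. The sketch below is a minimal, hypothetical illustration; the patterns and the screen_prompt helper are invented here and do not represent any vendor’s actual product.

    import re

    # Illustrative patterns only; a real deployment would use the
    # organization's own classifiers, data tags and review workflow.
    SENSITIVE_PATTERNS = {
        "credential": re.compile(r"(?i)\b(api[_-]?key|password|token)\s*[:=]\s*\S+"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "internal_label": re.compile(r"(?i)\b(confidential|internal only)\b"),
    }

    def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
        """Return (allowed, reasons); block the prompt if any pattern matches."""
        reasons = [name for name, pattern in SENSITIVE_PATTERNS.items()
                   if pattern.search(prompt)]
        return (not reasons, reasons)

    allowed, reasons = screen_prompt("Summarize this memo: CONFIDENTIAL launch plan")
    if not allowed:
        print(f"Prompt blocked before leaving the network: {reasons}")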
Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally. Google and Microsoft are also offering conversational tools to business customers that come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ conversation history, which users can opt to delete. Given the scenario, it is only logical to expect more warnings about AI.