
Google Employees Internally Labelled Bard AI a ‘Pathological Liar’ and ‘Cringe-Worthy’

An internal invitation-only forum shows the roadblocks, open debate and questions that went into Google’s generative AI


Google engineers, product managers and designers who use Bard, the company’s generative AI, have been airing their concerns about the chatbot on a private forum for months. 

Bloomberg reports that the open debate over the AI tool’s usefulness played out in an invitation-only Discord server with about 9,000 members. 

“My rule of thumb is not to trust LLM output unless I can independently verify it,” Dominik Rabiej, a senior product manager for Bard, wrote in the Discord in July, adding that Bard “isn’t there yet.” LLMs, or large language models, are the systems that power AI chatbots like Bard and ChatGPT. 

Cathy Pearl, a user experience lead for Bard, raised another challenge in August, writing: “What are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”

Even earlier, before Bard launched in March, employees tested the tool and internally called it “cringe-worthy” and a “pathological liar.” They raised ethical concerns about Bard giving users dangerous or erroneous information; one employee who asked for advice about landing a plane repeatedly received answers that would lead to a crash. 

Google's Bard AI chatbot. (Smith Collection/Gado/Getty Images)

Even Google CEO Sundar Pichai initially called generative AI a “trade-off.”

“It’s exciting because there are new use cases; people are responding to it,” he told Bloomberg in June. “It’s uncomfortable because it’s inherently generative. There are times it makes up things.”

Google went ahead with Bard and expanded it despite these concerns, integrating the AI chatbot with popular services like Gmail, Docs and Flights, and rolling out five different conversational tones and more than 40 languages for responses. The company also added a “Google It” button for queries in English so users can quickly fact-check responses. 

Google was transparent about Bard’s constraints, adding a disclaimer to the chatbot that states: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” Google said in a statement that testing is "routine and unsurprising."

“Since launching Bard as an experiment, we’ve been eager to hear people’s feedback on what they like and how we can further improve the experience,” Jennifer Rodstrom, a Google spokesperson, told Bloomberg. “Our discussion channel with people who use Discord is one of the many ways we do that.”

Bard isn’t the only chatbot to raise ethical concerns: AI experts have compared the technology as a whole to nuclear weapons, and OpenAI CEO Sam Altman, whose company created ChatGPT, has even called for a new agency to regulate AI.
