Google Employees Internally Labelled Bard AI a ‘Pathological Liar’ and ‘Cringe-Worthy’
An internal invitation-only forum shows the roadblocks, open debate and questions that went into Google’s generative AI
Google engineers, product managers and designers who use Bard, the company’s generative AI, have been airing their concerns about the chatbot on a private forum for months.
There were layers of open debate about the AI tool’s usefulness, Bloomberg reports, that happened in an invitation-only Discord server with about 9,000 members.
“My rule of thumb is not to trust LLM output unless I can independently verify it,” Dominik Rabiej, a senior product manager for Bard, wrote in the Discord in July, adding that Bard “isn’t there yet.” LLMs, or large language models, are the technology underlying AI chatbots like Bard and ChatGPT.
Cathy Pearl, a user experience lead for Bard, brought up another challenge in August, writing “what are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”
Even earlier, before Bard launched in March, employees tested the tool and internally called it “cringe-worthy” and a “pathological liar.” They raised ethical concerns about Bard giving dangerous or erroneous information to users; one employee asked for advice about landing a plane and repeatedly received answers that would lead to a crash.
Even Google CEO Sundar Pichai initially called generative AI a “trade-off.”
“It’s exciting because there are new use cases; people are responding to it,” he told Bloomberg in June. “It’s uncomfortable because it’s inherently generative. There are times it makes up things.”
Google went ahead with Bard and expanded it despite these concerns, integrating the AI chatbot with popular services like Gmail, Docs and Flights, and rolling out five different conversational tones and more than 40 languages for responses. The company also added a “Google It” button for queries in English so users can quickly fact-check responses.
Google was transparent about Bard’s limitations, adding a disclaimer to the chatbot that states: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” Google said in a statement that such testing is “routine and unsurprising.”
“Since launching Bard as an experiment, we’ve been eager to hear people’s feedback on what they like and how we can further improve the experience,” Jennifer Rodstrom, a Google spokesperson, told Bloomberg. “Our discussion channel with people who use Discord is one of the many ways we do that.”
Bard isn’t the only chatbot to raise ethical concerns: AI experts have compared the technology as a whole to nuclear weapons, and Sam Altman, CEO of ChatGPT-maker OpenAI, has even called for a new agency to regulate AI.