Google employees reportedly begged it not to release 'pathological liar' AI chatbot Bard

(Image credit: Getty Images, Yuichiro Chino)

According to internal documents reviewed by Bloomberg, several Google employees raised concerns that its Bard AI chatbot was not ready for its March release, citing low-quality, incorrect, and potentially dangerous answers. Employees reportedly described Bard as a "pathological liar" and its answers as "cringe-worthy."

Bard is Google's answer to OpenAI's ChatGPT. CEO Sundar Pichai said it "combines the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models." But Bloomberg reports that Google rushed Bard out the door in order to compete with ChatGPT in what the company allegedly called a competitive "code red."

According to Bloomberg's report, an employee posted a message to an internal group viewed by 7,000 employees that read, "Bard is worse than useless: please do not launch." Right before the launch, Google's AI governance lead, Jen Gennai, reportedly overruled a risk evaluation from her own team, which had concluded that Bard's answers could be harmful.

Bloomberg reports a couple of examples: suggestions for landing a plane that would have resulted in a crash, and scuba diving instructions that could have resulted in "serious injury or death."

Meredith Whittaker, a former manager at Google, told Bloomberg that "AI ethics has taken a back seat" at the company.

Google Bard says not to rely on its advice for "important decisions." (Image credit: Future)

ChatGPT has its own issues with the truth and with properly sourcing the information it scrapes for answers. Currently, Google refers to Bard as an "experiment" and, in Bloomberg's words, maintains that "responsible AI is a top priority" for the company.

As an experiment, I asked Google Bard if its advice is potentially dangerous, and it replied with the following response:

"It is possible for my advice to be dangerous, especially if it is about health or other sensitive topics. I am still under development, and I am not always able to distinguish between good and bad advice."

It also told me not to rely on its advice for "important decisions" and that it does "not have the same level of understanding and knowledge as a human being." 

Jorge is a hardware writer from the enchanted lands of New Jersey. When he's not filling the office with the smell of Pop-Tarts, he's reviewing all sorts of gaming hardware, from laptops with the latest mobile GPUs to gaming chairs with built-in back massagers. He's been covering games and tech for over ten years and has written for Dualshockers, WCCFtech, Tom's Guide, and a bunch of other places on the world wide web. 
