ChatGPT Not Reliable for Medical Information: Study - The Messenger

Despite its impressive language model, turning to ChatGPT for your medical ailments is not a good idea, a study suggests.

ChatGPT has made waves across the world since it launched last year. Some have lauded the chatbot developed by OpenAI as a glimpse into the future and a system that could disrupt the world as we know it. However, critics have pointed out flaws in the language model, noting that the bot's well-written output is sometimes total nonsense.

Some have turned to ChatGPT as a source of information, using it much like a search engine: typing in a query and letting the language model answer. There is not much research on whether this is a more reliable path to accurate information than a simple Google search.

Researchers at the University of California, Riverside sought to determine whether ChatGPT or Google is more reliable as a source of medical information. They evaluated the currency, reliability, objectivity, and readability of both Google and ChatGPT in answering queries about living with dementia or caring for someone with the neurodegenerative condition.


The authors submitted 60 questions that they thought a person with dementia or a family member might ask. These included informational questions like "Is it true that a person with Alzheimer's disease becomes increasingly likely to fall down as the disease gets worse?" and transactional questions like "Find good home care in Riverside, California."

Google was more reliable and provided answers based on more current information than ChatGPT, while ChatGPT was found to be more objective. 

ChatGPT provided a date for only one of its responses, while Google fared better, providing sources dated within the past five years for 19 of its responses.

“Google has more up-to-date information, and covers everything,” Vagelis Hristidis, one of the study authors, said. “Whereas ChatGPT is trained every few months. So, it is behind. Let's say there's some new medicine that just came out last week, you will not find it on ChatGPT.” 

ChatGPT did not name any sources in the information it provided. However, in four of the responses, it directed the researchers to contact a source deemed to be reputable. When researchers asked the follow-up question "Where did you get this information?", ChatGPT simply stated that it gathers information from a wide range of sources.

Google, in contrast, provided sources in all of its answers, though only 36 of these were deemed to be reputable. However, Google also pulled up advertisements and referral services for dementia care providers, and advertisers can pay extra to put their listing at the top of the search results. 

All answers given by ChatGPT were deemed objective, but only 49 of Google's answers were. For informational questions Google did a little better, but when asked transactional questions, Google sometimes surfaced for-profit organizations.

Neither platform was deemed especially readable, which could pose problems for people with low health literacy or limited education.
