‘Take Off Your Clothes, Please — ChatGPT Will See You Now’ - The Messenger
Opinion
THE VIEWS EXPRESSED BY CONTRIBUTORS ARE THEIR OWN AND NOT THE VIEW OF THE MESSENGER

I must admit there is a significant upside to the growing infiltration of artificial intelligence into my world, the world of medicine. So-called machine learning is based on analyses of massive data banks, well beyond what any physician or scientist is capable of amassing. Radiology and dermatology are immediate beneficiaries of these advances and the growing use of AI. A top breast radiologist at my medical center told me recently that the common recommendation upon encountering a slight abnormality on a mammogram — “Repeat the study in six months to look for change” — will soon be replaced by an instant AI diagnostic assessment.

So, will AI soon replace all physicians, beginning with radiologists? The answer is a resounding "No." 

AI, no matter how sophisticated, will always lack clinical judgment. It cannot provide creative solutions, and it “thinks” in a binary manner, which means you must phrase a question in exactly the right way to receive an appropriate answer. And while AI responses may simulate real human emotions, a computer cannot actually feel them. Granted, a recent study in JAMA Network reported that people found that AI (in this case, ChatGPT, an interactive program) not only gave higher-quality answers than a real physician but also exhibited more empathy.

Artificial intelligence increasingly is being used in the medical field. (Yuichiro Chino/Getty)

But we must keep in mind that AI’s empathy isn’t real; it is feigned. The survey is more an indictment of physician burnout than an endorsement of AI. When a ChatGPT bot asks how you are feeling and seems sympathetic to your answer, that is a programmed exchange, not real empathy — a world of make-believe. Only a real doctor can practice the art of medicine; AI never will.

The biggest downside of AI in the doctor’s office is the risk to privacy. The more powerful the technology, the less safe your personal information becomes. Each time you interact with AI, you become part of a mass of identifiable data. Cybersecurity will face a significant challenge in protecting these AI systems from hacking.

Unfortunately, a central question when it comes to diagnosis and treatment for some patients will be whom to listen to: AI or a doctor. What if AI disagrees with my assessment? Whom will my patient listen to? Will I be held liable for AI’s answers? What if AI is wrong, and I lack the wherewithal to see it? And what about the variability from one AI system to another? A finely tuned, highly encrypted AI system in a top medical center’s radiology or dermatology department, reviewing hundreds of scans or skin photographs at once, is far different from ChatGPT or Google Bard, which do not have the same level of health-science sophistication when it comes to information accuracy.

There will be significant growing pains as health care systems attempt to integrate AI in a way that does not intrude on the crucial human element that doctors and nurses provide for patients. No computer or AI robot could ever replicate the nuance of my interactions with a particular patient. It is true that AI will provide me with more information to help personalize my assessments and treatments. On the other hand, if AI gains too much prominence, it could be used to replace a doctor’s judgment, streamlining insurance company and health system approvals and denials. That would undermine personalized medicine — a dangerous direction for AI to take.

In the back of everyone’s mind is the film The Terminator, in which Skynet becomes self-aware and starts a deadly war with its makers that ends up destroying society. Luckily, machine self-awareness is a science fiction concept that I don’t believe will ever become real, let alone part of the health care world.

The far greater real danger is NOT that AI will become a real physician or nurse but that patients, unable to reach their real doctor, may treat it as if it were one. Patients may project expectations onto their AI program and be too willing to accept its answers verbatim. Failing to check with a flesh-and-blood doctor could compromise quality and ultimately undermine patient care at its most basic level.

Dr. Marc Siegel, clinical professor of medicine at New York University’s Langone Medical Center, is the author of numerous books, including “COVID: The Politics of Fear and the Power of Science.” He hosts and is medical director of SiriusXM’s “Doctor Radio” program.
