I must admit there is a significant upside to the growing infiltration of artificial intelligence into my world, the world of medicine. So-called machine learning draws on analyses of massive data banks, well beyond what any physician or scientist could amass alone. Radiology and dermatology are immediate beneficiaries. A top breast radiologist at my medical center told me recently that a common recommendation when a mammogram shows a slight abnormality ("Repeat the study in six months to look for change") will soon be replaced by an instant AI diagnostic assessment.
So, will AI soon replace all physicians, beginning with radiologists? The answer is a resounding "No."
AI, no matter how sophisticated, will always lack clinical judgment. It cannot provide creative solutions, and it "thinks" in a binary manner, which means you must phrase a question in exactly the right way to receive an appropriate answer. And while AI responses may simulate real human emotions, a computer cannot actually feel them. A recent study in JAMA Internal Medicine reported that evaluators found AI (in this case, ChatGPT, an interactive chatbot) not only gave higher-quality answers to patient questions but also exhibited more empathy than real physicians did.
But we must keep in mind that AI’s empathy isn’t real; it is feigned. The study is less an endorsement of AI than an indictment of physician burnout. When a ChatGPT bot asks how you are feeling and seems sympathetic to your answer, it is a programmed exchange, not real empathy. Only a real doctor can practice the art of medicine; AI never will. It’s a world of make-believe.
The biggest downside of AI in the doctor’s office is the risk to your privacy. The more powerful the technology, the less safe your personal information becomes. Each time you interact with AI, you become part of a mass of identifiable data. Cybersecurity teams will face a significant challenge protecting these systems from hacking.
Unfortunately, a central question for diagnosis and treatment will be whom some patients should listen to: AI or a doctor. What if AI disagrees with my assessment? Whom will my patient listen to? Will I be held liable for AI’s answers? What if AI is wrong, and I lack the wherewithal to see it? And what about the variability from one AI system to another? A finely tuned, highly encrypted AI system in the radiology or dermatology department of a top medical center, reviewing hundreds of scans or skin photographs at once, is far different from ChatGPT or Google Bard, which lack that level of health-science sophistication and accuracy.
There will be significant growing pains as health care systems try to integrate AI in a way that does not intrude on the crucial human element that doctors and nurses provide for patients. No computer or AI robot could ever replace the nuance of my interactions with a particular patient. It is true that AI will give me more information to help personalize my assessments and treatments. But if AI gains too much prominence, it could be used to supplant a doctor’s judgment, streamlining insurance company and health system approvals and denials. That would undermine personalized medicine and would be a dangerous direction for AI to take.
In the back of everyone’s mind is the film The Terminator, in which Skynet becomes self-aware and starts a deadly war with its makers that ends up destroying society. Luckily, machine self-awareness is a science-fiction concept that I don’t believe will ever become real, let alone become part of the health care world.
The far greater and more realistic danger is not that AI will become a real physician or nurse, but that patients, unable to reach their real doctor, will treat it as if it were one. Patients may project expectations onto an AI program and accept its answers uncritically. Failing to check with a flesh-and-blood doctor could compromise quality and ultimately undermine patient care at the primary level.
Dr. Marc Siegel, clinical professor of medicine at New York University’s Langone Medical Center, is the author of numerous books, including “COVID: The Politics of Fear and the Power of Science.” He hosts and is medical director of SiriusXM’s “Doctor Radio” program.