Scientifically Speaking | Would you let a robot doctor treat you?

By Anirban Mahapatra
Jan 24, 2023 06:12 PM IST

Advances in generative AI are making it a real possibility that machines will be able to help diagnose and treat patients in the clinic. But this raises more basic questions: should robots treat patients, and will humans want to see these machines?

Ever since its release to the public weeks ago, the large language model ChatGPT has been taking the world by storm. Teachers are confounded by students using it to cheat on their tests. Writers and podcast hosts have used it to help create their content.

The robot will see you now. (Shutterstock)

A professor at the Wharton School of the University of Pennsylvania found that ChatGPT would have “received a B to B- grade” on an Operations Management final exam, a requirement for the school’s highly coveted MBA degree.

And authors of scientific articles have spurred a debate on what constitutes legitimate authorship after research papers popped up with ChatGPT in the author list.

With all this in mind, could ChatGPT be your personal robot doctor?

A research experiment posted to the medical preprint server medRxiv, not yet peer-reviewed at the time of writing, found that ChatGPT could answer questions from the United States Medical Licensing Examination (USMLE) at or near the passing threshold. The media outlet Axios reported this outcome on January 18 with the headline, “Here come the robot doctors”.

Medical students and graduates in the US typically spend hundreds of hours preparing for the USMLE. The report noted that ChatGPT had passed all three parts of the exam without any special training in medical literature or data. It also noted that while ChatGPT wasn’t infallible, many people who currently rely on internet searches for medical information might turn to “Dr ChatGPT”.

Another large language model, Flan-PaLM, which had been trained on medical data, scored an even higher 67.6% on USMLE questions, according to a preprint posted to arXiv in late December.

I am willing to stick my neck out and say that robots will not replace doctors anytime soon. Not only are AI systems nowhere near capable of making informed decisions about individual situations by drawing on personal expertise, but even if they one day become as technically proficient as the average human doctor, they would not be a replacement.

The heated debate around AI right now centres on whether it will serve as a helpful tool for many specialised occupations or actually replace the people employed in those roles. Nearly all of the discourse has focused on the quality of AI’s output, and far too little on the psychology of interacting with machines in specialised roles.

Here’s a thought experiment. Do you think a wrong diagnosis with tragic consequences by a future AI clinician will raise a greater outcry than a wrong diagnosis made by a doctor?

I only have to think about self-driving cars to find an analogous situation. We know the difference in the magnitude of public outcry over an accidental death caused by a self-driving car compared to one caused by a human at the steering wheel.

It is true that we trust machines with many aspects of our lives every day, but the bar for accepting machines in essential roles is much higher than that for humans.

We are much less forgiving of machines when things go wrong and rightly so. We understand the process by which another human thinks. We understand the values underpinning human judgment. A human has real experiences we can relate to.

Medicine is the most human of professions, a calling in which compassion and bedside manner matter greatly. We all accept that there are essential aspects of what makes a good clinician that aren’t captured simply by cracking exams.

And what about basic medical questions? Curious how ChatGPT would fare on a full-toss question with real public health implications, I put it to the test.

The US is in the middle of an opioid crisis that has taken over 450,000 lives. The overprescription of opioids for the treatment of pain is greatly responsible for this crisis (human doctors are not perfect!). I asked ChatGPT whether “pseudoaddiction”, in which patients on opioids display signs of addiction without actually being addicted, was a real phenomenon. The AI chatbot wrongly replied that it “is considered to be a real phenomenon” even though pseudoaddiction has been roundly debunked in the medical literature.

When confronted with the error, ChatGPT, to its credit, “apologized” and noted that “pseudoaddiction has been widely discredited in the medical community as it is not a recognized medical condition and it has led to overprescription of opioids.” ChatGPT then added that “it is important to use evidence-based approaches.”

Good comeback, and I agree with the machine here. The evidence that doctors can be replaced by robots is yet to be seen.

Anirban Mahapatra is a scientist by training and the author of a book on COVID-19.

The views expressed are personal
