The Doctor Is In AI

At MedicalXpress there’s a report on an interesting study:

The team randomly sampled 195 exchanges from AskDocs where a verified physician responded to a public question. The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed health care professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT. They compared responses based on information quality and empathy, noting which one they preferred.

The panel of health care professional evaluators preferred ChatGPT responses to physician responses 79% of the time.

“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.

Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: good or very good quality responses were 3.6 times higher for ChatGPT than physicians (physicians 22.1% versus ChatGPT 78.5%). The responses were also more empathic: empathetic or very empathetic responses were 9.8 times higher for ChatGPT than for physicians (physicians 4.6% versus ChatGPT 45.1%).

“I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study co-author, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”

All of which confirms something I’ve been saying for some time.

My prediction is not that computer programs will replace flesh-and-blood physicians but that they will make human physicians more productive. Such programs will also affect which skills are emphasized in the selection and training of physicians. We need physicians who are more human rather than more machine-like. Machines will beat human physicians at being machine-like every time. The challenge will be keeping them from being more human as well.

2 comments
  • CuriousOnlooker

    Fascinating.

    A speculative thought: how much are physician responses driven by training to avoid giving false hope, liability concerns, respecting patient privacy, and so on? I suspect ChatGPT responses don’t have those “filters” on, and its responses could be a lot different once trained for those factors.

    As a sidenote, try the following prompt: “Write a poem with exactly 50 words.” AI still has a long way to go.

  • steve

    Most of us already use the internet a lot. You occasionally use a drug you haven’t used in years, and since I work at a tertiary care center you see one-in-a-million odd syndromes that you may never have read about before. So ChatGPT wouldn’t really add that much right now to what most of us are already doing, but it might be a bit faster, if it is reliable. Once it is reliable it should be a very helpful adjunct if it has good speech recognition and can listen in as you talk with patients. It could write your notes for you, a huge productivity enhancer. It could offer suggestions for tests, studies, and diagnoses. At present EMRs are helpful but at a huge cost. GPT/AI could help.

    On the actual study, I think a lot of emphasis was placed on ChatGPT giving longer answers, which were thought to be more empathetic. I would totally agree that for ChatGPT time won’t be an issue. It can write out very long responses that no human would have time for. So for the 0.5% of interactions I have with patients and family where I need to give long written responses to very specific questions, ChatGPT will outperform me and almost any doc. (Well, maybe not some academics whose work is not time-limited.)

    CO: We all have some training in those areas, but quantity and quality vary a lot. Ability to communicate also varies a lot. I have people for whom I have letters of commendation from family and nursing staff for their compassion and kindness. I also have a couple of guys who suck at the warm and fuzzy stuff but, if your spouse were dying, they would be the ones you would choose to take care of them. It takes a mix of people. If ChatGPT gets translated into a robot, there will also be times when it won’t be able to give long, warm, and tender answers either, since it will need to be bedside with a patient.

    I don’t think it will be that hard to teach it the kind of filters you suggested, but it may be harder to teach it how to manage time if it is ever to function independently in a robot.

    Steve
