I found this article in Nature by Mariana Lenharo interesting:
An artificial intelligence (AI) system trained to conduct medical interviews matched or even surpassed human doctors’ performance at conversing with simulated patients and listing possible diagnoses on the basis of the patients’ medical history1.
The chatbot, which is based on a large language model (LLM) developed by Google, was more accurate than board-certified primary-care physicians in diagnosing respiratory and cardiovascular conditions, among others. Compared with human doctors, it managed to acquire a similar amount of information during medical interviews and ranked higher on empathy.
“To our knowledge, this is the first time that a conversational AI system has ever been designed optimally for diagnostic dialogue and taking the clinical history,” says Alan Karthikesalingam, a clinical research scientist at Google Health in London and a co-author of the study1, which was published on 11 January in the arXiv preprint repository. It has not yet been peer reviewed.
I had several reactions. First, this is exactly what I have been saying for decades. If the skills required are rote memorization and using clinical findings to diagnose conditions, a properly designed algorithm will beat human physicians. But the finding that the LLM chatbot had greater empathy is the key one.
Which leads to my second observation. I don’t know whether physicians will fight the encroachment of machines on their turf, but they shouldn’t. The effect of AI on medicine will be to make human physicians more productive and accurate. But that’s not all.
AI in medicine will change how prospective physicians are selected. We won’t need rote memorization and the ability to translate clinical findings into diagnoses as much anymore. Gone will be the days when your doctor is frozen to a computer screen. What will be needed are human physicians who can work in collaboration with these chatbots to achieve better outcomes more quickly than human physicians alone.