I recognized that I was likely to concur with Dr. Ashish K. Jha’s assessment of physicians’ use of LLM AI before I read his Washington Post op-ed:
The public is rightly wary about this new technology in health care. Its misuse can have serious consequences for patients, for example, by inappropriately denying care, hallucinating incorrect information or overlooking pertinent patient information. Clear guardrails and direct patient contact with medical professionals is crucial.
Still, for time-pressed doctors, a tool that both confirms judgments and broadens diagnostic thinking can be invaluable. When used properly, it can help combat the tunnel vision that often takes hold in busy clinics and hospitals.
The balance of his op-ed is devoted to his realization that AI makes him a better doctor as a consequence of his “experiment” with it. He goes on to describe three clinical cases in which he used AI and the benefits derived from it as well as its use in pedagogy. He concludes by recommending that future physicians be trained in using AI tools efficiently and effectively.
Not only do I concur with Dr. Jha’s conclusion, I would go one step further. I think that professionals have an ethical obligation to use AI tools prudently, judiciously, and effectively for precisely the reasons Dr. Jha outlines: these tools make them better.
By definition, a professional is a service provider who works for the public good and adheres to a code of ethics. Modern professional codes of ethics should require professionals to use AI. The AMA has published guidance for the ethical use of AI by physicians. It allows physicians to use AI and discusses issues like oversight, transparency, disclosure, and privacy and security, but it does not go far enough: it treats AI as an option.
Given the choice, a professional should actively seek to be better than he or she already is, and AI is a tool that can do just that. For physicians, these tools can reduce error, broaden differential diagnoses, and mitigate cognitive bias. They belong in the same class as evidence-based medicine, imaging, and sterile technique.

An AI that uses curated source material would be fine. It would receive the same education as a physician, engineer, attorney, etc., but I would rather not be treated by a doctor trained on fanfiction and Reddit.
Using fanfiction and Reddit is not necessarily the problem. The problem is understanding that they are not authoritative sources.
I suspect that the problem is they would need to pay to use textbooks and published papers.
Completely agree, TastyBits. Even better would be an expert system informed by AI using curated sources. It could be a built-in part of the process.