Do Professionals Have an Ethical Obligation to Use AI?

I recognized that I was likely to concur with Dr. Ashish K. Jha’s assessment of physicians’ use of LLM AI before I read his Washington Post op-ed:

The public is rightly wary about this new technology in health care. Its misuse can have serious consequences for patients, for example, by inappropriately denying care, hallucinating incorrect information or overlooking pertinent patient information. Clear guardrails and direct patient contact with medical professionals is crucial.

Still, for time-pressed doctors, a tool that both confirms judgments and broadens diagnostic thinking can be invaluable. When used properly, it can help combat the tunnel vision that often takes hold in busy clinics and hospitals.

The balance of his op-ed is devoted to his realization that AI makes him a better doctor as a consequence of his “experiment” with it. He goes on to describe three clinical cases in which he used AI and the benefits derived from it as well as its use in pedagogy. He concludes by recommending that future physicians be trained in using AI tools efficiently and effectively.

Not only do I concur with Dr. Jha’s conclusion, I would go one step further. I think that professionals have an ethical obligation to use AI tools prudently, judiciously, and effectively for precisely the reasons Dr. Jha outlines: they make them better.

By definition, a professional is a service provider who works for the public good and adheres to a code of ethics. Modern professional codes of ethics should require professionals to use AI. The AMA has published guidance for the ethical use of AI by physicians. It allows physicians to use AI and discusses issues like oversight, transparency, disclosure, and privacy and security, but it does not quite go far enough: it treats AI as an option.

Given the choice, a professional should actively seek to be better than he or she already is. AI is a tool that can do just that. For physicians these tools can reduce error, broaden differential diagnosis, and mitigate cognitive bias. For those professionals it’s in the same class as evidence-based medicine, imaging, and sterile technique.

5 comments
  • TastyBits

    An AI that uses curated source material would be fine. It would receive the same education as a physician, engineer, attorney, etc., but I would rather not be treated by a doctor trained on fanfiction and Reddit.

    Using fanfiction and Reddit is not necessarily the problem. The problem is not understanding that they are not authoritative sources.

    I suspect that the problem is they would need to pay to use textbooks and published papers.

  • Completely agree, TastyBits. Even better would be an expert system informed by AI using curated sources. It could be a built-in part of the process.

  • steve

    Almost all of my docs, including me, were making liberal use of the internet in some form well before AI existed. Most of them use it now. I see two issues besides the obvious ones like enforcement and hallucinations, which will resolve. First is the long-running issue of getting older and independent practitioners to adopt new tech/ideas. Some of these people just don’t keep up on the literature, and many have decided that they just “know better” and will keep doing what they have always done, leaning heavily on folk medicine, supplements, and older medicines/therapies. These practitioners seem to have a patient base who believe in this stuff, so there would be huge backlash, and the party/admin currently in charge is largely supportive of this stuff, so it isn’t going to happen.

    Second, we really need the AI incorporated into our EMRs and medical systems as much as possible. Some of that has already happened, so you get flagged, e.g., if you order a wrong dose of medicine. (I will note it is annoying when you are ordering outside the normal range because that is appropriate and the computer tries to stop you. I would hope this happens less often with an AI that incorporates knowledge about the patient rather than just checking ranges.) This will help a lot with the older and recalcitrant providers (mostly the same), and we may be able to monitor more effectively when they don’t follow best practices, though we can already do that for the most part.

    As an aside, we need to figure out the best way to do this. Let’s say I do an ultrasound. Do I want the AI to tell me what it thinks right away, or wait until I am done dictating and then remind me of other choices? Probably some combination of both, I think, but there is the risk that people don’t really learn to understand what they are doing and become totally reliant upon the AI. As a final aside, I would note that I think resistance will be especially strong among the mid-levels who practice pretty independently in rural/semi-rural and underserved urban areas. My personal experience is that these providers have a higher incidence of folk medicine practice.

    Steve

  • Some of these people just don’t keep up on the literature, and many have decided that they just “know better” and will keep doing what they have always done, leaning heavily on folk medicine, supplements, and older medicines/therapies.

    That issue didn’t start with the availability of the Internet or AI (I’m not asserting you made that claim, only noting it). I’ve read studies of physicians’ continuing education practices from time to time. One finding has been that, 25 years after completing med school, many physicians’ practices are seriously out of date.

  • steve

    Yes. I tried to make that clear by making it broader with “tech/ideas”. Based on my observations, it’s even worse among non-docs providing care. My network has taken over a number of failing rural hospitals, and it always amazed me how bad some of the care was. Not just out of date but often fraudulent and incompetent. What is not clear to people outside the profession is how resistant so many are to change, even when offered up-to-date info and suggestions on how to improve.

    Reminds me of multiple times when the literature showed what kind of care gave the best outcomes but people refused to follow the guidelines until it was turned into a regulation.

    Steve
