The Challenge to the Regulatory Framework

This article at The Atlantic opens a discussion of what I think is a critical issue for the use of artificial intelligence in health care:

At a large technology conference in Toronto this fall, Anna Goldenberg, a star in the field of computer science and genetics, described how artificial intelligence is revolutionizing medicine. Algorithms based on the AI principle of machine learning now can outperform dermatologists at recognizing skin cancers in blemish photos. They can beat cardiologists in detecting arrhythmias in EKGs. In Goldenberg’s own lab, algorithms can be used to identify hitherto obscure subcategories of adult-onset brain cancer, estimate the survival rates of breast-cancer patients, and reduce unnecessary thyroid surgeries.

It was a stunning taste of what’s to come. According to McKinsey Global Institute, large tech companies poured as much as $30 billion into AI in 2016, with another $9 billion going into AI start-ups. Many people already are familiar with how machine learning—the process by which computers automatically refine an analytical model as new data comes in, teasing out new trends and linkages to optimize predictive power—allows Facebook to recognize the faces of friends and relatives, and Google to know where you want to eat lunch. These are useful features—but pale in comparison to the new ways in which machine learning will change health care in coming years.

which is this: how can our present regulatory framework deal with machine learning? The topic is particularly thorny in that even the designers of the algorithms can’t tell you why the program reaches the conclusions it does (see the sketch below).

It’s also why I think the likely scenario is that not just the United States but all countries with large health care systems and sophisticated regulatory frameworks will become technological backwaters in health care. When the choice is between a computer program and nothing, the computer program is going to look much more attractive.
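
To make that opacity point concrete, here is a minimal sketch, assuming scikit-learn and synthetic data (this is illustrative, not a real diagnostic model): even with complete access to a trained network, its “reasoning” is nothing but arrays of fitted weights.

    # Minimal sketch of the opacity problem. Assumes scikit-learn;
    # the data is synthetic, not clinical.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Stand-in for labeled clinical data.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                        random_state=0).fit(X, y)

    # The complete "explanation" of any prediction is these weight
    # matrices; nothing in them tells a regulator why a given case
    # was flagged.
    for i, w in enumerate(clf.coefs_):
        print(f"layer {i} weights: shape {w.shape}")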

6 comments
  • bob sykes

    If you cannot explain how an AI device works to produce an answer, can you patent/copyright it?

  • Gustopher

    The topic is particularly thorny in that even the designers of the algorithms can’t tell you why the program reaches the conclusions it does.

    Machine learning isn’t a mysterious process akin to human learning at all: it is relatively straightforward to record what the model is doing and what factors are causing changes in it. This information can then be reviewed by subject experts to make sure that it makes sense, or used as the basis for scientific papers (a sketch of that kind of inspection follows this comment).

    This is just responsible programming.

    Further, you have access to the training data and any ongoing feedback loops.

    And, then when we discover that the strongest indicator for a mole being precancerous is whether the person is wearing tacky jewelry, subject experts can make a determination about whether it’s likely to be a coincidence, and either change the training data or research why the body reacts so strongly to tacky jewelry.
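
    A minimal sketch of the kind of inspection Gustopher describes, assuming scikit-learn; the permutation-importance report below runs on synthetic stand-in data, not real lesion records.

        # Record which inputs actually drive a model's predictions so
        # that subject experts can review them for spurious signals
        # (the "tacky jewelry" problem). Assumes scikit-learn; the
        # data is synthetic.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=1000, n_features=8,
                                   n_informative=4, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Held-out permutation importance: how much does shuffling
        # each feature degrade accuracy?
        result = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
        for i in result.importances_mean.argsort()[::-1]:
            print(f"feature {i}: importance {result.importances_mean[i]:.3f}")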

  • CuriousOnlooker

    Since these AI algorithms are software, they can be sold to consumers directly to run on their tablets and phones, as long as they’re marketed for demonstration or educational use. And ordinary people can request their imaging records from their own providers.

    For use in a medical context, the easier part is demonstrating that AI can beat humans in a double-blind trial (a kind of Turing test). The hard part is proving why the AI formula is better than, for example, the ABCD rule for detecting melanoma. I suspect the FDA currently requires both.
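
    To illustrate the contrast CuriousOnlooker draws: an explicit clinical rule is a short, auditable formula. Below is the ABCD dermoscopy score as it is commonly published; the weights and thresholds should be verified against clinical references, and the example inputs are made up.

        # The ABCD dermoscopy score is a fixed formula a reviewer can
        # read line by line, unlike a learned model's millions of
        # fitted parameters. Weights/thresholds as commonly cited;
        # verify against clinical references before any real use.
        def abcd_score(asymmetry, border, colors, structures):
            """Total Dermoscopy Score: asymmetry 0-2, border 0-8,
            color count 1-6, differential structures 1-5."""
            return (1.3 * asymmetry + 0.1 * border
                    + 0.5 * colors + 0.5 * structures)

        # Hypothetical lesion scores, for illustration only.
        tds = abcd_score(asymmetry=2, border=5, colors=4, structures=3)
        print(f"TDS = {tds:.2f}:",
              "suspicious" if tds > 4.75 else "likely benign")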

  • Andy

    This interests me, as I’ve had melanoma and undergo routine dermatology checkups. In this area I think AI would be helpful, but not some kind of groundbreaking advance. At most it would provide more fidelity in decisions on whether or not to biopsy. Dermatologists, at least in my experience, tend toward “better safe than sorry” and biopsy anything that looks strange. And if the patient wants something biopsied, they probably comply most of the time.
    I have a hard time believing they will tell patients that they aren’t going to biopsy because some computer/AI told them not to.

    The biggest problem with melanoma remains that people don’t catch it until it’s too late, and the reason is that they aren’t aware of their bodies (or, in my case, it was on my lower back and I couldn’t see it; it was my wife who noticed it) or are ignorant of melanoma warning signs and don’t get things checked out. That’s not something AI can fix.

  • walt moffett

    Don’t see where this will be a problem for the current FDA/BigWhatever. The laws and regulators can be quite flexible when needed. Then let’s throw in liability issues, the ever-present caveat to “correlate with clinical findings,” and the legal belief that it’s not official until an MD signs off. We could wind up with, say, a $1,000 FDA-approved device whose only approved user is a master’s-level trained specialist working under the direct supervision of an MSN-PA, with a follow-up exam by a board-certified dermatologist.

    Alternatively, this winds up as part of, say, a subscription-based service for a specific cell phone, where no specific claim of diagnosis is made.

  • steve

    You have a GIGO (garbage in, garbage out) problem with AI. Looking at lesions like melanomas is one situation where that problem is avoided.

    Steve
