On the topic of healthcare providers increasingly deferring to artificial intelligence-based algorithms, I recommend an article by Lisa Bannon in the June 15th Wall Street Journal. It describes a modern woman-vs.-machine battle over the appropriate care of a hospitalized patient.
In brief, the AI analyzed certain parameters, in this case a rising white blood cell count, and concluded that the patient likely had a life-threatening blood infection known as sepsis. The AI recommended a protocol involving drawing blood cultures, possibly urine cultures, other blood tests, and x-rays. Sepsis is a common complication of severe illness, and catching it early is key to treating a condition with mortality estimated at up to 25%. Rapid action was needed to stave off one of the most common causes of hospital death.
There was just one problem: the AI was wrong. In this case, the attending nurse knew that the elevated white blood cell count was related to the patient’s underlying leukemia. Her judgment was that the patient was stable, that the sepsis evaluation was unnecessary, and that it would expose the patient to the (minor) risks and discomfort of additional blood draws.
And yet she performed the sepsis workup anyway.
I am not writing to rail against technology tools generally, or AI in particular. The replacement of judgment with algorithms…excuse me…“practice standards”…is a process that began over a decade ago. It is a ship that has sailed and is not returning to this shore. And there is a reason for that. The variations in medical practice, the use of ineffective treatments, and the rising costs of healthcare all combined to make practice standards attractive and often effective. They are intended to be a “decision support” tool, meaning that, as with the AI in the article, “The ultimate decision-making authority resides with the human physicians and nurses.”
And that’s where I want to throw my red challenge flag. We all know the AI is not going to get fired for being wrong. It’s far less clear that we can say the same about the nurses. For all of the “use your judgment” talk, the “but you’d better be right” hangs silently in the background. As in the case of the sepsis evaluation, the tendency will be to defer to the machine. As the AI tools grow more complex, that tendency will only increase.
Is that wrong? We all know human judgment is flawed. Doctors are stretched thin and have biases, and nurses can’t be everywhere. The AI never sleeps. Is it really such a bad thing to have it running in the background, constantly improving in pursuit of better, evidence-based healthcare?
I’d argue yes and no. I’m going to make some predictions and suggestions on how we should approach the AI-augmented future of care delivery. Because opting out is not an option.
Humans will increasingly defer to AI.
Much like risk managers, who rarely get fired for the opportunity not taken, clinicians will perceive a “safe haven” in deferring to the bot. We already see it in the WSJ piece: a nurse with decades of experience ended up following the AI recommendation. This isn’t a knock on her. It’s an acknowledgment of a general tendency to defer to the automated authority. And when the risk of being wrong is asymmetric, expect to see more and more deference to the black box. Let him who has never followed a bad GPS route cast the first stone.
Human judgment will suffer over time.
Part of what made the article compelling is the contrast between decades of human experience and the shiny new algorithm. But what happens with a generation trained to trust the AI? A doctor can’t develop clinical judgment without making her own calls. Just as you never really learn your way around a city if you rely on GPS navigation, I believe we will see a decreasing capacity for clinical judgment over time.
AI will displace healthcare workers.
There’s no reason the medical professions should be exempt from a disruption that will cascade through all industries. But it is easy to see that the AIs referenced here are being rushed to the front lines because of staffing shortages, not because they are prepared to handle the nuances of delivering healthcare. As in the article, a nurse can see at a glance that a distressed patient needs more pain medicine. Good luck convincing the medical HAL 9000 that your oxycodone dose needs to be adjusted.
I was reminded while writing this of the legend of John Henry. For those of you who did not have an excellent fourth-grade music program, John Henry was a legendary African American railroad worker and “steel-driving man.” When the railroad brought in its new steam drill, John Henry vowed that it would never beat him. In a contest between the two, he did indeed win, but the stress proved too much for his heart. True to his word, rather than let the steam drill beat him, he “died with a hammer in my hands.”
I don’t think healthcare needs to fight AI to the death, but I do think it needs to adapt rapidly to keep from becoming a faceless enterprise that none of us will enjoy. What should be done? What _can_ be done to make the incorporation of AI into medicine more humane? I have some suggestions:
Clinicians have to be free to use their judgment.
From initial medical training through clinical practice, it must be made clear that the human ultimately makes the call on any healthcare decision. The UC Davis administrators quoted in the WSJ piece are saying the right things, but their employees don’t appear to trust them; in the end, the nurses doubt they will be supported for overriding the AI. Adherence to standards of practice is an increasingly frequent metric by which providers are judged. Measuring how frequently the human agrees with the AI makes the mistake of treating the algorithm as the gold standard, when it has proven nothing of the sort. Until a randomized trial shows that letting the AI make the calls produces better outcomes, it has to be considered an advisor, not the master.
Training must do a better job of incorporating decision support into the curriculum.
My daughter recently completed medical school, and the degree to which I could relate to the stages of her education was depressing. A process that was showing its age decades ago (when I went through it) was far too recognizable. The first two years were largely spent preparing for the glorified trivia test that is USMLE Step 1. I can’t emphasize enough how little memorizing the enzyme deficiencies that lead to the myriad subtypes of porphyria has to do with being a good physician. It’s not that knowledge like this, and countless other examples, is irrelevant; it’s that there is simply too much of it to meaningfully absorb. And, more importantly, this is exactly what computers are good at. Teaching has to evolve with the profession, and teaching students how to incorporate AI tools into their judgment will be essential to growing a generation of physicians who don’t simply check the boxes provided by the AI, but fluidly integrate its output into a clinical assessment.
Rediscover what humans are good at.
Become indispensable. This is my most important point. The metaphorical steam drill isn’t coming. It’s here. Healthcare workers, ideally, will use AI to offload the soul-crushing aspects of charting, coding, and the other human-computer interactions that have become the bulk of care delivery, and focus instead on the interpersonal. Use that time wisely. Practice the soft skill of listening. Get back to looking at the person instead of the screen.
If, like John Henry, I am destined to die with a stethoscope in my hand, this will be the battle in which I do it. A day may come when a bot can put a hand on a shoulder to provide comfort after bad news, when an algorithm with a synthetic voice can spout comforting pieties better than my own stumbling and imperfect words of solace. But I doubt it. Regardless, that day is not this day. This day, whatever the medical system needs, it isn’t less humanity. For a future where your healthcare team uses powerful tools to deliver better and more personal healthcare, rather than simply acting as the moving parts of an unseen and all-knowing intelligence, this day, we fight.
"Hal, open the Pod door!"