A Coin With Many Sides
On the surface, any kind of cognitive erosion in physicians caused by AI use is alarming. It suggests a fundamental disengagement from the task, and perhaps even automation bias: over-reliance on machine systems without realizing you're doing it.
Or does it? "The study data seem to run counter to what we often see," argues Charlotte Blease, PhD, an associate professor at Uppsala University in Sweden and author of Dr. Bot: Why Doctors Can Fail Us―and How AI Could Save Lives. Most research shows doctors are algorithmically averse: they tend to hold their noses at AI outputs and override them, even when the AI is more accurate.
If clinicians aren't defaulting to blind trust, why did performance sag when the AI was removed? One possibility is that attitudes and habits change with sustained exposure. "We may start to see a shift in some domains, where doctors do begin to defer to AI," she says. And that might not be a bad thing. If the technology is consistently better at a narrow technical task, then leaning on it could be desirable. The key, in her view, is finding a judicious sweet spot of critical engagement.
And the social optics can cut the other way. A recent randomized experiment from the Johns Hopkins Carey Business School involving 276 practicing clinicians found that physicians who mainly relied on generative AI for decisions incurred a competence penalty in colleagues' eyes. They were viewed as less capable than peers who didn't use AI, with only partial relief when the AI was framed as a second opinion.