Artificial intelligence is often presented as medicine’s next great breakthrough. Algorithms that can read scans, flag potential cancers, or suggest treatment options promise faster diagnoses and fewer errors. Hospitals worldwide are beginning to integrate AI into clinical workflows, and the technology is improving at a remarkable pace. However, alongside this progress, researchers are uncovering something unexpected: the more helpful AI becomes, the easier it is for humans to rely on it excessively.
Welcome to what experts increasingly call the AI assistance paradox.
When the Assistant Becomes the Authority
In theory, AI is designed to support doctors. Decision-support tools analyse data, highlight patterns, and provide suggestions for clinicians to evaluate. In practice, however, the presence of an algorithm can subtly change how people make decisions. Psychologists describe this as automation bias: a tendency to over-rely on automated systems and accept their output even when it is wrong.
Studies show that when people receive recommendations from an algorithm, they often assume the system is objective, precise, and evidence-based. As a result, they may give less weight to their own judgement. In medicine, this dynamic can have serious consequences. One study examining human-AI collaboration found that clinicians sometimes changed a correct diagnosis after seeing incorrect AI advice. The algorithm did not simply add information – it shifted the decision-making process itself. The technology designed to reduce errors can, in certain situations, introduce new ones.
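To see why deferring can hurt overall accuracy, it helps to run the numbers. The toy simulation below is a sketch, not a model of any real study: it assumes a binary diagnosis, a clinician who is right 90% of the time, an AI that is right 80% of the time, and a 70% chance that the clinician abandons their own answer whenever the two disagree. All four numbers are invented for illustration.

```python
import random

random.seed(0)

# Assumed rates, chosen for illustration only.
P_CLINICIAN = 0.90  # clinician's unaided accuracy
P_AI = 0.80         # algorithm's accuracy
P_DEFER = 0.70      # chance the clinician defers to the AI on disagreement

def trial() -> bool:
    """One binary diagnosis: returns True if the final decision is correct."""
    clinician_right = random.random() < P_CLINICIAN
    ai_right = random.random() < P_AI
    if clinician_right == ai_right:
        # Both right or both wrong: they agree, so the outcome is fixed.
        return clinician_right
    # Disagreement: the clinician either defers or holds their ground.
    return ai_right if random.random() < P_DEFER else clinician_right

n = 100_000
accuracy = sum(trial() for _ in range(n)) / n
print(f"Clinician alone: {P_CLINICIAN:.0%}; clinician deferring to AI: {accuracy:.1%}")
```

Under these assumed rates the combined accuracy comes out around 83%, below the clinician's unaided 90%: habitually deferring to a less accurate adviser on exactly the cases where they disagree erases the human advantage.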
Why Our Brains Trust Algorithms
We are predisposed to assume that complex systems know more than we do. When a recommendation comes from something that appears mathematical, data-driven, and impartial, it triggers a powerful cognitive shortcut: the machine must be right. But AI systems are not infallible. They learn from data, and data can contain gaps, biases, or rare cases that models struggle to interpret. The real risk is not that AI occasionally makes mistakes; it is that humans stop questioning those mistakes.
The Hidden Risk: Losing Clinical Intuition
Another concern emerging from research is known as clinical deskilling. If algorithms routinely detect abnormalities or suggest diagnoses, doctors may gradually spend less time practising those skills themselves. Over time, their ability to detect subtle signals without AI assistance may weaken. This pattern has already been observed in other safety-critical industries. Airline pilots, for example, rely heavily on autopilot systems, and several aviation studies have shown that excessive automation can erode manual flying proficiency. Healthcare may face a similar challenge. The paradox is striking: the more capable AI becomes, the more important human expertise becomes.
AI Is Not the Problem – Design Is
None of this means AI should be removed from medicine. Quite the opposite. When used well, AI systems can dramatically improve healthcare. They can analyse thousands of images in minutes, detect patterns invisible to the human eye, and help doctors process massive datasets that would otherwise be impossible to interpret. The key is how these systems are designed and implemented. Researchers increasingly argue that medical AI should not simply provide answers. Instead, it should:
- explain its reasoning,
- highlight uncertainty,
- show supporting evidence,
- encourage clinicians to verify recommendations.
In other words, AI should behave less like an oracle and more like a colleague.
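What would that look like in software? Below is a minimal sketch of a "colleague-style" recommendation object that follows the four principles above. Everything in it is hypothetical: the Recommendation class, its fields, the 0.85 review threshold, and the sample findings are assumptions made for illustration, not the interface of any real clinical system.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A suggestion that invites scrutiny instead of demanding trust."""
    diagnosis: str
    confidence: float   # model's estimated probability, 0.0 to 1.0
    reasoning: str      # plain-language explanation of the main drivers
    evidence: list[str] = field(default_factory=list)  # findings the clinician can check

    def present(self) -> str:
        lines = [
            f"Suggested diagnosis: {self.diagnosis}",
            f"Model confidence: {self.confidence:.0%}",
            f"Why: {self.reasoning}",
            "Supporting evidence:",
            *[f"  - {item}" for item in self.evidence],
        ]
        # Surface uncertainty instead of hiding it behind a single answer;
        # the 0.85 threshold is an arbitrary placeholder.
        if self.confidence < 0.85:
            lines.append("NOTE: confidence is below the review threshold, "
                         "please verify against the source data and patient history.")
        return "\n".join(lines)

# Hypothetical usage; all values are illustrative.
rec = Recommendation(
    diagnosis="Suspected pulmonary nodule, right upper lobe",
    confidence=0.72,
    reasoning="A 9 mm opacity resembles confirmed nodules in the training data.",
    evidence=["CT slice 42: 9 mm opacity", "Prior scan (2023): no opacity at this site"],
)
print(rec.present())
```

The specifics matter less than the shape: the answer always arrives bundled with its confidence, its reasoning, and evidence the clinician can check, and when confidence is low the system explicitly asks for verification rather than asserting a conclusion.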
The Real Future of Medicine
The narrative that AI will replace doctors has always been misleading. The real transformation lies in human-AI collaboration. Doctors bring intuition, contextual understanding, and ethical judgement. AI brings computational speed, pattern recognition, and the ability to process enormous volumes of data.
Together, they can achieve something neither could accomplish alone.
Sources:
Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517–518. https://doi.org/10.1001/jama.2017.7797
European Commission. (2024). Artificial Intelligence Act: Overview of the EU framework for trustworthy AI. https://digital-strategy.ec.europa.eu
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230–243. https://doi.org/10.1136/svn-2017-000101
Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G., & King, D. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17, 195. https://doi.org/10.1186/s12916-019-1426-2
MIT Sloan Management Review Polska. (2024). AI w polskiej medycynie: Lepsza diagnostyka vs. ryzyko utraty kompetencji [AI in Polish medicine: Better diagnostics vs. the risk of losing competence]. https://mitsmr.pl/dane-i-sztuczna-inteligencja/ai-w-polskiej-medycynie-lepsza-diagnostyka-vs-ryzyko-utraty-kompetencji/
Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.

