Beyond the Code: Why AI in Healthcare is a Human Story

The recent “Beyond Algorithms: Education, Research, and Policy Implications of AI in Healthcare” symposium, hosted by RCSI and supported by AI2MED, delivered a crucial message: the future of AI in healthcare hinges less on the sophistication of the technology and more on the human systems that govern it.

Bringing together clinicians, researchers, educators, and patient advocates, the event intentionally shifted the conversation from technical fascination to a critical examination of education, policy, and real-world implementation.

Dr Liam McCoy’s keynote challenged the audience to move beyond the idea of clinicians as simple “tool-users,” arguing for the stewardship of human–machine systems. While AI offers opportunities in documentation and imaging, McCoy questioned the effectiveness of typical “human-in-the-loop” models, warning of potential automation-induced de-skilling and the displacement of clinical judgement. He suggested that AI be treated as a supervised, entrustable competency that centres human values like accountability, equity, and patient dignity.

Professor Tom Lawton underscored the necessity of clinician empowerment. He warned that without proper understanding and frameworks, clinicians risk becoming “liability sinks” for system failures. Drawing on the MPS Foundation White Paper, he offered three key recommendations:

  • Augmentation, not replacement: AI must support, not supplant, human decision-making.
  • Transparency and co-design: Products need robust liability frameworks and must be designed alongside end users.
  • Challenging the output: Clinicians must be empowered to interrogate, challenge, and override AI outputs, avoiding algorithmic deference.

Case studies highlighted the gap between technical promise and practical reality.

  • Professor Patrick Redmond’s work on AI-powered digital scribes showed promise for earlier cancer detection but revealed challenges like performance volatility, false positives, and legal concerns.
  • Ciaran Malone’s radiology study found that AI segmentation delivered significant task-level time savings, yet these did not translate to overall improvements in the wider workflow, underscoring the importance of understanding human factors and implementation.
  • Professor Donal Sexton’s research explored generative AI for creating hospital discharge summaries, opening the door to more comprehensive, user-friendly advice for patients managing ongoing conditions after they leave hospital.

Professor Ibrahim Habli emphasised that assuring the safety of clinical AI systems goes far beyond technical validation. He advocated for structured safety cases that integrate clinical, technical, and organisational evidence. The scope of AI safety harms includes impacts on professional competence and psychological well-being, demanding a continuous, context-sensitive risk assessment grounded in established patient safety norms.

Dr Kirill Veselkov introduced AIDA, a collaborative initiative leveraging reinforcement learning, graph neural networks, and foundation models to enable early, personalised diagnosis of gastric inflammation and precancerous conditions. AIDA’s integration of large-scale clinical and imaging data with omics for mechanistic insights exemplifies the potential of AI to advance precision medicine. However, Veselkov also emphasised the importance of rigorous validation, fairness, explainability, regulatory approval, and the trust of patients and clinicians.

Dr Laura Brady highlighted the importance of the patient voice in all discussions of AI in healthcare. She shared recommendations from an IPPOSI-convened Citizens’ Jury, which set out the public’s demands for regulation and oversight, informed consent, transparency, and equitable access. She stressed that involving patients and the public from ideation to implementation is the only way to ensure AI aligns with real needs and promotes equity and sustainability.

Dr Dara Cassidy presented an overview of AI2MED, an EU project focused on developing education pathways for AI in healthcare and fostering alliances between healthcare professionals and data scientists. The project’s gap analysis identified the fundamental skills and competencies required for safe and effective AI adoption in healthcare, and work is now underway on educational materials to help healthcare professionals and students acquire them.

The final panel discussion captured key areas of action:

  • Education: Curricula must foster skills in effective human–AI collaboration and ethical oversight.
  • Research: Priority must be given to real-world case studies and multidisciplinary inquiry to fully map the impact and limitations of AI.
  • Policy: Regulatory frameworks must be robust, ensuring safety, accountability, transparency, and equitable access with meaningful public involvement.

The symposium delivered a clear message: the ultimate goal is not simply AI adoption, but stewardship of AI in service of professional agency, patient care, and societal trust.
