The recent panel discussion “Ethics and Regulation in AI Communication in Healthcare” brought together leading voices from artificial intelligence, clinical psychiatry, legal scholarship, and biomedical research to explore how emerging technologies are reshaping communication between patients and healthcare professionals. The discussion featured experts ranging from AI innovators working on large-scale implementation to clinical psychologists and psychiatrists integrating digital tools into mental health care, legal scholars specializing in IT law and data protection, and researchers coordinating national initiatives in responsible AI for health.
Moderated by a scientist with expertise in computational biology and translational medicine, the session offered a balanced perspective on both the opportunities and challenges of AI adoption in healthcare communication. Participants reflected on the ethical dimensions of algorithmic decision-making, the limitations of current AI systems in understanding empathy and context, and the crucial role of regulation in building public trust.
As part of the broader AI2MED event, the panel underscored how collaboration across disciplines spanning technology, law, and medicine is essential to ensure that digital innovation supports, rather than replaces, the human connection at the heart of healthcare.
Human Empathy and Digital Efficiency
At the panel “Ethics and Regulation in AI Communication in Healthcare,” experts discussed the evolving relationship between artificial intelligence and psychiatric practice. One of the panelists noted that “AI is here to stay,” but its morality depends entirely on those who design and use it. While studies suggest that current chatbots may sound “warmer” than physicians, they still cannot recognize when empathy is required. Instead of replacing doctors, AI should relieve them, freeing time and attention for more human connection.
The Limits of Applicability
Speaking from clinical experience, a clinical psychologist emphasized that, despite the hype, few AI tools have been genuinely tested in real-world psychiatric settings. “Self-help apps are not the same as managing complex psychological conditions,” he explained. Diagnostic models often miss the nuances of real patients because they are trained on idealized data. However, he sees clear potential in the automated summarization of clinical notes, a technology that could reduce administrative burden and allow more time for patient interaction.
Innovation and Practical Reality
A healthcare innovator and co-founder of an AI-oriented startup highlighted that current communication-focused AI performs well only when data is neatly structured. “Once you step outside the tables,” he said, “answers become clumsy.” He also questioned the sustainability of centralized technological development and warned that innovation can stagnate if systems don’t reward practical implementation. Smaller teams, he argued, need environments where experimentation is both possible and supported.
Regulation as a Foundation for Trust
From a legal and ethical perspective, a professor from the University of Zagreb’s Faculty of Law reminded participants that the European approach to AI is deliberately cautious. “Europe didn’t decide to be first in technology; it decided to be first in protecting citizens,” he said. Regulation, he emphasized, is not an obstacle but a prerequisite for trust, without which innovation cannot truly enter clinical practice.
Between the Lab and the Ward
Panelists agreed that a significant gap remains between AI developed in laboratories and tools usable in daily clinical practice. Real patients bring unpredictable contexts, and systems trained on “clean” data struggle to interpret real-world complexity. Ethical and regulatory principles such as transparency, fairness, and human oversight must therefore be integral to AI design and deployment in healthcare.
From Control to Collaboration
In closing, the discussion made one point clear: the future of healthcare is not about AI or the doctor, but AI and the doctor. Artificial intelligence can improve communication and efficiency, but only if it is developed collaboratively, with experts who understand both its potential and its limits.