The application of Artificial Intelligence (AI) to healthcare lies at the core of a real paradigm shift, one that brings not only new opportunities and benefits but also complex issues. Some of these concern liability for AI-driven decisions, especially when they harm patients. To face these challenges, it is essential to understand the existing legal frameworks and to take a proactive approach to the development of new ones.
One of the most pressing issues is the clear attribution of responsibility for AI errors that result in patient injury. The parties potentially involved include physicians, healthcare service providers, the AI system’s manufacturer, and even the software developers. Each party’s role and potential liability must be carefully considered based on the specific circumstances of the adverse event.
In Italy, Article 590-sexies of the Penal Code provides a framework for addressing culpable liability for death or personal injury in healthcare. It generally shields physicians from liability if they have adhered to established clinical guidelines and best practices. Applying this standard to AI systems, however, presents a significant challenge: best practices for AI in healthcare are still evolving and often lack the formal codification seen in “traditional” medical practice. This ambiguity creates a legal gray area, potentially increasing the risk of litigation against professionals who use AI. It may also make physicians reluctant to adopt these potentially life-saving technologies if they fear legal exposure for errors outside their direct control. Furthermore, establishing a causal link between a flaw in an AI system and the resulting patient harm can be extremely complex, especially with sophisticated AI systems that operate autonomously and whose behavior can change over time through machine learning. Tracing the precise cause of an error in a deep learning algorithm is not only difficult but sometimes practically impossible. As a result, obtaining compensation for AI-related injuries remains very difficult.
Hospitals and clinics, for their part, are responsible for ensuring that AI systems are used appropriately and that adequate protocols are in place for verification and oversight. They may be held liable under contractual liability, bearing the burden of proving that they took appropriate measures. Physicians, by contrast, typically face claims of extra-contractual liability, which require the patient to prove the causal link between the diagnostic error and the harm suffered.
Under Italian national law, a physician’s use of AI can give rise to professional liability if it can be proved that he or she acted with negligence, imprudence, or incompetence. For example, using an algorithm without proper certification, or software with known technical limitations, could result in criminal liability.
The challenges surrounding criminal liability for AI use in medicine are exacerbated by the fact that many AI algorithms, especially those based on deep learning, operate opaquely, making it difficult to understand how specific diagnoses or treatment recommendations are generated. This “black box” phenomenon makes it extremely challenging for physicians to verify the accuracy of the information provided by AI systems. The resulting lack of transparency raises the concern that physicians might be held responsible for diagnostic errors even when they had no practical way of preventing them.
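To make the “black box” problem more tangible, the sketch below (illustrative only, with entirely synthetic data and hypothetical feature names, not drawn from any system discussed here) uses scikit-learn’s permutation importance to produce a simple post-hoc report of which inputs most influenced an opaque model’s predictions. Such tools do not fully open the black box, but they exemplify the kind of transparency that future legal standards could require:

```python
# Illustrative sketch only: a toy post-hoc explainability report for an
# opaque classifier, via permutation importance. The data, model, and
# feature names are hypothetical placeholders, not a real clinical
# decision-support system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient" data with four hypothetical input features.
feature_names = ["age", "biomarker_a", "biomarker_b", "imaging_score"]
X = rng.normal(size=(500, 4))
# The outcome is driven mostly by biomarker_a and imaging_score.
y = ((X[:, 1] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "opaque" model: its internal reasoning is not directly inspectable.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each input degrade accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)

for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>15}: importance {mean:.3f} +/- {std:.3f}")
```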
In this complex and evolving domain, it is clear that traditional notions of medical malpractice and product liability are no longer sufficient to address the unique challenges posed by the application of AI in healthcare. Meeting these challenges requires a multi-faceted approach involving legal scholars, policymakers, technology developers, and healthcare professionals.
Potential solutions include the development of specific legal standards for AI in healthcare, built on the promotion of transparency and explainability and on a clear allocation of responsibility for AI-related errors. Equally essential is the definition of mechanisms for the ongoing monitoring and evaluation of AI applications in clinical practice. Finally, robust cybersecurity measures are needed to protect patient data and prevent malicious attacks that could compromise the integrity of AI systems.
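As a purely illustrative sketch of what ongoing monitoring and traceability could mean in practice (all function and field names here are hypothetical), the following Python snippet records each AI recommendation together with the model version, a hash of the input, and the clinician’s final decision, so that responsibility can be reconstructed after an adverse event:

```python
# Minimal sketch of an audit record for AI recommendations. All names
# (log_ai_recommendation, model_version, etc.) are hypothetical; a real
# deployment would also need access control and tamper-evident storage.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_recommendation(model_version: str,
                          patient_input: dict,
                          ai_output: str,
                          clinician_decision: str,
                          path: str = "ai_audit.log") -> dict:
    record = {
        # When the recommendation was issued (UTC, ISO 8601).
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Exact model/software version, for traceability to the vendor.
        "model_version": model_version,
        # Hash of the input rather than raw data, to limit patient-data exposure.
        "input_sha256": hashlib.sha256(
            json.dumps(patient_input, sort_keys=True).encode()
        ).hexdigest(),
        "ai_output": ai_output,
        # Whether the physician accepted, modified, or overrode the suggestion.
        "clinician_decision": clinician_decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example usage with fictional data:
log_ai_recommendation(
    model_version="triage-model 2.3.1",
    patient_input={"age": 64, "imaging_score": 0.82},
    ai_output="suspected malignancy, recommend biopsy",
    clinician_decision="accepted",
)
```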
In light of these considerations, it is clear that the future of healthcare depends on the ability of national and international institutions to strike the right balance between innovation and accountability.