Privacy in the Age of GenAI: Insights for Digital Health

AI2MED is a European initiative that aims to accelerate the adoption of AI in healthcare, whilst also bridging the skills gap through comprehensive training programmes and collaborative innovation. Such international collaborations are vital: as AI technologies become increasingly embedded in healthcare, concerns around their safe, ethical, and transparent use are gaining urgency. A recent systematic review from researchers at Imperial College London, UK, brings valuable clarity to these concerns. Synthesising data from 179 studies, the review explores how individuals perceive and navigate privacy risks associated with generative AI (genAI), the Internet of Things (IoT), and other emerging digital technologies, offering insights for those working across digital health, data science, and medical informatics.

One of the review’s core findings is that privacy is deeply context-dependent. There is no universal, one-size-fits-all rule for how individuals decide whether to share their data. Instead, users act according to a range of cognitive biases, habits, and contextual cues: willingness to share information varies depending on who is requesting the data, what kind of data is involved, and how it will be used.

Privacy concerns tend to increase when users believe they are sharing more sensitive data, such as financial or medical information; this triggers heightened awareness of data collection and more conservative use of the technology. Concerns are also heightened when systems lack transparency, i.e., when it is unclear who is accessing data or how it is being handled. In such environments, trust can quickly erode, undermining not only the technology in question but digital innovation more broadly.

The study also revisits a well-documented but underexplored phenomenon known as the “privacy paradox”: the gap between what people say about their privacy preferences and how they behave in practice. Many users express strong concerns about the misuse of personal data, yet still engage with platforms or tools that do not align with those concerns. The study’s findings suggest that this behaviour is “not inherently paradoxical”, but instead reflects a lack of understanding of how context shapes the way people use technology.

To address these findings, the authors advocate that technology developers and policymakers foreground contextual issues and adopt approaches such as “privacy-by-design” that address users’ data concerns and behaviours at a more local, granular level. They also recommend using models such as Nissenbaum’s theory of contextual integrity, which considers data type, sender, recipient, subject, and transmission conditions, as a benchmark for privacy, aiding analysis of user behaviour and data practices within real-world scenarios.
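To illustrate how contextual integrity might be operationalised, the minimal sketch below (hypothetical, not drawn from the review; all names, fields, and norms are illustrative assumptions) models an information flow using the five parameters the theory considers and flags flows that match no accepted contextual norm.

```python
from dataclasses import dataclass

# Hypothetical sketch of Nissenbaum's contextual integrity parameters.
# All class and variable names here are illustrative assumptions,
# not an established API or the review authors' implementation.

@dataclass(frozen=True)
class InformationFlow:
    data_type: str                # e.g. "medical record"
    sender: str                   # who transmits the data
    recipient: str                # who receives it
    subject: str                  # whom the data is about
    transmission_principle: str   # conditions of transfer, e.g. "with consent"

# Contextual norms: flows deemed appropriate within a given context.
# In practice these would be derived from user research or policy.
CLINICAL_NORMS = {
    InformationFlow("medical record", "patient", "clinician",
                    "patient", "with consent"),
}

def violates_contextual_integrity(flow: InformationFlow,
                                  norms: set[InformationFlow]) -> bool:
    """A flow violates contextual integrity if it matches no accepted norm."""
    return flow not in norms

# Routing medical data to an advertiser breaches the clinical norm,
# even though the same data type flowing to a clinician would not.
ad_flow = InformationFlow("medical record", "health app", "advertiser",
                          "patient", "without consent")
print(violates_contextual_integrity(ad_flow, CLINICAL_NORMS))  # True
```

The point such a model captures is that the same data type can be appropriate or inappropriate to share depending on the other parameters, mirroring the review’s finding that privacy judgements are context-dependent.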

The review also points to the need for ongoing training and education in digital literacy and data ethics. Building capacity among healthcare professionals, technologists, and students can help them better understand and respond to the privacy challenges posed by complex, evolving digital ecosystems.

This is where initiatives such as AI2MED, which provides toolkits and resources designed to support thoughtful, ethical engagement with AI in healthcare, add critical value. By equipping users with the skills and frameworks to navigate these challenges, AI2MED aims to empower professionals, support responsible innovation, and build public trust in the technologies reshaping our health systems.
