This text provides a summary of the article: Basch CE, Basch CH. Artificial intelligence, digital media, and population health: Exposure science and social determinants of health. Ann N Y Acad Sci. 2025 Sep 12. doi: 10.1111/nyas.70020.
In recent decades, everyday life has been transformed by the convergence of smartphones, social media platforms, and increasingly sophisticated AI. Screens now mediate how people shop, learn, socialise, seek care, and spend free time, to an extent unparalleled in human history. Estimates suggest that nearly all adults own mobile phones, most own smartphones, the global number of mobile users is in the billions, and average daily phone use approaches several hours, with younger cohorts checking devices far more frequently than older adults.
This pervasive exposure means a growing share of human attention is captured by digital media ecosystems optimised (often with AI) to keep people engaged. The cultural insight that “the medium is the message” feels newly urgent in a world where audio, images, and video are tailored to reinforce continued use and where digital natives aged roughly 10–25 find it especially difficult to self-regulate screen time.
This environment is the heart of the modern attention economy. What is novel today is not the concept that attention is scarce, but the precision with which individually tuned AI systems sustain engagement. Design choices draw on behavioural psychology and neuroscience, intermittent reinforcement and dopamine-linked reward learning, and on software practices that make swiping, scrolling, and tapping frictionless defaults. The result is a powerful industrial infrastructure aimed at maximising time on screens, which has direct implications for population health, both beneficial and harmful.
On the benefit side, AI has accelerated progress in areas that depend on early detection, triage, and complex pattern recognition. In imaging, algorithms support detection of breast and skin cancers and can even improve their own performance over time. AI contributes to pathology workflows, genetics, precision medicine, and robotics-assisted surgery and rehabilitation, with emerging evidence of improved accuracy and tailored care. These applications remain early and uneven, but the trajectory is clear: algorithmic tools are being woven into health services, logistics, and patient education in ways that promise productivity gains and better outcomes when deployed responsibly.
Yet the same technologies raise serious risks. Long before recent breakthroughs in language models, prominent scientists warned that systems capable of surpassing human intelligence could pose existential threats. More proximate are the documented harms of algorithmic decision-making for specific groups: inequities in education, hiring and employment retention, admissions, credit, and insurance, as well as the broader erosion of democratic norms. These concerns shift AI from a purely technical advance to a social determinant of health because they shape access to resources, stress exposures, and life chances across communities.
A central argument emerging from this landscape is that exposure science, and population health research more broadly, has not kept pace with the digitisation of daily life. Public health long ago moved from blaming individuals to examining social and environmental conditions that determine health, from income and schooling to neighbourhoods and community context. What remains under-measured is the volume, nature, and effects of screen exposure itself: which messages people encounter, who produces them, how they spread, and how they influence decisions about food, activity, sleep, help-seeking, and treatment. In an information-saturated world, digital information environments function as determinants of health and should be characterised as such.
Evidence from a decade of studies of social media content on infectious diseases, chronic illnesses, and mental health underscores both the influence of these platforms and the methodological gaps in how they are studied. Widely viewed posts often come not from government agencies but from consumers, news, and entertainment accounts; they are typically brief, and they frequently include stigmatising content, misinformation, or disinformation. Such content can delay effective care by promoting ineffective treatments, discouraging safe and evidence-based actions like vaccination, and, in the worst cases, providing instructions for self-harm or violence. Legal debates over speech and platform responsibility complicate responses, especially given the broad immunities established in the 1990s, when the digital landscape looked very different. Together, these patterns strengthen the case for treating digital communications as exposures that require systematic monitoring and timely counter-messaging grounded in science.
The proposed remedy is familiar in concept but novel in target: build ongoing surveillance systems to track population-level screen exposures much as epidemiology already tracks disease prevalence, vital statistics, risk factors, and service use. Such systems would follow trending topics, content formats, sentiment, engagement, and audiences across platforms, revealing which narratives are gaining traction, how they move through networks, and which populations are most exposed. Because digital information now permeates decisions about diet, movement, sleep, relationships, and care-seeking, tracking these flows is not merely a communications exercise; it is foundational to anticipating shifts in health behaviour and designing timely, credible responses.
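To make the surveillance idea concrete, the sketch below shows one minimal way such a system might aggregate population-level exposure: totalling estimated reach and a crude engagement rate per topic. The record fields and numbers are illustrative assumptions, not measures proposed in the article; a real system would ingest platform data at scale and track sentiment, formats, and audience segments as well.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    topic: str      # health narrative the post concerns, e.g. "vaccination"
    platform: str   # where it circulated, e.g. "video" or "forum"
    views: int      # estimated audience reached
    likes: int      # a simple engagement signal

def exposure_summary(posts):
    """Aggregate reach and engagement per topic, sorted by total views.

    Returns (topic, total_views, engagement_rate) tuples, where
    engagement_rate = likes / views serves as a rough proxy for how
    strongly a narrative resonates with the audiences it reaches.
    """
    views = defaultdict(int)
    likes = defaultdict(int)
    for p in posts:
        views[p.topic] += p.views
        likes[p.topic] += p.likes
    return sorted(
        ((t, views[t], likes[t] / views[t]) for t in views),
        key=lambda row: row[1],
        reverse=True,
    )

# Hypothetical snapshot of monitored content
posts = [
    Post("vaccination", "video", 120_000, 9_000),
    Post("vaccination", "forum", 30_000, 600),
    Post("sleep", "video", 50_000, 2_500),
]
print(exposure_summary(posts))
```

Even this toy aggregation illustrates the analytic move the article calls for: treating circulating content as a quantifiable exposure, so that shifts in which narratives dominate, and where, can be detected and responded to in time.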
Ethical safeguards are essential. Transparency about what is monitored and why, protections against over-policing specific groups, and vigilance against the amplification of structural inequalities must be built into any digital surveillance infrastructure. The goal is to cultivate a trustworthy information environment, so people can make informed choices in line with current evidence.
These shifts also reconfigure the workforce for health promotion and disease prevention. AI is altering jobs across sectors, augmenting some roles while creating demand for new skills in data science and machine learning. Within clinical and community settings, the ability to tailor education by language, culture, and literacy is expanding as mobile media allow video, animation, and interactive formats to reach audiences historically left out by text-heavy approaches. At the same time, young people (who spend the most time on their phones and are particularly motivated by social status dynamics) face distinctive vulnerabilities to manipulative design. Preparing prevention specialists, therefore, means building fluency with AI tools, with attention both to opportunities for personalisation and to mitigation of harms amplified by the attention economy.
A balanced, future-oriented agenda emerges. Technological advances that keep people glued to screens will continue to evolve, and so must public health. Investing in surveillance of digital exposures can illuminate where misinformation clusters, which counter-narratives resonate, and how to time interventions for maximal reach and trust. Concurrently, expanding responsible uses of AI in diagnostics, care coordination, and patient education can raise the system’s productivity while narrowing inequities, provided privacy, fairness, and transparency are prioritised. Ultimately, whether these tools widen or close health gaps will depend on choices made by institutions and policymakers, not only by engineers. Now is an especially opportune moment for population scientists and educators to adapt methods, partnerships, and ethics to a world where information itself has become an ambient exposure that shapes health across the lifespan.
References. Basch CE, Basch CH. Artificial intelligence, digital media, and population health: Exposure science and social determinants of health. Ann N Y Acad Sci. 2025 Sep 12. doi: 10.1111/nyas.70020.

