The Digital Information Environment: The Missing Exposure in Population Health

Screens have become the dominant environment of daily life, and a 2025 perspective in Annals of the New York Academy of Sciences argues that public health has not yet fully addressed what this shift means for population health. In their paper, Charles E. Basch and Corey H. Basch describe a context in which digital communication and artificial intelligence (AI) do not simply influence behaviour, but increasingly structure it, shaping how people learn, work, socialize, seek care, and make health decisions. Their central claim is that the “digital information environment” should be treated as a major exposure domain, comparable in importance to diet, toxins, microbes, and the built environment, because the scale, intensity, and personalization of screen exposure are historically unprecedented and tightly linked to economic incentives designed to maximize attention.

The urgency becomes clearer when considering the trajectory of exposure: smartphone ownership has grown dramatically, phone checking has become highly frequent, and global mobile use continues to expand. Together, these trends imply that a substantial share of human attention is now mediated by platforms engineered for sustained engagement, with particular relevance for “native users” (approximately ages 10 to 25) who have grown up within this ecosystem. In the authors’ framing, this is not accidental drift but the predictable outcome of an attention economy in which attention is monetized and longer exposure time supports productivity and profit. They emphasize that this economy is reinforced by multidisciplinary expertise used to refine design features that make screen use feel automatic and difficult to interrupt, including intermittent-reinforcement mechanisms that act on reward pathways and intensify engagement.

Crucially, the article is not only a warning about technology; it is also an argument about scientific blind spots. Public health has long focused on the social determinants of health (SDOH), such as economic stability, education, health care access, neighbourhood conditions, and social context, recognizing that these factors shape exposure, risk, and outcomes across the life course. What remains insufficiently addressed, the authors contend, is systematic attention to the nature and extent of screen exposure in a hyper-connected, information-saturated world, including how the content people encounter, and the ways it is delivered, can function as determinants of health in their own right. Drawing on a growing empirical literature examining social media and health topics (from infectious diseases to chronic conditions and mental health), they describe consistent patterns with direct implications for prevention and health promotion: widely viewed health communications are often not produced by governmental agencies; the most consumed content tends to be brief, limiting its ability to convey nuance, uncertainty, and trade-offs; and highly viewed communications frequently include stigmatizing material, misinformation, or disinformation that can delay effective care, discourage evidence-based prevention, or amplify harmful practices. They also highlight that the online spread of health-related falsehoods raises complex ethical and legal questions about platform accountability and governance.

Within this environment, AI is presented as a double-edged force. On one side, AI already shows substantial promise for clinical and population health applications, including improved image interpretation for early cancer detection, support for triage and prioritization of imaging, advances in pathology and precision medicine, and enhanced surgical precision through robotics and tailored care pathways. The authors stress that this innovation is still in relatively early phases, suggesting that future applications could improve prevention, early detection, and care delivery at scale. On the other side, the paper emphasizes that algorithmic systems can reinforce inequity or produce discriminatory outcomes across domains that shape health and opportunity, including education, employment, admissions, criminal justice, credit, and insurance. The authors also flag concerns that AI-driven systems may undermine democratic processes, indirectly affecting health through policy, trust, and social cohesion, all of which are closely tied to SDOH.

From these observations, the authors advance a practical proposal: public health needs an equivalent of environmental monitoring for the digital domain. Traditional surveillance systems (national health and nutrition surveys, behavioural risk factor surveillance, vital statistics, and disease registries) have been essential for tracking trends, identifying inequities, and evaluating progress toward prevention goals. Basch and Basch argue that comparable infrastructure is now needed to characterize online exposures: the content and sources of widely viewed health messages, the sentiment and imagery used, engagement metrics that signal reach, the audiences being targeted or captured, and the pathways through which ideas trend and spread. The point is not only to count posts but to understand the exposure patterns that shape beliefs, behaviours, and demand for services, and to use those insights to develop evidence-based counter-narratives when myths or harmful advice dominate attention.
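The paper proposes this surveillance infrastructure at a conceptual level and does not specify a data model. Purely as an illustrative sketch, the Python fragment below shows the kind of per-post record such a system might capture and one simple aggregation over it; every field name, value, and function here is a hypothetical assumption, not something drawn from Basch and Basch:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical schema for one widely viewed health post. The paper does not
# define these fields; they only mirror the exposure dimensions it names
# (content, source, sentiment, engagement, audience, spread).
@dataclass
class HealthPostRecord:
    platform: str                 # e.g. a short-video or social platform
    topic: str                    # health topic, e.g. "HPV vaccination"
    source_type: str              # "government", "clinician", "influencer", ...
    duration_seconds: int         # brevity limits nuance, per the authors
    views: int                    # engagement metric signalling reach
    sentiment: str                # "positive", "negative", or "neutral"
    flagged_misinformation: bool  # outcome of expert content review

def exposure_by_source(records: list[HealthPostRecord]) -> dict[str, int]:
    """Sum views by source type to show which sources command attention."""
    totals: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record.source_type] += record.views
    return dict(totals)

# Invented sample data, solely to make the sketch runnable.
sample = [
    HealthPostRecord("short-video", "HPV vaccination", "influencer",
                     45, 2_100_000, "negative", True),
    HealthPostRecord("short-video", "HPV vaccination", "government",
                     180, 40_000, "neutral", False),
]
print(exposure_by_source(sample))  # {'influencer': 2100000, 'government': 40000}
```

Even this toy aggregation illustrates why the authors want such infrastructure: quantifying reach by source type is what reveals patterns like official agencies commanding only a small share of the attention around a given health topic.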

They also underline that surveillance cannot be treated as a purely technical fix. Monitoring the digital information environment raises ethical concerns about privacy and transparency, as well as the risk that surveillance could disproportionately target particular demographic groups, geographic areas, or political and social communities. Any such system, they argue, must be transparent about what is tracked and why, and carefully governed to avoid reinforcing structural inequities through over-monitoring or biased interpretation.

Finally, the paper looks ahead to workforce implications. AI is expected to disrupt many sectors rapidly, creating new roles and skill requirements while making other tasks obsolete. For health promotion and disease prevention professionals, this implies a dual responsibility: developing the capacity to critique harmful digital architectures and incentive structures, while also learning to deploy AI constructively, for example, tailoring educational content to language, culture, and literacy level, and using video, animation, and interactive formats to reach broader audiences than text-heavy approaches historically allowed. At the same time, younger cohorts, who spend comparatively more time on their phones, may be more susceptible to harms driven by social media dynamics and attention-capture incentives, making prevention work more complex: practitioners must balance innovation with risk mitigation, and individual-level tools with population-level accountability.

In conclusion, the authors frame the present as a window of opportunity. As digital media and AI continue to evolve in ways that attract and sustain attention, population health science must expand its exposure framework to include screen-based environments and build surveillance systems that track what is shaping health beliefs in real time. The longer-term ambition is not simply to reduce misinformation, but to redirect technological capacity toward genuine improvements in health and equity.

Reference: Basch CE, Basch CH. Artificial intelligence, digital media, and population health: Exposure science and social determinants of health. Ann N Y Acad Sci. 2025 Oct;1552(1):5-11. doi: 10.1111/nyas.70020. Epub 2025 Sep 12. PMID: 40938576.
