Artificial intelligence (AI) is rapidly reshaping how societies prevent, detect, and treat infectious diseases. Its promise lies in connecting three domains that typically operate in parallel: clinical care at the patient’s bedside, public health at the population level, and research that drives discovery. A recent evidence-based framework published by Prof Anna Odone and colleagues in The Lancet Infectious Diseases1 maps this landscape by following the entire lifecycle of AI solutions: from data collection and standardisation, to model development and technical validation, to prospective clinical evaluation, to real-world implementation and monitoring, and finally to the scalability and policy decisions that determine equitable access. Framing the problem this way clarifies not only where AI adds value today, but also the obstacles that still hinder its translation into better outcomes for patients and communities.
The starting point is data. AI for infectious diseases draws on various streams linked to the classic epidemiological triad of pathogen, host, and environment: genomic and antimicrobial resistance profiles from laboratories, clinical and biomarker data from electronic health records, mobility and behavioural signals from digital platforms, and environmental inputs such as weather, vectors, and wastewater surveillance. When these sources are well-curated, timely, and interoperable, they enable models that reason across multiple scales, connecting what happens within a cell or ward to what occurs across a city or continent. Conversely, fragmented, isolated, or biased datasets compromise generalisability, exacerbate inequalities, and undermine trust. Addressing data quality, standardisation, privacy, and representativeness is therefore crucial for any meaningful implementation of AI in infectious disease control.
In clinical practice, AI is now supporting decisions, from diagnosis to management. Computer vision systems can triage images, highlight subtle patterns, and predict severity; during COVID-19, deep learning models such as open-source X-ray classifiers emerged alongside prognostic tools that combined routine laboratory analyses with clinical markers. In the laboratory, machine learning accelerates organism identification and resistance profiling, including predicting antimicrobial resistance directly from MALDI-TOF spectra or integrating clinical context with culture histories to suggest narrower choices. On the ward, reinforcement learning approaches have shown that when clinicians’ actions align with model-recommended sepsis treatments, mortality can decrease, and language models are beginning to simplify documentation and patient communication. Patient-facing tools, ranging from chatbots that encourage compliance to remote monitoring that flags deterioration, extend personalised support beyond the clinic, especially where staffing is scarce. The clinical promise is faster and more accurate diagnosis, more appropriate therapy, and more effective care, as well as better utilisation of scarce human and organisational resources, provided that the models are prospectively validated and securely integrated into workflows with transparency and oversight.
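To make the idea of learning resistance phenotypes from spectral data concrete, here is a deliberately minimal, illustrative sketch: a nearest-centroid classifier over invented "peak intensity" vectors. All data, labels, and feature dimensions below are synthetic assumptions; real MALDI-TOF pipelines involve peak alignment, calibration, and far richer models than this.

```python
# Illustrative sketch only: classifying isolates as susceptible or
# resistant from synthetic mass-spectral peak intensities using a
# nearest-centroid rule. Not a real AMR-prediction pipeline.

def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(spectrum, centroids):
    """Return the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Synthetic training spectra: three peak intensities per isolate.
susceptible = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.85, 0.15, 0.2]]
resistant   = [[0.2, 0.9, 0.8], [0.1, 0.8, 0.9], [0.15, 0.85, 0.8]]

centroids = {
    "susceptible": centroid(susceptible),
    "resistant": centroid(resistant),
}

print(predict([0.82, 0.2, 0.15], centroids))  # → susceptible
print(predict([0.1, 0.9, 0.85], centroids))   # → resistant
```

The point of the sketch is the workflow, not the model: labelled spectra in, a decision rule out, with the same validation questions (representativeness, drift, prospective performance) raised in the text applying at every step.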
In public health, AI is improving traditional surveillance by increasing speed, automation, and scope. Natural language processing helps extract structured information from free-text medical records and open-source reports; early warning platforms can identify anomalous trends, while hybrid approaches combine case reports with mobility, weather, or social media signals to improve nowcasting and outbreak identification. Wastewater monitoring, combined with machine learning, has estimated community infection rates with a high correlation to clinical cases, demonstrating how environmental data can enhance situational awareness. Within healthcare facilities, computer vision and ambient intelligence systems monitor compliance with hand hygiene or contact precautions and offer real-time suggestions that improve safety without replacing human supervision. Chatbots and public-facing digital coaches have been tested to support vaccination decisions and behaviour change, with mixed but promising results that underscore the need for careful design, evaluation, and governance.
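The wastewater claim above rests on a simple statistical check: how strongly an environmental signal tracks reported cases. The following sketch computes a Pearson correlation between an invented weekly viral-load series and invented case counts; the numbers are assumptions for illustration, not real surveillance data.

```python
# Illustrative sketch: correlating a weekly wastewater viral-load
# signal with reported clinical cases. All values are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

wastewater = [1.2, 1.8, 3.5, 5.1, 4.0, 2.2]  # viral copies, arbitrary units
cases      = [14, 25, 48, 70, 55, 30]        # reported cases per week

r = pearson(wastewater, cases)
print(f"correlation with clinical cases: r = {r:.2f}")
```

In practice such signals lead clinical reporting by days, so real nowcasting models add lagged features and uncertainty estimates rather than a single correlation.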
On the discovery front, AI is opening new perspectives in pathogen characterisation and anti-infective drug development. Protein structure prediction using neural network and language model-based approaches is expanding phylogenetics and revealing previously unknown viral glycoproteins, helping to anticipate evolutionary trajectories and identify therapeutic targets. Generalisable frameworks that combine deep learning-based fitness predictions with biophysics have shown excellent performance in predicting variation among different viruses, while interactome-based models are clarifying host-pathogen relationships that determine complications and long-term outcomes. In drug and vaccine research and development, graph neural networks, generative models, and attention-based architectures are accelerating de novo molecule generation, compound screening, and epitope classification; recent studies report the explainable discovery of new classes of antibiotics active against resistant bacteria, AI-generated SARS-CoV-2 protease inhibitors prioritised in silico, and the deep learning identification of antimicrobial peptides, some derived from ancient DNA, with efficacy in preclinical models. Computational vaccinology platforms are operationalising these advances for researchers by classifying antigens and simplifying post-prediction analysis.
Despite this momentum, the field’s maturity remains uneven. Much of the published work still focuses on retrospective technical performance, with limited prospective validation and very few randomised evaluations. Regulatory pathways for AI tools are evolving, but approved tools often lack robust evidence of clinical effectiveness. Healthcare adoption of AI is further limited by low AI literacy among professionals, integration burdens, the opacity of “black box” models, and the need for continuous post-implementation monitoring as data, practices, and pathogens evolve. Ethical risks (such as misinformation, automation bias, the amplification of existing inequalities, and unclear lines of responsibility) require careful governance throughout the product lifecycle. International disparities in digital infrastructure and data-access rules mean that, without deliberate action, AI could widen rather than narrow global healthcare gaps.
However, the path forward is clear. Infectious disease specialists, microbiologists, epidemiologists, and public health workers should be co-designers of AI systems, not passive recipients. This means defining clinically relevant objectives; curating diverse, high-quality datasets; selecting transparent methods where possible; stress-testing models prospectively and in real-world settings; and establishing audit and update plans that address data, concept, and label drift over time. It also means strengthening training so that teams can interpret model results, understand their limitations, and preserve the human values (honesty, social responsibility, equity, and respect for rights) that must underpin any technology used in healthcare and prevention. In this way, AI becomes a practical tool: it streamlines laboratory and imaging workflows, refines antimicrobial stewardship, enriches surveillance with new signals, and accelerates discovery, all while helping healthcare systems provide safer, more sustainable, and more equitable services. The long-term impact on infectious disease outcomes will depend less on algorithmic novelty and more on rigorous validation, thoughtful implementation, and inclusive governance that ensures benefits are shared across settings and populations.
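The "audit and update plans" for drift mentioned above can be made concrete with a standard monitoring statistic. The sketch below computes a Population Stability Index (PSI) between a model’s reference feature distribution and newly observed data; the bin fractions are invented, and the 0.2 alert threshold is a common rule of thumb, not a standard.

```python
# Illustrative sketch of post-deployment data-drift monitoring using
# the Population Stability Index (PSI). All distributions are synthetic.

import math

def psi(ref_fracs, cur_fracs, eps=1e-6):
    """PSI = sum over bins of (cur - ref) * ln(cur / ref)."""
    total = 0.0
    for r, c in zip(ref_fracs, cur_fracs):
        r, c = max(r, eps), max(c, eps)  # guard against empty bins
        total += (c - r) * math.log(c / r)
    return total

# Binned fractions of, say, a biomarker at training time vs. recent intake.
reference     = [0.25, 0.35, 0.25, 0.15]
current_ok    = [0.24, 0.36, 0.24, 0.16]  # little change
current_drift = [0.05, 0.15, 0.30, 0.50]  # distribution has shifted

for name, cur in [("stable", current_ok), ("drifted", current_drift)]:
    score = psi(reference, cur)
    flag = "ALERT" if score > 0.2 else "ok"  # 0.2: common rule-of-thumb cutoff
    print(f"{name}: PSI = {score:.3f} ({flag})")
```

A scheduled check like this catches data drift (input distributions shifting); concept and label drift additionally require tracking outcomes against predictions, which is why the text pairs monitoring with planned model updates.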
References. 1Odone A, Barbati C, Amadasi S, Schultz T, Resnik DB. Artificial intelligence and infectious diseases: an evidence-driven conceptual framework for research, public health, and clinical practice. Lancet Infect Dis. Published online September 16, 2025. doi:10.1016/S1473-3099(25)00412-8

