NEJM Ahead of Print: Latest New England Journal of Medicine Research

Artificial intelligence is currently being hailed as the most significant leap in medical science since the discovery of antibiotics. From predicting cardiac events before they happen to identifying malignant tumors with superhuman precision, the promise of AI is a world of “precision medicine” where care is tailored to the individual. However, as a physician and journalist, I see a shadow trailing this progress: the risk that these tools will not heal the world, but instead widen the existing chasm between the privileged and the marginalized.

The digital divide in health care AI is not merely a matter of who owns the latest smartphone or which hospital can afford a high-end server. It is a systemic failure involving data representation, infrastructure, and digital literacy. If the algorithms driving the future of medicine are trained on data from wealthy, urban populations in the Global North, they may prove ineffective—or even dangerous—when applied to patients in rural clinics or underserved communities globally.

To ensure that AI serves as a bridge rather than a barrier, we must move beyond the technical excitement and address the structural inequalities that determine who benefits from innovation. This requires a fundamental shift in how we collect medical data, deploy technology, and educate both providers and patients. The goal is not just “digital health,” but digital health equity.

The Architecture of Inequality: Algorithmic Bias and Data Gaps

At its core, AI is a mirror; it reflects the data it is fed. In healthcare, this creates a profound risk of “encoded inequality.” Most of the large-scale datasets used to train diagnostic AI are sourced from academic medical centers in high-income countries. This means the “norm” the AI learns is based on a specific demographic—often white, urban, and relatively affluent.

When these models are deployed in diverse settings, the results can be skewed. For example, AI tools designed to detect skin cancer have historically struggled with darker skin tones because the training images were predominantly of fair-skinned patients. This is not a failure of the code, but a failure of the data. When the training set lacks diversity, the AI develops a blind spot, leading to lower diagnostic accuracy for the very populations that often have the least access to specialized dermatological care.
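The blind spot described above can be made measurable with a simple subgroup audit: compute a model's accuracy separately for each patient group and flag large gaps. The sketch below uses hypothetical toy data and group labels purely for illustration; it is not any specific clinical tool.

```python
# Minimal subgroup-accuracy audit: compare a model's accuracy across
# patient groups to surface the kind of blind spot described above.
# Groups, predictions, and ground truth below are illustrative toy data.

def subgroup_accuracy(groups, y_true, y_pred):
    """Return {group: accuracy} computed over parallel lists."""
    totals, correct = {}, {}
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + int(t == p)
    return {g: correct[g] / totals[g] for g in totals}

def accuracy_gap(acc_by_group):
    """Largest pairwise accuracy difference -- a crude bias signal."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

# Toy example: a classifier that performs worse on group "B".
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

acc = subgroup_accuracy(groups, y_true, y_pred)
print(acc)                # per-group accuracy: A = 1.0, B = 0.5
print(accuracy_gap(acc))  # 0.5 -- a gap this large should block deployment
```

A real audit would use clinically validated labels, per-group sensitivity and specificity rather than raw accuracy, and confidence intervals; the point here is only that disparity is quantifiable before a tool ever reaches the clinic.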

This phenomenon extends to genomic medicine and cardiology. If the genetic markers used to train a predictive AI are based on populations of European descent, the tool may fail to predict disease risk accurately for people of African, Asian, or Indigenous ancestry. The World Health Organization (WHO) has repeatedly warned that without inclusive data governance, AI could exacerbate health disparities rather than reduce them.

Infrastructure and the “Digital Desert”

Even when an AI tool is clinically validated for all populations, the physical means of delivering that care remain unevenly distributed. We are seeing the rise of “digital deserts”—regions where the lack of high-speed internet and stable electricity renders advanced AI tools useless.


Modern AI diagnostics often require cloud computing and massive bandwidth to process high-resolution images or real-time patient monitoring data. In many rural areas of the United States, sub-Saharan Africa, and Southeast Asia, the “last mile” of connectivity is missing. A rural clinic without reliable broadband cannot utilize a cloud-based AI tool for early sepsis detection, even if that tool could save hundreds of lives in their community.


Then there is the issue of hardware. AI-driven healthcare often relies on wearable devices or sophisticated sensors to feed data into the algorithm. The cost of these devices creates a financial barrier, ensuring that "data-rich" patients receive proactive, AI-enhanced care while the "data-poor" are left with traditional, reactive medicine. According to data from the International Telecommunication Union (ITU), billions of people still lack meaningful access to the internet, a gap that translates directly into a gap in healthcare quality as AI becomes the standard of care.

The Literacy Barrier: Trust and Understanding

Possessing the technology is only half the battle; the other half is the ability to use it effectively. Digital health literacy—the ability to seek, find, understand, and appraise health information from electronic sources—is unevenly distributed across socioeconomic lines.

For a patient, the digital divide manifests as a lack of trust or an inability to navigate an AI-driven patient portal. For a clinician in an under-resourced setting, it may manifest as a lack of training on how to interpret AI suggestions. There is a danger of “automation bias,” where a provider might over-rely on an AI’s suggestion because they lack the specialized training to challenge it, or conversely, a total rejection of the tool due to a lack of institutional support.

Moreover, the "black box" nature of many AI algorithms creates a trust deficit. Patients from communities that have historically experienced medical neglect or exploitation are rightfully skeptical of "invisible" algorithms making decisions about their care. Without transparency in how these tools work and a commitment to community-led implementation, the digital divide will be reinforced by a psychological divide of mistrust.

Key Takeaways for Health Equity in AI

  • Diversified Data: AI must be trained on global, representative datasets to prevent algorithmic bias across different ethnicities and socioeconomic groups.
  • Infrastructure Investment: Expanding broadband and stable power to rural and low-income areas is a prerequisite for equitable AI deployment.
  • Inclusive Design: Tools must be designed for low-bandwidth environments and tailored to various levels of digital literacy.
  • Human-in-the-Loop: AI should augment, not replace, the clinician, ensuring that human judgment and cultural competency remain central to care.
  • Policy Oversight: Governments must implement regulations that mandate transparency in AI training data and audit tools for bias before they reach the clinic.

Bridging the Gap: Strategies for a More Equitable Future

Overcoming the digital divide in health care AI requires more than just better code; it requires a political and ethical commitment to health as a human right. We must move toward a model of “inclusive innovation.”


One promising approach is the development of "edge AI"—algorithms that run locally on a device without needing a constant connection to a powerful cloud server. By shrinking the computational requirements, edge AI can bring diagnostic power to the most remote corners of the globe, allowing a handheld ultrasound device to detect fetal complications in a village without internet access.
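The essential property of edge inference is that the trained weights ship with the device, so a prediction requires no network at all. The sketch below shows that idea with a tiny logistic-regression risk score; the feature names and weights are hypothetical, chosen only to illustrate on-device computation, not any validated clinical model.

```python
import math

# Hypothetical pre-trained weights bundled with the device at manufacture
# time; no cloud call is needed at inference. Values are illustrative only.
WEIGHTS = {"maternal_age": 0.04, "systolic_bp": 0.03, "prior_complication": 1.2}
BIAS = -6.5

def risk_score(features):
    """Logistic-regression risk score computed entirely on-device."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability in (0, 1)

# A single patient record collected at the point of care.
patient = {"maternal_age": 32, "systolic_bp": 150, "prior_complication": 1}
print(round(risk_score(patient), 3))
```

Production edge deployments typically go further—quantized neural networks, dedicated low-power accelerators—but the design principle is the same: the model is small enough to live where the patient is.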

In parallel, we need "data democratization." This involves creating open-source, anonymized medical datasets that represent diverse populations, allowing researchers in low-resource settings to build and refine their own tools rather than relying on expensive, proprietary software from the West. When local clinicians are involved in the design of the AI, the tool is more likely to address the specific epidemiological needs of that community.

Finally, education must be prioritized. We need to integrate digital health literacy into medical school curricula and public health campaigns. Patients should be empowered to understand how their data is being used and how to interact with AI tools safely. The goal is to move from a top-down imposition of technology to a bottom-up integration of tools that patients and providers actually trust.

What Happens Next?

The trajectory of AI in medicine is currently being written, and the window to prevent a permanent “health-tech caste system” is closing. The next critical checkpoint will be the continued evolution of global AI governance frameworks. The World Health Organization is currently working on expanding its guidance for the ethical use of large multi-modal models in health, which will likely set the standard for how nations regulate AI to prevent bias and ensure equity.

As we move forward, the metric of success for health AI should not be the sophistication of the algorithm, but the breadth of its reach. A tool that saves a thousand lives in a high-tech city is a success; a tool that saves a thousand lives in a forgotten village is a triumph.

Do you believe AI will ultimately close the gap in global health, or is it destined to make it wider?
