What if the biggest problem with electronic health records was not the technology itself, but that we expected it to transform medicine when it could only lay the foundation? That question frames a timely conversation about artificial intelligence in healthcare, as explored in a recent episode of The Podcast by KevinMD featuring Dr. Robert Wachter, professor and chair of the Department of Medicine at the University of California, San Francisco.
Dr. Wachter joined the podcast to discuss his book, A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, which examines how generative AI is beginning to reshape clinical workflows, diagnosis, and patient interaction in ways that earlier health IT systems failed to achieve. Drawing on more than 100 interviews with pioneers across medicine, technology, policy, and business, Wachter argues that in a healthcare system burdened by bureaucracy, rising costs, and clinician burnout, AI doesn't need to be perfect—it only needs to be better than the status quo.
The discussion highlights real-world applications already emerging in hospitals and clinics: AI tools drafting clinical notes, answering patient questions, recommending treatments, interpreting medical images, and even guiding surgeries. Wachter points to Open Evidence as an example of a generative AI platform that has displaced UpToDate as a go-to clinical knowledge resource for many physicians, signaling a shift in how medical information is accessed and applied at the point of care.
He also notes the rapid adoption of AI scribes, which have moved from experimental tools to expected components of clinical practice within just two years—a pace of change that underscores both the potential and the challenges of integrating AI into high-stakes medical environments.
The Waymo Model: Building Trust Through Incremental Progress
A central metaphor in Wachter’s analysis comes from the autonomous vehicle industry: the Waymo model of building public trust through gradual, verifiable progress. Just as self-driving cars earned acceptance by demonstrating safety over millions of miles in controlled environments, Wachter suggests that medical AI should follow a similar path—starting with narrow, well-defined tasks where errors can be monitored and corrected before expanding to more complex clinical judgments.
This incremental approach, he argues, is essential to avoiding a catastrophic setback in medical AI adoption. Unlike electronic health records, which were often expected to revolutionize care delivery immediately despite limited functionality, AI implementations must be introduced with humility, rigorous oversight, and a clear understanding of their limitations—particularly around hallucinations, bias, and data quality.
Wachter emphasizes that trust in medical AI won't come from flawless performance, but from transparency about when and how the technology fails, coupled with systems designed to catch and learn from those failures. This mindset shift—from expecting perfection to demanding continuous improvement—is critical for sustainable integration.
AI and the Doctor-Patient Relationship: Augmentation, Not Replacement
One of the most provocative ideas Wachter explores is whether the doctor-patient relationship is as irreplaceable as many physicians believe. While acknowledging the deep human value of empathy and connection in healing, he presents evidence that AI can now match, and sometimes surpass, clinicians in certain aspects of diagnostic reasoning and even communicative empathy—particularly when augmented by large language models trained on vast datasets of medical interactions.

This raises important questions about deskilling in medical education. If AI begins to handle tasks traditionally seen as core to clinical expertise—like synthesizing patient histories or weighing differential diagnoses—what skills should future physicians prioritize? Wachter suggests that medical training may need to evolve toward greater emphasis on judgment, ethical reasoning, and interpersonal communication, while leveraging AI as a cognitive partner rather than a threat to professional identity.
Still, he cautions against overestimating AI's emotional intelligence. While generative models can produce responses that feel empathetic, they do not truly understand or feel compassion. The risk, he warns, lies not in AI replacing doctors, but in healthcare systems using the illusion of empathy to justify reduced human contact—especially in underserved or overburdened settings.
Primary Care Reimagined: A Decade of Transformation
Looking ahead, Wachter envisions a radically different primary care landscape within ten years. AI could handle routine triage, chronic disease monitoring, and preventive outreach, freeing clinicians to focus on complex cases, care coordination, and meaningful patient conversations. In this model, physicians might spend less time on documentation and more on interpretation, guidance, and healing—roles that remain distinctly human.
Such a shift could help alleviate clinician burnout, a persistent crisis driven in part by administrative overload. By automating repetitive tasks, AI has the potential to restore some of the joy and purpose that initially drew people to medicine. But Wachter stresses that realizing this benefit depends on thoughtful design—ensuring that AI tools are built with clinicians, not just for them, and that they enhance rather than erode professional autonomy.
He also highlights equity as a critical concern. Without deliberate effort, AI could widen existing disparities if access to advanced tools remains concentrated in well-resourced institutions. Ensuring that benefits reach safety-net hospitals, rural clinics, and global health settings will require intentional policy, funding, and innovation strategies.
Navigating Hype and Skepticism
Throughout A Giant Leap, Wachter walks a careful line between enthusiasm and caution. He rejects both utopian visions of AI as a panacea and dystopian fears of machines overtaking medicine. Instead, he presents a nuanced view: AI is not inherently good or bad, but a powerful tool whose impact will depend on the choices made by clinicians, leaders, policymakers, and patients.
The book draws on extensive research and firsthand accounts from those on the frontlines of AI adoption—innovators who are testing boundaries, confronting failures, and learning what works in real-world settings. This grounded perspective helps cut through the noise of vendor claims and speculative futurism, offering instead a roadmap grounded in clinical reality.
For healthcare professionals trying to understand where AI is headed and what it means for their practice, Wachter’s work provides essential context. It’s not about predicting the future, but about preparing for it—with eyes open to both promise and peril.
As the conversation concludes, the message is clear: the leap toward AI-enhanced healthcare is already underway. It began gradually, with pilot programs and proof-of-concept studies. But now, as evidence mounts and tools mature, the shift may feel sudden—like a threshold crossed not with a bang, but with the quiet confidence of systems that finally work as intended.
To learn more about Dr. Robert Wachter's insights on AI in healthcare, listeners can find the full episode of The Podcast by KevinMD on major podcast platforms or visit KevinMD.com for additional resources.
Stay informed. Share your thoughts. Join the conversation about the future of medicine.