In Washington, D.C., the American Medical Association (AMA) has formally urged Congress to establish stricter federal oversight for artificial intelligence-powered mental health chatbots, citing growing concerns about patient safety and inadequate safeguards in rapidly expanding digital health tools.
The AMA’s call comes amid the widespread adoption of AI chatbots designed to offer mental health support. The organization acknowledges that these tools may improve access to care but warns that they lack consistent protections against serious risks, including encouraging self-harm, spreading misinformation, violating data privacy, and fostering unhealthy emotional dependency.
In letters addressed to the co-chairs of the Congressional Artificial Intelligence Caucus, the Congressional Digital Health Caucus, and the Senate Artificial Intelligence Caucus, the AMA emphasized that while innovation in digital mental health tools holds promise, it must not come at the expense of patient well-being or clinical integrity.
“AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response,” stated AMA CEO John Whyte, MD, MPH, in the organization’s official communication to lawmakers.
The AMA urged federal legislators to require stronger guardrails as a starting point for regulation, noting that modernized protections may be needed as these technologies evolve and become more deeply integrated into healthcare systems.
Core Risks Highlighted by the AMA
The organization outlined several documented hazards associated with unregulated mental health AI chatbots, based on reports and clinical observations:
- Inadequate crisis response: Systems failing to properly identify or de-escalate self-harm risks among users.
- Misinformation and dependency: Potential for AI to provide clinically inaccurate advice or encourage unhealthy emotional reliance in vulnerable individuals.
- Privacy breaches: Concerns over the security, storage, and potential commercialization of sensitive mental health data shared with chatbots.
- Child safety: Heightened risks for children and adolescents who may interact with these tools without proper oversight or age-appropriate safeguards.
The AMA stressed that transparency is a foundational requirement, insisting that chatbots must clearly disclose they are AI systems—not human therapists—and must be strictly prohibited from presenting themselves as licensed clinicians or offering formal diagnoses.
AMA’s Proposed Framework for Federal Oversight
To address these concerns, the AMA proposed five pillars for congressional consideration in regulating mental health AI chatbots:
- Enhance Transparency: Mandate clear disclosure that interactions are with AI, not humans, and ban any representation of chatbots as licensed mental health professionals.
- Set Clear Regulatory Boundaries: Prohibit AI from diagnosing or treating mental health conditions without formal regulatory review; require developers to implement crisis-detection systems that trigger immediate referrals to human crisis resources.
- Ensure Accountability and Monitoring: Enforce continuous safety monitoring, adverse event reporting, and rigorous standards—especially for tools marketed to or used by minors.
- Limit Commercial Influence: Prohibit advertising within mental health chatbots and ensure algorithmic outputs are free from sponsorship bias or commercial pressure.
- Protect Privacy and Security: Enforce strict limits on data collection, require explicit informed consent for how sensitive information is used or shared, and prohibit unauthorized commercialization of user data.
The AMA noted that these measures are intended not to stifle innovation but to ensure that AI tools responsibly complement—rather than replace—established clinical care, particularly in high-stakes mental health contexts where human judgment and empathy remain irreplaceable.
As of April 22, 2026, the letters to the Congressional AI Caucus, Digital Health Caucus, and Senate AI Caucus represent the AMA’s most recent formal engagement with federal policymakers on AI governance in healthcare.
The organization reiterated its support for thoughtful oversight that balances innovation with accountability, urging lawmakers to act promptly to prevent harm while preserving public trust in digital health advancements.
For ongoing updates on federal AI healthcare policy, readers can monitor official communications from the U.S. Congress and regulatory updates from the Department of Health and Human Services.