The intersection of artificial intelligence and healthcare delivery is often framed as a leap toward efficiency, but for millions of Medicaid beneficiaries, that efficiency can manifest as a “black box” determining whether they receive essential care. In a significant move to safeguard patient rights, the Medicaid and CHIP Payment and Access Commission (MACPAC) is calling for substantially greater transparency in how AI is used in Medicaid prior authorization, along with reinforced human oversight.
The non-partisan legislative branch agency, which provides critical data analysis and policy recommendations to Congress, recently voted on a series of measures designed to curb the risks associated with automated decision-making. At the heart of the issue is “prior authorization”—the multi-step process where healthcare payers require providers to obtain approval before delivering specific medications, services, or medical items. While automation promises to speed up this process, MACPAC warns that without transparency, regulators have limited insight into how these algorithms operate, potentially masking data bias or clinical inaccuracies.
As a physician and journalist, I have seen how the gap between algorithmic efficiency and clinical reality can jeopardize patient outcomes. When a computer program denies a claim for medical necessity without a transparent rationale or a qualified human review, the result is not just a bureaucratic hurdle; it is a potential delay in life-saving treatment. The recommendations from MACPAC aim to ensure that while technology may assist the process, the final word on a patient’s health remains with a qualified professional.
The Risks of Automated Care Denials
Prior authorization is widely utilized across both Medicaid managed care and fee-for-service models. However, the integration of artificial intelligence and complex algorithms into this workflow has created a visibility gap. According to MACPAC, state and federal regulators currently possess limited insight into how payers employ these technologies. This lack of transparency makes it difficult to monitor for systemic errors or biases that could unfairly disadvantage certain patient populations.
The primary concern is the “automation” of medical necessity denials. When an AI system flags a treatment as unnecessary based on a dataset that may be outdated or biased, the patient is often left without a clear explanation. By advocating for transparency in AI-driven prior authorization, MACPAC is pushing for a system where the logic behind a denial is visible and contestable, rather than hidden within a proprietary algorithm.
MACPAC’s Blueprint for Human Oversight
To address these vulnerabilities, MACPAC has proposed several policy levers to the Centers for Medicare & Medicaid Services (CMS) and lawmakers. These recommendations focus on ensuring that automation does not replace medical expertise.
One primary recommendation suggests that CMS issue guidance to state Medicaid agencies and managed care plans. This guidance would require that any automated care denial based on medical necessity be reviewed by a human with the appropriate expertise. This ensures that a clinical professional—not just a software output—validates the decision to withhold care.

The Commission also recommends updating regulations for fee-for-service Medicaid. Under the proposed changes, medical necessity denials would be required to be made by a person with a professional background specifically tailored to the enrollee’s needs, whether those are medical, behavioral, or long-term care requirements. This move is designed to prevent “one-size-fits-all” denials that ignore the nuances of complex patient histories.
Finally, MACPAC is urging CMS to provide states with clearer guidance on how to use their existing regulatory authority to oversee the use of automation in utilization management. By empowering states to audit and monitor how insurers use AI, the Commission hopes to create a more accountable ecosystem for the safety-net insurance program.
What This Means for Patients and Providers
For healthcare providers, these recommendations represent a potential reduction in the “administrative burnout” caused by fighting opaque automated denials. When a human expert is required to review a denial, providers have a clearer path to appeal and a more reliable point of contact to discuss the clinical merits of a requested treatment.
For patients, the stakes are higher. The shift toward human-centric oversight means that the specifics of their health condition—details that an algorithm might overlook or misinterpret—are more likely to be considered. The goal is to prevent a scenario where a patient is denied a critical service simply because they did not fit the statistical profile used by a payer’s AI tool.
These efforts are part of a broader examination of automation within the Medicaid system. MACPAC has conducted an extensive study to understand the extent to which states and managed care plans are adopting these technologies, identifying both the efficiencies they offer and the systemic risks they introduce. More details on these findings are expected to be formalized in an upcoming official report to Congress scheduled for June 2026.
Key Takeaways for Medicaid Stakeholders
- Human-in-the-Loop: MACPAC recommends that all automated medical necessity denials in managed care be reviewed by a qualified human expert.
- Specialized Review: In fee-for-service Medicaid, denials should be handled by professionals with backgrounds matching the patient’s specific care needs (e.g., behavioral or long-term care).
- Regulatory Empowerment: The Commission wants CMS to help states better oversee how insurers use automation in utilization management.
- Transparency Gap: The push is driven by a lack of visibility into how AI algorithms are used, which can hide data bias and inaccuracies.
The next major milestone in this policy push will be the release of the MACPAC report to Congress in June 2026, which will provide a comprehensive analysis of automation in the Medicaid prior authorization process and further refine the principles for its oversight.
Do you believe AI should have any role in denying medical care, or should every decision be signed off by a doctor? Share your thoughts in the comments below or share this article to join the conversation on healthcare equity.