When AI systems exhibit unexpected behavior—whether generating biased outputs, leaking sensitive data, or enabling cyberattacks—vendors increasingly respond with a familiar refrain: the flaw isn’t in their model, but in how users deployed it. This pattern of deflecting responsibility has drawn sharp criticism from security researchers, regulators, and enterprise customers who argue that AI companies are avoiding accountability by treating known vulnerabilities as “working as intended.”
The issue gained renewed attention in early 2024 when multiple independent audits revealed that popular large language models (LLMs) could be manipulated to bypass safety guards through subtle prompt variations, a technique known as jailbreaking. Despite public commitments to AI safety, several leading vendors initially classified these exploits as user errors rather than systemic flaws in their architecture or training processes.
This stance contradicts growing consensus among AI ethicists and cybersecurity experts that foundational model providers bear primary responsibility for securing their systems against reasonably foreseeable misuse. As one senior researcher at a major tech university put it during a recent industry briefing: “You can’t sell a powerful tool, disclaim all liability when it’s misused in predictable ways, and then tell customers they need to buy more of your product to fix the problem you created.”
How Vendors Frame AI Security Failures
When vulnerabilities surface, AI companies often deploy a two-part defense strategy. First, they emphasize that customers must use AI-powered tools to detect and mitigate AI-generated threats—effectively positioning their own security products as the solution to risks inherent in their core offerings. Second, they characterize reported issues not as bugs but as expected behaviors arising from the statistical nature of generative models.
This framing allows vendors to avoid classifying certain outputs as defects under standard software liability frameworks. Unlike traditional software, where a crash or data leak typically triggers a patch obligation under warranty or service-level agreements, AI vendors argue that undesirable generations stem from the model’s probabilistic design rather than coding errors.
Critics counter that this distinction ignores the duty of care owed by providers who release systems capable of generating harmful content at scale. “Just given that a model behaves statistically doesn’t indicate its producers are absolved of responsibility for foreseeable outcomes,” said a former Federal Trade Commission technologist now advising on AI policy. “If you realize your system can be prompted to produce hate speech or reveal training data, and you don’t implement reasonable safeguards, that’s negligence—not feature behavior.”
Real-World Consequences of Deflected Accountability
The reluctance to acknowledge vulnerabilities as vendor-responsible issues has tangible downstream effects. Enterprise adopters report spending significant resources building custom guardrails, monitoring systems, and incident response plans to compensate for perceived gaps in vendor-provided safety measures. A 2023 survey of 500 global IT leaders found that 68% believed AI vendors underestimated the security burden placed on customers, with many resorting to third-party tools to monitor model outputs in real time.
In one documented case, a financial services firm discovered that an internal LLM-powered chatbot could be coaxed into revealing snippets of customer transaction histories through role-play prompts. When the issue was reported to the model provider, the vendor initially stated the behavior resulted from “inadequate user-side prompt filtering” and declined to issue a patch, instead recommending an upgrade to their premium security monitoring suite.
Only after the incident was detailed in a peer-reviewed security conference did the vendor release an updated model version with strengthened refusal training—a move analysts noted came months after similar fixes appeared in open-source alternatives. This delay highlighted concerns that commercial vendors may prioritize product segmentation over timely vulnerability remediation when accountability remains ambiguous.
Regulatory Pressure Mounts on AI Accountability
Governments and standards bodies are beginning to challenge the notion that AI vendors can disclaim liability for foreseeable harms. The European Union’s AI Act, which entered into force in August 2024, establishes clear obligations for providers of high-risk and general-purpose AI systems, including requirements to document known limitations, implement proportionate risk-mitigation measures, and report serious incidents.
Under the AI Act, providers must assess whether their models could generate illegal content, facilitate non-consensual deepfakes, or enable cyberattacks—and take steps to prevent such outcomes. Failure to comply can result in fines of up to 3% of global annual turnover. Notably, the regulation rejects the argument that statistical behavior absolves providers of responsibility, instead requiring them to anticipate and mitigate reasonably predictable misuse.
In the United States, while no comprehensive AI law exists yet, federal agencies are increasingly asserting authority over AI safety. The Federal Trade Commission has warned that companies making deceptive claims about AI safety or security may violate Section 5 of the FTC Act, which prohibits unfair or deceptive acts. In March 2024, the FTC settled a case with a generative AI startup over allegations that it misrepresented the effectiveness of its content filters—a action signaling heightened scrutiny of vendor claims.
Meanwhile, the National Institute of Standards and Technology (NIST) released draft guidance in November 2023 urging AI developers to adopt vulnerability disclosure policies similar to those in traditional software, including clear channels for reporting flaws and committed timelines for remediation. The framework emphasizes that providers cannot outsource safety entirely to users through disclaimers or upsell tactics.
What Responsible AI Accountability Looks Like
Security experts point to several practices that distinguish vendors taking ownership of model safety from those deflecting blame. Transparent vulnerability reporting—where companies publish detailed advisories when flaws are discovered and confirm patches have been issued—is a baseline expectation. Some open-source AI projects now maintain public CVE-style databases for model weaknesses, a model increasingly advocated for commercial providers.
Another key indicator is whether vendors issue updates to base models when safety issues are identified, rather than relying solely on post-processing filters or user-side mitigations. Researchers note that while input/output classifiers can reduce harmful generations, they often fail against adaptive attacks and do not address root causes in training data or model architecture.
Finally, meaningful accountability includes engaging with external auditors and participating in red teaming exercises that simulate real-world abuse scenarios. Companies that invite third-party scrutiny and act on findings demonstrate a commitment to safety that goes beyond compliance theater, according to AI governance specialists.
As AI systems become embedded in critical infrastructure, healthcare, and financial services, the expectation that vendors will stand behind the safety and security of their products is no longer optional—it’s a prerequisite for trust. The industry’s ability to shift from blame-shifting to proactive responsibility will determine not only regulatory outcomes but also whether AI fulfills its promise as a force for broad, equitable benefit.
For now, the message from security professionals is clear: when an AI vendor tells you a flaw is “working as intended,” ask what they intend to do about it—and whether they’ve shared that plan with anyone outside their sales team.
Stay informed about developments in AI accountability and security by following updates from trusted sources like the National Institute of Standards and Technology and Federal Trade Commission. Share your thoughts on vendor responsibility in AI safety in the comments below, and help spread awareness by sharing this article with colleagues and peers.