AI Mistakenly Flags Doritos Bag as Gun, Leading to Student Search
A high school student in Baltimore County, Maryland, was recently handcuffed and searched after an artificial intelligence (AI)-powered gun detection system falsely identified a bag of Doritos as a weapon. This incident highlights the growing pains and potential pitfalls of deploying AI technology in school safety measures.
How the Incident Unfolded
The student, identified as Allen, was flagged by the system while simply holding the snack. School cameras equipped with AI are designed to detect potential weapons and automatically alert school officials and law enforcement.
Here’s a breakdown of what happened:
* The AI system detected what it perceived as a gun.
* School resource officers responded and detained Allen.
* He was forced to kneel, handcuffed, and thoroughly searched.
* No weapons were found, only a bag of Doritos.
* Authorities showed Allen the image that triggered the alert, revealing the misidentification.
“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen explained.
The Rise of AI in School Security
Baltimore County high schools implemented the AI-driven gun detection system last year. The technology aims to proactively identify threats by analyzing camera feeds for suspicious objects. When the system detects something concerning, it instantly sends an alert to designated personnel.
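The article does not detail the vendor’s implementation, but systems of this kind generally follow a simple detect-and-alert loop over live camera frames. The sketch below is purely illustrative: the `run_detector` stub, the `send_alert` behavior, and the 0.8 confidence threshold are assumptions made for the sake of the example, not details of the Baltimore County system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model thinks it sees, e.g. "gun"
    confidence: float  # model confidence score between 0.0 and 1.0

ALERT_THRESHOLD = 0.8  # assumed cutoff; real deployments tune this value

def run_detector(frame) -> list[Detection]:
    # Hypothetical stand-in for the vendor's computer-vision model;
    # the article does not describe the actual model or its API.
    return []

def send_alert(detection: Detection) -> None:
    # In a deployed system this would notify designated school
    # officials and law enforcement; printing stands in for that here.
    print(f"ALERT: possible {detection.label} "
          f"(confidence {detection.confidence:.2f})")

def process_frame(frame) -> None:
    # Core loop: run the detector on each frame and alert on any
    # detection labeled a weapon above the confidence threshold.
    for d in run_detector(frame):
        if d.label == "gun" and d.confidence >= ALERT_THRESHOLD:
            send_alert(d)
```

A false positive like the Doritos bag occurs when the model assigns a high confidence score to the wrong label, which is why showing the flagged image to a human reviewer, as officials eventually did with Allen, is a critical safeguard.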
School and Police Response
School administrators acknowledged the distress caused by the incident. They stated that counselors are available to support both the student who was searched and any other students who may be affected.
Baltimore County police confirmed responding to a report of a suspicious person with a weapon at Kenwood High School. Their search determined that the student was not in possession of any weapons.
Concerns and Reactions
This event has understandably sparked concern among parents and advocates. Lamont Davis, the student’s grandfather, expressed his dismay, stating, “Nobody wants this to happen to their child. No one wants this to happen.”
The Bigger Picture: AI Accuracy and Student Well-being
This incident raises critical questions about the accuracy of AI-powered security systems and their impact on students. While the intention behind these technologies is to enhance safety, false positives can lead to traumatic experiences and erode trust.
Several points are worth considering:
* False Alarms: AI systems are not foolproof and can misinterpret everyday objects; the rough arithmetic after this list shows why false alarms are hard to eliminate at scale.
* Student Trauma: Being wrongly accused and subjected to a search can be deeply upsetting for students.
* Equity Concerns: There are concerns that these systems may disproportionately flag students of color.
* Transparency and Oversight: Clear policies and independent review are crucial to ensure responsible implementation.
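To see why false alarms are a structural problem rather than a one-off glitch, consider an illustrative calculation (the numbers here are assumptions, not figures from the Baltimore County deployment): if each analyzed camera frame carries even a 0.01% chance of a false positive, and a district’s cameras collectively analyze one million frames per day, the system will generate roughly 1,000,000 × 0.0001 = 100 false alerts every day. Driving the per-frame error rate down helps, but at that volume no realistic model eliminates false alarms entirely, which is why human review of each flagged image matters.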
It’s vital that schools carefully evaluate the benefits and risks of AI security systems. Ongoing monitoring, regular audits, and a commitment to transparency are essential to protect both student safety and their rights. This situation underscores the need for a balanced approach that prioritizes both security and the well-being of all students.
Correction (October 24, 2025): An earlier version of this article contained an image of a Baltimore city police vehicle. Baltimore County is a separate jurisdiction and does not include the city of Baltimore.