A public manhunt in Switzerland has ignited a fierce debate over the intersection of law enforcement, privacy, and the rapid proliferation of artificial intelligence. In Bern, the Kantonspolizei (cantonal police) recently released unpixelated photographs of 31 individuals suspected of involvement in riots during a pro-Palestine demonstration, hoping the public could help identify them. While the police must operate within strict legal frameworks, a private citizen has demonstrated that the same goal can be achieved in minutes using AI-powered facial recognition tools.
The incident highlights a growing “capability gap” between state authorities and private citizens. While the Bern police are bound by data protection regulations that prohibit the use of certain AI tools, private individuals can access powerful image-search platforms that scrape the internet for matching faces. This has led to a situation where private citizens are effectively conducting digital forensics that the state is legally barred from performing, raising questions about whether the current legal protections are keeping pace with technological reality.
The controversy stems from events on October 11, 2025, when a pro-Palestine demonstration in Bern devolved into violence and property damage. After traditional investigative methods failed to identify all the participants, the police turned to a rarely used tactic: publishing clear, unpixelated images of 31 suspects to solicit tips from the general population. This move provided the raw data necessary for AI tools to be deployed by the public, leading to the rapid identification of several suspects by non-police actors.
The AI Loophole: Private Identification vs. Police Constraints
The tension between public safety and personal privacy became evident when a private individual claimed to have identified two of the suspects within five minutes using AI tools. According to reports, these tools allow users to upload a photo, which the AI then compares against a massive database of images found on social media and other websites to find matches. This process, known as reverse image searching or facial recognition, can link an anonymous face in a police photo to a named profile on a social network in seconds.

The Kantonspolizei Bern, however, cannot utilize these specific AI platforms. The police have stated that they may only use legally permissible means. The restriction is based on a 2020 directive from the Federal Data Protection and Information Commissioner (Edöb), which deems the “unsolicited procurement and further processing of facial data via the internet” to be inadmissible. Because these AI tools collect images from the web without the consent of the individuals, their use by state authorities would violate Swiss personality rights and data protection laws.
This creates a paradoxical scenario: the police publish the images to secure help, and the most effective way for the public to help is through a method that the police themselves are forbidden from using. Some observers have pointed out that this may actually result in a greater violation of personality rights, as the police essentially “outsource” the identification process to an unregulated public using tools that bypass legal safeguards.
Legal Deadlocks and the Debate Over Personality Rights
The situation in Bern has sparked a broader legal debate among Swiss jurists and policymakers. On one side are calls for the government to modernize its laws to allow the targeted use of AI in criminal investigations, especially in cases involving riots and property damage. Proponents argue that when traditional methods fail, the state should have the tools necessary to maintain law and order.

On the other side, civil liberties advocates warn that allowing police to use such AI tools would open the door to mass surveillance. The concern is that once the threshold is crossed for “riot suspects,” the technology could be expanded to monitor peaceful protesters or other citizens, leading to a permanent erosion of anonymity in public spaces. The Edöb’s 2020 ruling remains the primary legal barrier, emphasizing that the automatic scraping of facial data is a fundamental breach of privacy.
The suspects in this case are being sought in connection with the riots of October 2025, with charges including property damage. While the police have confirmed that some individuals have already been identified through the public appeal, the debate over the “AI loophole” continues to intensify as the technical capabilities of these tools evolve faster than the legislative framework.
Key Takeaways of the Bern AI Controversy
- The Incident: Bern police published unpixelated photos of 31 suspects from an October 2025 pro-Palestine protest to seek public help.
- The AI Factor: Private citizens used AI facial recognition tools to identify suspects in minutes by matching police photos with social media profiles.
- The Legal Barrier: Swiss police are prohibited from using these tools due to a 2020 ruling by the Federal Data Protection and Information Commissioner (Edöb).
- The Conflict: A debate has emerged over whether the state’s refusal to use AI—while publishing the data that enables others to do so—is a contradiction in privacy protection.
- The Outcome: Several suspects have been identified, but the case highlights the gap between current law and available technology.
The Broader Implications for Digital Privacy
The Bern case is a microcosm of a global struggle. As AI tools become more accessible to the general public, the concept of “public anonymity” is disappearing. When law enforcement releases images into the wild, they are not just asking for tips; they are providing a dataset for anyone with an internet connection and an AI account to conduct their own investigations.
For those with a background in computer science, the mechanism is clear: these tools leverage neural networks trained on billions of images to identify unique facial landmarks. When a user uploads a police photo, the AI doesn’t just “look” at the photo; it converts the face into a mathematical vector (an embedding) and searches for the closest matching vector in its index. This process is nearly instantaneous and highly accurate, making pixelation or blurring of photos (which the Bern police chose not to do in this instance) the only real defense against such tools.
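The vector-matching step described above can be illustrated with a minimal sketch. This is not the code of any real facial recognition service; the embeddings here are tiny invented vectors, whereas production systems derive hundreds of dimensions per face from a trained neural network. The comparison itself is just cosine similarity against an index:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_closest_match(query, index):
    """Return the (name, score) of the indexed vector closest to the query."""
    best_name, best_score = None, -1.0
    for name, vec in index.items():
        score = cosine_similarity(query, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy 4-dimensional "embeddings" (all names and numbers are invented);
# a real index would hold vectors scraped from millions of profile photos.
index = {
    "profile_a": [0.9, 0.1, 0.0, 0.2],
    "profile_b": [0.1, 0.8, 0.3, 0.0],
}
query = [0.85, 0.15, 0.05, 0.25]  # vector computed from an uploaded photo

name, score = find_closest_match(query, index)
print(name)  # the profile whose face vector lies closest to the query
```

The point of the sketch is that identification reduces to a nearest-neighbor search over precomputed vectors, which is why a match against millions of scraped images can return in seconds.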
The decision by the Kantonspolizei Bern to release unpixelated images was described as a “rarely used means,” indicating that the police were aware of the risks but felt the severity of the October 2025 riots justified the move. However, the fact that a private citizen could achieve “100 percent identification” within five minutes suggests that the police’s reliance on traditional “tips” is becoming obsolete in the age of AI.
As the legal community in Switzerland and beyond grapples with these developments, the central question remains: should the law evolve to give police these tools, or should the law be strengthened to prevent the public from using them for “vigilante” identification? For now, the Bern police remain limited to legally secured methods, while the digital world continues to operate without such boundaries.
The police continue to seek the remaining suspects from the October events. Further updates regarding the legal status of AI tools in Swiss law enforcement are expected as policymakers review the impact of this incident.
Do you believe law enforcement should be allowed to use AI facial recognition if it means faster identification of criminals, or does the risk to privacy outweigh the benefits? Share your thoughts in the comments below.