AI-Generated Fake Soldier Profile on OnlyFans Sparks Swiss Army Investigation

A fabricated social media presence, featuring images of a purported Swiss soldier named “Selina Zender,” has been exposed as a sophisticated scam leveraging artificial intelligence. The account, which gained traction on platforms like Instagram, was used to promote a fraudulent OnlyFans profile, prompting concerns about the misuse of AI for deceptive purposes and potential exploitation. The incident highlights a growing trend of AI-generated content being used to create convincing, yet entirely fictitious, personas online, raising questions about online authenticity and security.

The fake account initially surfaced on the Instagram meme page @army_szene, where posts featuring the AI-generated images of “Selina Zender” were shared. These posts, according to reports, offered incentives, including monetary rewards, for likes and shares, artificially boosting engagement. The scheme aimed to drive traffic to a linked OnlyFans account, a subscription platform known chiefly for adult content. The Swiss Army has been alerted to the situation and is evaluating potential legal action, though the specifics of any charges remain under review by legal authorities.

The Rise of AI-Powered Deception

The case of “Selina Zender” is not isolated. A recent investigation by Swiss public broadcaster SRF, conducted in late October 2025, revealed a broader pattern of accounts deliberately engineered to generate engagement on social media for advertising purposes. The SRF report detailed how these accounts operate, often employing deceptive tactics to attract followers and inflate metrics. This particular instance demonstrates a more insidious application of these tactics, directly linking fabricated online personas to potentially exploitative platforms.

According to reports from Watson and Blick, the images used to create the “Selina Zender” profile were manipulated using artificial intelligence, based on photos of actual Swiss military personnel. This raises serious concerns about the unauthorized use of individuals’ likenesses and the potential for reputational damage. The use of AI to create these deepfakes makes detection increasingly difficult, as the generated images can be remarkably realistic.

Instagram Account Remains Active, Warnings Ignored

Despite being flagged by users who recognized the images as AI-generated, the account associated with “Selina Zender” remained active as of February 26, 2026. Comments alerting others to the fraudulent nature of the profile were reportedly deleted, and administrators of the @army_szene account did not respond to inquiries from the Swiss magazine *Schweizer Soldat*. This lack of responsiveness suggests a deliberate effort to maintain the deception and continue profiting from the scheme.

Users clicking through to the associated OnlyFans account are met with a warning indicating a potential scam. However, the warning can be bypassed, allowing individuals to proceed to the fraudulent profile. This highlights the limitations of current platform safeguards in preventing users from falling victim to these types of scams. The persistence of the account, despite numerous reports, underscores the challenges social media companies face in combating AI-driven disinformation and fraudulent activity.

Swiss Army Response and Potential Legal Ramifications

The Swiss Army is aware of the situation and is investigating the matter. However, the Army has clarified that determining whether a criminal offense has been committed rests with the relevant prosecuting authorities. If the posts are found to be unlawful, the Army may take further action. Potential legal ramifications could include charges related to identity theft, fraud, and the unauthorized use of personal data; the specific charges would depend on the findings of the investigation and the applicable Swiss laws.

The incident raises broader questions about the legal framework surrounding AI-generated content and the responsibility of social media platforms in policing such content. Currently, the legal landscape is still evolving, and there is a lack of clear regulations specifically addressing the misuse of AI for deceptive purposes. This case could potentially contribute to the development of new legal precedents and regulations aimed at protecting individuals from AI-driven scams.

The Broader Implications of AI-Generated Identities

The “Selina Zender” case serves as a stark reminder of the potential for AI to be used to create convincing, yet entirely fabricated, online identities. This technology can be used not only for financial scams, as seen in this instance, but also for political disinformation, social engineering, and other malicious purposes. As AI-generated content grows more sophisticated, it becomes ever harder for individuals to distinguish real profiles from fake ones, eroding trust in online interactions.

Experts warn that this is just the beginning of a new era of online deception. As AI technology continues to advance, it will become even easier to create realistic fake personas and manipulate public opinion. This necessitates a multi-faceted approach to combating AI-driven disinformation, including technological solutions for detecting fake content, increased media literacy education, and stronger legal frameworks for holding perpetrators accountable.

Instagram reel: https://www.instagram.com/reel/DUtKHOGDKhr/

Key Takeaways

  • AI-Powered Scams are Increasing: The case of “Selina Zender” demonstrates a growing trend of using AI to create fake online personas for fraudulent purposes.
  • Social Media Platforms Face Challenges: Current platform safeguards are often insufficient to prevent users from falling victim to AI-driven scams.
  • Legal Frameworks are Evolving: The legal landscape surrounding AI-generated content is still developing, and there is a need for clearer regulations.
  • Increased Vigilance is Crucial: Users must be more vigilant about verifying the authenticity of online profiles and content.

The Swiss Army continues to monitor the situation and to cooperate with law enforcement; the investigation is ongoing. As AI technology continues to evolve, it is crucial for individuals and organizations to stay informed about the risks and take steps to protect themselves from AI-driven scams and disinformation. We encourage readers to share their experiences and insights in the comments below.
