Unmasking Bixonimania: How Almira Osmanovic Thunström’s Invented ‘Disease’ Went Viral from Blogs to Chatbots in 2024

A fake disease invented by researchers has fooled artificial intelligence systems and found its way into scientific publications, revealing critical vulnerabilities in how AI processes medical information.

The condition, called bixonimania, was created in 2024 by a team led by Almira Osmanovic Thunström from the University of Gothenburg to test whether large language models would accept and spread fabricated medical information as fact.

Described as a skin condition caused by excessive exposure to blue light from screens, bixonimania was presented with symptoms including sore eyes, darkening around the eyelids, and a compulsive need to rub the eyes. The researchers deliberately chose an obviously inappropriate name ending in “-mania” to signal its falsity to expert readers.

Despite these warning signs, AI chatbots began recommending bixonimania as a diagnosis for users reporting screen-related eye strain. Microsoft Copilot described it as “an intriguing and relatively rare condition,” while Gemini stated it was “a condition caused by excessive exposure to blue light.”

The experiment took a troubling turn when the fabricated preprints were cited in peer-reviewed literature, suggesting that some researchers had relied on AI-generated references without verifying the underlying sources. This highlighted a growing concern about the integrity of scientific knowledge in the age of AI.

Osmanovic Thunström explained that the goal was to create a medical condition absent from existing databases and observe whether AI systems would treat it as real. “I wanted to see if I can create a medical condition that did not exist in the database,” she told Nature in an interview.

The two preprint papers describing bixonimania were uploaded to a preprint server in early 2024. Within weeks, major AI systems were presenting the invented condition in their responses as though it were legitimate medical information. As of April 2026, the fabricated disease continued to appear in AI-generated health responses.

The researchers noted that they took several deliberate steps to make the condition’s falsity apparent, including attributing the research to fictional authors from non-existent institutions such as Asteria Horizon University in Nova City, California. The papers also acknowledged “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise,” further underscoring the satirical intent.

By inventing a condition with an overtly psychiatric-sounding name and implausible origins, the team aimed to test the boundaries of AI gullibility and the potential for misinformation to infiltrate scientific discourse. The case of bixonimania serves as a cautionary tale about the risks of AI systems amplifying unverified information, particularly in health contexts where accuracy is paramount.

As AI tools become increasingly integrated into healthcare and medical research, the bixonimania experiment underscores the need for robust verification processes and critical evaluation of AI-generated content, especially when it comes to medical advice and scientific references.
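The call for verifying AI-generated references can be made concrete with a small sketch. The Python snippet below is not part of the original study; the function name and the sample DOI are illustrative. It simply checks whether a citation’s DOI actually resolves in the public Crossref API before the reference is trusted.

```python
# Minimal sketch: verify that an AI-suggested citation's DOI exists in Crossref
# before accepting the reference. Assumes the 'requests' package is installed.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        # Crossref returns the work's title as a list of strings.
        titles = resp.json().get("message", {}).get("title") or ["<no title>"]
        print(f"Found: {titles[0]}")
        return True
    return False

if __name__ == "__main__":
    # Hypothetical DOI returned by a chatbot; check it before citing.
    print(doi_exists("10.1000/example.doi"))
```

A reviewer or editorial workflow could run a check like this over every reference a model proposes and flag any DOI that fails to resolve for manual scrutiny.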

For ongoing updates on AI safety and medical information integrity, readers are encouraged to follow developments from authoritative sources in technology ethics and public health.
