Fake disease reveals how AI spreads misinformation

A fake medical condition created for a research experiment has shown how easily artificial intelligence systems can spread false health information.

A made-up medical condition has revealed how easily artificial intelligence and even researchers can be fooled by convincing misinformation.

The fake illness, called “bixonimania,” was created in 2024 by Almira Osmanovic Thunström as part of an experiment. She published fake studies online to see if AI chatbots would treat the condition as real. They did.

Within weeks, several major AI systems were describing bixonimania as a genuine eye condition, linking it to screen use and even suggesting people seek medical advice.

The experiment worked so well that the false information didn’t just stay in AI systems. In one case, it was cited in a real academic paper, which was later retracted.

Experts say this highlights a serious issue: AI tools don't verify facts the way a human researcher or clinician would. Instead, they generate responses by predicting likely text based on patterns in their training data. Because of that, information written in a formal, authoritative style, such as academic papers, medical reports, or clinical language, can be treated as more trustworthy than it really is. If false or misleading claims are packaged in that format, AI systems may repeat them confidently, with no built-in understanding of whether they are true.

“This is how misinformation spreads,” said Alex Ruani, warning that both AI and humans can be misled if sources aren’t carefully checked.

The fake condition may not be real, but the problem it exposed is.
