In recent months, a network of artificial intelligence-generated YouTube channels has drawn significant attention for promoting false narratives about Canadian provincial separatism and potential annexation by the United States. These channels, which mimic the appearance and tone of legitimate Canadian news outlets, have collectively amassed tens of millions of views while spreading unsubstantiated claims about widespread public support for secession among provinces including Alberta, Manitoba, Saskatchewan, Quebec and British Columbia.
The phenomenon was first highlighted by researchers at the Canadian Digital Media Research Network (CDMRN), which identified approximately two dozen YouTube channels utilizing AI-generated scripts, synthetic voice narration, and paid actors to produce content designed to exacerbate regional tensions. According to their findings, these channels have accumulated around 40 million views in total, leveraging YouTube’s recommendation algorithms to reach broad audiences beyond those actively seeking separatist content.
Aengus Bridgman, director of the Media Ecosystem Observatory, explained that the core narrative promoted by these channels falsely asserts a mass movement across multiple Canadian provinces seeking to join the United States. He cited one example in which a video claimed 58% of Manitobans wished to leave Canada—a figure he stated has no basis in any credible polling data. “There is zero survey that demonstrates this,” Bridgman said. “It’s completely false and clearly intended to manipulate public perception.”
Jean-Christophe Boucher, an associate professor of political science at the University of Calgary and expert on foreign interference, reported encountering these videos in his own YouTube recommendations despite not seeking such content. He noted that their appearance in mainstream feeds indicates the disinformation campaign is reaching general audiences, not just niche communities interested in separatism. “The fact that these videos are being recommended to ordinary users means they are penetrating the wider public discourse,” Boucher observed.
The content falls under what researchers describe as “slopaganda”—a portmanteau of “slop” and “propaganda”—referring to low-effort, AI-generated material designed not for persuasive depth but for volume and algorithmic engagement. These videos often employ repetitive formats, such as multi-hour “history for sleep” documentaries or automated news-style segments, produced at minimal cost using tools like Claude for script generation and ElevenLabs for realistic voice synthesis.
This approach mirrors other AI-driven content operations that have proven financially lucrative. In a separate case, a 22-year-old creator named Adavia Davis was reported to have built a largely automated YouTube network generating between $40,000 and $60,000 per month—approximately $700,000 annually—through similar techniques. Davis’s operation, which relies on internally developed tools like TubeGen to orchestrate production, exemplifies how low-cost AI pipelines can exploit platform algorithms for profit, even when the content serves misleading or divisive purposes.
While no direct legal action has been taken against these specific channels as of April 2025, the presence of foreign-backed disinformation targeting Canadian unity has drawn concern from national security analysts. The Canadian government has previously warned about external actors using digital platforms to amplify societal divisions, particularly around issues of regional identity and federal-provincial relations. Experts suggest that monitoring algorithmic amplification and improving media literacy are key steps in countering such influence operations.
As of this writing, there are no scheduled public hearings or official reports specifically addressing this AI-generated disinformation network. However, ongoing research by institutions like CDMRN and the Media Ecosystem Observatory continues to track the evolution of these campaigns. For updates on digital interference threats and media integrity initiatives, readers can consult official publications from the Canadian Security Intelligence Service and the Department of Canadian Heritage.
What do you think about the rise of AI-generated content in shaping public discourse? Share your thoughts in the comments below and help spread awareness by sharing this article with your network.