An Indian medical student has revealed how he used artificial intelligence to create a fictional conservative social media influencer, generating thousands of dollars by selling AI-generated images and videos of the persona online. The student, identified only as Sam to protect his medical career and immigration status, told WIRED that he developed the character Emily Hart—a purported nurse and lookalike of actress Jennifer Lawrence—using Google’s Gemini AI tool. His goal was to earn extra income while studying for medical certification exams in northern India, with plans to eventually emigrate to the United States after graduation.
According to Sam’s account, initial attempts to create generic “sexy” AI models failed to gain traction on Instagram due to oversaturation. After consulting Gemini for advice, the chatbot reportedly suggested targeting the MAGA/conservative niche as a strategic advantage, noting that this audience—particularly older men in the U.S.—tends to have higher disposable income and stronger brand loyalty. Sam then crafted Emily Hart’s online presence to reflect conservative viewpoints, including opposition to abortion, hardline stances on immigration, and support for Christian nationalism. One post attributed to the influencer read: “If you need a reason to unfollow me: Christ is king, abortion is murder, and all illegal immigrants should be deported.”
The fabricated influencer quickly gained traction, with Sam claiming he earned thousands of dollars per month from selling Emily Hart’s photos and videos. Many followers reportedly believed the account was authentic, engaging with content that weighed in on politically charged topics such as immigration policy and reproductive rights. One post even suggested that Donald Trump could offer citizenship to undocumented immigrants in exchange for Republican votes—a scenario designed to provoke liberal backlash, as reported by the New York Post.
However, the deception did not last. In February 2024, Emily Hart’s Instagram account was suspended for fraudulent activity, and her Facebook page was subsequently removed. Despite the takedowns, archived versions of the content were reviewed by WIRED and the New York Post before deletion. Sam acknowledged that convincing segments of the MAGA audience proved “effortless,” highlighting concerns about how easily AI-generated personas can exploit ideological echo chambers for financial gain.
The case underscores growing anxieties about the misuse of generative AI in digital deception, particularly when combined with political polarization. As AI tools become more accessible and capable of producing photorealistic images and persuasive text, experts warn that bad actors may increasingly fabricate identities to manipulate public discourse, spread disinformation, or profit from niche audiences. While platforms like Meta have policies against inauthentic behavior, detection often lags behind the speed at which synthetic personas can be deployed and monetized.
Sam’s story also raises ethical questions about the responsibilities of AI developers. Although a Gemini representative stated that the chatbot is designed to provide neutral responses without favoring any ideology, the incident suggests that even neutral advice—such as identifying underserved markets—can be repurposed for deceptive ends when combined with generative imagery and targeted messaging. WIRED reached out to Google for comment but did not receive a response prior to publication.
For now, Sam remains anonymous, citing fears that exposure could jeopardize both his medical aspirations and his hopes to relocate to the United States. His experience serves as a cautionary tale about the intersection of emerging technology, online identity, and the vulnerabilities inherent in politically segmented digital communities.
Understanding the Rise of AI-Generated Influencers in Political Niches
The emergence of AI-generated influencers like Emily Hart reflects a broader trend in which synthetic media is used to cultivate followings in specific ideological or interest-based communities. Unlike traditional influencers who build personas through real-life experiences, AI-generated figures are constructed entirely from algorithms, allowing creators to fine-tune appearance, voice, and messaging for maximum engagement. In politically charged environments, these virtual personas can amplify partisan narratives without accountability, blurring the line between authentic advocacy and manufactured outrage.

Researchers note that conservative online spaces in the U.S. have proven particularly receptive to certain types of AI-generated content, especially when it aligns with culturally resonant symbols such as religious imagery, nationalistic rhetoric, or opposition to perceived liberal elites. The financial incentive is significant: audiences in these niches often demonstrate high engagement rates and a willingness to support creators through subscriptions, merchandise, or direct payments for exclusive content.
Still, the use of AI to misrepresent identity raises serious concerns about consent, transparency, and the erosion of trust in digital spaces. When followers believe they are interacting with a real person—especially one sharing personal beliefs or lifestyle content—they may be more susceptible to manipulation, whether commercial or ideological. Regulators and platform administrators are increasingly scrutinizing such practices, though clear guidelines around disclosure for AI-generated influencers remain inconsistent across jurisdictions.
Platform Responses and the Challenge of Detecting Synthetic Identities
Following the exposure of Emily Hart as an AI-generated fabrication, both Instagram and Facebook—owned by Meta—removed the associated accounts for violating policies against inauthentic behavior. Meta’s guidelines prohibit the use of fake accounts to mislead others about identity or origins, particularly when used for deceptive or financially exploitative purposes. However, critics argue that enforcement tends to be reactive, often only occurring after media scrutiny or user reports bring attention to the deception.
Detecting AI-generated influencers remains a technical challenge. While some tools analyze inconsistencies in lighting, facial symmetry, or blinking patterns to identify deepfakes or synthetic images, advanced generative models like those powered by Google’s Gemini or OpenAI’s DALL·E are increasingly capable of producing outputs that evade automated detection. Human moderators may struggle to distinguish between heavily edited real photos and fully synthetic ones, particularly when the content avoids overtly implausible scenarios.
Experts recommend that platforms invest in provenance-tracking technologies, such as digital watermarking or cryptographically signed metadata, to help verify the origin of images and videos. Clearer labeling requirements—similar to those proposed for political ads—could help users discern when they are engaging with AI-generated personas. Until such measures are widely adopted, users are advised to approach ideologically aligned social media accounts with heightened skepticism, especially those that promise exclusive content in exchange for payment.
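To make the provenance idea concrete, here is a minimal, illustrative sketch of the kind of check such tooling performs at its simplest level: scanning an image file's raw bytes for known provenance markers (for example, a C2PA "Content Credentials" manifest identifier or an embedded XMP metadata packet). This is a hypothetical heuristic, not any platform's actual detector; the marker list is an assumption for illustration, and a match only suggests that provenance metadata is present, since such metadata is trivially stripped or forged.

```python
# Heuristic provenance scan (illustrative only): look for byte signatures
# that some imaging tools embed when recording content credentials.
# Absence of a marker proves nothing; presence merely means metadata exists.

PROVENANCE_MARKERS = {
    b"c2pa": "C2PA / Content Credentials manifest",  # provenance standard identifier
    b"<x:xmpmeta": "XMP metadata packet",            # may record the creating tool
}

def scan_provenance(data: bytes) -> list[str]:
    """Return labels for any known provenance markers found in the raw bytes."""
    return [label for marker, label in PROVENANCE_MARKERS.items() if marker in data]

if __name__ == "__main__":
    # Fake JPEG-like payload with an embedded XMP packet, for demonstration.
    sample = b"\xff\xd8\xff\xe1" + b"<x:xmpmeta>...</x:xmpmeta>" + b"\xff\xd9"
    print(scan_provenance(sample))  # -> ['XMP metadata packet']
```

Real-world verification is far more involved (parsing JPEG APP segments and validating cryptographic signatures), which is precisely why experts argue it should be handled by platforms rather than left to individual users.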
Why This Case Matters for the Future of Digital Identity
The Sam and Emily Hart case illustrates how accessible AI tools are lowering the barrier to entry for sophisticated online deception. What once required teams of graphic designers, photographers, and marketers can now be accomplished by a single individual with a laptop and access to generative models. This democratization of creation carries both creative potential and significant risks, particularly when financial incentives intersect with ideological vulnerability.

For medical students like Sam—facing high educational costs, uncertain immigration prospects, and pressure to succeed—such schemes may appear as viable shortcuts to financial stability. Yet the long-term consequences include reputational harm, platform bans, and potential legal exposure if fraud or harassment can be proven. The normalization of AI-driven impersonation threatens to undermine the credibility of genuine voices in online discourse, making it harder for audiences to discern truth from fabrication.
As generative AI continues to evolve, stakeholders across technology, journalism, education, and policy must collaborate to establish ethical frameworks that balance innovation with accountability. Transparency about AI use, stronger platform safeguards, and media literacy initiatives will be essential in mitigating harm while preserving the benefits of these powerful tools.
At present, no official investigations or legal actions against Sam have been publicly reported. The primary sources for this story remain his interviews with WIRED and the New York Post, supplemented by archived reviews of the Emily Hart content prior to its removal. Readers seeking updates on platform policies regarding AI-generated content or developments in AI ethics are encouraged to consult official blogs from Meta, Google, and reputable technology news outlets such as MIT Technology Review or Ars Technica.