As Meta prepares to dismantle its long-standing third-party fact-checking program across Facebook, Instagram, and Threads, a new wave of apps and tools is emerging to fill the gap—promising users more control over the information they see online. The shift comes at a time when misinformation and polarized discourse have reached new heights, raising critical questions about who will now police the truth in the digital public square.
Announced in January 2025 by Meta CEO Mark Zuckerberg, the decision to end fact-checking partnerships—citing “political bias” among fact-checkers and the trust he said they had destroyed—has sent shockwaves through the tech and media worlds. The move aligns with broader industry trends, where platforms are increasingly prioritizing user autonomy over centralized content moderation. But with Meta’s fact-checking labels disappearing, what will replace them? And how can users trust the information they encounter daily?
Enter techbook, a new app designed to give individuals granular control over which posts and accounts they see on social media. Unlike traditional fact-checking tools, which rely on external verification, techbook takes a user-driven approach: it allows individuals to flag content as misleading, false, or unreliable—and then uses algorithmic curation to suppress or hide that content from their feeds. The app’s creators argue this model is more transparent and less prone to the perceived biases of third-party fact-checkers.
But is this the solution we’ve been waiting for? Or does it simply shift the burden of truth onto an unregulated, decentralized system? Below, we break down how techbook works, its potential impact on misinformation, and what it means for the future of social media.
How techbook Works: A User-Centric Approach to Misinformation
Unlike Meta’s fact-checking program—which relied on partnerships with organizations like Poynter’s International Fact-Checking Network and Snopes—techbook operates on a crowdsourced, feedback-driven model. Here’s how it functions:
- Flagging System: Users can report posts, accounts, or even entire domains as containing false or misleading information. Flags are categorized (e.g., “false,” “exaggerated,” “opinion posed as fact”) to refine the algorithm’s response.
- Algorithmic Filtering: The app’s AI analyzes flagged content and adjusts visibility based on community consensus. Highly disputed posts may be deprioritized or hidden entirely from users who have flagged similar content in the past.
- Transparency Dashboard: Users can view aggregated data on which accounts or topics have been most frequently flagged in their network, fostering accountability.
- Cross-Platform Integration: While initially launching as a standalone app, techbook plans to integrate with major social media platforms (including Facebook and Instagram) to sync flagging data across users’ feeds.
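The mechanics above—categorized flags, consensus-based scoring, and visibility decisions—can be sketched in a few dozen lines. The snippet below is a hypothetical illustration only: techbook has not published its algorithm, and every name, category, and threshold here is an assumption, not the app’s actual implementation.

```python
from collections import Counter
from dataclasses import dataclass, field

# Assumed flag categories, mirroring the examples the app describes.
FLAG_CATEGORIES = {"false", "exaggerated", "opinion posed as fact"}

@dataclass
class Post:
    post_id: str
    views: int = 0
    flags: Counter = field(default_factory=Counter)  # category -> count

    def flag(self, category: str) -> None:
        """Record one user flag in a known category."""
        if category not in FLAG_CATEGORIES:
            raise ValueError(f"unknown flag category: {category}")
        self.flags[category] += 1

    def dispute_ratio(self) -> float:
        """Share of viewers who flagged the post, across all categories."""
        if self.views == 0:
            return 0.0
        return sum(self.flags.values()) / self.views

def visibility(post: Post, hide_threshold: float = 0.3,
               demote_threshold: float = 0.1) -> str:
    """Map community consensus to a feed action.

    Thresholds are illustrative guesses: heavily disputed posts are
    hidden, moderately disputed ones are deprioritized.
    """
    ratio = post.dispute_ratio()
    if ratio >= hide_threshold:
        return "hidden"
    if ratio >= demote_threshold:
        return "deprioritized"
    return "visible"

# Example: 100 viewers, 15 of whom flag the post as "false".
post = Post("p1", views=100)
for _ in range(15):
    post.flag("false")
print(visibility(post))  # -> "deprioritized" (15% dispute ratio)
```

A real system would have to weight flags by flagger history and detect coordinated flagging—the manipulation risk discussed later in this article—rather than count raw votes as this sketch does.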
The app’s founders emphasize that techbook is not about censorship but about choice. “We’re not here to tell you what’s true,” they state in a public manifesto. “We’re here to give you the tools to decide for yourself—and to see what your community agrees is credible.”
Why Now? The Collapse of Meta’s Fact-Checking Program
Meta’s decision to end its fact-checking program is not an isolated move. It reflects a broader industry reckoning with the limitations of centralized moderation, especially in politically charged environments. Zuckerberg’s January 2025 announcement—made in a video addressing “censorship” concerns—marked a pivot away from third-party oversight, arguing that fact-checkers had “destroyed more trust than they’ve created.”

Critics, however, warn that this shift could exacerbate the spread of misinformation, particularly during election cycles or public health crises. A 2024 study by the Pew Research Center found that 56% of U.S. adults believe social media platforms “do a poor job” of identifying false information, a sentiment likely to grow as labels disappear.
Techbook’s rise coincides with this vacuum. Its user-driven model aligns with growing public skepticism toward institutional fact-checking—especially among younger audiences, who distrust traditional media at record levels. According to a 2025 survey by Edelman Trust Barometer, only 37% of Gen Z respondents trust social media platforms to provide accurate information, down from 52% in 2020.
Who Stands to Gain—and Who Loses?
The implications of techbook’s approach are far-reaching, affecting users, platforms, and even democratic processes. Here’s a breakdown of the key stakeholders:
Users
For individuals weary of algorithmic bias or frustrated by one-size-fits-all moderation, techbook offers a personalized solution. By allowing users to curate their own “truth filters,” the app could reduce exposure to polarizing content—though it also risks creating echo chambers where misinformation thrives unchecked.
Social Media Platforms
Platforms like Meta may see techbook as a way to offload responsibility for content moderation. By integrating techbook’s flagging system, they could argue they’re empowering users while sidestepping legal and reputational risks. However, this could also lead to a fragmented moderation landscape, where different apps apply different standards.
Fact-Checking Organizations
Traditional fact-checkers may view techbook as a threat, given its potential to undermine their authority. Organizations like PolitiFact and AFP Fact Check have long relied on platform partnerships to reach audiences. If users increasingly trust crowdsourced flags over professional verification, these organizations could face declining influence.
Democracy and Public Health
The biggest wild card is the impact on societal trust. During elections or health emergencies, misinformation can have real-world consequences. A user-driven system like techbook could either democratize truth—or leave it vulnerable to manipulation by bad actors. For example, coordinated inauthentic behavior (e.g., bots or troll farms) could exploit the app’s flagging system to suppress legitimate news while amplifying falsehoods.
What Happens Next? The Road Ahead for techbook
Techbook is still in its early stages, with a beta launch expected in 2026. Its success will depend on several factors:

- Adoption Rates: Will enough users engage with the flagging system to create meaningful filters? Or will it become another abandoned app in a sea of failed moderation tools?
- Platform Integration: Can techbook secure partnerships with major social media companies without becoming a tool for censorship—or worse, a vector for disinformation?
- Regulatory Scrutiny: Governments and watchdogs are likely to monitor techbook closely, especially if it’s used to suppress political speech or manipulate public opinion.
- Algorithmic Fairness: Will the AI behind techbook’s filtering be transparent enough to avoid reinforcing biases (e.g., suppressing minority viewpoints or amplifying fringe theories)?
One thing is clear: the era of top-down fact-checking is fading. Whether techbook’s user-centric model becomes the new standard—or a cautionary tale—will hinge on its ability to balance autonomy with accountability.
Key Takeaways
- Meta’s fact-checking program ends in 2025, removing a key tool for combating misinformation on Facebook, Instagram, and Threads.
- techbook offers a crowdsourced alternative, letting users flag and hide misleading content based on community consensus.
- User-driven moderation risks fragmentation, with no single standard for what counts as “true” or “false.”
- Platforms may adopt techbook to avoid liability, but this could lead to inconsistent moderation across apps.
- Democracy and public health depend on trust, making it critical to monitor how techbook’s model evolves.
What You Can Do Now
If you’re concerned about misinformation on social media, here’s how to stay informed and engaged:
- Follow official updates from fact-checking networks and trustworthy research organizations.
- Use source-rating tools like NewsGuard to evaluate outlets before sharing.
- Participate in beta tests for apps like techbook—but critically assess their limitations.
- Advocate for transparency in algorithms, pushing platforms to disclose how content is ranked and filtered.
The future of truth online is being shaped right now. Whether it’s through apps like techbook, regulatory pressure, or new industry standards, the conversation is far from over. What’s your take? Will user-driven moderation work—or will we need a different approach entirely?
Share your thoughts in the comments, and don’t forget to follow World Today Journal for updates on this developing story.