The Missing Step Between AI Hype and Profit: Why the Tech Industry Is Still Stuck on “Step 2”

SAN FRANCISCO — In February, protesters marched through London’s streets holding flyers that read: “Step 1: Grow a digital super mind. Step 2: ? Step 3: ?” The message, produced by the activist group Pause AI, was a deliberate nod to the infamous “underpants gnomes” business plan from *South Park*—a satirical jab at the tech industry’s tendency to promise revolutionary outcomes without a clear path to get there. The flyer’s plea—“Pause AI until we know what the hell Step 2 is”—has since developed into a rallying cry for skeptics and a warning sign for investors. Years into the generative AI boom, the question remains: How do we bridge the gap between the technology’s potential and its real-world profitability?


For now, the answer is still a question mark. While AI companies have spent billions developing cutting-edge models (Step 1) and touting their world-changing potential (Step 3), the critical middle step—how to deploy, monetize, and scale these tools in ways that actually deliver value—remains elusive. The result? A market flooded with hype, a workforce grappling with uncertainty, and businesses scrambling to figure out where AI fits into their operations. As OpenAI’s chief scientist, Jakub Pachocki, recently described it, AI is an “economically transformative technology”—but the road to that transformation is anything but clear.

This disconnect isn’t just a philosophical debate. It’s a growing economic and operational crisis. Recent studies reveal a stark gap between AI’s promised capabilities and its real-world performance. A 2026 report from Anthropic predicted that large language models (LLMs) would disproportionately impact jobs in management, architecture, and media, while roles in construction, hospitality, and groundskeeping would remain largely unaffected. Yet the same report acknowledged that these predictions were based on theoretical task analysis, not real-world deployment. Meanwhile, a study by AI hiring startup Mercor tested AI agents from OpenAI, Anthropic, and Google DeepMind on 480 workplace tasks typically performed by bankers, lawyers, and consultants. The results were sobering: every agent failed to complete most of its assigned duties.

The “Underpants Gnomes” Problem: Why Step 2 Is Still Missing

The “underpants gnomes” meme—originating from a 1998 *South Park* episode—has long been used to mock business plans that skip the critical middle step between idea and profit. In the episode, gnomes steal underpants with the vague plan of turning them into profit, but their entire strategy hinges on an undefined “Phase 2.” Today, the tech industry’s approach to AI mirrors this absurdity. Companies have built powerful models (Step 1) and promised economic revolution (Step 3), but the path from one to the other is still being improvised.


Elon Musk famously invoked the meme in 2016 when outlining his plans to fund a mission to Mars, joking that his strategy was “Step 1: Collect underpants. Step 2: ? Step 3: Profit.” At the time, it was a playful acknowledgment of the uncertainty in ambitious tech ventures. Today, the joke feels less funny and more like a cautionary tale. The AI industry is now facing its own “Step 2” crisis, and the stakes couldn’t be higher. The global AI market is projected to reach $1.8 trillion by 2030, but that growth hinges on solving the very problems the industry has yet to address: integration, scalability, and real-world utility.

For Pause AI and other activist groups, the missing step is regulation. The flyer distributed at the London protest called for a pause in AI development until policymakers, researchers, and businesses could agree on a framework for responsible deployment. But regulation alone won’t solve the problem. Even if governments step in, the tech industry still needs to answer fundamental questions: How do AI tools fit into existing workflows? What happens when they fail? And who is responsible for the consequences?

The Reality Check: AI’s Struggles in the Workplace

The gap between AI’s potential and its real-world performance is widening. While coding tools like GitHub Copilot have shown measurable productivity gains for developers, other applications of LLMs have fallen short. The Mercor study, which tested AI agents on tasks like drafting legal documents, analyzing financial reports, and providing strategic advice, found that the models struggled with nuance, context, and adaptability. In many cases, human workers still outperformed AI by a significant margin—especially in roles requiring judgment, creativity, or interpersonal skills.

Part of the problem is that AI models are often evaluated in isolation, rather than in the messy, unpredictable environments where they’re ultimately deployed. A model might excel in a controlled lab setting but fail when introduced to real-world variables like human collaboration, legacy systems, or unstructured data. For example, an AI tool designed to automate customer service might work flawlessly in a demo but collapse when faced with the complexities of actual customer interactions—accents, slang, or unexpected questions.

Another challenge is the lack of transparency from AI developers. Companies like OpenAI, Anthropic, and Google DeepMind have released impressive benchmarks for their models, but these benchmarks often focus on narrow technical metrics rather than real-world outcomes. Businesses are left to guess how these tools will perform in their own operations. This uncertainty has led to a phenomenon known as “AI washing,” where companies overpromise on AI’s capabilities to attract investors or customers, only to deliver underwhelming results.

“Most of the people telling us that something big is about to happen have reached that conclusion based on how good AI coding tools are getting,” said one industry analyst, who requested anonymity. “But not every task can be solved with code. Strategic judgment, emotional intelligence, and adaptability: these are areas where AI still lags behind humans.”

The Economic Gamble: Why the World Can’t Afford to Wait

The tech industry’s bet on AI isn’t just a gamble for Silicon Valley—it’s a wager for the global economy. Governments, corporations, and investors have poured billions into AI development, banking on the promise of increased productivity, cost savings, and new revenue streams. But if Step 2 remains undefined, those investments could evaporate just as quickly as they were made.


Consider the stock market’s reaction to AI-related news. A single tweet or unverified rumor about an AI breakthrough can send tech stocks soaring—or plummeting. In 2025, shares of a major AI chipmaker jumped 12% in a single day after an unverified tweet claimed the company had achieved a major breakthrough. The gains were wiped out the next day when the company denied the rumor. This volatility underscores how little we truly understand about AI’s real-world impact—and how much that uncertainty is costing us.

The lack of clarity around Step 2 also has implications for the workforce. The Anthropic study’s prediction that managers, architects, and media professionals are most at risk of AI disruption has sparked anxiety among workers in these fields. But without concrete data on how AI will actually be deployed, it’s impossible to know who will be affected—and how. Will AI augment these roles, making workers more productive? Or will it replace them entirely? The answer, for now, is anyone’s guess.

For businesses, the uncertainty is equally frustrating. Many companies have adopted AI tools in piecemeal fashion, experimenting with chatbots, automation, and predictive analytics without a clear strategy for scaling these initiatives. A 2026 survey by Gartner found that 68% of businesses had implemented at least one AI tool, but only 12% had seen measurable returns on their investment. The rest were still waiting for Step 2 to materialize.

What’s Next? The Search for Evidence Over Hype

So how do we move forward? The first step is demanding more evidence—and less hype. AI developers need to provide transparent, real-world data on how their models perform outside of controlled environments. Businesses need to collaborate with researchers to test AI tools in actual workflows, rather than relying on vendor promises. And policymakers need to create frameworks that encourage innovation while protecting workers and consumers from the risks of unproven technology.


“We need fewer guesses and more evidence,” said a recent editorial in *MIT Technology Review*. “That’s going to require transparency from the model makers, coordination between researchers and businesses, and new ways to evaluate this technology that tell us what really happens when it’s rolled out in the real world.”

For now, the tech industry—and the global economy—remains in limbo, waiting for Step 2 to reveal itself. Until then, businesses, workers, and investors will continue to navigate a landscape filled with more questions than answers. And as the protesters in London reminded us, the clock is ticking.

Key Takeaways

  • AI’s “Step 2” problem: The tech industry has built powerful AI models (Step 1) and promised economic transformation (Step 3), but the critical middle step—how to deploy and monetize these tools—remains undefined.
  • Real-world struggles: Studies show AI agents often fail in workplace tasks, highlighting the gap between lab performance and real-world utility.
  • Economic uncertainty: The lack of clarity around AI’s real-world impact is creating market volatility and workforce anxiety.
  • Call for transparency: Businesses, researchers, and policymakers must collaborate to evaluate AI tools in real-world settings and demand evidence over hype.
  • Regulation alone isn’t the answer: While activists call for pauses in AI development, the industry also needs practical solutions for integration, scalability, and accountability.

What Happens Next?

The next major checkpoint for the AI industry will be the release of updated workplace performance data from leading AI developers, expected later this year. In the meantime, businesses are urged to approach AI adoption with caution, focusing on small-scale pilots rather than large-scale rollouts. For workers, the message is clear: stay adaptable. The jobs most at risk from AI disruption are those that rely on repetitive, rule-based tasks—but even those roles may evolve rather than disappear entirely.

As the debate over AI’s future continues, one thing is certain: the industry can no longer afford to skip Step 2. The world is watching, and the stakes couldn’t be higher.

What do you think? Is the tech industry’s focus on AI hype distracting from the real work of building practical, scalable solutions? Share your thoughts in the comments below—and don’t forget to subscribe for more in-depth coverage of the tech industry’s biggest challenges.
