AI and Job Displacement: The Hidden Costs of Automation

Gary Grossman, Edelman 2025-08-10 20:35:00


Cognitive migration is underway. The station is crowded. Some have boarded while others hesitate, unsure whether the destination justifies the departure.

Future of work expert and Harvard University Professor Christopher Stanton commented recently that the uptake of AI has been tremendous and observed that it is an “extraordinarily fast-diffusing technology.” That speed of adoption and impact is a critical part of what differentiates the AI revolution from previous technology-led transformations, like the PC and the internet. Demis Hassabis, CEO of Google DeepMind, went further, predicting that AI could be “10 times bigger than the Industrial Revolution, and maybe 10 times faster.”

Intelligence, or at least thinking, is increasingly shared between people and machines. Some people have begun to regularly use AI in their workflows. Others have gone further, integrating it into their cognitive routines and creative identities. These are the “willing,” including the consultants fluent in prompt design, the product managers retooling systems and those building their own businesses that do everything from coding to product design to marketing.

For them, the terrain feels new but navigable. Exciting, even. But for many others, this moment feels strange, and more than a little unsettling. The risk they face is not just being left behind. It is not knowing how, when or whether to invest in AI, in a future that seems highly uncertain and in which their place is hard to imagine. That is the double risk of AI readiness, and it is reshaping how people interpret the pace, promises and pressure of this transition.



Is it real?

Across industries, new roles and teams are forming, and AI tools are reshaping workflows faster than norms or strategies can keep up. But the meaning is still hazy, the strategies unclear. The end game, if there is one, remains uncertain. Yet the pace and scope of change feel portentous. Everyone is being told to adapt, but few know exactly what that means or how far the changes will go. Some AI industry leaders claim huge changes are coming, and soon, with superintelligent machines emerging possibly within a few years.

But maybe this AI revolution will go bust, as others have before, with another “AI winter” to follow. There have been two notable winters. The first was in the 1970s, brought about by computational limits. The second began in the late 1980s after a wave of unmet expectations, with high-profile failures and under-delivery of “expert systems.” These winters were characterized by a cycle of lofty expectations followed by profound disappointment, leading to significant reductions in funding and interest in AI.

Should the excitement around AI agents today mirror the failed promise of expert systems, this could lead to another winter. However, there are major differences between then and now. Today, there is far greater institutional buy-in, consumer traction and cloud computing infrastructure compared to the expert systems era of the 1980s. There is no guarantee that a new winter will not emerge, but if the industry fails this time, it will not be for lack of money or momentum. It will be because trust and reliability broke first.

A major retrenchment occurred in 1988 after the AI industry failed to meet its promises. (Source: The New York Times)

Cognitive migration has started

If “the great cognitive migration” is real, this remains the early part of the journey. Some have boarded the train while others still linger, unsure about whether or when to get on board. Amidst the uncertainty, the atmosphere at the station has grown restless, like travelers sensing a trip itinerary change that no one has announced.

Most people have jobs, but they wonder about the degree of risk they face. The value of their work is shifting. A quiet but mounting anxiety hums beneath the surface of performance reviews and company town halls.

Already, AI can accelerate software development by 10 to 100X, generate the majority of client-facing code and compress project timelines dramatically. Managers are now able to use AI to create employee performance evaluations. Even classicists and archaeologists have found value in AI, having used the technology to understand ancient Latin inscriptions.

The “willing” have an idea of where they are going and may find traction. But for the “pressured,” the “resistant” and even those not yet touched by AI, this moment feels like something between anticipation and grief. These groups have started to grasp that they may not be staying in their comfort zones for long.

For many, this is not just about tools or a new culture, but whether that culture has space for them at all. Waiting too long is akin to missing the train and could lead to long-term job displacement. Even those I have spoken with who are senior in their careers and have begun using AI wonder if their positions are threatened.

The narrative of opportunity and upskilling hides a more uncomfortable truth. For many, this is not a migration. It is a managed displacement. Some workers are not choosing to opt out of AI. They are discovering that the future being built does not include them. Belief in the tools is different from belonging in the system those tools are reshaping. And without a clear path to participate meaningfully, “adapt or be left behind” begins to sound less like advice and more like a verdict.

These tensions are precisely why this moment matters. There is a growing sense that work, as many have known it, is beginning to recede. The signals are coming from the top. Microsoft CEO Satya Nadella acknowledged as much in a July 2025 memo following a reduction in force, noting that the transition to the AI era “might feel messy at times, but transformation always is.” But there is another layer to this unsettling reality: The technology driving this urgent transformation remains fundamentally unreliable.

The power and the glitch: Why AI still cannot be trusted

And yet, for all the urgency and momentum, this increasingly pervasive technology itself remains glitchy, limited, strangely brittle and far from dependable. This raises a second layer of doubt, not only about how to adapt, but about whether the tools we are adapting to can deliver. Perhaps these shortcomings should not be a surprise, considering that it was only several years ago that the output from large language models (LLMs) was barely coherent. Now, however, it is like having a PhD in your pocket; the idea of on-demand ambient intelligence, once science fiction, is almost realized.

Beneath their polish, though, chatbots built atop these LLMs remain fallible, forgetful and frequently overconfident. They still hallucinate, meaning that we cannot entirely trust their output. AI can answer with confidence, but not accountability. This is probably a good thing, as our knowledge and expertise are still needed. They also lack persistent memory and have difficulty carrying a conversation forward from one session to another.

They can also get lost. Recently, I had a session with a leading chatbot, and it answered a question with a complete non sequitur. When I pointed this out, it responded again off-topic, as if the thread of our conversation had simply vanished.

They also do not learn, at least not in any human sense. Once a model is released, whether by Google, Anthropic, OpenAI or DeepSeek, its weights are frozen. Its “intelligence” is fixed. Instead, continuity of a conversation with a chatbot is limited to the confines of its context window, which is, admittedly, quite large. Within that window and conversation, the chatbots can absorb knowledge and make connections that serve as learning in the moment, and they appear increasingly like savants.
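The statelessness and context-window limits described above can be made concrete with a short sketch. This is purely illustrative: `call_model` is a stub standing in for any chat-model API (real vendor APIs differ), and the word-count “tokenizer” and 50-token window are toy assumptions. The point is that the model keeps no memory between calls; continuity exists only because the caller resends the transcript, trimmed to fit the window.

```python
# Illustrative sketch: chat "memory" is just the transcript the caller resends
# each turn, truncated to the context window. Nothing persists inside the model.

MAX_CONTEXT_TOKENS = 50  # toy context-window budget

def count_tokens(message):
    # Crude stand-in tokenizer: one token per whitespace-separated word.
    return len(message["content"].split())

def trim_to_window(history, budget=MAX_CONTEXT_TOKENS):
    """Drop the oldest turns until the transcript fits the window.
    Whatever is trimmed is simply gone: the model has no other record of it."""
    trimmed = list(history)
    while trimmed and sum(count_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)
    return trimmed

def call_model(messages):
    # Stub model: deterministically reports how many messages it can "see".
    return f"visible messages: {len(messages)}"

def chat_turn(history, user_text):
    """One conversational turn: append the user message, resend the
    (trimmed) transcript, record the reply."""
    history.append({"role": "user", "content": user_text})
    reply = call_model(trim_to_window(history))
    history.append({"role": "assistant", "content": reply})
    return reply
```

Once older turns fall outside the window, the model has no trace of them, which is why long conversations appear to “forget” their beginnings even though nothing about the model itself changed.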

These gifts and flaws add up to an intriguing, beguiling presence. But can we trust it? Surveys such as the 2025 Edelman Trust Barometer show that AI trust is divided. In China, 72% of people express trust in AI. But in the U.S., that number drops to 32%. This divergence underscores how public faith in AI is shaped as much by culture and governance as by technical capability. If AI did not hallucinate, if it could remember, if it learned, if we understood how it worked, we would likely trust it more. But trust in the AI industry itself remains elusive. There are widespread fears that there will be no meaningful regulation of AI technology, and that ordinary people will have little say in how it is developed or deployed.

Without trust, will this AI revolution flounder and bring about another winter? And if so, what happens to those who have invested time, energy and their careers? Will those who have waited to embrace AI be better off for having done so? Will cognitive migration be a flop?

Some notable AI researchers have warned that AI in its current form — based primarily on the deep learning neural networks upon which LLMs are built — will fall short of optimistic projections. They claim that additional technical breakthroughs will be needed for this approach to advance much further. Others do not buy into the optimistic AI projections. Novelist Ewan Morrison views the potential of superintelligence as a fiction dangled to attract investor funding. “It’s a fantasy,” he said, “a product of venture capital gone nuts.”

Perhaps Morrison’s skepticism is warranted. However, even with their shortcomings, today’s LLMs are already demonstrating huge commercial utility. If the exponential progress of the last few years stops tomorrow, the ripples from what has already been created will have an impact for years to come. But beneath this movement lies something more fragile: the reliability of the tools themselves.

The gamble and the dream

For now, exponential advances continue as companies pilot and increasingly deploy AI. Whether driven by conviction or fear of missing out, the industry is determined to move forward. It could all fall apart if another winter arrives, especially if AI agents fail to deliver. Still, the prevailing assumption is that today’s shortcomings will be solved through better software engineering. And they might be. Actually, they probably will, at least to a degree.

The bet is that the technology will work, that it will scale and that the disruption it creates will be outweighed by the productivity it enables. Success in this venture assumes that what we lose in human nuance, value and meaning will be made up for in reach and efficiency. This is the gamble we are making. And then there is the dream: AI will become a source of abundance widely shared, will elevate rather than exclude, and will expand access to intelligence and opportunity rather than concentrate it.

The unease lies in the gap between the two. We are moving forward as if taking this gamble will guarantee the dream. It is the hope that acceleration will land us in a better place, and the faith that it will not erode the human elements that make the destination worth reaching. But history reminds us that even successful bets can leave many behind. The “messy” transformation now underway is not just an unavoidable side effect. It is the direct result of speed overwhelming human and institutional capacity to adapt effectively and with care. For now, cognitive migration continues, as much on faith as on evidence.

The challenge is not just to build better tools, but to ask harder questions about where they are taking us. We are not just migrating to an unknown destination; we are doing it so fast that the map is changing while we run, moving across a landscape that is still being drawn. Every migration carries hope. But hope, unexamined, can be risky. It is time to ask not just where we are going, but who will get to belong when we arrive.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
