Google Cloud CEO Thomas Kurian Announces Apple Partnership at Google Cloud Next 2026 in Las Vegas

Gemini will power the next generation of Siri, Apple’s voice assistant, according to statements made by Google Cloud CEO Thomas Kurian during Google Cloud Next ’26 in Las Vegas on April 22, 2026. The announcement confirms a strategic collaboration between Google and Apple to integrate Google’s Gemini family of AI models into Siri’s underlying architecture, marking a significant shift in how Apple approaches on-device and cloud-based AI capabilities for its ecosystem.

Kurian highlighted the partnership during his keynote address, where he outlined Google Cloud’s vision for the “Agentic Enterprise” and unveiled fresh infrastructure and platform innovations designed to support scalable AI agent deployment. While specific technical details about the integration were not disclosed in the public remarks, the confirmation aligns with broader industry trends toward cross-platform AI collaboration and follows Apple’s recent efforts to enhance Siri’s contextual understanding and task execution through advanced language models.

The integration of Gemini into Siri represents one of the most notable developments in consumer AI since the launch of large language models, potentially enabling more natural, multi-turn conversations and improved handling of complex requests across Apple devices. Industry analysts have noted that such a move could address longstanding user feedback about Siri’s limitations compared to competing assistants, particularly in areas involving contextual awareness and third-party app interactions.

Google Cloud Next ’26 featured multiple announcements related to AI infrastructure and enterprise AI adoption, including the general availability of eighth-generation Tensor Processing Units (TPUs), the introduction of the Virgo Network as a scale-out AI data center fabric, and the launch of the Gemini Enterprise Agent Platform. These innovations are intended to support the training, deployment, and management of large-scale AI agents across hybrid cloud environments.

According to Kurian, Google’s AI models now process over 16 billion tokens per minute via direct API use by customers, up from 10 billion per minute in the previous quarter. This growth reflects increasing enterprise adoption of generative AI tools and underscores the scaling demands that next-generation hardware like the Ironwood and Axion processors are designed to meet.
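To put those throughput figures in perspective, a quick back-of-the-envelope calculation (a minimal Python sketch using only the numbers cited in the keynote) shows the quarter-over-quarter growth rate and what the current per-minute rate implies per day:

```python
# Sanity-check the throughput figures cited in the keynote:
# 16 billion tokens/minute now, up from 10 billion/minute last quarter.
prev_rate = 10e9   # tokens per minute, previous quarter
curr_rate = 16e9   # tokens per minute, current quarter

growth = (curr_rate - prev_rate) / prev_rate   # quarter-over-quarter growth
daily = curr_rate * 60 * 24                    # tokens per day at the current rate

print(f"quarter-over-quarter growth: {growth:.0%}")  # 60%
print(f"tokens per day: {daily:.3e}")                # 2.304e+13 (~23 trillion)
```

At the current rate, customers are pushing roughly 23 trillion tokens through the API every day, which illustrates why the keynote framed next-generation TPUs and data center fabric as a scaling necessity rather than a luxury.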

The Agentic Data Cloud, another highlight of the event, aims to close the gap between AI reasoning and action by enabling agents to securely access and act on organizational data across clouds. This includes a cross-cloud Lakehouse and Knowledge Catalog, which together provide a unified foundation for data governance and real-time agent interactions.

In addition to infrastructure updates, Google introduced Agentic Defense, a security framework combining Google’s Threat Intelligence and Security Operations with Wiz’s Cloud and AI Security Platform to prevent, detect, and respond to threats targeting AI workloads. The company also outlined Agentic Taskforce initiatives focused on enhancing customer experience through AI in Google Workspace and Gemini Enterprise applications.

Sundar Pichai, CEO of Google and Alphabet, delivered a separate address at the event, emphasizing the momentum behind Google’s AI strategy and the role of cloud infrastructure in enabling the next wave of innovation. He noted that nearly 75% of Google Cloud customers are now using AI products in production, with over 330 customers processing more than a trillion tokens each in the past year.

The collaboration between Google and Apple on AI marks a rare instance of deep technical cooperation between the two tech giants, which have historically competed in areas such as mobile operating systems, voice assistants, and ecosystem services. While both companies continue to develop proprietary AI models — Apple with its own on-device frameworks and Google with Gemini — the partnership suggests a pragmatic approach to overcoming current limitations in assistant performance through shared advancements in model efficiency and reasoning capabilities.

Neither Google nor Apple has released a joint technical whitepaper or developer documentation detailing the specifics of the Gemini-Siri integration as of the close of Google Cloud Next ’26. However, both companies have affirmed their commitment to advancing AI experiences that prioritize user privacy, contextual relevance, and seamless cross-device functionality.

Apple’s Worldwide Developers Conference (WWDC) 2026, scheduled for June 2026, is expected to provide further insight into upcoming Siri enhancements and may include additional details about the role of third-party AI models in future iOS, iPadOS, and macOS updates. Until then, the confirmation from Google Cloud Next ’26 remains the most authoritative public statement on the matter.

For developers and enterprise users interested in building AI agents using Google’s latest tools, the Gemini Enterprise Agent Platform offers a suite of features including Agent Designer, Inbox for activity management, long-running agent support, and Skills-based orchestration. Documentation and access to preview environments are available through Google Cloud’s official portal.

As the AI landscape continues to evolve, collaborations like the one between Google and Apple may become more common, particularly where specialized expertise and infrastructure can accelerate progress toward more capable, trustworthy, and widely accessible intelligent systems.

What are your thoughts on the growing trend of cross-platform AI partnerships? Share your perspective in the comments below, and feel free to spread the word if you found this overview informative.