AI Chip Company Revisits IPO Plans After Scrapping 2024 Filing

Cerebras Systems, the Silicon Valley-based artificial intelligence chipmaker, has filed to go public on the Nasdaq under the ticker symbol “CBRS,” marking its second attempt at an initial public offering after withdrawing its paperwork in late 2024. The company, which designs wafer-scale engines optimized for AI workloads, announced the filing on Friday, April 17, 2026, following a period of significant financial turnaround and strategic partnerships with major AI developers.

According to its S-1 filing with the U.S. Securities and Exchange Commission, Cerebras reported $510 million in revenue for 2025, representing a 76% increase from $290 million in 2024. The company swung from a net loss of $485 million in 2024 to net income of $87.9 million in 2025, marking its first annual profit after years of operating losses. This financial improvement coincides with expanded cloud service offerings and reduced reliance on a single major customer.
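As a quick sanity check on those growth figures, the year-over-year rate follows directly from the totals reported in the filing (a minimal sketch in Python; the dollar amounts are the ones quoted above):

```python
# Back-of-the-envelope check of the growth rate, using the revenue
# figures quoted from the S-1 above.
revenue_2024 = 290_000_000  # USD, as reported
revenue_2025 = 510_000_000  # USD, as reported

yoy_growth = (revenue_2025 - revenue_2024) / revenue_2024
print(f"Year-over-year growth: {yoy_growth:.1%}")  # ~75.9%, i.e. the ~76% cited
```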

The filing reveals a notable shift in Cerebras’ customer concentration. Whereas Microsoft-backed G42 accounted for 87% of revenue in the first half of 2024, that share declined to 24% in 2025. Instead, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), a public research institution in Abu Dhabi, emerged as the company’s largest customer, contributing 62% of Cerebras’ 2025 revenue. This change reflects the company’s evolving business model, which now emphasizes operating its AI chips in its own data centers as a cloud service rather than selling hardware outright.

Cerebras’ technology centers on its Wafer-Scale Engine (WSE) series, with the latest WSE-3 chip described in the filing as 58 times larger than Nvidia’s B200 graphics processing unit. The company claims this architecture delivers superior bandwidth and enables inference operations at “extremely fast speeds,” positioning it as a direct competitor to Nvidia in the AI inference market. Inference—the process of running trained AI models to make predictions—has become the dominant workload in enterprise AI deployments, shifting focus from the training phase that previously drove demand for large GPU clusters.
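For readers less familiar with the distinction, the toy sketch below illustrates what inference means in practice: a forward pass through already-trained, frozen weights, with no gradient computation or weight updates. The model here is invented for illustration and has nothing to do with Cerebras’ software stack.

```python
import numpy as np

# Toy illustration of inference: a forward pass through a tiny two-layer
# network whose weights are treated as already trained and frozen.
# (Invented example; real deployments serve far larger models.)
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)  # stand-in "trained" weights
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)

def infer(x):
    """Run the frozen model on input x: no gradients, no weight updates."""
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden layer with ReLU activation
    return h @ W2 + b2                # output logits

print(infer(rng.standard_normal((1, 4))))
```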

The IPO filing also discloses a major strategic agreement with OpenAI. In January 2026, Cerebras announced plans to provide up to 750 megawatts of dedicated computing power to OpenAI, with infrastructure rollout scheduled between 2026 and 2028. The deal, valued at over $20 billion, includes an option for OpenAI to purchase additional capacity. As part of the arrangement, OpenAI extended a $1 billion loan to Cerebras and received a warrant to buy company stock, underscoring the depth of the partnership between the AI chipmaker and the leading generative AI developer.

Lead underwriters for the offering include Morgan Stanley, Citigroup, Barclays, and UBS Investment Bank. Cerebras previously raised $1 billion in a private placement in February 2026 at a post-money valuation of $23 billion. The company’s remaining performance obligations totaled $24.6 billion as of December 31, 2025, with expectations to recognize 15% of that amount in 2026 and 2027, reflecting long-term contracted revenue from cloud services and enterprise agreements.
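Taken at face value, that recognition schedule implies roughly $3.7 billion of the backlog converting to revenue across 2026 and 2027. A simple reading of the disclosed figures (actual timing will depend on deployment schedules):

```python
# Straightforward reading of the disclosed backlog figures.
remaining_performance_obligations = 24_600_000_000  # USD, as of Dec 31, 2025
near_term_share = 0.15                              # share expected in 2026-2027

near_term_revenue = remaining_performance_obligations * near_term_share
print(f"Expected recognition in 2026-2027: ${near_term_revenue / 1e9:.2f}B")  # ~$3.69B
```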

Cerebras now counts Amazon Web Services, Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure, and CoreWeave among its competitors in the AI cloud computing space. The company’s transition from chip vendor to full-stack cloud provider mirrors broader industry trends where specialized hardware firms integrate software and infrastructure to deliver optimized AI workloads at scale.

Financial Turnaround and Market Position

Cerebras’ path to profitability marks a significant milestone for a company that reported just $24.6 million in sales in 2022. Revenue grew to $110 million in 2023 before jumping to $290 million in 2024 and reaching $510 million in 2025. The 76% year-over-year growth in 2025 was driven by increased adoption of its cloud services and expanded contracts with research institutions and AI developers.

The company’s shift to profitability was supported by gross margin improvements and operational scaling. While specific margin figures were not detailed in the filing, the transition from a $485 million net loss to $87.9 million in net income suggests substantial efficiency gains. Analysts note that Cerebras’ focus on inference—a less computationally intensive but higher-volume phase of AI deployment—aligns with market demand where enterprises prioritize cost-effective, scalable model serving over experimental training runs.
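One way to quantify the swing is implied net margin, computed from the revenue and net income figures disclosed above (a rough sketch; the filing does not break out the underlying margin drivers):

```python
# Implied net margin in each year, from the figures quoted above.
def net_margin(net_income, revenue):
    return net_income / revenue

print(f"2024 net margin: {net_margin(-485e6, 290e6):.0%}")  # roughly -167%
print(f"2025 net margin: {net_margin(87.9e6, 510e6):.1%}")  # roughly 17.2%
```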

Cerebras’ total addressable market continues to expand as AI adoption spreads across industries. The company positions its wafer-scale architecture as particularly suited for large language models and other transformer-based systems that require massive memory bandwidth and low-latency interconnects—areas where traditional GPU-based systems face architectural bottlenecks due to data movement between separate chips.

Customer Diversification and Strategic Partnerships

The reduction in reliance on G42, which had been Cerebras’ dominant customer during its initial public offering attempt in 2024, represents a strategic success in customer diversification. MBZUAI’s emergence as the top customer in 2025 highlights Cerebras’ growing foothold in academic and government-backed AI research centers, particularly in the Middle East. The university, funded by the Abu Dhabi government, has become a hub for AI research in collaboration with industry partners such as IBM.

Beyond MBZUAI and G42, Cerebras’ S-1 filing indicates that its top 10 customers increased their aggregate spending by approximately 80% within 12 months of their initial purchase, reflecting strong retention and expansion among enterprise clients. This metric suggests that once organizations adopt Cerebras’ cloud platform for AI inference, they tend to increase usage over time—a positive indicator for long-term revenue predictability.
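In software-industry terms this resembles a cohort expansion (or net revenue retention) calculation. The sketch below shows how such a figure is typically computed; the per-customer numbers are invented for illustration, since the S-1 reports only the aggregate:

```python
# Hypothetical illustration of the ~80% expansion metric: compare each
# customer's spend 12 months after first purchase with its initial spend.
# Per-customer figures are invented; the filing reports only the aggregate.
initial_spend = {"cust_a": 10.0, "cust_b": 4.0, "cust_c": 6.0}     # $M
spend_after_12mo = {"cust_a": 19.0, "cust_b": 7.5, "cust_c": 9.5}  # $M

expansion = sum(spend_after_12mo.values()) / sum(initial_spend.values()) - 1
print(f"Aggregate spend expansion: {expansion:.0%}")  # 80% with these toy numbers
```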

The OpenAI partnership remains one of the most significant elements of Cerebras’ growth strategy. While the $20 billion deal spans multiple years and depends on infrastructure deployment timelines, it provides a substantial backlog of contracted revenue. OpenAI’s decision to commit to Cerebras’ hardware, despite its extensive use of Nvidia GPUs for training, underscores perceived advantages in inference efficiency for specific workloads, particularly those involving large-scale model deployment.

Competitive Landscape and Technological Differentiation

Cerebras competes not only with Nvidia but also with emerging AI chip startups and established cloud providers developing custom silicon. Companies such as Amazon (with its Trainium and Inferentia chips), Google (with TPUs), and Microsoft (through its Maia series) have invested heavily in proprietary AI accelerators. However, Cerebras differentiates itself through its wafer-scale integration approach, which places an entire AI-optimized system on a single silicon wafer, eliminating inter-chip communication delays.

The WSE-3 chip, fabricated using TSMC’s 5-nanometer process, contains 4 trillion transistors and 900,000 AI-optimized cores. Its architecture enables seamless communication across the entire chip surface, allowing for efficient execution of layer-by-layer operations in neural networks. This design reduces the need for complex data routing and synchronization that can limit performance in multi-chip systems.
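To make the data-movement argument concrete, the toy model below contrasts layer-to-layer transfers over an on-wafer fabric with transfers over chip-to-chip links. Every number is a hypothetical placeholder chosen for illustration, not a measured figure for WSE-3, the B200, or any real system:

```python
# Deliberately simplified toy model of layer-to-layer data movement.
# All bandwidth and latency values are hypothetical placeholders, NOT
# measured figures for WSE-3, B200, or any real hardware.
ACTIVATION_BYTES = 50e6    # bytes passed between consecutive layers (assumed)
ON_WAFER_BW = 100e12       # bytes/s, hypothetical on-wafer fabric bandwidth
INTER_CHIP_BW = 1e12       # bytes/s, hypothetical chip-to-chip link bandwidth
INTER_CHIP_LATENCY = 2e-6  # s, hypothetical fixed per-hop link latency

def per_layer_transfer(bandwidth, fixed_latency=0.0):
    """Time to move one layer's activations to the next layer's compute."""
    return fixed_latency + ACTIVATION_BYTES / bandwidth

layers = 96  # depth of a large transformer stack, for illustration
on_wafer = layers * per_layer_transfer(ON_WAFER_BW)
multi_chip = layers * per_layer_transfer(INTER_CHIP_BW, INTER_CHIP_LATENCY)
print(f"on-wafer: {on_wafer * 1e3:.3f} ms, multi-chip: {multi_chip * 1e3:.3f} ms")
```

With these placeholder values the multi-chip path comes out roughly two orders of magnitude slower, which is the qualitative point behind the filing’s architecture claims; real systems mitigate the gap with pipelining and communication overlap, so the difference in practice is workload-dependent.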

Industry analysts note that while Nvidia maintains dominance in AI training due to its mature software ecosystem (CUDA) and widespread adoption, the inference market is more open to architectural innovation. Cerebras’ focus on this segment allows it to avoid direct confrontation with Nvidia’s entrenched position while targeting a growing opportunity where performance, latency, and total cost of ownership are critical decision factors.

Use of Proceeds and Future Outlook

Even though Cerebras has not yet specified the exact use of proceeds from the IPO, historical filings from similar technology offerings indicate that funds are typically allocated to research and development, infrastructure expansion, working capital, and potential acquisitions. Given the company’s capital-intensive model—requiring significant investment in data centers, chip fabrication, and software development—it is likely that a portion of the IPO proceeds will support scaling its cloud services to meet growing demand.

The company’s long-term success will depend on its ability to maintain technological leadership, expand its customer base beyond concentrated relationships, and navigate a competitive landscape where both established players and well-funded startups are vying for market share. Cerebras’ S-1 filing invites prospective investors to “join us on this extraordinary journey through a technological revolution more profound than any that has come before,” reflecting its vision of enabling next-generation AI applications through purpose-built hardware.

As of the filing date, the Securities and Exchange Commission has not announced an effective date for the IPO. The process typically involves a review period followed by a roadshow and pricing, with trading expected to begin several weeks after the initial filing. Investors and market watchers will monitor Cerebras’ progress toward listing as it seeks to become the latest AI-focused company to access public markets.

For readers interested in tracking the company’s filings, the U.S. Securities and Exchange Commission’s EDGAR database provides real-time access to S-1 submissions and amendments. Cerebras’ investor relations website will also host updates on the offering process once available.
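For those who want to automate that tracking, here is a minimal sketch using EDGAR’s long-standing company-browse interface. The endpoint and parameters follow the SEC’s public browse-edgar URL pattern, and the SEC asks clients to identify themselves with a descriptive User-Agent; verify the details against current SEC documentation before relying on them:

```python
import requests

# Minimal sketch: list S-1 filings for companies matching "cerebras"
# via EDGAR's browse interface, returned as a machine-readable Atom feed.
# Endpoint/parameters follow the SEC's public browse-edgar URL pattern;
# check current SEC documentation before relying on this in production.
resp = requests.get(
    "https://www.sec.gov/cgi-bin/browse-edgar",
    params={
        "action": "getcompany",
        "company": "cerebras",
        "type": "S-1",
        "output": "atom",  # Atom feed instead of HTML
        "count": "40",
    },
    headers={"User-Agent": "example-research contact@example.com"},  # SEC-requested
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:500])  # inspect the start of the feed
```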

What are your thoughts on Cerebras’ return to the public market and its potential to challenge established players in the AI chip industry? Share your perspective in the comments below, and pass this article along to others interested in technology and finance.
