Cerebras’ wafer-scale chip challenges Nvidia as the company targets a $3.5B IPO at extreme valuation multiples.

Cerebras Targets $3.5B IPO at $115-$125 — and 80x Revenue

The wafer-scale chip company launched its Nasdaq roadshow Monday with a price range that puts it squarely in Nvidia’s crosshairs and asks investors to pay a premium that few hardware companies have ever justified.

Nine years after Andrew Feldman co-founded Cerebras Systems in a Sunnyvale garage with a single audacious idea, building one processor across an entire silicon wafer, the company is asking public markets to value that idea at up to $40 billion. On Monday, Cerebras officially launched its IPO roadshow, setting a price range of $115 to $125 per share for 28 million Class A shares on the Nasdaq under ticker CBRS. At the top of that range, the offering raises $3.5 billion outright. If underwriters exercise their overallotment option in full, total proceeds climb past $4 billion.

The timing is deliberate. AI infrastructure spending hit an inflection point in early 2026 as hyperscalers committed to combined capital expenditure budgets exceeding $300 billion. Demand for specialized compute has never been higher, and Cerebras spent the past 18 months signing deals that would have seemed implausible two years ago. But the company is also walking into a market that scrutinizes AI hardware with more skepticism than it did during the 2023 frenzy. The roadshow has roughly two weeks to close the gap between a $125 ask and the proof of durable, scalable economics investors need.

This is Cerebras’ second attempt at a public listing. The first, filed in late 2024, was withdrawn after national security concerns emerged around the company’s heavy reliance on Abu Dhabi-based technology firm G42. That history hasn’t disappeared. It’s now a known risk factor baked into the S-1, and how convincingly management addresses it on the roadshow will shape where the deal ultimately prices.


The Deal in Numbers

The structure of the offering is straightforward. Cerebras is selling 28 million newly issued Class A shares, with an underwriter option for an additional 4.2 million shares. Morgan Stanley, Citigroup, Barclays, and UBS are leading the transaction, with Mizuho and TD Cowen acting as co-bookrunners.

Key offering figures:
  • 28 million Class A shares at $115-$125 per share
  • Gross proceeds of up to $3.5 billion (up to $4.03 billion if the overallotment is exercised in full)
  • Market cap of up to $26.6 billion on an outstanding-share basis
  • Pricing expected during the week of May 11, 2026
  • Nasdaq ticker: CBRS

The valuation math depends on which denominator you use. Renaissance Capital notes that on a fully diluted basis the midpoint of the range implies a $35.7 billion market cap, while the outstanding-share figure sits at $26.6 billion. Bloomberg has separately reported a $40 billion target based on sources familiar with the company’s valuation ambitions. Whatever figure anchors the conversation, the price-to-revenue multiple is extreme: roughly 50x to 80x trailing sales, depending on which valuation you set against the $510 million in 2025 revenue.
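For readers who want to check the arithmetic, here is a minimal sketch that reproduces the deal math from the figures cited in this article (share counts, price range, reported 2025 revenue, and the three valuation anchors). None of the numbers come from the filing directly, and the fully diluted share count is implied by Renaissance Capital’s estimate rather than disclosed here.

```python
# Back-of-the-envelope IPO math using only figures cited in this article.
# The valuation anchors are the article's reported figures, not filing data.

SHARES_OFFERED = 28_000_000          # Class A shares in the base offering
OVERALLOTMENT = 4_200_000            # underwriters' option
PRICE_HIGH = 125                     # top of the $115-$125 roadshow range
REVENUE_2025 = 510_000_000           # reported FY2025 revenue

gross_base = SHARES_OFFERED * PRICE_HIGH
gross_max = (SHARES_OFFERED + OVERALLOTMENT) * PRICE_HIGH
print(f"Gross proceeds at $125: ${gross_base / 1e9:.2f}B "
      f"(${gross_max / 1e9:.2f}B with overallotment)")   # ~$3.50B / ~$4.03B

# Implied price-to-sales multiples against the three valuation anchors cited.
for label, market_cap in [("outstanding-share basis", 26.6e9),
                          ("fully diluted midpoint", 35.7e9),
                          ("reported $40B target", 40.0e9)]:
    print(f"{label}: ~{market_cap / REVENUE_2025:.0f}x 2025 revenue")
```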

That premium isn’t unprecedented in AI-adjacent hardware. Arm Holdings priced its 2023 IPO at a similarly eye-watering multiple and has since rewarded patient holders with strong gains. But Arm supplies intellectual property to the entire semiconductor industry. Cerebras sells a single, proprietary architecture with a narrow customer base. That distinction matters to long-only funds still digesting the post-2021 tech repricing.

“The proposed range is a stress test for how far the market will stretch for differentiated AI hardware outside Nvidia’s orbit.”

NAI 500 Market Analysis, May 4, 2026

One data point in the bulls’ corner: early demand signals have been exceptionally strong. According to Bloomberg, indications of interest communicated to the underwriting banks have already exceeded $10 billion in potential orders, more than double the size of the deal at the high end of the range.

The Chip That Changes the Math

The entire Cerebras investment thesis rests on a single architectural bet: that the bottleneck in AI computing isn’t raw transistor count, it’s the cost of moving data between chips. Conventional AI accelerators, including Nvidia’s H100 and B200, are discrete dies connected by high-speed interconnects. Those interconnects consume power and add latency. Cerebras eliminates them by etching its Wafer-Scale Engine across an entire 300mm silicon wafer.

The result is a processor unlike anything else in production. The WSE-3, manufactured on TSMC’s 3nm process, contains roughly 4 trillion transistors and activates approximately 900,000 AI cores out of a total 970,000 (defect tolerance is built in via routing redundancy). On-chip memory sits at 44GB of SRAM with 20 petabytes per second of memory bandwidth. For reference, the company claims its chip is 58x larger than Nvidia’s B200 and delivers 2,625x more memory bandwidth than Nvidia’s B200 package.

  • WSE-3 cores: ~900,000 active AI cores out of 970,000 total, with built-in defect tolerance via routing redundancy on 3nm TSMC silicon.
  • On-chip memory: 44GB of SRAM on a single die, with 20 petabytes per second of bandwidth, eliminating off-chip data movement latency.
  • Inference speed: Company benchmarks show 1,800 tokens per second for Llama 3.1 8B inference, claimed to be 21x faster than Nvidia Blackwell at 32% lower cost.
  • Wafer scale: Full 300mm wafer integration means 4 trillion transistors on a single die, with no multi-chip interconnect overhead and no NVLink required.

The practical claim is speed. Cerebras says its systems train large language models up to 10x faster than GPU clusters and run inference at a fraction of the energy cost. Those figures come from internal benchmarks and third-party tests, and Nvidia hasn’t sat still with its own performance roadmap. Still, the OpenAI deal and the AWS partnership give Cerebras real-world validation that independent analysts can’t simply dismiss.

Wafer yield risk: Building chips at wafer scale means a single manufacturing defect that would discard a small GPU die can affect a far larger area. Cerebras routes around defective cores algorithmically, but yield rates remain a closely watched variable that could affect production economics as the company scales.

Revenue, Profit and the OpenAI Factor

The financial story Cerebras is telling in 2026 is materially different from 2024. Two years ago, the company posted $290 million in revenue alongside a $485 million net loss. For the full year ended December 31, 2025, revenue reached $510 million, up 76% year over year, and the company swung to profitability, reporting $87.9 million in net income and earnings of $1.38 per share. That profitability inflection is the headline the company wants dominating roadshow conversations.

Two landmark deals underpin that growth. In December 2025, Cerebras announced a multi-year agreement with OpenAI valued at over $20 billion, under which OpenAI would consume 750 megawatts of Cerebras computing capacity through 2028. OpenAI also extended a $1 billion working capital loan to Cerebras, a vote of confidence that carries more weight than almost any analyst endorsement. Then, in March 2026, Amazon Web Services signed a binding term sheet to become the first major cloud provider to deploy Cerebras systems inside its own data centers.

Metric | 2024 | 2025 | Change
Annual Revenue | $290 million | $510 million | +76% YoY
Net Income / (Loss) | ($485 million) | $87.9 million | Profitability swing
EPS | Significant loss | $1.38 | First profitable year
Company Valuation | ~$4B (Series F) | $23B (Jan 2026 round) | +475%
Key Customer Deals | G42/UAE partnerships | OpenAI ($20B+), AWS term sheet | Major diversification
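The change column can be reproduced directly from the annual figures themselves; a quick sketch, using the rounded Series F and January 2026 round valuations exactly as given above:

```python
# Reproduce the "Change" column from the annual figures in the table above.
revenue_2024, revenue_2025 = 290e6, 510e6
valuation_2024, valuation_2026 = 4e9, 23e9   # Series F vs. Jan 2026 round

rev_growth = (revenue_2025 / revenue_2024 - 1) * 100
val_growth = (valuation_2026 / valuation_2024 - 1) * 100
print(f"Revenue growth: +{rev_growth:.0f}% YoY")   # ~+76%
print(f"Valuation step-up: +{val_growth:.0f}%")    # ~+475%
```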

CEO Andrew Feldman has positioned the AWS partnership as direct evidence of customer diversification. The G42 concentration that spooked regulators in 2024 still accounted for a substantial share of 2025 revenue, a figure that will be scrutinized line by line during the roadshow. But the OpenAI and AWS announcements give Cerebras a credible answer to the concentration question that it simply didn’t have 18 months ago.

Feldman is also declining to sell any of his personal shares in the offering, a signal that institutional investors tend to read as confidence. His 10.3 million post-IPO shares would be worth up to $1.28 billion at the high end of the range, meaning his incentives are tightly aligned with public shareholders from day one.

The Risks Investors Can’t Ignore

No AI hardware company goes public in 2026 without a geopolitics section in the risk factors. For Cerebras, that section is longer than most. The company’s first IPO filing collapsed partly because its revenue concentration in the UAE, specifically through G42, triggered national security reviews in Washington. Export control restrictions on advanced AI chips to certain Middle Eastern and Asian markets remain fluid policy territory, and any tightening could directly affect existing contracts.

  • Customer concentration: G42 and affiliated UAE entities accounted for an estimated 86% of 2025 revenue according to S-1 analysis. Even with the OpenAI and AWS deals announced, the forward revenue mix will be a critical roadshow focus.
  • Export control exposure: US restrictions on advanced chip exports remain subject to executive action, and Cerebras’ architecture qualifies as a controlled technology under multiple categories.
  • Wafer yield scalability: Single-wafer manufacturing is complex. Defect-tolerant design works at current volumes, but scaling to meet hyperscaler demand without yield degradation remains unproven at full production intensity.
  • In-house chip programs: Google’s TPU, Amazon’s Trainium, and Meta’s MTIA all represent direct efforts by the largest potential customers to build proprietary AI silicon that doesn’t require outside vendors.
  • Ecosystem maturity: Nvidia’s CUDA software stack has a decade-long head start. Developers write AI code for CUDA by default. Cerebras has its own programming tools, but switching costs are real and the ecosystem is comparatively nascent.

“The roadshow will need to convince long-only funds that wafer-scale silicon is not just clever engineering but a sustained economic moat that can compound beyond early wins.”

NAI 500 Market Analysis, May 4, 2026

None of these risks are disqualifying on their own. But stacked together, they explain why the $115-$125 range isn’t a slam dunk even against a backdrop of $10 billion in early interest. The deal sizes that matter most aren’t the book-building indications from hedge funds angling for a first-day pop. They’re the long-only allocations from pension funds and growth equity managers who need to own the stock for years.

Nvidia’s Shadow and the Competition Ahead

Cerebras has spent years framing its technology as a direct challenge to Nvidia. In some narrow workloads, that framing holds up: for large language model inference at scale, the WSE-3’s on-chip memory bandwidth gives it a genuine structural advantage. You don’t have to move activations across NVLink bridges if everything lives on one die. That matters enormously when generating tokens at commercial speed and volume.

But Nvidia isn’t standing still. The Blackwell architecture, and whatever follows it, continues compressing the performance gap in inference while defending Nvidia’s dominance in training. Nvidia’s ecosystem advantage is arguably its most durable asset: CUDA-native tooling, a decade of developer familiarity, and deep integrations with every major ML framework. Cerebras can out-benchmark Nvidia on specific tests. Replacing Nvidia in production deployments is a different kind of challenge entirely.

Dimension | Cerebras WSE-3 | Nvidia B200 Cluster
Architecture | Single wafer-scale die | Multi-GPU cluster with NVLink
On-chip memory | 44GB SRAM | ~192GB HBM per GPU (multiple units)
Memory bandwidth | 20 PB/s (on-chip) | ~8 TB/s per GPU (HBM)
Interconnect overhead | None (single die) | NVLink/NVSwitch required
Software ecosystem | Proprietary (Cerebras SDK) | CUDA (decade-long head start)
Claimed inference speed | 1,800 tokens/sec (Llama 8B) | Benchmark-dependent
Primary customers | OpenAI, AWS (term sheet), G42 | All major hyperscalers and cloud providers
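As a rough consistency check, the bandwidth figures in the table above can be set against each other in a couple of lines. Cerebras’ published 2,625x claim presumably rests on a more precise B200 bandwidth figure than the rounded ~8 TB/s quoted here, so treat the result as ballpark rather than exact.

```python
# Rough consistency check on the memory-bandwidth comparison, using the
# rounded figures from the table above (not Cerebras' exact inputs).

WSE3_BANDWIDTH = 20e15      # 20 PB/s of on-chip SRAM bandwidth
B200_HBM_BANDWIDTH = 8e12   # ~8 TB/s of HBM bandwidth per GPU

ratio = WSE3_BANDWIDTH / B200_HBM_BANDWIDTH
print(f"On-chip vs per-GPU bandwidth: ~{ratio:,.0f}x")  # ~2,500x, same ballpark as the 2,625x claim
```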

The more immediate competitive threat may not come from Nvidia but from the hyperscalers themselves. Google’s TPU v5 series, Amazon’s Trainium2, and Meta’s MTIA chips are all designed to run specific AI workloads internal to those companies. If any of the three largest potential Cerebras customers decides its in-house chip meets the need, a major revenue runway disappears. The AWS term sheet is an encouraging signal. It’s not yet a purchase order at scale.

Where Cerebras has a credible story is in inference for large models and in markets where speed-per-dollar matters more than ecosystem familiarity. Startups building real-time AI products, research labs that don’t want to manage multi-node GPU clusters, and sovereign AI programs in countries that can legally access the hardware are all plausible expansion markets. Whether those segments can sustain the growth rate implied by an 80x revenue multiple is the central question of this IPO.

Frequently Asked Questions

What is Cerebras Systems’ IPO price range?

Cerebras set its IPO price range at $115 to $125 per share, offering 28 million Class A shares on the Nasdaq under the ticker CBRS. At the top of the range, the offering raises $3.5 billion, or up to $4.03 billion if underwriters exercise their overallotment option in full. Pricing is expected during the week of May 11, 2026.

What is Cerebras’ valuation at IPO?

On an outstanding-share basis, the $125 high end of the range implies a market cap of $26.6 billion. On a fully diluted basis, Renaissance Capital calculates roughly $35.7 billion at the midpoint. Bloomberg has separately reported that the company is targeting a valuation near $40 billion based on sources familiar with internal projections.

How much revenue does Cerebras make?

Cerebras reported $510 million in revenue for the full year ended December 31, 2025, up 76% from $290 million in 2024. The company also turned profitable in 2025, reporting $87.9 million in net income and earnings of $1.38 per diluted share, compared with a significant net loss in 2024.

What is the Cerebras Wafer-Scale Engine?

The Wafer-Scale Engine (WSE-3) is a single processor etched across an entire 300mm silicon wafer, containing approximately 4 trillion transistors and 900,000 active AI cores. It eliminates the multi-chip interconnect bottlenecks that limit GPU cluster performance by keeping all compute and 44GB of on-chip SRAM on one die, enabling extremely high memory bandwidth.

What is the Cerebras and OpenAI deal?

In December 2025, OpenAI signed a multi-year agreement valued at over $20 billion, under which it would consume 750 megawatts of Cerebras computing capacity through 2028. OpenAI also provided Cerebras with a $1 billion working capital loan as part of the arrangement, representing one of the largest AI infrastructure commitments to any non-Nvidia vendor.

When will Cerebras stock start trading?

Cerebras launched its roadshow on May 4, 2026, and pricing is expected during the week of May 11, 2026, according to Renaissance Capital. Trading would begin on the Nasdaq the following day under the ticker symbol CBRS, subject to market conditions and successful completion of the offering.

Why did Cerebras withdraw its first IPO?

Cerebras filed for an IPO in 2024 but withdrew the paperwork amid national security concerns in Washington tied to the company’s heavy revenue concentration in Abu Dhabi-based technology firm G42. The company has since worked to diversify its customer base, announcing the OpenAI and AWS partnerships, and refiled for a public listing in April 2026.

Bottom Line

Cerebras is a genuinely unusual company attempting a genuinely unusual IPO. Its core technology solves a real problem, and the contracts it signed in the past 18 months with OpenAI and AWS are the kind of validation that money can’t buy on a roadshow. The profitability swing from a $485 million loss in 2024 to $87.9 million in net income in 2025 reframes the story from a money-burning moonshot to something that at least rhymes with a business model.

The tension is the valuation. Paying 55x to 80x revenue for a hardware company with significant customer concentration, active geopolitical risk, and an unproven production scaling curve requires a conviction that the WSE-3 architecture is not just faster today but defensibly faster at scale for the next five to ten years. That conviction is possible. It demands a long horizon and a tolerance for binary outcomes that most institutional investors will price carefully.

Watch the book-building closely. The $10 billion in early interest is a headline, not a closing. The real signal will be where the long-only allocations land, and whether Cerebras prices at the top of, within, or below the $115 to $125 range.

Watch For
01 Final IPO pricing during the week of May 11: whether Cerebras prices at the top of its $115-$125 range, above it, or below it will signal how institutional investors weigh the concentration risk versus the OpenAI and AWS deals.
02 G42 revenue concentration in the first post-IPO quarterly earnings filing: the Q1 2026 10-Q will be the first public look at whether customer diversification is accelerating faster than the S-1 implied.
03 AWS binding term sheet conversion: the March 2026 agreement with Amazon Web Services has not yet been converted into a full deployment contract; that milestone, or the lack of it, will determine whether the hyperscaler thesis holds.
04 US export control policy: any new restrictions on advanced AI chip exports to the Middle East or other regions could directly affect existing Cerebras contracts and reshape the company’s addressable market overnight.
Stay ahead of the curve. More on AI Hardware, semiconductors, and the future of compute at NeuralWired.