Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia's dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.
The company will add six new AI data centers across North America and Europe, increasing its inference capacity twentyfold to over 40 million tokens per second. The expansion includes facilities in Dallas, Minneapolis, Oklahoma City, Montreal, New York, and France, with 85% of the total capacity located in the United States.
"This year, our goal is to really satisfy all the demand and all the new demand we expect will come online as a result of new models like Llama 4 and new DeepSeek models," said James Wang, Director of Product Marketing at Cerebras, in an interview with VentureBeat. "This is our big growth initiative this year to meet the almost unlimited demand we're seeing across the board for inference tokens."
The data center expansion represents the company's ambitious bet that the market for high-speed AI inference, the process by which trained AI models generate outputs for real-world applications, will grow dramatically as companies seek faster alternatives to GPU-based solutions from Nvidia.

Strategic partnerships that bring high-speed AI to developers and financial analysts
Alongside the infrastructure expansion, Cerebras announced partnerships with Hugging Face, the popular AI developer platform, and AlphaSense, a market intelligence platform widely used in the financial services industry.
The Hugging Face integration will allow its 5 million developers to access Cerebras Inference with a single click, without having to sign up for Cerebras separately. This represents a major distribution channel for Cerebras, particularly for developers working with open-source models like Llama 3.3 70B.
"Hugging Face is kind of the GitHub of AI and the center of all open-source AI development," Wang explained. "The integration is super nice and native. You just appear in their inference providers list. You just check the box and then you can use Cerebras right away."
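In practice, that "check the box" flow surfaces through Hugging Face's inference-provider routing. The sketch below is a minimal illustration, assuming the huggingface_hub Python client (v0.28 or later, which introduced provider routing) and a model that lists Cerebras among its providers; the access token and prompt are placeholders.

```python
# Minimal sketch: routing a chat completion to Cerebras through Hugging Face's
# inference-provider integration. Assumes huggingface_hub >= 0.28; the token
# and prompt are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="cerebras",   # select Cerebras from the model's provider list
    api_key="hf_xxx",      # your Hugging Face access token
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Explain wafer-scale inference in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the provider is just a parameter, developers who already call models this way can switch to Cerebras without changing the rest of their code, which is what makes the integration a distribution channel rather than a separate signup.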
The AlphaSense partnership represents a significant enterprise customer win, with the financial intelligence platform switching from what Wang described as a "global, top-three closed-source AI model vendor" to Cerebras. The company, which serves roughly 85% of Fortune 100 companies, is using Cerebras to accelerate its AI-powered search capabilities for market intelligence.
"This is a tremendous customer win and a very large contract for us," Wang said. "We speed them up by 10x, so what used to take five seconds or longer basically becomes instant on Cerebras."

How Cerebras is winning the race for AI inference speed as reasoning models slow things down
Cerebras has been positioning itself as a specialist in high-speed inference, claiming its Wafer-Scale Engine (WSE-3) processor can run AI models 10 to 70 times faster than GPU-based solutions. This speed advantage has become increasingly valuable as AI models evolve toward more complex reasoning capabilities.
"If you listen to Jensen's remarks, reasoning is the next big thing, even according to Nvidia," Wang said, referring to Nvidia CEO Jensen Huang. "But what he's not telling you is that reasoning makes the whole thing run 10 times slower, because the model has to think and generate a bunch of internal monologue before it gives you the final answer."
This slowdown creates an opportunity for Cerebras, whose specialized hardware is designed to accelerate these more complex AI workloads. The company has already secured high-profile customers including Perplexity AI and Mistral AI, which use Cerebras to power their AI search and assistant products, respectively.
"We help Perplexity become the world's fastest AI search engine. This just isn't possible otherwise," Wang said. "We help Mistral achieve the same feat. Now they have a reason for people to subscribe to Le Chat Pro, whereas before, your model is probably not at the same cutting-edge level as GPT-4."

The compelling economics behind Cerebras' challenge to OpenAI and Nvidia
Cerebras is betting that the combination of speed and cost will make its inference services attractive even to companies already using leading models like GPT-4.
Wang pointed out that Meta's Llama 3.3 70B, an open-source model that Cerebras has optimized for its hardware, now scores the same on intelligence tests as OpenAI's GPT-4 while costing significantly less to run.
"Anyone who's using GPT-4 today can just move to Llama 3.3 70B as a drop-in replacement," he explained. "The price for GPT-4 is [about] $4.40 in blended terms. And Llama 3.3 is like 60 cents. We're about 60 cents, right? So you reduce cost by almost an order of magnitude. And if you use Cerebras, you increase speed by another order of magnitude."
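Taken at face value, Wang's figures imply roughly a 7x price gap, which he rounds up to "almost an order of magnitude." A quick sanity check, using only the blended dollar amounts from the quote (the article does not specify the per-token unit):

```python
# Sanity check on the blended prices Wang cites. Units are whatever blended
# denomination the quote assumes; the article gives only the dollar figures.
gpt4_price = 4.40    # "[about] $4.40 in blended terms" for GPT-4
llama_price = 0.60   # "like 60 cents" for Llama 3.3 70B on Cerebras

print(f"{gpt4_price / llama_price:.1f}x cheaper")  # ~7.3x
```

Combined with the claimed 10x speedup on Cerebras hardware, that cost difference is the two-part pitch the company is making to GPT-4 customers.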

Inside Cerebras' tornado-proof data centers built for AI resilience
The company is making substantial investments in resilient infrastructure as part of its expansion. Its Oklahoma City facility, scheduled to come online in June 2025, is designed to withstand extreme weather events.
"Oklahoma, as you know, is kind of a tornado zone. So this data center actually is rated and designed to be fully resistant to tornadoes and seismic activity," Wang said. "It can withstand the strongest tornado ever recorded. If that thing just goes through, this thing will just keep sending Llama tokens to developers."
The Oklahoma City facility, operated in partnership with Scale Datacenter, will house over 300 Cerebras CS-3 systems and features triple-redundant power stations and custom water-cooling solutions designed specifically for Cerebras' wafer-scale systems.

From skepticism to market leadership: How Cerebras is proving its value
The expansion and partnerships announced today represent a significant milestone for Cerebras, which has been working to prove itself in an AI hardware market dominated by Nvidia.
"I think what was reasonable skepticism about customer uptake, maybe when we first launched, I think that's now fully put to bed, just given the diversity of logos we have," Wang said.
The company is targeting three specific areas where fast inference provides the most value: real-time voice and video processing, reasoning models, and coding applications.
"Coding is one of these kind of in-between reasoning and regular Q&A that takes maybe 30 seconds to a minute to generate all the code," Wang explained. "Speed is directly proportional to developer productivity. So having speed there matters."
By focusing on high-speed inference rather than competing across all AI workloads, Cerebras has found a niche where it can claim leadership over even the largest cloud providers.
"Nobody generally competes against AWS and Azure on their scale. We obviously don't reach full scale like them, but to be able to replicate a key segment… on the high-speed inference front, we will have more capacity than them," Wang said.

Why Cerebras' US-centric expansion matters for AI sovereignty and future workloads
The expansion comes at a time when the AI industry is increasingly focused on inference capabilities, as companies move from experimenting with generative AI to deploying it in production applications where speed and cost-efficiency are critical.
With 85% of its inference capacity located in the United States, Cerebras is also positioning itself as a key player in advancing domestic AI infrastructure at a time when technological sovereignty has become a national priority.
"Cerebras is turbocharging the future of U.S. AI leadership with unmatched performance, scale and efficiency – these new global datacenters will serve as the backbone for the next wave of AI innovation," said Dhiraj Mallick, COO of Cerebras Systems, in the company's announcement.
As reasoning models like DeepSeek R1 and OpenAI's o3 become more prevalent, demand for faster inference solutions is likely to grow. These models, which can take minutes to generate answers on conventional hardware, run near-instantaneously on Cerebras systems, according to the company.
For technical decision-makers evaluating AI infrastructure options, Cerebras' expansion represents a significant new alternative to GPU-based solutions, particularly for applications where response time is critical to the user experience.
Whether the company can truly challenge Nvidia's dominance in the broader AI hardware market remains to be seen, but its focus on high-speed inference and substantial infrastructure investment demonstrates a clear strategy for carving out a valuable segment of the rapidly evolving AI landscape.