Evidence: 96% (Authoritative)
Fact: Confirmed · Product · September 22, 2025
Oklahoma City AI Datacenter Ribbon-Cutting with 44+ Exaflops
Cerebras opened its newest AI datacenter in Oklahoma City with over 44 exaflops of AI compute, supporting the largest models and serving inference at 2,000–3,000 tokens per second for models from OpenAI, Meta, and Qwen.
Evidence Strength
Evidence: 96% (Authoritative)
Backed by an official company document
Single publisher source
Includes an official or primary source
Key Development
High-significance development (rated 8/10)
Confirmed — verified event
Insights
First tracked
September 22, 2025
Last updated
September 22, 2025
Sources
1 source
Related Developments
CS-3 vs. NVIDIA DGX B200 Blackwell Benchmarks Published
Cerebras Delivers 3,000 Tokens/Second Inference for OpenAI's gpt-oss-120B Open-Weight Model
Jais 2 Arabic-Centric LLMs Trained and Deployed on Cerebras Wafer-Scale Clusters
GLM-4.7 Available on Cerebras Inference Cloud at 1,000-1,700 Tokens/Second
OpenAI Signs $10B+ Multiyear Compute Deal with Cerebras
Sources (1)
Source Timeline
The Fastest AI Datacenters will run on Cerebras: Meet OKC · Cerebras · Sep 22, 2025