Fact · Confirmed · Product · January 6, 2026
Jais 2 Arabic-Centric LLMs Trained and Deployed on Cerebras Wafer-Scale Clusters
The Jais 2 family of Arabic-centric LLMs (8B and 70B parameters), developed with G42's Inception and MBZUAI, was trained end-to-end on Cerebras wafer-scale clusters and deployed for production inference at 2,000 tokens/second, establishing a new state of the art in Arabic model performance.
Evidence Strength
Evidence: 96% (Authoritative)
Backed by an official company document
Single publisher source
Includes official or primary source
Key Development
High-significance development (rated 7/10)
Confirmed (verified event)
Insights
First tracked
December 9, 2025
Last updated
January 6, 2026
Sources
2 sources
Related Developments
Cerebras Delivers 3,000 Tokens/Second Inference for OpenAI's gpt-oss-120B Open-Weight Model
CS-3 vs. NVIDIA DGX B200 Blackwell Benchmarks Published
GLM-4.7 Available on Cerebras Inference Cloud at 1,000-1,700 Tokens/Second
OpenAI Signs $10B+ Multiyear Compute Deal with Cerebras
OpenAI GPT-5.3-Codex-Spark Powered by Cerebras Launches in Research Preview
Sources (2)
Source Timeline
2026: Fast Inference Finds its Groove · Cerebras · Jan 6, 2026
Jais 2: A Blueprint for Sovereign AI · Cerebras · Dec 9, 2025