Cisco launched its Silicon One G300 AI networking chip in a move that aims to compete with Nvidia and Broadcom.

And then there is Cerebras, where scale‑up is essentially “inside one wafer” (one CS system), and scale‑out is multiple wafers connected via SwarmX + MemoryX over Ethernet. For scale-out, Cerebras connects multiple CS systems using the SwarmX interconnect plus MemoryX servers in a broadcast‑reduce topology. SwarmX does broadcast of weights to many wafers and reduction of gradients back into MemoryX, so that many CS‑3s train one large model in data‑parallel fashion. CS‑3 supports scale‑out clusters of up to 2,048 CS‑3 systems, with low‑latency RDMA‑over‑Ethernet links carrying only activations/gradients between wafers while keeping the bulk of traffic on‑wafer.
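The broadcast-reduce pattern described above can be sketched as a toy data-parallel training loop in plain Python/NumPy. This is purely illustrative of the pattern (parameter store broadcasts weights, workers compute gradients on local shards, gradients are reduced back), not Cerebras's actual software; all names are made up.

```python
import numpy as np

# Toy sketch of a broadcast-reduce data-parallel step: a parameter store
# (the MemoryX role) broadcasts the current weights to N workers (the wafer
# role), each worker computes a gradient on its own data shard, and the
# gradients are reduced (averaged) back into the store.

rng = np.random.default_rng(0)
N_WORKERS, DIM, SHARD = 4, 8, 16

true_w = np.ones(DIM)                                  # target solution
shards = [rng.normal(size=(SHARD, DIM)) for _ in range(N_WORKERS)]
labels = [x @ true_w for x in shards]                  # synthetic labels

def local_gradient(w, x, y):
    """Least-squares gradient computed on one worker's shard."""
    return x.T @ (x @ w - y) / len(y)

weights = np.zeros(DIM)                                # held by the store
for step in range(1000):
    # broadcast: every worker sees the same current weights
    grads = [local_gradient(weights, x, y) for x, y in zip(shards, labels)]
    # reduce: average the gradients back at the parameter store
    weights -= 0.1 * np.mean(grads, axis=0)

print(np.abs(weights - true_w).max())  # should be tiny after convergence
```

The point of the pattern is that only weights flow out and only gradients flow back; the workers' data (activations, in the Cerebras case) never crosses the fabric.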
 
Curious how Cerebras handles large memory access. No matter how much SRAM they have on chip, it's nowhere near what HBM provides.
 
Cerebras uses dedicated servers, called MemoryX servers, which are SwarmX fabric-connected to the WSE-3 nodes. The MemoryX configuration can include up to 1.2PB of shared memory storage, consisting of DDR5 and Flash tiers. There is 44GB of SRAM on each WSE-3, and the SRAM has far lower latency and fabric latency than any HBM.
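A quick back-of-envelope check on those two numbers. The 2-bytes-per-parameter assumption (FP16/BF16 weights only, i.e. inference) is mine; training state (gradients, optimizer moments) multiplies the per-parameter cost several times over.

```python
# Capacity check using the figures above: 44 GB of on-wafer SRAM per WSE-3
# versus up to 1.2 PB in a MemoryX configuration, assuming 2 bytes/parameter.

SRAM_BYTES = 44e9       # 44 GB on-wafer SRAM
MEMORYX_BYTES = 1.2e15  # 1.2 PB external MemoryX (max config)
BYTES_PER_PARAM = 2     # FP16/BF16 weights only (my assumption)

params_on_wafer = SRAM_BYTES / BYTES_PER_PARAM
params_in_memoryx = MEMORYX_BYTES / BYTES_PER_PARAM

print(f"on-wafer SRAM:  ~{params_on_wafer / 1e9:.0f}B parameters")
print(f"MemoryX (max):  ~{params_in_memoryx / 1e12:.0f}T parameters")
```

So a single wafer holds on the order of a 20B-parameter model entirely in SRAM, while the MemoryX tiers exist precisely because anything much larger has to live off-wafer.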
 
Do they have a system that actually goes to the PB-scale memory for training? SRAM alone, while it seems big, is far from enough. Perhaps their position in the AI world is like Groq's in inference.
 
Yes.

I'm not a big fan of TP Morgan for technical understanding, but Cerebras people provided the information and explanations in the article. However, the article is out of date, and Cerebras also does inference now. Claimed to be the world's fastest.


 
HPC requirements are different from training. Inference makes sense. Wish we had more trustworthy independent benchmarks as this part of the industry matures.
 
Coherent-lite is likely to cover both scale-out and scale-across. As for CPO, indeed the overall DC power saving from CPO is very limited. Perhaps the world will continue to partition into the NV and Google approaches, just like GPU and TPU.
If/when it happens -- not by any means certain yet, it's not even been defined! -- coherent-lite will be mainly targeted at scale-across; scale-out is more likely to use standard 1600G DD, and DCI will use ZR pluggables.


CPO rollout is likely to stay limited until we get to the point where getting the high-speed data to pluggables becomes impossible -- even with flyover cables, which 448G SERDES will use -- because of loss budgets. At which point there's no choice except moving to CPO, but that's certainly still several years away, which means not this generation or even the next one.
 
You misread the NextPlatform article, though I'm not surprised considering Morgan's flamboyant writing style calling the WSE-3 systems "supercomputers". That's nonsense in the traditional sense. Cerebras can't do traditional HPC, these are dedicated AI systems with processor cores completely focused on AI instructions.

With the CS-3 machines, there are now options for 24 TB and 36 TB for enterprises and 120 TB and 1,200 TB for hyperscalers, which provides 480 billion and 720 billion parameters of storage as the top end of the enterprise scale and 2.4 trillion or 24 trillion for the hyperscalers. Importantly, all of this MemoryX memory can be scaled independently from the compute – something you cannot do with any GPUs or even Nvidia’s Grace-Hopper superchip hybrid, which has static memory configurations as well.
Cerebras does training of very large models, but I don't think any of the big cloud companies have taken the bait yet. Cerebras seems to be focusing on sovereign AI, like G42, and the big pharma companies.
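Interestingly, every capacity-to-parameter pairing in the quoted passage works out to the same per-parameter budget. (~50 bytes/parameter is roughly what full-precision weights plus optimizer state cost during training, though that breakdown is my inference, not something the article states.)

```python
# Sanity check on the MemoryX sizing quoted from the article: each pairing
# of capacity to parameter count implies the same bytes-per-parameter budget.

TB = 1e12
configs = {
    "enterprise 24 TB":   (24 * TB,   480e9),   # 480B parameters
    "enterprise 36 TB":   (36 * TB,   720e9),   # 720B parameters
    "hyperscale 120 TB":  (120 * TB,  2.4e12),  # 2.4T parameters
    "hyperscale 1200 TB": (1200 * TB, 24e12),   # 24T parameters
}
for name, (capacity_bytes, params) in configs.items():
    print(f"{name}: {capacity_bytes / params:.0f} bytes/parameter")
```

The consistent 50 bytes/parameter across all four tiers suggests a fixed training-state recipe, with capacity scaled linearly to model size.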
 
I think they have gone pretty big into inference as well, especially on the fast, low-latency end. They have announced deals to provide inference to OpenAI, Perplexity, and Mistral, but I'm pretty sure that they had to do the heavy lifting by building and operating their own data centers (at least 6 of them globally) and selling services/tokens to the model companies. It seems like the only way to really build AI chips anymore is to build and operate your own data centers (unless you're NVIDIA). That's the only way you can gain enough real-world operating experience to do data-center-level AI inference system co-optimization.
 
The simple case below is for 2024 - 2024 was all about fitting large dense models into memory. 2025 has gotten far more complicated, with MoE and bunches of "smarter" but smaller expert model-ettes activating, then coordinating results back to all the experts. Add in separating prefill and decode onto different groups of processors, plus building and managing a KV cache. Not sure how they do the 2025-2026 edition.

If the model fits on one wafer
• A CS‑3 has 44 GB of on‑chip SRAM across the wafer; for many production LLMs, all weights can be placed on‑wafer for inference.
• In that regime you don’t need weight streaming at runtime: parameters sit in SRAM next to the cores, so inference runs purely on‑wafer without repeatedly fetching weights from external memory.
• This is where Cerebras reports 10× faster LLM inference vs GPU clusters, driven by much higher effective memory bandwidth and no HBM/PCIe hops.

If the model is larger than one wafer
• For very large models whose weights exceed 44 GB per wafer, Cerebras can reuse the same weight‑streaming mechanism as in training:
    • Weights reside in external MemoryX.
    • For each layer (or group of layers), weights are streamed onto the wafer; activations stay on‑wafer; results move forward layer‑by‑layer.
    • Latency is hidden the same way as in training: by overlapping weight transfers with compute and by exploiting coarse‑ and fine‑grained parallelism on the mesh.

Scaling out inference
• Multiple CS systems can serve inference together using SwarmX plus MemoryX, similar to training:
    • MemoryX holds one or more model copies; SwarmX broadcasts weights (if streaming is used) and aggregates any needed results.
    • For many inference workloads, the preferred pattern is replicated models across CS‑3s, each handling its own request stream, so most traffic stays local to each wafer and SwarmX is used more for management than for per‑token communication.
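The stream-and-overlap idea in the larger-than-one-wafer case above can be sketched as a simple double-buffering loop. This is a toy illustration of the pattern (prefetch the next layer's weights while the current layer computes), not Cerebras's software stack; all names are made up.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Toy double-buffering sketch of weight streaming: weights live off-chip
# (the MemoryX role), and the next layer's weights are fetched in the
# background while the current layer runs, so transfer hides behind compute.

rng = np.random.default_rng(1)
LAYERS = [rng.normal(scale=0.1, size=(64, 64)) for _ in range(6)]  # off-chip

def fetch(i):
    """Stand-in for streaming layer i's weights onto the wafer."""
    return LAYERS[i]

def run_layer(w, x):
    """On-wafer compute; activations never leave the 'wafer'."""
    return np.tanh(x @ w)

x = rng.normal(size=(4, 64))                   # a batch of activations
with ThreadPoolExecutor(max_workers=1) as io:
    pending = io.submit(fetch, 0)              # prefetch the first layer
    for i in range(len(LAYERS)):
        w = pending.result()                   # this layer's weights ready
        if i + 1 < len(LAYERS):
            pending = io.submit(fetch, i + 1)  # overlap next transfer...
        x = run_layer(w, x)                    # ...with this layer's compute

print(x.shape)
```

As long as each layer's compute time exceeds its weight-transfer time, the streaming cost disappears from the critical path, which is the same argument made for training above.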
 
Agreed. That's why I posted their inferencing link.

Cerebras has announced a cloud computing service, which is mostly about inference, but being a private company they don't break down revenue sources. I suspect a significant driver for their cloud services is that Cerebras systems probably have some very different operations and support requirements from more common x86- and Arm-based servers and industry networks. Especially if something goes wrong. :)
 