You might have heard of the Multicore and Multiprocessor SoC (MPSoC) Forum sponsored by IEEE and other industry associations and companies. This group of top-notch academic and industry technical leaders gets together once a year to talk about hardware and software architecture and applications for multicore and multiprocessor systems-on-chip (SoCs). They gather to debate the latest and greatest ideas to meet emerging needs. Kurt Shuler, vice president of marketing at Arteris IP, calls these meetings “The Davos for chips.” They’re held in some pretty nice locations around the world, and he tells me the food and wine at these events are also quite good!
To celebrate its 20th anniversary, the forum will release a two-volume book on May 11. You can buy this directly from Wiley, or you can pre-order on Amazon. The first volume covers architectures; the second, applications. The first volume is divided into sections on processor architectures, memory architectures, interconnects, and interfaces. K. Charles Janac, president and CEO of Arteris IP, wrote the first chapter in the third section, on network-on-chip (NoC) architectures. I’m impressed that what must be considered a definitive technical reference on MPSoCs required a chapter on NoC interconnect, and that the editors turned to Arteris IP to write that chapter.
Let me start by emphasizing that these books are a technical reference without marketing or advertising, not surprising given the authors and publisher. Charlie’s chapter kicks off with some background on how chip connectivity has evolved from buses through crossbars to NoCs. I’ve talked about this in a previous blog. He then goes into detail that I think teams new to NoCs will find helpful — the considerations in architecting and configuring the network. These span from architecture to floorplanning, since you must account for quality of service (QoS) along with additional services you need to support, such as debug and safety. Floorplan efficiency is a key advantage of NoCs over crossbars, so naturally you should plan for it in the implementation.
The most obvious service a NoC can provide is guaranteed QoS. What may be less familiar to many is the degree of flexibility designers have in that management. You can manage performance statically or dynamically within the NoC, or through software-based controls.
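To make that static-versus-dynamic distinction concrete, here is a minimal sketch of a priority-based arbiter whose per-port priorities can be reprogrammed at runtime, as software-based QoS control might do through memory-mapped registers. The class and method names are hypothetical, not any vendor’s API.

```python
# Illustrative sketch of NoC QoS arbitration (hypothetical names).
# Priorities are set statically at construction, and can also be
# updated dynamically by software, modeling the flexibility above.

class QosArbiter:
    def __init__(self, priorities):
        # priorities: dict of port name -> integer priority (higher wins)
        self.priorities = dict(priorities)

    def set_priority(self, port, value):
        # Dynamic, software-driven QoS update
        self.priorities[port] = value

    def grant(self, requests):
        # Grant the requesting port with the highest priority;
        # ties broken by port name for determinism
        active = [p for p in requests if p in self.priorities]
        if not active:
            return None
        return max(active, key=lambda p: (self.priorities[p], p))
```

A real arbiter would also guard against starvation (e.g., with aging or bandwidth regulators), but the point here is simply that the same hardware can serve a static policy or a dynamic, software-managed one.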
Debug is another obvious service. Since the NoC sees all data traffic, designers can create probes to inspect data, monitor performance and generate traces for use by CoreSight and other debuggers.
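As a rough picture of what such a probe collects, here is a sketch of a performance monitor that counts transactions and bytes over an observation window — the kind of statistics a hardware probe might export into a trace stream for CoreSight or another debugger. All names here are hypothetical.

```python
# Illustrative sketch of a NoC performance probe (hypothetical names).
# Counts packets and payload bytes on a monitored link, then reports
# per-window statistics, as a trace-generating probe might.

class NocProbe:
    def __init__(self):
        self.transactions = 0
        self.bytes = 0

    def observe(self, packet_bytes):
        # Called once per packet seen on the monitored link
        self.transactions += 1
        self.bytes += packet_bytes

    def sample_and_reset(self, window_seconds):
        # Returns (transaction count, bandwidth in bytes/sec)
        # for the window, then clears the counters
        stats = (self.transactions, self.bytes / window_seconds)
        self.transactions = 0
        self.bytes = 0
        return stats
```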
For safety-critical designs, the NoC must also provide support for FMEDA analyses and for safety mitigation techniques such as parity, ECC, duplication and triple modular redundancy (TMR). A NoC can support system-level ISO 26262 ASIL D safety by connecting a safety monitor through the network to each IP and by supporting isolation of connected IP blocks, so those IPs can be tested while the design is active in an application. For security, NoCs provide firewalls with the same intent as network firewalls, blocking malware activity inside the chip.
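Two of the mitigation techniques named above are simple enough to sketch: even parity, which detects a single flipped bit, and TMR, which masks a single faulty copy by majority vote. Real NoC safety hardware implements these in logic; this is only a toy illustration of the principles.

```python
# Toy illustration of parity and TMR, two safety mitigations
# mentioned above. Hardware does this in gates; this just shows
# the logic.

def parity_bit(word):
    # Even parity: XOR of all data bits, stored alongside the data
    return bin(word).count("1") % 2

def parity_check(word, stored_parity):
    # True if no single-bit error is detected
    return parity_bit(word) == stored_parity

def tmr_vote(a, b, c):
    # Bitwise majority vote across three redundant copies:
    # any single corrupted copy is outvoted by the other two
    return (a & b) | (a & c) | (b & c)
```

Parity only detects an odd number of bit flips; that is why the chapter’s list also includes ECC, which can correct errors rather than merely flag them.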
I’ve written before about cache coherence support in SoCs. The size and complexity of modern SoCs, driven particularly by computer vision and AI, create a need for coherence across many IPs in the chip. Just think of an ADAS object recognition system with a video front-end. Now coherence must span many non-CPU IPs, distributed across a large die. That wide distribution demands NoC interconnect, which must also support cache coherence. Charlie goes into some details on the mechanism, protocols and messaging here.
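For readers new to the topic, here is a minimal sketch of directory-based coherence, the general style of mechanism a coherent interconnect uses to track which IP caches hold a line. The states and messages are heavily simplified and the names are my own, not the actual protocol from the chapter.

```python
# Simplified sketch of directory-based cache coherence
# (hypothetical names; real protocols track more states).

class Directory:
    def __init__(self):
        self.sharers = {}  # address -> set of IP caches holding the line

    def read(self, addr, cache):
        # A read adds the requester to the sharer set for the line
        self.sharers.setdefault(addr, set()).add(cache)
        return sorted(self.sharers[addr])

    def write(self, addr, cache):
        # A write invalidates all other sharers, leaving one owner;
        # returns the caches that received invalidation messages
        others = self.sharers.get(addr, set()) - {cache}
        self.sharers[addr] = {cache}
        return sorted(others)
```

In a large die with coherent non-CPU IPs, those invalidation messages travel over the NoC itself, which is why the interconnect and the coherence protocol have to be designed together.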
NoCs have been doing very well in keeping up with these needs — so well, in fact, that they are now the leading interconnect option among top semiconductor and system builders across many applications, from mobile phones to TVs, cameras, cars, drones, remotes and high-performance servers. You’d be hard-pressed to find an advanced design that isn’t based on NoC technology.
With this lead, NoC providers are being pushed to service more new demands on interconnect. Among these, topology synthesis and floorplan awareness rank high. The bigger these SoCs become, the more NoC teams need automation to test topologies against trial floorplans.
Proliferating AI architectures push the need for more creative interconnect options in grid-, ring- and torus-based accelerators. Broadcasting weights and aggregating reads across these architectures in a single clock cycle requires special support. AI already demands cache coherence support with the controller subsystem, and scalable accelerators want to rely on local cache coherence domains — for 1, 2 or 4 accelerators — connecting at the top level to controllers, making hierarchical cache coherence a reality.
The ASIL D “fail-operational” mechanism I talked about earlier is going to grow. Who wants the whole SoC to fail if one subsystem fails? Remember when you had to restart your browser if one website locked up? That’s Stone Age – we expect modern browsers to be resilient to page failures. SoCs will go the same way. Now system builders want to move beyond error detection to prediction. Sound familiar? This further emphasizes the central role the NoC will play in an SoC, moving from a passive interconnect to the heart of communication, monitoring and control within the chip/3D stack/intelligent system.