I blame it on Henry Ford, William Levitt, and the NY State Board of Regents, among others. We went through a phase with this irresistible urge to stamp out blocks of sameness, creating mass produced clones of everything from cars to houses to students.
Thank goodness, that’s pretty much over. The idea that simplifying system design would quickly produce products of uniform quality had its run – everywhere, that is, except the semiconductor industry. Reflected in slogans like “Copy Exactly” and the combined physics and economics of transistor theory demanding replication by the billions, semiconductors rely on sameness for viability in production.
Levittown, PA, circa 1959 – courtesy Wikipedia
System designers feel no such constraints, however. The last I checked with the folks at Semico Research, the number of unique IP blocks in a large SoC design is approaching 100. Diversity in blocks is on the increase, with CPUs, GPUs, memory, network interfaces, display interfaces, camera interfaces, and more – each with their own unique requirements for interconnect.
The problem lies where diversity of design meets parity of production, a process we oversimplify into the term “integration”. To get IP blocks working together in the design, they obviously have to connect somehow, and hardware teams went to work on optimizing interconnects by the type of block to get the most performance in the least space. For visibility into design, test teams demand that everything be accessible with the same interconnect. For programming, software teams demand that everything be accessible using as few protocols as necessary.
Keeping the interconnect simple proved more difficult than expected. We tried the bus approach; it worked when the IP block count was fairly small, but quickly resulted in conflicts as multiple devices vied for limited resources. We tried JTAG, which served the needs of test but didn’t help with performance. We tried the crossbar matrix; it achieved performance, but became so complex in its own right that it was difficult to implement for larger designs in smaller geometries.
The network-on-chip (NoC) was born to provide an abstraction between IP blocks using an initiator-target strategy. As hardware designers got familiar with the approach, different NoC implementations evolved. This meant one of three things had to happen in larger SoC implementations: 1) design teams had to agree on a NoC and adapt each IP block to it, which in some cases meant a lot of work; or 2) design teams were restricted to selecting only IP blocks that used the NoC of choice; or 3) a top-level NoC communicating between the disparate NoC layers had to evolve, adding latency with a second layer.
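The initiator-target idea is easy to picture in code: initiators (CPUs, DMA engines) issue transactions into the network, which decodes the address and delivers each one to the target that owns that region. The sketch below is purely illustrative – the address map, region sizes, and names are all invented, not taken from any real NoC.

```python
# Hypothetical address map for an illustrative SoC: each entry is
# (base, size, target). Regions and names are invented examples.
ADDRESS_MAP = [
    (0x0000_0000, 0x4000_0000, "DRAM"),
    (0x4000_0000, 0x0001_0000, "UART"),
    (0x5000_0000, 0x0010_0000, "GPU_REGS"),
]

def route(initiator: str, addr: int) -> str:
    """Decode an address to the target that owns it, as a NoC
    routing fabric would for a transaction from `initiator`."""
    for base, size, target in ADDRESS_MAP:
        if base <= addr < base + size:
            return target
    raise ValueError(f"{initiator}: no target decodes {addr:#x}")

print(route("CPU0", 0x4000_0004))  # → UART
```

The point of the abstraction is that each initiator only sees an address space; the network hides how many hops, bridges, or protocol conversions sit between it and the target.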
None of those are great choices for most designs. Arteris thinks it has the solution in a new strategy: NoC compositions. FlexNoC uses the connectivity and address maps from all NoCs in a system to derive a connectivity and address map of each target, as seen from each initiator, in each mode of operation. It also builds a top-level model of the interconnect, which allows a full SoC simulation and can check for degenerate routing loops that could cause deadlocks.
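To make the composition idea concrete, here is a minimal sketch of the general technique – not the FlexNoC algorithm itself. Given the connectivity of each NoC (which initiators and targets it links, and which other NoCs it bridges to), one can derive each initiator's view of the whole SoC by graph reachability, and flag routing cycles between NoCs as deadlock candidates. All node names and edges below are invented for illustration.

```python
from collections import defaultdict

# Directed connectivity graph: node -> nodes it can route toward.
# Nodes are initiators, targets, or NoC instances (invented names).
EDGES = {
    "CPU":       ["noc_cpu"],
    "GPU":       ["noc_media"],
    "noc_cpu":   ["DRAM", "noc_media"],    # noc_cpu bridges into noc_media
    "noc_media": ["DISPLAY", "noc_cpu"],   # back-edge: a routing loop
}

def reachable(src: str) -> set:
    """Everything reachable from src: the initiator's view of the SoC."""
    seen, stack = set(), [src]
    while stack:
        for nxt in EDGES.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def has_routing_cycle() -> bool:
    """Depth-first search with coloring; a gray-node revisit is a cycle,
    i.e. a degenerate routing loop that could deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for nxt in EDGES.get(node, []):
            if color[nxt] == GRAY:
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(EDGES))
```

Run on this toy graph, `reachable("CPU")` shows the CPU can see DRAM and DISPLAY through the bridged NoCs, and `has_routing_cycle()` reports the noc_cpu ↔ noc_media loop that a composition tool would flag.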
This approach eliminates the dreaded second-layer NoC, and doesn’t require additional bridging, which would add further latency. Background on the NoC composition strategy is available in a new white paper authored by Jonah Probell, senior solutions architect at Arteris.
Rather than enforcing stiff rules on IP design teams in creating interconnects, or limiting their choices of third-party IP, NoC compositions could ease the process of generating high-performance interconnect between disparate IP subsystems within an SoC.