How to Bring Coherency to the World of Cache Memory
by Tom Simon on 07-11-2016 at 12:00 pm

As the size and complexity of system-on-chip (SoC) designs have rapidly expanded in recent years, the need to use cache memory to improve throughput and reduce power has increased as well. Originally, cache memory was used to prevent what was then a single processor from making expensive off-chip accesses for program or data memory. With the advent of multi-core processors, caches began to play an essential role in enabling rapid sharing and exchange of data between the cores. Without caches, many of the benefits of a multi-core architecture would be lost to the inefficiencies of main memory access.
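
To make the role of coherency concrete, here is a minimal C sketch, my own illustration rather than anything from Arteris, of two threads on different cores updating a shared counter. The hardware coherency protocol is what keeps each core's cached copy of the line consistent; the atomic operation only orders the updates.

/* Minimal sketch: two threads on different cores update a shared counter.
 * The hardware cache coherency protocol keeps each core's cached copy of
 * `counter` consistent; the atomic add only orders the updates.
 * Build with: cc -pthread example.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        atomic_fetch_add(&counter, 1);  /* the cache line migrates between cores */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", (long)counter);  /* 2000000, thanks to coherent caches */
    return 0;
}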

As a result, processors in multi-core chips are built with cache coherent memory interfaces. Over time many new IP blocks, such as PCIe controllers, have been developed as part of SoC ecosystems, and many have support for cache coherency. There are, of course, multiple implementations of cache coherency, and even within a given interface there are parameters that can affect interoperability. In many cases there are good reasons for differing cache coherency protocols; however, this diversity of choices has stymied SoC architects and designers.

Recently I wrote about Arteris and their new Ncore cache coherent network, which can link together IP blocks that support a variety of cache coherency protocols. Naturally, Ncore supports ARM's AMBA ACE protocol. ARM sees the Ncore offering from Arteris as an efficient means to link together IP that uses heterogeneous cache protocols. This is great for cache-coherent IP going into SoCs, but what about IP that is still necessary but has no cache support?

Well, some of the strongest interest in Ncore apparently has come from SoC companies that are faced with integrating non-cache-coherent blocks into their designs. Next I'll discuss how Ncore can be used to provide all the advantages of cache coherency to those blocks.

Ncore uses the Arteris FlexNoC as a transport layer, which provides tremendous flexibility in allocating resources for cache data transfers and also carries the traffic of the cache coherent agents. For blocks that already have a local cache, Ncore provides a protocol interface along with logic units for managing coherency. IP blocks with only a traditional memory interface can use a non-coherent bridge provided by Ncore, and proxy caches can be synthesized to meet each IP block's needs.
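
As a rough illustration of the integration choices just described, the C sketch below models the three kinds of attachment. The type names, field names, and block names are hypothetical and are not the actual Arteris configuration interface.

/* Hypothetical sketch of the integration choices described above; the names
 * and fields are illustrative, not the Arteris Ncore configuration API. */
#include <stdio.h>

typedef enum {
    ATTACH_COHERENT_AGENT,     /* block already has a local cache: protocol interface */
    ATTACH_NONCOHERENT_BRIDGE, /* plain memory interface: bridge into the coherent domain */
    ATTACH_PROXY_CACHE         /* synthesize a proxy cache sized to the block's needs */
} attach_kind_t;

typedef struct {
    const char   *block_name;     /* e.g. "pcie0", "dsp0" -- illustrative only */
    attach_kind_t attachment;
    unsigned      proxy_cache_kb; /* only meaningful for ATTACH_PROXY_CACHE */
} ncore_port_cfg_t;

/* One possible description of a mixed SoC: a coherent CPU cluster, a PCIe
 * controller behind a non-coherent bridge, and a DSP given a proxy cache. */
static const ncore_port_cfg_t example_ports[] = {
    { "cpu_cluster", ATTACH_COHERENT_AGENT,     0   },
    { "pcie0",       ATTACH_NONCOHERENT_BRIDGE, 0   },
    { "dsp0",        ATTACH_PROXY_CACHE,        512 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof example_ports / sizeof example_ports[0]; i++)
        printf("%-12s attach=%d proxy_cache=%uKB\n",
               example_ports[i].block_name,
               (int)example_ports[i].attachment,
               example_ports[i].proxy_cache_kb);
    return 0;
}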

The Ncore non-coherent bridge translates non-coherent transactions into IO-coherent ones. Multiple non-coherent data channels can be connected to a single bridge, allowing transactions to be aggregated for greater efficiency. Ncore proxy caches have read pre-fetch, write merging, and ordering capabilities, and are configurable up to 1MB per port. Both the MSI and a subset of the MEI coherence models are supported.
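
For readers less familiar with the coherence models mentioned above, the following C sketch shows the per-line state transitions of the simpler MSI protocol. It is a generic textbook illustration, not the Ncore proxy-cache implementation, which also handles prefetch, write merging, and ordering.

/* Generic MSI state machine sketch: per-line state transitions only. */
#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;

typedef enum {
    CPU_READ,     /* local block reads the line        */
    CPU_WRITE,    /* local block writes the line       */
    SNOOP_READ,   /* another agent reads the line      */
    SNOOP_WRITE   /* another agent wants exclusive use */
} msi_event_t;

static msi_state_t msi_next(msi_state_t s, msi_event_t e)
{
    switch (s) {
    case INVALID:
        if (e == CPU_READ)  return SHARED;    /* fetch a shared copy         */
        if (e == CPU_WRITE) return MODIFIED;  /* fetch with exclusive rights */
        return INVALID;
    case SHARED:
        if (e == CPU_WRITE)   return MODIFIED; /* upgrade to exclusive       */
        if (e == SNOOP_WRITE) return INVALID;  /* another agent takes it     */
        return SHARED;
    case MODIFIED:
        if (e == SNOOP_READ)  return SHARED;   /* write back, keep a copy    */
        if (e == SNOOP_WRITE) return INVALID;  /* write back, give it up     */
        return MODIFIED;
    }
    return INVALID;
}

int main(void)
{
    msi_state_t s = INVALID;
    s = msi_next(s, CPU_READ);    /* INVALID  -> SHARED   */
    s = msi_next(s, CPU_WRITE);   /* SHARED   -> MODIFIED */
    s = msi_next(s, SNOOP_READ);  /* MODIFIED -> SHARED   */
    printf("final state = %d\n", (int)s);
    return 0;
}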

From the perspective of the non-cache-coherent IP, it is still talking to ordinary external memory, but in actuality Ncore presents the block to the rest of the coherent network as a fully cache coherent agent. Ncore allows the SoC architect to tune the parameters of the cache bridge to ensure optimal operation, and Ncore and FlexNoC come with a fully integrated and sophisticated design suite to tailor the system to the SoC's power, area, and performance requirements.

With the addition of Ncore, Arteris is now in the enviable position of offering IP for a unified SoC interconnect. Using one underlying transport layer for both coherent and non-coherent SoC data transfers lets architects build in the optimal interconnect resources. This approach maximizes utilization of chip real estate while ensuring sufficient throughput for all data requirements. For more information on Arteris and their Ncore cache coherent network IP, go to their website.
