Top Mobile OEM Uses NetSpeed to Boost Its Next Gen Application Processor
by Eric Esteve on 04-20-2016 at 12:00 pm

The smartphone segment is certainly the most competitive market for chip makers today, and the yearly product launch cadence puts a lot of pressure on the application processor design cycle. End users expect higher image definition, better sound quality, and ever faster and more complex applications, which push the limits of application processor performance in terms of higher frequency, lower latency, and reduced power consumption. The race for ever better performance also translates into ever more cores, both CPU and GPU.

Optimizing processing by integrating cache memory is a well-known architecture, but the multiplication of cores creates a new challenge: cache coherency. Because memory has to be shared between many cores (6 GPU cores and 2 CPU cores in the picture below), when one core reads a given memory location after another core has written to that same location, the read must return the last written value, not an older one. You may define cache coherency as the ability to maintain consistency between the caches and memory. Cache coherency adds to design complexity (a dedicated function has to be developed) and has a major impact on overall system performance; that is why it will become a must-have function in complex multi-core SoCs, even in consumer and mobile applications, as it already is today in networking and data centers.
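
To make the requirement concrete, here is a small, purely illustrative Python sketch of a write-invalidate scheme: each core keeps a private copy of a memory line, and a write by one core invalidates the other copies so that the next read returns the latest value. The class names and addresses are hypothetical, and the model is far simpler than the coherency protocols used in real SoCs.

```python
# Toy illustration of the coherency problem: two private caches holding the
# same address, with a simple write-invalidate rule so a read after a remote
# write returns the latest value instead of a stale copy.

class SharedMemory:
    def __init__(self):
        self.store = {}

    def read(self, addr):
        return self.store.get(addr, 0)

    def write(self, addr, value):
        self.store[addr] = value


class CoreCache:
    def __init__(self, memory, peers):
        self.memory = memory
        self.peers = peers          # other caches that must be kept coherent
        self.lines = {}             # addr -> cached value

    def read(self, addr):
        # Hit in the private cache if the line is present, else fetch from memory.
        if addr not in self.lines:
            self.lines[addr] = self.memory.read(addr)
        return self.lines[addr]

    def write(self, addr, value):
        # Write through to memory and invalidate every other copy of the line,
        # so the next read by any core sees the new value.
        self.lines[addr] = value
        self.memory.write(addr, value)
        for peer in self.peers:
            peer.lines.pop(addr, None)


mem = SharedMemory()
cpu = CoreCache(mem, peers=[])
gpu = CoreCache(mem, peers=[])
cpu.peers.append(gpu)
gpu.peers.append(cpu)

gpu.read(0x100)                 # GPU caches the old value (0)
cpu.write(0x100, 42)            # CPU writes; GPU's stale copy is invalidated
assert gpu.read(0x100) == 42    # the read returns the last written value
```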

One of NetSpeed's customers is a mobile OEM developing its own application processor (AP), which it then integrates into its flagship smartphone product. This latest-generation application processor was defined as a future-proof platform. To ensure that the processor would be adaptable for future generations, the specification required support for cache coherency. In light of a long list of stringent requirements (2x performance, lower power, complex QoS requirements), the team was relieved that it was not locked into a legacy design or forced into using a low-bandwidth crossbar-based interconnect.

There is only one commercially available on-chip interconnect solution capable of satisfying both coherent and non-coherent requirements, and that is NetSpeed's Gemini NoC IP. By selecting NetSpeed IP, the company was able to implement a single solution today that satisfies the current requirements of a non-coherent design and the future requirements of coherent designs, and even of designs with a mix of coherent and non-coherent traffic. This approach allowed the company to minimize the risk for future SoC designs: when the design team later needs to implement a new cache-coherent architecture, it will be working with an interconnect IP that is already known and well understood.

Not all interconnects (or NoCs) are created equal. NetSpeed provided a physically aware interconnect synthesis engine, an innovative solution that optimizes the interconnect architecture based on workload models and delivers the right topology within minutes. Implementing NetSpeed's NoC led to a new-generation SoC that delivers 20% lower latency and 15% higher maximum frequency than the targets set by the customer. Because NetSpeed synthesizes a pre-verified interconnect design within minutes, the direct impact on the design schedule is to shrink six months of analysis down to a few hours.
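
As a rough illustration of what "workload-driven" means, the sketch below sizes the links of a hypothetical topology from a traffic matrix and flags any oversubscribed link. All agent names, routes, and bandwidth figures are invented for the example; NetSpeed's actual synthesis engine is far more sophisticated and also takes physical placement into account.

```python
# Minimal sketch of workload-driven link sizing (not NetSpeed's engine):
# given a traffic matrix and a fixed route per (source, destination) pair,
# accumulate the bandwidth demand on each link and check it against capacity.

# Hypothetical workload: sustained bandwidth in GB/s between agents.
traffic_gbps = {
    ("cpu_cluster", "mem_ctrl"):    12.0,
    ("gpu_cluster", "mem_ctrl"):    20.0,
    ("isp",         "mem_ctrl"):     6.0,
    ("gpu_cluster", "cpu_cluster"):  4.0,
}

# Hypothetical topology: each (src, dst) pair is mapped to a list of links.
routes = {
    ("cpu_cluster", "mem_ctrl"):    ["cpu-r0", "r0-mem"],
    ("gpu_cluster", "mem_ctrl"):    ["gpu-r1", "r1-r0", "r0-mem"],
    ("isp",         "mem_ctrl"):    ["isp-r1", "r1-r0", "r0-mem"],
    ("gpu_cluster", "cpu_cluster"): ["gpu-r1", "r1-r0", "r0-cpu"],
}

link_capacity_gbps = 32.0   # assumed capacity of one link at the target frequency

# Accumulate demand per link, then report utilization.
demand = {}
for flow, gbps in traffic_gbps.items():
    for link in routes[flow]:
        demand[link] = demand.get(link, 0.0) + gbps

for link, gbps in sorted(demand.items()):
    status = "OK" if gbps <= link_capacity_gbps else "OVERSUBSCRIBED"
    print(f"{link:8s} {gbps:5.1f} GB/s ({status})")
```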

Designing a heterogeneous multi-core SoC for mobile requires meeting very aggressive targets for power consumption as well as for Quality of Service (QoS). QoS is not the same as performance (in terms of frames per second or MIPS), but mediocre QoS can degrade an otherwise excellent on-paper performance figure. For example, NetSpeed's Gemini NoC allows building a real-time bandwidth allocation mechanism through automated virtual channel assignment. The number of wires after P&R directly impacts the SoC's power consumption, and also its performance, because of wiring congestion. This is why obtaining 65% fewer wires next to the memory controller is such an important result: not only does power consumption decrease, but the easier routability in this critical area also helps meet more stringent timing constraints.
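
As a generic illustration of how bandwidth can be shared across virtual channels, the sketch below implements a simple weighted round-robin arbiter. The channel names and weights are hypothetical, and this is not NetSpeed's actual QoS mechanism; it only shows the principle of giving each traffic class a guaranteed share of the link.

```python
from collections import deque

# Minimal sketch of weighted bandwidth allocation across virtual channels:
# a plain weighted round-robin arbiter (illustrative only). Each VC receives
# grants in proportion to its weight.

class WeightedRoundRobinArbiter:
    def __init__(self, weights):
        # weights: virtual channel name -> relative share of link bandwidth
        self.weights = weights
        self.queues = {vc: deque() for vc in weights}

    def enqueue(self, vc, packet):
        self.queues[vc].append(packet)

    def arbitration_round(self):
        # One round: each non-empty VC gets up to `weight` grants, so link
        # bandwidth is shared roughly in proportion to the weights.
        grants = []
        for vc, weight in self.weights.items():
            for _ in range(weight):
                if self.queues[vc]:
                    grants.append((vc, self.queues[vc].popleft()))
        return grants


# Hypothetical traffic classes: real-time display traffic vs. bulk GPU traffic.
arb = WeightedRoundRobinArbiter({"realtime_display": 3, "bulk_gpu": 1})
for i in range(8):
    arb.enqueue("realtime_display", f"disp-{i}")
    arb.enqueue("bulk_gpu", f"gpu-{i}")

for _ in range(2):
    print([vc for vc, _ in arb.arbitration_round()])
# Real-time traffic gets about 3x the grants of bulk traffic per round.
```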

Using NetSpeed's NoC solution to design this heterogeneous multi-core SoC application processor for mobile helped meet or exceed the aggressive time-to-market (TTM) requirement for this kind of SoC, improve QoS, push the maximum frequency limit, and prepare for the future by integrating a cache-coherent NoC. Finally, it helped NetSpeed's customer launch an AP SoC with power consumption behavior in line with mobile customers' expectations.

This blog is extracted from NetSpeed's "Mobile" Success Stories. You can read more about this story, as well as the Data Center AP, Automotive SoC, Networking, Digital Home SoC and Data Center Storage stories, here.

From Eric Esteve, IPNEST
