IEDM 2017 Preview
by Scotten Jones on 10-20-2017 at 7:00 am

The 63rd annual IEDM (International Electron Devices Meeting) will be held December 2nd through 6th in San Francisco. In my opinion, IEDM is one of, if not the, premier conferences on leading-edge semiconductor technology. I will be attending the conference again this year and providing coverage for SemiWiki. As a member of the press, I got some preview materials today and wanted to share some of them with you.

Leading Edge Logic
As anyone who has read my articles on SemiWiki knows, I follow the latest advances in logic process technology very closely. In the Platform Technology Session there will be papers from Intel on their 10nm technology and from GLOBALFOUNDRIES on their 7nm technology, and I am really looking forward to these papers:

  • Intel: Intel researchers will present a 10nm logic technology platform with excellent transistor and interconnect performance and aggressive design-rule scaling. They demonstrated its versatility by building a 204Mb SRAM having three different types of memory cells: a high-density 0.0312µm² cell, a low-voltage 0.0367µm² cell, and a high-performance 0.0441µm² cell (a rough raw-cell-area estimate from these figures follows this list). The platform features 3rd-generation FinFETs fabricated with self-aligned quadruple patterning (SAQP) for critical layers, leading to a 7nm fin width at a 34nm pitch, and a 46nm fin height; a 5th-generation high-k metal gate; and 7th-generation strained silicon. There are 12 metal layers of interconnect, with cobalt wires in the lowest two layers that yield a 5-10x improvement in electromigration and a 2x reduction in via resistance. NMOS and PMOS current is 71% and 35% greater, respectively, compared to 14nm FinFET transistors. Metal stacks with four or six workfunctions enable operation at different threshold voltages, and novel self-aligned gate contacts over active gates are employed. (Paper 29.1, “A 10nm High Performance and Low-Power CMOS Technology Featuring 3rd-Generation FinFET Transistors, Self-Aligned Quad Patterning, Contact Over Active Gate and Cobalt Local Interconnects,” C. Auth et al, Intel)
  • GLOBALFOUNDRIES (GF): GF researchers will present a fully integrated 7nm CMOS platform that provides significant density scaling and performance improvements over 14nm. It features a 3rd-generation FinFET architecture with SAQP used for fin formation, and self-aligned double patterning for metallization. The 7nm platform features an improvement of 2.8x in routed logic density, along with impressive performance/power responses versus 14nm: a >40% performance increase at a fixed power, or alternatively a power reduction of >55% at a fixed frequency. The researchers demonstrated the platform by using it to build an incredibly small 0.0269µm² SRAM cell. Multiple Cu/low-k BEOL stacks are possible for a range of system-on-chip (SoC) applications, and a unique multi-workfunction process makes possible a range of threshold voltages for diverse applications. A complete set of foundation and complex IP (intellectual property) is available in this advanced CMOS platform for both high-performance computing and mobile applications. (Paper 29.5, “A 7nm CMOS Technology Platform for Mobile and High-Performance Compute Applications,” S. Narasimha et al, Globalfoundries)
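
To put those SRAM cell sizes in perspective, here is a back-of-the-envelope estimate of the raw bit-cell area of the 204Mb array from the Intel entry above. This is only a sketch: the even split across the three cell types is a hypothetical assumption, and the calculation ignores array efficiency, peripheral circuitry and redundancy, so the real silicon area would be noticeably larger.

```python
# Back-of-envelope raw bit-cell area for the 204Mb SRAM described in the
# Intel paper, using the three published cell sizes. Ignores peripheral
# circuitry, redundancy and array efficiency; the even three-way split
# across cell types is purely a hypothetical assumption for illustration.

MB = 1024 * 1024  # bits per "Mb" (assumption; marketing counts sometimes use 10^6)

cells_um2 = {
    "high-density": 0.0312,       # µm² per bit
    "low-voltage": 0.0367,
    "high-performance": 0.0441,
}

total_bits = 204 * MB
bits_per_type = total_bits / len(cells_um2)  # hypothetical even split

raw_area_mm2 = sum(area * bits_per_type for area in cells_um2.values()) / 1e6

for name, area in cells_um2.items():
    print(f"{name:>16}: {area * bits_per_type / 1e6:.2f} mm² of raw cells")
print(f"total raw cell area: {raw_area_mm2:.2f} mm²")
```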

Silicon Photonics
Silicon Photonics is an area of great interest in the industry today and in my cost modeling business I am getting a lot of interest in Silicon Photonics costs. Session 34 will focus on Silicon Photonics.

Silicon Photonics: Current Status and Perspectives (Session #34) – Silicon photonics integrated circuits consist of devices such as optical transceivers, modulators, phase shifters and couplers, operating at >50 GHz for use in next-generation data centers. This session describes the latest in photonics IC advances in state-of-the-art 300mm fabrication technology; integrated nano-photonic crystals with fJ/bit optical links; and advanced packaging concepts for the specialized form factors this technology requires.

  • “Developments in 300mm Silicon Photonics Using Traditional CMOS Fabrication Methods and Materials,” by Charles Baudot et al, STMicroelectronics
  • “Reliable 50Gb/s Silicon Photonics Platform for Next-Generation Data Center Optical Interconnects,” by Philippe Absil et al, Imec
  • “Advanced Silicon Photonics Technology Platform Leveraging the Semiconductor Supply Chain,” by Peter De Dobbelaere, Luxtera
  • “Femtojoule-per-Bit Integrated Nanophotonics and Challenge for Optical Computation,” by Masaya Notomi et al, NTT Corporation
  • “Advanced Devices and Packaging of Si-Photonics-Based Optical Transceiver for Optical Interconnection,” by K. Kurata et al, Photonics Electronics Technology Research Association

Nanowires
With FinFETs coming to the end of their scaling potential, nanowires are garnering a lot of interest as the next-generation technology. In Session 37 there will be a couple of papers on nanowires, including:

First Circuit Built With Stacked Si Nanowire Transistors: As scaling continues, gate-all-around MOSFETs are seen as a promising alternative to FinFETs. They are nanoscale devices in which the gate is completely wrapped around a nanowire, which serves as the transistor channel. Nanosheets, meanwhile, are sheets of arrays of GAA nanowires. A talk by Imec and Applied Materials will describe great progress in several key areas to make vertically stacked GAA nanowire and/or nanosheet MOSFETs practical. The team built the first functional ring oscillator test circuits ever demonstrated using stacked Si nanowire FETs, with devices that featured in-situ doped source/drain structures and dual-workfunction metal gates. An SiN STI liner was used to suppress oxidation-induced fin deformation and improve shape control; a high-selectivity etch was used for nanowire/nanosheet release and inner spacer cavity formation with no silicon reflow; and a new metallization process for n-type devices led to greater tunability of threshold voltage. (Paper 37.4, “Vertically Stacked Gate-All-Around Si Nanowire Transistors: Key Process Optimizations and Ring Oscillator Demonstration,” H. Mertens et al, Imec/Applied Materials)

Conclusion

These papers are just a sampling of the presentations that are of interest to me. I highly recommend attending IEDM for anyone interested in staying current on the state of the art.

https://ieee-iedm.org/


How standard-cell based eFPGA IP can offer maximum safety, flexibility and TTM?
by Eric Esteve on 10-19-2017 at 12:00 pm

Writing a white paper is never tedious, and when the product or the technology is emerging, it can become fascinating. That was the case for the white paper I have written for Menta, “How Standard Cell Based eFPGA IP are Offering Maximum Flexibility to New System-on-Chip Generation”. eFPGA technology is not exactly emerging anymore, but it's fascinating to describe such a product: if you want to clearly explain eFPGA technology and highlight the differentiators linked with a specific approach, you must be subtle and crystal clear!


Let's assume that you need to provide flexibility to a system. Before the emergence of eFPGA, the only way was to implement the whole system in an FPGA, or to add a programmable companion device (the FPGA) alongside an ASIC (the SoC). Menta has designed a family of FPGA blocks (the eFPGA) which can be integrated like any other hard IP into an ASIC. It's important to realize that designing an eFPGA IP product is not just cutting a block out of an FPGA and delivering it as-is to an ASIC customer.

eFPGA is a new IP family that a designer will integrate into a SoC, and in this case every IP instance may be unique. Menta offers the SoC architect the possibility to define a specific eFPGA where logic and memory size and MAC and DSP count are completely customizable, as well as the possibility to include customer-defined blocks inside this eFPGA.

Menta has recently completed its 4th generation of eFPGA IP (the company was founded 10 years ago), and the vendor offers some very specific features that make its solution more attractive than those offered by the competition. Why is Menta eFPGA IP more attractive? We will see that the solution is more robust, the architecture provides maximum flexibility, and porting to a different technology node is safer and faster, allowing faster time-to-market. The solution also allows smoother integration into the EDA flow, including easier testability.

While most FPGAs (and most eFPGAs) are programmed via internal SRAM, Menta has decided to rely on D flip-flops (DFFs) for programming. This approach makes the eFPGA safer, for two reasons. First, while SRAM is known to be prone to single-event upsets (SEU), DFFs show better SEU immunity. The reason is very simple: the most significant factor is the physical size of the transistor geometries (smaller means less SEU energy is required to trigger them), and the DFF geometry is larger than the equivalent storage cell in SRAM. That's why the Menta eFPGA architecture is well suited for automotive applications, for example.

The second argument for better safety is that programming SRAM has to be designed with a full-custom approach, requiring new characterization every time you change technology node, whereas Menta uses DFFs from a standard cell library that is already characterized by the foundry or the library vendor.

In the white paper, you will learn why the Menta eFPGA architecture provides maximum flexibility: the designer can include logic, memory, and internal I/O banks, infer pre-defined (by Menta) DSP primitives, or include custom (designer-made) DSP blocks.

Really, the key differentiator is linked with the decision to base the eFPGA architecture only on standard blocks. The logic is based on standard cells, as are the DSP primitives and internal I/O banks. Once Menta has validated the eFPGA IP on a certain technology node, any customer-defined eFPGA will be correct by construction. When a “mega cell” is made only of standard cells characterized by the foundry or the library vendor, the two direct consequences are safety and ease of use.

Safety, because there is no risk of failure when using a pre-characterized library; and ease of use, because the “mega cell” will integrate smoothly into the EDA flow. All required models and deliverables are already provided and guaranteed accurate by standard-cell library providers. There is a subtler consequence, which may have a significant impact on safety and time-to-market. If the SoC customer, for any reason, has to target a different technology node, the porting is accelerated thanks to the absence of full-custom blocks: there is no need for a complete characterization, as this has already been done by the library provider. Having no full-custom blocks also greatly minimizes the risk of failure during the porting.



Menta has developed a patented technology (System and Method for Testing and Configuration of an FPGA) to offer the designer a standard DFT approach. The eFPGA testability is based on multiplexed scan, using a boundary-scan isolation wrapper. Once again, the selected approach allows a standard design flow to be followed.

By reading this white paper, you will also learn about the specific design flow used to define the eFPGA itself. No surprise: this flow interfaces via industry standards (Verilog, SDF annotation, GDSII, etc.) with the SoC integration flow from the EDA vendor.

As far as I am concerned, I really think that the semiconductor industry will adopt eFPGA whenever adding flexibility to a SoC is needed. The multiple benefits in terms of solution cost and power consumption should be the drivers, and Menta is well positioned to get a good share of this new IP market, thanks to the key differentiators offered by its architecture.

You can find the white paper here: http://www.menta-efpga.com

From Eric Esteve from IPnest


Accelerating Accelerators
by Bernard Murphy on 10-19-2017 at 7:00 am

Accelerating compute-intensive software functions by moving them into hardware has a long history, stretching back (as far as I remember) to floating-point co-processors. Modern SoCs are stuffed with these accelerators, from signal processors to graphics processors, codecs and many more functions. All of these accelerators work extremely well for functions with broad application, where any need for on-going configurability can be handled through switches or firmware/software upgrades in aspects which don't significantly compromise performance.


But that constraint doesn’t always fit well with needs in the very dynamic markets which are common today, where competitive differentiation continually changes targets for solution-providers. That’s why FPGAs have become hot in big datacenter applications. Both Amazon Web Services (AWS) and Microsoft Azure have announced FPGA-based capabilities within their datacenters, for differentiated high-speed networking and to provide customizable high-performance options to cloud customers. The value proposition is simple – as demands change, the FPGA can be adapted more quickly than you could build a new ASIC, and often more cheaply given relatively low volumes in these applications.

Naturally there is a middle ground between ASIC and FPGA options. FPGA SoCs might be an answer in some cases, but when you're stretching for a differentiated edge or wanting to offer an SoC solution to those who are, it's not hard to imagine cases where an application-specific ASIC shell around an embedded FPGA core might be just right. You get all the flexibility of the FPGA core, combined with the high performance plus low power and area of the fit-to-purpose ASIC functionality around the core. Target applications include data-intensive AI/machine learning, 5G wireless, automotive ADAS, and datacenter and networking applications.


As in any good FPGA, you expect support for logic and ALUs, DSP functions, as well as block RAMs (BRAMs) and smaller RAM blocks (LRAMs). When you want to customize the embedded FPGA (eFPGA) in your SoC, you go through the usual design cycle to map a logic design onto the primitives in the eFPGA. If you are using the Achronix Speedcore technology, you will use their ACE design tools.

Now take this a step further. When you write a piece of software, you can profile it to find areas where some additional focus could greatly speed up performance. The same concept can apply in your eFPGA design. By profiling benchmark test cases (Achronix works collaboratively with customers to do this), you can identify performance bottlenecks. Based on this analysis, Achronix can then build custom blocks for certain functions, which can be tiled into the eFPGA. Now you have the advantage of the high-performance shell along with configurability in the eFPGA, yet with significantly better PPA than you would get in a conventional eFPGA.
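
The software analogy is easy to make concrete. Below is a minimal, hypothetical Python sketch using the standard-library cProfile module: the workload and function names are invented, but the profile output shows how a single hot function (here a naive matrix multiply) dominates cumulative time and therefore marks itself as the obvious candidate to pull into a custom block.

```python
# Toy illustration of the profiling idea described above, using Python's
# built-in cProfile. The functions and workload are hypothetical; the point
# is that a profile quickly exposes which operation dominates and is
# therefore the best candidate to move into a custom hardware block.
import cProfile
import io
import pstats


def matmul(a, b):
    """Naive matrix multiply - the kind of hot spot you might offload."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]


def preprocess(n):
    """Cheap bookkeeping that stays in software."""
    return [[(i * j) % 7 for j in range(n)] for i in range(n)]


def workload():
    a = preprocess(60)
    b = preprocess(60)
    for _ in range(5):
        matmul(a, b)


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # matmul should dominate the cumulative time
```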


Achronix offer several application examples where the benefit of their Speedcore Custom Blocks is quite obvious. The first is for a YOLO (you only look once) function supporting a convolutional neural net (CNN) in real-time object detection. By converting a matrix-multiply operation to a custom block they have been able to reduce the size of the eFPGA by 35%.


In another example for networking, they have been able to build custom functions which can examine network traffic at line speed (a 400Gb/s line rate), for example to do header inspection. In this example, custom packet segment extraction/insertion blocks provide the acceleration.


Another especially interesting example is use of this capability in building TCAMs. These functions are widely used in networking but are typically considered very expensive to implement in standalone FPGAs. However they can be very feasible in application-specific uses in an eFPGA when implemented as Custom Blocks.
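
For readers less familiar with TCAMs, the following toy Python model shows what the function is: every entry stores a pattern of 0, 1 and don't-care bits, and a lookup returns the highest-priority matching entry. This is only an illustration of the behavior, not of how Achronix implements TCAM Custom Blocks; a hardware TCAM compares all entries in parallel in a single cycle rather than looping.

```python
# Minimal software model of a TCAM (ternary content-addressable memory).
# Patterns, table contents and the 8-bit width are invented for illustration.

def tcam_entry(pattern):
    """Compile a pattern like '1101XXXX' into (value, mask) integers."""
    value = int(pattern.replace("X", "0"), 2)
    mask = int("".join("0" if c == "X" else "1" for c in pattern), 2)
    return value, mask


def tcam_lookup(table, key):
    """Return the action of the first entry whose cared-about bits match."""
    for value, mask, action in table:
        if key & mask == value:
            return action
    return None


# Hypothetical 8-bit routing-style table, highest priority first.
table = [
    (*tcam_entry("110100XX"), "port A"),   # most specific prefix
    (*tcam_entry("1101XXXX"), "port B"),
    (*tcam_entry("XXXXXXXX"), "default"),  # catch-all
]

print(tcam_lookup(table, 0b11010010))  # -> port A
print(tcam_lookup(table, 0b11011111))  # -> port B
print(tcam_lookup(table, 0b00000001))  # -> default
```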


One final example – string search. This has many applications, not least in genome matching, another hot area. (If you don't like that example, think of how many programs contain string-equal operations, how that operation dominates many profiles and is therefore likely to be a bottleneck in real-time matching on streams or fast matching on giant datasets.) FPGAs are already used to accelerate these operations but are still not fast enough, which makes this a great candidate for Custom Block acceleration. Achronix show an example where they can reduce the time to do a match from 72 cycles to 1 cycle and massively reduce area.

No big surprise in a way – we all know that custom is going to be much faster and smaller than FPGA. The difference here is that now you can embed custom in eFPGA – pretty neat. Of course, this takes work. Robert Blake, the CEO of Achronix, told me that you might typically expect a 6-month cycle for profiling and custom block development. And there will be an NRE (you didn’t think it would be free, did you?). But if it can deliver this kind of advantage, it may be worth the investment.


Achronix's business is growing very nicely, thanks to developments in each of their FPGA accelerator lines. They expect to close 2017 at >$100M, with a strong pipeline, apparently well-balanced between their standalone FPGA (Speedster) and embedded applications. Speedcore, introduced to customers in 2015, is their fastest-growing product line and is already in production on TSMC 16nm, and at testchip and first designs in TSMC 7nm.

You can read more HERE. You can also see Achronix present at ARM TechCon on:

· Reprogrammable Logic in an Arm-Based SoC, presented by Kent Orthner, Systems Architect
· Smaller, Faster and Programmable – Customizing Your On-Chip FPGA, presented by Steve Mensor, VP of Marketing
· Customize Your eFPGA – Control Your Destiny for Machine Learning, 5G and Beyond, presented by Kent Orthner, Systems Architect


Rethinking IP Lifecycle Management
by Daniel Payne on 10-18-2017 at 12:00 pm

We recently saw both Apple and Samsung introduce new smart phones, and realize that the annual race to introduce sophisticated devices that are attractive and differentiated is highly competitive. If either of these companies misses a market window then fortunes can quickly change. SoCs with billions of transistors like smart phone processors make semiconductor IP re-use a central approach in design productivity, instead of starting from scratch for each new generation.

Tracking and managing hundreds of IP blocks in an SoC is a task best suited for an optimized tool, not using an Excel spreadsheet and manual email notifications. I’ve written before about Methodics and how their IP Lifecycle Management (IPLM) approach in the Percipient tool is such an optimized tool for IP-centric design flows. One aspect of Percipient that is worthy of attention is their Graph Database (white paper here) which is the key technology for fast and seamless IP reuse.

My first introduction to Relational Database Management Systems (RDBMS) was in the 1990s, while learning MySQL and PHP to build custom, data-driven web sites. Oracle now owns MySQL, and it powers many web sites today, like WordPress sites with some 150,000,000 users. Tables are used in MySQL to store rows of information, where each row has multiple columns and some index field. Tables can be related to each other by joining them, which enables complex queries.

Percipient instead uses a graph database, which stores data using nodes and relationships with key-value properties. A relationship connects two nodes to one another, and relationships are both typed and directed. The beauty of this graph database approach is that relationships can be traversed in either direction. SoCs use hierarchy to define how IP is placed, and a graph database models hierarchy natively.

In contrast, an RDBMS doesn't natively support or use hierarchy at all. Sure, you could use a series of MySQL database tables to store and traverse all of your IP, but the performance would begin to suffer as the data scales up in size.

Related blog – Something new in IP Lifecycle Management

Each IP block in your system has dependencies; for example, a USB component depends on PDKs, libraries and test-benches. The IPLM tool has to understand and track all of these dependencies efficiently. Your system may even use different versions of the same component in the same design, so knowing how to avoid conflicts is essential. Dependencies map directly into a graph database, so it's straightforward to add, delete or manage conflicts. The Percipient tool is used on SoC hierarchies that contain several hundred nodes, even up to 8 levels of hierarchy.
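
To make the graph idea concrete, here is a small, self-contained Python sketch of an IP dependency graph that can be walked in both directions and checked for version conflicts. The IP names, versions and conflict rule are invented for illustration; this is not Percipient's data model or the Neo4j API, just the underlying pattern.

```python
# Toy graph model of IP dependencies, in the spirit of the description above.
# A real IPLM system (and a graph database such as Neo4j) adds typed
# relationships, properties and persistence on top of the same basic idea.
from collections import defaultdict

# Directed edges: (ip, version) -> list of (dependency, version)
depends_on = {
    ("soc_top", "1.0"): [("usb_ctrl", "2.1"), ("cpu_subsys", "3.0")],
    ("usb_ctrl", "2.1"): [("phy_lib", "1.4"), ("testbench", "0.9")],
    ("cpu_subsys", "3.0"): [("phy_lib", "1.5")],
}

# Reverse edges, so the graph can be walked in either direction
used_by = defaultdict(list)
for parent, children in depends_on.items():
    for child in children:
        used_by[child].append(parent)


def all_dependencies(node, seen=None):
    """Depth-first walk of everything a given IP release pulls in."""
    if seen is None:
        seen = set()
    for child in depends_on.get(node, []):
        if child not in seen:
            seen.add(child)
            all_dependencies(child, seen)
    return seen


def version_conflicts(root):
    """Report IPs that appear with more than one version under a root."""
    versions = defaultdict(set)
    for name, ver in all_dependencies(root):
        versions[name].add(ver)
    return {name: vers for name, vers in versions.items() if len(vers) > 1}


print(all_dependencies(("soc_top", "1.0")))
print(version_conflicts(("soc_top", "1.0")))   # phy_lib pulled in as 1.4 and 1.5
print(used_by[("phy_lib", "1.4")])             # who uses this release?
```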

The team at Methodics chose the Neo4j graph database in their Percipient tool because of its popularity, speed and scalability. The previous generation IPLM tool from Methodics was called ProjectIC and it used SQL tables with PostGres, which worked fine for smaller designs but didn’t scale up with enough speed. Let’s take a quick look at speed comparisons between the older ProjectIC approach and the newest Percipient through the following scatter plot showing response time on the Y-axis in seconds versus calendar time on the X-axis:

Notice the general increase in time to manage IP as the hierarchy grew to six levels and about 290 nodes while using the older ProjectIC tool, then the customer started using Percipient with a graph database which dramatically lowered their response times and continued to scale well. Actual customer usage created this graph while doing a production IC, not a benchmark. Using the graph database approach the customer can now query the status of a hierarchical IP in a workspace, or even view conflicts in just seconds instead of minutes. These speed improvements will scale into hundreds, thousands or millions of nodes.

Related blog – New concepts in semiconductor IP lifecycle management

Summary

Methodics has been offering their ProjectIC IPLM tool for many years, then took the next step and re-engineered their approach to exploit a graph database in their newest Percipient IPLM tool. The speed improvements and scalability with the Neo4j graph database look excellent, which means that SoC designers save time and are more likely to meet critical deadlines.


ARM TechCon 2017 Preview with Mentor!
by Daniel Nenni on 10-18-2017 at 7:00 am

Next week is ARM TechCon which is one of my favorite conferences (SemiWiki is an event partner). This year is lucky number thirteen for ARM TechCon and it includes more than sixty hours of sessions plus more than one hundred ARM partners in the exposition. I will be signing free copies of our new book “Custom SoCs for IoT: Simplified” in the Open-Silicon booth #918. Please stop by Wednesday morning and get a book. It would be a pleasure to meet you!

This year Mentor Graphics, A Siemens Business, is a platinum sponsor and has an impressive line-up. Emulation of course is featured due to the conference emphasis on silicon design to software development. Emulation is a fast growing market and, coincidentally, is the topic of our next book (a collaboration with Mentor) which is due out early next year:

Mentor Graphics delivers the most comprehensive Enterprise Verification Platform™ available for ARM based SoCs and Interfaces: including the Visualizer™ Debug Environment for common debug across simulation, formal, emulation and prototyping, Questa® for high performance simulation, verification management and coverage closure, low-power verification with UPF, CDC, Formal Verification and Veloce® for high-performance system emulation, hardware/software co-verification or integration, system-level prototyping, and power estimation and performance characterization. This comprehensive platform supports UVM. Come check out our latest demos…

Tanner EDA is also featured this year which makes complete sense considering their focus on AMS and MEMs design. I worked with Tanner prior to the acquisition and am a big fan of their tools. I pushed for the acquisition believing that Mentor and Tanner would be a 1+1=3 proposition and I was right, absolutely.

Tanner EDA offers complete design flows for the design, implementation and verification of Analog, Mixed Signal and RF integrated circuits, as well as MEMS. Tanner enables the next generation of IoT edge devices by making it easier for designers of sensors, MEMS and actuators to create custom SoCs…

Low power design has also been a top trending keyword on SemiWiki as it touches just about every market segment, and I do not expect that to ever change. Low power design is also prevalent at Arm events for obvious reasons, which is why Catapult is also featured:

The Catapult® High-Level Synthesis (HLS) and PowerPro® Register Transfer Level (RTL) Low-Power family of products enables ASIC, SoC and FPGA designers to quickly create fully-verified, power-optimized RTL for downstream synthesis and physical design…

And of course Mentor Embedded is featured due to the more than four thousand embedded designers that are expected to attend:

Mentor solutions for Arm® processors enable the development of advanced embedded systems, scalable footprint for Cortex®-M and Cortex®-A applications targeting single to heterogeneous multicore devices for high-performance, power-efficient, secure and safety certified embedded devices. Embedded developers can create systems with the latest Arm processors and micro-controllers with commercially supported and customizable Linux®-based solutions including the industry-leading Sourcery™ CodeBench and Mentor® Embedded Linux products. For real-time systems, developers can take advantage of the small-foot-print and low-power-capable Nucleus® real-time operating system (RTOS)…

See the full Mentor ARM TechCon landing page HERE

See the ARM TechCon Website HERE

See SemiWiki ARM content HERE

See SemiWiki Mentor Content HERE


Getting A Handle On Your Automotive SoCs For ISO 26262
by Mitch Heins on 10-17-2017 at 12:00 pm

When it comes to safety and automotive systems, ISO 26262 is the standard by which system vendors are judged. As with all things, the devil is in the details. To be compliant with the standard, design teams must have a well-defined and rigorous design and validation process in place that copiously documents all the requirements of their system. Additionally, they must also be able to prove that all the system requirements were in fact implemented and validated satisfactorily. These requirements represent how the system should respond when things are going right.

Because safety is involved, ISO 26262 also compels designers to specify how the system will respond when things are not going right. This creates another set of requirements that describes how the system is to respond to both permanent (bugs) and intermittent faults. As with the design requirements, you must also be able to trace these requirements through implementation and verification.

I've been on a lot of software teams, and while there are usually good intentions, it seems that it is in the validation stage where things are usually lacking. Perhaps this is because most of the software with which I've worked hasn't been deemed “safety critical”. Nonetheless, it scares me to death to think about how rigorous a design team needs to be when working on something like autonomous driving systems. Fortunately, Mentor, a Siemens business, has a tool called ReqTracer that can be used to help design teams get a handle on this problem.

ReqTracer is a tool used to manage and track requirements by providing traceability from requirements and plan documents, through implementation and verification. The tool allows an auditor to select and trace a specific requirement to prove that the requirement was indeed implemented and verified by the design team. The tool however is not just useful for auditors. Design teams are finding that it is a great tool to help them organize and communicate between various functional teams that are working on the system.

As an example, system functional requirements are typically generated from a set of system “use cases”. The resulting functional requirements may impact system hardware, software or both. There may also be functional safety requirements of the system that interact with these functional requirements. Per the ISO 26262 standard, all these requirements must be documented along with implementation and verification plans for each requirement. There are teams of people that work on various parts of the design (e.g. system designers, hardware RTL engineers, hardware implementation engineers, software designers, logic validation engineers and the like). For the system to work, all these people must be in sync with the requirements, implementation and validation plans.

ReqTracer is used to organize a design project by gathering information from a wide variety of sources like office documents, requirements databases, design and simulation databases, etc., and then to draw the lines that connect the proverbial dots between individual requirements, design implementation and validation files, and final validation results. Once these relationships have been created, ReqTracer can be used to report on and visualize the development process as it proceeds. Any team member can visualize and trace requirements through to the actual status of implementation and validation for those requirements. A designer can ask, “What should I work on next?” or a quality team can ask, “What tests have not yet been run?” and ReqTracer will highlight those requirements that have been met or not met.
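
As a rough illustration of the kind of traceability data involved, the toy Python model below links requirements to implementation files and test results and then answers the two questions above. The requirement IDs, file names and tests are invented; ReqTracer builds equivalent links automatically from real project documents, design data and verification databases.

```python
# Toy requirements-traceability model illustrating the kind of queries
# described above. All IDs, files and test names are hypothetical.

requirements = {
    "REQ-001": "Brake request shall be asserted within 10 ms of pedal input",
    "REQ-002": "Loss of sensor input shall force the safe state",
    "REQ-003": "Diagnostic status shall be reported every 100 ms",
}

implemented_by = {
    "REQ-001": ["brake_ctrl.sv"],
    "REQ-002": ["fault_monitor.sv", "safe_state_fsm.sv"],
    # REQ-003 has no implementation linked yet
}

verified_by = {
    "REQ-001": [("test_brake_latency", "PASS")],
    "REQ-002": [("test_sensor_loss", "FAIL")],
}


def coverage_report():
    """Answer 'what should I work on next?' / 'what has not been verified?'"""
    for req_id in requirements:
        impl = implemented_by.get(req_id, [])
        tests = verified_by.get(req_id, [])
        passed = bool(tests) and all(result == "PASS" for _, result in tests)
        status = "OK" if impl and passed else "OPEN"
        print(f"{req_id} [{status}] impl={impl or 'none'} tests={tests or 'none'}")


coverage_report()
```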

ReqTracer can also be used to manage changes to requirements as the design and implementation process proceeds. Rarely are all requirements known before design starts. Design is usually an iterative process whereby additional items, sometimes called derived requirements, will come out of the design process. There may also be missing requirements that emerge at the last minute before the design is complete. Last-minute changes can be the costliest, as they can have far reaching impacts across the system. All these requirements must also be tracked and traced to ensure that they too have been properly implemented and verified.

The beauty of ReqTracer is that it can be used to visualize the impact on the system for a given requirement change. The tool will let the design team visualize all the work items that will be impacted by a proposed requirement change, leading to a more informed decision-making process before accepting additional requirements. If the new requirement is accepted, ReqTracer will also ensure that no components affected by the change will be forgotten in the last-minute crunch to implement the requirement.

Interestingly, a tool like ReqTracer may at first be thought of as a necessary evil. You really need something like this to be able to work your system through the ISO 26262 process. Upon closer examination, though, it turns out the necessary evil may in fact be more of a big productivity booster for your team. I like tools that give designers more control and visibility over their design processes. Even better are tools that boost your overall team productivity. Having a good understanding of where your project stands, what has been done and what still remains, is key to predictable design schedules. And in the world of ISO 26262, all the better that you can ensure that your design has complete coverage of its requirements. ReqTracer is a unique tool in this regard and is well worth a look if you haven't already taken one.

Personally, when I get my first autonomous vehicle, I’m going to be really hoping that ReqTracer was used by the design team who put it all together.

See Also:
Mentor ReqTracer datasheet
Whitepaper: Automotive Defect Recall? Trace it to Requirements


Reliability Signoff for FinFET Designs
by Bernard Murphy on 10-17-2017 at 7:00 am

Ansys recently hosted a webinar on reliability signoff for FinFET-based designs, spanning thermal, EM, ESD, EMC and aging effects. I doubt you’re going to easily find a more comprehensive coverage of reliability impact and analysis solutions. If you care about reliability in FinFET designs, you might want to check out this webinar. It covers a lot of ground, so much that I’ll touch only on aspects of thermal analysis here with just a few hints to the other topics. The webinar covers domains with products highlighted in red below.
Incidentally, ANSYS and TSMC are jointly presenting on this topic at ARM TechCon. You can get a free Expo pass which will let you into this presentation HERE.

Why is reliability a big deal in FinFET-based designs? There are multiple issues impacting aging, stress and other factors, but one particular issue should by now be well-known – the self-heating problem in FinFET devices. In planar devices, heat generated inside a transistor can escape largely through the substrate. But in a FinFET, dielectric is wrapped around the fin structure and, since dielectrics generally are poor thermal conductors, heat can't escape as easily, leading to a local temperature increase; the heat ultimately escapes largely through the local interconnect, leading to additional heating in that interconnect.


Also, since FinFETs are built for high drive strength, they are driving more current through thinner interconnect, resulting in more Joule heating. In addition to these effects, you have to consider the standard sources of heating, thanks to complex IP activity profiles in modern SoCs: active, idle, sleep modes and power off – all of which contribute to a heat map across the die which will vary with use-cases. Self-heating effects may contribute 5° or more in variation and use-case effects may contribute 30° or more across the die.
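
A first-order way to think about these two contributions is to add a local self-heating rise (device power times an effective thermal resistance) on top of a use-case dependent die temperature. The short Python sketch below does exactly that bookkeeping; every number in it is hypothetical and chosen only for illustration, whereas a real flow solves the coupled electro-thermal problem across the whole chip and package.

```python
# First-order illustration of the two heating contributions discussed above:
# a local self-heating rise on top of a use-case dependent die temperature.
# All numbers are hypothetical; real analysis (e.g. RedHawk-CTA) solves the
# coupled electro-thermal problem.

ambient_c = 45.0            # package/board environment (hypothetical)
use_case_rise_c = {         # die-level rise per operating mode (hypothetical)
    "sleep": 2.0,
    "idle": 8.0,
    "active": 30.0,
}

# Local device self-heating: delta_T = P_device * R_th(local)
device_power_w = 0.8e-3     # 0.8 mW through one driver (hypothetical)
r_th_c_per_w = 6000.0       # effective local thermal resistance (hypothetical)
self_heating_c = device_power_w * r_th_c_per_w   # ~4.8 degrees extra, locally

for mode, rise in use_case_rise_c.items():
    local_t = ambient_c + rise + self_heating_c
    print(f"{mode:>6}: die region at {ambient_c + rise:.1f} C, "
          f"hot device at {local_t:.1f} C")
```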

An accurate analysis has to take both these factors into account to meaningfully assess reliability impact. Typical margin-based (across the die) approaches are ineffective and lead to grossly uneconomic overdesign. Which of course would next take us into the big data and SeaScape topic but I’m not going to talk about that here. In this webinar Ansys’ focus is the reliability analysis.


The thermal reliability workflow starts with Totem-CTA for analysis of AMS or custom blocks. This is based on a transient simulation and library models to determine local heating, EM violations and FIT violations. Totem will also build a model for the block which you can then use in the next step.


RedHawk-CTA will analyze digital IPs and the full chip-package system in a power-thermal-electrical loop simulation to determine temperature profiles by use-case, along with thermal-aware EM and FIT violations. You probably know from my previous posts that it can also do this for 2.5D and 3D systems. Out of all of this, the RedHawk-CTA tool will generate a model which can be used in system-level analysis using Ansys IcePak, since system reliability concerns don't stop at the package.

Ansys talks about a couple of customer case studies in the webinar where focus is very much on the additional complexity self-heating introduces to increasing FIT rates and how improved visibility into root causes can help manage these down to an acceptable level through local (modest impact) rather than global (high impact) fixes.

In other aspects of reliability, the webinar first touches on ESD and path finding. Again, both Totem and RedHawk provide support to aid in ESD signoff through resistance, current density, driver-receiver checks and dynamic checks. And out of this RedHawk (PathFinder) will also build a system-level model for system-level ESD analysis.

Electromagnetic compatibility (EMC) is an important component of reliability in part because many SoCs now have multiple radios. So it becomes important to analyze both for EMI (EM noise) and EMS (EM immunity). An interesting consequence of studies in this area is around the EMI impact of power switching in an SoC. We normally think of the impact of power switching on power noise, but also, unsurprisingly perhaps, power switching can create significant EMI spikes.

Finally, the webinar covers analysis of aging effects using Path-FX. Aging is a hot topic these days. It's important first to prove a design works correctly when built, within whatever margins, but what happens if behavior drifts over time, as it inevitably will, thanks to aging? One consequence can be that new critical paths can emerge, and therefore what were once safe operating conditions can become unsafe unless (in some cases) you slow the clock down. As a result, aging can create reliability problems. Since this aging won't be uniform across the die, again you need detailed analysis to guide selective mitigation if you are going to avoid massive over-design.

That’s where Path-FX comes in; it simulates orders of magnitude faster than conventional circuit sim solutions, but still with Spice-level accuracy, using all design model, layout, parasitics and reliability PDKs from the foundry. From this you can compare the fresh design model critical paths with the aged model to find those paths where you need to take corrective design action.
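
The idea of comparing fresh and aged models can be illustrated with a toy calculation: apply a non-uniform degradation factor to a handful of path delays and see which path becomes critical and whether it now violates the clock period. The numbers below are invented for illustration and are not Path-FX's actual models or output.

```python
# Toy comparison of "fresh" vs. "aged" path delays, to illustrate why new
# critical paths can emerge over a product's lifetime. Path names, delays
# and per-path degradation factors are all hypothetical.

clock_period_ns = 1.00

fresh_delay_ns = {"path_A": 0.95, "path_B": 0.90, "path_C": 0.80}
aging_factor = {"path_A": 1.03, "path_B": 1.13, "path_C": 1.02}  # non-uniform

aged_delay_ns = {p: d * aging_factor[p] for p, d in fresh_delay_ns.items()}

fresh_critical = max(fresh_delay_ns, key=fresh_delay_ns.get)
aged_critical = max(aged_delay_ns, key=aged_delay_ns.get)

print(f"fresh critical path: {fresh_critical} "
      f"({fresh_delay_ns[fresh_critical]:.2f} ns)")
print(f"aged  critical path: {aged_critical} "
      f"({aged_delay_ns[aged_critical]:.2f} ns)")

violations = {p: round(d, 3) for p, d in aged_delay_ns.items()
              if d > clock_period_ns}
print("aged setup violations at a 1 ns clock:", violations or "none")
```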

Ansys really does seem to be in a class of its own in reliability analysis; I can see why they got a partner of the year award this year at TSMC. For anyone who cares about reliability tightly coupled with advanced foundry processes, they seem to be unbeatable. You can watch the webinar HERE.


Implementing IEEE 1149.1-2013 to solve IC counterfeiting, security and quality issues
by Tom Simon on 10-16-2017 at 12:00 pm

As chips for any design are fabricated, it turns out that no two are exactly the same. This is both a blessing and a curse. Current silicon fabrication technology is amazingly good at controlling factors that affect chip-to-chip uniformity. Nevertheless, each chip has different characteristics. The most extreme case happens with chips that fail to meet timing. Next in line are chips that perform better or worse than others. I'll touch on these kinds of differences and their implications a bit later. However, there is another reason to want to discern among unique chips from the same mask set.

If individual chips can be distinguished securely, it creates the potential to enable many important capabilities. If each chip can be given a unique, unalterable and non-duplicable identity, it enables secure boot, cloning protection, keyed feature upgrades and configurability, and secure encryption and decryption. The short version is that we want to transfer publicly viewable but encrypted information to a specific unique IC that is the only device that can decrypt that information. A prevalent way to do this is with public-private key encryption.

However, we have a chicken and egg problem. If all the chips that roll off the production line are identical how can we seed the chips with unique secure keys so they can bootstrap the security process? We need some kind of non-volatile storage that can be easily provisioned in silicon and easily written to right after fabrication. If the key is going to be verifiable and non-clonable, there needs to be hash data to verify it and the on-chip storage must prevent reverse engineering of the data.
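
For readers who want to see the public-key pattern itself, here is a minimal host-side sketch using the third-party Python cryptography package (pip install cryptography). It only demonstrates that data encrypted against a published public key can be recovered solely by the holder of the matching private key; how the private key (or its seed) is generated, provisioned into OTP over JTAG/SPI and protected on-chip is exactly the part the Sidense/Intellitech solution addresses and is not shown here.

```python
# Minimal sketch of the public-key pattern described above. In a real flow
# the private key (or a seed for it) would live in on-chip OTP and never
# leave the device; here both halves are generated on the host purely to
# show that only the private-key holder can recover the message.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Per-device key pair - in practice provisioned at test time, private part in OTP
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt a feature-unlock blob against the published public key...
ciphertext = public_key.encrypt(b"enable-feature:fast-crypto", oaep)

# ...but only the chip holding the matching private key can decrypt it.
print(private_key.decrypt(ciphertext, oaep))
```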

Because this level of security is just as important for smaller and low-volume IoT designs as it is for large high-volume consumer chips, the non-volatile memory must also be cost-effective and easy to implement. This rules out many technologies like NAND flash, eFuse, etc. They can add the need for additional process layers, complex write-support circuitry, external power pads and so on.

Many people are turning to one time programmable (OTP) NVM, like that offered by Sidense. It avoids these pitfalls and offers a high degree of flexibility. To facilitate this Sidense has partnered with Intellitech to provide a complete solution for externally writing security information to on-chip OTP NVM using the IEEE 1149.1-2013 standard. This is done using a JTAG TAP or an SPI interface that is easily added to the chip, and most likely already used for other JTAG functions.

Coming back to the topic of performance variations in chips, we should look at how chips are graded for different applications. It is a common practice to test chips to evaluate their individual speed and thermal performance. The failing chips are discarded – hopefully for good. The rest are often graded and sold for different end applications. Some are sold for higher prices because they run faster. Other better performing parts are used in systems that require higher reliability, such as aircraft, cars or military equipment.

However, there are many instances of lower performing chips illicitly relabeled as higher performance parts. Or even worse, failed parts have been put back into the supply chain. The customary method of indicating the grade of a part after testing is by marking the package. Package markings can be altered, leading to expensive quality and reliability issues in final assembled systems. What is needed is a system for storing part grading within the parts in a tamperproof format.

Once again IEEE 1149.1-2013 offers assistance through its Electronic Chip ID (ECID) specification. ECID allows on-chip storage of test results, temperature and speed grade, wafer number, die x,y location and other information. The storage area for ECID can be used for private information as well. By using ECID, it is possible to ensure that genuine and correct parts are being used in systems. It also enables a number of key reliability activities. If there are field issues, the wafer lot and die location information can be fed back to the supplier to help resolve quality issues.
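
As a simple illustration of what ECID-style storage holds, the sketch below packs a handful of traceability fields into a single word and unpacks them again. The field list, widths and values are invented for this example; the actual ECID layout is defined per device in line with IEEE 1149.1-2013 and the vendor's documentation.

```python
# Illustrative packing of ECID-style traceability fields into one word.
# Field widths and example values are invented for this sketch.

FIELDS = [            # (name, bit width)
    ("wafer_lot", 20),
    ("wafer_num", 5),
    ("die_x", 8),
    ("die_y", 8),
    ("speed_grade", 3),
    ("temp_grade", 2),
]


def pack_ecid(values):
    word = 0
    for name, width in FIELDS:
        value = values[name]
        assert 0 <= value < (1 << width), f"{name} does not fit in {width} bits"
        word = (word << width) | value
    return word


def unpack_ecid(word):
    values = {}
    for name, width in reversed(FIELDS):
        values[name] = word & ((1 << width) - 1)
        word >>= width
    return values


ecid = pack_ecid({"wafer_lot": 0x7A21C, "wafer_num": 17, "die_x": 42,
                  "die_y": 7, "speed_grade": 5, "temp_grade": 2})
print(hex(ecid))
print(unpack_ecid(ecid))
```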

ECID is another area that Sidense and Intellitech have focused on. Their complete solution provides for secure writing of the ECID data block. Intellitech also offers user level software and interface boards that allow for easy reading of the ECID information so it can be used to verify parts before they are soldered to a board. Additionally, in the case of failures, it is possible to read out the information needed for resolving reliability issues.

IEEE 1149.1-2013 is playing a major role in adding value and preventing fraud in the supply chain. With a solution like the one proposed by Sidense and Intellitech, it becomes feasible to maximize the benefits of ECID and to ensure that chips for niche markets can have security features matching larger mainstream SoCs. After all, the most likely target for a security attack would be edge-node chips that might not be designed with robust security.

Sidense OTP-NVM has a multitude of features to prevent reverse engineering and side-channel attacks. It can also come with completely self-contained write logic that works with system supply voltages. This, and the fact that no additional process layers are required, makes it an excellent choice for implementing ECID as well as key and feature-configuration storage. More detailed information about how the Sidense and Intellitech joint solution works can be found on the Sidense website.