
CDNLive Boston Keynote Address Highlights Emergence of Silicon Photonics
by Mitch Heins on 09-27-2017 at 7:00 am

I had the pleasure of attending the CDNLive event held in the Boston, MA area last month, and I was pleasantly surprised to see that Cadence highlighted silicon photonics as one of its keynote topics. MIT Professor Duane Boning gave an excellent overview of the current state of silicon photonics and why he believes it is time for the electrical engineering community to learn about this important technology. In his presentation, Professor Boning made the claim that silicon photonics has become “Interesting, Convergent and Designable”.

Professor Boning called silicon photonics interesting because it is bringing new functionality and capabilities to market that are tied to real business. The silicon photonics IC market is predicted to reach $108 billion by 2025, with a little over half of that coming from the datacom and telecom industries. Data centers are pushing for greater bandwidth density while simultaneously demanding lower signal latency and lower overall power consumption, and silicon photonics gives them all three without compromise. Moving the photonic components from active optical cables (AOCs) onto the server boards, and eventually integrating them with the processors themselves, will drive IC volumes up dramatically. That will give the data centers the last important part of the equation: reduced cost. How many technologies give you more bandwidth density, less latency and less power consumption while also driving down costs?

The second-biggest silicon photonics market will be sensors, projected to grow to $41.4 billion by 2025. With the advent of the Internet of Things (IoT), sensors will be literally everywhere. One of the biggest drivers of IC volume in sensing will be the healthcare industry, where photonics will be combined with fluidics and electronics to deliver the long-awaited lab-on-a-chip. The lab will literally be brought to a patient’s bedside in the pocket of a technician. These ICs will be single-use, which means very high volumes, and that will help drive down IC costs. Sensing doesn’t stop at healthcare, however; sensors will also be heavily used in industrial and military applications.

One specialized form of sensor is used for imaging and will be especially useful for the autonomous vehicle market in the form of LIDAR (Light Detection and Ranging). The market for LIDAR-based imaging and display system ICs is projected to reach $4.5 billion by 2025. Given all these applications, it is no wonder that Cadence Design Systems has hopped onto the photonics bandwagon.

The second thing Professor Boning noted was that silicon photonics is convergent, by which he meant that CMOS electronics and silicon-based photonics fabrication technologies have evolved toward each other to enable monolithic electro-photonic solutions.

There are currently two main approaches for monolithic integration. The first involves integrating photonics into the front-end-of-line (FEOL) layers, while the second integrates photonics into the back-end-of-line (BEOL) layers. Both have their merits and their detractors. The good news is that no matter which way the industry goes, in both cases photonics shares wafer processing, equipment and materials with standard CMOS. That means silicon photonics can be added rapidly to commercial CMOS product lines with relatively low capital investment on the part of the foundries. We’ve already seen announcements from several production foundries adding silicon photonics to their offerings (GlobalFoundries, TowerJazz, TSMC/Luxtera, ST Microelectronics, and AMS).

Convergence also shows in the fact that photonics is mirroring electronics in its approach to design flows and design automation tools. The photonics ecosystem has been working for almost a decade on establishing a common set of photonic device building blocks from which photonic circuits can be assembled. These building blocks are common across multiple foundry process design kits (PDKs) and can be used to design and map photonic circuits from one foundry to another through a process PhoeniX Software calls photonic synthesis.
Professor Boning also showed a slide of the recently announced Cadence, Lumerical, PhoeniX EPDA (electronic/photonic design automation) flow and noted that two of the break-out sessions presented at this CDNLive event were about the design and manufacturing of silicon photonics (courtesy Lumerical Solutions and Analog Photonics).

The last point Professor Boning made was that silicon photonics is now “designable”. As just mentioned, the design flow mimics that of electronics and now includes tools from multiple photonic and electronic design automation vendors. Design tools are now available for photonic device simulation, abstraction of photonic component behavior into compact models, photonic circuit simulation, automated layout generation including parameterized cells (pcells), DRC (design rule checking) and emerging LVS (layout vs. schematic) flows. These tools and flows are supported by multiple foundries with qualified PDKs. More recently, new flows for integrating digital and analog/mixed-signal electronics with photonics are being put in place for system-in-package (SiP) configurations (see Cadence Photonic Summit 2017).

For those interested in seeing where photonics is going and the kinds of capabilities that will be coming online in the next decade I refer you to the IPSR (Integrated Photonic Systems Roadmap) that can be found at http://photonicsmanufacturing.org/.

In summary, Cadence has done an excellent job of keeping its customers abreast of the latest evolving technologies, and its CDNLive events are definitely worth attending. If you haven’t been to one in a while, I highly recommend you attend to see what the world is up to. This one really shone some light (laser light, that is) on the future.

See Also:
CDNLive website


How to Avoid Jeopardizing SoC Security when Implementing eSIM?
by Eric Esteve on 09-26-2017 at 12:00 pm

The smart card business is now more than 25 years old, so we can assert that the semiconductor industry knows how to protect the chips used for smart card or SIM applications to a very good level (unfortunately, it’s very difficult to get access to the fraud percentages linked with smart cards, as bankers really don’t like to communicate on this topic!). The various techniques and algorithms for protecting chips against attacks have proven to be effective, but keep in mind that security experts like Alain Merle of CEA-LETI estimate that a good level of security accounts for 50% of the total IC cost for a smart card or SIM.

Now the problem is moving from a single chip dedicated to the SIM (or smart card), which we know how to protect against fraud, to the embedded version of this chip, integrated as one of the functions of an application processor SoC. We are talking about embedded SIM (eSIM) and any function supporting mobile payment being integrated into an SoC targeting mobile, IoT and automotive applications. How do you best protect it from the various possible attacks?

Synopsys has developed the pre-verified DesignWare ARC Secure IP Subsystem to provide a trusted hardware and software SoC environment. This subsystem is built around the new ARC SEM110 or ARC SEM120D (the D stands for DSP extension) processors.

The ARC SEM is an ultra-low-power security processor with SecureShield technology, enabling the creation of a Trusted Execution Environment (TEE) and protecting against side-channel attacks and data breaches. Synopsys offers cryptography options to accelerate encryption for a range of algorithms including AES, SHA-256, RSA and ECC. To protect the instruction code and data, both are stored encrypted in the memories, embedded or external, and secure instruction and data controllers provide external memory access protection and runtime tamper detection. The secure external memory controller is a licensable product option.

SoC architects can implement security at the software level only, using subsystem software including a NIST-validated crypto library (for non-US readers: NIST stands for the National Institute of Standards and Technology), secure boot and the SecureShield runtime library (embARC), or complement it with hardware engines such as symmetric crypto/hashing or asymmetric crypto coprocessors.
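To make the secure boot idea concrete, here is a deliberately simplified, hash-only sketch in Python (my own illustration, not Synopsys embARC code; a production secure boot verifies an asymmetric signature, e.g. RSA or ECC, over the image, which is exactly where the crypto coprocessors come in):

```python
import hashlib

def provision(firmware_image: bytes) -> bytes:
    # At manufacture: hash the golden image and burn the digest into OTP/ROM.
    return hashlib.sha256(firmware_image).digest()

def secure_boot(firmware_image: bytes, trusted_digest: bytes) -> bool:
    # At every boot: re-measure the flash contents and compare against
    # the provisioned digest before transferring control.
    measured = hashlib.sha256(firmware_image).digest()
    return measured == trusted_digest  # see the constant-time note below

golden = b"\x00" * 64              # stand-in for the real firmware blob
otp_digest = provision(golden)

print(secure_boot(golden, otp_digest))            # True: boot proceeds
print(secure_boot(golden + b"\x01", otp_digest))  # False: tampered, halt
```

The point is the ordering: nothing outside the trusted subsystem runs until the measurement matches.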

The ARC SEM110 and SEM120D are well suited for secure element implementations such as eSIM or eUICC, UICC being the new generation of SIM, able to support multiple applications such as USIM (to identify you and your phone plan to your service provider) or ISIM (to secure mobile access to multimedia services and non-telecom applications such as payment).



At the chip level, the number of potential attacks is vast, as we can see in the picture above. During communication, we can mention sniffing of sensitive data like passwords, direct remote attacks via backdoors, or indirect attacks via remote nodes. Software is well known for its susceptibility to malware (viruses or rootkits), but it can also be a vector for exploiting buffer/stack overflows or for privilege-level tampering. Hardware can be vulnerable to invasive attacks like decapsulation or probing (even if these can also be used by analysts!) and to non-invasive attacks like side-channel analysis or access to the internals through debug ports.

The ARC SEM security processor has been designed to be immune to these attacks, hardware or software. SecureShield offers multiple isolated execution contexts. Side-channel protection is achieved by defining uniform instruction timing and via timing and power randomization. The processor pipeline is made tamper-resistant by in-line instruction, data and address scrambling. Error detection and parity have been implemented for memories and registers. Even the debug functionality has been secured to prevent non-invasive side-channel attacks. An integrated watchdog timer detects system failures, including tampering.
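One of those defenses, uniform instruction timing, has a well-known software analogue that is easy to demonstrate. In the sketch below (again illustrative Python, not ARC SEM code), the naive comparison returns at the first mismatching byte, so its execution time leaks how long the matching prefix of a secret is; hmac.compare_digest examines every byte regardless:

```python
import hmac

def naive_equal(secret: bytes, guess: bytes) -> bool:
    # Leaky: exits at the first mismatch, so timing reveals
    # how many leading bytes of the guess were correct.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # Runs in time independent of where (or whether) a mismatch occurs,
    # the software counterpart of uniform instruction timing.
    return hmac.compare_digest(secret, guess)
```

The hardware mechanisms listed above aim for the same property at the microarchitecture level, where it is much harder to bypass.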

You shouldn’t be surprised to learn that ARC SEM has been the analysts’ choice in 2016 for the Linley Group Best Processor IP award!

The ARC Secure IP Subsystem is available now.

Eric Esteve, IPnest


Verification Trends: 2016
by Bernard Murphy on 09-26-2017 at 7:00 am

Periodically Mentor does us all a big favor by commissioning a survey of verification engineers across the world to illuminate trends in verification. This is valuable not only to satisfy our intellectual curiosity but also to help convince managers and finance mandarins that our enthusiasm to invest in new methods and tools is supported by broad industry trends.

I always find these surveys fascinating, as much for how they align (or not) with conventional wisdom as for new or evolving insights. So let’s dive in, starting with design sizes. Not including memory, ~31% of projects in 2016 were at 80M gates or more and ~20% were at 500M gates or more. Nearly 75% of all surveyed designs had at least one embedded processor, half had 2 or more processors and 16% had eight or more. They also note that “it is not uncommon” (no percentage provided) to find 150 or more IP blocks in a design.

On time spent in verification, the survey shows an average of 55% in 2016, though I see a fairly flat top to the distribution, from 50% to 70%. In 2012 more projects were between 60% and 70%, though the average was barely higher than in 2016. Perhaps growing verification teams contributed to flattening the peak. That would be thanks to the compound annual growth rate (CAGR) in verification heads over this period, at over 10%, versus CAGR in design heads at 3-4%. Also of note is that design engineers are spending half their time in verification, and that this hasn’t changed significantly since 2007.

Where do verification engineers spend their time? Unsurprisingly, nearly 40% in debug; 22% each in creating tests/running simulations and in testbench development; and 14% in test planning. I doubt much has changed here.

In dynamic verification, well over half of all projects are using code coverage and functional coverage metrics, along with assertions. A slightly smaller number, apparently declining from earlier years, are using constrained-random techniques, though Mentor note that this is skewed by an increased number of designs at under 100K gates (perhaps around sensor designs). They speculate that these teams may be less mature in digital verification methods. In general, the survey finds adoption of all these techniques leveling off, which they attribute to scaling limits in simulation – these methods are useful at the IP level, perhaps less so at the SoC level.
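For readers newer to these techniques, here is a toy sketch of constrained-random stimulus with functional coverage bins, in Python purely to illustrate the concepts (real testbenches would use SystemVerilog/UVM): stimuli are drawn at random subject to a constraint that biases toward corner cases, and coverage records which interesting cases were actually exercised.

```python
import random

def constrained_random_length() -> int:
    # Constraint: 8-bit packet lengths, weighted toward boundary values.
    if random.random() < 0.3:
        return random.choice([0, 1, 255])   # corner cases
    return random.randint(2, 254)           # interior values

# Functional coverage: bins we want to see hit at least once.
bins = {"zero": False, "small": False, "large": False, "max": False}

def sample_coverage(length: int) -> None:
    if length == 0:
        bins["zero"] = True
    elif length == 255:
        bins["max"] = True
    elif length < 16:
        bins["small"] = True
    else:
        bins["large"] = True

for _ in range(1000):
    sample_coverage(constrained_random_length())

print(f"functional coverage: {sum(bins.values())}/{len(bins)} bins hit")
```

Real environments layer constraint solvers and coverage databases on top of this, but the feedback loop (randomize under constraints, measure coverage, refine constraints) is the same.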

Some interesting results are shown around adoption of formal methods. In 2014, adoption of automatic formal methods (smart lint, connectivity checking, apps in general) was picking up fast and property checking (“classic formal”) grew very little. In 2016, automatic usage leveled off at around 20% of projects while property checking grew significantly to nearly 35% of projects. Mentor attribute this to teams having grown sufficiently comfortable with automatic methods to now branch out into property checking. This certainly suggests fertile territory for tool vendors to continue to grow adoption in both areas.

In hardware-assisted verification, 24% of projects are using emulation and 30% are using FPGA prototyping. For emulation, dominant usage is in hardware/chip verification, software development and HW/SW co-design and verification (I would guess the software here would be bare-metal), also system validation, though there is surprisingly significant (and growing) usage in IP development and validation.

In FPGA prototyping, dominant usage is in hardware/chip verification, also with a significant component in IP design and verification. This is surprising to me since FPGA prototypes are not generally very good for hardware debug (thanks to lack of extensive internal visibility). However, they are much cheaper than emulators, so perhaps that’s the reason – trading off cost versus verification effort and cleverness. Less surprisingly, system validation, software design and HW/SW co-verification are leading use-models.

In verification languages and methodologies, SystemVerilog still dominates at ~75%; Verilog (for verification) is still around but declining fast, and the only other significant player is C/C++ at ~25%, presumably for software-based verification. (By the way, the survey allows multiple answers for many of these questions, so don’t expect the stats to add to 100%.) In testbench methodology, UVM dominates at ~70% and everything else is in rapid decline. Equally, SVA is massively preferred for assertions. No big surprises here.


The survey wraps up with an ever-interesting look (anonymous, of course) at design schedules – what goes wrong and why. The peak of the distribution in 2016 is at on-schedule completion (funny how that happens) but there’s a long and fat tail out to a 30% overrun, with an average of 69% of projects behind schedule. This seems to be worse than in previous years (61% behind in 2014, for example), not an encouraging direction.

Only 30% of designs achieved success at first silicon; ~40% required at least one respin and ~20% required two. I had recently come to think of respins as a routine need primarily to handle software and analog problems, but I was wrong. Logic/functional problems remain the leading cause (~50%) and power consumption problems grew rapidly as a contender (~30%). Root causes were led by design errors (70%!) and spec problems (incorrect/incomplete at 50%, change in spec at nearly 40%).

Hats off again to Mentor for sponsoring and summarizing the results of these surveys. This is real value for all of us in the industry. You can access the survey HERE.


Robust NVM Solutions for Specialty and Advanced FinFET Technologies Webinar
by Daniel Nenni on 09-25-2017 at 12:00 pm

Webinars are a very effective communications channel in a fast-paced industry like semiconductor design. If you sign up in advance and can’t make the live version, you will be automatically notified when the replay is available so you can watch it at your leisure. I’m guilty of this for sure; because of my hectic schedule I watch many replays. The next webinar I will no doubt watch the replay of is from Sidense, featuring Betina Hold, Sidense R&D Director. If you are designing mobile, automotive, industrial or consumer IoT devices, non-volatile memory is in there somewhere, so this is your chance to see the latest and greatest NVM technology, absolutely.

REGISTER HERE

Abstract
The rapid progress in new Smart Connected ICs is driving the deployment of new specialty processes and smaller advanced technology nodes. Sidense 1T-NVM memory macros support the stringent requirements for a wide range of Smart Connected devices: operation from low-voltage sources with limited energy budgets, operation in harsh environments and robust, high-reliability operation over extended temperature ranges. Today I will discuss how the latest 1T-NVM developments from Sidense address Smart Connected requirements with designs for specialty processes and 3D bit-cell designs for advanced process nodes.

The Smart Connected universe comprises devices and systems that are networked, usually wirelessly, and have some compute power. These include edge nodes (commonly called smart sensors when coupled with sensing), hub nodes that aggregate data from edge nodes, and central computation, usually in the cloud, that performs analytics on the aggregated edge-node data and controls the resulting actions. The Smart Connected universe encompasses several markets including mobile computing/communication, automotive, industrial, IoT wearables and medical.

The demands of Smart Connected devices vary. While the importance of each requirement changes depending on the application, some of the more universal demands are: broad process and variant coverage, low-voltage operation, ability to function in harsh environments, highly reliable and robust operation over extended temperatures, and very high security.

Sidense’s split-channel 1T-Fuse bit cell, the heart of all 1T-NVM products, works with all specialty and advanced processes and is optimized for various operational and fabrication requirements. The 1T-NVM macros have been designed to take full advantage of key process features.

Sidense supports a wide range of TSMC BCD and HV processes and is in volume production in many PMIC and sensor designs. We are an acknowledged leader in terms of area advantage in these technologies. Sidense 1T-NVM macros meet the stringent AEC-Q100 Grade 1 requirements for high-temperature (125°C) operation for all of these processes, and Grade 0 (150°C) for selected processes.

Sidense’s 1T-NVM products have been certified to ISO 9001, which defines the requirements of a quality management system to provide products and services that meet customer and applicable statutory and regulatory requirements. Our 1T-NVM macros go through many phases of high-reliability testing and meet AEC-Q100 Grade 1 and, for products targeting automotive and industrial applications, Grade 0 requirements. All 1T-NVM products are designed for over 10 years of operation at 100% read duty cycle. We are also developing our memory macros to meet the trend toward elevated-temperature operation at 185°C and even higher.

REGISTER HERE

About Sidense Corp.
Sidense Corp. provides very dense, highly reliable, and secure Logic Non-Volatile Memory (LNVM) IP for one-time programmable (OTP) and emulated Multi-time Programmable (eMTP) use in standard-logic CMOS processes. The Company, with over 120 patents granted or pending, licenses OTP memory IP based on its innovative one-transistor 1T-Fuse™ bit cell, which does not require extra masks or process steps to manufacture. Sidense 1T-NVM macros provide a better field-programmable, reliable and cost-effective solution than flash, mask ROM, eFuse and other embedded and off-chip NVM technologies for many code storage, encryption key, analog trimming, and device configuration uses.

Over 150 companies, including many of the top fabless semiconductor manufacturers and IDMs, have adopted Sidense 1T-NVM as their embedded non-volatile memory solution for more than 500 designs. Customers are realizing outstanding savings in solution cost and power consumption along with better security and reliability for applications ranging from mobile and consumer devices to high-temperature, high-reliability automotive and industrial electronics. The IP is offered at and supported by all top-tier semiconductor foundries and selected IDMs. Sidense is headquartered in Ottawa, Canada with sales offices worldwide. For more information, please visit www.sidense.com.

Also read: Making Sensors of the World


Semiconductor and EDA 2017 Update!
by Daniel Nenni on 09-25-2017 at 7:00 am

It really is an exciting time in semiconductors. The benchmarks on the new Apple A11 SoC and the Nvidia GPU are simply amazing. Even though Moore’s Law is slowing, the resulting chips are improving well above and beyond expectations, absolutely.

As I have mentioned before, non-traditional chip companies such as Apple, Amazon, Tesla, Google, and Facebook are now driving the semiconductor industry in ways most would not have imagined. Mobile has been great, but now, with AI/AR/VR coming to our smartphones, we should see another surge of silicon. Electric and autonomous cars, and the need for exponentially more automotive-grade silicon, are also a significant challenge/opportunity, and IoT is bringing in many more non-traditional chip companies. Connectivity is another big driver. Our cars, for example, will generate hundreds of gigabytes of data every day that will need to be transmitted to the cloud for processing. 4G may get us there, but 5G will enable so much more.

Non-traditional chip companies continue to dominate SemiWiki traffic, and the EDA executives are saying the same in regard to revenue. A big tell is the rapid growth of the emulation and FPGA prototyping systems business; the ability to develop software in parallel with an SoC is now required for leading-edge system companies.

In the second quarter of 2017, semiconductor revenue hit a record $97.9B, scoring a record 23.7% growth following an already strong 18.1% in Q1 2017. Semiconductor revenues for 2017 are expected to exceed $401B, a growth rate greater than 16%.


EDA generally lags the semiconductor industry, as we can see from the 2017 numbers. EDA reporting has also changed: now that the #3 EDA company, Mentor, is part of Siemens, reporting is based on the current EDA duopoly of Synopsys (SNPS) and Cadence (CDNS).


In Q2 2017, SNPS grew revenues 13% ($80.2M) vs. Q2 2016 and CDNS grew 5.7% ($26M). The comparable Q1 growth rates were 12.4% for SNPS and 6.5% for CDNS. SNPS and CDNS both beat their mid-point revenue guidance for the quarter, by $2.9M and $4M respectively.


Mid-point guidance for Q3 2017 is 2.5% growth for SNPS ($15.8M growth to $649.5M) and 7.6% growth for CDNS ($33.8M growth to $480.0M).
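As a quick sanity check on the guidance arithmetic (a throwaway sketch; the dollar figures come from the guidance above), the implied baseline quarters and growth rates work out as stated:

```python
# (guided revenue $M, guided growth $M) from the mid-point guidance above
guidance = {"SNPS": (649.5, 15.8), "CDNS": (480.0, 33.8)}

for company, (target, growth) in guidance.items():
    base = target - growth        # implied comparison-quarter revenue
    pct = 100.0 * growth / base
    print(f"{company}: base ${base:.1f}M, growth {pct:.1f}%")
# SNPS: base $633.7M, growth 2.5%
# CDNS: base $446.2M, growth 7.6%
```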

At the beginning of 2017 SNPS started at $60 per share and is now over $80. CDNS started at $25 and today is well over $38.

BOTTOM LINE: SNPS and CDNS are both pushing products up to the system level catering more to the non-traditional chip companies so moderate growth is expected to continue in 2017 and 2018.

* And thank you to Gerry Byrne, the founder of edalics, the only independent EDA Budget Management advisor to leading semiconductor companies, for the above data.


Walden Rhines on the Automotive Electronics Landscape
by Roger C. Lanctot on 09-23-2017 at 8:00 am

Mentor President and CEO Walden Rhines gave a comprehensive overview of the automotive electronics landscape at the Mentor Integrated Electrical Solutions Forum (IESF) in Plymouth, Mich., this week. A key focal point of Rhines’ comments was the twin industry disruptors: EVs and AVs.

A Texas Instruments alum, Rhines described the normal electronics industry technology adoption life-cycle as a ramp up, during which a market grows and attracts new players, followed by continued growth toward an ultimate plateau and consolidation. He pointed to the onset of electrification and automation as forces drawing in dozens of new entrants to the automotive industry.

Rhines counts 300 companies around the world developing electrically propelled cars and light trucks and, separately, 96 companies with announced autonomous drive programs. Supporting the development of these systems is creating substantial demand for increased investment in greater in-vehicle processing capability and for new vehicle architectures.

These demands are notable not only for the financial requirements and opportunities they are creating, but also because they are reshaping, and being reshaped by, new virtual design requirements and thermal management priorities. One might argue that the shift toward greater integration of vehicle systems is even testing prior philosophical underpinnings of vehicle design, while introducing new concerns regarding cybersecurity and privacy.

Rhines’ presentation was a tour de force of automotive electronics developments and set the stage for the in-depth discussions that followed at the event on topics ranging from cybersecurity and infotainment to the twin disruptors previously noted. Following up Rhines’ insightful offering was industry curmudgeon, gadfly, visionary, bad boy and former senior GM, Chrysler, BMW executive and author, Bob Lutz, who described the “Future of Human Transportation.”

A self-described car guy and frequent Tesla Motors critic, Lutz lamented the anticipated demise of car driving by humans. Lutz described what he sees as a future dominated by “sausage-shaped” self-driving pods as governments and insurance companies wrest the steering wheels out of the hands of human drivers.

Lutz says his sausage-pod future will erase the value of legacy automotive brands, reducing today’s titans of vehicle production to contract manufacturers for the likes of Uber and Lyft or their ilk. He further postulated the end of vehicle ownership, as there will be no point, in Lutz’s transportation “end state” vision, in owning your own sausage pod when the pod of your dreams will be available on demand.

Lutz gave no timeframe for his apocalyptic picture of human-less driving but he was careful to emphasize that in his unique view the internal combustion engine is likely to persist. While he allowed that the current enthusiasm for EVs – primarily on the part of governments (and perhaps Tesla owners) – will eventually reduce fossil fuel demand, he said that the reduced demand will drive down fossil fuel prices preserving and extending the delta between the operating costs of EVs vs. cheaper ICE-based vehicles.

Casting his eye to the distant transportation horizon, Lutz sees no breakthrough in energy density or cost reduction capable of giving EVs a serious cost advantage over ICE vehicles, regardless of their other advantages in performance and emissions. In the end, though, after car companies and car ownership have fallen by the wayside, car enthusiasts like Lutz will be confined to automobile dude ranches, he says. And motorcycles, another Lutz passion, don’t have a chance (or a place on roads dominated by AVs) over the long run, in his estimation. (Hasn’t Lutz heard of autonomous motorcycles – or does he simply see no point in such a concept?)

It’s good to know that an 85-year-old can embrace new technology like automated driving, even if it threatens a favorite pastime or two. But when it comes to Bob Lutz accepting pure EVs as the dominant mode of human transportation in the future, it looks like the old GM Vice Chairman simply has to draw the line – in spite of the 300 companies around the world chasing the dream. It seems that EVs will remain the burr in Bob Lutz’s saddle to the end. It’s worth noting that Lutz tipped his hat to the still-thriving horse-riding industry, valued north of $10B, before leaving the stage.


This is a Different GLOBALFOUNDRIES!
by Daniel Nenni on 09-22-2017 at 7:00 am

Having followed GF since its inception, I agree with CTO Gary Patton: what we are seeing today truly is a different GLOBALFOUNDRIES! Our first GF blog was published on 9/13/2009, and we have done a total of 173 GF-related blogs that have collected more than 1.5M views thus far. 72 of those blogs were written by me, so I have followed this story closely. It was always my hope that GF would bring serious competition back to the pure-play foundry business, and they have, absolutely.

The GLOBALFOUNDRIES Technology Conference (GTC) theme this year is Enabling Connected Intelligence, an umbrella for the current semiconductor market drivers: mobile, automotive, and IoT. The key ingredient of course is 5G. In his keynote, GF CEO Sanjay Jha said he expects 5G to be as disruptive as the transition from voice to data, which of course was the precursor to the internet and world wide web, and I agree completely.

In looking at the onslaught of GF press releases at GTC you can see how 5G plays into their strategy:

GLOBALFOUNDRIES Introduces New 12nm FinFET Technology for High-Performance Applications

GLOBALFOUNDRIES Delivers Custom 14nm FinFET Technology for IBM Systems

GLOBALFOUNDRIES Unveils Vision and Roadmap for Next-Generation 5G Applications

GLOBALFOUNDRIES Delivers 8SW RF SOI Technology for Next-Generation Mobile and 5G Applications

GLOBALFOUNDRIES Announces Availability of Embedded MRAM on Leading 22FDX® FD-SOI Platform

GLOBALFOUNDRIES Announces Availability of mmWave and RF/Analog on Leading FDX™ FD-SOI Technology Platform

GLOBALFOUNDRIES and Soitec Enter Into Long-term Supply Agreement on FD-SOI Wafers

Remember, 5G is all about RF, and GF has the famed IBM RF group, which boasts top market share with 32B RF SOI chips and more than 5B SiGe chips shipped.

The second keynote was from Qualcomm EVP Cristiano Amon. I was quite happy to see his name on the agenda because Cristiano is a great speaker, and I had always hoped that QCOM would throw their weight behind the GF foundry effort, especially when Sanjay Jha took over as CEO (Sanjay is from QCOM). Unfortunately that has not happened yet. QCOM left long-term foundry partner TSMC for Samsung at 14nm and 10nm, then went back to TSMC for 7nm, and is now using Samsung for 11nm and 8nm.

This is not surprising, as QCOM is historically a hard-core foundry outsourcing company. I remember them using four different foundries at 40nm. It really was infuriating for TSMC to do all of the leading-edge process work with Qualcomm only to lose the very profitable 2nd, 3rd, and 4th source business to UMC, SMIC, and Chartered. But competition is the foundation of semiconductor manufacturing. QCOM is also not a big FD-SOI supporter, which is a disappointment, especially when they are talking up automotive and IoT.

The third keynote was from Gary Patton and covered the GF CMOS and FD-SOI roadmaps, including slides on photonics, EUV, and of course machine learning. A couple of Gary’s FD-SOI slides caught my eye: 22FDX has a nice ecosystem developing with all of the top names, and currently has 15 confirmed tape-outs happening this year and next. I know several of those first-hand, so that number is easy to believe. Scott Jones attended the conference as well and spent time with Gary, so expect much more detail from him next week.

All in all it really was a great conference: the food was excellent, and the 500+ crowd included familiar faces among the semiconductor elite, including Aart de Geus. If you want to stalk Aart, get on the Legally Blue Facebook page. Great music, and you get to see Aart unload and set up his own equipment. Rumor has it his next gig is at Google to benefit animal rescue.

The only downside of the day was the EUV talk. GF was not as EUV-positive as TSMC, so that will be an interesting story to follow next year. After my last trip to Hsinchu I’m very confident TSMC will have 7nm (N7+) EUV out in time for the 2019 Apple products. I also believe current TSMC 7nm customers (just about every top semiconductor company) will move to N7+: it is easier to design to, and you get a 1.2x density improvement plus a 10% performance or 20% power improvement. Hopefully GF will up their EUV game.

I just scratched the surface here so hit me up in the comments section for a more detailed discussion. I have photos of the slides but they are a bit fuzzy. The event was recorded so it should be up on the GF website sometime soon.


Intrinsix Fields Ultra-Low Power Security IP for the IoT Market
by Mitch Heins on 09-21-2017 at 7:00 pm

As the Internet-of-Things (IoT) market continues to grow, the industry is coming to grips with the need to secure IoT systems across the entire spectrum of devices (edge, gateway, and cloud). One need only look back to the 2016 distributed denial-of-service (DDoS) attacks that caused internet outages for major portions of North America and Europe to realize how vulnerable the internet is to such attacks. The perpetrators, in that case, used tens of millions of addressable IoT devices to bombard Dyn, a DNS provider, with DNS lookup requests. Analysts predict that by the year 2020 there will be over 212 billion sensor-enabled objects available to connect to the internet. That’s about 28 objects for each person on the planet. While the opportunity for disaster seems obvious, the opportunity to make a lot of money on IoT is even bigger, so the industry urgently needs to address the problem. How can you make your IoT SoC devices secure?

Recently I attended the CDNLive event in Burlington, MA, where I had the chance to sit down with Mark Beal (CTO) and Steve Stecyk (Director of Engineering) of Intrinsix, a design services company, to find out how they are helping their clients deal with IoT device security. It was a fascinating conversation, as they had just released a new drop-in-ready IoT security sub-system IP that is NSA Suite B compliant, and they were demoing the system at CDNLive to prospective customers who use Cadence’s Tensilica cores. I’ve captured a few highlights from that conversation, as they address some points that all IoT device engineers may find interesting.

First, the Intrinsix security sub-system is offered as synthesizable RTL IP that is CAD-tool and technology-platform agnostic. Intrinsix was obviously catering to Cadence users at CDNLive, but their IP can be readily ported to any standard CMOS platform and EDA toolset. Its claim to fame, other than being super easy to use, is its incredibly low power profile, typically 10X better than what standard CPU-based security methods offer. Intrinsix leveraged more than 1,000 equivalent years of design experience to create a specialized hardware and software cryptographic accelerator security sub-system that provides a secure boot environment for ARM, RISC-V, and Tensilica-based IoT systems.

When one mentions dedicated security accelerators, it seems counterintuitive that this would make for a lower-power system. Why would more hardware mean less power? First, remember that this dedicated hardware has one purpose, and because of that it is optimized for the task given it. That means no wasted cycles when it is doing its job.

Second, the real power savings come when the IoT device is turned off (which for many IoT edge devices represents about 99% of their lifetime). However, “off” is a bit of a misnomer. To stay responsive, most IoT systems don’t fully power down; they instead go into a sleep state, using the processor to monitor a wake-up pin. The memories of the device, however, remain fully up and running, consuming power. The reason is that if you power off the state of the system, you must go through the authentication process again to ensure a secure boot when it’s time to wake back up. This authentication takes time, typically on the order of multiple seconds if you use the system CPU to do the work, and that cuts into system responsiveness. By using accelerator technology, Intrinsix cuts this time from seconds down to milliseconds, regaining responsiveness. Being able to turn off 99% of the chip allows Intrinsix to reduce power consumption by up to 1000X and increase battery life by as much as 10X. All the while, the area consumed by the accelerator hardware is negligible, and even with the lower power consumption, they can still provide device security that meets NSA Secret-level requirements.
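The arithmetic behind those claims is easy to sketch. Below is a back-of-the-envelope model (illustrative numbers of my own choosing, not Intrinsix’s measured data) of average power for a device that spends 99% of its life asleep, comparing a sleep state that must keep memories powered against one that can power them off because re-authentication at wake-up is fast:

```python
# Back-of-the-envelope duty-cycle model; every number here is invented.
ACTIVE_MW       = 10.0    # power while awake and working
SLEEP_RETAIN_MW = 1.0     # "sleep" that keeps memories/state powered
SLEEP_OFF_MW    = 0.001   # near-total power-off, state discarded
DUTY_ACTIVE     = 0.01    # awake 1% of the time, asleep 99%

def average_power(sleep_mw: float) -> float:
    # Time-weighted average over the active/sleep duty cycle.
    return DUTY_ACTIVE * ACTIVE_MW + (1 - DUTY_ACTIVE) * sleep_mw

retain = average_power(SLEEP_RETAIN_MW)  # CPU-based: slow re-auth, keep state
off    = average_power(SLEEP_OFF_MW)     # accelerator: ms re-auth, power off

print(f"sleep-state power reduction: {SLEEP_RETAIN_MW / SLEEP_OFF_MW:.0f}x")
print(f"average power: {retain:.3f} mW -> {off:.3f} mW ({retain / off:.1f}x)")
```

With these invented numbers the sleep-state power drops 1000X while the average improves about 10X; the exact figures depend entirely on the duty cycle and the sleep floor, which is presumably why the power and battery-life claims are quoted as “up to”.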

The architecture of the security sub-system is self-contained and provides the secure boot environment for the SoC. The sub-system has its own security processor, security ROM, and the various engines needed for authentication, encryption/decryption, random number generation, and establishing secure tunnels for over-the-air (OTA) updates. The system maintains control of the rest of the SoC until it can make sure the device is securely up and running, after which it turns control over to the SoC’s host processors. The security sub-system also contains a monotonic counter that is used to check the validity of incoming updates, ensuring that a nefarious actor cannot take the system back to a previously valid but possibly more vulnerable version of the firmware.
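A monotonic counter defends against rollback in a very simple way; here is a minimal sketch of the idea (my own illustration, not Intrinsix’s implementation): the counter can only ever increase, so an update carrying an older version number is refused even if its signature is valid.

```python
class MonotonicCounter:
    """Models a hardware counter that can only move forward."""
    def __init__(self) -> None:
        self._value = 0

    @property
    def value(self) -> int:
        return self._value

    def advance_to(self, new_value: int) -> None:
        if new_value <= self._value:
            raise ValueError("monotonic counter cannot move backward")
        self._value = new_value

def accept_update(ctr: MonotonicCounter, version: int, signature_ok: bool) -> bool:
    # A correctly signed but *older* image is still rejected: an attacker
    # cannot roll the device back to vulnerable-but-genuine firmware.
    if not signature_ok or version <= ctr.value:
        return False
    ctr.advance_to(version)
    return True

ctr = MonotonicCounter()
print(accept_update(ctr, version=3, signature_ok=True))  # True  (3 > 0)
print(accept_update(ctr, version=2, signature_ok=True))  # False (rollback)
```

In a real device the counter would live in OTP or other tamper-resistant storage so that software alone cannot decrement it.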

In addition to the IP, Intrinsix also provides best-practice design services to their clients to ensure their SoC takes full advantage of the security sub-system. One example of this is how they deal with design-for-test (DFT). DFT, by its nature, is meant to “open up” the system logic for manufacturing test, and it can cause security problems when the SoC is in the field because the testability ports could be used for unauthorized access to registers that the security system protects. Intrinsix uses a strategy that enables SoC testability while the system is still unprovisioned. Once the security sub-system’s one-time-programmable (OTP) memories are loaded with public keys and secure boot firmware, the provisioned system detects that the OTPs have been written and disables the DFT ports from further use. Very slick.

There was a lot more to the conversation, and I’ll try to follow up with more information in later articles, but suffice it to say I found it refreshing to see that Intrinsix and the industry are indeed working hard to make IoT SoC designs secure.

If you want to learn more about Intrinsix and their IoT offerings, you can find them online at the link below, download their IoT eBook, or you can visit with them at the upcoming IoT Security Developers Conference being held in Santa Clara, CA on September 28th.

See also:
IoT Security Developers Conference
eBook: IoT Security The 4th Element
Intrinsix website


Clock Gating Optimization
by Bernard Murphy on 09-21-2017 at 7:00 am

You can save a lot of power in a design by gating clocks. For much of the time in a complex multi-function design, many (often most) of the clocks are toggling registers whose input values aren’t changing. That means those toggles change nothing functionally, yet they still burn power. Why not turn off those clock toggles at those registers when they’re not needed? That’s the reasoning behind clock gating.


Nice principle, but figuring out how best to apply it to your design in realistic use-cases takes some work. First, modifying the design to add clock gating definitely isn’t practical at layout or even at the pre-layout gate level; you’re going to need to add logic and wiring, and that’s best done at RTL. But then you worry about accuracy: to know where best to gate, you need to be able to run power estimates. At the layout/gate level these can be pretty accurate (typically within 5% of silicon), but at RTL there’s more uncertainty, so accuracy is more like ±15% relative to the layout estimate.

Is this uncertainty too big to make power optimization useful at this level? Not at all, but you have to think carefully about your approach and the choices you make. An important point is that, in this kind of analysis, relative accuracy is typically much better than absolute accuracy. If estimation shows that gating opportunity X will save twice as much power as gating opportunity Y, you can be pretty sure those two options would rank the same way in final implementation, irrespective of absolute predicted power reduction. And – engineering 101 – you want to focus on bigger savings. The error bars are still big enough that you don’t want to waste time on 1% deltas. That said, if you want to delve into tweaking, you can improve absolute accuracy in RTL power estimation through correlation with previous similar and implemented designs.
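A quick illustration of why relative accuracy is what matters here (a toy sketch with invented numbers): even if each RTL estimate is independently off by up to ±15%, a 2:1 difference between two gating opportunities essentially never re-ranks.

```python
import random

true_savings = {"opportunity_X": 2.0, "opportunity_Y": 1.0}  # mW, invented

flips = 0
trials = 10_000
for _ in range(trials):
    # Each estimate independently perturbed by up to +/-15%.
    est_x = true_savings["opportunity_X"] * random.uniform(0.85, 1.15)
    est_y = true_savings["opportunity_Y"] * random.uniform(0.85, 1.15)
    if est_y > est_x:
        flips += 1

print(f"ranking flipped in {flips}/{trials} trials")  # 0 for a 2:1 gap
```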

Building on that base you now have two objectives – figuring out where gating has most impact and then figuring out how you should drive that gating. The first takes careful analysis, across the design and across use-case scenarios. Synopsys’ SpyGlass Power has been tuned for many years to help with this task. It starts with a hierarchical spreadsheet view of power, leakage and dynamic, so you can figure out where you want to focus. It’s all neatly color-coded so you can easily spot priority problems then drill down through those to find top offenders and so on down. The tool provides more than 150 metrics so you can slice and dice this however you want.


Some of the metrics called out in the webinar look at the efficiency of clock gating, which SpyGlass assesses in several ways. One, clock gating ratio, is purely structural: how many register clocks (within a block) are actually gated when you trace back to the block’s root clock? This may be a somewhat crude estimate, but it provides a baseline.

Clock gating efficiency is a more refined estimate, requiring activity (simulation) data. Looking at all registers in the block, how many clock toggles occur on those registers versus clock toggles at the root clock? By this metric, if the clock on a register toggles only a few times relative to the root clock, it is efficiently gated. Of course, maybe the clock on a particular register has to toggle a lot because data is frequently changing, but this metric still provides a pointer to root causes of dynamic power.

The next two metrics, ROADE™ and ROADF™, are finer-grained still. ROADF considers the number of Q-pin toggles versus the number of clock toggles on a register. This is obviously much closer to a measure of ideal efficiency: if Q doesn’t change, in principle the clock didn’t need to toggle. ROADE considers the same measure across all registers sharing a common clock enable, so it is a metric per enable signal.
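To make the ratios concrete, here is a toy computation over made-up toggle counts (ROADE and ROADF are Synopsys trademarks; the formulas below are my plain-ratio reading of the webinar’s descriptions, not the tool’s exact definitions):

```python
# Made-up activity data for three registers sharing one clock and one enable.
registers = [
    # clock toggles seen at the register, Q output toggles
    {"name": "r0", "clk_toggles": 1000, "q_toggles": 40},
    {"name": "r1", "clk_toggles": 1000, "q_toggles": 900},
    {"name": "r2", "clk_toggles": 1000, "q_toggles": 5},
]
ROOT_CLK_TOGGLES = 10_000

for r in registers:
    # Gating efficiency: fraction of root-clock edges that never reach this register.
    eff = 1 - r["clk_toggles"] / ROOT_CLK_TOGGLES
    # ROADF-style ratio: useful (Q) toggles per clock edge actually delivered.
    roadf = r["q_toggles"] / r["clk_toggles"]
    print(f'{r["name"]}: gating efficiency {eff:.0%}, Q/clk ratio {roadf:.3f}')

# ROADE-style: the same Q/clk ratio aggregated over all registers on one enable.
total_q = sum(r["q_toggles"] for r in registers)
total_clk = sum(r["clk_toggles"] for r in registers)
print(f"enable-level ratio: {total_q / total_clk:.3f}")
```

By the efficiency metric all three registers look identically well gated (90% of root edges never reach them), but the Q/clk ratios show that r2 wastes almost every edge it does receive, making it the best candidate for a finer enable.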

Put these metrics together and you can build a pretty decent sense of where clock gating can have the biggest impact on power. But of course what you are seeing in the dynamic power analysis is metrics for one use-case/scenario; efficiency metrics could change significantly in different use cases.

The next step is deciding how to drive those clock enables once you have decided which clocks you want to gate. This step was illuminated by a question that came up in the Q&A: why wouldn’t I just use automatic clock gating, where a tool figures out formally or in some other way where best to gate clocks, works out how to build the enable, and maybe even inserts the logic for me? The answer was interesting. Such tools are available and they do work, but they aren’t getting heavy traction. In practice, designers seem much more interested in making their own decisions around enable logic, often factoring in considerations (e.g. boundary use-cases not represented in the scenarios you ran) which simply aren’t visible to a tool.

You can get a lot more detail from the recorded Webinar HERE.


IoT SoCs Demand Good Data Management and Design Collaboration
by Mitch Heins on 09-20-2017 at 12:00 pm

Design data management has always been important. Board designers have known this for decades, as they have had to keep all their discrete components organized and understood. Sourcing components is not easy: it means hours of reading and reviewing specifications, finding reliable sourcing partners, and understanding the nuances of what the documentation isn’t telling you.

This was not a big problem for custom/ASIC designers until the advent of 3rd-party intellectual property (IP) and system-on-chip (SoC) designs. Until the early 2000s, most if not all of a design was built and controlled by the design team doing the IC. Since then, however, there has been a literal explosion of 3rd-party IP available for IC design, and now, with the advent of the Internet of Things (IoT), there seems to be a pull for even more.

IoT design, by its very nature, implies combining multiple disciplines on a single IoT SoC. SoCs must be power-efficient while at the same time able to fuse data from a variety of sensors and sources, some of which will themselves be on the same SoC. The fact that so many domains are being combined implies IP coming from multiple different sources. There is no one company that specializes in analog, digital, microelectromechanical systems (MEMS), chemical, photonics, memory, processors, graphics, neural nets, big data, RF communications, and so on. IoT truly puts the “system” into SoC, and that means bringing together the best IP for the job at hand. The value-add for SoC design is how the system brings together different IP into something that creates a unique value proposition for the marketplace.

So, history repeats itself. We are back to working out all the sourcing issues that board designers have dealt with over the last 40+ years. Fortunately for IoT SoC designers, some tool suites have come along to help manage this task. ClioSoft’s designHUB comes to mind. ClioSoft is best known for its data management tool SOS7, but while SOS7 is a great tool for data management, it is designHUB that brings context and organization to the designer’s desktop.

The thing that makes tools like designHUB so necessary is the complexity of what IoT and SoC designs really imply. Many IP suppliers and design houses try to reduce complexity and risk by promoting configurable pre-designed platforms complete with processors and software stacks, interface IP, bus and network-on-chip (NoC) communications fabrics, encryption hardware, radio transceivers, and the like. That’s just the design IP. Then there is all the verification IP that goes with each piece along with prototype test boards, compilers, FPGA emulators etc.

And…since this is all going into an IC, you also must remember packaging, design-for-test, design-for-manufacturing, and any qualification steps needed once the IC is manufactured. Needless to say, each of these items is continually going through revisions and changes. That means the platform you used 6 months ago, and now want to use again, has had hundreds of changes applied, some of which you want and some of which you may not.

Platforms are great and the way to go, but they imply a tremendous amount of information that must be obtained, consumed, applied, remembered and revised each time you rev your IoT SoC design. So, while this is like what the board guys have been doing for years, it is also harder, because the systems and designs are now orders of magnitude more complex.

ClioSoft’s designHUB is a design collaboration ecosystem where users can create, share and reuse design data and IP more easily. ClioSoft’s vision is that this is all-encompassing and includes design IP (internal and 3rd-party), verification IP, documents, user experiences, scripts, methodologies, libraries, ideas and even discussions. Its ability to let designers collaborate on, share and reuse all types of information is what makes it truly interesting for IoT SoC design. As already mentioned, IoT design is diverse and cuts across multiple engineering domains, and it is designHUB’s crowdsourcing and dashboard social-media capabilities that make it attractive for enabling engineers of disparate backgrounds to come together on design solutions.

As IoT moves into mission-critical applications such as autonomous vehicles, electric grids, traffic control systems and the like, there will be a bigger and bigger demand for requirements traceability and IP tracking, both to ensure quality before deployment and to service these systems once in the field. ClioSoft’s designHUB seems well positioned to address the IoT space and looks to be an important part of anyone’s IoT design environment.

About ClioSoft: ClioSoft was launched in 1997 as a self-funded company, with the SOS design collaboration platform as its first product. The objective was to help manage front-end flows for SoC designs. The SOS platform was later extended to incorporate analog and mixed-signal design flows wherever Cadence Virtuoso® was predominantly used. SOS is currently integrated with tools from Cadence®, Synopsys®, Mentor Graphics® and Keysight Technologies®. ClioSoft also provides an enterprise IP management platform for design companies to easily create, publish and reuse their design IPs.

See Also:
ClioSoft Products

Also Read

ClioSoft’s designHUB Debut Well Received

The Official SemiWiki #54DAC Party Guide!

Scaling Enterprise Potential with ClioSoft’s designHUB platform