
Snapback behavior determines ESD protection effectiveness

by Tom Simon on 12-14-2017 at 12:00 pm

Terms like avalanche breakdown and impact ionization sound like they come from the world of science fiction. They do indeed come from a high-stakes world, but one that plays out over and over again here and now, on a microscopic scale in semiconductor devices, namely as part of electrostatic discharge (ESD) protection. Semiconductor devices are highly vulnerable to the high-voltage spikes that commonly occur when triboelectrically charged objects touch their terminal pins. ESD can be lethal to a device if there is inadequate protection designed into the chip.

In an unprotected or incorrectly protected circuit, ESD events can lead to failures or reduced chip reliability. The mechanisms for these failures fall into several categories, including oxide breakdown, junction burnout and metal burnout. Many failures are immediately obvious, but ESD can also cause latent failures that go undetected during testing. Latent failures can worsen over time if the design has inadequate ESD protection, and can ultimately lead to field failures months or years later.

Typically, ESD protection is provided by creating alternative parallel electrical paths that harmlessly direct discharge energy so that it dissipates in ESD devices. These circuit paths contain so-called clamp devices and diodes that are usually switched off to allow normal circuit operation. However, clamps begin conducting current and limit voltage once a specified threshold voltage is reached. Many types of ESD clamps rely on the internal structure of MOS FET devices to perform their job.

Every NMOS FET contains a parasitic bipolar junction transistor (BJT) due to the configuration of the doped material. In the normal operation of an NMOS device the parasitic BJT does not come into play. As shown in Figure 1, the BJT has the source and drain as its emitter and collector respectively, and the bulk node underneath can serve as the base of the BJT under the right conditions. This is where impact ionization comes into play. A mobile electron or hole will move under the influence of an electric field. With a low applied electric field, it will move without causing any changes to the system. However, under a strong electric field, like the one found in a device with a high voltage across its terminals, the mobile charge carrier will energetically strike bound charge carriers, which can then break free. These new charge carriers can in turn repeat the process, leading to an avalanche current.

Figure 1

When this avalanche current is moving toward the base of the parasitic BJT, the base current can trigger the device, allowing large current flow between the collector and emitter. The most interesting thing about this is that once the device triggers, the high electric field that started the process is no longer necessary, or even present, to sustain the current. Nevertheless, the conduction continues with increasing current, but at much lower voltages. This phenomenon of triggering at a relatively high voltage and then falling back to conduction at a lower voltage is called snapback. For an effective ESD protection device the trigger voltage should not be so high that it will cause damage to sensitive circuit devices. Also, the lower hold voltage should be above the normal circuit operating voltage so that the device will switch off once the ESD event ends.
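
The trigger-and-hold behavior described above can be sketched as a tiny state update; the Vt1 and Vh values below are illustrative assumptions, not data for any real clamp:

```python
def snapback_state(v, triggered, vt1=8.0, vh=4.5):
    """Toy state update for a snapback clamp.

    The device stays off until the terminal voltage reaches the trigger
    voltage Vt1; once triggered, it keeps conducting as long as the
    voltage stays above the holding voltage Vh.  (Vt1 and Vh here are
    illustrative values, not from any real PDK.)
    """
    if not triggered:
        return v >= vt1          # turns on only at the trigger voltage
    return v > vh                # stays on until voltage drops below Vh

# Sweep a simplified ESD pulse: ramp up, trigger, then collapse.
triggered = False
history = []
for v in [2.0, 6.0, 8.5, 5.0, 4.0]:
    triggered = snapback_state(v, triggered)
    history.append(triggered)
# After triggering at 8.5V the clamp still conducts at 5.0V (above Vh)
# but switches off again at 4.0V (below Vh).
```

Note how the clamp conducts at voltages well below the voltage that triggered it, which is exactly the behavior a single-threshold diode model cannot represent.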

The IV-characterization curves for ESD devices are normally created by running carefully timed and controlled square waves through the device. When done correctly, Joule heating will not cause the device to fail and a series of measurements can be taken. This process is called transmission line pulse (TLP) testing. It is possible to verify proper operation of the ESD device in the circuit with the IV curves created from TLP measurements. However, normal SPICE simulation will not work well because SPICE cannot model the device's turn-on through impact ionization and the subsequent voltage snapback. The negative differential resistance during snapback causes convergence problems in SPICE. If analysis is performed on an ESD protection network without proper modeling of snapback behavior, incorrect results may be obtained. In some cases, a chip failure or ESD violation could be overlooked, leading to tester or field failures after tapeout.

Let’s examine one such case. A software tool that can properly analyze the full dynamics of an ESD event involving snapback devices must have two critical capabilities: a) proper handling of snapback behavior, and b) the ability to correctly simulate triggering of two or more parallel devices. Both are necessary. A tool designed to handle cases like this is Magwel’s ESDi® which can simulate an ESD event where there are snapback devices and competitive triggering between parallel devices. The case below shows how a serious violation can be overlooked if the analysis tool cannot properly model circuit behavior under ESD stress.

Figure 2

In figure 2 the NMOS device “esd_cell2” is a snapback device. If during an ESD event on pad “IO” the voltage across esd_cell2 does not reach its snapback trigger voltage, all the current will pass through the already triggered PMOS esd_cell1. This will violate margins on PMOS esd_cell1 and cause an ESD related failure. On the other hand, if esd_cell2 is not modeled with snapback and its holding voltage is used as the trigger voltage, it will appear to trigger, seemingly lowering the voltage and providing another current path.

First here is the example of an ESD event on “IO” with no snapback modeling, where NMOS esd_cell2 appears to trigger:

However, we can see below what actually happens in the circuit with an HBM test on the pad “IO”. With the snapback device esd_cell2 below its trigger threshold, the pad voltage reaches 10.369V and only PMOS esd_cell1 is triggered. In this instance, the voltage across esd_cell1 is 9.027V and the total current of 1.33A passes on this one path. There is both an overcurrent and overvoltage violation on PMOS esd_cell1, -14% and -6.6% margin respectively.
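
For illustration, the reported margins follow from a simple margin formula; the device limits below are back-calculated from the article's numbers and are assumptions, not published limits for esd_cell1:

```python
def margin_pct(limit, measured):
    """Percent margin against a device limit; negative means a violation."""
    return (limit - measured) / limit * 100.0

# Illustrative limits inferred from the reported margins; the actual
# PDK limits for esd_cell1 are not given in the article.
v_limit, v_measured = 8.468, 9.027   # volts across esd_cell1
i_limit, i_measured = 1.167, 1.33    # amperes through esd_cell1

v_margin = margin_pct(v_limit, v_measured)   # roughly -6.6% (overvoltage)
i_margin = margin_pct(i_limit, i_measured)   # roughly -14%  (overcurrent)
```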

Magwel’s ESDi® uses a built-in simulator designed to take snapback into consideration. It correctly identifies which devices will and won’t trigger based on Vt1 trigger voltage. It will also account for the correct Vh1 holding voltage of snapback devices. This is important because once triggered, higher current values are sustained only above the holding voltage. Simply modeling snapback devices as ordinary diodes can cause serious ESD violations to go unnoticed.


Magwel ESDi Fieldviewer showing current density violation during ESD event

Magwel ESDi® runs on layout data and does not require the design to be LVS clean. It comes with a field viewer that shows current densities in the metal routing of the discharge path, so that electromigration (EM) issues can be visually inspected after reviewing the interactive error report. HBM checks are performed with simulation on all pin-pair combinations, which is possible thanks to ESDi's rapid simulation capability, though users can select a subset of pin pairs when focused analysis is needed.

With much tougher requirements for reliability due to standards like ISO 26262, preventing ESD related failures during testing, or even more importantly in the field, is becoming paramount. Using the right analysis tool can make the difference between catching or missing a potential failure. With the special requirements of ESD simulation it makes sense to apply a tool that is known for consistent and reliable results, and can handle snapback devices and parallel current paths. For more information on ESDi®, please look at the Magwel website.


Shifting Left with Static and Formal Verification

by Bernard Murphy on 12-14-2017 at 7:00 am

Unless you have been living in a cave for the last several years, by now you know that “Shift Left” is a big priority in product design and delivery, and particularly in verification. Within the semiconductor industry I believe Intel coined this term as early as 2002, though it seems now to be popular throughout all forms of technology development.

The idea behind Shift Left in verification starts from a difficult mismatch between traditional product flows and current market needs. A long-favored approach to design and verification has been to make these stages more or less serial, only starting serious verification on a phase of design when that phase is nearing completion. In more relaxed times, that approach was eminently sensible – why spend much time in test if what you are testing is still evolving?

But as designs become more complex, they also become harder to test, leading to more unpredictable verification closure. Semiconductor companies are now responsible for a lot more software content in the product, so verification spans both hardware and software. Also, product design is becoming a lot more collaborative – general-purpose catalog parts may still be around but the real money-makers are frequently co-developed; as a result, spec/requirements may continue to evolve through design.

All of which means that verification has to be pulled in earlier in the design cycle – the Great Shift Left. Sadly, appeals to fairness (the spec changed, you didn’t allow enough time in the schedule, you didn’t give us enough staff) don’t wash. Competitors are stepping up to shift left and delivering outstanding products on these crazy schedules. Buckle up; it’s natural selection time.

How can you push verification left when the design/assembly is still in flux? By moving more verification responsibility into design, or into verification teams closely aligned with design. An increasingly popular way to do that is through better smoke-testing (so the verification team doesn't waste time on basic bugs) and coverage help (why can't I get past 90% coverage on this block?).

Also of note, an important advance mentioned in the webinar is that SpyGlass looks like it is fully integrated into the Synopsys verification flow. You can start from a design compiled for VCS and you can setup to automatically switch into Verdi for debug. It’s getting a lot easier to stay in these fully integrated environments, which can’t be good news for the smaller verification players. VCS, VC Formal and other verification solutions leverage the same unified compile and debug environment.

Webinar Summary
With increasing complexity in chip designs, IP and SoC teams are faced with the challenge of minimizing risk, while still maintaining high levels of productivity. Poorly coded RTL is a primary concern, as it leads to bugs, longer verification cycles, unpredictable design processes and delayed time to market.

The combination of static and formal technologies enables smarter, faster and deeper lint analysis at RTL for early signoff. These advanced capabilities enable designers to perform a series of even more comprehensive checks, ensuring fewer bugs, a more stable design flow and accelerated verification closure.

In this webinar, Synopsys discusses how the SpyGlass® Lint Turbo, VC Formal™ Auto Extracted Properties (AEP) and Formal Coverage Analyzer (FCA) Apps identify RTL issues at their source, pinpoint coding and consistency problems in the RTL descriptions, and help designers resolve issues quickly before design implementation. The webinar also includes a tool demo of the easy-to-use Verdi® unified debug capabilities and the low-noise technology that automatically finds RTL bugs, dead code, and unreachable FSM states and transitions using advanced static and formal technology.
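
To give a flavor of what a basic structural lint check does, here is a toy sketch in Python; it is nothing like the semantic analysis SpyGlass performs, and the rule, regex and signal names are purely illustrative:

```python
import re

def unused_signals(verilog_src: str):
    """Toy structural lint rule: flag wires/regs declared but never used.

    A regex sketch of one classic lint check.  Real lint tools parse
    and elaborate the RTL; this only illustrates the idea of a static
    check that runs long before any simulation.
    """
    decls = re.findall(r'\b(?:wire|reg)\s+(?:\[[^\]]*\]\s*)?(\w+)\s*;',
                       verilog_src)
    unused = []
    for name in decls:
        # Count every mention; the declaration itself accounts for one.
        uses = len(re.findall(r'\b%s\b' % re.escape(name), verilog_src))
        if uses <= 1:
            unused.append(name)
    return unused

rtl = """
module m(input a, output y);
  wire used_sig;
  wire dead_sig;
  assign used_sig = a;
  assign y = used_sig;
endmodule
"""
# Flags dead_sig but not used_sig.
```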

You can watch the webinar HERE.


DSP Benchmarks and Libraries for ARC DSP families

by Eric Esteve on 12-13-2017 at 12:00 pm

Synopsys' DesignWare ARC HS4xD family is a perfect example of a high-performance, DSP-enhanced RISC CPU IP core, able to address high-end IoT, mid- to high-end audio, and baseband control. The ARC HS4xD architecture uses a 10-stage pipeline for high Fmax, delivering excellent RISC efficiency of 5.2 CoreMark per MHz. The ARC EMxD processors offer the lowest-energy DSP solution for sensor processing and data fusion, voice and speech, audio, and low bit-rate communication. Their shallow pipeline architecture directly reduces both area and power while keeping very good RISC efficiency at 4.02 CoreMark per MHz. We will review the DSP capabilities of Synopsys' DesignWare ARC DSP-oriented RISC processors: EM5D & EM7D, EM9D & EM11D, and HS45D & HS47D.



The complete DSP features are listed in the table above for the various family cores. These cores are fixed-point by default, but all offer floating-point operation as an option. The ARC HS45D/47D shows more muscle, with 4×16 SIMD operation (instead of 2×16 for EMxD) and 64-bit load/store (instead of 32-bit for EMxD).

The ARC HS45D/47D cores support dual-issue execution, increasing utilization of the functional units with a limited amount of extra hardware. What is dual-issue? The ability to issue up to two instructions per clock, with in-order execution and the same software view as single issue. Dual-issue increases both RISC and DSP performance at a modest cost, only about a 15% increase in area and power. The instruction set has also been improved to raise instructions per clock, allowing multiple instructions to execute in parallel and take advantage of the dual-issue pipeline.

The Instruction Set Architecture (ISA) is ARCv2DSP, designed for high DSP core efficiency and small code size. Synopsys shows benchmarks (cycle-count ratios) for several classes of DSP functions (vector functions, complex math, scalar math, matrix functions, IIR filters, FIR filters, interpolation and transforms), comparing the ARC EM9D, ARC EM5D and Processor Y. The core efficiency of the ARC EM9D is always better than that of the ARC EM5D, and better than the competition, with a cycle-count ratio lower by 40% on average.

ARCv2DSP includes 4×8, 2×16, 4×16, and 2×32 SIMD, 16+16 complex MPY/MAC, FFT butterfly instructions, and ITU support for voice. Because HS4xD implements the same instructions as EMxD, the ARC portfolio stays consistent and software ports easily between cores. HS4xD also implements 64-bit source operands, enabling the 4×16 and 2×32 SIMD instructions for improved DSP efficiency.




When combining RISC and DSP capabilities in one processor, the key is software tool and library support that allows seamless C/C++ programming and debug. DSP software development is made easy thanks to an enhanced C/C++ DSP compiler and a rich DSP software library. The LLVM-based C/C++ compiler offers DSP support with fractional data types and a fixed-point API, along with excellent performance and code density.

But it would not be easy to develop DSP software without a rich DSP software library. Synopsys provides a highly optimized set of common DSP functions: vector, matrix, filters (FIR & IIR), transforms (FFT), interpolation, and fixed-point DSP for Q15 and Q31 data types. The DSP libraries also include an ITU-T base-operations library for voice codecs and a C++ class library for fast emulation on x86 platforms. To support efficient simulation, Synopsys provides nSIM bit-accurate DSP models and hardware targets for emulation: nSIM/xCAM, RTL, FPGA and silicon.
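
As a rough sketch of what the Q15 fixed-point arithmetic mentioned above involves (plain Python for illustration, not the ARC DSP library's actual API):

```python
def to_q15(x):
    """Encode a real value in [-1, 1) as a 16-bit Q15 integer."""
    return max(-32768, min(32767, int(round(x * 32768))))

def from_q15(q):
    """Decode a Q15 integer back to a float."""
    return q / 32768.0

def q15_mul(a, b):
    """Saturating Q15 multiply.

    The product of two Q15 values has 30 fractional bits, so we shift
    right by 15 and saturate to the 16-bit range, which is what DSP
    hardware typically does instead of wrapping around.
    """
    prod = (a * b) >> 15
    return max(-32768, min(32767, prod))

# 0.5 * 0.25 = 0.125 in Q15 arithmetic
r = q15_mul(to_q15(0.5), to_q15(0.25))
# from_q15(r) gives 0.125

# Saturation: (-1.0) * (-1.0) cannot represent +1.0 in Q15, so the
# result clips to the largest positive value, 32767.
sat = q15_mul(-32768, -32768)
```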

The next picture lists the DSP functions supported by the ARC DSP family:



Every CPU/DSP IP vendor will claim to offer the best solution, which is why it is wise to look at verified facts when comparing with the competition. Synopsys has compiled benchmarks comparing the cycle-count ratios of the Cortex-M4, Cortex-M7, Cortex-A8 and ARC EM9D:



This blog has been extracted from a presentation given at the ARC Processor Summit, "Programming DSP Processors Efficiently and with Ease," by Pieter van der Wolf, Principal Product Architect, Synopsys. Abstract: "The ARC EM and HS processor families both offer processors with the ARCv2DSP ISA extension to support a wide range of DSP applications. These DSP processors come with an advanced tool suite, including a powerful DSP compiler, to support C-level programming of DSP applications. In this presentation we show how excellent results, in terms of high performance and small code size, can be achieved with high-level DSP programming. Ease of programming is also supported with an extensive library of DSP functions. The high-level programming enables software compatibility across different ARC DSP processors."

By Eric Esteve from IPnest


Webinar: ADAS and Real-Time Vision Processing

by Daniel Nenni on 12-13-2017 at 7:00 am

ADAS is in many ways the epicenter of directions in the driverless car (or bus or truck). Short of actually running the car hands-free through a whole trip, ADAS has now advanced beyond mere warnings to providing some level of steering and braking control (in both cases for collision avoidance), more adaptive cruise control, adaptive lighting (high-beam/low-beam), parking assistance; the list goes on. All of this requires a greatly enhanced idea of what is going on around the car, and that takes lots of sensors (cameras, radar, lidar, ...), lots of sensor fusion and a very significant level of image recognition. It starts with some very sophisticated signal processing. You can learn more about the camera-monitoring part of this (including monitoring you, the not-always-reliable driver) by watching an upcoming CEVA webinar on the topic.

Register Now for this important Webinar on January 3rd at 7:00am PST

Now if you want to build this kind of solution into your ADAS system (CEVA is looking particularly at Tier-1 providers here), you could start by building up all that image recognition technology yourself, or you could start with a solution that already incorporates detection engines for pedestrians, vehicles, lane markers and moving objects (deer for example, a major cause of injuries and car damage where I live; bears and cows are generally fatal all round). The platform also includes CEVA's programmable vision platform, to which you can add your own differentiated image processing.

The great thing about emerging ADAS technologies is the amazing capabilities they enable. The bad thing, from a provider's point of view, is the incredible range of technologies you have to bring together, and qualify, to make all those amazing features possible. Why start from scratch when companies like CEVA have already done the base-level heavy lifting for you?

Register Now for this important Webinar on January 3rd at 7:00am PST

Summary
As the automotive market experiences accelerated growth and rapid adoption of vision applications such as Camera Monitoring Systems, Smart Rear Cameras, and Driver Monitoring Systems, there is a need for solutions that are both efficient and cost effective to address these applications in high volumes. In addition, these solutions must also allow for Tier-1s to both differentiate and meet the growing demands in performance from today’s OEMs.

NextChip's APACHE4 is a vision-based pre-processor SoC targeting next-generation ADAS systems, built around a dedicated sub-system of image processing accelerators and optimized software. The APACHE4 incorporates dedicated detection engines for pedestrian detection, vehicle detection, lane detection and moving object detection. Nextchip has integrated CEVA's programmable vision platform into the APACHE4 alongside its differentiated image processing accelerators to enable advanced and affordable ADAS applications.

Join CEVA and Nextchip experts to learn about:
· Challenges of ADAS and vision based autonomous driving
· Overview of Nextchip APACHE4 ADAS SOC
· Utilization of CEVA-XM4 for differentiation and performance
· Applications use cases with APACHE4 and CEVA-XM4

Target Audience

Computer vision engineers, deep learning application designers, project managers, marketing experts and others interested in embedded vision, machine learning, and autonomous driving.

Speakers
Jeff VanWashenova
Director, Automotive Segment Marketing, CEVA

Young-Jun Yoo
Director, Strategic Marketing, Nextchip

About CEVA, Inc.
CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.


Starblaze Uses Synopsys DesignWare IP to Launch SSD Controller SoC

by Mitch Heins on 12-12-2017 at 12:00 pm

I recently wrote an article about Synopsys’ DesignWare Security IP for the Internet-of-Things market and was interested to see that a startup, Starblaze Technology, has now used parts of the same IP in its latest Solid-State Drive (SSD) controller. The security IP caught my eye, but the rest of the story really put things into focus. Synopsys was able to provide a one-stop shop experience for Starblaze that combined DesignWare Foundational IP (high performance cores, memories and standard cell logic), DesignWare Security IP (True Random Number Generator) and DesignWare Interface IP (DDR4 and PCI Express 3.1). The icing on the cake was the fact that they provided this IP in a flexible and configurable way that let Starblaze differentiate their product offering while getting all the advantages of using silicon proven IP and first-pass silicon success.

Starblaze, headquartered in Beijing, China, is a relative newcomer to the market, having been founded in November of 2015. Only two years after its founding, the young startup has used Synopsys' DesignWare IP portfolio to design and launch a new product, the STAR1000 enterprise storage SSD controller, which they claim met with first-pass silicon success and which is already shipping in volume production. That is no minor feat for a complex multi-core high-speed device.

Starblaze attributes much of their first-pass silicon success to their use of Synopsys DesignWare IP. Working with Synopsys allowed them to focus on their design while gaining all the advantages of silicon-proven IP. The STAR1000 SSD controller uses the Synopsys ARC HS38 processor and integrated MetaWare Development Toolkit as well as several DesignWare Foundational IP blocks such as high-speed, low-power memories and logic components.

The Starblaze implementation used multiple ARC cores to achieve the high IOPS they needed, as well as the 40-bit physical address extensions required to support one terabyte of physical memory. Starblaze took advantage of the HS core's integrated error correction code (ECC) support to provide the implicit error handling needed to ensure the extremely high data reliability required by SSDs. They also used the ARC's flexible architecture to add custom instructions, condition codes, and core and auxiliary registers to lower power consumption and reduce I/O latencies by 50 percent over competing alternatives.
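
The one-terabyte figure follows directly from the address width; as a quick check:

```python
# 40-bit physical addressing reaches 2**40 distinct byte addresses,
# i.e. one tebibyte of physical memory, versus 4 GiB for 32-bit
# addressing: a 256x increase in addressable space.
addr_bits = 40
capacity_bytes = 1 << addr_bits
capacity_tib = capacity_bytes / 2**40        # 1.0 TiB
gain_over_32bit = capacity_bytes // (1 << 32)  # 256x over 32-bit
```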

Because Starblaze is selling into the enterprise storage SSD market, they also needed to meet very stringent security standards to ensure protection against malicious attacks and backdoor security issues. To do this, they made use of Synopsys DesignWare Security IP to implement a True Random Number Generator (TRNG) for key generation and other cryptographic data required by the security protocols used in their SoC. Synopsys' TRNG module is FIPS 140-2 certified and offers a high-quality entropy source to enable high levels of security.

In addition to the cores, memories and security IP, Starblaze made use of Synopsys silicon-proven DesignWare Interface IP for their DDR4 and PCI Express 3.1 interfaces. Especially important to Starblaze was the ability of the DDR4 interface IP to support DDR4 3D-stacked (DDR4-3DS) DRAM with 16 ranks of memory. That capability expanded Starblaze's memory capacity to four times that of the previously supported 4-rank memories.

As for their PCI Express 3.1 interface, Starblaze used DesignWare IP to support single-root I/O virtualization (SR-IOV) features, which allow the Starblaze controller to increase system performance when used with enterprise systems employing virtualization. SR-IOV allows the SSD to be shared across multiple CPUs or operating systems. Synopsys' DesignWare DDR4 and PCI Express 3.1 IP also enabled Starblaze to include reliability, availability and serviceability features in their SoC that help increase data protection, system availability and issue diagnosis.

All in all, this is a very nice success story for the Synopsys DesignWare IP portfolio and it highlights the breadth and complexity of what can be done with their DesignWare IP portfolio. For more information on other Synopsys and Starblaze offerings please see the links below.

See Also:
Starblaze / Synopsys Press Release
Synopsys / Starblaze Success Story
Synopsys ARC Processors
Synopsys DesignWare IP


Creative Noise-Reduction in Automotive AMS

by Bernard Murphy on 12-12-2017 at 7:00 am

Automotive applications are one of the hottest domains today in semiconductor design. We're bombarded daily with articles on new hybrids, electric cars, ADAS and autonomous cars, trucks and buses. All of these applications are certainly amazing, but the devices that make them work still have to deal with the same old challenges, often amplified in the cabin / body / drivetrain of a vehicle. One of these is noise in all its many manifestations, particularly (in this context) the noise impact of digital electronics sitting right next to analog circuitry on the same device.


A lot of what we read about managing noise, particularly between analog and digital circuitry, seems to start from the assumption that the noise source is immutable and what we need to manage is immunity to that source, through shielding, decoupling, separated power and ground planes, and so on. All of these methods are of course important, but there is also value in challenging that assumption of immutability. If the design can be modified to reduce noise generated by the digital circuitry, that should be a big win.

A while ago, ANSYS sponsored a webinar on just this topic, centering on a presentation by Dr. Peter Blinzer of NXP Hamburg. Although this isn’t hot off the presses, I think it is arguably even more important today, as mixed analog and digital content appears throughout infotainment devices, sensor fusion and other AMS functions in advanced automotive electronics. It’s also, I think, a creative application of a tool you wouldn’t normally associate with noise management.

What generates noise in digital circuitry is switching. The other thing greatly influenced by switching is dynamic power consumption. So, unsurprisingly, noise and dynamic power are related. But did you ever think about RTL power optimization as a way to also reduce noise? Peter Blinzer did, and came up with some compelling results.
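
The link is the standard CMOS dynamic power relation, P = α·C·Vdd²·f; the numbers in this sketch are purely illustrative and not taken from the NXP design:

```python
def dynamic_power(alpha, c_farads, vdd, f_hz):
    """Standard CMOS dynamic power estimate: P = alpha * C * Vdd^2 * f.

    alpha is the switching activity factor (0..1).  Reducing alpha,
    for example by gating an idle clock, cuts both dynamic power and
    the switching noise injected into neighboring analog circuitry.
    """
    return alpha * c_farads * vdd * vdd * f_hz

# Illustrative numbers only: gating an always-running clock tree
# drops the activity factor from 0.5 to 0.05.
p_before = dynamic_power(0.5, 100e-12, 1.2, 200e6)   # 14.4 mW
p_after = dynamic_power(0.05, 100e-12, 1.2, 200e6)   # 1.44 mW
```

The same clock-gating fix that yields the 10x power reduction in this toy example also removes the switching edges that were polluting the analog signal path, which is the point of the webinar.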

Optimizing power is always a good thing; you work at reducing power until you hit whatever power budget you were given, then you stop, right? But if switching also affects noise, is hitting the power budget the right place to stop? Maybe not – maybe some further tweaking will pay off in noise reduction. Peter illustrated this through his analysis and optimization of a digital radio IC used in infotainment systems. He starts by showing a very detailed analysis of signal to noise in such a radio and the frequencies (including digital clock frequencies and related harmonics) and root causes where noise can dramatically reduce the quality of reception. These are what he wanted to reduce.

Peter ran PowerArtist analyses on multiple use-cases, looking for opportunities to reduce power. In one case, he found that a digitally controlled oscillator (DCO) should have been gated off but was actually still running. That is also a pointer to unnecessary noise. By fixing the clock gating, power was reduced naturally; we'll get to the overall noise impact next. In fact, similar analysis pointed to opportunities to improve gating on multiple DCOs, yielding more power reductions which, together with others, added up to a 60% power saving in the digital PLL alone.

The impact on noise, as measured in layout-based dynamic power simulations, was just as striking. The changes took the total supply current from an average of ~30mA, with noise of ~15mA, down to ~5mA, with noise of maybe ~1mA. Similar results were apparent in a frequency spectrum analysis, where most of the original noise contributors were reduced by almost 20dB, below the level where they can be a serious threat to reception quality (a few were not reduced, but these were sources outside the scope of this analysis).

Peter noted several RTL changes they made to implement these improvements: replacing floating-point arithmetic with fixed-point, optimizing clock gating, and optimizing test-support logic. They were also careful to avoid over-constraining the design, naturally to avoid excess power but equally to limit noise. Not something we might normally consider.

To get more detail on this lateral-thinking approach to noise management in AMS designs, you can watch the webinar HERE.


Synopsys White Paper on IoT Security – Introduces DesignWare Root-of-Trust Module

by Mitch Heins on 12-11-2017 at 12:00 pm

As the internet of things (IoT) continues its climb to a trillion devices, there have been many articles and books written on the need for securing those devices. With all the IoT gear that I seem to be picking up as Christmas presents, I feel like I'm doing my part to help the market get there, but I have to say, I sure hope the SoC designers that created those devices have been reading and listening to the need for IoT security.

And it does indeed seem that more companies are paying attention to the need for integrated security in their IoT devices. It all starts with something called a Root-of-Trust (RoT), and now one of the bigger IP players in the industry, Synopsys, has written a white paper on RoT and is backing it up by introducing a new DesignWare module targeted at providing IoT security IP for SoC devices. The new module is called the 'tRoot H5 Hardware Secure Module'. More on that in a second, but first let's do a quick review of what we mean by RoT.

As a quick reminder of why we need to secure our IoT devices, one only need remember back to late 2016 when researchers were able to demonstrate how hackers could possibly put people into harm’s way by remotely accessing a Tesla Model S car’s control system. The researchers were able to access and control the engine, braking system, sunroof, door locks, trunk, side-view mirrors and more. They did this by hacking into the car’s infotainment and WiFi connections which then got them access to the car’s controller area network (CAN) bus. The good news is that these were benevolent researchers who then worked with Tesla to close the security holes. But what if it had been someone more nefarious? Enough said.


One way to combat this type of hacking is to use what is known as a hardware Root-of-Trust (RoT). RoT is a set of functions that is explicitly trusted by the SoC’s operating system. These functions are used to ensure secure communications between an IoT device and the outside world using encryption and secure keyed handshaking. Additionally, RoT functions can also monitor the SoC’s memory, registers, and internal bus structures to ensure that data that does make it into the device has not been tampered with.

RoT functions can be implemented in software but most SoC designers are now turning to dedicated hardware accelerators to implement their RoT. Hardware cryptographic accelerators run very fast while taking the load off the SoC’s CPU(s), thus saving power and conserving device runtime memory. Additionally, security hardware can also be linked into the rest of the SoC architecture to monitor the SoC as it is running.

Per a recently released Synopsys white paper, RoT functions are typically made up of multiple hardware units. One ensures that the SoC’s boot-up sequence is secure, usually by using a dedicated ROM that can only be accessed by the RoT module. Another takes care of True Random Number Generation (TRNG), which ensures that ephemeral data used for secure connections is not easily predictable – the more random, the better. Yet another may be a secure real-time clock (RTC) that is used to manage time-based security policies.
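To see why the "not easily predictable" property matters, here is a minimal software analogue: Python's `secrets` module draws from the OS cryptographically secure RNG, which is only a stand-in for the dedicated hardware TRNG a RoT module would provide, but it illustrates the API contract (fresh, unpredictable ephemeral material on every call). The function name below is hypothetical, for illustration only.

```python
# Illustrative only: a software stand-in for a hardware TRNG.
# `secrets` pulls from the OS CSPRNG, suitable for security tokens.
import secrets

def make_ephemeral_nonce(n_bytes: int = 16) -> bytes:
    """Generate unpredictable ephemeral material, e.g. a session nonce."""
    return secrets.token_bytes(n_bytes)

nonce_a = make_ephemeral_nonce()
nonce_b = make_ephemeral_nonce()
assert nonce_a != nonce_b   # two draws colliding is astronomically unlikely
print(len(nonce_a))         # 16
```

The key contrast with a pseudo-random generator seeded from a predictable value (say, a timestamp) is that an attacker observing one nonce gains no leverage for guessing the next.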

The important part about RoT is that it must offer strong protection for all operational phases of the SoC, such as power off, power up, run time operations and communications with external entities. Secure monitoring during power up is a must but as mentioned, many teams are now looking to integrate security into all phases of the SoC including runtime operation. Some applications require proper authentication of certificates during runtime. That means that hardware RoT modules are needed to handle common cryptographic functions such as generation of RSA signatures and ECDSA.

Key management is done inside the hardware Root of Trust module. Only indirect access to these keys is allowed and managed by the RoT application layer. Assuming the privilege levels are correct, any importing of keys must be authenticated, and any exporting of keys must be wrapped to ensure continued protection of the secret material. An example of common key management applications would be a hardware secure module (HSM) using a public key cryptography standard (PKCS)#11 interface application to manage the policies, permissions, and handling of keys.
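The "indirect access" idea above can be sketched in a few lines. This is a toy model, not PKCS#11 and not production cryptography: callers receive only opaque handles, keyed operations happen inside the store, and raw key bytes are never returned. The class and method names are invented for illustration, and HMAC stands in for whatever signing primitive a real HSM would offer.

```python
# Toy sketch of handle-based key access (NOT PKCS#11, NOT production crypto):
# callers hold opaque handles; secret key bytes never leave the store.
import hashlib
import hmac
import secrets

class ToyKeyStore:
    def __init__(self):
        self._keys = {}  # handle -> secret key bytes, internal only

    def generate_key(self) -> int:
        """Create a key inside the store and return only its handle."""
        handle = len(self._keys) + 1
        self._keys[handle] = secrets.token_bytes(32)
        return handle

    def sign(self, handle: int, message: bytes) -> bytes:
        # The keyed operation runs inside the store; the key stays internal.
        return hmac.new(self._keys[handle], message, hashlib.sha256).digest()

    def verify(self, handle: int, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(self.sign(handle, message), tag)

store = ToyKeyStore()
h = store.generate_key()
tag = store.sign(h, b"firmware-update-v2")
assert store.verify(h, b"firmware-update-v2", tag)
assert not store.verify(h, b"tampered-payload", tag)
```

A real PKCS#11 interface adds what this sketch omits: per-key permissions, authenticated import, and wrapped (encrypted) export of key material.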

So, what is Synopsys doing in this space? They recently released a DesignWare Module called tRoot H5 Hardware Secure Module. The module is a RoT HSM that enables connected devices to securely and uniquely identify and authenticate themselves to create secure channels for remote device management. The tRoot module is designed to protect devices when they are powered down, at boot time and at runtime. The DesignWare HSM handles secure communications management with other devices and has logic blocks for a Secure Instruction Controller and a Secure Data Controller. These latter two blocks can be used to build further security into the rest of the SoC.

A block diagram for the tRoot HSM module is shown. Being DesignWare, the module can be synthesized and mapped to any standard process technology used for IoT SoCs. The module is highly scalable and flexible as it uses software running on the HSM CPU to define the security features that are to be supported by the SoC. The tRoot module also offers Key Management which uses the PKCS#11 interface to help manage both static and ephemeral keys and can also be used to manage secure in-field firmware updates for the IoT device.

To summarize, making IoT devices secure all starts with a hardware Root-of-Trust. More companies are addressing this space with dedicated RoT IP and Synopsys is now in the mix with their new DesignWare tRoot H5 HSM with Root-of-Trust.

You can learn more about Synopsys offerings in this space by visiting their DesignWare Security IP web page, viewing their webinars and downloading their “Understanding Root-of-Trust” white paper.
See Also:
Understanding Hardware Root of Trust
DesignWare Security IP web page
Webinar: Building Security into Your SoC with Hardware Secure Modules


When Invaluable Kills Business

When Invaluable Kills Business
by Frederic Leens on 12-11-2017 at 7:00 am

Productivity is notoriously hard to sell. I recently visited a company where the engineering team wanted to evaluate one of our FPGA debug and analysis products on an existing board. The board had an FPGA that we supported and all the required connectivity – it could be used ‘out of the box’. Our tool – Exostiv – involves inserting a debug IP into the FPGA. We offered to set up the tool with the engineering team, and within 60 minutes the board was instrumented and ready to use. Since I did the setup myself, there was no initial setup cost or learning-curve cost in this example.

It seemed like a good demonstration… After two hours of discussion and a walk-through of the product’s main features, I left the engineering team a unit with a license until the next day.

The next day, the engineering team told me that the tool was easy to use for anyone accustomed to JTAG-based logic analyzers such as Xilinx’s ChipScope. Basically, the flow was identical. Specific items like transceiver configuration required some additional understanding of the parameters, but overall, they said the setup and trial had run pretty smoothly.

Then, they told me that our tool had allowed them to find and correct a bug in an Ethernet IP that they were unaware of. With our tool, they had been able to cover a specific test scenario that had not been explored until then. They were about to send the board and the firmware to production and said that this new result was ‘invaluable’.

As for me, I was absolutely delighted. This trial had totally exceeded my expectations. I expected to receive a purchase order the same day.

I was wrong.

Actually, they were puzzled. They somehow came to the conclusion that our tool was priced too high because our model involves a software subscription of at least 12 months – and here they had resolved the bug so quickly… (I am still perplexed by this reasoning.)

Anyway, they decided to wait until they had a new bug or alert that could *justify* buying the tool. Practically, this means waiting until ‘someone’ (a customer?) complains after the system is released.

They had discovered an issue that they were not aware of – and before it had become painful to anyone…

And what about management? They were barely aware of it. Practically speaking, nothing harmful had happened at all – so nobody was even considering a purchase…

This – real – case is a little extreme, I agree – and I certainly do not know all the details of the decision process in this company.

However, we are seeing a real trend these days of engineering teams ‘waiting for pain’ and only then looking for a remedy.

The problem with this approach is that it can lead to a severe loss of money.

Going to production with unknown bugs has a cost that generally reduces to how much market share you’ll lose by arriving late on the market with a working product. In this case, the product seemed reasonably stable: the engineering team was perfectly qualified and had not seen anything wrong (though someone must have had second thoughts or doubts, since they showed some interest in testing our solution). This cost can be estimated as the value of the market left to the competitor while you delay your product launch – or fix the product.

Even if this cost is very large (losing a few percent of market share should represent a lot of money – or the market you address is not large enough), it can have no impact on the decision to invest in a new hardware or software EDA tool for debugging and analyzing electronic systems in the late stages.

My opinion is that it is usually too complicated for many companies to put a number on it. The value cannot be estimated accurately, and its consequences are usually unpredictable and too distant. We are constantly working at gathering such numbers… But who really believes the salesman who tries to frighten the customer into a sale? As a professional, I am personally inclined to appeal to the intelligence of my prospects, not their fear…

In my opinion, as an electronic engineer, we should all ask ourselves the following questions:

– Will there be bugs in your design? Absolutely. FPGAs are such complex beasts that this cannot be avoided. No wonder 40% of the total design time is spent on debug and verification.
– When do those bugs cost the most? When they ‘escape’ to production: the cost of stopping production and going back to design is gigantic.
– Who is responsible for avoiding this? The engineering team.
– Why would you reserve a budget for any tool? Because it pays back.

It pays back first from the saved engineering hours. It pays MUCH MORE back if the tools help avoid bugs escaping to production, even on FPGAs.
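The "it pays back" argument can be made concrete with a back-of-envelope calculation. Every figure below is a hypothetical assumption chosen for illustration, not data from this article or from Exostiv's pricing: a tool subscription cost, a loaded hourly engineering rate, debug hours saved, and a small probability of preventing one production-escaped bug.

```python
# Back-of-envelope ROI sketch. ALL numbers are illustrative assumptions.
tool_cost = 15_000            # assumed 12-month subscription price
hourly_rate = 100             # assumed loaded engineering cost per hour
hours_saved = 120             # assumed debug hours saved over the year
escaped_bug_cost = 500_000    # assumed cost of a bug reaching production
p_escape_avoided = 0.05       # assumed chance the tool prevents one escape

# Expected payback = direct labor savings + risk-weighted escape avoidance.
payback = hours_saved * hourly_rate + p_escape_avoided * escaped_bug_cost
print(payback)                # 37000.0
print(payback > tool_cost)    # True
```

Even with a deliberately conservative 5% escape-avoidance probability, the risk-weighted term dominates the labor savings – which is exactly the author's point that the biggest return comes from bugs that never reach production.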

Thank you for reading.
– Frederic


Top Cybersecurity Concerns Are WRONG

Top Cybersecurity Concerns Are WRONG
by Matthew Rosenquist on 12-09-2017 at 7:00 am

A recent survey by Varonis of 500 security professionals from the U.S., UK, and France highlights the top three cybersecurity concerns for 2018: data loss, data theft, and ransomware. Sadly, we are overlooking the bigger problems!


Missed the Target by a Mile
I think we are scrutinizing the small and known threats when we should be looking ahead at the significant risks coming our way. In some ways, it is like a child in the crosswalk looking down at their untied shoes, oblivious to the truck speeding toward the intersection. The top survey results are not surprising, just disappointing.

The Real Threats
Here is what the world should really be concerned about, when it comes to cybersecurity:

  • Data Integrity Compromises. These types of attacks can cause catastrophic impacts and losses, orders of magnitude greater than data breaches and common theft. By just modifying a few transactions or data records, thieves have been able to steal tens to hundreds of millions of dollars, researchers have taken control over the operation of cars and planes, and national infrastructure systems have been physically damaged.
  • Escape of Nation-State Attack Techniques and Code. Highly sophisticated and funded capabilities are normally reserved by nation states for precision attacks. But once the vulnerabilities, exploits, and tactics are used in the wild or leaked, others will have the opportunity to harvest, dissect, and duplicate functions for their purposes. Threats such as cyber criminals, anarchists, and other nation states will gladly wield these super weapons for their end-goals and to the severe detriment of others.
  • Exploits in IoT Devices That Pose a Risk to Life-Safety. Society is sliding toward a world where we place our lives and safety in the hands of intelligent machines. This is most relevant in the automotive, critical infrastructure, and healthcare industries. Although astonishingly wonderful when used for good, it comes with risks. Autonomous vehicles, electrical grids, and medical devices all play an important role in keeping people alive and healthy. When attacks undermine these functions and turn them malicious, people will be put in harm’s way.

Not a Flawed Survey
Sadly, I believe the survey was accurate. That means the professionals who answered are only seeing the near-term problems: the very ones they fear most. These issues are annoying, but they do not compare to what is just around the corner. The risks are as mismatched as the capabilities to prevent, detect, and respond to them. Consider that mature tools and defenses already exist for data loss, theft, and ransomware; they just need to be deployed, configured, and maintained to work against most attacks. Against the real threats, our defenses are far less capable. Granted, the participants may not have had many options to choose from, but the answers given may speak volumes about those who voted for these categories – namely, that they are likely not as prepared for these basic risks as they would like, and therefore fear what they know will come. With their focus there, they fail to see the long-term strategic picture. That is bad for everyone except the attackers. Without looking forward, like the child in the crosswalk, they are likely to be surprised when the truck hits.

We Must Do Better
We must think strategically if we want to be prepared and make a meaningful difference.

“Plan for what is difficult while it is easy, do what is great while it is small” – Sun Tzu

If we don’t perceive and understand the big problems ahead, we stand little chance of addressing them early.

Where do you stand? Is your attention only on the immediate and well-understood risks?

Interested in more? Follow me on your favorite social sites for insights and what is going on in cybersecurity: LinkedIn, Twitter (@Matt_Rosenquist), YouTube, Information Security Strategy blog, Medium, and Steemit


Free IoT SOC Books at REUSE 2017

Free IoT SOC Books at REUSE 2017
by Daniel Nenni on 12-08-2017 at 7:00 am

The second annual REUSE conference is next week, bringing the fabless semiconductor ecosystem together for a day of food, fun, and some very interesting presentations. It’s at the Santa Clara Convention Center this year, which is nice, and it is FREE! More importantly, there will be 30+ vendors in the exhibit hall, which opens at 9am for registration and breakfast. Exhibit hall conversations are the best for networking, absolutely.

You can find me at the Open Silicon booth signing free copies of our latest eBook, “Custom SoCs for IoT: Simplified – A Book Focusing on the Emergence of Custom Silicon for IoT Devices.” More than 500 people are expected to attend and we only have 100 books, so get there early if you can – it would be a pleasure to meet you!

In case you have not downloaded the PDF version, here is the book foreword by Taher Madraswala, President and CEO of Open-Silicon. It has been a pleasure to work with Taher and the Open Silicon people over the last two years on SemiWiki.com. Not including this one, we have written 30 blogs about Open Silicon that have earned close to one million views.

FOREWORD

Enablers of the Internet of Things (IoT) are improving the growth rate of the semiconductor industry in a significant way. Technology advancements in algorithms and processing units have made human-to-machine communication a reality. We are now entering an era where incorporating this capability in smart devices has the potential to simplify, enhance and even save lives. The IoT ecosystem is a symbiotic collaboration of hardware and software developers, building block (aka IP) providers, architects and visionaries who want to translate complex human functions (such as voice, vision, and thought) into simpler, machine-decipherable functions. At the core of this effort are the custom system-on-chip (SoC) solutions that enable designers across vertical markets to meet the performance, power, price, and time-to-market constraints of the quickly evolving IoT universe.

The semiconductor ecosystem has categorized the IoT space into three distinct segments: IoT cloud, IoT gateway, and IoT edge. This segmentation allows key players to devise strategies and offerings in areas of their expertise, which benefits customers with much-needed competition in each segment. Similar segmentation in the computation world helped create the “WinTel” (Microsoft + Intel) ecosystem, which ruled humanity for decades. Segmentation also helps address new and evolving standards, markets and customers in a rapid response manner. Custom silicon solutions have been deployed on the cloud side of the IoT for many years, specifically in networking, telecommunications, storage, and computing. However, until very recently, custom solutions were out of reach for IoT edge and IoT gateway segments due to cost or lengthy development schedules.

The IoT SoC platform approach has opened up many new use-cases for edge applications. Among them are sensor hubs for industrial applications, including outdoor, factory floor and in-room environmental control. IoT gateway applications are also experiencing rapid growth from the custom IoT SoC platform approach. For example, a well-designed IoT gateway SoC platform can address multiple smart city applications, such as waste management, transport, traffic, parking, lighting, and metering.

The custom IoT SoC platform approach can speed custom design, reduce risk and cost, and enable the critical differentiation that customers demand. Quality platform development requires extensive experience and knowledge. Platform creators must think like a system company as well as a startup. They need to consider end-use-cases in the vertical IoT markets while designing an easy-to-use platform. Such developers need to be responsible for the core block and its verification, which allows for the highly customized software drivers to be written and used as the core library.

The use of platforms not only opens the door to faster validation of new designs with very little risk but also allows the visionaries and architects to focus on their end goal, which is to bring product differentiation, more use-cases, more functionality and more ingenuity to the world of IoT.

“Custom SoCs for IoT: Simplified” is the first comprehensive book to explicitly define and detail the various IoT architectures. It covers the multitude of security factors, the power budgets associated with different IoT applications, and many more technical considerations that dictate the success of a custom IoT SoC platform, including but not limited to implementation methodologies, as well as hardware and software tradeoffs. This book also provides a detailed case study of a highly successful approach to custom SoC design for an IoT gateway SoC using Spec2Chip turnkey solutions.

It is important to mention that the implications of the Spec2Chip offerings outlined here extend far beyond IoT cloud, IoT edge, and IoT gateway devices. OEMs in other emerging technologies, such as deep learning, artificial intelligence, virtual reality, gaming and autonomous driving cars are benefitting from this Spec2Chip platform approach. Customers in these markets are collaborating with turnkey ASIC providers so they can scale back on, or even eliminate, the risks and loopholes of a lengthy chip design flow, and focus specifically on the core hardware differentiation IP and end application software that they bring to their innovation.

This book deliberately includes a great deal of data and references to real products. We want you to fully understand and appreciate the scope of the IoT ecosystem and the Spec2Chip platform approach that is fueling its expansion. The goal is for you to take this experience and knowledge and apply it to your personal or organizational design flow. Our sincere hope is that your ideas, combined with the proven design methodologies outlined here, will result in a technological advancement that contributes to the IoT universe and those who live within it.

You can register for the conference HERE. I hope to see you there!

Also read: 35 Semiconductor IP Companies Hold 2nd Annual Conference