
Low Power SRAM Compiler and Characterization Enable IoT Applications

Low Power SRAM Compiler and Characterization Enable IoT Applications
by Tom Simon on 02-22-2019 at 7:00 am

If you are designing an SoC for an IoT application and looking to minimize power consumption, there are many choices. More often than not, however, reducing SRAM power is a good place to start: SRAMs can consume up to 70% of an IC’s power. sureCore, a leading memory IP supplier, offers highly optimized SRAM instances for such applications. They went back to first principles to rethink how to reduce SRAM power. Building on this approach, they have developed memory compilers that deliver front-end and back-end views of the memory instances their users require. As part of this, accurate timing and power views are needed to complete designs incorporating these instances.

Designers utilizing SRAM instances rely on Liberty model files for characterized timing and power information so that system-level simulations are fully accurate. Generating this characterization data is computationally intensive, according to sureCore, but they make use of advanced tools and techniques to keep the task manageable. In my conversations with them, they discussed how they manage the characterization process for their EverOn 40ULP family of SRAM instances.
For each synchronous input, across a range of clock and data edge speeds (typically around 7 of each), they need to examine 49 (7×7) setup and hold values. On the power side, they need to look at static and dynamic power for operating modes such as read and write, as well as the full range of available power-down and sleep modes. As you can see, the problem grows combinatorially as PVT corners are added and each of the different configurations must be considered.
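A quick back-of-the-envelope count shows how fast this grows. Here is a minimal sketch using the 7×7 slew grid mentioned above; the pin and corner counts are hypothetical illustrations, not sureCore’s figures:

```python
# Rough count of characterization points, using the article's 7x7 slew
# grid; pin and corner counts below are hypothetical, for illustration.
clock_slews = 7      # clock edge rates per synchronous input
data_slews = 7       # data edge rates per synchronous input
sync_inputs = 40     # hypothetical number of synchronous pins
pvt_corners = 12     # hypothetical number of PVT corners

setup_hold = clock_slews * data_slews * 2    # 49 setup + 49 hold values
per_corner = sync_inputs * setup_hold        # 3,920 timing points
per_instance = per_corner * pvt_corners      # 47,040 points per instance

print(f"timing points per instance: {per_instance:,}")
# Multiply by the 276 instances in the family (and add the power modes)
# and the scale of the brute-force problem becomes clear.
```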

The EverOn™ family consists of 276 different SRAM instances that vary in aspect ratio, word count and word length. The family’s operating range spans 0.6V to 1.21V, creating a large PVT space for full characterization. A brute-force approach to simulation could easily require an unworkable 24 hours per instance. One aspect of their characterization solution is to take advantage of the most recent and advanced features of Liberate-MX from Cadence.

They explain how several features in Liberate-MX accelerate the process. First, Liberate-MX carefully prunes the netlist during timing estimation to include only the circuit elements necessary for an accurate value of the timing parameter being characterized. The second technique they employ is interpolation, used to provide power numbers over a wide range of memory configurations. sureCore has validated the interpolation results against full characterization runs on sample memory sizes and has seen excellent correlation.
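To make the interpolation idea concrete, here is a minimal sketch with invented grid values (none of these numbers are sureCore’s): power is characterized at a sparse grid of configurations, then bilinearly interpolated for the rest.

```python
import numpy as np

# Invented grid: dynamic read power (uW/MHz) characterized at a sparse
# set of (word count, word length) configurations.
words = np.array([256, 1024, 4096, 16384])
bits = np.array([8, 16, 32, 64])
power = np.array([
    [1.2, 1.9, 3.1, 5.4],
    [1.6, 2.5, 4.2, 7.3],
    [2.3, 3.6, 6.0, 10.5],
    [3.4, 5.3, 8.9, 15.8],
])

def interp_power(w, b):
    """Bilinear interpolation in log2(words) x log2(bits) space."""
    wi = np.interp(np.log2(w), np.log2(words), np.arange(len(words)))
    bi = np.interp(np.log2(b), np.log2(bits), np.arange(len(bits)))
    w0, b0 = int(wi), int(bi)
    w1, b1 = min(w0 + 1, len(words) - 1), min(b0 + 1, len(bits) - 1)
    fw, fb = wi - w0, bi - b0
    top = power[w0, b0] * (1 - fb) + power[w0, b1] * fb
    bot = power[w1, b0] * (1 - fb) + power[w1, b1] * fb
    return top * (1 - fw) + bot * fw

# Estimate a configuration that was never explicitly characterized.
print(f"estimated power for 2048x24: {interp_power(2048, 24):.2f} uW/MHz")
```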
The Cadence tool suite is used to optimize runtime while maintaining accuracy. Liberate-MX dispatches leaf-level pieces of the memory instance to Spectre XPS for detailed SPICE simulation results. With smaller process nodes the number of PVT corners has increased and Monte Carlo analysis is becoming necessary, so the number of simulation runs needed has exploded. They use the new Cadence Super Sweep technology, which shares simulation steps between different corners to accelerate simulation. sureCore has seen a 2x speed-up in runtime and an improvement in accuracy using these techniques.

However, a substantial part of the reduction in computational requirements for memory characterization comes from the flow that sureCore has developed, including specific parasitic reduction techniques that deliver optimized netlists as inputs to each step in the flow. They report dramatic reductions in netlist sizes for timing, static power and dynamic power.

sureCore also focuses on validation to ensure the characterization flow produces safe and accurate results. A scripted environment checks simulation results to ensure that the models perform properly, and further checks validate that the correct internal structures were included in the characterization runs. On top of this they run stressed simulations with Monte Carlo variation.

sureCore is filling a need for low-power SRAM IP, which is critical for a variety of edge devices in a plethora of applications. I found it fascinating to learn about their comprehensive characterization process. They have white papers on their website that offer interesting information on their technology. Without a flow like this, it would be a computational challenge to deliver high-quality, consistent IP deliverables in a reasonable timeframe.

You may want to check out more about this unique characterization methodology by clicking here or going to www.sure-core.com.


Accelerating Post-Silicon Debug and Test

Accelerating Post-Silicon Debug and Test
by Alex Tan on 02-22-2019 at 7:00 am

The growing complexity of SoC designs, attributable to the increased use of embedded IPs for additional design functionality, has imposed a pressing challenge on the post-silicon bring-up process, impacting overall product time-to-market.

According to data from Semico Research, more than 60% of design starts contain IP reuse, and the number is expected to increase due to the high silicon demand related to today’s emerging applications such as 5G wireless communication, autonomous driving and AI.

Based on data from Gartner, the staff-years of effort for designing a 7nm SoC is more than 5 times that of a 28nm SoC. The cost of testing the associated IPs is also on the rise. To mitigate this post-silicon validation and debug challenge, design teams have resorted to on-chip debug strategies, more automated techniques for post-silicon test generation, and pre-tapeout assertions for effective coverage and analysis. For example, on-chip buffers are deployed to improve observability and controllability of internal signals during trace-based debugging.

The traditional silicon bring-up and debug flow has been inherently inefficient, as it involves multiple translations of test-related collateral. In this scenario, a DFT engineer or designer initially uses a mix of document-based test descriptions and simulation-generated tests to hand off the testing directives to the test engineer, who then reformats them for the ATE of choice for silicon validation. The test results are then re-translated back into the tool format used by the DFT engineer for review. Such an iterative process is prone to delays, as access to testers may be interrupted while run data is being processed for assessment.

IJTAG (Internal Joint Test Action Group) or IEEE 1687 provides an access standard for embedded instruments and allows vector and procedural retargeting. It incorporates the mainstream IEEE 1149.1-x and the design-for-test standard IEEE 1500. Since its introduction in 2013, IJTAG adoption has been increasing. The JTAG TAP (Test Access Port), a five-pin, state-machine-based interface, not only controls the boundary-scan logic and tests, but is also used to access more embedded instruments and IPs.

IJTAG creates a plug-and-play environment for integrating and using the instrumentation portions of IP blocks, which include test, debug, and monitoring functions. The standard includes two languages: ICL (Instrument Connectivity Language), which captures hardware rules for the instrumentation interfaces and the connectivity between them; and a Tcl-based PDL (Procedural Description Language), which defines operations to be applied to the individual IP blocks. While ICL is an abstraction of the design description needed to scan reads and writes from and to the instrument, PDL defines the syntax and semantics of these operations. PDL may be written with respect to the instrument’s I/Os and is retargetable: retargeting translates the operations from the instrument, through the hierarchical logic described in ICL, up to the top level of the design.
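To illustrate what retargeting accomplishes, here is a deliberately simplified toy model in Python. Real PDL is Tcl-based and a real IJTAG network is reconfigured dynamically through Segment Insertion Bits (SIBs), which this sketch ignores; the network and instrument names are invented.

```python
# Toy model of IJTAG retargeting (illustrative only; real PDL is Tcl-based
# and real ICL describes hardware, not Python dictionaries).
# An instrument-level write is translated, via the network description,
# into the bit positions it occupies on the top-level scan chain.

# Hypothetical network: each instrument owns a slice of one serial register.
network = {
    "mbist_ctrl": {"offset": 0,  "width": 8},
    "dac_cfg":    {"offset": 8,  "width": 12},
    "adc_cfg":    {"offset": 20, "width": 12},
}
CHAIN_LEN = 32

def retarget_write(instrument, value):
    """Map an instrument-level write (like PDL iWrite) to a full-chain vector."""
    seg = network[instrument]
    chain = ["x"] * CHAIN_LEN                 # don't-care everywhere else
    bits = format(value, f"0{seg['width']}b")
    for i, bit in enumerate(bits):
        chain[seg["offset"] + i] = bit
    return "".join(chain)

# The instrument-level intent "iWrite dac_cfg 0xABC; iApply" becomes,
# in this toy, a single top-level scan vector:
print(retarget_write("dac_cfg", 0xABC))
```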

Even though IJTAG streamlines IP integration during the design phase, third-party IP evaluation and debug issues frequently persist during silicon bring-up, affecting the production yield ramp. To address this, Mentor’s Tessent SiliconInsight with ATE-Connect™ technology pairs with Teradyne’s PortBridge for UltraFLEX to let DFT engineers directly control and observe IPs in the SoC-under-test on the ATE. This solution resolves several key problems of IJTAG-based IP evaluation and debug: it delivers a protocol-based flow (using IJTAG commands) instead of a pattern-based flow, and it uses the Tcl-based Tessent shell interface to access the ATE remotely over a TCP connection. Another Tessent tool, SimDUT, allows users to debug and validate the PDL and related Tcl procedures.

Figure 4 shows an example of how the environment is utilized. The MBIST engines at the upper right are accessed and controlled through IJTAG. Similarly, two mixed-signal IP blocks, a DAC and an ADC, can be debugged through the same approach. Tessent SiliconInsight addresses both the test engineer’s need for fast, reliable tests that optimize yield and minimize test cost, and the DFT engineer’s interest in confirming functionality and extracting critical metrics.
Initially, the test engineer configures the ATE and performs the proper setup and biasing of the DAC/ADC blocks. Once setup is complete, the test engineer passes control to the DFT engineer to run the previously designed tests or do any needed interactive debug. Once both IP blocks are verified, the external ATE resources can optionally be replaced with a less costly loopback connection mode, enabling subsequent what-if testing, such as applying different adjustments to an adjoining block like a PLL and reassessing system-level functionality. The DFT and test engineers’ viewpoints stay aligned even when they are not in the same geographical location, and ATE-Connect can also target a bench setup with the debugged tests, further streamlining the three environments (design, test, bench) to accelerate time-to-market.

The takeaway is that Tessent SiliconInsight with ATE-Connect technology delivers efficiencies in silicon bring-up, post-silicon debug and IP evaluation. The simplified IJTAG-based standard also gives DFT and test engineers the option to scale their IP testing.

For more info on Tessent based test flow, check HERE.


The RISC-V Revolution is Going Global!

The RISC-V Revolution is Going Global!
by Daniel Nenni on 02-21-2019 at 12:00 pm

This month, you can join us in Austin, Mountain View or Boston
In 2018, we hosted several RISC-V technology symposia in India, China and Israel. These events were very successful in fueling the growing momentum surrounding the RISC-V ISA in these countries. It turns out that these events were just the tip of the iceberg. In 2019, SiFive is greatly expanding its reach by hosting over 50 SiFive Tech Symposia in cities throughout the world. The first leg of the global tour begins in the USA. In collaboration with our co-hosts and partner companies, we aim to foster deeper education, collaboration and engagement within the open-source community.

What’s Happening in Austin?

With Microchip as our co-host, we have created an exciting lineup of speakers, tutorials and demonstrations for the event in Austin, TX on February 21. Ted Speers, a member of the board of directors for the RISC-V Foundation, will present on the history and current state of the union of the RISC-V ISA. Naveed Sherwani, CEO of SiFive, will deliver a keynote presentation about the semiconductor industry and how RISC-V is leading a design revolution. Another keynote presentation will be given by Tim Morin, director of product line marketing for Microchip, who will present on RISC-V based SoC FPGAs. Esha Choukse, a PhD candidate in computer architecture at UT Austin, will present on compression in deep learning for AI applications. We will also have presentations by several other leaders in the RISC-V ecosystem, including NXP and Hex Five Security. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information on the Austin event, please visit https://sifivetechsymposium.com/agenda-austin/

What’s Happening in Mountain View?

This event will take place on February 26 and will feature several presentations by key industry veterans and luminaries. Martin Fink, CEO of the RISC-V Foundation and CTO at Western Digital, will deliver a keynote presentation on his vision for the RISC-V Foundation and his plans for the next several years. Naveed Sherwani, CEO of SiFive, will present on the semiconductor industry and how RISC-V is leading a design revolution. Another highlight at this event will be a keynote presentation by Darrin Jones, the senior director of technology development for cloud hardware infrastructure at Microsoft, who will present on SoC design in the cloud. Krste Asanovic, chairman of the RISC-V Foundation and co-founder and chief architect at SiFive, will also deliver a keynote presentation on customizable RISC-V AI SoC platforms. Other highlights include a presentation by Megan Wachs, VP of engineering at SiFive, who will talk about RISC-V development platforms. There will also be presentations by the CEOs of Imperas, Mobiveil and DinolusAI. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, please visit https://sifivetechsymposium.com/agenda-mountain-view/

What’s Happening in Boston?
With Bluespec as our co-host, this event on February 28 will include a powerful lineup of speakers. Rishiyur Nikhil, ISA Formal Spec Task Group Chair at the RISC-V Foundation, will present on the history and current state of the union of the RISC-V ISA, and will also deliver a keynote about RISC-V verification and design from his perspective as CTO at Bluespec. Krste Asanovic, chairman of the RISC-V Foundation and co-founder and chief architect at SiFive, will deliver a keynote presentation on RISC-V and its role in leading a design revolution. Adam Chlipala, associate professor of computer science at MIT, will present on the state of RISC-V academic research at MIT CSAIL. There will also be a presentation by Greg Sullivan, co-founder and chief scientist at Dover Microsystems. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, please visit https://sifivetechsymposium.com/agenda-boston/
We look forward to seeing you in Austin, Mountain View and Boston!

Swamy Irrinki,
Senior Director of Marketing at SiFive
— February 20, 2019


CEO Interview: Adnan Hamid of Breker Systems

CEO Interview: Adnan Hamid of Breker Systems
by Daniel Nenni on 02-21-2019 at 7:00 am

Breker Verification Systems solves challenges across the functional verification process for large, complex semiconductors. This includes streamlining UVM-based testbenches for IP verification, synchronizing software and hardware tests for large system-on-chips (SoCs), and simplifying test sets for hardware emulation and post-fabricated silicon. The Breker solutions are designed to layer into existing environments.

Adnan Hamid is the founder and CEO of Breker Verification Systems and the inventor of its core technology. Under his leadership, Breker has become a market leader in functional verification technologies. Prior to Breker, he managed AMD’s System Logic Division, where he led its verification team to create the first test-case generator providing 100% coverage for an x86-class microprocessor. In addition, Hamid spent several years at Cadence Design Systems, where he served as the subject-matter expert in system-level verification, developing solutions for Texas Instruments, Siemens/Infineon, Motorola/Freescale, and General Motors. He holds 12 patents in test-case generation and synthesis. He received Bachelor of Science degrees in Electrical Engineering and Computer Science from Princeton University, and an MBA from the University of Texas at Austin.

What is your background?
I knew at a young age that I wanted to be in the business of building computers. While studying Electrical Engineering and Computer Science at Princeton University, I worked an on-campus job in artificial intelligence at the psych lab. It opened a whole new world of innovation for me and convinced me that wherever possible, we must teach computers to do our work for us. I stumbled upon functional verification early in my career and led a team at AMD responsible for verifying that the AMD x86 chips were functionally correct. Given our time pressures, I invented an AI problem-solver-based test generator, which was a huge success for our stellar team in meeting our deadlines and providing 100% coverage. I moved on to verification methodology and system-level jobs, and realized I could envision a better solution to the disparate nature of verification across the full system flow.

What made you start Breker?
“When there’s a gold rush, sell pick-axes” was sage advice shared by my investment manager. This, coupled with the increasing cost of verification and my career success, encouraged me to take the risk of starting Breker, which pioneered a graph-based approach to automating C-test generation across different platforms. This represented a big improvement in verification, and was an opportunity I simply could not ignore.

Where did the name “Breker” come from?
On my first day of my Executive MBA at UT-Austin, we were asked to share a blurb about who we were and what we do. Never known to do the expected, when it was my turn, I said, “I break things for a living.” It livened up a class of middle-management folks, and earned me the nickname of “The Breaker.”

Toward the end of my course, my team participated in a business case competition and pitched my idea for a system-level verification product. When searching for a name for the project, we decided on Breker, which sounded bold while capturing what we do: break things.

You and your wife founded Breker. Is she involved? What about Breker’s executives and board members?
My wife is a fellow MBA and built her career in investment banking. She co-founded Breker with me and has been a part of this journey from the beginning, where we complement each other’s strengths across the functional areas required to build a thriving business. She serves as Chief Financial Officer.

We have a fantastic, motivated team at Breker who are all excellent at what they do. Industry veterans with some of the most creative minds in the space of verification have naturally gravitated toward Breker, which pioneered the field of Portable Stimulus. Seasoned board members like Jim Hogan and Michel Courtoy believe in the vision for Portable Stimulus and see the far-reaching benefits it can bring to users.

How long ago was Breker founded? Where is its corporate headquarters located? How many employees does Breker have?
Breker was founded in 2003 and we started selling portable stimulus solutions a few years later. Since then, our product portfolio has grown significantly. It now includes test suite synthesis flows whose output is optimized for universal verification methodology (UVM) block verification, Software-Driven Verification (SDV) and Post-Silicon environments, providing a complete verification solution that generates stimulus, checks and coverage. Privately funded and headquartered in San Jose, Calif., Breker has global presence and a core team of 25 people.

What is the Breker vision and how are you going to change verification?
Since the beginning of HDL-based verification more than 30 years ago, the industry has dreamed of “specification-driven verification,” where the original product specification drives the entire verification process. Breker is the first company to truly realize this vision and, now that Portable Stimulus is an Accellera standard, the industry is accepting this notion. Starting with an easily understood spec and automating the entire process of stimulus, checks, coverage and debug for the most complex verification problems is the path Breker is on.

What keeps your users up at night?
Verification is absolutely at the sharp end of semiconductor development. It continues to take 70% of the overall process, and represents the most risk if it goes wrong. Verification managers are most worried about a bug escaping from this process into the final chip, causing a re-spin with the associated schedule slip and cost. To avoid this, they drive as comprehensive a process as possible, with high coverage and quality testing. They are always time and resource limited. It is always interesting, though, that if they are able to save some time, they will put that back into extra testing rather than shrinking the schedule.

What do your top users find so useful about Breker and your Trek portfolio?
Given the complete-solution focus, there are two areas of interest. The first is what PSS can do for their individual flows. Depending on their area of interest, they enjoy eliminating the more painful activities around UVM test authoring and corner-case tracking, or the complex activities in a Software-Driven Verification (SDV) flow, often on an emulator, or in post-silicon validation, where they use their verification test suite for the first time while gaining visibility into the final silicon.

The second is the more global perspective where managers think about portability between the verification activities, and the reuse of the tests across their teams and future projects. What is nice about the Breker approach is that we can satisfy both short-term requirements and longer-term perspective.

What is special about Breker that allows you to differentiate against the big-three competition?
Breker has been at this for 12 years. In that time, we have worked with many of the world’s leading semiconductor verification teams. These engineers have demanded a whole-solution approach, driving us to introduce practical features that save them time and energy.

For example, others will generate some high-level tests for Software Driven Verification. To mount these software tests on a processor requires extra work to make up for the lack of OS services, such as memory allocation and handling register access. We have automated this layer to eliminate this issue. The same is true of our UVM flow and post silicon. We also have advantages in the modeling area, use of the tools for debug, and coverage and profiling.

Has Accellera’s Portable Stimulus Standard helped move the chip design verification community closer to adopting Portable Stimulus tools?
Oh yes, clearly. For a number of years, we have been working with power users who were unconcerned with developing models using the Breker proprietary language based on C++. Indeed, our original language is more advanced than the standard. It has procedural as well as declarative constructs, and our power users are still actively employing it. However, to allow mainstream users to enjoy the benefits of these tools, they had to be assured that models they develop could be supported by multiple vendors, and this is where the standard has proven useful. We have seen a significant uptick in our business from the mainstream market as a result of its release last June. We fully expect it to overtake other verification languages over the next few years as it matures.

What tips can you give to entrepreneurs who are just starting out?
Start the journey if you have a good understanding of the end-user market and feel that you have something of value for them. In industries like ours where barriers of entry are high, innovative, compelling solutions are keys to success. A few other mantras we live by: take a user-centric approach to building solutions, treat your team like your family, and go out there and have fun. There will be many days where the end of the road is not visible. Be patient and believe in your journey. Eventually the world will converge.

What’s the status of Breker today, and what’s next for the company?
Breker is doing well. We witnessed dramatic growth in our business over the last two or three years, and hired the best and brightest as our team grows to meet this demand. Apart from all the general verification flows, we are seeing more specialized uses for the technology, an interesting development. For example, we are working on ISO 26262 automotive flows and find that requirements for this segment are easily specified to allow a full coverage test against them, a significant benefit. We offer TrekApps for ARMv8 integration testing, and now see interest in a similar platform for RISC-V with enhancements to allow for instruction set extensions. Security is another area where our tools can play an expanded role, providing powerful all-inclusive tests that attempt to find security holes. The list is endless and, right now, the verification world appears to be our oyster!

Editor’s Note: Breker will showcase the full complement of Trek5’s feature-rich set of expanded capabilities that go beyond Portable Stimulus test suite generation in Booth #701 during DVCon US next week (February 25-27, DoubleTree Hotel, San Jose, Calif.). It will demonstrate practical applications of portable stimulus, with examples of how PSS can be applied to accelerate UVM coding for complex blocks and SDV for large SoCs.

Applications for Breker’s Trek5 will be profiled throughout DVCon.

Also Read:

CEO Interview: Cristian Amitroaie of AMIQ EDA

CEO Interview: Jason Oberg of Tortuga Logic

CEO Interview: YJ Su of Anaglobe


Silvaco on Simulation of Reliability and NBTI Aging in MOS Microelectronics

Silvaco on Simulation of Reliability and NBTI Aging in MOS Microelectronics
by Daniel Nenni on 02-20-2019 at 12:00 pm

Silvaco was founded the same year I entered the EDA industry (1984), fresh from university. I first met them at the Design Automation Conference in Albuquerque, New Mexico, and have been an active observer of their growth ever since. In fact, Silvaco is now the largest privately held EDA company and is growing at a rapid pace. In 2014 Silvaco hired Dave Dutton as CEO, bringing us Silvaco 2.0, and the rest is history in the making, absolutely.

Today, Silvaco delivers a full TCAD-to-signoff flow for vertical markets including displays, power electronics, optical devices, radiation and soft-error reliability, analog and HSIO design, library and memory design, and advanced CMOS process and IP development. It all begins with TCAD, which brings us to the topic at hand, their upcoming webinar:

Simulation of Reliability and NBTI Aging in MOS Microelectronics

Abstract:
The continuous scaling of semiconductor devices is a driving force in the field of microelectronics. However, this miniaturization goes hand in hand with various undesired degradation effects, which make a prediction of the MOS device operation less reliable. In particular, the Negative Bias Temperature Instability (NBTI) has attracted much industrial attention due to its severe impact on the device performance. In order to understand, predict, and reduce these degradation effects, TCAD simulations are of high importance.

This webinar will cover several of the most prominent reliability models (available in Silvaco’s TCAD tools). We will review their basic features and key parameters and discuss their correct calibration and comparison to experimental results.
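As a concrete example of what such a model looks like, a commonly used empirical power-law form for the NBTI threshold-voltage shift (a generic textbook form, not necessarily the exact formulation in Silvaco’s tools) is

\[
\Delta V_{th}(t) = A \, \exp\!\left(-\frac{E_a}{kT}\right) E_{ox}^{\,\gamma} \, t^{\,n}
\]

where \(E_a\) is an activation energy, \(E_{ox}\) is the oxide electric field, and the time exponent \(n\) is typically around 0.1–0.25 for reaction-diffusion-type models. Calibration then amounts to fitting \(A\), \(E_a\), \(\gamma\) and \(n\) to accelerated stress measurements.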

What attendees will learn:
Capabilities of Silvaco’s TCAD solutions for reliability issues
Presentation of the most important models
Discussion of their basic features and key parameters
How to perform TCAD simulations
Correct setup of a reliability simulation
Correct comparison to the experimental data

Presenter:

Dr. Wolfgang Goes is a development engineer in Silvaco’s TCAD Division. Since joining Silvaco in 2016, he has worked primarily on Victory Device but also on Atlas and is responsible for trapping and reliability models. Dr. Goes holds an MSc in Technical Physics and a PhD in Electrical Engineering, both from the TU Vienna. He continued working there as a post-doc at the Institute for Microelectronics focusing on reliability issues in microelectronic devices.

Who should attend:
Academics, engineers, and management interested in investigations of degradation effects in semiconductor devices.

When: February 28, 2019
Where: Online
Time: 10:00am – 11:00am (PST)
Language: English

About Us:
Silvaco, Inc. is a leading EDA provider of software tools used for process and device development and for analog/mixed-signal, power IC and memory design. The portfolio also includes tools for power integrity sign off, reduction of extracted netlist, variation analysis and also production-proven intellectual property (IP) cores. Silvaco delivers a full TCAD-to-Signoff flow for vertical markets including: displays, power electronics, optical devices, radiation & soft error reliability, analog and HSIO design, library and memory design, advanced CMOS process and IP development. The company is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Japan and Asia. For over 30 years, Silvaco has enabled its customers to bring superior products to market in the shortest time with reduced cost. Semiconductor fabs and design houses from around the globe have relied on Silvaco’s expertise to help develop the “technology behind the chip”. Silvaco’s mission is to help our customers accelerate the pace of technological innovation and their time to market while reducing their costs in developing the next-generation chips. We strive to understand our customers’ challenges so as to tailor the innovative products, services and support they need to succeed in their technology development and productivity goals. www.silvaco.com


The Best Way to Keep a Secret

The Best Way to Keep a Secret
by Bernard Murphy on 02-20-2019 at 7:00 am

Everyone knows that the best way to keep a secret is never to share it with anyone. That works fine for your own most personal secrets, but it’s not very useful when you have to share with at least one other party, as in cyber-security. One such need, of enormous importance in the IoT, is authentication: are you who you claim to be? Seas of (digital) ink have been spilled over the manifest insecurity of the IoT; a Jeep, a Boeing 757, kids’ watches, pacemakers and Philips Hue lights have all been subject to published white-hat attacks. Some attacks have not been so benevolent, such as hacks on the Ukrainian power grid and recent attempts on US infrastructure.

Authentication is the first wall in preventing (or at least limiting) such attacks. If you, an IoT device, want to communicate with me you must prove that you are allowed to communicate with me. There are a variety of ways to do this, all dependent on some kind of key stored on the IoT device – the secret. Setting up this key is a part of provisioning, the first-time setup of the device. The key might be created during chip manufacture (in ROM for example) or it might be generated in the cloud, communicated to the device and stored in flash memory, or it might be generated by the device itself using a true random number generator (TRNG), then stored in flash. Whichever method is used, the device can share its key (suitably encrypted) with the cloud and the cloud can compare against its stored inventory of allowed keys to verify and approve.
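As an illustration of how a provisioned key is typically used, here is a minimal sketch of a symmetric challenge-response exchange. The article does not name a specific protocol, so HMAC-SHA256 here is just one common choice:

```python
import hmac, hashlib, secrets

# Minimal sketch of key-based device authentication, assuming a symmetric
# challenge-response scheme (one of several possibilities; details vary
# by deployment).

device_key = secrets.token_bytes(32)      # provisioned key, stored on device
cloud_keys = {"device-42": device_key}    # cloud's inventory of allowed keys

# Cloud issues a fresh random challenge; the device answers with an HMAC.
challenge = secrets.token_bytes(16)
response = hmac.new(device_key, challenge, hashlib.sha256).digest()

# Cloud recomputes the expected answer and compares in constant time.
expected = hmac.new(cloud_keys["device-42"], challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))
```

Note that every variant of this scheme still depends on a key sitting in device storage, which is exactly the weakness discussed next.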

All of these methods work well in defending against “conventional” threats, primarily software-based attacks. But the really bad actors – states and criminal enterprises – are already capable of doing much more, particularly in hacking fabrication and/or the hardware. These actors can plant spies in fabs, or phish for key databases, or they can use focused ion-beam equipment to read stored keys from flash memory. All of this requires serious organization and/or special equipment but is well within the capabilities of a government or a major criminal enterprise.

A better approach would be to have a secret that is generated on the device but is never stored and never exchanged. At first sight this looks like a useless secret, but stay with me. First, you want to generate the key on the device (sounds like a TRNG) but not store it (not like a TRNG), so it has to be generated on the fly yet reliably consistent. That’s what physically unclonable functions (PUFs) do: something (not storage) from which you can read a random string that is nevertheless consistent each time you read it. A good example is the power-up state of an on-chip SRAM. Small manufacturing variations ensure that each bit in the SRAM initializes to 0 or 1 with a distribution unique to that device. There’s your secret key, at least for a decent-size SRAM (in principle, for N bits of SRAM there is only a 1 in 2^N chance that it will match the key of another device).

At least that’s the theory; reality is always more complicated. There’s noise: not all bits fall reliably into a 0 or 1 state; some can go either way on each power-up, depending on temperature, voltage and other factors. Then there’s aging. SRAMs get older, just as we do, and even while an SRAM continues to function, the subtle factors on which the PUF depends can change noticeably. A reliable SRAM PUF has to factor out all of these sources of indeterminacy and degradation without compromising the uniqueness and stability of the generated device key.
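To see how noise can be corrected without ever storing the key itself, here is a toy “helper data” sketch using a simple repetition code. Commercial schemes (including Intrinsic ID’s) use far stronger error correction and proprietary conditioning; this only shows the principle:

```python
import hashlib, secrets

R = 5  # repetition factor: each key bit is backed by 5 SRAM cells

def enroll(sram_bits, key_bits):
    """One-time enrollment: helper data = SRAM bits XOR repeated key bits."""
    return [sram_bits[i * R + j] ^ kb
            for i, kb in enumerate(key_bits) for j in range(R)]

def reconstruct(noisy_sram_bits, helper):
    """Later power-ups: majority vote recovers each key bit despite noise."""
    key = []
    for i in range(len(helper) // R):
        votes = sum(noisy_sram_bits[i * R + j] ^ helper[i * R + j]
                    for j in range(R))
        key.append(1 if votes > R // 2 else 0)
    return key

# Demo with invented data: a 4-bit key from 20 cells, 2 bits flipped by noise.
sram = [secrets.randbelow(2) for _ in range(20)]
key = [1, 0, 1, 1]
helper = enroll(sram, key)              # helper data goes to ordinary storage
noisy = list(sram); noisy[3] ^= 1; noisy[17] ^= 1
assert reconstruct(noisy, helper) == key
print(hashlib.sha256(bytes(key)).hexdigest()[:16])  # derive the usable key
```

The point is that only the helper data is ever stored; the key itself exists only transiently, reconstructed from the SRAM at each power-up.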

OK, we have our secret, but how do we avoid sharing it, even with our authentication partner? I’m aware of one clever mathematical technique, the zero-knowledge proof (ZKP), which allows you to share a piece of (non-secret) information with a partner; on subsequent requests from the partner, you can prove to them that you know the correct secret without ever having to share the secret itself.

Intrinsic ID has developed very interesting technology in this class, based on research deriving from Leuven. The details are somewhat different from what I outlined above. They first put a lot of work into managing noise and aging, using proprietary algorithms they have tested across many foundries and processes. The shared data, which they call an Activation Code, appears to be based on a philosophy similar to ZKP, though I can’t speak to how close it is. What is important is that they have run a lot of very detailed analyses to demonstrate the stability and reliability of their approach.

The solution is offered in software-only and RTL options. The software-based solution is particularly interesting to many customers because it allows them to retrofit a higher level of security with no change to the hardware. One interesting real-world case mentioned in a webinar (see below) involved anti-counterfeiting. A company making a radio, along with batteries to work with that radio, found it was losing revenue and running into reliability and reputation problems because customers were replacing original batteries with grey-market copies. By adding authentication from the radio to the batteries, it was able to detect and disallow the use of counterfeits.

This isn’t a startup with one customer. They have multiple customers, including Intel, NXP, Renesas and Samsung at the chip/module level, and they are deployed in multiple IoT, secure transaction (e.g. payment) and government/defense applications. You can check out a more comprehensive description in this webinar.


Physical Design for Secure Split Manufacturing of ICs

Physical Design for Secure Split Manufacturing of ICs
by Daniel Nenni on 02-19-2019 at 12:00 pm

Semiconductors are not only critical to modern life, they are critical to national security. Now that leading-edge semiconductor foundries have left the United States, one of the more pressing challenges is secure semiconductor manufacturing. This applies to all countries, of course, so let’s take a look at the international wafer capacity landscape.

One thing we should note here is government support for the semiconductor industry. Taiwan, Korea, China and even Japan provide significant government support for semiconductor manufacturing; Europe and the United States, not so much. In fact, here in the United States semiconductor manufacturing is going the way of the dinosaur. My guess is that Asia, and China specifically, will continue to chip away at global wafer capacity, leaving the US figuratively fabless, absolutely.

All is not lost, of course, because in my experience semiconductor professionals are the smartest people in the world and will not be defeated by political ignorance. One example is physical design for secure split manufacturing of ICs. This is not new, but now the NSF is getting involved, so it is getting real:

Award Abstract #1822840
STARSS: Small: Collaborative: Physical Design for Secure Split Manufacturing of ICs

Abstract—The trend of outsourcing semiconductor manufacturing to overseas foundries has introduced several security vulnerabilities — reverse engineering, malicious circuit insertion, counterfeiting, and intellectual property piracy — making the semiconductor industry lose billions of dollars. Split manufacturing of integrated circuits reduces vulnerabilities introduced by an untrusted foundry by manufacturing only some of the layers at an untrusted high-end foundry and the remaining layers at a trusted low-end foundry. An attacker in the untrusted foundry has access only to an incomplete design, and therefore cannot easily pirate or insert Trojans into it. However, split manufacturing alone is not sufficiently secure, and naïve security enhancement techniques incur tremendous power, area, and delay overhead. The goal of this research is to develop new physical-design techniques that can ensure security through split manufacturing and simultaneously minimize the overhead on performance, power and area of semiconductor products.

This research lays the foundations for a comprehensive set of physical design tools for security. Its expected outcomes are: 1) Systematic techniques for modeling attacks that recover the missing parts of the design from the information available to the attacker; 2) Security metrics to assess the strength of integrated circuit designs by measuring the difficulty for an attacker to reverse engineer the design in the context of split manufacturing; 3) Active defenses through physical designs techniques such as cell layout, placement perturbation and rerouting designs to increase security; 4) Techniques to reduce the overhead of secure split manufacturing and make the security enhancement seamlessly compatible with existing design flows.

And here are a couple of background papers:

Building trusted ICs using split fabrication
Abstract— Due to escalating manufacturing costs the latest and most advanced semiconductor technologies are often available at off-shore foundries. Utilizing these facilities significantly limits the trustworthiness of the corresponding integrated circuits for mission critical applications. We address this challenge of cost-effective and trustworthy CMOS manufacturing for advanced technologies using split fabrication. Split fabrication, the process of splitting an IC into an untrusted and trusted component, enables the designer to exploit the most advanced semiconductor manufacturing capabilities available offshore without disclosing critical IP or system design intent. We show that split fabrication after the Metal1 layer is secure and has negligible performance and area overhead compared to complete IC manufacturing in the off-shore foundry. Measurements from split fabricated 130nm testchips demonstrate the feasibility and efficacy of the proposed approach.

Published in: 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST)

Is Split Manufacturing Secure?
Abstract—Split manufacturing of integrated circuits (IC) is being investigated as a way to simultaneously alleviate the cost of owning a trusted foundry and eliminate the security risks associated with outsourcing IC fabrication. In split manufacturing, a design house (with a low-end, in-house, trusted foundry) fabricates the Front End Of Line (FEOL) layers (transistors and lower metal layers) in advanced technology nodes at an untrusted high-end foundry. The Back End Of Line (BEOL) layers (higher metal layers) are then fabricated at the design house’s trusted low-end foundry. Split manufacturing is considered secure (prevents reverse engineering and IC piracy) as it hides the BEOL connections from an attacker in the FEOL foundry. We show that an attacker in the FEOL foundry can exploit the heuristics used in typical floorplanning, placement, and routing tools to bypass the security afforded by straightforward split manufacturing. We developed an attack where an attacker in the FEOL foundry can connect 96% of the missing BEOL connections correctly. To overcome this security vulnerability in split manufacturing, we developed a fault analysis-based defense. This defense improves the security of split manufacturing by deceiving the FEOL attacker into making wrong connections.

Published in: 2013 Design, Automation & Test in Europe Conference & Exhibition (DATE)
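The attack described above exploits the fact that placement tools tend to put connected pins close together, so missing BEOL connections can be guessed from FEOL pin locations. A naive version of this proximity attack is easy to sketch (an illustration of the idea, not the authors’ algorithm; coordinates are invented):

```python
import math

# Naive proximity attack sketch: guess each missing BEOL connection by
# greedy nearest-neighbor matching of dangling FEOL pin locations.
# Pin names and (x, y) coordinates are invented for illustration.
outputs = {"o1": (0.0, 0.0), "o2": (5.0, 5.0), "o3": (9.0, 1.0)}
inputs  = {"i1": (0.4, 0.3), "i2": (8.6, 1.2), "i3": (5.2, 4.7)}

def proximity_attack(outs, ins):
    guesses = {}
    remaining = dict(ins)
    for o, loc in outs.items():
        best = min(remaining, key=lambda i: math.dist(loc, remaining[i]))
        guesses[o] = best
        del remaining[best]
    return guesses

print(proximity_attack(outputs, inputs))
# {'o1': 'i1', 'o2': 'i3', 'o3': 'i2'} -- the defense proposed in the
# paper perturbs placement so such nearest-neighbor guesses become wrong.
```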

Bottom line:
The need for a trusted foundry source at advanced process nodes will require unique fabrication methods, utilizing split manufacturing. There are numerous technical challenges:

  • defining the appropriate FEOL and BEOL layers for the split
  • coordinating the lithography alignment and depth-of-field requirements between foundries at the split
  • ensuring compatibility of the passivation and interconnect material(s) between foundry processes
  • defining incoming wafer inspection procedures at the BEOL, both metrology and electrical characteristics

These fabrication challenges are daunting, but solvable. A greater issue is how to ensure the FEOL data provided to the untrusted foundry cannot be easily reverse-engineered, and that the subsequent FEOL processing does not incorporate any unwanted, latent logical or electrical behavior. The detection of a potential “Trojan” inserted into a design to cause erroneous system behavior in the field is paramount.

As mission-critical electronics advances to new process nodes, it will be very interesting to see how both the design and fabrication aspects of split manufacturing evolve.


Top 3 Reasons Why Design IP Is Business Friendly

Top 3 Reasons Why Design IP Is Business Friendly
by Eric Esteve on 02-19-2019 at 7:00 am

The Design IP market is doing well, growing at a higher CAGR than the semiconductor market it serves; in fact, 10% higher over 2007-2017! You may wonder why the IP market is so business friendly. I will try to answer by proposing the top 3 reasons for this behavior. To name them: the IP business is recurrent, external IP sourcing is increasing its penetration, and product ASPs are constantly growing for most IP. I will develop and explain these statements (which are not necessarily obvious) and mention the mandatory conditions under which vendors can benefit from these dynamics.

Design IP, like the EDA business, is supposed to be recurrent: you develop the product once and sell it many times. This is completely true for the EDA business, but only almost true for the IP business. Almost, because when you consider the complex IP generating high ASPs (a few $100K to several $million), it’s very likely that the customer will request “some” level of customization. For a hard-wired IP (like a PHY), it may be a different layout option (North-South poly orientation instead of East-West), or a modification to fit a different metal-layer count.

When dealing with RTL IP, like a complex interface controller (USB, PCIe, Ethernet MAC or PCS, …), the possible modifications are many, usually to interface with a specific PHY, or with a different bus or application layer. Please note that this work is considered IP customization, not design service. Why? Because the goal is eventually to sell your IP to this customer, not design services, and because you can resell it to a different customer.

By the way, the design service business is by nature not recurrent, which is why it’s far less business friendly than the IP business…

I should define what I mean by “business friendly”! If you want to raise funds for a start-up, you need to attract VCs, and some businesses (or markets) do this very well while others don’t. You must convince a VC that the investment will generate a high return (5x or even 10x). If you look at the successful exits made by IP start-ups during the last decade, you can name Virage Logic (sold to Synopsys in 2010), Denali (acquired by Cadence the same year) or Arteris, which sold to one of its largest customers, Qualcomm. In each case, the multiplication factor (acquisition price / last-year revenue) was in the 6 to 10 range. Unfortunately, the data is not public, but I am very confident about these numbers.

If you don’t need VC money but want to create and develop a healthy business, we will see why the IP business is the type of business to target.

If you attend an MBA program, you will probably learn about ASP behavior: a price premium at product launch, then price stabilization for some time, then price erosion. This model is true, especially for commodity products. But while some IP products can be commodities (the 8051, for example), most of the IP generating high ASPs are not. In this post we consider the high-end, complex, differentiating IP addressing modern SoC needs (CPU, DSP, GPU, memory controller, high-speed interfaces, Network-on-Chip and the like), which represents about 80% of the IP market value.

For these products, we clearly see ASPs increase over time, thanks to the continuous launch of new releases (DDR3, then DDR4 and now DDR5, for example). This is surprising, and I was the first to be surprised when I built the first IP market surveys and realized that the ASP erosion model didn’t fit the actual market behavior.

I can give you an excellent example with the PCI Express PHY. In 2005, a PCIe 1.0 PHY (on 90nm) was state-of-the-art and sold for $150K. More than 10 years later, a PCIe 5.0 PHY (on 10nm or 7nm) sells for $1.5 million, or 10x. But the PHY design is much more complex, delivers 32 Gbps instead of 2.5 Gbps, and the technology has changed notably between 90nm and 7nm…
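For scale, the implied annual price growth in that example is easy to compute (the 13-year span is my approximation):

```python
# Implied annual ASP growth from the PCIe PHY example above: $150K for a
# PCIe 1.0 PHY in 2005 vs. $1.5M for a PCIe 5.0 PHY roughly 13 years later.
start_asp, end_asp, years = 150_000, 1_500_000, 13
cagr = (end_asp / start_asp) ** (1 / years) - 1
print(f"implied ASP CAGR: {cagr:.1%}")   # about 19.4% per year
```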

Controller (RTL IP) pricing is also increasing, first because the specification becomes much more complex with each new release, and also because customers usually ask for some customization. If we take the same PCIe example for the controller, the growth is more in the 3x to 5x range.

Another reason for the IP market’s growth is the externalization trend. In fact, the IP market mimics the behavior of the EDA market in the 1990s: every year, more chip makers source (more) IP externally. These two combined causes explain why the IP market grew at a 13.7% CAGR between 2007 and 2017 (see the first picture).

A good question is how long this market will grow faster than the market it serves, and how large the growth delta will be. It seems reasonable to forecast 2028 (+/- 1 year) as the year when the IP market will pass the total EDA market and probably reach an asymptote. Nevertheless, this forecast is based on the IP market’s behavior over the last 15 years. IPnest thinks this growth should continue, at least up to the point where most chip makers have decided to source IP externally, except for highly strategic IP that is key to building differentiation. But such key IP represents only a very small portion of all the IP integrated into an SoC: if the total number of IPs can be one hundred (or even more), the key IPs are limited to a very few percent of the total.

The IP business is business friendly and the future is bright, but there are essential conditions to meet if you want to participate (there is no free lunch!). Ranking these conditions, one is number one by far: the high quality level required of any IP launched on the market.

From the user’s point of view, quality means extended verification (running VIP, for digital IP). If your IP is analog, like a SerDes, the customer will expect to see a test chip and a comprehensive characterization run on that test chip. “Silicon proven” is the magic phrase. “Production proven” is even better, but if the function is innovative, or just advanced, it’s likely too early to demonstrate that.

For certain IP, like a PLL or a DRAM memory controller, the function must be 100% error free: if it doesn’t work properly, the complete SoC development is wasted. By the time the bug is corrected, the customer will have missed the time-to-market window, and most of his investment.

To offer the highest quality level, you need to build a strong engineering team, including at least one excellent architect able to propose the best design solution. In the early 2000s, we used to see IP start-ups emerge and propose IP when they had only a data sheet available. That time is over, and this is good news for professional IP providers and for customers!

A good product and a strong engineering team are necessary but not sufficient. You also need to offer good technical support: customer-oriented, fast-reacting support based on FAEs (who could also be part of the design team). Your customers will need these FAEs to help them integrate the IP into their designs.

Now, when the technical offering meets the stringent quality requirements described above, you could think that you’re done and neglect marketing… This is unfortunately a frequent mistake made by IP start-ups that consider building strong marketing and communication around their product unnecessary: they have the best IP! They forget that the IP market is one of the fastest moving; if you leave this space uncovered, your competition will occupy it, even if your product is better.

If you don’t invest enough in market communication, nobody will do it for you. If you plan to use reps to approach customers around the world, this sales force will need you to build the appropriate story and make noise about your product. If not, their task of convincing customers will be just too difficult, and a rep may give up and spend time selling other products… Building the right market communication campaign is the next condition, right after technical quality, for turning your product sales into a success story.

A last point, coming from 30 years of experience in the semiconductor industry, split equally between chip makers (mostly ASIC) and IP: the two industries complement one another but are radically different, at least in the way they access the market. Chip makers are usually secretive and try to keep their developments highly confidential, while an IP provider will try to maximize noise about a new product (sometimes before it is even available!). This has been an efficient strategy for companies like Denali, Virage Logic and Arteris, as they were able to create very high market awareness (which certainly helped maximize revenues and exits).

IPnest is now 10 years old and provides market surveys to many customers (more than 40, see the above picture) in Asia, America and Europe, from small to large IP vendors, chip makers, foundries and research centers. These are demanding customers, expecting much more than the simple vendor rankings or 10-year forecasts that we provide. What they need is high-quality data: accurate for actuals and realistic for forecasts.

You can only provide such data if you understand the IP market dynamics, and to do so you need to capitalize on long experience with this market, knowing it from the customer’s viewpoint (TI, Atmel) as well as the provider’s (PLDA, Snowbush and my numerous customers). Excel is useful but not enough; design, project management and marketing experience are the only things that give credibility, and that’s why IPnest is successful!

Eric Esteve
from IPnest


Project Verification Planning for Analog Designs

Project Verification Planning for Analog Designs
by Tom Dillinger on 02-18-2019 at 12:00 pm

Successful projects leverage the investment in comprehensive methodology and resource planning, covering design and analysis flows – that planning effort is especially important for functional verification.

The emergence of complex SoC designs for advanced automotive applications has led to a major focus on verification planning, as reflected in the ISO 26262 standard. This standard defines requirements for automotive product verification, specifically:
Continue reading “Project Verification Planning for Analog Designs”


Doesn’t sound like a recovery anytime in 2019

Doesn’t sound like a recovery anytime in 2019
by Robert Maire on 02-18-2019 at 7:00 am

AMAT reported a more or less in-line quarter, with revenues of $3.75B and non-GAAP EPS of $0.81 versus street expectations of $3.71B and $0.79. Guidance came in well below the street, with revenues expected between $3.33B and $3.63B and non-GAAP EPS from $0.62 to $0.70, versus expectations of $3.66B and $0.77. The company expects memory spend to be significantly lower in 2019 versus 2018 as the industry remains oversupplied.

To make matters worse, the display business will weaken in the second half of the year as TV build outs get pushed into 2020. Display will be down about a third.

WFE spending will be down “significantly”. We think the industry is down close to 20%, from the low $50B range to the low $40B range.

The company talked about “taking steps”, which is obvious code for cutting costs and people in the near term. We think this is indicative of the longer-term nature of the downturn, as the company wouldn’t be taking such measures if this were only a couple of quarters.

Doesn’t sound like a recovery any time in 2019
Given the combination of not “calling a bottom”, reduced memory spend throughout the year, and on top of that a weakening display sector in the second half, we would be very, very hard pressed to come up with the H2 recovery scenario that many analysts are pushing based on nothing more than wishes and hope. The company remains cautious and offered no evidence that would support any sort of positive move in H2.

Dead cat bounce done?
Lam also had a relatively poor outlook, yet the stock rallied on “it could have been worse” relief. We think the relief rally that has gone on since Lam reported may sputter as investors wake up to the reality that 2019 is a pretty bad, down year for semi equipment. There is only so much deviation from reality that the stock market will support before the “ugly truth” comes back to haunt stock prices.

No positive words on the call
The company was careful to set low expectations, saying that the recovery, when it happens, will be “shallow & gradual” or “slow & gradual”. The company used no words or guidance that would support an H2 recovery that bullish analysts have been pushing.

Death of a thousand cuts?
One of our concerns is that even with greatly reduced and controlled expectations, numbers keep getting reset downward on each earnings call. Lacking a bottom, we could see a scenario where we continue to cut numbers until one quarter, at some unknown time in the future, things get better.

The Stocks
Given the surge in all the stocks since the Lam call, we see no reason to put new money into the names, as the outlook is worse than it was a quarter ago yet the stocks are higher. We still have no bottom. We would look to take money off the table, locking in the short-term gains of the past month, as investors may let the stocks drift downward until Q1 is reported.

The recent upswing also puts the stocks outside what we would consider the “cheap range”. We remain concerned that even though there will eventually be a recovery, the timing is unknown and risks of further downside remain. China remains a significant risk, and we don’t expect a clean resolution any time soon given recent reports.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view compared to other sources.