The ESD Alliance CEO Outlook is Coming April 28 –– Live!
by Bob Smith on 04-10-2022 at 10:00 am

It’s not often our community is able to attend an in-person discussion where executives share their insights on industry trends, especially over the past two years as the pandemic swept across the globe.

Well, that’s about to change and I suggest you start jotting down questions as the ESD Alliance plans its first in-person CEO Outlook in three years. We’re featuring five experienced executives –– Dr. Anirudh Devgan of Cadence Design Systems, Niels Fache from Keysight Technologies, Aki Fujimura of D2S, Siemens EDA’s Joe Sawicki and Simon Segars of Arm. Ed Sperling of Semiconductor Engineering leads the discussion. Audience participation will be encouraged via a Q&A session.

Keysight is our co-host for the event on Thursday, April 28, at Agilent Building 5, 5301 Stevens Creek Blvd., Santa Clara, Calif. The evening begins at 5:30pm with a networking reception with food and beverages, and the CEO Outlook panel begins at 6:30pm. The event is free for ESD Alliance and SEMI members; pricing for non-members is $49 per person. Click here for registration information.

The ESD Alliance Annual Membership meeting will be held prior to the start of the CEO Outlook beginning at 5pm at the same location. Non-members are welcome to attend if they purchase a ticket for the CEO Outlook.

The CEO executive panel is a long-standing yearly tradition that started with the EDA Consortium (EDAC) before our charter was expanded to include the entire system design ecosystem and we changed our name to the Electronic System Design (ESD) Alliance.

The wait is over. I look forward to seeing you again in person and recommend you register today. Our CEO Outlook is a popular event and we’re expecting a big crowd. Registration details can be found here.

About the ESD Alliance
The ESD Alliance serves as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. We have an ongoing series of networking and educational events like the CEO Outlook, programs and initiatives. Additionally, as a SEMI Technology Community, ESD Alliance companies can join SEMI at no extra cost.

To learn more about the ESD Alliance, visit the ESD Alliance website. Or contact me at bsmith@semi.org if you have questions or need more information.

Engage with the ESD Alliance at:
Website: www.esd-alliance.org
ESD Alliance Bridging the Frontier blog
Twitter: @ESDAlliance
LinkedIn
Facebook

Also read:

Key Executive to Discuss Latest Chip Industry Design Trends at SEMI ESD Alliance 2022 CEO Outlook April 28

Nominations Open for Phil Kaufman Hall of Fame Sponsored by ESD Alliance and IEEE CEDA

Cadence’s Dr. Anirudh Devgan to be Honored with the 2021 Phil Kaufman Award on May 12


Chip Shortage Killed the Radio in the Car
by Roger C. Lanctot on 04-10-2022 at 6:00 am

“In my mind and in my car, we can’t rewind we’ve gone too far.” – “Video Killed the Radio Star” – The Buggles

I discovered within days of driving home my new BMW X3 last fall that I was a victim of the much ballyhooed chip shortage. Among the features “deleted” from my car were “Passenger Lumbar,” “BMW Digital Key,” and “SiriusXM and HD.”

To their credit, BMW and the dealer detailed the deletions on the vehicle’s Monroney label. Sadly, the SiriusXM rep I called to help me find and activate the service was unaware that the necessary hardware was missing, even though it should have been there according to my VIN. (Rumor has it that BMW intends to provide an aftermarket solution – but I’m not holding my breath.)

The experience was startling. Was BMW considering removing the car radio, or maybe just doing without digital? Were they thinking that life would be so much simpler if they could dispense with the radio and all the testing and all the related cabling, semiconductors, and those damn antennas?

In fact, delivering an interference-free AM experience in an EV has become sufficiently challenging for some OEMs that they have, in some isolated cases, chosen to simply do without. We expect the radio in an internal combustion vehicle – but maybe not in an EV?

We take it for granted. But who is to say that there must be a radio in the car?

Has the time arrived when we need a radio mandate? Why didn’t my dealer see fit to alert me to the missing SiriusXM and HD Radio hardware? Was the dealer afraid it might be a deal breaker?

Is it time for the FCC to step in and subsidize the chip making resources of semiconductor companies such as NXP, Texas Instruments, and ST Micro? Do we need a strategic SiriusXM/HD Radio semiconductor reserve – to be tapped into in times of supply chain crises?

There is a clear public interest in requiring access to free over-the-air broadcast content in cars – especially in times of severe weather, terrorist attacks, road closures, or bridge collapses! You might lose your cell service, but you can always find a channel on the radio. And, of course, there is the Emergency Broadcast System.

Within a month or two of taking delivery of my BMW I had the opportunity to take in the hybrid radio experience delivered in the newest EVs from Mercedes-Benz – the so-called MBUX. Word is that Mercedes-Benz has also been hit by radio chipset shortages but is withholding delivery of those chip-challenged vehicles until the content can be installed.

My chip-less BMW has me listening to the radio without the benefit of rich HD Radio metadata and visual content and, of course, without SiriusXM; Howard Stern remains out of reach. A new in-car content consumption experience infused with visual metadata and integrated with recommendation engines, search, and personal profiles is arriving in the market, only slightly delayed by the chip shortage.

Presumably BMW and others will reverse their “deletions” in recognition of the enduring value proposition of radio in the car.

Also read:

A Blanche DuBois Approach Won’t Resolve Traffic Trouble

Auto Safety – A Dickensian Tale

No Traffic at the Crossroads


Podcast EP70: A Review of EDA and IP Growth For 2021 with Dr. Walden Rhines
by Daniel Nenni on 04-08-2022 at 10:00 am

Dan is joined by Dr. Walden Rhines in his capacity as Executive Sponsor for the SEMI Electronic Design Market Data Report. Wally provides an overview of the most recent report covering all of 2021. Spoiler alert: it was a record-breaking year in many areas.

Dan and Wally explore the details behind the numbers and what it may mean for the coming year of semiconductor growth. Various events around the world are also discussed with regard to their possible impact on the industry.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Esko Mikkola of Alphacore
by Daniel Nenni on 04-08-2022 at 6:00 am

CEO’s background: what led to your current organization?
From an early age, exceeding expectations has driven me to succeed in my areas of interest. While enlisted in the Finnish Army, I was selected, trained, and short-listed for the country’s astronaut training program before it was unfortunately dissolved. A major realized goal was to qualify for and represent Finland in the 2004 Athens Olympics. In the US, I won the NCAA javelin championship in 2013. This rigorous athletic discipline complemented a similarly intense academic drive to research advanced analog electronics, with an emphasis on radiation tolerance, reliability modeling and characterization structures. While completing a PhD in Electrical Engineering, I focused on evaluating analog IC modeling correlation issues affecting advanced small-geometry technologies. During the course of this work, it became apparent that a significant unserved opportunity existed to deliver ultra-low-power, high-speed, and high-resolution data converter solutions. I was particularly interested in the mixed-signal aspects of conversion architectures, especially as applied to analog circuit functionality and reliability when using emerging nanometer-scale geometries. Those experiences led to managing successful research and development programs where the ideas were put into practice. Once the technology and opportunity were validated, I made the entrepreneurial leap as the founder and CEO of Alphacore Inc.

Please tell us about Alphacore and its product offering.
The Alphacore design team has extensive experience in delivering products and solutions for a diverse customer base. We offer products and services that often far exceed the performance specifications of what is available in the market today. These solutions fulfill the needs of a broad range of leading-edge commercial, scientific, and aerospace communication applications. For example, Alphacore’s novel data converter architecture, HYDRA™ (Hybrid DigitizeR Architecture), allows us to deliver a best-in-class family of RF data converters. The resolution in this family is up to 14 bits with analog bandwidths as high as 25 GHz, while consuming milliwatts of power. These hybrid architecture innovations are key to delivering the unbeatable specs found in our RF data converter library, which achieves gigasample-per-second rates at milliwatts of power.

From our foundation in RF data converters, we have built a transceiver architecture, SMART™ (Scalable Macro Architecture for Receiver and Transmitter), that is delivered to customers as scalable macros. This innovative, multi-core macro approach is ideal for phased-array, beamforming, massive-MIMO, and 5G/6G applications, and it offers significant advantages compared to other approaches. Starting with our RF data converter family, we cover selectable data converter specs for multi-channel arrays available from a few hundred MHz to 20 gigasamples per second. Alphacore’s innovative hybrid architecture enables these arrays to be configured with best-in-class area and power efficiency. Our scalable macro transceiver architecture can be configured with on-board PLLs, with selectable I/O formats, and with or without SerDes. This scalable approach enables optimum performance and maximum configurability with minimal customization and design risk.

What makes Alphacore Intellectual Property/products unique?
Continuing with the theme from the previous questions, there are really two things that make our products unique. First, and probably most important, is that the team has invented and now productized the HYDRA™ and SMART™ architectures, which solve the area, performance, and power challenges not just of the data converters, but of the macros required at the RF system level.

Alphacore is growing fast and leverages our small-company agility to deliver innovative architectures and remarkable system performance for our partners and customers. Our ADCs have the lowest power in their resolution and bandwidth class. Additionally, we are a member of the prestigious GlobalFoundries FDX Network, which recognizes these industry-demonstrated design accomplishments. Because we continue to blaze new trails in leading-edge process technologies such as GF’s 22FDX 22nm fully depleted silicon-on-insulator (FD-SOI) CMOS process, our customers are assured their unique solution is developed using rigorous discipline approved by an internationally respected partner.

For example, our IP cores deliver world-class performance in low-power FD-SOI technologies and offer proven solutions that include data converter products. We offer power that is 10X lower than the nearest commercial competitors, wide bandwidth, and innovative automatic background calibration for spur removal, all at gigasample-class conversion rates. The outright performance increase of our cores gives us a unique, market-disrupting position against established data conversion vendors. Significantly, these cores have enabled cost-effective IC readout solutions with frame rates up to 600,000 frames per second, and more reliable control electronics for harsh environments.

What customer challenges/competitive positioning are you addressing?
Our commercial customers are in market segments that are driven by frequently upgraded specifications and demands for new advanced products, such as emerging 5G communications standards and Advanced Driver-Assistance Systems, which require high-quality solutions at low cost. Alphacore delivers data converters with the best performance figure of merit (power dissipation, sampling rate, effective number of bits, and signal-to-noise-and-distortion ratio), imaging solutions with the highest resolutions and frame rates available, and power management products such as the most power-efficient DC-DC converters.
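As a point of reference, one widely used way to combine those parameters is the Walden figure of merit, which normalizes power by sample rate and effective resolution; this is a general industry convention, not a metric quoted by Alphacore:

$$ \mathrm{FOM_W} \;=\; \frac{P}{2^{\mathrm{ENOB}} \cdot f_s} $$

where P is power dissipation, ENOB is the effective number of bits, and f_s is the sampling rate. The result is energy per conversion step, and lower values indicate a more power-efficient converter.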

As mentioned before, Alphacore’s new products enable market-disrupting, consumer-priced solutions with remarkable performance and pricing in monolithic packages that replace expensive and less reliable hybrid multi-chip modules. Alphacore’s business model gives our customers the optimum power/performance tradeoff for data converters, propelling faster time-to-market for system designs that are simpler, smaller and more affordable, with assistance available from our dedicated technical support along the way.

Furthermore, competitive versions of these products can be delivered with characterized radiation tolerance specifications.

Which markets most benefit from Alphacore IP?
Our technical team includes disciplined and seasoned “Radiation-Hardened-By-Design” (RHBD) experts; however, we specialize in designing commercial high-performance solutions for niche needs of demanding segments including 5G communications. Alphacore’s very-low-power and high-speed data converter IP design blocks are ideally suited to direct RF sampling architectures necessary for advanced communication standards including 5G, LTE and WiFi and their base stations.

Also, our library of IP facilitates the fast development pace of new technology in other major markets for automotive sensors, aerospace, defense, medical imaging, homeland security, scientific research, or electronics for High Performance Computing (HPC) in space environments.

All of these market segments drive next-generation products that, much like Moore’s Law, seem to demand 2x and 3x multiples of performance or resolution at similar multiples of economies of scale and pricing. Significantly, Alphacore’s product roadmap, with complementary photolithographic breakthroughs, novel scaling architectures, and so on, is ideal whether customers request low-cost, high-performance IP or much larger scaled IP versions with massive increases in data or image resolution.

What are Alphacore’s Goals?
The company is driving toward recognition of its strengths and expertise, and toward commercial business development of its products and licensable IP, in a handful of strategic, opportunity-driven areas that build directly on those strengths. I would characterize these critical growth areas for us as 1) High-Performance Analog, Mixed-Signal and RF Solutions, 2) Advanced CMOS Image Sensor, Camera System and ROIC products and services, 3) Radiation Hardening expertise for applications in harsh environments, and 4) Emerging Solutions for 5G, SATCOM, Scientific Research, Automotive, Defense, and Space.

Also Read

CEO Interview: Kelly Peng of Kura Technologies

CEO Interview: Aki Fujimura of D2S

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology


Design to Layout Collaboration Mixed Signal
by Tom Simon on 04-07-2022 at 10:00 am

Cliosoft integration with Custom Compiler

When talking about today’s sophisticated advanced-node designs it’s easy to think first about the digital challenges. Yet the effort to design the needed analog and mixed-signal blocks for them should not be underestimated. The need for high-speed clocks, high-frequency RF circuits and high-bit-rate I/Os makes the analog portions, particularly on FinFET nodes, complex and difficult. Analog design has, in reality, maintained its importance to SoC success over time. Indeed, the facts show growing numbers of analog and AMS circuit and layout designers working in teams around the world. Collaboration within and among these teams has become a primary concern.

There is a changing analog tool landscape too. Custom Compiler from Synopsys is making significant inroads into the previously monolithic custom IC design market. Synopsys reports that there are now nearly 200 companies using Custom Compiler. This, in conjunction with Synopsys’ own internal usage for the development of their commanding analog IP portfolio, means that there are literally thousands of seats in use today, and the numbers are growing. In a recent webinar by Synopsys and Cliosoft, a leading design data management solution provider, Synopsys cites increased design efficiency as the key to their ongoing success. The webinar, titled “Enabling Effective Design & Layout Collaboration for Next Generation Analog and Mixed Signal Designs”, touts the efficiencies added by integration with Cliosoft for design collaboration.

One might assume that this is just about checking files in and out so they can be edited safely. However, the webinar goes into detail about some pretty important aspects of the integration of Cliosoft SOS and Synopsys Custom Compiler. They specifically highlight the signoff review features. It’s important to note that circuit designers and layout engineers working on the same project might be sitting halfway around the world from each other. The integration described in the webinar offers sophisticated features that let one team add annotations to areas within a design, including highlighting specific areas graphically, to help communicate changes that might be needed. The Cliosoft SOS integration allows this collaboration activity right inside the Custom Compiler user interface and directly on the design.

Cliosoft integration with Custom Compiler for Collaboration

The webinar has an overview that shows how Cliosoft SOS capabilities can be used for design/layout collaboration and closure. The four elements of this are managing the design, facilitating collaboration, offering insight through analysis and finally making reuse possible.

Design data management includes revision control as you would expect. It offers release and variant management. Data security and access controls are provided as well. It also contains features that help to optimize network and disk storage usage.

The collaboration element covers support for remote cache servers with automatic synchronization. Underlying this are mechanisms that provide secure and efficient data transfer between sites.

The analysis features can produce design audit reports. They can also be used to spot schematic/layout differences. There are also reports on the changes made between releases or over time on designs. All of this helps manage and track the design process.

The fourth category, reuse, while long sought after, has in practice proven challenging. Cliosoft SOS helps companies effectively locate and reuse designs. Customers can create their own IP catalog. When there are fixes and new releases of IP in the catalog, they are propagated so everyone stays up to date. The net effect is to increase productivity.

The webinar covers examples of each of these elements. Also, it includes a demo that shows how Cliosoft SOS is used directly inside of the Custom Compiler GUI for several of the tasks mentioned above to improve collaboration. The full webinar can be viewed on the Synopsys website.

Also read:

Synopsys Tutorial on Dependable System Design

Synopsys Announces FlexEDA for the Cloud!

Use Existing High Speed Interfaces for Silicon Test


Intel Best Practices for Formal Verification
by Daniel Nenni on 04-07-2022 at 6:00 am

formal dynamic verification comparison

Dynamic event-based simulation of RTL models has traditionally been the workhorse verification methodology.  A team of verification engineers interprets the architectural specification to write testbenches for various elements of the design hierarchy.  Test environments at lower levels are typically exercised then discarded as RTL complexity grows during model integration.  Methodologies have been enhanced to enable verification engineers to generate more efficient and more thorough stimulus sets – e.g., biased pseudo-random pattern generation (PRPG); portable test and stimulus standard (PSS) actions, flows, and scenarios.

Yet, dynamic verification (DV) is impeded by the need to have RTL (or perhaps SystemC) models thoroughly developed to execute meaningful tests, implying that bug hunting gets off to a rather late start.

Formal verification (FV) algorithms evaluate properties that describe system behavior, without the need for dynamic stimulus.  There are three facets to FV:

  • assertions

Model assertions are written to be exhaustively proven by evaluating the potential values and sequential state space for specific signals.

  • assumptions

Known signal relationships (or constraints) are specified on model inputs, to limit the bounds of the formal assertion proof search.

  • covers

A cover property defines an event (or sequence) of interest that should eventually occur, to record the scope of verification testing – an example would be to ensure proper functionality of the empty and full status of a FIFO.

A common semantic form for these FV properties is SystemVerilog Assertions (SVA). [1]

The properties in an assertion to be exhaustively proven have a range of complexities (a brief SVA sketch follows the list):

  • (combinational) Boolean expressions
  • temporal expressions, describing the required sequence of events
  • implication: event(s) in a single cycle imply additional event(s) must occur in succeeding cycles
  • repetition: an event must be followed by a repetition of event(s)
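To make these property styles concrete, below is a minimal SVA sketch; the module, signal names, and timing bounds are hypothetical illustrations and are not taken from the Intel presentation.

module fv_arbiter_props (
  input logic       clk,
  input logic       rst_n,
  input logic [3:0] req,
  input logic [3:0] gnt
);
  // Combinational (Boolean) assertion: at most one grant is active per cycle
  ast_onehot : assert property (@(posedge clk) disable iff (!rst_n)
                 $onehot0(gnt));

  // Implication with a temporal bound: a request must be granted within 4 cycles
  ast_latency : assert property (@(posedge clk) disable iff (!rst_n)
                  req[0] |-> ##[1:4] gnt[0]);

  // Assumption (input constraint): a request is held until it is granted
  asm_hold : assume property (@(posedge clk) disable iff (!rst_n)
               (req[0] && !gnt[0]) |=> req[0]);

  // Cover: record that back-to-back grants to the same requester actually occur
  cov_b2b : cover property (@(posedge clk) disable iff (!rst_n)
              gnt[0] ##1 gnt[0]);
endmodule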

When a formal verification tool is invoked to evaluate an assertion against a functional model, the potential outcomes are:

  • proven
  • disproven (typically with a counterexample signal sequence provided)
  • bounded proven to a sequential depth, an incomplete proof halted due to resource and/or runtime limits applied as the potential state space being evaluated grows

FV offers an opportunity to find bugs faster and improve the productivity of the verification team.  Yet, employing the optimal methodology to leverage the relative strengths of both formal verification and dynamic verification (with simulation and emulation) requires significant up-front project planning.

At the recent DVCON, Scott Peverelle from the Intel Optane Group gave an insightful talk on how their verification team has adopted a hybrid FV and DV strategy. [2] The benefits they have seen in bug finding and (early) model quality are impressive – part of the initiative toward “shift left” project execution.  The rest of this article summarizes the highlights of his presentation.

Design Hierarchy and Hybrid FV/DV Approaches

The figure above illustrates a general hierarchical model applied to a large IP core – block, cluster, and full IP.  The goal is to achieve comprehensive FV coverage for each block-level design, and extend FV into the cluster-level as much as possible.

The expectation is that block-level interface assertions and assumptions will be easier to develop and verify.  And similarly, end-to-end temporal assertions involving multiple interfaces across the block will have smaller sequential depth, and thus have a higher likelihood of achieving an exhaustive proof.  Scott noted that the micro-architects work to partition block-level models to assist with end-to-end assertion property generation.

Architectural Modeling and FV

Prior to focusing on FV of RTL models, a unique facet of the recommended hybrid methodology is to create architectural models for each block, as depicted below.

The architectural models can be quite abstract, and thus small and easy to develop.  The models only need to represent enough behavior to include key architectural features.

A major goal is to enable the verification team to develop and exercise the more complex interface and end-to-end FV assertions, and defer work on the properties self-contained within the block functionality.  These architectural models are then connected to represent a broader scope of the overall hierarchy.

Although the resource invested in writing architectural models may delay RTL development, Scott highlighted that this FV approach expedites evaluation of complex, hard-to-find errors, such as:  addressing modes and address calculation; absence of deadlock in communication dependencies; verification of the order of commands; and measurement of the latency of system operations.

Formal Verification of RTL

The development of additional FV assertions, assumptions, and covers expands in parallel with RTL model coding, with both the micro-architects and verification team contributing to the FV testbench.

The recommended project approach is for the micro-architects to focus initially on RTL interface functionality; these RTL “stubs” are immediately useful in FV (except for the end-to-end assertions).  Subsequently, the baseline RTL is developed and exercised against a richer set of assertions and covers.  And, the properties evaluated during the architectural modeling phase are re-used and exercised against the RTL.

As illustrated below, there is a specific “FV signoff” milestone for the block RTL.

The FV results are evaluated – no failing assertions are allowed, of course, and any assertions that are incomplete (bounded proven) are reviewed.  Any incompletes are given specific focus by the dynamic verification testbench team – the results of a bounded search are commonly used as a starting point for deep state space simulation.

Promotion of FV Components to Higher Levels

With a foundation in place for block-level FV signoff, Scott described how the FV testbench components are promoted to the next level of verification hierarchy, as illustrated below.  The FV component is comprised of specific SystemVerilog modules, with assertions and assumptions partitioned accordingly.  (An “FBM” is a formal bus model, focused on interface behavior properties.)

Each FV component has a mode parameter, which sets the configuration of assertions, assumptions, and covers.  In ACTIVE mode, all assertions and covers are enabled; assumptions (in yellow) are used to constrain the property proof.  When promoting an FV component, the PASSIVE mode is used – note that all assume properties are now regarded as assertions to be proven (green).

In short, any block-model assumptions on input ports need to be converted to assertions and verified (either formally or through dynamic simulation) at the next level of model integration.

Briefly, there is also an ARCH mode setting for each FV component, as depicted below:

If a high-level architectural model is integrated into a larger verification testbench, end-to-end and output assertions become FV assumptions (in yellow), while input assumptions are disabled (in grey).
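A minimal sketch of how such a mode parameter might be organized follows; the wrapper structure, names, and properties are illustrative assumptions, not the actual Intel FV component code.

// Hypothetical FV component wrapper with a mode parameter.
// ACTIVE:  used for block-level proofs – inputs are constrained by assumptions.
// PASSIVE: used when promoted to the next hierarchy level – the former input
//          assumptions must now be proven (or simulated) as assertions.
// (An ARCH mode, not shown, reverses roles when an architectural model is used.)
module fv_block_if #(parameter string MODE = "ACTIVE")
  (input logic clk, rst_n, valid_in, ready_in, valid_out, ready_out);

  property p_in_stable;       // input protocol rule
    @(posedge clk) disable iff (!rst_n) (valid_in && !ready_in) |=> valid_in;
  endproperty

  property p_out_handshake;   // output-facing rule
    @(posedge clk) disable iff (!rst_n) valid_out |-> ##[1:8] ready_out;
  endproperty

  generate
    if (MODE == "ACTIVE") begin : g_active
      asm_in  : assume property (p_in_stable);
      ast_out : assert property (p_out_handshake);
    end else begin : g_passive
      ast_in  : assert property (p_in_stable);   // assumption becomes an assertion
      ast_out : assert property (p_out_handshake);
    end
  endgenerate
endmodule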

Additional Formal Apps

Scott highlighted the additional tools that are still needed to complement the formal property verification methodology:

  • inter-module connectivity checks (e.g., unused ports)
  • clock domain crossing (CDC) checks for interface metastability
  • sequential equivalence checking, for implementation optimizations that introduce clock gating logic
  • X-propagation checks

3rd Party IP Interface Verification

If the design integrates IP from an external source, an FV-focused testbench is a valuable resource investment to verify the interface behavior.  Scott mentioned that end-to-end assertions developed to ensure the overall architectural spec correctly matches the 3rd party IP behavior have proven to be of considerable value.  (Scott also noted that there are commercially available FV testbenches for protocol compliance for industry-standard interfaces.)

Dynamic Verification

Although FV can accelerate bug finding, Scott reiterated that their verification team is focused on a hybrid methodology, employing DV for deep state bug hunting, and for functionality that is not well-suited to formal verification.  Examples of this functionality include:

  • workload-based measurements of system throughput and interface bandwidth
  • power state transition management
  • firmware and hardware co-simulation (or emulation)
  • SerDes PHY training

Results

The planning, development, and application of FV components and integration in a verification testbench requires an investment in architecture, design and verification team resources.  Scott presented the following bug rate detection graph as an indication of the benefits of the approach they have adopted.

The baseline is a comparable (~20M gate) IP design, where dynamic verification did not begin in earnest until the initial RTL integration milestone.  The early emphasis on FV at the architectural level captured hundreds of bugs, quite a few related to incorrect and/or unclear definitions in the architectural spec (including 3rd party IP integration).  These bugs would have taken much longer and more verification resources to uncover in a traditional DV-only verification flow.

Summary

Formal property verification has become an integral part of the initiative toward a shift-left methodology.  Early FV planning and assertion development, combined with a resource investment in architectural modeling, helps identify bugs much earlier.  A strategy of focusing on FV at lower levels of the design hierarchy, followed by evaluation of constraints at higher levels of integration, offers FV testbench reuse benefits.  This is an improvement over lower-level dynamic verification testbench development, which is commonly discarded as the design complexity progresses.

The best practices FV methodology recently presented by Intel at DVCON is definitely worth further investigation.

References

[1]   IEEE Standard 1800 for SystemVerilog, https://ieeexplore.ieee.org/document/8299595

[2]   Chen, Hao; Peverelle, Scott;  et al, “Maximizing Formal ROI through Accelerated IP Verification Sign-off”, DVCON 2022, paper 1032.

Also read:

Intel Evolution of Transistor Innovation

Intel 2022 Investor Meeting

The Intel Foundry Ecosystem Explained


TSMC’s Reliability Ecosystem
by Tom Dillinger on 04-06-2022 at 10:00 am

AC accelerated stress conditions

TSMC has established a leadership position among silicon foundries, based on three foundational principles:

  • breadth of technology support
  • innovation in technology development
  • collaboration with customers

Frequent SemiWiki readers have seen how these concepts have been applied to the fabrication and packaging technology roadmaps, which continue to advance at an amazing cadence.  Yet, sparse coverage has typically been given to TSMC’s focus on process and customer product reliability assessments – these principles are also fundamental to the reliability ecosystem at TSMC.

At the recent International Reliability Physics Symposium (IRPS 2022), Dr. Jun He, Vice-President of Corporate Quality and Reliability at TSMC, gave a compelling keynote presentation entitled:  “New Reliability Ecosystem: Maximizing Technology Value to Serve Diverse Markets”.  This article provides some of the highlights of his talk, including his emphasis on these principles.

Technology Offerings and Reliability Evaluation

The figures above highlight the diverse set of technologies that Dr. He’s reliability team needs to address.  The reliability stress test methods for these technologies vary greatly, from the operating voltage environment to unique electromechanical structures.

Dr. He indicated, “Technology qualification procedures need to be tailored toward the application.  Specifically, the evaluation of MEMS technologies necessitates unique approaches.  Consider the case of an ultrasound detector, where in its end use the detector is immersed in a unique medium.  Our reliability evaluation methods need to reflect that application environment.”

For more traditional microelectronic technologies, the reliability assessments focus on accelerating defect mechanisms, for both devices and interconnect:

  • hot carrier injection (HCI)
  • bias temperature instability (NBTI for pFETs, PBTI for nFETs)
  • time-dependent dielectric breakdown (TDDB)
  • electromigration (for interconnects and vias)

Note that these mechanisms are highly temperature-dependent.
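As a point of reference, the temperature dependence of these mechanisms is commonly captured by the Arrhenius acceleration factor between the stress and use conditions (a general reliability-engineering relationship, not a TSMC-specific formula):

$$ AF \;=\; \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}}\right)\right] $$

where E_a is the activation energy of the mechanism, k is Boltzmann’s constant, and the temperatures are absolute.  Even modest increases in stress temperature therefore provide large acceleration of these defect mechanisms during qualification testing.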

As our understanding of the physics behind these mechanisms has improved, the approaches toward evaluating their impact to product application failure rates have also evolved.

Dr. He commented, “Existing JEDEC stress test standards are often based on mechanism acceleration using a DC Vmax voltage.  However, customer-based product qualification feedback did not align with our technology qualification data.  Typically, the technology-imposed operating environment restrictions were more conservative.”

This is of specific interest to high-performance computing (HPC) applications, seeking to employ boost operating modes at increased supply voltages (within thermal limits).

Dr. He continued, “We are adapting our qualification procedures to encompass a broader set of parameters.  We are incorporating AC tests, combining Vmax, frequency, and duty cycle variables.”

The nature of “AC recovery” in the NBTI/PBTI mechanism for device Vt shift has been recognized for some time, and is reflected in device aging models.  Dr. He added, “We are seeing similar recovery behavior for the TDDB defect mechanism.  We are aggressively pursuing reliability evaluation methods and models for AC TDDB, as well.”

The figure above illustrates how the Vt shift due to BTI is a function of the duty cycle of the device input environment, as represented by the ratio of the AC-to-DC Vt difference.  The figure also highlights the newer introduction of a TDDB lifetime assessment for high-K gate dielectrics, as a function of input frequency and duty cycle.

Parenthetically, Dr. He acknowledged that end-application product utilization can vary widely, and that AC reliability testing makes some usage assumptions.  He indicated that TSMC works with customers to establish appropriate margins for their operating environment.

Reliability Evaluation of New Device Types

TSMC has recently added resistive RAM (RRAM) and magneto-resistive RAM (MRAM) IP to their technology offerings.

The unique physical nature of the resistance change in the storage device for these technologies necessitates development of a corresponding reliability evaluation procedure, to establish retention and endurance specifications.  (For the MRAM technology, the external magnetic field immunity specification is also critical.)

For both these technologies, the magnitude and duration of the write current to the storage cell is a key design parameter.  The maximum write current is a crucial reliability factor.  For the MRAM example, a high write current through the magnetic tunnel junction to set/reset the orientation of the free magnetic layer in the storage cell degrades the tunnel barrier.

TSMC collaborates with customers to integrate write current limiting circuits within their designs to address the concern.  The figure below illustrates the write current limiter for the RRAM IP.

TSMC and Customer Collaboration Reliability Ecosystem

In addition to the RRAM and MRAM max write current design considerations, Dr. He shared other examples of customer collaborations, which is a key element of the reliability ecosystem.

Dr. He shared the results of design discussions with customers to address magnetic immunity factors – the figure below illustrates cases where the design integrated a Hall effect sensor to measure the local magnetic field.  The feedback from the sensor can be used to trigger corrective actions in the write cycle.

The customer collaboration activities also extend beyond design for reliability (DFR) recommendations.  TSMC shares defect pareto data with customers.  Correspondingly, the TSMC DFR and design-for-testability (DFT) teams will partner with customers to incorporate key defect-oriented test screens into the production test flow.

Dr. He provided the example where block-specific test screens may be appropriate, as illustrated below.

Power management design approaches may be needed across the rest of the design to accommodate block-level test screens.

The figure below depicts the collaboration model, showing how customer reliability feedback is incorporated into both the test environment and as a driver for continuous improvement process (CIP) development to enhance the technology reliability.

Summary

At the recent IRPS, TSMC presented their reliability ecosystem, encompassing:

  • developing unique reliability programs across a wide breadth of technologies (e.g., MEMS)
  • developing new reliability methods for emerging technologies (e.g., RRAM, MRAM)
  • sharing design recommendations with customers to enhance final product reliability
  • collaborating closely with customers on DFR issues, and integrating customer feedback into DFT screening procedures and continuous improvement process focus

Reflecting upon Dr. He’s presentation, it is no surprise that these reliability ecosystem initiatives are consistent with TSMC’s overall principles.

-chipguy

Also read:

Self-Aligned Via Process Development for Beyond the 3nm Node

Technology Design Co-Optimization for STT-MRAM

Advanced 2.5D/3D Packaging Roadmap


Synopsys Tutorial on Dependable System Design
by Bernard Murphy on 04-06-2022 at 6:00 am

Dependability

Synopsys hosted a tutorial on the last day of DVCon USA 2022 on design/system dependability. Which here they interpret as security, functional safety, and reliability analysis. The tutorial included talks from DARPA, AMD, Arm Research and Synopsys. DARPA and AMD talked about general directions and needs, Arm talked about their PACE reliability analysis technique and Synopsys shared details on solutions they already have and aligning standards in safety and security.

DARPA on practical strategies for security in silicon

Serge Leef (PM DARPA) provided insight into the DARPA focus on scalable defense mechanisms in electronics, particularly trading off cost versus security. They’re targeting not the big semis or consumer electronics guys but rather mid-sized semis and defense companies, who are most interested in getting help. The ultimate goal in the project that Serge oversees is automatic synthesis of secure silicon (AISS). No-one ever accused DARPA of thinking small.

Synopsys, Northrop Grumman, Arm and others have been selected to drive this effort. A security engine, developed in a different part of the program, will be integrated together with commercial IP and security-aware tools. In a phased approach to full automation, systems will initially be composed around these various IP. In a second phase, optimization will be configured within a platform-based architecture. In the final phase they aim to be able to specify a cost function around power, area, speed and security (PASS), allowing system teams to dial in the tradeoff they want.

As I said, an ambitious goal, but this is the organization that gave us the Internet after all 😀.

AMD on functional safety

Bala Chavali is an RAS architect at AMD (RAS is reliability, availability and serviceability/ maintainability). Her main thesis in this talk was the challenge in meeting objectives across multiple dependability goals, each with their own standards and expectations. She breaks these down into reliability, safety, security, availability and maintainability.

In part the challenges arise from disconnected standards, lifecycle requirements and required compliance work products across these multiple objectives. In part challenges come from lack of enough standards on IP suppliers, particularly around safety, security, and traceability.

Bala underscored the importance of unifying these objectives as much as possible to minimize duplicated effort. She sees value in aligning common standards efforts, for example in defining a generic dependability development lifecycle. This should leverage a wholistic analysis data set. Also a common data exchange language across applications (automotive, industrial, avionics, and across the system design hierarchy). Bala mentioned the Accellera functional safety working group (on which she serves) as one organization working towards this goal, also IEEE P2581 as another with a similar objective.

Arm Research on PACE reliability analysis

Reiley Jeyapaul, Senior Research Engineer at Arm Research, talked about using formal tools for reliability estimation on a Cortex-R52 using their proof-driven architecturally correct execution (PACE) methodology. Their objective is to estimate the fraction of the design that is vulnerable to soft errors (random failures) and produce an architecture vulnerability factor (AVF). They suggest this as a model to derate the estimated FIT rate. The factor is needed since not every naïve vulnerability is real (an error may never propagate), and some are only conditionally vulnerable. Arm will license PACE models to partners for their use in system vulnerability assessments.
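In the usual formulation (the standard derating relationship, not something specific to Arm’s PACE models), the vulnerability factor simply scales the raw soft-error rate:

$$ \mathrm{FIT}_{\mathrm{derated}} \;=\; \mathrm{AVF} \times \mathrm{FIT}_{\mathrm{raw}}, \qquad 0 \le \mathrm{AVF} \le 1 $$

so a structure in which most bit flips never reach architecturally visible state contributes far fewer effective failures than its raw FIT rate would suggest.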

Reiley provided detail on their formal technique and how they validated this method against exhaustive fault injection (EFI). The models are a little pessimistic, but not as pessimistic as the naïve model, and they run dramatically faster than EFI. Sounds like a valuable capability for SoC designers.

Synopsys on automation for safety and security

Meirav Nitzan, director of functional safety and security solutions at Synopsys, closed the tutorial. She summed up capabilities offered by Synopsys for safety and security, in tools and IP and across both software and hardware design. I’m not going to attempt to summarize that long list. I think their selection for the DARPA AISS program is endorsement enough. I will call out a few points that I found interesting to my predominantly hardware audience.

We all know the tools and IP suites in this area. To this they add SW test libraries for safety and software security analysis for 3rd-party SW running on HW. They also provide virtual prototyping for developing secure SW. They use fault simulation for FMEDA analyses of course. I also found it interesting that they support fault modeling for malicious attacks, as well as sample fault simulation in emulation, which I would guess would be valuable in testing vulnerability to probing attacks.

Meirav wrapped by reiterating the work IEEE P2581 is doing; to align security and safety requirements, a worthy goal for all of us. Learn more about Synopsys’ solutions for mission-critical silicon and software development HERE.

Also read:

Synopsys Announces FlexEDA for the Cloud!

Use Existing High Speed Interfaces for Silicon Test

Getting to Faster Closure through AI/ML, DVCon Keynote


The Importance of Low Power for NAND Flash Storage
by Tom Simon on 04-05-2022 at 10:00 am

Low Power for NAND Flash

Even though we all know that reducing power consumption in NAND flash storage is a good idea, it is worthwhile to take a deeper dive into the underlying reasons. A white paper by Hyperstone, a leading developer of flash controllers, discusses these topics, providing useful insight into the problem and its solutions. The paper is titled “Power Consumption in NAND Flash Storage.”

It almost goes without saying that the most obvious reason to reduce NAND flash storage power consumption is to reduce operating costs – lower power means lower utility bills. Furthermore, lower power can also mean packaging and cooling system savings, which will reduce end product costs. Along with that there are several other equally important reasons.

Regarding the key issues for NAND flash devices, the Hyperstone white paper mentions thermal stress. Increasing complexity and higher performance translate into more thermal stress for semiconductors. The problem is especially aggravated when devices are located in industrial or automotive settings where heat is already an issue. One property of semiconductors that works to our disadvantage is that when they operate at higher temperatures, they use more power, creating a positive feedback loop. The white paper points out that a temperature rise from 50°C to 100°C leads to a ten-fold increase in leakage current. Thermal stress can lead to multiple kinds of failures such as device breakdown, electromigration and physical fatigue.
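To put the cited figure in perspective, a ten-fold increase over a 50°C rise corresponds to leakage roughly doubling every 15°C, since 2^(50/15) ≈ 10.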

Leakage current, an important issue for NAND flash devices, is static current that flows through devices when they are not active. Unfortunately, the percentage of chip power consumption from leakage is growing as chips move to smaller more advanced process nodes. Additionally, there is a tradeoff between performance and leakage power. Lower threshold transistors, which have more leakage, are necessary for higher performance. The Hyperstone paper describes some specialized techniques for applying bias voltages to transistors to vary their leakage current as needed based on time-dependent performance requirements.

Within the NAND flash controller, the different functional elements have their own power requirements, which can change depending on workloads and operational modes. The white paper does a good job of delineating the major functional blocks that can affect power consumption. Likewise, the NAND flash itself requires different levels of power for different operations. These also depend on whether it is single-level cell (SLC) or multi-level cell (MLC) flash.

The Hyperstone white paper describes the power implications of operational modes for the host interface over PCIe, and also discusses the power characteristics of DRAM, which is often used for cache purposes. Error correction can play a hard-to-predict role in power usage. Encoding error correction information is not compute-intensive. However, when there are errors, and especially multiple errors, extensive computation can be involved. This can drive up power consumption.

As a result, getting objective power measurements under different workloads and operating speeds can be tricky. Benchmarking power consumption is required to make a realistic determination of low-power performance. All of the above factors and more come into play. Many decisions in the total system design influence the result. DC-DC power converters need to be properly selected to handle transitions from low-power modes to active modes. Interfaces should fully support low-power modes. Controller chips and their ancillary chips need to include advanced and complex power-saving features, such as clock gating and power gating. Hyperstone includes information about their use of CrystalDiskMark for benchmarking SSDs with Hyperstone flash memory controllers. They even include several graphs of power performance over long intervals that show their relative performance versus a competitor.

Because power consumption is such an important consideration for SSDs, it is good to get a deeper insight into its sources and solutions. Hyperstone does a good job of covering all of this in their white paper. You can download the white paper and other relevant Flash Storage white papers on the website of Hyperstone.


Security Requirements for IoT Devices
by Daniel Nenni on 04-05-2022 at 6:00 am

IoT Product Lifecycles SemiWiki

Designing for secure computation and communication has become a crucial requirement across all electronic products.  It is necessary to identify potential attack surfaces and integrate design features to thwart attempts to obtain critical data and/or to access key intellectual property.  Critical data spans a wide variety of assets, including financial, governmental, and personal privacy information.

The security of intellectual property within a design is needed to prevent product reverse engineering.  For example, an attacker may seek to capture firmware running on a microcontroller in an IoT application to provide competitive manufacturers with detailed product information.

Indeed, the need for a security micro-architecture integrated in the design is not only applicable to high-end transaction processing systems, but also down to IoT devices, as well.

Vincent van der Leest, Director of Product Marketing at Intrinsic ID, recently directed me to a whitepaper entitled, “Preventing a $500 Attack Destroying Your IoT Devices”.  Sponsored by the Trusted IoT Ecosystem Security Group of the Global Semiconductor Alliance (GSA), with contributions from Intrinsic ID and other collaborators, the theme of this primer is that a security architecture is definitely a requirement for IoT devices.  The rest of this article covers some of the key observations in the paper – a link is provided at the bottom.

Types of Attacks

There are three main types of attacks that may yield information to attackers:

  • side-channel attacks: non-invasive listening attacks that monitor physical signals coming from the IoT device
  • fault injection attacks: attempts to disrupt the internal operation of the IoT device, such as attempting to modify program execution of an internal microcontroller to gain knowledge about internal confidential data or interrupt program flow
  • invasive attacks: physical reverse engineering of the IoT circuitry, usually too expensive a process for attackers

For the first two attack types above, current IoT systems offer an increasing number of (wired and wireless) interfaces providing potential access.  Designers must be focused on security measures across all these interfaces, from memory storage access (RAM and external non-volatile flash memory) to data interface ports to test access methods (JTAG).

The paper goes into more detail on the security architecture, Cryptographic Keys and a Digital Signature, plus Key Generation.

Examples of Security Measures

The first step to prevent IoT attacks is taking certain design considerations into account. Here are some examples of design and program execution techniques to consider:

  • firmware storage

An obvious malicious attack method would be to inject invalid firmware code into the IoT platform.  A common measure is to execute a small boot loader program upon start-up from internal ROM (not externally visible), then continue to O/S boot and application firmware from flash.  As the intent of using flash memory is to enable in-field product updates and upgrades, this externally-visible attack surface requires a security measure.  For example, O/S and application code could incorporate a digital signature evaluated for authentication by the security domain.

  • runtime memory security

Security measures may be taken to perform integrity checks on application memory storage during runtime operation.  These checks could be evaluated periodically, or triggered by a specific application event.  Note that these runtime checks really only apply to relatively static data – examples include:  application code, interrupt routines, interrupt address tables.

  • application task programming

An early design decision involves defining which tasks within the overall application necessitate security measures, and different operating system privileges.  As a result, the application may need to be divided into small subtasks.  This will also simplify the design of the security subsystem, if the task complexity is limited.

  • redundant hardware/software resources

For additional security, it may be prudent to execute critical application code twice, to detect an attack method that attempts to inject a glitch fault into the IoT system.

  • attestation

An additional measure to integrate into the design is the means by which the security subsystem attests (through an external interface) to confirm the IoT product is initialized as expected.  If runtime security checks are employed, the design also needs to communicate that operational security is being maintained.

IoT Product Development and Lifecycle

The figure above provides a high-level view of the IoT product lifecycle.  During IC fabrication, initial configuration and boot firmware are typically written to the device.  Cryptographic keys would be included as part of the security subsystem.  During IoT system integration, additional security data may also be added to the SoC.

Summary

The pervasive and diverse nature of IoT applications, from industrial IoT deployments to consumer (and healthcare) products, necessitates a focus on security of critical economic and personal privacy assets.  The whitepaper quotes a survey indicating that security concerns continue to be a major inhibitor to consumer IoT device adoption.  A recommendation in the whitepaper is for the IoT product developer to pursue an independent, third-party test and certification of the security features.

Here is the whitepaper: Preventing a $500 Attack Destroying Your IoT Devices

Also read:

WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?

Enlisting Entropy to Generate Secure SoC Root Keys

Using PUFs for Random Number Generation