

Podcast EP59: A brief history of semiconductors and EDA with Rich Goldman
by Daniel Nenni on 01-28-2022 at 10:00 am

Dan is joined by good friend and fellow boater Rich Goldman. Rich has a storied career in EDA that began at TI with Morris Chang and Wally Rhines, continued through a long career at Synopsys, and included a book collaboration with Neil Armstrong, Stephen Hawking and Brian May (the lead guitarist for Queen).

Dan and Rich cover a lot of ground across both semiconductors and EDA: the innovations, the trends, and what they mean.

Book reference: Starmus

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?
by Daniel Nenni on 01-28-2022 at 6:00 am


In the first half of 2021, the number of attacks on IoT devices more than doubled to 1.5 billion attacks in just six months. These attacks target some typical weaknesses, such as the use of weak passwords, lack of regular patches and updates, insecure interfaces, and insufficient data protection. However, researchers from Bishop Fox recently identified a new critical vulnerability of IoT devices that might not be so obvious to many of us. Their study showed that hardware random number generators (RNGs) used in billions of IoT devices fail to provide sufficient entropy. Insufficient entropy produces predictable rather than truly random numbers, which severely compromises the foundation of many cryptographic algorithms. Secret keys lose their strength, leading to broken encryption.


So a new approach to generating random numbers is needed. This webinar shows how large amounts of entropy can be extracted from unpredictable SRAM behavior. The method requires only a software installation, meaning the security systems of billions of devices can be patched without hardware changes, even in devices that have already been deployed.
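To make the idea concrete, here is a minimal Python sketch of the general pattern, not Intrinsic ID's Zign RNG and not a certified SP 800-90 implementation: noisy SRAM startup bits are conditioned with a hash into a seed, which then drives a deterministic random bit generator in the spirit of NIST's HMAC-DRBG. The SRAM read is simulated, since raw uninitialized memory is not reachable from host Python.

```python
# Minimal sketch of SRAM-startup entropy conditioning feeding a DRBG.
# Illustrative only: NOT Intrinsic ID's Zign RNG and NOT a certified
# SP 800-90 implementation. The "SRAM dump" is simulated here.
import hashlib
import hmac
import os

def read_sram_startup(n_bytes=1024):
    """Stand-in for reading uninitialized SRAM right after power-up.
    On a real device this would be a raw memory read; here we simulate
    noisy startup values with os.urandom for demonstration."""
    return os.urandom(n_bytes)

def condition(raw):
    """Compress noisy, partially biased bits into a fixed-size seed."""
    return hashlib.sha256(raw).digest()

class ToyHmacDrbg:
    """Toy HMAC-DRBG-like generator (structure inspired by SP 800-90A)."""
    def __init__(self, seed):
        self.key = b"\x00" * 32
        self.val = b"\x01" * 32
        self._update(seed)

    def _update(self, data=b""):
        self.key = hmac.new(self.key, self.val + b"\x00" + data, hashlib.sha256).digest()
        self.val = hmac.new(self.key, self.val, hashlib.sha256).digest()
        if data:
            self.key = hmac.new(self.key, self.val + b"\x01" + data, hashlib.sha256).digest()
            self.val = hmac.new(self.key, self.val, hashlib.sha256).digest()

    def generate(self, n_bytes):
        out = b""
        while len(out) < n_bytes:
            self.val = hmac.new(self.key, self.val, hashlib.sha256).digest()
            out += self.val
        self._update()
        return out[:n_bytes]

drbg = ToyHmacDrbg(condition(read_sram_startup()))
print(drbg.generate(32).hex())   # 256 random-looking bits
```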

What will you learn?

This webinar shows how you can utilize Zign™ RNG, the Intrinsic ID embedded software IP for random number generation, to add a NIST-certified random number generator to any device, simply using software. The webinar will explain to you:

  • Why having a trusted RNG is important for any IoT device
  • How entropy can be harvested from unpredictable SRAM behavior
  • How this entropy is used to create a strong RNG
  • Which steps have been taken to turn Zign RNG into a NIST-certified RNG

Who should attend?

This webinar is a must-see for anyone responsible for product design and/or embedded security of IoT chips and devices. Whether you are a semiconductor professional or an IoT device maker, this webinar will show you how you can add a trusted source of strong entropy to your product without making any changes to your hardware.

About Zign RNG

Zign RNG is an embedded software IP solution from Intrinsic ID that leverages the unpredictable behavior of existing SRAM to generate entropy in IoT devices. This approach enables anyone to add a random number generator to their products without the need for hardware modifications. The Zign RNG product is compliant with the NIST SP 800-90 standard, and it is the only hardware entropy source that does not need to be loaded at silicon fabrication. It can be installed later in the supply chain, and even retrofitted on already-deployed devices. This enables a never-before-possible “brownfield” deployment of a cryptographically secure, NIST-certified RNG.

About the Speaker: Nicolas Moro

Nicolas Moro holds a PhD in Computer Science from Université Pierre et Marie Curie in Paris. After receiving his PhD, he worked in various R&D roles at NXP and Imec. Two years ago Nicolas joined Intrinsic ID, where he works as a Senior Embedded Software Engineer. He has extensive experience in embedded systems security and is the author of several research papers about fault injection attacks and software countermeasures.

REPLAY is HERE

About Intrinsic ID

Intrinsic ID is the world’s leading provider of security IP for embedded systems based on PUF technology. The technology provides an additional level of hardware security utilizing the inherent uniqueness in each and every silicon chip. The IP can be delivered in hardware or software and can be applied easily to almost any chip – from tiny microcontrollers to high-performance FPGAs – and at any stage of a product’s lifecycle. It is used as a hardware root of trust to protect sensitive military and government data and systems, validate payment systems, secure connectivity, and authenticate sensors. Intrinsic ID security has been deployed and proven in hundreds of millions of devices certified by EMVCo, Visa, CC EAL6+, PSA, ioXt, and governments across the globe.

Also Read:

Enlisting Entropy to Generate Secure SoC Root Keys
Using PUFs for Random Number Generation
Quantum Computing and Threats to Secure Communication


SIP Modules Solve Numerous Scaling Problems – But Introduce New Issues
by Tom Simon on 01-27-2022 at 10:00 am


Multi-chip modules are now more important than ever, even though the basic concept has been around for decades. With the effects of Moore’s Law and other factors such as yield, power, and process choices, the reasons for dividing what once would have been a single SOC into multiple die and integrating them in a single module have become extraordinarily compelling. These system in package (SIP) modules are becoming ever more popular. Yet for all their advantages, they add a level of design and verification complexity that must be addressed.


There are many good reasons to use SIP modules. SIP modules let designers break up a large die into several smaller dies, which lessens the impact of a fabrication defect. Instead of throwing away an entire large die, only the smaller die affected by a defect needs to be replaced. Also, some parts of a large system can easily be fabricated on a lower cost and less technically complex die based on a trailing process node. Similarly, memories, RF and other specialized functional units can reside on their own die using any needed process technology, such as NAND memory, GaAs, etc. High speed SerDes for off-chip links can also use legacy analog nodes to save costs and reduce design risk. SIP modules also reduce PCB component counts and simplify board design.

On the flip side, these benefits come with increased complexity. They introduce a new level of interconnect that needs to be verified for correct connectivity. Substrate connections from each die need to be logically correct, and the geometry of the connections also requires verification. The pad centers need to be checked for proper alignment. Device scaling and orientation are factors that can determine whether the final fabricated parts are functional. Different die and elements used to construct SIP modules have unique thermal properties, which can affect the integrity of the bump-to-pad connections. All of this calls for a solution to ensure that the design is correct.

To help design teams deal with the added complexity of SIP module verification, Siemens EDA has developed a tool called Xpedition Substrate Integrator (xSI) that provides an integrated solution for defining and consolidating all pertinent module design data, allowing for the definition of the golden design intent. There is native integration with Calibre 3DSTACK to provide robust automated DRC/LVS checking for SIP modules. Justin Locke from Siemens has authored a white paper that describes the need for a DRC/LVS specifically targeted at SIP modules. The white paper is titled “System-in-Package/Module Assembly Verification”.

There are several unique challenges faced by SIP module verification tools. Because it sits at the nexus between board and die, the task of verifying SIP modules has to interface with multiple tool flows and also multiple design teams. This data and organization complexity has to be a primary focus for any tool. Additionally, during earlier stages of the flow, the full GDS may not be available to help locate and identify the pads on the die. Siemens xSI offers the ability to create dummy die information that can be used in the interim until the full GDS information is available. Once GDS for the die is available, it can be used to ensure proper pad centering and connection overlap.
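As a toy illustration of the kind of geometric check described above, the sketch below compares die pad centers against substrate bump centers for alignment and overlap. The data, net names and tolerances are invented for illustration; production flows rely on xSI and Calibre 3DSTACK rather than scripts like this.

```python
# Toy illustration of one class of SIP assembly check: verifying that each
# die pad lands on its matching substrate bump within tolerance.
# All names and numbers are hypothetical example data.
import math

# (x, y, radius) in micrometers, keyed by net name
die_pads = {
    "VDD": (100.0, 200.0, 20.0),
    "GND": (300.0, 200.0, 20.0),
    "D0":  (500.0, 200.0, 20.0),
}
substrate_bumps = {
    "VDD": (100.5, 199.8, 25.0),
    "GND": (300.0, 230.0, 25.0),   # deliberately misaligned
    "D0":  (500.2, 200.1, 25.0),
}

MAX_CENTER_OFFSET_UM = 5.0

for net, (px, py, pr) in die_pads.items():
    if net not in substrate_bumps:
        print(f"OPEN: die pad {net} has no matching substrate bump")
        continue
    bx, by, br = substrate_bumps[net]
    offset = math.hypot(px - bx, py - by)
    # Require the pad center within tolerance and the pad fully inside the bump.
    aligned = offset <= MAX_CENTER_OFFSET_UM
    overlapped = offset + pr <= br
    status = "OK" if (aligned and overlapped) else "VIOLATION"
    print(f"{status}: {net} center offset {offset:.2f} um")
```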

System in package is here now. Design teams need to work with these modules to deliver market-winning products. Today’s SIP modules are a far cry from the old multi-chip modules. It comes as a relief that there are tool solutions tailored to help deliver high-quality finished products. The full Siemens white paper is available for reading on the Siemens website.

Also read:

MBIST Power Creates Lurking Danger for SOCs

From Now to 2025 – Changes in Store for Hardware-Assisted Verification

DAC 2021 – Taming Process Variability in Semiconductor IP



Samsung Keynote at IEDM
by Scotten Jones on 01-27-2022 at 6:00 am


Kinam Kim is a longtime Samsung technologist who has published many excellent articles over the years. He is now the Chairman of Samsung Electronics, and he gave a very interesting keynote address at IEDM.

He began with some general observations:

  • The world is experiencing a transformation powered by semiconductors that has been accelerated by COVID, since lockdowns required a contactless society. IT has become essential due to remote work and remote education, and sensors, processors, and memory are all required.
  • Digital adoption has taken a quantum leap: remote work has increased from 25% to 58%. The digitization of the economy presents tremendous opportunity, and smart systems are generating tremendous amounts of data.
  • Over the past 50 years, transistors per wafer are up by 10 million times, processor speeds by 100 thousand times, and costs are down 47% per year (see the quick arithmetic check after this list).
  • Semiconductors have similarities to the human brain: sensors are like eyes, while processors and memory do the processing and storing. Smartphones combining sensors with processing enable new applications, and sensors are taking on a bigger role with autonomous driving, AI, etc.
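As a quick arithmetic check of how the quoted 47% annual cost reduction compounds over 50 years (the keynote gave only the rate, so this is just the compounding, not additional data):

```python
# Compounding check for the "costs down 47% per year" figure over 50 years.
years = 50
annual_reduction = 0.47
remaining_cost_fraction = (1 - annual_reduction) ** years
print(f"Cost after {years} years: {remaining_cost_fraction:.3e} of the original")
# ~1.6e-14, i.e. roughly a 10^14x cumulative cost reduction.
```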

There was an interesting section on sensors but that isn’t really my area and I want to focus on the logic, DRAM and NAND roadmaps he presented.

Figure 1 presents the logic roadmap.

Figure 1. Logic Roadmap.

In Figure 1 we can see how the contacted poly pitch (CPP) of logic processes has scaled over time. In the planar era we saw high-k metal gate (HKMG) introduced by Intel at 45nm and by the foundries at 28nm, as well as innovations like embedded silicon germanium (eSiGe) to improve channel performance through strain. FinFETs were introduced by Intel at 22nm, adopted by the foundries at 14/16nm, and have carried the industry forward for several nodes. Samsung is currently trying to lead the industry into the Gate All Around (GAA) era with horizontal nanosheets (HNS), which they call multi bridge, and HNS should carry the industry for at least two nodes. Beyond 2nm, Samsung anticipates one of a 3D stacked FET (called a CFET or 3D FET by others), a VFET as recently disclosed by IBM and Samsung, 2D materials, or a negative capacitance FET (NCFET).

Figure 2 presents the roadmap for DRAM.

Figure 2. DRAM Roadmap.

With EUV already ramping up in DRAM, the next challenge is shrinking the memory cell. Samsung anticipates stacking two layers of capacitors soon. A switch to a vertical access transistor is anticipated in the latter part of the decade, followed by 3D DRAM. I haven’t been able to find much specific information on how 3D DRAM will be built, but similar structures are illustrated in presentations from ASM, Applied Materials and Tokyo Electron as well as this presentation, making it appear that the industry is converging on a solution.

Figure 3 presents the roadmap for NAND.

Figure 3. NAND Roadmap.

Samsung’s latest 3D NAND is a 176-layer process that uses string stacking for the first time (a first for Samsung; others have been string stacking for multiple generations) and peripheral circuitry under the array for the first time (again a first for Samsung; others have been doing it for several generations). Next up is shrinking the spacing between the channel holes to improve density while also increasing the number of layers. Around 2025 Samsung shows wafer bonding to separate the peripheral circuitry and the memory array. At first I was surprised by this: YMTC is already doing it, and if Samsung thinks it offers an advantage, I am surprised they would wait so long to implement it. Secondly, I have cost modeled wafer bonding and I believe it is higher cost than the current monolithic approach. After thinking about it some more, I wonder if it is viewed as solving a stress problem that allows continued layer stacking and will be implemented when needed to continue stacking. Finally, in the latter part of the decade, Samsung anticipates material changes and further channel hole shrinks. The figure shown here doesn’t show it, but in their presentation Samsung showed over a thousand layers for their 14th-generation process.
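To show the structure of that cost argument (and only the structure; every number below is a made-up placeholder, not the author's cost model or real industry data), a bonded flow pays for two processed wafers plus a bonding step, while the monolithic CMOS-under-array flow pays for one:

```python
# Parametric sketch of the monolithic vs. wafer-bonded cost comparison
# alluded to above. All numbers are hypothetical placeholders; this is not
# the author's cost model, only the shape of the argument.
def monolithic_cost(array_wafer_cost, cua_adder):
    # CMOS-under-array: periphery built on the same wafer as the array.
    return array_wafer_cost + cua_adder

def bonded_cost(array_wafer_cost, periphery_wafer_cost, bond_cost, bond_yield):
    # Two processed wafers plus a bonding step, derated by bonding yield.
    return (array_wafer_cost + periphery_wafer_cost + bond_cost) / bond_yield

mono = monolithic_cost(array_wafer_cost=3000, cua_adder=400)
bond = bonded_cost(array_wafer_cost=3000, periphery_wafer_cost=900,
                   bond_cost=150, bond_yield=0.98)
print(f"monolithic: ${mono:.0f}/wafer, bonded: ${bond:.0f}/wafer")
# With these placeholders the bonded flow is roughly $700/wafer more expensive,
# which is why it may only be adopted once it solves a stacking/stress problem.
```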

In conclusion, the keynote presents a view of continued scaling and improvement for logic, DRAM and NAND through the end of the decade.

Also read:

IBM at IEDM

Intel Discusses Scaling Innovations at IEDM

IEDM 2021 – Back to in Person



Upcoming Webinar: 3DIC Design from Concept to Silicon
by Kalar Rajendiran on 01-26-2022 at 10:00 am


Multi-die design is not a new concept. It has been around for a long time and has evolved from 2D-level integration to 2.5D and then to full 3D-level implementations. Multiple driving forces have led to this progression. Whether the forces are driven by market needs, product needs, manufacturing technology availability or EDA tools development, the progression has been picking up speed. With the slowing down of Moore’s law, the industry has entered a new era. While there is not yet an industry-wide term, Synopsys uses SysMoore as a shorthand notation to refer to this era.

Synopsys gave a presentation at DAC 2021 on addressing the market demands of the SysMoore era. The presentation gave excellent insights into their strategy for delivering solutions. It identified six vectors as efficiency roadmap drivers to power the SysMoore era, described what solutions the various market segments are demanding, and highlighted the new complexities and opportunities as well as the new technologies Synopsys is bringing out for this era. A recent post provides a synopsis of that entire talk.

One of the efficiency drivers identified relates to memory and I/O latency, multi-lane HBMs and PHYs on multi-die designs. In the SysMoore era, high performance computing (HPC) is fast becoming a major driver of multi-die/3DIC designs for multiple reasons. There is no let-up in the increasing need for functionality integration and performance enhancement. At the same time, integrating everything on a single large die may not be the most viable option at sub-7nm nodes. This opens up the opportunity to implement a multi-die design and still optimize for PPA, latency, cost and time-to-market schedule. At the same time, there are many challenges to overcome when doing multi-die designs. Refer to the figure below for drawbacks of existing multi-die solutions.

These challenges cause slow convergence and sub-optimal PPA/mm3.

Solution

Effective cross-discipline collaboration is needed to converge on an optimal solution. What is needed is a platform that enables a consistent and efficient exchange of information: a solution that offers GUI-driven 3D visualization, planning and design; one that implements DRC-aware routing and shielding and supports HBM; a platform that leverages a single data model allowing fast exploration and pathfinding to accelerate the design process; and a solution that enables integrated golden signoff, including multi-die analysis of signal integrity, power integrity, thermal integrity, timing integrity and EMIR.

Synopsys 3DIC Compiler

While the following slide provides a high-level summary of features and benefits, you can learn more at an upcoming webinar.

About the Webinar

Synopsys will be hosting a webinar on Feb 10, 2022 about their 3DIC Compiler (3DICC) solution. The event will cover designing HBM3 into high performance computing designs using a multi-die approach, including what-if analysis, floorplanning, implementation, HBM3 channel die-to-die (D2D) routing and analysis, and simulation/signoff.

What Will You Hear, See and Learn?

  • HBM3 overview and HBM3 design example
  • Relevance of the 3DICC features/benefits to HPC designs
  • 2.5D/3D architecture evaluation
  • Ansys Redhawk-SC ElectroThermal multi-physics simulation integration with 3DICC platform
  • Two live demos, showcasing the ease of use and advanced auto die-to-die (D2D) routing capabilities
  • Live Q&A session for attendees

Who Should Attend?

  • System Architects
  • Engineering Managers
  • Chip Development Engineers

Registration Link: You can register for the Webinar here.

Also read:

Identity and Data Encryption for PCIe and CXL Security

Heterogeneous Integration – A Cost Analysis

Delivering Systemic Innovation to Power the Era of SysMoore



The Hitchhiker’s Guide to HFSS Meshing
by Matt Commens on 01-26-2022 at 6:00 am


Automatic adaptive meshing in Ansys HFSS is a critical component of its finite element method (FEM) simulation process. Guided by Maxwell’s Equations, it efficiently refines a mesh to deliver a reliable solution, guaranteed. Engineers around the world count on this technology when designing cutting-edge electronic products. But the adaptive meshing process relies on an initial mesh that accurately represents the model’s geometry. Today, HFSS establishes the initial mesh using a suite of meshing technologies, each optimally applied to a specific type of geometry. From there, HFSS continues the adaptive refinement process until the solution converges.
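The control flow of such an adaptive process can be sketched as follows. This is only a schematic of the loop described above (solve, estimate error, refine the worst elements, repeat until successive solutions agree), with stand-in callbacks rather than anything from HFSS itself.

```python
# Schematic adaptive-refinement loop: solve on the current mesh, estimate
# per-element error, refine where the error is largest, and stop when
# successive solutions agree (a delta-S style criterion). The solver,
# error estimator and refiner are placeholders supplied by the caller.
def adaptive_solve(mesh, solve, estimate_error, refine,
                   tol=0.02, max_passes=10, refine_fraction=0.3):
    prev = None
    solution = None
    for adaptive_pass in range(1, max_passes + 1):
        solution = solve(mesh)                     # FEM solve on the current mesh
        if prev is not None and abs(solution - prev) < tol:
            break                                  # successive solutions agree: converged
        errors = estimate_error(mesh, solution)    # per-element error estimate
        worst = sorted(range(len(mesh)), key=lambda i: -errors[i])
        n_refine = max(1, int(refine_fraction * len(mesh)))
        mesh = refine(mesh, worst[:n_refine])      # refine only where error is largest
        prev = solution
    return solution, mesh
```

The key design point the loop captures is that refinement is driven by the physics (the error estimate), not by a fixed global density, which is why the quality of the initial mesh matters so much.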

Over the last two decades, computers have become larger, more powerful, and increasingly cloud-based in their high-performance computing (HPC) architecture. The FEM algorithms of HFSS have vastly improved alongside innovations in the HPC computing space. Today, they allow the rigorous and reliable simulation technology of HFSS to be applied to ever more complex electromagnetic systems. However, with larger, more complex systems, the task of initial mesh generation becomes more and more challenging.

This white paper introduces the history of HFSS meshing innovations and explores recent technological breakthroughs that have greatly improved performance and reliability in initial mesh creation.

The History of HFSS Meshing

The “Mesh” is the foundation of physics simulation; it’s how a complex modeling problem is discretized into “solvable blocks.” Understandably, for today’s highly complex systems, considerable time may be devoted to generating the initial mesh because it’s such a critical step. Accurately capturing the physical geometry under test, as represented by the initial mesh, has a defining influence on the resulting simulation and the speed of results. That wasn’t always the case: 25 years ago, simulation time was dominated by the actual solving of the electromagnetic fields, and meshing amounted to a tiny fraction of the overall time spent generating a simulated model.

The very first HFSS simulation in 1989 took 16 hours to produce one frequency point on a then-state-of-the-art computer. The vast majority of that 16 hours was spent solving for the electromagnetic fields. Today, we can solve the same model and extract four thousand frequency points in about 30 seconds on an ordinary laptop computer. Advances in speed naturally led engineers to attempt increasingly complex designs through 3D simulation. Over the past 20 years, new meshing technologies supported the pace of innovation, but even with advanced techniques, meshing took up a larger relative portion of the process for complex designs. Simulation technologists saw that meshing was a larger pole in the “simulation tent,” so they introduced new algorithms and parallel processing to encourage further innovation in the simulation space.

Today, Ansys HFSS uses a variety of different meshing algorithms, each optimized for different geometries:

The Original “Classic”

At its core, mesh generation is a space-discretization process where geometry is divided into elemental shapes. While there are several shapes to work with, in HFSS a mesh represents geometry as a set of 3D tetrahedra (see Figure 1). It can be demonstrated that any 3D shape can be decomposed into a set of tetrahedra. Since HFSS leverages automatic mesh generation, the algorithm makes use of tetrahedra to refine and mathematically guarantee an accurate mesh.

Figure 1: Geometrically conformal tetrahedra leveraged in HFSS automatic mesh generation
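As a small aside, the sketch below shows that an arbitrary 3D point set can indeed be decomposed into tetrahedra, here using SciPy's Delaunay triangulation (which is built on Qhull rather than the Bowyer-style incremental approach used by the Classic mesher described next). It is purely to illustrate what a tetrahedral decomposition is, not how HFSS meshes real CAD.

```python
# Quick illustration (not the HFSS mesher): decomposing a 3-D point cloud
# into tetrahedra with a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((50, 3))       # 50 points inside a unit cube
tets = Delaunay(points)            # 3-D Delaunay => tetrahedral cells

print(f"{len(points)} points -> {len(tets.simplices)} tetrahedra")
print("first tetrahedron (point indices):", tets.simplices[0])
# Each row of tets.simplices is 4 vertex indices defining one tetrahedron.
```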

Classic is one of Ansys’ earliest meshing technologies. It uses a Bowyer algorithm to create a compact mesh from any set of geometries. It’s an extremely rigorous approach. First, Classic meshes the surfaces of all objects to create a water-tight representation of the geometry, and then it fills in the volumes of all objects with 3D tetrahedra. For accuracy, mesh elements must be continuous across a surface. In other words, two objects in contact must have a conformal triangular mesh at their adjoining faces. As geometric complexity grew and models started including hundreds or thousands of parts, it became difficult to align the triangular mesh to achieve a conformal mesh everywhere. At some point, this meshing approach, which is not readily parallelized, reaches its limit. It can’t handle high levels of design complexity in a reasonable amount of time.

TAU

In 2009, Ansys released the TAU meshing algorithm. TAU approaches the task of meshing from an inverse perspective. From a bird’s-eye view, a model represents some volume of objects potentially contacting others at different points. TAU breaks up the volume into gradually smaller tetrahedra to fit each object in the model. Then, it adaptively refines and tightens local mesh size and shape to align the volumetric mesh with the faces of the input model. Eventually, TAU gets the tetrahedra close enough to each surface for a water-tight mesh that accurately represents all the geometry. For 3D CAD, such as a model of a backplane connector or an aircraft body, TAU is a very robust and reliable algorithm; however, TAU struggles with designs that include high aspect ratio geometries, like PCBs and wirebond packaging, where Classic may perform better.

Both Classic and TAU meshers are designed to handle all arbitrary geometries accurately. Depending on the model, in “Auto-mesh” mode, HFSS determines the correct mesher choice to apply.

Phi Mesher

2013 brought the next generation of meshing at Ansys—Phi. Phi is a layout-based meshing technology that’s 10, 15, or even 20 times faster than previous meshing technology, depending on the model’s geometry. A faster initial mesh often ensures faster simulations; they can be accelerated and enhanced even further with HPC.

Phi is HFSS’ first “geometry-aware” meshing technology. It relies on the layered nature of design that’s commonly found in PCBs or IC packages. The technique is based on the knowledge that all geometry in these kinds of models have a 2D layer description, with the third dimension achieved by sweeping the 2D layer description (in the XY plane) uniformly in Z. Phi was designed to accelerate initial mesh generation by conquering a 3D problem with a 2D approach. It was initially implemented in the HFSS 3D Layout design flow and eventually extended to the 3D workflow a few releases later.

Phi performance is exceptional, achieving speeds an order of magnitude greater than other meshing technologies. In complex IC designs (see Figure 2), it’s a game changer. With earlier techniques, on-chip passive components, for example, took a considerable amount of time to mesh. With Phi meshing, an hours-long initial mesh process can be reduced to minutes or even seconds. However, Phi’s uniform-in-Z constraint limits the types of designs it can handle. For example, Phi can’t be leveraged if trace etching or bondwires are included in an IC package design.

 Figure 2: A typical complex PCB design, Phi mesh

That said, with the right geometry, Phi is extremely fast. Once the initial mesh is completed, the adaptive meshing algorithm works the same way as it would with any other HFSS meshing technology to produce the final convergence. In addition, Phi can create a smaller initial meshing count, which contributes to better downstream performance in the adaptive meshing and frequency sweeps. It’s faster from start to finish, not just in the initial mesh generation phase.

The Three Mesh Paradigm

With three meshing technologies in place, an Ansys auto-mesh algorithm scanned model geometries to determine which of the mesh technologies to use. In addition, fallbacks from one meshing technology to another ensured a reliable meshing flow. For example, if the algorithm identified a significant amount of high-aspect-ratio CAD, it would launch the Classic mesh algorithm. Phi was fully automated in the sense that it was always applied to geometry swept uniformly in Z.
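A sketch of what such a selection heuristic might look like is shown below. The attribute names and thresholds are invented for illustration; they are not Ansys' actual criteria.

```python
# Sketch of the kind of auto-mesh selection logic described above.
# Attribute names and thresholds are invented, not Ansys' actual rules.
from dataclasses import dataclass

@dataclass
class ModelStats:
    uniform_in_z: bool             # layered layout swept uniformly in Z?
    high_aspect_ratio_frac: float  # fraction of CAD with extreme aspect ratios

def choose_mesher(stats: ModelStats) -> str:
    if stats.uniform_in_z:
        return "Phi"        # layout-style geometry: fastest path
    if stats.high_aspect_ratio_frac > 0.25:
        return "Classic"    # thin traces / wirebonds: rigorous surface-first mesh
    return "TAU"            # general 3-D CAD: robust volume-first mesh

print(choose_mesher(ModelStats(uniform_in_z=True, high_aspect_ratio_frac=0.0)))   # Phi
print(choose_mesher(ModelStats(uniform_in_z=False, high_aspect_ratio_frac=0.4)))  # Classic
print(choose_mesher(ModelStats(uniform_in_z=False, high_aspect_ratio_frac=0.05))) # TAU
```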

Early on, each customer’s design flow tended to align with the same meshing techniques for every project, so they consistently gravitated to one meshing approach. However, as the HFSS solver algorithms became faster and more scalable, and cluster and Cloud hardware became more readily available, the size of HFSS simulations grew and became more complex. Designs were no longer single components; they were systems made up of multiple types of CAD. It wasn’t enough to solve just the PCB or just the connector anymore. To get it right, especially as data rates increased, it became more and more important to simulate together—connector and PCB, antenna and airframe, and so on.

As engineers design for tighter margins in the competitive electronics landscape, simulations encompass the PCB, IC packaging, connectors, surface mount components, and beyond. The Three Mesh Paradigm carried heavy burdens for customers and Ansys alike; a one-mesh-fits-all approach was not optimally effective. Understanding the different options and mesh technologies and knowing when to apply them could be a real challenge.

Enter HFSS Mesh Fusion.

The Rise of HFSS Mesh Fusion

Introduced in early 2021, HFSS Mesh Fusion achieved a fundamental meshing breakthrough using locally defined parameters. In other words, HFSS Mesh Fusion applies meshing technology depending on the local needs of the CAD. For example, when analyzing a simulation where a PCB contains both wirebond packaging and 3D connector models, such as a backplane connector, the PCB portion calls for Phi, wirebond packages call for Classic, and connectors are best meshed using TAU.

This multi-mesh capability became possible with HFSS Mesh Fusion. The only requirement is to assemble the design as a set of 3D Components, which can be encrypted to hide intellectual property and enable easy collaboration with component vendors. The 3D Component hierarchy provides the localized CAD definition to appropriately apply mesh. In addition, the same auto-mesh technology can be used to set the mesh locally, requiring little to no user input. From there, the same adaptive meshing scheme is applied to provide HFSS gold-standard accuracy and reliability.

Ansys recently worked with a team integrating a 5G chipset into a tablet computer. Before Mesh Fusion, there were a lot of mesh challenges to resolve before arriving at a usable simulation. With Mesh Fusion, Phi was applied locally to the chipset and TAU was applied to the remainder of the design—a sleek housing with complex CAD to encase the rest of the electronics in the tablet. Local mesh application ensured a clean mesh on the chipset, which was critical to the accuracy of the overall simulation. All of these seemingly disparate meshing approaches came together in Mesh Fusion for a fast, accurate, and reliable simulation result.

The Future of Meshing at Ansys

The presence of HFSS Mesh Fusion offers a night and day difference for Ansys customers. Instead of getting bogged down by meshing issues to resolve, users are free to explore more intensive design challenges that drive the electronics industry forward.

Most recently, Ansys used a ground-up approach to develop a new meshing technology. This new mesher, called Phi Plus, was designed specifically for wirebond packaging (see Figure 3), which is particularly difficult to mesh with other technologies – even Mesh Fusion. Like Phi, it’s geometry aware and takes advantage of a priori knowledge of system design. In addition, it was developed with parallelization approaches in mind, allowing for excellent scaling with HPC resources. Its success is not limited to wirebond packaging. Phi Plus can handle any kind of combined layout and 3D CAD simulation, such as a connector on a PCB. Phi Plus meshing is the next game changer in a long line of innovative techniques from Ansys!

Figure 3: Phi Plus mesh applied to a wirebond package design

For updates, keep an eye on our social channels in this new year.

Also Read

The 5G Rollout Safety Controversy

Can you Simulate me now? Ansys and Keysight Prototype in 5G

Cut Out the Cutouts



MBIST Power Creates Lurking Danger for SOCs
by Tom Simon on 01-25-2022 at 10:00 am


The old phrase that the cure is worse than the disease is apropos when discussing MBIST for large SOCs, where running many MBIST tests in parallel can exceed power distribution network (PDN) capabilities. Memory Built-In Self-Test (MBIST) usually runs automatically during power-on events. Due to the desire to speed up test and chip boot times, these tests are frequently run in parallel. The problem is that they can easily produce switching activity that is an order of magnitude above the levels found during regular chip operation. Indeed, these higher switching activity levels not only can cause supply droop affecting the test results, but the high heat generated can also harm chips. These effects can lead to incorrect binning or even direct and latent failures.

The solution is to simulate MBIST activity to predict the load on the PDN and the related thermal effects. With simulation results in hand, designers can correctly decide how many and which memory blocks can be tested in parallel. However, this is not always feasible in large SOCs with many memory blocks because the simulation times may be prohibitive. With gate-level simulation, and even with less accurate RTL simulation, it may not be possible to run enough cycles to get the information needed.
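The decision that this analysis ultimately feeds can be illustrated with a toy scheduler that groups memory blocks into parallel MBIST sessions so each session's estimated power stays under a PDN budget. The block powers and the budget below are hypothetical placeholders, not data from the Siemens flow.

```python
# Toy illustration of the scheduling decision the power data feeds: group
# memory-block MBIST runs so each parallel session stays under a PDN budget.
# Block powers and the budget are hypothetical placeholders.
def schedule_mbist(block_power_mw, budget_mw):
    """Greedy first-fit-decreasing grouping of blocks into parallel sessions."""
    sessions = []  # each session: [total_power_mw, [block names]]
    for name, p in sorted(block_power_mw.items(), key=lambda kv: -kv[1]):
        if p > budget_mw:
            raise ValueError(f"{name} alone exceeds the PDN budget")
        for s in sessions:
            if s[0] + p <= budget_mw:
                s[0] += p
                s[1].append(name)
                break
        else:
            sessions.append([p, [name]])
    return sessions

blocks = {"l2_bank0": 120, "l2_bank1": 120, "rom": 30,
          "dsp_mem": 90, "gpu_sram": 150, "cpu_l1": 60}
for i, (power, names) in enumerate(schedule_mbist(blocks, budget_mw=250)):
    print(f"session {i}: {power} mW  {names}")
```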

In a white paper titled “Analyzing the power implications of MBIST usage”, Siemens EDA looks at how designers can run sufficient simulation to make informed decisions on the testing strategy before tapeout. Siemens worked with ARM on one of their test chips to create a test case where they could apply hardware emulation with the DFT and Power apps for the Siemens hardware emulator Veloce. First, the Veloce DFT app is used to output the internal activity during MBIST emulation. The app uses the Standard Test Interface Language (STIL) and produces industry standard output files.

The Veloce Power app takes the activity information from the MBIST runs to generate waveforms, power profiles and heat maps that can indicate when there are power spikes above specified limits. With this information test engineers can make informed decisions about the sequencing of MBIST.


The ARM test case described in the Siemens white paper contains 176 million gates. Siemens used a Veloce system with 6 Veloce Strato boards for this test case. The Veloce emulator run took only 26 hours, which is 15,600 times faster than gate level simulation. Another benefit of the Veloce flow is that the activity information is streamed by the Power app to the power tools in the flow, saving disk space and time. The results from the test case showed several power spikes that violated the SOC design specifications. The output from the Veloce Power app shows the total power levels through the simulation along with the separate power contributions for the clock, combinational logic and memory. Likewise, there is information on where on the die the power is being used. This information makes it easy to determine where there are problems.

Finding problems such as these requires running millions or billions of clock cycles. The limitations of software simulators make it prohibitive to perform the necessary analysis. Emulation offers a unique avenue to closely examine the power impacts of MBIST and other test operations long before silicon. The Siemens white paper offers insight into the power method used on a real test case. The white paper is available to download for reading on the Siemens website.

Also Read:

From Now to 2025 – Changes in Store for Hardware-Assisted Verification

DAC 2021 – Taming Process Variability in Semiconductor IP

DAC 2021 – Siemens EDA talks about using the Cloud



Business Considerations in Traceability
by Bernard Murphy on 01-25-2022 at 6:00 am


Traceability is an emerging debate around hardware and is gaining a lot of traction. As a reminder, traceability is the need to support a disciplined ability to trace from initial OEM requirements down through the value chain to implementation support and confirmed verification in software and hardware. Demand for traceability appears most commonly in safety-critical applications, from automotive and rail to mil-aero, and in process industries such as petroleum, petrochemical and pharmaceutical, as well as power plants and machinery safety-related controls. These are today’s applications. The more we push the IoT envelope, Industry 4.0, smart cities and homes, the more cannot-fail products we will inevitably create.

Why bother?

Traceability requirements in our world started in software to ensure that what an OEM wanted was actually built and tested. Now that application-specific hardware plays a bigger role in many modern designs, effective traceability must look inside some aspects of hardware as much as it does in software.

But why is traceability support important? Is this a temporary business fad we’ll soon forget? Or is it an unavoidable secular shift? And what kind of investment will it require? Tools and servers, probably, but added engineering effort is a bigger concern. Will you need to add more staff and more time to schedules? Let’s start with potential market downsides to ignoring traceability.

Locking out regulated markets

It’s easy to understand any case where regulation requires traceability. For medical devices covered by ISO 14971, one source asserts, “Auditing capabilities are also critical for regulatory compliance traceability. The FDA has been known to close down a product or prevent it from being shipped, or even shut down a whole division until you are in compliance.”

Jama Software – who know a thing or two about traceability – add, “even if everything goes according to plan, there’s no guarantee that the traceability workflows in use will account for all relevant requirements and risks. For instance, in the case of a medical device, a matrix created within Excel won’t come with frameworks aligned with industry standards like ISO 14971, making it more difficult to ensure coordinated traceability and ensure successful proof of compliance.”

You shouldn’t assume this only applies to medical devices. OEMs are now working more closely with SoC builders and expect more detailed evidence that devices meet all requirements. Coverage reports won’t satisfy this need. They’d rather have a traceability path, from their requirement to a point in the RTL and test plans where you can defend your implementation.
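In data terms, a traceability path is just a set of links that can be audited. The sketch below shows the bare minimum of that idea, with invented entries; real flows use dedicated tools such as the one discussed later rather than ad hoc scripts.

```python
# Minimal sketch of requirements traceability as data: each OEM requirement
# should trace to an RTL implementation point and to a verification item.
# All entries are invented for illustration.
requirements = {
    "REQ-001": {"rtl": "uart_core.apb_regs", "tests": ["uart_csr_rw_test"]},
    "REQ-002": {"rtl": "dma_engine",         "tests": []},               # no test!
    "REQ-003": {"rtl": None,                 "tests": ["pwr_seq_test"]}, # no RTL!
}

def audit(reqs):
    """Flag requirements with a broken trace to implementation or verification."""
    for rid, links in reqs.items():
        if not links["rtl"]:
            print(f"{rid}: NOT TRACED to an implementation point")
        if not links["tests"]:
            print(f"{rid}: NOT TRACED to any verification item")

audit(requirements)   # flags REQ-002 (missing test) and REQ-003 (missing RTL)
```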

Locking out a geography

Problems don’t only arise in highly regulated industries. We all know of cases when features requested in one geographical region are not important in others. In theory, detailed specifications and lists of requirements capture all needs and variations between clients and geographies. In practice, some escapes slip through. Remember the call from a client, after the spec is finalized, that they forgot one very important feature? The local apps guy takes careful notes and promises the spec will reflect this requirement. But it never appears in the official spec.

In one case I heard of, an Asia-Pac client had such a requirement – perhaps a control/status register extension they needed which no other geography had asked for. Seemed harmless enough. But the R&D team didn’t see that requirement in the specs. They built the chip without that feature, and the customer rejected the product. That client was going to be the reference account for the region but became the wrong kind of reference. They lost the whole geography for that product.

The point here is that specs are not quite structured enough to serve as a formal agreement on what you are going to build, which is why software product builders now depend heavily on formal requirements documentation and traceability. Specs are a good way to elaborate on and explain requirements, but requirements are becoming the definitive definition. Clients won’t find this to be a problem; they are often already very familiar with requirements traceability and related tools. You will just need to build the same awareness and discipline in your team, from the field to R&D.

Won’t that create a big overhead for you?

That depends. If you are going to use a general-purpose requirements tool with no understanding of hardware design, then probably yes, you will have to put quite a bit of work into traceability bookkeeping. It might be time to learn more about the Arteris® Harmony Trace platform, which links intimately to hardware design on one end and traceability standards on the other, with design-semantic know-how to greatly reduce the burden on engineering teams.

Also read:

Traceability and ISO 26262

Physically Aware SoC Assembly

More Tales from the NoC Trenches



How System Companies are Re-shaping the Requirements for EDA
by Kalar Rajendiran on 01-24-2022 at 10:00 am


As the oldest and largest EDA conference, the Design Automation Conference (DAC) brings the best minds together to present, discuss, showcase and debate the latest and greatest advances in EDA. It accomplishes this in the form of technical papers, talks, company booths, product pavilions and panel discussions.

A key aspect of driving advances in design automation is to discuss evolving EDA requirements, so the industry can develop solutions as the market demands. At DAC 2021, Cadence sponsored an interesting panel session that got to the heart of this. The session was titled “How System Companies are Re-shaping the Requirements for EDA,” with participants representing Arm, Intel, Google, AMD and Meta (Facebook). The discussion was organized and moderated by Frank Schirrmeister from Cadence Design Systems.

The following is a synthesis of the panel session on EDA requirements to support the upcoming era of electronics.

Frank Sets the Stage

Referencing a Wired magazine article, Frank highlights how data center workloads increased six-fold from 2010 to 2018. Internet traffic increased ten-fold and storage capacity rose 25x over that same time period. Yet data center power usage increased only 6% over the same period and we have semiconductor technology, design architectures and EDA to thank for it.

The electronics industry is entering an era of domain-specific architectures and languages, as predicted by John Hennessy and David Patterson back in 2018. The primary factors driving this move are hyperscale computing, high-performance edge processing and the proliferation of consumer devices. The next generation of hyperconnected, always-on consumer devices is expected to deliver user experiences never imaginable even a few years ago.

The Global DataSphere quantifies and analyzes the amount of data created, captured and replicated in any given year across the world. Endpoint data creation is estimated to grow at a CAGR of 85% from 2019 to 2025, reaching 175 zettabytes in 2025, which is as much as there are grains of sand on all the world’s beaches. That’s quite a bit of data to be processed and dealt with.
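As a quick sanity check on those quoted figures (and nothing more; the talk did not give a 2019 baseline), 85% compound annual growth over six years works out to roughly a 40x increase:

```python
# Arithmetic check on the quoted figures (2019-2025, 85% CAGR, 175 ZB in 2025).
# The implied 2019 baseline is not from the talk; it is simply what those two
# numbers together would mean if they refer to the same quantity.
cagr, years, total_2025_zb = 0.85, 2025 - 2019, 175
growth_factor = (1 + cagr) ** years
implied_2019_zb = total_2025_zb / growth_factor
print(f"growth factor over {years} years: {growth_factor:.1f}x")
print(f"implied 2019 baseline: {implied_2019_zb:.1f} ZB")
# ~40x growth, i.e. a baseline of roughly 4-5 ZB.
```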

The companies on the panel are all involved in creating, analyzing, capturing and/or replicating this humongous amount of data. The discussion covered what they see as their requirements on the EDA industry.

Arm – Chris Bergey

From an infrastructure perspective, Arm’s involvement spans HPC, the data center, 5G and edge gateways. Specialty computing is a big focus area now. System validation is key when customers are committing to large R&D expenses. When dealing with chiplet architectures leveraging 2D/2.5D/3D implementations, everything is relatively easier when all the dies and design rules are owned by a single company.

For heterogeneous implementations, multi-chip packaging is generally used in markets where the margins are high enough to accommodate the extra design effort, yield fallout and margin stacking. In reality, hybrid chiplet implementations will help the market grow faster. The EDA industry is expected to play a big role in making heterogeneous chiplet implementations easier and more robust.

Intel – Rebecca Lipon

High Bandwidth Memory (HBM) and high-speed servers drove development of critical IP that opened the floodgates for a whole bunch of new applications and products. The industry has to maintain its determination to continue on similar journeys and keep pushing the envelope, for example with IP-level innovation at the packaging level.

The Open Compute Project (OCP) is a foundation started by Meta a decade ago. Many companies, including all of the companies represented on the panel, are members. It works on initiatives that allow you to use open firmware and software, which speeds up development and extends the life of products.

One of the initiatives OCP is focused on is composable computing and supporting domain-specific architectures. The EDA industry should look into this and look to Linux as a model for an open-source community.

Google – Amir Salek

The number of categories of workloads that run in our global data centers is in the thousands. And Google Cloud adds a whole new dimension to the demand for serving and processing data and supporting different workloads. Each workload has its own characteristics, and while many of them can run on general-purpose hardware, many more need customized hardware.

Testing and reliability are primary areas of concern. I think this plays a major role in understanding the causes of marginality and deciding how to deal with them. Looking at TPU pods, we’re talking thousands and thousands of chips stitched together to work in coordination as a supercomputer. So any little reliability issue during testing, and any test escape, basically gets magnified. And then after many days you find out that the whole effort was basically useless and you have to repeat the job again.

FPGA prototyping is a tremendous platform for testing and validation. We are doubling down on emulation and prototyping every year to make sure that we close the gap between the hardware and software.

AMD – Alex Starr

From the data center all the way to the consumer, the whole software stack needs to run on whatever the solution is. Many of our designs are implemented using a chiplet architecture, and that brings up different types of complexity to deal with. The thing that keeps me up at night is how to verify and validate these complex systems and get to market quickly.

The market for hardware emulators and FPGA prototyping systems is booming and is probably the highest growth area within EDA. Today’s emulators can fit very large designs and help prototype bigger devices. The hardware acceleration platforms needed for such large designs are tremendously expensive and difficult to get working at that scale. And as designs grow to five-plus billion gates, emulators are not going to scale. Emulation as used for prototyping is at its limit. We are looking at hybrid, modeling-based approaches. We are refining these internally and in collaboration with external standards bodies. We really want to extend out into our OEM customers and their ecosystems as well.

Meta (Facebook) – Drew Wingard

We are working on chips to enable our vision for the Metaverse. Metaverse involves socially acceptable all-day wearables such as augmented reality glasses. This new computing platform puts an enormous amount of processing resources right on one’s face. The result is that it demands very tight form factors, low power usage and very minimal heat dissipation.

We need to put different parts of processing in software and hardware. We need to think a lot about the tradeoffs between latency and throughput, and between the cost of computation and the cost of communication. We need a mix of options around different classes of heterogeneous processing and a whole lot of support around modeling. And we have to balance the desire for optimizing requirements against offering optionality, because nobody knows what the killer app is going to be.

As a consumer firm, we consider privacy incredibly important as it relates to our product usage. Our products should be socially acceptable for the person wearing them as well as for the people across from them.

When we roll all the above together, availability of system models and design cycle times become incredibly important. Many challenges revolve around availability of models and interoperability between models. This is where continuing to closely work with the EDA industry opens up opportunities.

Also Read

2021 Retrospective. Innovation in Verification

Methodology for Aging-Aware Static Timing Analysis

Scalable Concolic Testing. Innovation in Verification



LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface
by Daniel Nenni on 01-24-2022 at 6:00 am


To meet the increased demand for converter speed and resolution, JEDEC proposed the JESD204 standard, describing a new, efficient serial interface to handle data converters. Introduced in 2006, the JESD204 standard offered support for multiple data converters over a single lane, with the subsequent revisions A, B, and C successively adding features such as support for multiple lanes, deterministic latency, and error detection and correction, while constantly increasing lane rates. The JESD204D revision is currently in the works and aims to once more increase the lane rate, to 112Gbps, with a change of lane encoding and a switch of the error correction scheme to Reed-Solomon. Most of today’s high-speed converters make use of the JESD standard, and the applications fall within but are not limited to wireless, telecom, aerospace, military, imaging, and medical, in essence anywhere a high-speed converter can be used.

The JESD204 standard is dedicated to the transmission of converter samples over serial interfaces. Its framing allows for mapping M converters of S samples each, with a resolution of N bits, onto L lanes with F-octet frames that, in succession, form larger multiframes or extended multiblock structures described by the K or E parameters. These frames allow for various placements of samples in high or low density (HD) and for each sample to be accompanied by CS control bits within a sample container of N' bits or at the end of a frame (CF). These symbols, describing the sample data and frame formatting, paired with the mapping rules dictated by the standard, allow both parties engaging in the transmission to share an understanding of how the transmitted data should be mapped and interpreted.
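A small worked example may help tie those symbols together for the simple case with no control bits (CS = 0, CF = 0), using the commonly cited relation F = M·S·N′ / (8·L). The converter configuration below is invented for illustration.

```python
# Worked example of the JESD204 framing parameters described above, for the
# simple case with no control bits (CS=0, CF=0). The configuration values
# are an invented example, not from any specific device.
M, S, N_prime, L = 4, 1, 16, 2   # 4 converters, 1 sample/frame, 16-bit containers, 2 lanes

bits_per_frame_per_lane = M * S * N_prime // L
F = bits_per_frame_per_lane // 8   # octets per frame on each lane
print(f"F = {F} octets per frame on each of the {L} lanes")
# Here: 4 converters x 1 sample x 16 bits = 64 bits per frame,
# split over 2 lanes = 32 bits = 4 octets per lane, so F = 4.
```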

The 8b10b encoding scheme of JESD204, JESD204A and JESD204B, paired with Decision Feedback Equalizers (DFEs), may not work efficiently above 12.5Gbps because it may not offer adequate spectral richness. For this reason, and for better relative power efficiency, 64b66b encoding was introduced in JESD204C, targeting applications up to 32Gbps. JESD204D, which follows in its footsteps with even higher line rates planned, up to 112Gbps utilizing PAM4 PHYs, demands a new encoding to efficiently encapsulate the Reed-Solomon Forward Error Correction (RS-FEC) 10-bit symbol-oriented mapping.
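To make the encoding overhead concrete, the sketch below works out the per-lane line rate for a simple configuration under 8b10b versus 64b66b. The configuration and sample rate are invented example values, not tied to any specific device.

```python
# Lane-rate arithmetic illustrating the encoding overhead discussed above.
# The converter configuration and sample rate are invented example values.
def lane_rate_gbps(M, N_prime, L, sample_rate_gsps, enc_num, enc_den):
    # payload bits/s per lane = M * N' * fs / L, scaled by the line-encoding overhead
    return M * N_prime * sample_rate_gsps / L * (enc_num / enc_den)

cfg = dict(M=2, N_prime=16, L=2, sample_rate_gsps=0.5)
print(f"8b10b  : {lane_rate_gbps(**cfg, enc_num=10, enc_den=8):.2f} Gbps per lane")
print(f"64b66b : {lane_rate_gbps(**cfg, enc_num=66, enc_den=64):.2f} Gbps per lane")
# 8b10b carries 25% overhead versus ~3% for 64b66b, part of why the
# higher-rate revisions moved away from 8b10b.
```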

Deterministic latency, introduced in JESD204B, allows the system to maintain constant latency through reset and power-up cycles, as well as re-initialization events. This is accomplished in most cases by providing a system reference signal (SYSREF) that establishes a common timing reference between the transmitter and receiver and allows the system to compensate for any latency variability or uncertainty.

The main traps and pitfalls of system design around the JESD204 standard involve system clocking in Subclass 1, where deterministic latency is achieved with the use of SYSREF, as well as SYSREF generation and utilization under different system conditions. Choosing the right frame format and SYSREF type to match system clock stability and link latency can also prove challenging.

View the replay here: https://attendee.gotowebinar.com/recording/5500320533040852748

About Comcores

Comcores is a key supplier of digital IP cores and design services for digital subsystems, with a focus on Ethernet solutions, wireless fronthaul and O-RAN, and chip-to-chip interfaces. Comcores’ mission is to provide best-in-class, state-of-the-art, quality components and design services to ASIC, FPGA, and system vendors, and thereby drastically reduce their product cost, risk, and time to market. Our long-term background in building communication protocols, ASIC development, wireless networks and digital radio systems has provided a solid foundation for understanding the complex requirements of modern communication tasks. This know-how is used to define and build state-of-the-art, high-quality products used in communication networks.

Also Read:

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

CEO Interview: John Mortensen of Comcores

Comcores Wiki