
SIP Modules Solve Numerous Scaling Problems – But Introduce New Issues

by Tom Simon on 01-27-2022 at 10:00 am

SIP Verification

Multi-chip modules are now more important than ever, even though the basic concept has been around for decades. With the effects of Moore’s Law and other factors such as yield, power, and process choices, the reasons for dividing what once would have been a single SOC into multiple die and integrating them in a single module have become extraordinarily compelling. These system-in-package (SIP) modules are becoming ever more popular. Yet for all their advantages, they add a level of design and verification complexity that must be addressed.


There are many good reasons to use SIP modules. SIP modules let designers break up a large die into several smaller dies, which lessens the impact of a fabrication defect. Instead of throwing away an entire large die, only the smaller die affected by a defect needs to be replaced. Also, some parts of a large system can easily be fabricated on a lower cost, less technically complex die based on a trailing process node. Similarly, memories, RF and other specialized functional units can reside on their own die using whatever process technology they need, such as NAND memory, GaAs, etc. High speed SerDes for off-chip links can also use legacy analog nodes to save costs and reduce design risk. SIP modules also reduce PCB component counts and simplify board design.

On the flip side, these benefits come with increased complexity. They introduce a new level of interconnect that needs to be verified for correct connectivity. Substrate connections from each die need to be logically correct, and the geometry of the connections also requires verification. The pad centers need to be checked for proper alignment. Die scaling and orientation are factors that can determine whether the final fabricated parts are functional. The different die and elements used to construct SIP modules have unique thermal properties, which can affect the integrity of the bump-to-pad connections. All of this calls for a solution to ensure that the design is correct.
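To make the pad-alignment check concrete, here is a minimal sketch in Python of the kind of geometric test involved: comparing each die pad center against its matching substrate bump center and flagging any offset beyond a tolerance. The pad names, coordinates and tolerance are made up for illustration; this is not how xSI or Calibre 3DSTACK is actually implemented.

```python
from math import hypot

# Hypothetical pad/bump records: (net name, x in um, y in um, diameter in um)
die_pads = [("VDD", 0.0, 0.0, 80.0), ("IO0", 150.0, 0.0, 80.0)]
substrate_bumps = [("VDD", 1.0, 0.5, 90.0), ("IO0", 150.0, 12.0, 90.0)]

def check_alignment(pads, bumps, max_offset_um=5.0):
    """Flag any die pad whose center is farther than max_offset_um
    from the matching substrate bump center."""
    bump_by_name = {name: (x, y) for name, x, y, _ in bumps}
    violations = []
    for name, x, y, _ in pads:
        bx, by = bump_by_name[name]
        offset = hypot(x - bx, y - by)  # Euclidean center-to-center distance
        if offset > max_offset_um:
            violations.append((name, round(offset, 2)))
    return violations

print(check_alignment(die_pads, substrate_bumps))  # → [('IO0', 12.0)]
```

A real flow would of course also check connection overlap area, net connectivity against the schematic, and thermally induced shifts, but the core of the geometric check is this kind of tolerance comparison.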

To help design teams deal with the added complexity of SIP module verification, Siemens EDA has developed a tool called Xpedition Substrate Integrator (xSI) that provides an integrated solution for defining and consolidating all pertinent module design data, allowing for the definition of the golden design intent. It integrates natively with Calibre 3DSTACK to provide robust, automated DRC/LVS checking for SIP modules. Justin Locke from Siemens has authored a white paper that describes the need for DRC/LVS checking specifically targeted at SIP modules. The white paper is titled “System-in-Package/Module Assembly Verification”.

SIP module verification tools face several unique challenges. Because a SIP module sits at the nexus between board and die, verifying it requires interfacing with multiple tool flows and multiple design teams, so managing this data and organizational complexity has to be a primary focus for any tool. Additionally, during earlier stages of the flow the full GDS may not be available to help locate and identify the pads on the die. Siemens xSI offers the ability to create dummy die information that can be used in the interim until the full GDS information is available. Once GDS for the die is available, it can be used to ensure proper pad centering and connection overlap.

System in Package is here now, and design teams need to work with these modules to deliver market-winning products. Today’s SIP modules are a far cry from the old multi-chip modules. It comes as a relief that there are tool solutions tailored to help deliver high quality finished products. The full Siemens white paper is available for reading on the Siemens website.

Also read:

MBIST Power Creates Lurking Danger for SOCs

From Now to 2025 – Changes in Store for Hardware-Assisted Verification

DAC 2021 – Taming Process Variability in Semiconductor IP


Samsung Keynote at IEDM

by Scotten Jones on 01-27-2022 at 6:00 am


Kinam Kim is a longtime Samsung technologist who has published many excellent articles over the years. He is now the Chairman of Samsung Electronics, and he gave a very interesting keynote address at IEDM.

He began with some general observations:

The world is experiencing a transformation powered by semiconductors, one accelerated by COVID lockdowns that required a contactless society. IT has become essential due to remote work and remote education, and sensors, processors, and memory are all required. Digital adoption has taken a quantum leap: remote work has increased from 25% to 58%. The digitization of the economy presents tremendous opportunity, with smart systems generating tremendous amounts of data. Over the past 50 years, transistors per wafer are up by 10 million times, processor speeds by 100 thousand times, and costs are down 47% per year. Semiconductors have similarities to the human brain: sensors are like eyes, while processors and memory do the processing and storing. Smart phones combining sensors with processing enable new applications, and sensors are taking on a bigger role with autonomous driving, AI, and more.

There was an interesting section on sensors but that isn’t really my area and I want to focus on the logic, DRAM and NAND roadmaps he presented.

Figure 1 presents the logic roadmap.

Figure 1. Logic Roadmap.

In figure 1 we can see how the contacted poly pitch (CPP) of logic processes has scaled over time. In the planar era we saw high-k metal gate (HKMG) introduced by Intel at 45nm and by the foundries at 28nm, as well as innovations like embedded silicon germanium (eSiGe) to improve channel performance through strain. FinFETs were introduced by Intel at 22nm, adopted by the foundries at 14/16nm, and have carried the industry forward for several nodes. Samsung is currently trying to lead the industry into the Gate All Around (GAA) era with horizontal nanosheets (HNS), which they call multi bridge, and HNS should carry the industry for at least two nodes. Beyond 2nm, Samsung anticipates one of several options: a 3D stacked FET (called a CFET or 3D FET by others), a VFET as recently disclosed by IBM and Samsung, 2D materials, or a negative capacitance FET (NCFET).

Figure 2 presents the roadmap for DRAM.

Figure 2 DRAM Roadmap

With EUV already ramping up in DRAM, the next challenge is shrinking the memory cell. Samsung anticipates stacking two layers of capacitors soon. A switch to vertical access transistors is anticipated in the latter part of the decade, followed by 3D DRAM. I haven’t been able to find much specific information on how 3D DRAM will be built, but similar structures are illustrated in presentations from ASM, Applied Materials and Tokyo Electron as well as this presentation, making it appear that the industry is converging on a solution.

Figure 3 presents the roadmap for NAND.

Figure 3 NAND Roadmap

Samsung’s latest 3D NAND is a 176-layer process that uses string stacking for the first time (a first for Samsung; others have been string stacking for multiple generations) and puts the peripheral circuitry under the array for the first time (again a first for Samsung; others have been doing it for several generations). Next up is shrinking the spacing between the channel holes to improve density while also increasing the number of layers. Around 2025, Samsung shows wafer bonding to separate the peripheral circuitry and the memory array. At first I was surprised by this: YMTC is already doing it, and if Samsung thinks it offers an advantage, I am surprised they would wait so long to implement it. Furthermore, I have cost modeled wafer bonding and believe it is higher cost than the current monolithic approach. After thinking about it some more, I wonder whether it is viewed as solving a stress problem that allows continued layer stacking, to be implemented when needed to keep stacking. Finally, in the latter part of the decade, Samsung anticipates material changes and further channel hole shrinks. The figure shown here doesn’t show it, but in their presentation Samsung showed over a thousand layers for their 14th generation process.

In conclusion, the keynote presented a view of continued scaling and improvement for logic, DRAM and NAND through the end of the decade.

Also read:

IBM at IEDM

Intel Discusses Scaling Innovations at IEDM

IEDM 2021 – Back to in Person


Upcoming Webinar: 3DIC Design from Concept to Silicon

by Kalar Rajendiran on 01-26-2022 at 10:00 am

Lessons from Existing Multi Die Solutions

Multi-die design is not a new concept. It has been around for a long time and has evolved from 2D integration to 2.5D and then to full 3D implementations. Multiple driving forces have led to this progression. Whether the forces are driven by market needs, product needs, manufacturing technology availability or EDA tool development, the progression has been picking up speed. With the slowing down of Moore’s Law, the industry has entered a new era. While there is not yet an industry-wide term, Synopsys uses SysMoore as a shorthand notation to refer to this era.

Synopsys gave a presentation at DAC 2021 on addressing the market demands of the SysMoore era. The presentation gave excellent insights into their strategy for delivering solutions. It identified six vectors as efficiency roadmap drivers to power the SysMoore era, along with the solutions the various market segments are demanding. It also highlighted new complexities and opportunities, and the new technologies Synopsys is bringing out for this era. A recent post provides a synopsis of that entire talk.

One of the efficiency drivers identified relates to memory and I/O latency, multi-lane HBMs and PHYs on multi-die designs. In the SysMoore era, high performance computing (HPC) is fast becoming a major driver of multi-die/3DIC designs for multiple reasons. There is no let-up in the increasing need for functionality integration and performance enhancement. At the same time, integrating everything on a single large die may not be the most viable option at sub-7nm nodes. This opens up the opportunity to implement a multi-die design and still optimize for PPA, latency, cost and time-to-market. That said, there are many challenges to overcome when doing multi-die designs. Refer to the figure below for drawbacks of existing multi-die solutions.

These challenges cause slow convergence and sub-optimal PPA/mm3.

Solution

Effective cross-discipline collaboration is needed to converge on an optimal solution. What is needed is a platform that enables a consistent and efficient exchange of information: a solution that offers GUI-driven 3D visualization, planning and design; one that implements DRC-aware routing and shielding and supports HBM; a platform that leverages a single data model, allowing for fast exploration and pathfinding to accelerate the design process; and a solution that enables integrated golden signoff, including multi-die analysis of signal integrity, power integrity, thermal integrity, timing integrity and EMIR.

Synopsys 3DIC Compiler

While the following slide provides a high-level summary of features and benefits, you can learn more at an upcoming webinar.

About the Webinar

Synopsys will be hosting a Webinar on Feb 10, 2022 about their 3DIC Compiler (3DICC) solution. The event will cover designing HBM3 into high performance computing designs using a multi-die approach. It will cover the what-if analysis, floor planning, implementation, HBM3 channel D2D routing and analysis and simulation/signoff aspects.

What You Will Hear, See and Learn

  • HBM3 overview and HBM3 design example
  • Relevance of the 3DICC features/benefits to HPC designs
  • 2.5D/3D architecture evaluation
  • Ansys Redhawk-SC ElectroThermal multi-physics simulation integration with 3DICC platform
  • Two live demos, showcasing the ease of use and advanced auto die-to-die (D2D) routing capabilities
  • Live Q&A session for attendees

Who Should Attend?

  • System Architects
  • Engineering Managers
  • Chip Development Engineers

Registration Link: You can register for the Webinar here.

Also read:

Identity and Data Encryption for PCIe and CXL Security

Heterogeneous Integration – A Cost Analysis

Delivering Systemic Innovation to Power the Era of SysMoore


The Hitchhiker’s Guide to HFSS Meshing

by Matt Commens on 01-26-2022 at 6:00 am


Automatic adaptive meshing in Ansys HFSS is a critical component of its finite element method (FEM) simulation process. Guided by Maxwell’s Equations, it efficiently refines a mesh to deliver a reliable solution, guaranteed. Engineers around the world count on this technology when designing cutting-edge electronic products. But the adaptive meshing process relies on an initial mesh that accurately represents the model’s geometry. Today, HFSS establishes the initial mesh using a suite of meshing technologies, each optimally applied to a specific type of geometry. From there, HFSS continues the adaptive refinement process until the solution converges.

Over the last two decades, computers have become larger, more powerful, and increasingly cloud-based in their high-performance computing (HPC) architecture. The FEM algorithms of HFSS have vastly improved alongside innovations in the HPC computing space. Today, they allow the rigorous and reliable simulation technology of HFSS to be applied to ever more complex electromagnetic systems. However, with larger, more complex systems, the task of initial mesh generation becomes more and more challenging.

This white paper introduces the history of HFSS meshing innovations and explores recent technological breakthroughs that have greatly improved performance and reliability in initial mesh creation.

The History of HFSS Meshing

The “mesh” is the foundation of physics simulation; it’s how a complex modeling problem is discretized into “solvable blocks.” Understandably, for today’s highly complex systems, considerable time may be devoted to generating the initial mesh because it’s such a critical step. How accurately the initial mesh captures the physical geometry under test has a defining influence on the resulting simulation and the speed of results. That wasn’t always the case: 25 years ago, simulation was dominated by the actual solving of the electromagnetic fields, and meshing amounted to a tiny fraction of the overall time spent generating a simulated model.

The very first HFSS simulation in 1989 took 16 hours to produce one frequency point on a then-state-of-the-art computer. The vast majority of that 16 hours was spent solving for the electromagnetic fields. Today, we can solve the same model and extract four thousand frequency points in about 30 seconds on an ordinary laptop computer. Advances in speed naturally led engineers to attempt increasingly complex designs through 3D simulation. Over the past 20 years, new meshing technologies supported the pace of innovation, but even with advanced techniques, meshing took up a larger relative portion of the process for complex designs. Simulation technologists saw that meshing was a larger pole in the “simulation tent,” so they introduced new algorithms and parallel processing to encourage further innovation in the simulation space.

Today, Ansys HFSS uses a variety of different meshing algorithms, each optimized for different geometries:

The Original “Classic”

At its core, mesh generation is a space-discretization process in which geometry is divided into elemental shapes. While there are several shapes to work with, in HFSS a mesh represents geometry as a set of 3D tetrahedra (see Figure 1). It can be demonstrated that any 3D shape can be decomposed into a set of tetrahedra. Since HFSS leverages automatic mesh generation, the algorithm uses tetrahedra to refine and mathematically guarantee an accurate mesh.

Figure 1: Geometrically conformal tetrahedra leveraged in HFSS automatic mesh generation
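The claim that any 3D shape can be decomposed into tetrahedra is easy to demonstrate for a simple solid. The following pure-Python sketch (an illustration of the geometric idea, not HFSS code) splits a unit cube into six tetrahedra using the classic Kuhn decomposition and checks that their volumes add up to the cube’s volume:

```python
from itertools import permutations

def tet_volume(a, b, c, d):
    """Volume of tetrahedron a-b-c-d: |det([b-a, c-a, d-a])| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def cube_to_tets():
    """Kuhn decomposition: split the unit cube into 6 tetrahedra, one per
    ordering of the x, y, z unit steps along a path from (0,0,0) to (1,1,1)."""
    tets = []
    for axes in permutations(range(3)):
        pts = [(0.0, 0.0, 0.0)]
        for ax in axes:
            nxt = list(pts[-1])
            nxt[ax] += 1.0  # take one unit step along this axis
            pts.append(tuple(nxt))
        tets.append(pts)
    return tets

tets = cube_to_tets()
total = sum(tet_volume(*t) for t in tets)
print(len(tets), round(total, 9))  # six tetrahedra filling the cube
```

Real meshers of course face curved surfaces, thin layers, and thousands of contacting parts, which is exactly where the algorithmic differences described below come into play.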

Classic is one of Ansys’ earliest meshing technologies. It uses a Bowyer algorithm to create a compact mesh from any set of geometries. It’s an extremely rigorous approach. First, Classic meshes the surfaces of all objects to create a water-tight representation of the geometry, and then it fills in the volumes of all objects with 3D tetrahedra. For accuracy, mesh elements must be continuous across a surface; in other words, two objects in contact must have a conformal triangular mesh at their adjoining faces. As geometric complexity grew and models started including hundreds or thousands of parts, it became difficult to align the triangular meshes to achieve a conformal mesh everywhere. At some point, this meshing approach, which is not readily parallelized, reaches its limit. It can’t handle high levels of design complexity in a reasonable amount of time.

TAU

In 2009, Ansys released the TAU meshing algorithm. TAU approaches the task of meshing from an inverse perspective. From a bird’s-eye view, a model represents some volume of objects potentially contacting others at different points. TAU breaks up the volume into gradually smaller tetrahedra to fit each object in the model. Then, it adaptively refines and tightens local mesh size and shape to align the volumetric mesh with the faces of the input model. Eventually, TAU gets the tetrahedra close enough to each surface for a water-tight mesh that accurately represents all the geometry. For 3D CAD, such as a model of a backplane connector or an aircraft body, TAU is a very robust and reliable algorithm; however, TAU struggles with designs that include high aspect ratio geometries, like PCBs and wirebond packaging, where Classic may perform better.

Both Classic and TAU meshers are designed to handle all arbitrary geometries accurately. In “Auto-mesh” mode, HFSS determines the correct mesher to apply based on the model.

Phi Mesher

2013 brought the next generation of meshing at Ansys—Phi. Phi is a layout-based meshing technology that’s 10, 15, or even 20 times faster than previous meshing technology, depending on the model’s geometry. A faster initial mesh often ensures faster simulations; they can be accelerated and enhanced even further with HPC.

Phi is HFSS’ first “geometry-aware” meshing technology. It relies on the layered nature of design commonly found in PCBs and IC packages. The technique is based on the knowledge that all geometry in these kinds of models has a 2D layer description, with the third dimension achieved by sweeping the 2D layer description (in the XY plane) uniformly in Z. Phi was designed to accelerate initial mesh generation by conquering a 3D problem with a 2D approach. It was initially implemented in the HFSS 3D Layout design flow and extended to the 3D workflow a few releases later.
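A back-of-envelope sketch shows why sweeping a 2D description in Z is so much cheaper than meshing in full 3D: each triangle of the 2D layout extruded through a layer yields one prism, and a triangular prism splits into three tetrahedra, so the element count is simply known up front. The numbers below are illustrative, not from any real design:

```python
def swept_tet_count(num_2d_triangles, layer_z):
    """Phi-style sweeping, sketched: layer_z lists the layer boundaries in Z.
    Each 2D triangle swept through one layer gives one prism, and each
    prism splits into exactly 3 tetrahedra."""
    num_layers = len(layer_z) - 1
    prisms = num_2d_triangles * num_layers
    return prisms * 3

# A hypothetical 1000-triangle layout swept through 3 metal/dielectric layers:
print(swept_tet_count(1000, [0.0, 5.0, 35.0, 40.0]))  # → 9000 tetrahedra
```

Because the 3D mesh follows mechanically from the 2D triangulation, there is no iterative 3D surface-recovery step to converge, which is the intuition behind Phi’s order-of-magnitude speedup on layered geometry.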

Phi performance is exceptional, achieving speeds an order of magnitude greater than other meshing technologies. In complex IC designs (see Figure 2), it’s a game changer. With earlier techniques, on-chip passive components, for example, took a considerable amount of time to complete. With Phi meshing, an hours-long initial mesh process can be reduced to minutes or even seconds. However, Phi’s uniform-in-Z constraint limits the types of designs it can handle; for example, Phi can’t be leveraged if trace etching or bondwires are included in an IC package design.

 Figure 2: A typical complex PCB design, Phi mesh

That said, with the right geometry, Phi is extremely fast. Once the initial mesh is completed, the adaptive meshing algorithm works the same way as it would with any other HFSS meshing technology to produce the final convergence. In addition, Phi can create a smaller initial mesh element count, which contributes to better downstream performance in the adaptive meshing and frequency sweeps. It’s faster from start to finish, not just in the initial mesh generation phase.

The Three Mesh Paradigm

With three meshing technologies in place, an Ansys auto-mesh algorithm scanned model geometries to determine which of the mesh technologies to use. In addition, fallbacks from one meshing technology to another ensured a reliable meshing flow. For example, if the algorithm identified a significant amount of high-aspect-ratio CAD, it would launch the Classic mesh algorithm. Phi was fully automated in the sense that it was always applied to geometry uniformly swept in Z.

Early on, each customer’s design flow tended to align with the same meshing techniques for every project, so they consistently gravitated to one meshing approach. However, as the HFSS solver algorithms became faster and more scalable, and cluster and Cloud hardware became more readily available, the size of HFSS simulations grew and became more complex. Designs were no longer single components; they were systems made up of multiple types of CAD. It wasn’t enough to solve just the PCB or just the connector anymore. To get it right, especially as data rates increased, it became more and more important to simulate together—connector and PCB, antenna and airframe, and so on.

As engineers design for tighter margins in the competitive electronics landscape, simulations encompass the PCB, IC packaging, connectors, surface mount components, and beyond. The Three Mesh Paradigm carried heavy burdens for customers and Ansys alike; a one-mesh-fits-all approach was not optimally effective. Understanding the different options and mesh technologies and knowing when to apply them could be a real challenge.

Enter HFSS Mesh Fusion.

The Rise of HFSS Mesh Fusion

Introduced in early 2021, HFSS Mesh Fusion achieved a fundamental meshing breakthrough using locally defined parameters. In other words, HFSS Mesh Fusion applies meshing technology depending on the local needs of the CAD. For example, when analyzing a simulation where a PCB contains both wirebond packaging and 3D connector models, such as a backplane connector, the PCB portion calls for Phi, wirebond packages call for Classic, and connectors are best meshed using TAU.

This multi-mesh capability became possible with HFSS Mesh Fusion. The only requirement is to assemble the design as a set of 3D Components, which can be encrypted to hide intellectual property and enable easy collaboration with component vendors. The 3D Component hierarchy provides the localized CAD definition to appropriately apply mesh. In addition, the same auto-mesh technology can be used to set the mesh locally, requiring little to no user input. From there, the same adaptive meshing scheme is applied to provide HFSS gold-standard accuracy and reliability.

Ansys recently worked with a team integrating a 5G chipset into a tablet computer. Before Mesh Fusion, there were a lot of mesh challenges to resolve before arriving at a usable simulation. With Mesh Fusion, Phi was applied locally to the chipset and TAU was applied to the remainder of the design—a sleek housing with complex CAD to encase the rest of the electronics in the tablet. Local mesh application ensured a clean mesh on the chipset, which was critical to the accuracy of the overall simulation. All of these seemingly disparate meshing approaches came together in Mesh Fusion for a fast, accurate, and reliable simulation result.

The Future of Meshing at Ansys

The presence of HFSS Mesh Fusion offers a night and day difference for Ansys customers. Instead of getting bogged down by meshing issues to resolve, users are free to explore more intensive design challenges that drive the electronics industry forward.

Most recently, Ansys used a ground-up approach to develop a new meshing technology. This new mesher, called Phi Plus, was designed specifically for wirebond packaging (see Figure 3), which is particularly difficult to mesh with other technologies – even Mesh Fusion. Like Phi, it’s geometry aware and takes advantage of a priori knowledge of system design. In addition, it was developed with parallelization approaches in mind, allowing for excellent scaling with HPC resources. Its success is not limited to wirebond packaging: Phi Plus can handle any kind of combined layout and 3D CAD simulation, such as a connector on a PCB. Phi Plus meshing is the next game changer in a long line of innovative techniques from Ansys!

Figure 3: Phi Plus mesh applied to a wirebond package design

For updates, keep an eye on our social channels in this new year.

Also Read

The 5G Rollout Safety Controversy

Can you Simulate me now? Ansys and Keysight Prototype in 5G

Cut Out the Cutouts


MBIST Power Creates Lurking Danger for SOCs

by Tom Simon on 01-25-2022 at 10:00 am

MBIST power emulation

The old phrase that the cure is worse than the disease is apropos when discussing memory built-in self-test (MBIST) for large SOCs, where running many MBIST tests in parallel can exceed power distribution network (PDN) capabilities. MBIST usually runs automatically during power-on events. To speed up test and chip boot times, these tests are frequently run in parallel. The problem is that they can easily produce switching activity an order of magnitude above the levels found during regular chip operation. These elevated switching levels can cause supply droop that affects the test results, and the high heat generated can harm chips. These effects can lead to incorrect binning or even direct and latent failures.

The solution is to simulate MBIST activity to predict the load on the PDN and the related thermal effects. With simulation results in hand, designers can correctly decide how many and which memory blocks can be tested in parallel. However, this is not always feasible in large SOCs with many memory blocks because the simulation times may be prohibitive. With gate level and even less accurate RTL simulation it may not be possible to run enough cycles to get the information needed.
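Once per-block power estimates are in hand, the scheduling decision itself can be framed as a packing problem: group MBIST runs so that each parallel group stays under the PDN budget. The sketch below is a simple greedy first-fit illustration of that decision, with made-up block names and milliwatt figures; it is not the method described in the white paper.

```python
def schedule_mbist(block_power_mw, budget_mw):
    """Greedy sketch: pack memory-block MBIST runs into parallel groups
    so each group's combined switching power stays under the PDN budget.
    Largest blocks are placed first (first-fit decreasing)."""
    groups = []
    for name, p in sorted(block_power_mw.items(), key=lambda kv: -kv[1]):
        for g in groups:
            if g["power"] + p <= budget_mw:  # fits in an existing group
                g["blocks"].append(name)
                g["power"] += p
                break
        else:  # no group has headroom; start a new sequential pass
            groups.append({"blocks": [name], "power": p})
    return [g["blocks"] for g in groups]

# Hypothetical per-block MBIST switching power estimates (mW):
blocks = {"l2_bank0": 400, "l2_bank1": 400, "rom": 150,
          "sram_a": 250, "sram_b": 250}
print(schedule_mbist(blocks, budget_mw=700))
```

Each returned group runs in parallel; the groups themselves run back to back, trading some boot time for guaranteed PDN headroom. Real flows would also weigh thermal limits and physical proximity of the blocks on the die.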

In a white paper titled “Analyzing the power implications of MBIST usage”, Siemens EDA looks at how designers can run sufficient simulation to make informed decisions on the testing strategy before tapeout. Siemens worked with ARM on one of their test chips to create a test case where they could apply hardware emulation with the DFT and Power apps for the Siemens hardware emulator Veloce. First, the Veloce DFT app is used to output the internal activity during MBIST emulation. The app uses the Standard Test Interface Language (STIL) and produces industry standard output files.

The Veloce Power app takes the activity information from the MBIST runs to generate waveforms, power profiles and heat maps that can indicate when there are power spikes above specified limits. With this information test engineers can make informed decisions about the sequencing of MBIST.


The ARM test case described in the Siemens white paper contains 176 million gates. Siemens used a Veloce system with 6 Veloce Strato boards for this test case. The Veloce emulator run took only 26 hours, which is 15,600 times faster than gate level simulation – the equivalent gate-level run would take roughly 405,000 hours, or about 46 years. Another benefit of the Veloce flow is that the activity information is streamed by the Power app to the power tools in the flow, saving disk space and time. The results from the test case showed several power spikes that violated the SOC design specifications. The output from the Veloce Power app shows the total power levels through the run along with the separate power contributions from the clock, combinational logic and memory. Likewise, there is information on where on the die the power is being used. This information makes it easy to determine where there are problems.

Finding problems such as these requires running millions or billions of clock cycles. The limitations of software simulators make it prohibitive to perform the necessary analysis. Emulation offers a unique avenue to closely examine the power impacts of MBIST and other test operations long before silicon. The Siemens white paper offers insight into the power methodology used on a real test case. The white paper is available to download for reading on the Siemens website.

Also Read:

From Now to 2025 – Changes in Store for Hardware-Assisted Verification

DAC 2021 – Taming Process Variability in Semiconductor IP

DAC 2021 – Siemens EDA talks about using the Cloud


Business Considerations in Traceability

by Bernard Murphy on 01-25-2022 at 6:00 am


Traceability is an emerging debate around hardware and is gaining a lot of traction. As a reminder, traceability is the disciplined ability to trace from initial OEM requirements down through the value chain to implementation and confirmed verification in software and hardware. Demand for traceability appears most commonly in safety-critical applications: automotive, rail and mil-aero; process industries such as petroleum, petrochemical and pharmaceutical; and safety-related controls for power plants and machinery. These are today’s applications. The more we push the envelope of IoT, Industry 4.0, smart cities and smart homes, the more cannot-fail products we will inevitably create.

Why bother?

Traceability requirements in our world started in software to ensure that what an OEM wanted was actually built and tested. Now that application-specific hardware plays a bigger role in many modern designs, effective traceability must look inside some aspects of hardware as much as it does in software.

But why is traceability support important? Is this a temporary business fad we’ll soon forget? Or is it an unavoidable secular shift? And what kind of investment will it require? Tools and servers, probably, but added engineering effort is a bigger concern. Will you need to add more staff and more time to schedules? Let’s start with potential market downsides to ignoring traceability.

Locking out regulated markets

It’s easy to understand any case where regulation requires traceability. For medical devices covered by ISO 14971, one source asserts, “Auditing capabilities are also critical for regulatory compliance traceability. The FDA has been known to close down a product or prevent it from being shipped, or even shut down a whole division until you are in compliance.”

Jama Software – who know a thing or two about traceability – add, “even if everything goes according to plan, there’s no guarantee that the traceability workflows in use will account for all relevant requirements and risks. For instance, in the case of a medical device, a matrix created within Excel won’t come with frameworks aligned with industry standards like ISO 14971, making it more difficult to ensure coordinated traceability and ensure successful proof of compliance.”

You shouldn’t assume this only applies to medical devices. OEMs are now working more closely with SoC builders and expect more detailed evidence that devices meet all requirements. Coverage reports won’t satisfy this need. They’d rather have a traceability path from their requirements to the points in the RTL and test plans where you can defend your implementation.

Locking out a geography

Problems don’t only arise in highly regulated industries. We all know of cases where features requested in one geographical region are not important in others. In theory, detailed specifications and lists of requirements capture all needs and variations between clients and geographies. In practice, some escapes slip through. Remember the call from a client, after the spec was finalized, saying they forgot one very important feature? The local apps guy takes careful notes and promises the spec will reflect this requirement. But it never appears in the official spec.

In one case I heard of, an Asia-Pac client had such a requirement – perhaps a control/status register extension they needed which no other geography had asked for. Seemed harmless enough. But the R&D team didn’t see that requirement in the specs. They built the chip without that feature, and the customer rejected the product. That client was going to be the reference account for the region but became the wrong kind of reference. They lost the whole geography for that product.

The point here is that specs are not quite structured enough to serve as a formal agreement on what you are going to build, which is why software product builders now depend heavily on formal requirements documentation and traceability. Specs are a good way to elaborate on and explain requirements, but requirements are becoming the definitive reference. Clients won’t find this to be a problem; they are often already very familiar with requirements traceability and related tools. You will just need to build the same awareness and discipline in your team, from the field to R&D.
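The bookkeeping behind requirements traceability is easy to illustrate. The sketch below (plain Python; all requirement IDs, file names and test names are hypothetical) shows the kind of link-checking a traceability flow automates: every requirement must map to an implementation point and at least one verifying test, and anything untraced is flagged before it becomes an escape like the one above.

```python
# Minimal requirements-traceability check (illustrative only).
# Requirement IDs, RTL file names, and test names are hypothetical.

requirements = {
    "REQ-001": "CSR extension for region-specific status reporting",
    "REQ-002": "AXI burst lengths up to 16 beats",
    "REQ-003": "Parity on all register writes",
}

# Trace links: requirement -> (RTL implementation point, verifying tests)
trace = {
    "REQ-001": {"rtl": "csr_block.sv", "tests": ["test_csr_ext"]},
    "REQ-002": {"rtl": "axi_master.sv", "tests": ["test_burst_8", "test_burst_16"]},
    # REQ-003 has no trace link yet -- exactly the kind of escape
    # a traceability audit is meant to catch before tape-out.
}

def untraced(requirements, trace):
    """Return requirement IDs with no trace link or no verifying test."""
    return sorted(
        rid for rid in requirements
        if rid not in trace or not trace[rid].get("tests")
    )

print(untraced(requirements, trace))  # ['REQ-003']
```

A purpose-built tool does this continuously against the live design database rather than a hand-maintained spreadsheet, which is the overhead difference the next section discusses.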

Won’t that create a big overhead for you?

That depends. If you use a general-purpose requirements tool with no understanding of hardware design, then probably yes, you will have to put quite a bit of work into traceability bookkeeping. It might be time to learn more about the Arteris® Harmony Trace platform, which links intimately to hardware design on one end and to traceability standards on the other, with design-semantic know-how that greatly reduces the burden on engineering teams.

Also read:

Traceability and ISO 26262

Physically Aware SoC Assembly

More Tales from the NoC Trenches

 


How System Companies are Re-shaping the Requirements for EDA

How System Companies are Re-shaping the Requirements for EDA
by Kalar Rajendiran on 01-24-2022 at 10:00 am

Panelists and Cadence Moderator

As the oldest and largest EDA conference, the Design Automation Conference (DAC) brings the best minds together to present, discuss, showcase and debate the latest and greatest advances in EDA. It accomplishes this in the form of technical papers, talks, company booths, product pavilions and panel discussions.

A key aspect of driving advances in design automation is to discuss evolving EDA requirements, so the industry can develop the solutions as the market demands. At DAC 2021, Cadence sponsored an interesting panel session that gets to the heart of this. The session was titled “How System Companies are Re-shaping the Requirements for EDA,” with participants representing Arm, Intel, Google, AMD and Meta (Facebook). The discussion was organized and moderated by Frank Schirrmeister from Cadence Design Systems.

 

The following is a synthesis of the panel session on EDA requirements to support the upcoming era of electronics.

Frank Sets the Stage

Referencing a Wired magazine article, Frank highlights how data center workloads increased six-fold from 2010 to 2018, while internet traffic increased ten-fold and storage capacity rose 25x. Yet data center power usage increased only 6% over the same period, and we have semiconductor technology, design architectures and EDA to thank for it.

The electronics industry is entering an era of domain-specific architectures and languages, as predicted by John Hennessy and David Patterson back in 2018. The primary factors driving this move are hyperscale computing, high-performance edge processing and the proliferation of consumer devices. The next generation of hyperconnected, always-on consumer devices is expected to deliver user experiences unimaginable even a few years ago.

The Global DataSphere quantifies and analyzes the amount of data created, captured and replicated in any given year across the world. Endpoint data creation is estimated to grow at a CAGR of 85% from 2019 to 2025, reaching 175 zettabytes in 2025, roughly as many bytes as there are grains of sand on all the world’s beaches. That’s quite a bit of data to be processed and dealt with.

The companies on the panel are all involved in creating, analyzing, capturing and/or replicating this humongous amount of data. The discussion covered what they see as their requirements of the EDA industry.

Arm – Chris Bergey

From an infrastructure perspective, Arm’s involvement spans HPC, data center, 5G and edge gateways. Specialty computing is a big focus area now. System validation is key when customers are committing to large R&D expenses. With chiplet architectures leveraging 2D/2.5D/3D implementations, things are relatively easier when all the dies and design rules are owned by a single company.

For heterogeneous implementations, multi-chip packaging is generally used in markets where the margins are high enough to accommodate the extra design effort, yield fallout and margin stacking. In reality, hybrid chiplet implementations will help the market grow faster. The EDA industry is expected to play a big role in making heterogeneous chiplet implementation easier and more robust.

Intel – Rebecca Lipon

High Bandwidth Memory (HBM) and high-speed servers drove development of critical IP that opened the floodgates for a whole range of new applications and products. The industry has to maintain its determination to continue on similar journeys and push the envelope, for example with IP-level innovation at the packaging level.

The Open Compute Project (OCP) is a foundation started by Meta a decade ago. Many companies, including all of those represented on the panel, are members. It works on initiatives that let you use open firmware and software to speed up development and extend the life of products.

One of the initiatives OCP is focused on is composable computing and supporting domain-specific architectures. The EDA industry should look into this, and look to Linux as a model for an open-source community.

Google – Amir Salek

The number of categories of workloads that run in our global data centers is in the thousands. And Google Cloud adds a whole new dimension to the demand for serving and processing data across different workloads. Each workload has its own characteristics, and while many can run on general-purpose hardware, many more need customized hardware.

Testing and reliability are primary areas of concern. This plays a major role in understanding the causes of marginality and deciding how to deal with them. Looking at TPU pods, we’re talking thousands and thousands of chips stitched together to work in coordination as a supercomputer. So any little reliability issue during testing, and any test escape, gets magnified. And then after many days you find out that the whole effort was basically useless and you have to repeat the job again.
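The magnification Salek describes is easy to quantify. A minimal sketch (the numbers are illustrative, not Google’s): even a tiny independent per-chip failure probability becomes a large job-level failure probability once thousands of chips must all work in lockstep.

```python
# Back-of-the-envelope: how a tiny per-chip failure rate is magnified
# across a large synchronized job. All numbers are illustrative.

def job_failure_prob(per_chip_fail: float, n_chips: int) -> float:
    """Probability that at least one of n independent chips fails,
    which kills a job that needs every chip working."""
    return 1.0 - (1.0 - per_chip_fail) ** n_chips

# A 0.01% per-chip failure chance looks negligible in isolation...
print(f"{job_failure_prob(1e-4, 1):.6f}")     # 0.000100
# ...but across 4096 chips working in lockstep it dominates.
print(f"{job_failure_prob(1e-4, 4096):.3f}")  # 0.336
```

This is why a single test escape multiplied across a pod can waste days of compute, as the panelist notes.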

FPGA prototyping is a tremendous platform for testing and validation. We are doubling down on emulation and prototyping every year to make sure we close the gap between the hardware and software.

AMD – Alex Starr

From the data center all the way to the consumer, the whole software stack needs to run on whatever the solution is. Many of our designs are implemented using a chiplet architecture, and that brings different types of complexity to deal with. The thing that keeps me up at night is how to verify and validate these complex systems and get to market quickly.

The market for hardware emulators and FPGA prototyping systems is booming and is probably the highest-growth area within EDA. Today’s emulators can fit very large designs and help prototype bigger devices. Hardware acceleration platforms that can hold such large designs are tremendously expensive and difficult to get working at that scale. And as designs grow to five-plus billion gates, emulators are not going to scale; emulation as used for prototyping is at its limit. We are looking at hybrid, modeling-based approaches. We are refining these internally and in collaboration with external standards bodies. We really want to extend out into our OEM customers and their ecosystems as well.

Meta (Facebook) – Drew Wingard

We are working on chips to enable our vision for the Metaverse. The Metaverse involves socially acceptable all-day wearables such as augmented reality glasses. This new computing platform puts an enormous amount of processing resources right on one’s face. As a result, it demands very tight form factors, low power usage and very minimal heat dissipation.

We need to put different parts of processing in software and hardware. We need to think a lot about the tradeoffs between latencies vs throughputs and cost of computation vs cost of communication. We need a mix of options around different classes of heterogeneous processing and a whole lot of support around modeling. And we have to balance the desire for optimizing requirements versus offering optionality because nobody knows what the killer app is going to be.

As a consumer firm, we regard privacy as incredibly important as it relates to product usage. Our products should be socially acceptable both for the person wearing them and for the people across from them.

When we roll all the above together, availability of system models and design cycle times become incredibly important. Many challenges revolve around availability of models and interoperability between models. This is where continuing to closely work with the EDA industry opens up opportunities.

Also Read

2021 Retrospective. Innovation in Verification

Methodology for Aging-Aware Static Timing Analysis

Scalable Concolic Testing. Innovation in Verification


LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface
by Daniel Nenni on 01-24-2022 at 6:00 am

SemiWiki Webinar ad New 400x400

To meet the increased demand for converter speed and resolution, JEDEC proposed the JESD204 standard describing a new efficient serial interface for data converters. Introduced in 2006, JESD204 offered support for multiple data converters over a single lane; the subsequent revisions A, B, and C successively added features such as support for multiple lanes, deterministic latency, and error detection and correction, while steadily increasing lane rates. The JESD204D revision, currently in the works, aims to once more increase the lane rate, to 112 Gbps, with a change of lane encoding and a switch of the error-correction scheme to Reed-Solomon. Most of today’s high-speed converters use the JESD204 standard, and the applications include but are not limited to wireless, telecom, aerospace, military, imaging, and medical: in essence, anywhere a high-speed converter can be used.

The JESD204 standard is dedicated to the transmission of converter samples over serial interfaces. Its framing maps M converters of S samples each, with a resolution of N bits, onto L lanes with F-octet frames that, in succession, form larger multiframes or extended multiblocks described by the K or E parameters. These frames allow various placements of samples in high density (HD) mode, and allow each sample to be accompanied by CS control bits either within a sample container of N′ bits or at the end of a frame (CF). These symbols, which describe the sample data and frame formatting, paired with the mapping rules dictated by the standard, give both parties in the transmission a shared understanding of how the transmitted data should be mapped and interpreted.
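To make the parameter soup concrete, the commonly used JESD204 relation F = (M × S × N′) / (8 × L) ties these symbols together: the total sample-container bits per frame, spread across L lanes, must pack into whole octets. A minimal sketch, assuming that relation and simple (HD=0-style) packing, with illustrative example values:

```python
# Hypothetical helper illustrating the JESD204 frame-size relation
# F = (M * S * N') / (8 * L), i.e. octets per frame per lane.
def octets_per_frame(M: int, S: int, N_prime: int, L: int) -> int:
    """M converters x S samples x N' container bits, spread over L lanes."""
    bits_per_lane_per_frame = M * S * N_prime / L
    F = bits_per_lane_per_frame / 8
    if not F.is_integer():
        raise ValueError("parameters do not pack into whole octets")
    return int(F)

# e.g. 4 converters, 1 sample each, 16-bit sample containers, over 2 lanes:
print(octets_per_frame(M=4, S=1, N_prime=16, L=2))  # 4 octets per frame
```

The HD and CF/CS options in the standard complicate the real mapping; this is only the headline bookkeeping both link partners must agree on.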

The 8b10b encoding scheme of JESD204, JESD204A and JESD204B, paired with Decision Feedback Equalizers (DFEs), may not work efficiently above 12.5 Gbps, as it may not offer adequate spectral richness. For this reason, and for better relative power efficiency, 64b66b encoding was introduced in JESD204C, targeting applications up to 32 Gbps. JESD204D, following in its footsteps with even higher line rates planned up to 112 Gbps using PAM4 PHYs, demands a new encoding to efficiently encapsulate the 10-bit symbol-oriented mapping of Reed-Solomon Forward Error Correction (RS-FEC).
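The encoding trade-off shows up directly in payload efficiency: 8b10b carries 8 payload bits per 10 line bits (80%), while 64b66b carries 64 per 66 (about 97%). A quick sketch of the effective payload rate at each revision's quoted line-rate ceiling:

```python
# Effective payload rate after line-encoding overhead.
def payload_gbps(line_rate_gbps: float, payload_bits: int, line_bits: int) -> float:
    """Line rate scaled by the encoding's payload/line bit ratio."""
    return line_rate_gbps * payload_bits / line_bits

# 8b10b at the JESD204B ceiling vs 64b66b at the JESD204C ceiling:
print(round(payload_gbps(12.5, 8, 10), 2))   # 10.0 Gbps usable
print(round(payload_gbps(32.0, 64, 66), 2))  # 31.03 Gbps usable
```

The jump from 80% to ~97% efficiency is a large part of why 64b66b was worth the switch, on top of its better spectral properties.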

Deterministic latency introduced in JESD204B allows for the system to maintain constant system latency throughout reset, and power up cycles, as well as re-initialization events. This is accomplished in most cases by providing a system reference signal (SYSREF) that establishes a common timing reference between the Transmitter and Receiver and allows the system to compensate for any latency variability or uncertainty.

The main traps and pitfalls of system design around the JESD204 standard concern system clocking in Subclass 1, where deterministic latency is achieved with the use of SYSREF, as well as SYSREF generation and utilization under different system conditions. Choosing the right frame format and SYSREF type to match system clock stability and link latency can also prove challenging.

View the replay here: https://attendee.gotowebinar.com/recording/5500320533040852748

About Comcores

Comcores is a key supplier of digital IP cores and design services for digital subsystems, with a focus on Ethernet solutions, wireless fronthaul and O-RAN, and chip-to-chip interfaces. Comcores’ mission is to provide best-in-class, state-of-the-art, quality components and design services to ASIC, FPGA, and system vendors, and thereby drastically reduce their product cost, risk, and time to market. Our long-term background in building communication protocols, ASIC development, wireless networks and digital radio systems provides a solid foundation for understanding the complex requirements of modern communication tasks. This know-how is used to define and build state-of-the-art, high-quality products used in communication networks.

 

Also Read:

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

CEO Interview: John Mortensen of Comcores

Comcores Wiki


How France’s Largest Semiconductor Company Got Stolen in Plain Sight

How France’s Largest Semiconductor Company Got Stolen in Plain Sight
by Doug O'Laughlin on 01-23-2022 at 10:00 am

SOITEC Stock Price

Originally published on Fabricated Knowledge

Soitec is a semiconductor materials company known for its smart cut and Silicon on Insulator (SOI) technologies, which are critical in 5G, Silicon Photonics, and Silicon Carbide (EV) end-markets.

Yesterday, they announced that current CEO Paul Boudre will retire and be replaced by Pierre Barnabé in July of 2022. Barnabé is currently an SVP at Atos, a French IT consulting company. This seemingly routine CEO transition sank company shares by over 15%. It’s not what it seems.

First, the entire existing management team is opposed. The management team sent a letter to the board that does not mince words. Translated into English, it reads:

Soitec’s Management Committee deplores the takeover of Soitec by the Chairman of the Board of Directors for 3 years, which culminates today with the incomprehensible appointment of a new CEO.

What exactly is happening at Soitec? Well first let’s just start with the current story of the now outgoing CEO, Paul Boudre.

The Story of Soitec (and Paul Boudre)

Soitec, like most firms, was no overnight success. This timeline of the company’s history is the best place to start. Soitec originally pursued other markets but failed to gain traction. In 2015, the company shuttered its solar business as part of a hard pivot away from its existing failures.

The situation was dire. The company had more debt than the value of its market capitalization.

For context, Soitec today is a ~5.8 billion euro company.

In January of 2015, with shares trading at a measly $200 million market capitalization, Paul Boudre was appointed CEO. Paul originally joined the company from KLA as an Executive VP of Sales before transitioning to COO in 2008. Drastic changes were needed.

Soitec’s rather bleak 2015

In the midst of significant layoffs in the solar division and a hefty financing package to keep the company afloat, Paul focused the company on SOI and made it what it is today.

The details of the turnaround are unimportant, but the results today speak for themselves: Soitec now boasts a ~$5.8B market capitalization (~29x higher than the start of his tenure) and has 4x+ annual revenue. This is a legendary turnaround that will surely be sung of in Semiconductor Valhalla.

At the age of 63, Paul is approaching retirement. Given such a track record, one imagines a thoughtful process involving existing management, not the overnight skullduggery that actually occurred. Further, the new CEO candidate has no experience in the semiconductor field. Why did the board choose newcomer Pierre Barnabé over other qualified internal candidates?

To answer this, we need to learn about the Chairman of the Board.

Enter Eric Meurice (and Pierre)

Eric Meurice is best known as the former CEO of ASML. In 2013, Eric was appointed Chairman of the board of ASML after 8 years as CEO. Note the exact language of the Chairman announcement.

Eric Meurice will be Chairman of ASML Holding and act as adviser to the new leadership and the Supervisory Board until the end of his contract on 31 March 2014, ensuring a smooth and comprehensive transition of critical tasks and processes, customer contacts and relations with strategic suppliers.

The current CEO gets the job during Eric Meurice’s contract and Mr. Meurice gets bumped to Chairman. It’s pretty customary for the outgoing CEO to spend time as the current Chairman. It’s less customary to not have the contract renewed. Despite European CEOs having shorter tenures, this is atypical.

As the former CEO of ASML, Eric Meurice is now an ideal candidate for board memberships. Here’s a list of the few boards he’s been a part of.

Let’s home in on Eric’s stint at Soitec. Eric Meurice joined the board of Soitec in 2018 as chair of the Nomination Committee. The committee was tasked with nominating a new Chairman, so who else does Eric Meurice nominate but himself?

Eric, overqualified as he is, gets the job. But that’s not enough. In 2019, he picks up two more important roles as the Chair of the Strategic Committee and Chair of the Compensation Committee. He now holds both the keys to the kingdom and to its treasury.

Joining a company, becoming Chairman of the Board, and then taking on additional roles as head of the Compensation and Strategic Committees fits the typical model of a high-powered executive. That’s standard.

How does the French Government fit into Eric’s rise to power?

Eric Meurice’s Extraordinary Actions

Let’s look at the specific actions Eric Meurice took that prompted significant backlash from the executive committee. In the executive committee letter, they listed out a few specific complaints that were (importantly) falsifiable. Their (translated) list of grievances is as follows:

  • Takeover of the interim compensation committee that has become definitive, creating an omnipresence in all committees and at the head of several committees
  • Interference in social dialogue without consultation with management.
  • Double language regarding the opposition of management on the implementation of PAT (Action Plan for All) in 2021.
  • Establishment of internal regulations granting exceptionally extensive powers to the Chairman of the Board of Directors and establishing the keys to his takeover.
  • Alteration of evidence in the context of the investigation of a governance drift.
  • Intimidation, vexatious practices towards members of the Executive Committee.

The board granted itself extensive power through a list of resolutions added to the typical bylaws of the company at the so-called “extraordinary shareholders general meeting”. It’s rare to see an extraordinary resolution, so it’s outright mind-boggling to see 35 resolutions. This is one of the broadest power grabs I’ve ever seen. Let’s consider a few of the resolutions.

There are a total of 35 resolutions, each giving the board more power than a typical board would have.

The resolutions are technical, but the gist is that the board now has a whole set of new powers usually reserved for the CFO. It can issue shares, buy back shares, and decide who gets shares, and all of these powers are directly granted to the board. This grants extraordinary power to the board and the people on it.

This explains why the company has had 3 CFOs since the extraordinary resolutions began. Remy Pierre was replaced in September 2019 by Sébastien Rouge, who was replaced one year later by Lea Alzingre. That’s high turnover for the job.

Maybe these resolutions could be kosher, but the reason they are not is that all of these extraordinary resolutions are new for Soitec. Take 2017, for example, when there were only 6 plain resolutions. Something has changed.

6 routine resolutions in 2017.

But besides the extraordinary powers the resolutions gave the board, there’s more at play. I now want to focus on the executive committee’s complaint about the Compensation Committee, because this is where the other players (France) start to enter the scene.

Board Power Politics

Let’s talk about the composition of the board. The board power politics make sense when you can see who sits on which board committees. There are 5 board committees at Soitec, and the restricted strategic meeting is an ad-hoc group for acquisitions or other events. So there are really 4 standing committees and 1 ad-hoc committee: the Strategic Committee, the Audit Committee, the Nomination Committee, and the Compensation Committee.

These committees are filled by 14 members, many of whom are supposed to be independent. This breaks down under further examination, as many of the “independents” are clearly affiliated with the French government. Now let’s start with the chairs of the five committees.

Eric Meurice is the Chairman of the Board, Chair of the Compensation Committee (the most powerful committee), and Chair of the Strategic Committee.

Laurence Delpy is Chair of the Nomination Committee, the chair that Eric Meurice previously held before he became Chairman of the Board. She’s independent, yet she sits on all 5 committees.

Lastly, Christophe Gegout is chair of the Audit and Risk Committees. He’s supposed to be an independent member, but he used to work for CEA, aka the large French consortium with a meaningful stake in Soitec. He doesn’t work there anymore, but they’re clearly playing fast and loose with the definition of the word “independent”.

Let’s look at the actual composition of the committees and identify 1) who is in what committee and 2) which committees matter. I made a simple graphic based on the filing with a legend that explains where everyone’s allegiances lie. In order of importance, it’s the Compensation, the Nomination, the Strategic & Restricted Strategic, and lastly the Audit committee. I broadly categorized the board members into the 4 “teams”, aka Team France, Team China, Independent, and Employee Directors. Note the legend in the picture below.

Notice the legend in the photo above. Team Blue / France is the one to watch

Let’s start backward. The green team is employee directors and is part of a movement to get labor union leaders on the board so employees have more say in the company writ large. For the sake of this analysis, I consider them non-players, as this is their first year on the board, and they don’t sit on committees.

Next is the team “actually independent”. Note that Satoshi Onishi works for Shin-Etsu, so he’s independent as in he’s representing his company’s JV with Soitec. He is not a big player. In Shuo Zhang’s case, I cannot make any meaningful connections to anyone else. Paul Boudre of course is the outgoing CEO. Notice that he does not sit on any important committees.

That brings me to Team China. Team China is the NSIG (National Silicon Industry Group) block, which took a 14.5% stake in Soitec in May 2016, since diluted to a 10.34% stake. NSIG, like many large Chinese firms, is an extension of the CCP. They hold two seats, which I show in red. Kai Seikku actually sits on the powerful committees, aka the Nomination and Compensation Committees. But importantly, Jeffrey Wang has been pushed out to the unimportant Audit Committee.

Last is Team France. With the exception of Francoise Chombar, they are all French Nationals. I put Francoise Chombar on Team France because she shares a board with Eric Meurice at Umicore, so I assume she’s on his team.

Everyone else either works for CEA (the French Alternative Energies and Atomic Energy Commission) or Bpifrance (the French public investment bank), formerly worked there, or looks suspiciously connected (Laurence Delpy is clearly important, but I have no links other than that she worked at Alcatel-Lucent, where Pierre is from). Thierry Sommelet, for example, works at Bpifrance.

Importantly, Team France holds 6 of the 8 seats on each of the two most powerful committees, Nomination and Compensation. And everyone not affiliated with Team France conveniently sits outside these committees, with the exception of Kai Seikku, who represents the powerful ~10.34% share block from NSIG. Team France is clearly in control here, and the BPI and CEA seats are permanent, with rotating members but consistent committees.

What I’m trying to say is that Soitec’s board is controlled by a very small number of players, all of whom can be linked to France. Obviously, these members want to protect the interests of France, and what’s more, most of the moves taken by the board pre-date Eric’s arrival. So it’s clearly not Eric in charge, but rather the representatives of France who are driving this bus!

What’s France Got to Do With It?

When I first started down the rabbit hole of the Eric Meurice takeover, I thought the motivation was pretty simple. Eric Meurice, ousted from ASML, was looking for another kingdom to rule and found it in the form of Soitec. A clear power play, as highlighted by the management letter. After all, we knew he already had ambitions to sit in the CEO seat again, per this ST Micro rumor. In a series of successive moves, he rose from Director to Chairman, and he pushed his way to the top.

There are a few problems with that theory, and they come in two bold flavors. First, the year Eric Meurice was appointed to the board as a director (July 2018) was the first year of expanded extraordinary resolutions. The count went from 8 total resolutions the previous year to a whopping 23. So the expanded board powers actually pre-date Eric’s arrival on the board. Eric Meurice was just the conduit for the control of the board.

The second thing was an extremely crucial disclosure about a standstill agreement with NSIG that clued me in that something else was happening. When NSIG (National Silicon Industry Group, aka China) bought the 14.5% stake in Soitec, it agreed to a standstill on the shares.

The standstill agreement ensured that NSIG (China) would not continue to raise its stake in Soitec and effectively take over the company. It’s a takeover provision that stopped their large influence from increasing. The French board members were clearly aware of this.

But that standstill agreement ended on June 7, 2019, one year after Eric’s rise to power. This explains why he entered when he did. And this tidbit made everything clear(er) regarding the resolutions.

Should NSIG Sunrise S.à.r.l acquire shares in the Company before the expiration of the Shareholders’ Agreement at the close of the Shareholders’ General Meeting called to approve the financial statements for the fiscal year ended March 31, 2021, it would lose its rights relating to the Company’s governance

Team France knew they needed to lock down the company from a governance standpoint before March 31, 2021, or risk further influence from NSIG aka China. And Eric Meurice was the perfect man for the job. Win-win.

Besides, Paul Boudre was ready to go. He terminated a paused employment contract so that the company wouldn’t have to pay him a termination fee, and they rewarded him with an incentive (I find this weird). He clearly signaled he was on the way out. Readers of the filings could have spotted his retirement as early as 2020; it was just the executive team that was blindsided.

But that brings us to what really happened. France Nationalized Soitec through a series of board moves, just in time before the Chinese government could push for more control. The CEO put in place is just a placeholder for the board.

National Champions Need Nationalization

How are we surprised that France wanted to nationalize Soitec?! This is France we are talking about here! It’s clear that there’s a vested interest in keeping Soitec French-controlled. The expanded board powers also expanded government influence over Soitec. Soitec has effectively become a state-controlled enterprise.

Look at the shareholder base. The ~17.67% French-controlled block was large, but not completely dominant given the large NSIG (aka China) block. A Chinese takeover of France’s semiconductor star would be devastating (not to mention embarrassing). Team France could not let this happen.

Taken together, the board actions suggest this was likely premeditated by the controlling shareholder – the state of France – to further control Soitec. Paul’s retirement was just the catalyst to push ahead the changes that had already been made. So what about our underqualified CEO candidate, Pierre Barnabe?

What I found really curious was that Pierre is on the board of INRIA, the Institute for Research in Computer Science and Automation. It’s a government-led entity whose core purpose is to further France’s technology interests. Through the lens of French national control, Pierre is a perfect CEO.

What Now?

So now France effectively owns Soitec. What are they going to do with it? Paul left the company on a strong financial note and Soitec is a springboard for other national pursuits, like manufacturing Silicon Carbide. Manufacturing Silicon Carbide would be terrible for Soitec the business, but great for France’s national ambitions.

Another likely option is that the board can use its expanded powers to purchase a fab in Europe, which would be perfect given Soitec produces wafers. Now France produces wafers and chips! There is a small design team within Soitec, and expanding that could also further French national interests. That’s a full stack semiconductor company with just 2 additional acquisitions.

All of this makes sense in the now geopolitically driven world of semiconductors. As of late, there have been multiple announcements for new mega-fabs, like the Intel Ohio fab or the new TSMC Japan fab. Every country is doing what it can to shore up its semiconductor businesses, and France couldn’t let China steal their national champion.

Conclusions and Questions

Soitec is effectively nationalized through board control. It makes sense given that China had a window to push for control, so instead, France just took the whole thing. France, Soitec’s largest shareholder, put everyone in a place of power in order to achieve this. What looked like a power-hungry move by a single actor, Eric Meurice, really was a coordinated win-win to control the company for France.

While I’m sure Soitec’s nationalization will be disappointing for free-market capitalists (where they at?), it’s not surprising at all given the current semiconductor climate. It’s a de facto state-controlled company now. The takeaway: national-level politics continue to matter in the semiconductor industry. This theme is not going away any time soon. Soitec is just the latest and greatest in the series. Goodbye Soitec, hello French national semiconductor company.

There are some loose ends. How does NSIG feel about this? Did Paul Boudre know about any of this? There are plenty of other interesting threads in this story, but it’s clear what happened: France nationalized its largest semiconductor company!


If you enjoyed this piece, please consider subscribing. Even the free tier gets occasional posts. I try to write about semiconductor companies broadly from an investment perspective, so this investigative journalism is a bit different.

Oh, by the way: if you are a past ASML employee from Eric’s time there, or a current Soitec employee who would love to talk, you can reach me at Doug@fabricatedknowledge.com.


Some Unfinished Business

I want to discuss parts of the story that didn’t fit neatly with the rest of the power politics at Soitec. The question I presume most people are wondering about is Pierre Barnabe: why is he not qualified, again?

I really had a problem with this statement from Soitec management.

He draws upon a remarkable track record that includes a threefold revenue increase at the Atos Big Data and Cybersecurity division in the space of a few years, in a highly competitive market requiring deep cooperation with the ecosystem.

Part of that threefold growth came from 38 acquisitions along the way. Is that execution, or is that just buying three times more revenue? Also, the stock and the business are horrid: the stock is down 63% over the last five years, and revenue growth is tepid at best.

Oh, and STMicroelectronics has a deep bench of French national semiconductor executives, any of whom would have been a perfect fit. Given Pierre’s in with the government, he definitely did not get the job on the basis of merit.

What about Paul Boudre?

One of the weirdest side stories is Paul Boudre’s exit. By voluntarily giving up his contract and then getting paid to do so, I think Paul had gotten wind of the changes but didn’t care, since he knew he was on his way out. And even if Paul had wanted to change things, he was locked out of the committees that matter (compensation and nomination), so he wouldn’t have been privy to what happened anyway.

I think the people most blindsided are the executive committee. I get why their reaction is so strong, but their outrage over Paul’s replacement and the lack of internal promotion misses the bigger national-interest story in what they thought would be a routine succession. The COO likely expected a promotion, and other executives would in turn have moved up behind him. I feel bad for them; it’s frustrating to be passed over at a company they clearly helped build.


Appendix: A bit more on SOI

Silicon on Insulator (SOI) is a technology that embeds an insulator below the surface of a silicon wafer. Usually this is done via ion implantation, a “smart cut,” and a flip of the substrate, so that the buried insulator sits below the device layer. This meaningfully improves overall performance.

Soitec is the only volume manufacturer in the world; its competitors license Soitec’s technology. Soitec believes its global market share in SOI wafers is roughly 77%.




Musk: Colossus of Roads, with Achilles’​ Heel
by Roger C. Lanctot on 01-23-2022 at 8:00 am


For Tesla, 2021 was an amazing year. A blind spot looms in 2022.

Critics cheered the National Highway Traffic Safety Administration for opening multiple investigations into fatal and near-fatal Tesla crashes. Legislators decried the de facto beta testing of Tesla’s Full Self-Driving software on public roads. And in December, click-bait headlines shone a spotlight on a distracting in-dash video game function, which Tesla had enabled and later disabled, ending the controversy.

While the critics howled, fans flocked. Tesla closed out the year by reporting 936,000 units sold globally and garnering a top safety pick from the Insurance Institute for Highway Safety for the Model Y, following up a similar assessment for the Model 3. For now, for Tesla, it looks like nothing but green lights – but there is trouble ahead.

Tesla’s CEO, Elon Musk, is famous for pooh-poohing technologies that he holds in ill regard – among them: hydrogen fuel, lidar sensors, and wireless V2X communications. His dim assessment of lidar stands in stark contrast to an industry-wide embrace of the technology, which might have helped Tesla vehicles avoid crashing into police cars and emergency vehicles parked on highway shoulders.

Musk claims that lidar is unnecessary. What he is really saying is that lidar is expensive and he is trying to control the cost of his vehicles. For Musk, cameras and perhaps a little bit of radar are good enough.

In the same way, Musk has routinely dismissed vehicle-to-everything wireless communication technology enabled by cellular V2X (C-V2X) or dedicated short range communication (DSRC). Musk’s position is that autonomous or semi-autonomous vehicles must work without a network.

Musk’s opposition to wireless-enhanced autonomous operation puts him and his company outside of a vast industry collaboration working toward the leveraging of wireless technology to enhance vehicle safety. The onset of 5G, which brings with it V2X functionality, promises to enable a wide range of collision avoidance applications including the protection of vulnerable road users and the communication of the signal phase and timing of traffic lights.

These capabilities are arriving today in cars in China and, soon, in the U.S. as C-V2X technology sees swift adoption from car makers including Ford Motor Company and Audi, among others. These car companies understand that C-V2X will allow their cars to communicate their location and avoid collisions.

Meanwhile, manufacturers of infrastructure equipment are increasingly shifting to C-V2X tech to enable infrastructure-to-vehicle communications. Here, too, the objective is safety and collision avoidance.

Musk is resisting, and Tesla is steadily diverging from the rest of the industry. While Tesla has led the way toward the widespread adoption of wireless over-the-air software updates, the company has neglected to use wireless technology for safety purposes.

Tesla still lacks an automatic crash notification function (outside of Europe) equivalent to General Motors’ OnStar. And Tesla’s traffic light recognition application, part of the Full Self-Driving beta, is entirely dependent on vehicle-mounted cameras and the vigilance of the driver for safe operation.

Musk’s camera-centric approach (enhanced with ultrasonic and radar sensors), which helped propel the company to the forefront of semi-autonomous vehicle development, has clearly reached the limits of its efficacy. We’ve all seen the videos of Teslas mistaking Burger King signs, or even the moon, for stop signs. The multiple crashes involving fixed objects in the roadway speak volumes – to consumers and regulators.

The market leader in camera technology, Intel’s Mobileye, has dropped the camera-only pretense and is developing its own lidar while cooperating, in the short term, with lidar supplier Luminar. That is a multimillion-dollar shift by Mobileye. Tesla has made amazing strides with camera technology, but has now clearly reached the end of that road.

To keep up with developments coming fast throughout the rest of the automotive industry, Tesla needs to embrace the integration of wireless technology into safety applications for collision avoidance and emergency response. What Tesla is missing by failing to leverage wireless is the ability to extend the safety sensing horizon of the car beyond the line of sight of its cameras – including over hills and around corners.

Without wireless, Tesla will also remain blind to emerging wireless alerting solutions for everything from the movement of emergency vehicles (Haas Alert Safety Cloud) to vulnerable road users, wrong-way drivers, and, most notable of all, the presence and signal phase and timing of traffic lights. Tesla has proven that it cannot solve these challenges with cameras alone.

There isn’t much that Tesla is getting wrong – from fast charging to direct sales to software updates to battery gigafactories. Wireless connectivity, for Tesla, remains a weakness.

Also read:

RedCap Will Accelerate 5G for IoT

Traceability and ISO 26262

Siemens EDA Automotive Insights, for Analysts