
What is Your Ground Truth?
by Roger C. Lanctot on 08-14-2022 at 6:00 am


When my son bought a 2020 Chevrolet Bolt EV a couple of years ago, I was excited. I wanted to see what the Xevo-supplied Marketplace (contextually driven ads and offers) looked like and I was also curious as to what clever navigation integration GM was offering.

I was swiftly disappointed to discover that Xevo Marketplace was not an embedded offering but rather an app to be projected from a connected smartphone. As for navigation, it wasn’t available for my son’s MY2020 Bolt even as an option.

I was stunned for two reasons. First, I thought the Marketplace app was intended as a brand-defining service which would surely be embedded in the vehicle and integrated as part of the home screen as an everyday driving application. Second, how could GM ship an EV that was unable to route the driver to the nearest compatible and available charging station via the on-board navigation system?

These discoveries made me wonder whether I was witnessing the appification of in-vehicle infotainment. Why shouldn’t cars simply ship with dumb terminals that draw all of their intelligence and content from the driver’s/user’s mobile device?

Further fueling this impression was GM’s subsequent introduction of Maps+, a Mapbox-sourced navigation app based on OpenStreetMap with a $14.99/month subscription. That was followed by the launch of Google Built-In (Google Maps, Google Play apps, Google Assistant) as a download with a Premium OnStar plan ($49.99/month) or Unlimited Data plan ($25/month) – seems confusing, right?

The truly confusing part, though, isn’t the variety of plans and pricing combos, it is the variable view of reality or ground truth. Ground truth is the elusive understanding of traffic conditions on the road ahead and how that might impact travel and arrival times. Ground truth can also apply to the availability of parking and charging resources. (Parkopedia is king of parking ground truth, according to a recent Strategy Analytics study: https://business.parkopedia.com/strategy-analytics-us-ground-truth-testing-2021?hsLang=en )

The owner of a “lower end” GM vehicle – with no embedded navigation option – will have access to at least three different views of ground truth: OnStar turn-by-turn navigation, Maps+ navigation, and Google- and/or Apple-based navigation. (Of course Waze, HERE WeGo, and TomTom apps might also be available from a connected smartphone.)

Each of these navigation solutions will have different views of reality and different routing algorithms driven by different traffic and weather data sources and assumptions. Am I, as the vehicle owner and driver, supposed to “figure out” which is the best source? When I bought the car, wasn’t I paying GM for its expertise in vetting these different systems?
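
To make the point concrete, here is a minimal sketch (hypothetical roads and travel times, not data from any real provider) showing how the same shortest-path algorithm can recommend different routes when fed two providers’ differing traffic estimates:

```python
# Minimal sketch: the same routing algorithm (Dijkstra) fed two hypothetical
# providers' traffic estimates can return different "best" routes.
# All road names and travel times are illustrative, not real data.
import heapq

def dijkstra(graph, start, goal):
    """Return (total_minutes, path) for the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Same road network, two providers' views of current travel times (minutes).
provider_a = {"home": {"highway": 10, "surface": 15},
              "highway": {"office": 12},
              "surface": {"office": 9}}
provider_b = {"home": {"highway": 10, "surface": 15},
              "highway": {"office": 25},   # provider B sees congestion here
              "surface": {"office": 9}}

for name, graph in [("Provider A", provider_a), ("Provider B", provider_b)]:
    minutes, route = dijkstra(graph, "home", "office")
    print(f"{name}: {' -> '.join(route)} ({minutes:.0f} min)")
```

Same car, same roads, two different “ground truths” – and two different recommended routes.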

What about access to the data and the use of vehicle data? Is my driving info being hoovered up by some third party? And what is ground truth when I am being given varying lenses through which to grasp it?

Solutions are now available in the market – from companies such as TrafficLand in the U.S. – that are capable of integrating still images and live video from traffic cameras along my route, allowing me to better understand the routing decisions my car or my apps are making for me. The optimum means for accessing this information would be through a built-in navigation system.

GM continues to offer built-in or “embedded” navigation across its product line with a handful of exceptions – such as entry-level models of the Bolt.  Embedded navigation – usually part of the integrated “infotainment” system – is big business, representing billions of dollars in revenue from options packages for auto makers.

More importantly, the modern day infotainment system – lately rendered on a 10-inch or larger in-dash screen – is a critical point of customer engagement. The infotainment system is the focal point of in-vehicle communications, entertainment, and navigation – as well as vehicle status reports.

Vehicle owners around the world tell my employer – Strategy Analytics – in surveys and focus groups that the apps that are most important to them while driving relate to traffic, weather, and parking. Traffic is the most important, particularly predictive traffic information, because this data is what determines navigation routing decisions.

Navigation apps do not readily disclose their traffic sources, but it is reasonable to assume that a navigation app with HERE or TomTom map data is using HERE- or TomTom-sourced traffic information. Google Maps has its own algorithms, as do Apple and Mapbox – but, of course, there is some mixing and matching between the providers of navigation, maps, and traffic data.

This is all the more reason why access to TrafficLand’s real-time traffic camera feeds is so important. Sometimes seeing is believing and TrafficLand’s traffic cameras are, by definition, monitoring the majority of known traffic hot spots across the country.

When the navigation system in my car wants to re-route me – requesting my approval – I’d like to see the evidence to justify a change in plans. Access to traffic camera info can provide that evidence.

I can understand why GM – and some other auto makers such as Toyota – have opted to drop embedded navigation availability from some cars as budget-minded consumers seek to pinch some pennies. But the embedded map represents the core of a contextually aware in-vehicle system.

There is extraordinary customer retention value in building navigation into every car – particularly an EV. The fundamental principles of creating safe, connected cars call for the integration of a location-aware platform including navigation.

Deleting navigation may be a practical consideration as attach rates decline, but it’s bad for business. In fact, there is a bit of a head-snapping irony in GM or Toyota or any auto maker deleting embedded navigation in favor of a subscription-based navigation experience from Mapbox or Google. These car makers are telling themselves that the customers least able to pay for built-in navigation will be willing to pay a monthly subscription for an app. I think not.

This is very short-term thinking. Location awareness is a brand-defining experience and auto makers targeting “connected services” opportunities will want to have an on-board, built-in navigation system. If not, the auto maker that deletes built-in navigation will be handing the customer relationship and the related aftermarket profits to third parties such as Apple, Amazon, and Google. That’s the real ground truth.

Also Read:

What’s Wrong with Robotaxis?

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform


Podcast EP100: A Look Back and a Look Ahead with Dan and Mike
by Daniel Nenni on 08-12-2022 at 10:00 am

Dan and Mike get together to reflect on the past and the future in this 100th Semiconductor Insiders podcast episode. The chip shortage, foundry landscape, Moore’s law, CHIPS Act and industry revenue trends are some of the topics discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Kai Beckmann, Member of the Executive Board at Merck KGaA
by Daniel Nenni on 08-12-2022 at 6:00 am


Kai Beckmann is a Member of the Executive Board at Merck KGaA, Darmstadt, Germany, and the CEO of Electronics. He is responsible for the Electronics business sector, which he has been leading since September 2017. In October 2018, Kai Beckmann also took over the responsibility for the Darmstadt site and In-house Consulting. In addition, he acts as the Country Speaker for Germany with responsibility for co-determination matters.

Prior to his current role, Kai Beckmann was Chief Administration Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Group Human Resources, Group Business Technology, Group Procurement, In-house Consulting, Site Operations and the company’s Business Services, as well as Environment, Health, Safety, Security, and Quality.

In 2007, he became the first Chief Information Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Corporate Information Services. From 2004 to 2007, he served as Managing Director of Singapore and Malaysia, and prior to that he held senior executive responsibility for the Information Management and Consulting unit from 1999 to 2004. He began his career at Merck KGaA, Darmstadt, Germany in 1989 as an IT system consultant.

Kai Beckmann studied computer science at the Technical University of Darmstadt from 1984 to 1989. In 1998, he earned a doctorate in Economics while working. He is married and has one son.

Tell us about EMD Electronics

Merck KGaA, Darmstadt, Germany, operates across life science, healthcare, and electronics. More than 60,000 employees work to make a positive difference in millions of people’s lives every day by creating more joyful and sustainable ways to live. In 2021, Merck KGaA, Darmstadt, Germany, generated sales of € 19.7 billion in 66 countries. The company holds the global rights to the name and trademark “Merck” internationally. The only exceptions are the United States and Canada, where the business sectors of Merck KGaA, Darmstadt, Germany, operate as MilliporeSigma in life science, EMD Serono in healthcare, and EMD Electronics in electronics.

As EMD Electronics we are the company behind the companies advancing digital living. Our portfolio covers a broad range of products and solutions, including high-tech materials and solutions for the semiconductor industry, as well as liquid crystals and OLED materials for displays and effect pigments for coatings and cosmetics. We offer the broadest portfolio of innovative materials in the semiconductor industry and support our customers in creating industry-leading microchips. In the US, EMD Electronics alone has approximately 2,000 employees across the country, with more than a dozen manufacturing and R&D sites spanning the continental U.S.

Last year you announced a $1 billion investment in the US to support semiconductor customers. Can you tell us more about these investments?

These investments are part of our global program called “Level Up” for investing in R&D, capacity, and accelerating growth in the semiconductor and display markets. Over the next five years, we plan to spend around $2.5 billion globally in long-term fixed assets (capital expenditures) in Semiconductor and Display Solutions. In the U.S., EMD Electronics plans to invest primarily in its Arizona, California, Texas, and Pennsylvania sites. Last year, we announced global investments of more than $3.5 billion as part of our “Level Up” growth program. With this, we seek to capture the growth opportunities that come with the significantly accelerating global demand for innovative semiconductor and display materials. This demand is driven by exponential data growth and highly impactful technology trends that include remote working, the growth of AI, and soaring demand for electric vehicles. Our “Level Up” growth program focuses on four mutually reinforcing key priorities: Scale, Technology, Portfolio, and Capabilities. Further investing in these four areas builds the foundation of our ambitious growth targets, in conjunction with the strong demand for electronics materials, particularly semiconductors.

Sustainability is becoming increasingly important across the industry. What is Merck KGaA, Darmstadt, Germany, and especially the business of EMD Electronics doing to ensure a sustainable future?

We believe that we can harness science and technology to help tackle many global challenges. Always guided by a robust set of values, we approach all our actions and decisions with a sense of responsibility. Sustainability has therefore been vital to us for many generations. We can only ensure our own future success by also creating lasting added value for society.

In creating long-term added value for society, we have defined three goals within our sustainability strategy. By 2030, we will achieve progress for more than one billion people through sustainable science and technology, along with integrating sustainability into all our value chains. By 2040, we will be climate-neutral and will have reduced our resource consumption. Most of our greenhouse gas emissions stem from process-related emissions during the production of specialty chemicals for the electronics industry. With improved processes, Merck can significantly reduce those emissions in the future.

As a materials supplier for the electronics industry, we are a key enabler of sustainable innovation. We are addressing emissions through abatement and alternatives. For example, we are now designing a process for large-scale NF3 abatement as a prerequisite to meet our long-term GHG goals and to drive decarbonization in the industry. In addition to optimizing our own processes to find more sustainable solutions, we are also a trustworthy partner for our customers, supporting them on their sustainability journey. Just recently we announced our collaboration with Micron, testing an alternative low-GWP etch gas of ours, further aligning on our shared sustainability goals.

You recently attended Semicon West. What were your reactions to being back in person with customers at a trade show in the US, and what announcements or innovations were you most excited about?

I truly appreciated being able to re-connect with many great people from all over the world face-to-face. SEMICON West is the place of choice to exchange views on key topics in our industry, tackle industry challenges, and establish partnerships and collaborations. The vitality of innovation never stops, and it’s wonderful to see the progress the industry is making. It is fascinating to see how the industry is driving innovations in new materials in fields such as 3D NAND, FinFET, nanosheet, and EUV to continuously make devices more intelligent, power efficient, and smaller. With the move to 3D, shrinking is no longer the most cost-effective way to increase density. Feature sizes for 3D chips will no longer shrink and may even increase, as they already have for 3D NAND. I also heard several times that “etch could become the new litho”. Since Merck is supplying materials for all parts of the manufacturing process – litho, deposition, etch, CMP, cleans, you name it – we are well positioned to participate in the continued growth story that is Moore’s Law, 2nd edition. Additionally, we appreciate that sustainability is becoming more and more important in our industry, where we are a well-respected partner for our customers.

Finally, let me mention data analytics as one driving force for the industry. We combine a data-driven approach with physics-based expertise. In December last year we formed the independent partnership Athinia together with Palantir to deliver a secure collaborative data analytics platform for the semiconductor industry. The Athinia platform will leverage AI and big data to solve critical challenges and improve quality, supply chain transparency, and time to market. At SEMICON West, Athinia announced that Micron Technology plans to use the data analytics platform to create a pioneering data collaboration ecosystem that will help lead a continued journey of digital transformation with Micron’s critical suppliers.

Advancing Digital Living has data at its core, data that will be continually leveraged in the coming decade. Our teams pioneer digital solutions that ensure we can deliver high-caliber, customized quality control that allows for optimal material performance. Our digital solutions team also serves customers in predictive quality analysis. Our approach starts at the production level, which is at the center of the supply chain, interacting with customers and partners. We gain learnings from the use of the right technology or system, then adapt and scale as needed, ultimately allowing us to identify which characteristics led to the “golden batch”. This also helps accelerate future material development, as we systematically and periodically transfer those learnings into R&D. By the way, minimizing quality-based excursions also offers sustainability benefits, minimizing wasted product and suboptimal paths through supply chains.

For more information click HERE.

Also read:

CEO Interview: Jaushin Lee of Zentera Systems, Inc.

CEO Interview: Shai Cohen of proteanTecs

CEO Interview: Barry Paterson of Agile Analog


Understanding Sheath Behavior Key to Plasma Etch
by Scott Kruger on 08-11-2022 at 10:00 am

[Figure: Plasma etching process illustration]

Readers of SemiWiki will be well aware of the challenges the industry has faced in photolithography in moving to new nodes, which drove the development of new EUV light sources as well as new masking techniques.  Plasma etching is another key step in chip manufacturing that has also seen new challenges in the development of new sub-10nm processes.

Plasmas, the fourth state of matter, are formed by filling a vacuum chamber with a low-pressure gas and using electromagnetic energy inputs to ionize it: electrons are stripped from the atoms and become unbound. Because electrons are more than a thousand times lighter than the ions, they move quickly relative to the ions. At the wafer surface, the electrons quickly strike the wafer and are depleted. A steady-state electric field region known as the sheath forms to balance the current losses. It is this boundary layer that gives the plasma many of its useful properties in manufacturing, such as plasma-enhanced vapor deposition, plasma ashing (to remove the photoresist), or our focus here, plasma etching.

Plasma etching, also known as dry etching, was a breakthrough for achieving the anisotropic etches needed to produce deep features. As seen below, the input gas type and volume, the applied voltage amplitude and waveforms, and the reactor geometry can all be varied, giving considerable flexibility in plasma etching reactors. Common reactor types are capacitively coupled reactors, which use a single voltage source; reactive-ion etch (RIE) reactors, which have multiple electrodes to independently control the reactive ions for selective etching; and inductively coupled plasma RIE (ICP-RIE) reactors, which use higher frequencies to enable higher densities and faster etch rates. A plasma etch reactor thus has a wide range of input parameters, giving a large design and operating space for solving a given manufacturing problem.
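
To give a feel for the scales involved, the sketch below uses textbook plasma relations (with illustrative parameter values, not tied to any particular reactor) to estimate the electron Debye length and a Child-law sheath width – the boundary layer whose fields set the energy and angle of the ions striking the wafer:

```python
# Minimal sketch (illustrative numbers, not any specific reactor): estimate the
# electron Debye length and a Child-law sheath width from textbook relations.
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity
QE   = 1.602e-19   # C, elementary charge

def debye_length(n_e, te_ev):
    """Electron Debye length in meters; n_e in m^-3, Te in eV."""
    return math.sqrt(EPS0 * te_ev / (n_e * QE))

def child_law_sheath(n_e, te_ev, v_sheath):
    """High-voltage sheath width: s ~ (sqrt(2)/3) * lambda_D * (2*V/Te)^(3/4)."""
    return (math.sqrt(2) / 3) * debye_length(n_e, te_ev) * (2 * v_sheath / te_ev) ** 0.75

# Assumed values: plasma density 1e16 m^-3, electron temperature 3 eV, 300 V sheath.
n_e, te_ev, v_sheath = 1e16, 3.0, 300.0
print(f"Debye length: {debye_length(n_e, te_ev) * 1e6:.0f} um")                          # ~130 um
print(f"Child-law sheath width: {child_law_sheath(n_e, te_ev, v_sheath) * 1e3:.1f} mm")  # a few mm
```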

As the dimensions of semiconductor devices have become smaller, the implications for plasma etching have changed in multiple ways. Current advanced nodes have dramatically increased the film stack complexity for advanced logic. Other areas that will see increased challenges in scaling are the 3D NAND structures in advanced memory and advanced packaging with its complex routing needs. For example, in the figure below, a schematic for a proposed RDL from Lau et al. [1] is shown along with a scanning electron microscope image of the through-silicon via (TSV). This TSV, created using a Bosch-type Deep Reactive Ion Etch (DRIE), has an aspect ratio of 10.5, demonstrating the deep anisotropy modern etch reactors are capable of. In these and other areas, the importance of being able to understand the details of the plasma etch has increased. For plasma etching, the critical issues are the degree of anisotropy (vertical versus horizontal etching), the shape of trenches (straight versus tapered or bowed, as seen in the figure below), and etch uniformity. This is in addition to traditional concerns such as etch rate and uniformity over the entire wafer, which are critical for high yields and economics.

Controlling plasmas is difficult because they are complex, chemically reactive gases that interact with the semiconductor material in complex ways. Simulations have long been important in understanding plasma behavior in etch reactors. The three basic modeling paradigms are drift-diffusion, hydrodynamic (or fluid), and kinetic. These models are directly equivalent to the types of models used for electron transport in TCAD algorithms for studying semiconductor devices. A key difference here is that the ions move freely rather than forming a solid-state lattice, and chemical reactions are also critical for understanding the plasma formation, properties, and etching ability.

Drift-diffusion and fluid models are widely used to determine the overall energy balance and basic plasma properties. However, kinetic codes are critical for understanding the details of the plasma etching process. The degree of etch anisotropy is determined fundamentally by the energy and angle of ions as they strike the wafer, a quantity that is strongly dependent on the plasma sheath. The complexities of the sheath cannot be fully resolved with drift-diffusion and fluid models but require a kinetic code. Kinetic modeling is especially useful for gaining insight into the plasma uniformity and the degree of anisotropy of the etching process.

Tech-X Corporation has developed VSim, a kinetic modeling tool for simulating plasma etch reactors. Tech-X, located in Boulder, Colorado, has worked in high-performance computing for plasma physics for almost three decades. High-performance computing enables the details of ion and electron behavior to be computed across manufacturing-relevant spatial scales that are large relative to fundamental plasma length scales. With over a decade of experience serving the wafer equipment manufacturing market, Tech-X provides the leading plasma kinetic simulation capability. More information on VSim’s capabilities is available at http://www.txcorp.com/vsim. In our next article, we will highlight enhancements in VSim 12, which will be released on September 14.

[1] Lau, J., et al. “Redistribution layers (RDLs) for 2.5D/3D IC integration.” International Symposium on Microelectronics, Vol. 2013, No. 1. International Microelectronics Assembly and Packaging Society, 2013.

Also Read:

Coverage Analysis in Questa Visualizer

Fast EM/IR Analysis, a new EDA Category

DSP IP for High Performance Sensor Fusion on an Embedded Budget


WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow
by Synopsys on 08-11-2022 at 8:00 am


The design and characterization of RF circuits is a complex process that requires an RF designer to overcome a variety of challenges. Not only do they face the complexities posed by advanced semiconductor processes and the need to meet the demanding requirements of modern wireless standards, but they must also account for electromagnetic effects that become significant at RF and mmWave frequencies. The Synopsys Custom Design Family provides a holistic solution to RF design challenges, including accurate EM modeling with industry-leading tools such as the Ansys EM tool suite, simulation and analysis of important RF measurements, productive layout creation, and RC extraction and physical verification with foundry-qualified signoff tools.

What you will Learn

In this webinar we will use a Low Noise Amplifier design to illustrate the steps needed to create state-of-the-art RF circuits. We will start with Synopsys Custom Compiler for design creation, simulation, and results analysis. Next, we will perform inductor synthesis with the Ansys VeloceRF tool and inductor modeling with Ansys RaptorX. The next step will be to create a layout with the Custom Compiler Layout Editor. Once the layout is complete, we will perform physical verification and parasitic extraction with IC Validator and StarRC and use Ansys Exalto for EM extraction of the critical nets. As a final step, we will simulate the combined extracted model with the PrimeSim SPICE simulator for post-layout verification and analyze the results in the PrimeWave Design Environment.

The Synopsys Custom Design Family is a complete front-to-back solution for all types of custom integrated circuit design. It includes Custom Compiler, a modern and productive editor for schematics and layout; the PrimeSim Continuum simulation solution for fast and accurate analog and RF simulation; and the PrimeWave Design Environment for post-processing and viewing of simulation results. It also features natively integrated signoff tools – StarRC extraction and IC Validator physical verification.

A wide variety of third-party tools are integrated with the Synopsys custom design platform, including the Ansys tools for electromagnetic modeling and extraction, which will be featured in this webinar.

The Presenters

Samad Parekh
Product Marketing Manager, Sr. Staff
Synopsys

Samad Parekh is the Product Manager for Spice Simulation and Design Environment products at Synopsys. He has 10 years of experience serving as a senior member of the Synopsys Applications Engineering team supporting Analog and Custom tools. Prior to Synopsys, Samad worked as an RF designer for 6 years designing RF and microwave circuits for the cellular and aerospace markets. Samad holds a BSEE from UCLA and MSEE from UC Irvine.

Kelly Damalou
Product Manager
Ansys

Kelly Damalou is Product Manager for the Ansys on-chip electromagnetic simulation portfolio. For the past 20 years she has worked closely with leading semiconductor companies, helping them address their electromagnetic challenges. She joined Ansys in 2019 through the acquisition of Helic, where, since 2004 she held several positions both in Product Development and Field Operations. Kelly holds a diploma in Electrical Engineering from the University of Patras, Greece, and an MBA from the University of Piraeus, Greece.

To Learn More

Please register for the webinar below:

https://www.synopsys.com/implementation-and-signoff/resources/webinars/synopsys-ansys-custom-design-flow.html

Also read:

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random

Using STA with Aging Analysis for Robust IC Designs


Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform
by Kalar Rajendiran on 08-11-2022 at 6:00 am

[Figure: SoC block diagram with EFLX eFPGA and QuiddiKey]

While the ASIC market has always had advantages over alternative solutions, it has faced boom-and-bust cycles, typically driven by high NRE development costs and long time-to-market lead times. Over the same period, the FPGA market has consistently brought out more and more advanced products with each new generation. With very high-speed interfaces offered on these products, along with the flexibility of field-programmability, these advanced FPGA products give ASICs a good run for their money.

Changing requirements have also been the tailwind behind the fast adoption of FPGAs. Being able to accommodate last-minute changes without having to re-spin the chip is a godsend in markets with fast-changing requirements. This is not to say that ASICs have lost their edge. ASICs still hold their deserved place in terms of PPA when compared against FPGA-based solutions and software solutions run on general-purpose processors. But the advent of embedded FPGA capability has brought flexibility and configurability to ASICs. By integrating embedded FPGA (eFPGA) cores into ASICs, systems can now enjoy the benefits of both ASICs and FPGAs.

What About Security?

Today, there are a number of fast-growing markets with rapidly evolving requirements, and that is great for ASICs with embedded FPGA cores. But these fast-growing markets also set a high bar for the security of data and communications. While security is always a topic of serious interest in electronics, the focus has grown with the increased use of global supply chains. With many touchpoints throughout the development and deployment phases, concerns about counterfeit chips being inserted to hijack systems are logical and valid.

That is why system security is implemented through a hardware root of trust. The hardware root of trust contains the keys for the encrypting and decrypting functions and enables a secure boot process. Because the hardware root of trust is inherently trusted, it is critical to ensure that the keys stored on the chip can never be hacked.

Can Security Be Further Enhanced?

What if security could be further enhanced by not even storing the keys on the chip? What if the keys could be individualized to the chip level rather than limited to the design/product level? The security level would indeed be greatly enhanced. This is the essence of a recent announcement by Flex Logix. By partnering with Intrinsic ID, Flex Logix is able to bring an enhanced level of security to SoCs that integrate its EFLX® eFPGA cores. The enhanced security is implemented through Intrinsic ID’s QuiddiKey, which leverages its SRAM PUF technology. Refer to the Figure below for a block-level diagram of such an SoC.

QuiddiKey

Intrinsic ID QuiddiKey® is a hardware IP solution that enables device manufacturers and designers to secure their products with internally generated, chip-unique cryptographic keys without the need for adding costly, security-dedicated silicon. It uses the inherently random start-up values of SRAM as a physical unclonable function (PUF), which generates the entropy required for a strong hardware root of trust. Since the pattern is unique to a particular SRAM, the pattern is unique to a particular chip like a fingerprint is to its owner. For more details about Intrinsic ID’s SRAM PUF technology, visit the SRAM-PUF product page. For more details about QuiddiKey, visit the QuiddiKey product page.

QuiddiKey IP can be applied easily to almost any chip – from tiny microcontrollers (MCUs) to high-performance systems-on-chip (SoCs). QuiddiKey has been validated for NIST CAVP and has been deployed and proven in hundreds of millions of devices certified by EMVCo, Visa, CCEAL6+, PSA, ioXt, and many governments across the globe. Refer to the Figure below for the major functions that the QuiddiKey IP implements.

Enhanced Security of eFPGA Platforms

In the joint Flex Logix/Intrinsic ID solution, a cryptographic key derived from a chip-unique root key is used to encrypt and authenticate the bitstream of an eFPGA. If the chip is attacked or found in the field, the bitstream of the eFPGA cannot be altered, read, or copied to another chip. That is because the content is protected by a key that is never stored and therefore is invisible and unclonable by an attacker.
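
As a purely conceptual illustration (not Intrinsic ID’s QuiddiKey implementation or API), the sketch below derives a chip-unique key from a simulated SRAM power-up pattern and uses it to authenticate a bitstream. A real PUF product also needs helper data and error correction, because raw SRAM start-up values are noisy, and it would encrypt the bitstream as well as authenticate it:

```python
# Conceptual sketch only -- not Intrinsic ID's QuiddiKey implementation or API.
# A chip-unique key is derived from the SRAM power-up pattern (the PUF) and used
# to authenticate an eFPGA bitstream.
import hashlib, hmac, os

def derive_chip_key(sram_startup: bytes, context: bytes) -> bytes:
    """Derive a 256-bit key from the PUF response for a given usage context."""
    root = hashlib.sha256(sram_startup).digest()            # chip-unique root
    return hmac.new(root, b"key-derivation|" + context, hashlib.sha256).digest()

def protect_bitstream(key: bytes, bitstream: bytes) -> bytes:
    """Append an authentication tag so any modification of the bitstream is detected."""
    return bitstream + hmac.new(key, bitstream, hashlib.sha256).digest()

def verify_bitstream(key: bytes, blob: bytes) -> bool:
    bitstream, tag = blob[:-32], blob[-32:]
    return hmac.compare_digest(tag, hmac.new(key, bitstream, hashlib.sha256).digest())

# Simulate one chip's (stable) SRAM power-up pattern and protect a bitstream with it.
sram_startup = os.urandom(1024)          # stands in for the physical PUF response
key = derive_chip_key(sram_startup, b"efpga-bitstream")
blob = protect_bitstream(key, b"\x00\x01fake-eFPGA-bitstream...")
print("verifies on this chip:", verify_bitstream(key, blob))           # True

# A different chip has a different PUF response, so the same blob fails to verify.
other_key = derive_chip_key(os.urandom(1024), b"efpga-bitstream")
print("verifies on another chip:", verify_bitstream(other_key, blob))  # False
```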

Nor is the concern about counterfeit chips being inserted into the supply chain valid any longer. Each QuiddiKey user can generate an unlimited number of chip-unique keys, enabling each user in the supply chain to derive their own chip-unique keys. Each user can protect their respective secrets, as their cryptographic keys will not be known to the manufacturer or to other supply-chain users.

For more details about how the Flex Logix/Intrinsic ID partnership is “taking eFPGA security to the next level”, refer to this whitepaper.

To learn more about Flex Logix’s eFPGA solutions visit https://flex-logix.com/efpga/.

Also Read:

[WEBINAR] Secure your devices with PUF plus hardware root-of-trust

WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?

Enlisting Entropy to Generate Secure SoC Root Keys


Podcast EP99: How Cliosoft became the leading design data management company
by Daniel Nenni on 08-10-2022 at 10:00 am

Dan is joined by Srinath Anantharaman, who founded Cliosoft in 1997 and serves as the company’s CEO. He has over 40 years of software engineering and management experience in the EDA industry.

Dan and Srinath explore the original focus for Cliosoft and how that has expanded over the years. The future of Cliosoft, as well as its plans for DAC are discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Coverage Analysis in Questa Visualizer
by Bernard Murphy on 08-10-2022 at 6:00 am


Coverage analysis is how you answer the question “have I tested enough?” You need some way to quantify the completeness of your testing; coverage is how you do that. Right out of the gate this is a bit deceptive. To truly cover a design, your tests would need to exercise every accessible state and state transition. The complexity of that task routinely invokes comparisons with the number of protons in the universe, so instead you use proxies for coverage: touching every line in the RTL, exercising every branch, every function, every assertion and so on. Each is a far cry from exhaustive coverage, but as heuristics they work surprisingly well.

Where does coverage-based analysis start?

Granting that testing cannot be exhaustive, it should at least be complete against a high-level specification, it should be reasonably uniform (no holes), and it should be efficient.

Specification coverage is determined by the test plan. Every requirement in the specification should map to a corresponding section in the plan, which will in turn generate multiple sub-requirements. All functional testing starts here. Coverage at this level depends on architecture and design expertise in defining the testplan. It can also often leverage prior testplans and learning from prior designs. Between expertise and reuse, product teams build confidence that the testplan adequately covers the specification. This process is somewhat subjective, though traceability analytics have added valuable quantification and auditability in connecting requirements and testplans.

Testing and coverage analysis then decomposes hierarchically into implementation test development, which is where most of us start thinking about coverage analysis. Each line item in the testplan will map to one or more functional tests. These will be complemented by implementation-specific tests around completeness and sequencing for control signals, registers, test and debug infrastructure, and so on.

Refining implementation coverage

Here is where uniformity and efficiency become important. When you start writing tests, you’re writing them to cover the testplan and the known big-ticket implementation tests. You’re not yet thinking much about implementation coverage. At some point you start to pay attention to coverage and realize that there are big holes. Chunks of code not covered at all by your testing. Sure, exhaustive testing isn’t possible, but that’s no excuse for failing to test code you can immediately see in a coverage analysis.

Then you add new tests, or repurpose old tests from a prior design. You add constrained random testing, all to boost coverage however you choose to measure it (typically across multiple metrics: function, line, branch for example). The goal here is to drive reasonably uniform coverage across the design to an achievable level, with no unexplained gaps.

Efficiency is also important. Fairly quickly (I hope) you get to a point where adding more tests doesn’t seem to improve coverage much. Certainly you want to find the few special tests you can add that will further advance your coverage, but you’d also like to know which tests you can drop, because they’re costing you expensive regression runtime without contributing any improvement in coverage.
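
One common way to answer both questions is a greedy ranking of tests by incremental coverage: repeatedly keep the test that adds the most not-yet-covered bins, and flag tests that add nothing new as candidates to drop. A toy sketch (made-up test names and coverage bins) follows:

```python
# Toy sketch of greedy test ranking by incremental coverage (made-up tests and bins).
# Tests that never add new coverage are candidates to drop from the regression.
def rank_tests(test_coverage):
    """Greedily order tests by how many not-yet-covered bins each adds."""
    covered, ranking = set(), []
    remaining = dict(test_coverage)
    while remaining:
        name, bins = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        gain = len(bins - covered)
        if gain == 0:
            break                        # every remaining test adds no new coverage
        ranking.append((name, gain))
        covered |= bins
        del remaining[name]
    return ranking, sorted(remaining)    # (tests to keep with their gain, drop candidates)

tests = {
    "smoke":        {"b0", "b1", "b2"},
    "rand_seed_7":  {"b1", "b2", "b3", "b4", "b5"},
    "rand_seed_8":  {"b2", "b3"},        # fully shadowed by rand_seed_7
    "corner_reset": {"b6"},
}
keep, drop = rank_tests(tests)
print("keep (with incremental bins):", keep)
print("drop candidates:", drop)          # -> ['rand_seed_8']
```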

Questa Visualizer adds coverage analysis

It should be apparent that coverage analysis is the guiding metric for completeness in verification. Questa recently released coverage visualization in their Visualizer product to guide you in coverage analysis and optimization, during the course of verification and debug. Check it out.


Intel and TSMC do not Slow 3nm Expansion
by Daniel Nenni on 08-09-2022 at 10:00 am


The media has gone wild over a false report that Intel and TSMC are slowing down 3nm. It is all about sensationalism and getting clicks no matter what damage is done to the hardworking semiconductor people, companies and industry as a whole. And like lemmings jumping off a cliff, other less reputable media outlets perpetuated this false report with zero regard for the truth.

By the way, lemmings only jump off cliffs when they become overpopulated and migrations end badly. So maybe that is what has happened here: media outlets have become overpopulated.

For the record:

-Q2 2022 Taiwan Semiconductor Manufacturing Co Ltd Earnings Call

“CC Wei: Next, let me talk about the tool delivery update. As a major player in the global semiconductor supply chain, TSMC work closely with all our tool supplier to plan our CapEx and capacity in advance. However, like many other industries, our suppliers have been facing greater challenges in their supply chains, which are extending toward delivery lead times for both advanced and mature nodes. As a result, we expect some of our CAPEX ($4B of $44B) this year to be pushed out into 2023.”

And an update on N3:

“CC Wei: Now let me talk about the N3 and N3E status. Our N3 is on track for volume production in second half of this year with good yield. We expect revenue contribution starting first half of 2023, with a smooth ramp in 2023, driven by both HPC and smartphone applications. N3E will further extend our N3 family with enhanced performance, power and yield. N3E will offer complete platform support for both smartphone and HPC applications. We observed a high level of customer engagement at N3E, and volume production is scheduled for around 1 year after N3. Our 3-nanometer technology will be the most advanced semiconductor technology in both PPA and transistor technology when it is introduced. Thus, we are confident that our N3 family will be another large and long-lasting node for TSMC.”

Yes, Intel had a challenging quarter and it will be a difficult year, but my sources say that Meteor Lake, the first disaggregated chip with an Intel 4 CPU, TSMC N3 GPU, and a TSMC N5 base die and SoC, is on track. Sapphire Rapids I do not know. Since this involves stitching multiple Intel CPU tiles together there could be challenges, but this seems to be a design/integration issue versus a process yield problem.

Pat Gelsinger has fixed the Intel process issues by changing the methodology to match what TSMC does, half nodes versus full nodes for advanced yield learning. As a result, I have complete confidence in Intel 4 and 3 moving forward as planned, absolutely.

-Q2 2022 Intel Conference Call Comments

Pat Gelsinger: For example, regaining our leadership begins with Moore’s Law and the capacity to deliver it at scale. Over the last 18 months, we’ve taken the right steps to establish a strong footing for our TD roadmap. We are well into the ramp of Intel 7, now shipping in excess of 35 million units. Intel 4 is ready for volume production in second half of this year and Intel 3, 20A and 18A are all at or ahead of schedule.

Pat also reorganized Intel design groups and decentralized them for increased autonomy. This will take time to see the results but I can assure you it was the correct thing to do.

I know having Chicken Little in the semiconductor henhouse is fun to watch, but it really is getting old. Check your sources, and if they have zero semiconductor experience, take it for what it is worth: entertainment.

And for those of you who want to know what really caused the automotive chip shortage:

“In the past two years they call me and behave like my best friend,” he told a laughing crowd of TSMC partners and customers in Silicon Valley recently. One automaker called to urgently request 25 wafers, said Wei, who is used to fielding orders for 25,000 wafers. “No wonder you cannot get the support.” CC Wei, TSMC Technical Symposium 2022.

Also read:

Future Semiconductor Technology Innovations

3D Device Technology Development

The Turn of Moore’s Law from Space to Time


Fast EM/IR Analysis, a new EDA Category
by Daniel Payne on 08-09-2022 at 6:00 am


I’ve watched the SPICE market segment into multiple approaches: Classic SPICE, Parallel SPICE, FastSPICE, and Analog FastSPICE. In a similar fashion, the same thing has just happened to EM/IR analysis: after years of waiting, we finally have a different approach that works at the top level of an IC, which I’m calling Fast EM/IR analysis. I came to this conclusion after a video call with Maxim Ershov, CEO at Diakopto, and Kelvin Khoo, COO at Diakopto.

EM/IR analysis has traditionally been approached at the transistor level or the cell level, producing very accurate results, but at the expense of long run times as the design is assembled from sub-blocks and then ultimately run at the top level. The run times have gotten “out of control” in FinFET technologies as the size of designs and the number, magnitude, and impact of parasitic elements have grown exponentially. As a result, it has become impractical to analyze EM/IR at the top level because such simulations could easily take weeks.

In addition, traditional EM/IR tools are notorious for being complicated and tedious to set up and use, with results that are hard to interpret. They can also be used only very late in the design stages, when layouts are mostly complete and hard to change.

Diakopto’s PrimeX was developed in response to customers’ need for a new category of Fast EM/IR tools capable of verifying top-level power nets, which is not feasible with other EM/IR tools. PrimeX delivers on this promise by trading off some accuracy for dramatic gains in speed, capacity, and ease of setup. But what makes PrimeX shine is its ease of use for novice users, along with insightful debugging capabilities that identify the few areas and elements (out of a sea of millions or billions) that cause EM/IR problems.

The majority of critical EM/IR problems are caused by “silly” layout mistakes, such as missing or insufficient vias, long narrow metal lines, cutouts in metal planes, poor connection of pads (cells) to the power nets, and an insufficient number or sub-optimal placement of power switches. Design teams should not need to waste a couple of weeks running fully accurate EM/IR analysis to find these mistakes.

PrimeX enables very fast resistance, EM, and IR drop analysis, thanks to a novel “approximate computing” methodology. This methodology combines a variety of techniques, such as utilizing design intent information and design hierarchy, and applying physics-based simplifications.
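
At its core, static IR drop analysis solves a resistive network: build a conductance matrix for the power net, inject the cell currents, and solve for the voltage drop at each node. The toy sketch below (a tiny hand-built series net, not the PrimeX algorithm) shows the computation plus a simple per-segment current check of the kind used for EM screening:

```python
# Toy nodal-analysis sketch of static IR drop on a tiny series power net
# (hand-built numbers, not Diakopto's PrimeX algorithm).
import numpy as np

# Net: VDD pad -- R1 -- n1 -- R2 -- n2 -- R3 -- n3, with cells sinking current at n1..n3.
r = np.array([0.05, 0.10, 0.20])        # ohms per segment (pad->n1, n1->n2, n2->n3)
i_sink = np.array([0.5, 0.3, 0.2])      # amps drawn by cells at n1..n3
g = 1.0 / r

# Conductance (Laplacian) matrix for the internal nodes; the pad is the reference.
G = np.array([[g[0] + g[1], -g[1],        0.0],
              [-g[1],        g[1] + g[2], -g[2]],
              [0.0,         -g[2],         g[2]]])
drop = np.linalg.solve(G, i_sink)       # IR drop below VDD at each node (volts)

# EM-style screen: current in each segment versus an assumed per-segment limit.
seg_current = np.array([drop[0] * g[0],
                        (drop[1] - drop[0]) * g[1],
                        (drop[2] - drop[1]) * g[2]])
print("node IR drops (V):", np.round(drop, 3))                  # -> [0.05 0.1  0.14]
print("worst-case drop (V):", round(float(drop.max()), 3))
print("segments over 0.6 A limit:", np.where(seg_current > 0.6)[0].tolist())  # -> [0]
```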

This allows users to perform multiple quick iterations and to clean up layout mistakes early in the design phase. It also allows design teams to perform EM/IR analysis for very large power nets at the top-level in a few hours, instead of weeks, to detect the overwhelming majority of layout issues that cause EM/IR problems.

As an example, all power nets for a high-speed SerDes (112-224 Gbps) implemented in 5nm technology were analyzed in an overnight run using PrimeX.

[Figure: PrimeX flow]

The tool also offers powerful debugging capabilities beyond what is commonly found in traditional EM/IR tools. It provides deep insight into EM/IR problems and reports root causes in a top-down manner: by net, by layer, by power switch, and by polygon, with color overlays on the layout.

In addition to speed and insights, PrimeX was designed from the start to be easy to use and set up. This means that it can be used by any design or layout engineer, without the need for lengthy training or for them to be experts in an EM/IR tool. In addition, PrimeX requires minimal configuration and set up, so there is no need for a dedicated expert or CAD group to bring up, maintain and support the tool and flow.

Another drawback of the traditional IC design flow is that EM/IR analysis happens quite late in the process, only after DRC/LVS for each block and for the top hierarchy level has become clean, requiring long iteration cycles, and long simulation times to get the current sources information. By this time, it is almost too late to make layout ECOs, and designers are in panic mode right before a tapeout deadline. The new methodology offered by PrimeX allows designers to analyze EM/IR earlier in the IC flow, and often prior to LVS clean, to pinpoint bottlenecks and choke points, and to clean up the design/layout much earlier.

[Figures: IR drop distribution and current density distribution in power net VDD]

According to Diakopto, PrimeX has been adopted by 8 customers, several of which have made the tool their sign-off flow for the top-level EM/IR verification. PrimeX builds on the success of Diakopto’s flagship product, ParagonX, which has been adopted by over 40 customers for debugging IC designs sensitive to layout parasitics.

The headquarters for Diakopto is in San Jose, a strategic location for a smaller EDA vendor because of the density of IC design firms in Silicon Valley. You can run the Diakopto tools on any Linux box (CentOS or Red Hat). So far PrimeX runs fast enough on a single core that a parallel approach hasn’t been necessary; one CPU is sufficient right now, but stay tuned.

Summary

EM/IR analysis tools have been around for over two decades now, and their limitations – capacity, slow run times, and the need for LVS-clean layouts – are well understood by users. What’s new is that Diakopto has pioneered a new approach, Fast EM/IR analysis, that complements the traditional EM/IR tools, delivering the fastest results, especially at the top level of an SoC built on leading-edge nodes. PrimeX is disruptive and should be welcomed by IC design teams.

Related Blogs

Bizarre results for P2P resistance and current density (100x off) in on-chip ESD network simulations – why?

Your Symmetric Layouts show Mismatches in SPICE Simulations. What’s going on?