
Digital Twins Simplify System Analysis

by Dave Bursky on 08-15-2022 at 6:00 am


The ability to digitally replicate physical systems has been used to model hardware operations for many years, and more recently, digital twinning technology has been applied to electronic systems to better simulate and troubleshoot them. As explained by Bryan Ramirez, Director of Industries, Solutions & Ecosystems at Siemens EDA, one early example of twin technology was the Apollo 13 mission back in 1970. With the spacecraft 200,000 miles away, hands-on troubleshooting of the failing subsystems was not possible. Designers tackled the challenge by using a ground-based duplicate system (a physical twin) to replicate and then troubleshoot the problems that arose.

However, such physical twins were both expensive and very large, and often had to be disassembled to reach the system that failed. By employing a digital twin of the system, designers can manipulate the software to do the analysis and develop a solution or workaround to the problem, saving time and money. A conceptual model of the digital twin was first proposed by Michael Grieves of the University of Michigan in 2002, explained Ramirez, and the first practical definition of the digital twin stemmed from work at NASA in 2010 to improve physical-model simulations for spacecraft.

Digital twins allow designers to virtually test products before they build the systems and complete the final verification. They also allow engineers to explore their design space and even define their system of systems. For example, continued Ramirez, using digital twin technology to model autonomous driving can help shape the design of the electronic control systems. Using twin technology, designers can develop models that simulate the sensing, computation, and actuation of the autonomous driving system, and they can "shift left" the software development. This gives the designer the ability to go from chip-to-car-to-city validation without hardware, which reduces costs and design spins while allowing designers to optimize system performance.
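As a purely illustrative sketch (not any Siemens tool flow), the sense-compute-actuate loop described above can be modeled as a toy closed-loop simulation; every function name and constant below is hypothetical:

```python
# Toy "digital twin" of a sense-compute-actuate loop: a simulated
# vehicle tracks a target speed with a proportional controller,
# standing in for real ECU software under test. All names and
# constants are illustrative, not from any real tool.

def sense(true_speed, noise=0.0):
    """Sensor model: return the measured speed (noise omitted for determinism)."""
    return true_speed + noise

def compute(measured, target, gain=0.5):
    """Controller model: proportional command toward the target speed."""
    return gain * (target - measured)

def actuate(speed, command, dt=0.1):
    """Plant model: apply the command over one time step."""
    return speed + command * dt

def run_twin(target=30.0, steps=200):
    """Run the closed loop and return the final speed."""
    speed = 0.0
    for _ in range(steps):
        measured = sense(speed)
        command = compute(measured, target)
        speed = actuate(speed, command)
    return speed
```

Swapping in richer sensor, controller, or plant models without touching the loop structure is the essence of exploring a design space virtually before hardware exists.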

Additional benefits of digital twin technology include the ability to include predictive maintenance, remote diagnostics, and even real-time threat monitoring. In industrial applications, real-time monitoring, feedback for continuous improvement, and feed-forward predictive insights are key benefits of leveraging the digital twin approach (see the figure). Factory automation can also benefit by using the digital twin capability for simulating autonomous guided vehicles, interconnected systems of systems, as well as examining security, safety, and reliability aspects.

Extrapolating future scenarios, Ramirez suggests that the digital twin capability can simulate the impossible. One such example is an underwater greenhouse dubbed Nemo’s Garden. In the simulation, the software can accelerate innovation by removing the limitations of weather conditions, seasonality, growing seasons, and diver availability.

All these simulation capabilities are the result of improved compute capabilities, which, in turn, are the result of higher-performance integrated circuits. Additionally, as the IC content in systems continues to increase, it becomes easier to simulate/emulate the systems as digital twins. However, as chip complexity and cost – especially the cost of respins – continue to increase, so does the need for digital twins to better simulate the complex chips and thus avoid costly respins. The challenges that digital twin technology faces include creating models for the complex systems, developing multi-domain and mixed-fidelity simulations, setting standards for data consistency and sharing, and performance optimization. These are issues that the industry is working hard to address.

For more information go to Siemens Digital Industries Software

Also read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC


Time for NHTSA to Get Serious

by Roger C. Lanctot on 08-14-2022 at 10:00 am


In the final season of “The Sopranos,” Christopher Moltisanti (played by Michael Imperioli) and Anthony Soprano (James Gandolfini) lose control of their black Cadillac Escalade and go tumbling off a two-lane rural highway and down a hill. Christopher dies (spoiler alert) with an assist from Tony, before Tony calls “911” for help.

Connected car junkies will immediately cry foul given that the episode – which first aired in 2007 – falls well within the deployment window of General Motors’ OnStar system. But more vigilant devotees will recall that in an earlier season Tony says he “had all of that tracking shit removed” from his car. (Tony favored GM vehicles.)

I was reminded of this as I rewatched the series and pondered the National Highway Traffic Safety Administration’s crash reporting General Order issued last year. The reporting requirement raises a critical issue regarding privacy obligations or the relevance of privacy in the event of a crash. The shortcomings of the initial tranche of data reported out by NHTSA last month suggest a revision of the reporting requirement is in order.

When the NHTSA issued its Standing General Order in June of 2021 requiring “identified manufacturers and operators to report to the agency certain crashes involving vehicles equipped with automated driving systems (ADS) or SAE Level 2 advanced driver assistance systems (i.e. systems that simultaneously control speed and steering),” the expectation was that the agency would soon be awash in an ocean of data. The agency was seeking deeper insights into the causes of crashes, the mitigating effects of ADS and some ADAS systems, and some hint as to the future direction of regulatory actions.

Instead, the agency received reports of 419 crashes of ADAS-equipped vehicles and 145 crashes involving vehicles equipped with automated driving systems. What has emerged from the exercise is a batch of heterogeneous data with obvious results (human-driven vehicles with front-end damage and robot-driven vehicles with rear-end damage) and gaping holes.

The volume and type of data were insufficient to draw any significant conclusions and the varying ability of the individual car companies to collect and report the data produced inconsistent information. In fact, allowable redactions further impeded the potential for achieving useful insights.

To this add NHTSA’s own caveats – described in great detail in NHTSA documents:

  • Access to Crash Data May Affect Crash Reporting
  • Incident Report Data May Be Incomplete or Unverified
  • Redacted Confidential Business Information and Personally Identifiable Information
  • The Same Crash May Have Multiple Reports
  • Summary Incident Report Data Are Not Normalized

The only car company that appears to be adequately prepared and equipped to report the sort of data that NHTSA is seeking, Tesla, stands out for having reported the most relevant crashes. In a report titled “Do Teslas Really Account for 70% of U.S. Crashes Involving ADAS? Of Course Not,” CleanTechnica.com notes that Tesla is more or less “punished” for its superior data reporting capability. Competing auto makers are allowed to hide behind the limitations of their own ability to collect and report the required data.

It’s obvious from the report that there is a vast under-reporting of crashes. This is the most salient conclusion from the reporting and it calls for a radical remedy.

The U.S. does not have a mandate for vehicle connectivity, but nearly every single new car sold in the U.S. comes with a wireless cellular connection.  The U.S. does have a requirement that an event data recorder (EDR) be built into every car.

If NHTSA is serious about collecting crash data, the agency ought to mandate a connection between the EDR and the telematics system and require that in the event of a crash the data related to that crash be automatically transmitted to a government data collection point – and simultaneously reported to first responders connected to public service access points.

There are several crucial issues that will be remedied by this approach:

  • First responders will receive the fastest possible notification of potentially fatal crashes. Most automatic notifications are triggered by airbag deployments and too many of those notifications go to call centers that introduce delays and impede the transmission of relevant data.
  • A standard set of data will be transmitted to both the regulatory authority and first responders – removing inconsistencies and redactions. All such systems ought to be collecting and reporting the same set of data. European authorities recognized the importance of consistent data collection when they introduced the eCall mandate which took effect in April of 2018.
  • Manufacturers will finally lose plausible deniability – such as the ignorance that GM claimed during Congressional hearings in an attempt to avoid responsibility for fatal ignition switch failures.
  • Such a policy will recognize that streets and highways are public spaces where the drivers of cars that collide with inanimate objects, pedestrians, or other motorists have forfeited a right to privacy. The public interest is served by automated data reporting from crash scenes.

NHTSA administrators are political appointees with precious little time to influence policy in the interest of saving lives. It is time for NHTSA to act quickly to establish a timeline for automated crash reporting to cut through the redactions and data inconsistencies and excuses and pave a realistic path toward reliable, real-time data reporting suitable for realigning regulatory policy. At the same time, the agency will greatly enhance the timeliness and efficacy of local crash responses – Anthony Soprano notwithstanding.

Also Read:

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform

Accellera Update: CDC, Safety and AMS


Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

by Fred Chen on 08-14-2022 at 8:00 am


There is growing awareness that EUV lithography is actually an imaging technique that heavily depends on the distribution of secondary electrons in the resist layer [1-5]. The stochastic aspects should be traced not only to the discrete number of photons absorbed but also the electrons that are subsequently released. The electron spread function, in particular, should be quantified as part of resist evaluation [5]. The scale of the electron spread or blur is likely not a well-defined parameter, but itself has some distribution.

It is necessary to quantitatively assess this distribution of spread. A basic, direct approach is to pattern two features close enough to one another to show resist loss dependent on the electron blur. For example, two 20-25 nm spots separated by a 40 nm center-to-center distance will be significantly affected by 3 nm electron blur scale length, but much less so by 2 nm scale length (see the figure below).
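Assuming an idealized Gaussian electron spread function (an illustrative assumption; as noted above, the real blur has its own distribution), a short sketch shows how sharply the dose midway between the two spots depends on the blur scale length:

```python
# Sketch of the spot-pair idea: two exposed spots blurred by a Gaussian
# electron spread function. The dose leaking into the gap between the
# spots is strongly sensitive to the blur scale length sigma.
# Spot width (22 nm) and pitch (40 nm) follow the geometry in the text;
# the Gaussian blur model itself is an illustrative assumption.
import math

def blurred_spot(x, center, width, sigma):
    """Dose at x from one uniform spot convolved with a Gaussian blur."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((x - center + width / 2) / s)
                  - math.erf((x - center - width / 2) / s))

def midpoint_dose(sigma, width=22.0, pitch=40.0):
    """Relative dose halfway between two spots (center-to-center = pitch)."""
    half = pitch / 2.0
    return (blurred_spot(0.0, -half, width, sigma)
            + blurred_spot(0.0, half, width, sigma))
```

With these illustrative numbers, the midpoint dose for a 3 nm blur comes out several hundred times larger than for a 2 nm blur, consistent with the sensitivity described above.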

The resist loss between two closely spaced exposed features following development depends on the scale length of the electron spread, a.k.a. blur [5].

An EUV or electron-beam resist could be evaluated at a given thickness by evaluating EUV or e-beam exposures of arrays of these spot pairs in sufficient numbers to get the distribution of electron blur scales, down to the far-out tails, i.e., the ppb level or lower. This data would be necessary for better predictions of yield loss due to CD variation, edge placement error (EPE), and even the occurrence of stochastic defects.
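To gauge the measurement effort, a rule-of-three-style estimate (a standard statistical bound, applied here purely as a back-of-envelope illustration) gives the number of spot-pair measurements needed to resolve a given tail probability:

```python
# Back-of-envelope sketch: how many spot-pair measurements are needed
# to resolve a blur-tail event rate down to a given probability level.
# Observing zero events in n trials bounds the rate below roughly 3/n
# at 95% confidence ("rule of three"). Illustrative statistics only.
import math

def samples_needed(tail_prob, confidence=0.95):
    """Trials needed so that zero observed events rules out rates >= tail_prob."""
    return math.ceil(math.log(1.0 - confidence) / math.log1p(-tail_prob))
```

Resolving ppb-level tails this way requires on the order of three billion spot pairs, which is why large arrays are needed.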

Interestingly enough, scanning probe electron lithography, such as using an STM, may have the advantage of having probe-to-sample bias limiting the lateral spread of electrons, thereby reducing blur [6]. Again, the spot pair pattern can be used to confirm whether this is true or not.

References

[1] J. Torok et al., “Secondary Electrons in EUV Lithography,” J. Photopolymer Sci. and Tech. 26, 625 (2013).

[2] R. Fallica et al., “Experimental estimation of lithographically-relevant secondary electron blur,” 2017 EUVL Workshop.

[3] M. I. Jacobs et al., “Low energy electron attenuation lengths in core-shell nanoparticles,” Phys. Chem. Chem. Phys. 19 (2017).

[4] F. Chen, “The Electron Spread Function in EUV Lithography,” https://www.linkedin.com/pulse/electron-spread-function-euv-lithography-frederick-chen

[5] F. Chen, “The Importance of Secondary Electron Spread Measurement in EUV Resists,” https://www.youtube.com/watch?v=deB0pxEwwvc

[6] U. Mickan and A. J. J. van Dijsseldonk, US Patent 7463336, assigned to ASML.

This article originally appeared in LinkedIn Pulse: Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

Also Read:

EUV’s Pupil Fill and Resist Limitations at 3nm

ASML- US Seeks to Halt DUV China Sales

ASML EUV Update at SPIE


What is Your Ground Truth?

by Roger C. Lanctot on 08-14-2022 at 6:00 am


When my son bought a 2020 Chevrolet Bolt EV a couple of years ago, I was excited. I wanted to see what the Xevo-supplied Marketplace (contextually driven ads and offers) looked like and I was also curious as to what clever navigation integration GM was offering.

I was swiftly disappointed to discover that Xevo Marketplace was not an embedded offering but rather an app to be projected from a connected smartphone. As for navigation, it wasn’t available for my son’s MY2020 Bolt even as an option.

I was stunned for two reasons. First, I thought the Marketplace app was intended as a brand-defining service which would surely be embedded in the vehicle and integrated as part of the home screen as an everyday driving application. Second, how could GM ship an EV that was unable to route the driver to the nearest compatible and available charging station via the on-board navigation system?

These manifestations made me wonder whether I was witnessing the appification of in-vehicle infotainment. What if and why shouldn’t cars ship with dumb terminals that draw all of their intelligence and content from the driver’s/user’s mobile device?

Further fueling this impression was GM’s subsequent introduction of Maps+, a Mapbox-sourced navigation app based on OpenStreetMap with a $14.99/month subscription. This was followed by the launch of Google Built-in (Google Maps, Google Play apps, Google Assistant) as a download with a Premium OnStar plan ($49.99/month) or Unlimited Data plan ($25/month) – seems confusing, right?

The truly confusing part, though, isn’t the variety of plans and pricing combos, it is the variable view of reality or ground truth. Ground truth is the elusive understanding of traffic conditions on the road ahead and how that might impact travel and arrival times. Ground truth can also apply to the availability of parking and charging resources. (Parkopedia is king of parking ground truth, according to a recent Strategy Analytics study: https://business.parkopedia.com/strategy-analytics-us-ground-truth-testing-2021?hsLang=en )

The owner of a “lower end” GM vehicle – with no embedded navigation option – will have access to at least three different views of ground truth: OnStar turn-by-turn navigation, Maps+ navigation, and Google- and/or Apple-based navigation. (Of course Waze, Here We Go, and TomTom apps might also be available from a connected smartphone.)

Each of these navigation solutions will have different views of reality and different routing algorithms driven by different traffic and weather data sources and assumptions. Am I, as the vehicle owner and driver, supposed to “figure out” which is the best source? When I bought the car wasn’t I paying GM for its expertise in vetting these different systems?

What about access to the data and the use of vehicle data? Is my driving info being hoovered up by some third party? And what is ground truth when I am being given varying lenses through which to grasp it?

Solutions are now available in the market – from companies such as TrafficLand in the U.S. – that are capable of integrating still images and live video from traffic cameras along my route allowing me to better understand the routing decisions my car or my apps are making for me. The optimum means for accessing this information would be through a built-in navigation system.

GM continues to offer built-in or “embedded” navigation across its product line with a handful of exceptions – such as entry-level models of the Bolt.  Embedded navigation – usually part of the integrated “infotainment” system – is big business, representing billions of dollars in revenue from options packages for auto makers.

More importantly, the modern day infotainment system – lately rendered on a 10-inch or larger in-dash screen – is a critical point of customer engagement. The infotainment system is the focal point of in-vehicle communications, entertainment, and navigation – as well as vehicle status reports.

Vehicle owners around the world tell my employer – Strategy Analytics – in surveys and focus groups that the apps that are most important to them while driving relate to traffic, weather, and parking. Traffic is the most important, particularly predictive traffic information, because this data is what determines navigation routing decisions.

Navigation apps do not readily disclose their traffic sources, but it is reasonable to assume that a navigation app with HERE or TomTom map data is using HERE- or TomTom-sourced traffic information. Google Maps has its own algorithms, as do Apple and Mapbox – but, of course, there is some mixing and matching among the providers of navigation, maps, and traffic data.

This is all the more reason why access to TrafficLand’s real-time traffic camera feeds is so important. Sometimes seeing is believing and TrafficLand’s traffic cameras are, by definition, monitoring the majority of known traffic hot spots across the country.

When the navigation system in my car wants to re-route me – requesting my approval – I’d like to see the evidence to justify a change in plans. Access to traffic camera info can provide that evidence.

I can understand why GM – and some other auto makers such as Toyota – have opted to drop embedded navigation availability from some cars as budget-minded consumers seek to pinch some pennies. But the embedded map represents the core of a contextually aware in-vehicle system.

There is extraordinary customer retention value in building navigation into every car – particularly an EV. The fundamental principles of creating safe, connected cars call for the integration of a location-aware platform including navigation.

Deleting navigation may be a practical consideration as attach rates decline, but it’s bad for business. In fact, there is a bit of a head-snapping irony in GM or Toyota or any auto maker deleting embedded navigation in favor of a subscription-based navigation experience from Mapbox or Google. These car makers are telling themselves that the customers least able to pay for built-in navigation will be willing to pay a monthly subscription for an app. I think not.

This is very short-term thinking. Location awareness is a brand-defining experience and auto makers targeting “connected services” opportunities will want to have an on-board, built-in navigation system. If not, the auto maker that deletes built-in navigation will be handing the customer relationship and the related aftermarket profits to third parties such as Apple, Amazon, and Google. That’s the real ground truth.

Also Read:

What’s Wrong with Robotaxis?

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform


Podcast EP100: A Look Back and a Look Ahead with Dan and Mike

by Daniel Nenni on 08-12-2022 at 10:00 am

Dan and Mike get together to reflect on the past and the future in this 100th Semiconductor Insiders podcast episode. The chip shortage, foundry landscape, Moore’s law, CHIPS Act and industry revenue trends are some of the topics discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Kai Beckmann, Member of the Executive Board at Merck KGaA

by Daniel Nenni on 08-12-2022 at 6:00 am


Kai Beckmann is a Member of the Executive Board at Merck KGaA, Darmstadt, Germany, and the CEO of Electronics. He is responsible for the Electronics business sector, which he has been leading since September 2017. In October 2018, Kai Beckmann also took over the responsibility for the Darmstadt site and In-house Consulting. In addition, he acts as the Country Speaker for Germany with responsibility for co-determination matters.

Prior to his current role, Kai Beckmann was Chief Administration Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Group Human Resources, Group Business Technology, Group Procurement, In-house Consulting, Site Operations and the company’s Business Services, as well as Environment, Health, Safety, Security, and Quality.

In 2007, he became the first Chief Information Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Corporate Information Services. From 2004 to 2007, he served as Managing Director of Singapore and Malaysia, and prior to that he held senior executive responsibility for the Information Management and Consulting unit from 1999 to 2004. He began his career at Merck KGaA, Darmstadt, Germany in 1989 as an IT system consultant.

Kai Beckmann studied computer science at the Technical University of Darmstadt from 1984 to 1989. In 1998, he earned a doctorate in Economics while working. He is married and has one son.

Tell us about EMD Electronics

Merck KGaA, Darmstadt, Germany, operates across life science, healthcare, and electronics. More than 60,000 employees work to make a positive difference in millions of people’s lives every day by creating more joyful and sustainable ways to live. In 2021, Merck KGaA, Darmstadt, Germany, generated sales of € 19.7 billion in 66 countries. The company holds the global rights to the name and trademark “Merck” internationally. The only exceptions are the United States and Canada, where the business sectors of Merck KGaA, Darmstadt, Germany, operate as MilliporeSigma in life science, EMD Serono in healthcare, and EMD Electronics in electronics.

As EMD Electronics we are the company behind the companies advancing digital living. Our portfolio covers a broad range of products and solutions, including high-tech materials and solutions for the semiconductor industry, as well as liquid crystals and OLED materials for displays and effect pigments for coatings and cosmetics. We offer the broadest portfolio of innovative materials in the semiconductor industry and support our customers in creating industry-leading microchips. In the US, EMD Electronics alone has approximately 2,000 employees across the country, with more than a dozen manufacturing and R&D sites spanning the continental U.S.

Last year you announced a $1 billion investment in the US to support semiconductor customers. Can you tell us more about these investments?

These investments are part of our global program called “Level Up” for investing in R&D, capacity, and accelerating growth in the semiconductor and display markets. Over the next five years, we plan to spend around $2.5 billion globally in long-term fixed assets (capital expenditures) in Semiconductor and Display Solutions. In the U.S., EMD Electronics plans to invest primarily in its Arizona, California, Texas, and Pennsylvania sites. Last year, we announced global investments of more than $3.5 billion as part of our “Level Up” growth program. With this, we seek to capture the growth opportunities that come with the significantly accelerating global demand for innovative semiconductor and display materials. This demand is driven by exponential data growth and highly impactful technology trends that include remote working, the growth of AI, and soaring demand for electric vehicles. Our “Level Up” growth program focuses on four mutually reinforcing key priorities: Scale, Technology, Portfolio, and Capabilities. Further investing in these four areas builds the foundation of our ambitious growth targets, in conjunction with the strong demand for electronics materials, particularly semiconductors.

Sustainability is becoming increasingly important across the industry. What is Merck KGaA, Darmstadt, Germany, and especially the business of EMD Electronics doing to ensure a sustainable future?

We believe that we can harness science and technology to help tackle many global challenges. Always guided by a robust set of values, we approach all our actions and decisions with a sense of responsibility. Sustainability has therefore been vital to us for many generations. We can only ensure our own future success by also creating lasting added value for society.

In creating long-term added value for society, we have defined three goals within our sustainability strategy. In 2030, we will achieve progress for more than one billion people through sustainable science and technology, along with integrating sustainability into all our value chains. By 2040, we will be climate-neutral and reduce our resource consumption. Most of our greenhouse gas emissions stem from process-related emissions during the production of specialty chemicals for the electronics industry. With improved processes, Merck can significantly reduce those emissions in the future.

As a materials supplier for the electronics industry, we are a key enabler for sustainable innovation. We are addressing emissions through abatement and alternatives. For example, we are now designing a process for large-scale NF3 abatement as a pre-requisite to meet our long-term GHG goals, and to drive decarbonization in the industry.  In addition to optimizing our own processes to find more sustainable solutions, we are also a trustworthy partner for our customers to support them on their sustainability journey. Just recently we announced  our collaboration with Micron, testing an alternative low-GWP etch gas of ours, further aligning on our shared sustainability goals.

You recently attended Semicon West. What were your reactions to being back in person with customers at a trade show in the US, and what announcements or innovations were you most excited about?

I truly appreciated being able to re-connect with many great people from all over the world face-to-face. SEMICON West is the place of choice to exchange views on key topics in our industry, tackle industry challenges, and establish partnerships and collaborations. The vitality of innovation never stops and it’s wonderful to see the progress the industry is making. It is fascinating to see how the industry is driving innovations in new materials in fields such as 3D NAND, FinFET, Nanosheet or EUV, to continuously make devices more intelligent, power efficient and smaller. With the move to 3D, shrinking is no longer the most cost-effective way to increase density. Feature sizes for 3D chips will no longer shrink and may increase, as they already have for 3D NAND. I also heard several times “etch could become the new litho”. Since Merck is supplying materials for all parts of the manufacturing process – litho, deposition, etch, CMP, cleans, you name it – we are well positioned to participate in the continued growth story that is Moore’s Law, 2nd edition. Additionally, we appreciate that sustainability is becoming more and more important in our industry, where we are a well-respected partner for our customers.

Finally, let me mention data analytics as one driving force for the industry. We combine a data-driven approach with a physics-based expertise. In December last year we formed the independent partnership Athinia together with Palantir to deliver a secure collaborative data analytics platform for the semiconductor industry. The Athinia platform will leverage AI and big data to solve critical challenges, improve quality and supply chain transparency, and time to market. At Semicon West Athinia announced that Micron Technology plans to use the data analytics platform to create a pioneering data collaboration ecosystem that will help lead a continued journey of digital transformation with Micron’s critical suppliers.

Advancing Digital Living has data at its core, data that will be continually leveraged in the coming decade. Our teams pioneer digital solutions that ensure we can deliver high-caliber, customized quality control that allows for optimal material performance. Our digital solutions team also serves customers in predictive quality analysis. Our approach starts at the production level, which is at the center of the supply chain, interacting with customers and partners. We gain learnings from the use of the right technology or system and then adapt, and scale as needed, ultimately allowing us to identify which characteristics led to the “golden batch”. This also helps to accelerate new material development in the future as we transfer the learnings in R&D in a systematic way periodically. By the way, minimizing quality-based excursions also offers sustainability benefits, minimizing wasted product and suboptimal paths through supply chains.

For more information click HERE.

Also read:

CEO Interview: Jaushin Lee of Zentera Systems, Inc.

CEO Interview: Shai Cohen of proteanTecs

CEO Interview: Barry Paterson of Agile Analog


Understanding Sheath Behavior Key to Plasma Etch

by Scott Kruger on 08-11-2022 at 10:00 am


Readers of SemiWiki will be well aware of the challenges the industry has faced in photolithography in moving to new nodes, which drove the development of new EUV light sources as well as new masking techniques.  Plasma etching is another key step in chip manufacturing that has also seen new challenges in the development of new sub-10nm processes.

Plasmas, the fourth state of matter, are formed by filling a vacuum chamber with a low-pressure gas and using electromagnetic energy inputs to ionize the gas: electrons are stripped from the atoms and become unbound. Because electrons are more than a thousand times lighter than the ions, they move quickly relative to the ions. At the wafer surface, the electrons quickly strike the wafer and are depleted. A steady-state electric field region known as the sheath forms to balance the current losses. It is this boundary layer that gives the plasma many of its useful properties in manufacturing, such as plasma vapor deposition, plasma ashing (to remove the photoresist), or what we will focus on here, plasma etching.

Plasma etching, also known as dry etching, was a breakthrough for achieving the anisotropic etches needed to produce deep features. As seen below, the input gas type and volume, the applied voltage amplitude and waveforms, and the reactor geometry can all be varied to give considerable flexibility in plasma etching reactors. Common reactor types are capacitively coupled reactors, which use a single voltage source; reactive-ion etch (RIE) reactors, which have multiple electrodes to independently control the reactive ions for selective etching; and inductively coupled plasma RIE (ICP-RIE) reactors, which use higher frequencies to enable higher densities and a faster etch rate. A plasma etch reactor thus has a wide range of input parameters, giving a large design and operating space for solving a given manufacturing problem.
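To give a feel for the sheath scale, a back-of-envelope sketch using the textbook Debye length and the high-voltage (Child-law) sheath estimate can be written as follows; the plasma parameters here are illustrative, not tied to any particular reactor:

```python
# Rough sheath-scaling sketch using the standard Debye length and the
# textbook Child-law sheath thickness estimate. The density, electron
# temperature, and sheath voltage below are illustrative values typical
# of processing plasmas, not measurements from any specific reactor.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
QE = 1.602e-19     # elementary charge, C

def debye_length(n_e, te_ev):
    """Electron Debye length in meters (n_e in m^-3, Te in eV)."""
    return math.sqrt(EPS0 * te_ev / (n_e * QE))

def child_law_sheath(n_e, te_ev, v0):
    """High-voltage (Child-law) sheath thickness estimate in meters."""
    ld = debye_length(n_e, te_ev)
    return (math.sqrt(2.0) / 3.0) * ld * (2.0 * v0 / te_ev) ** 0.75
```

For a density of 1e16 m^-3, a 3 eV electron temperature, and a 300 V sheath voltage, the Debye length is roughly a tenth of a millimeter while the sheath extends to a few millimeters, i.e., many Debye lengths, which is why resolving it demands fine-grained models.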

As semiconductor device dimensions have become smaller, the implications for plasma etching have changed in multiple ways.  Current advanced nodes have dramatically increased the film stack complexity for advanced logic.  Other areas that will see increased scaling challenges are the 3D NAND structures in advanced memory and advanced packaging with its complex routing needs.  For example, the figure below shows a schematic for a proposed RDL from Lau et al. [1] along with a scanning electron microscope image of a through-silicon via (TSV). This TSV, created using a Bosch-type deep reactive-ion etch (DRIE), has an aspect ratio of 10.5, demonstrating the deep anisotropy achievable with modern etch reactors.  In these and other areas, the importance of being able to understand the details of the plasma etch has increased.  For plasma etching, the critical issues are the degree of anisotropy (vertical versus horizontal etch rate), the shape of trenches (straight versus tapered or bowed, as seen in the figure below), and local etch uniformity, in addition to traditional concerns such as etch rate and uniformity over the entire wafer that are critical for high yields and economics.

Controlling plasmas is difficult because they are chemically reactive gases that interact with the semiconductor material in complex ways.  Simulations have long been important in understanding plasma behavior in etch reactors. The three basic modeling paradigms are drift-diffusion, hydrodynamic (or fluid), and kinetic.  These models are directly analogous to those used to model electron transport in the TCAD tools for studying semiconductor devices.  A key difference is that here the ions are mobile rather than fixed in a solid-state lattice, and chemical reactions are also critical for understanding the plasma's formation, properties, and etching abilities.

Drift-diffusion and fluid models are widely used to determine the overall energy balance and basic plasma properties.  However, kinetic codes are critical for understanding the details of the plasma etching process.  The degree of etch anisotropy is fundamentally determined by the energy and angle of ions as they strike the wafer, quantities that depend strongly on the plasma sheath.  The complexities of the sheath cannot be fully resolved with drift-diffusion and fluid models; they require a kinetic code.  Kinetic modeling is especially useful for gaining insight into plasma uniformity and the degree of anisotropy of the etching process.
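The kinetic picture can be illustrated with a toy Monte Carlo sketch: sample ions entering a collisionless sheath at the Bohm speed with a small transverse thermal spread, accelerate them through an assumed DC sheath voltage drop, and tabulate the energy and angle with which they strike the wafer. All parameters here are illustrative assumptions, and the sketch ignores the self-consistent fields, RF modulation, and collisions that a real kinetic code such as VSim resolves.

```python
import math
import random

random.seed(0)  # deterministic sampling for reproducibility

E_CHARGE = 1.602e-19       # elementary charge, C
M_ION = 39.95 * 1.661e-27  # argon ion mass, kg

# Assumed (illustrative) parameters
TE_EV = 3.0      # electron temperature, eV
TI_EV = 0.05     # ion temperature, eV
V_SHEATH = 100.0  # DC sheath voltage drop, V

def sample_ion_impacts(n=10000):
    """Monte Carlo sample of ion impact energies and angles at the wafer.

    Ions enter the sheath axially at the Bohm speed with a small
    transverse thermal spread, then free-fall through the sheath
    potential (collisionless sheath assumed).
    """
    u_bohm = math.sqrt(E_CHARGE * TE_EV / M_ION)
    v_th = math.sqrt(E_CHARGE * TI_EV / M_ION)
    impacts = []
    for _ in range(n):
        vperp = abs(random.gauss(0.0, v_th))
        # axial acceleration through the sheath drop
        vz = math.sqrt(u_bohm**2 + 2 * E_CHARGE * V_SHEATH / M_ION)
        energy_ev = 0.5 * M_ION * (vz**2 + vperp**2) / E_CHARGE
        angle_deg = math.degrees(math.atan2(vperp, vz))
        impacts.append((energy_ev, angle_deg))
    return impacts

impacts = sample_ion_impacts()
mean_e = sum(e for e, _ in impacts) / len(impacts)
mean_a = sum(a for _, a in impacts) / len(impacts)
print(f"mean impact energy: {mean_e:.1f} eV, mean angle: {mean_a:.2f} deg")
```

Even this crude model shows why sheath acceleration yields anisotropic etching: the ions arrive with roughly the sheath energy and within about a degree of normal incidence. Capturing how collisions and RF sheath dynamics broaden those distributions is exactly what full kinetic simulation is for.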

Tech-X Corporation has developed VSim, a kinetic modeling tool for simulating plasma etch reactors.   Tech-X, located in Boulder, Colorado, has worked in high-performance computing for plasma physics for almost three decades.  High-performance computing enables the details of ion and electron behavior to be computed across manufacturing-relevant spatial scales that are large relative to fundamental plasma length scales.   With over a decade of experience serving the wafer equipment manufacturing market, Tech-X provides the leading plasma kinetic simulation capability.   More information on VSim’s capabilities is available at http://www.txcorp.com/vsim.  In our next article, we will highlight enhancements in VSim 12, which will be released on September 14.

[1] Lau, J., et al. “Redistribution layers (RDLs) for 2.5 D/3D IC integration.” International Symposium on Microelectronics. Vol. 2013. No. 1. International Microelectronics Assembly and Packaging Society, 2013.

Also Read:

Coverage Analysis in Questa Visualizer

Fast EM/IR Analysis, a new EDA Category

DSP IP for High Performance Sensor Fusion on an Embedded Budget


WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow
by Synopsys on 08-11-2022 at 8:00 am

Synopsys Ansys RF Flow Webinar

The design and characterization of RF circuits is a complex process that requires an RF designer to overcome a variety of challenges. Not only do designers face the complexities posed by advanced semiconductor processes and the need to meet the demanding requirements of modern wireless standards, but they must also account for electromagnetic effects that become significant at RF and mmWave frequencies. The Synopsys Custom Design Family provides a holistic solution to RF design challenges, including accurate EM modeling with industry-leading tools such as the Ansys EM tool suite, simulation and analysis of important RF measurements, productive layout creation, and RC extraction and physical verification with foundry-qualified signoff tools.

What you will Learn

In this webinar we will use a low-noise amplifier design to illustrate the steps needed to create state-of-the-art RF circuits. We will start with Synopsys Custom Compiler for design creation, simulation, and results analysis. Next we will perform inductor synthesis with the Ansys VeloceRF tool and inductor modeling with Ansys RaptorX. The next step will be to create a layout with the Custom Compiler Layout Editor. Once the layout is complete, we will perform physical verification and parasitic extraction with IC Validator and StarRC and use Ansys Exalto for EM extraction of the critical nets. As a final step we will simulate the combined extracted model with the PrimeSim SPICE simulator for post-layout verification and analyze the results in the PrimeWave Design Environment.

The Synopsys Custom Design Family is a complete front-to-back solution for all types of custom integrated circuit design.  It includes Custom Compiler, a modern and productive editor for schematics and layout; the PrimeSim Continuum simulation solution for fast and accurate analog and RF simulation; and the PrimeWave Design Environment for post-processing and viewing of simulation results. It also features natively integrated signoff tools: StarRC extraction and IC Validator physical verification.

A wide variety of third-party tools are integrated with the Synopsys custom design platform, including the Ansys tools for electromagnetic modeling and extraction, which will be featured in this webinar.

The Presenters

Samad Parekh
Product Marketing Manager, Sr. Staff
Synopsys

Samad Parekh is the Product Manager for Spice Simulation and Design Environment products at Synopsys. He has 10 years of experience serving as a senior member of the Synopsys Applications Engineering team supporting Analog and Custom tools. Prior to Synopsys, Samad worked as an RF designer for 6 years designing RF and microwave circuits for the cellular and aerospace markets. Samad holds a BSEE from UCLA and MSEE from UC Irvine.

Kelly Damalou
Product Manager
Ansys

Kelly Damalou is Product Manager for the Ansys on-chip electromagnetic simulation portfolio. For the past 20 years she has worked closely with leading semiconductor companies, helping them address their electromagnetic challenges. She joined Ansys in 2019 through the acquisition of Helic, where, since 2004 she held several positions both in Product Development and Field Operations. Kelly holds a diploma in Electrical Engineering from the University of Patras, Greece, and an MBA from the University of Piraeus, Greece.

To Learn More

Please register for the webinar below:

https://www.synopsys.com/implementation-and-signoff/resources/webinars/synopsys-ansys-custom-design-flow.html

Also read:

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random

Using STA with Aging Analysis for Robust IC Designs


Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform

Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform
by Kalar Rajendiran on 08-11-2022 at 6:00 am

SoC Block Diagram with EFLX and QuiddiKey

While the ASIC market has always had advantages over alternative solutions, it has faced boom and bust cycles, typically driven by high NRE development costs and long time-to-market lead times. Over the same period, the FPGA market has consistently brought out more and more advanced products with each new generation. With the very high-speed interfaces offered on these products, along with the flexibility of field-programmability, these advanced FPGAs give ASICs a good run for their money.

Changing requirements have also been a tailwind behind the fast adoption of FPGAs. Being able to accommodate last-minute changes without having to re-spin the chip is a godsend in markets with fast-changing requirements. This is not to say that ASICs have lost their edge: they still hold their deserved place in power, performance, and area (PPA) when compared against FPGA-based solutions and software run on general-purpose processors. But the advent of embedded FPGA capability has brought flexibility and configurability to ASICs. By integrating embedded FPGA (eFPGA) cores into ASICs, systems can now enjoy the benefits of both.

What About Security?

Today there are a number of fast-growing markets with rapidly evolving requirements, and that is great for ASICs with embedded FPGA cores. But these fast-growing markets also set a high bar for the security of data and communications. While security has always been a topic of serious interest in electronics, the focus has grown with the increased use of global supply chains. With many touchpoints throughout the development and deployment phases, concerns about counterfeit chips being inserted to hijack systems are logical and valid.

That is why system security is implemented through a hardware root of trust. The hardware root of trust contains the keys for encryption and decryption and enables a secure boot process. Because the hardware root of trust is inherently trusted, it is critical to ensure that the keys stored on the chip can never be hacked.

Can Security Be Further Enhanced?

What if security could be further enhanced by not storing the keys on the chip at all? What if the keys could be individualized at the chip level rather than at the design/product level? Security would indeed be enhanced considerably. This is the essence of a recent announcement by Flex Logix. By partnering with Intrinsic ID, Flex Logix is able to bring an enhanced level of security to SoCs that integrate its EFLX® eFPGA cores. The enhanced security is implemented through Intrinsic ID’s QuiddiKey, which leverages their SRAM PUF technology. Refer to the figure below for a block-level diagram of such an SoC.

QuiddiKey

Intrinsic ID QuiddiKey® is a hardware IP solution that enables device manufacturers and designers to secure their products with internally generated, chip-unique cryptographic keys without the need to add costly, security-dedicated silicon. It uses the inherently random start-up values of SRAM as a physical unclonable function (PUF), which generates the entropy required for a strong hardware root of trust. Since the start-up pattern is unique to a particular SRAM, it is unique to a particular chip, just as a fingerprint is to its owner. For more details about Intrinsic ID’s SRAM PUF technology, visit the SRAM-PUF product page. For more details about QuiddiKey, visit the QuiddiKey product page.
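Conceptually, the idea is to condense the chip-unique SRAM start-up pattern into a fixed-length root key that is re-derived at every power-up rather than stored. The Python sketch below is a toy illustration only: it is not Intrinsic ID's algorithm, the `sram_fingerprint` stand-in is a hypothetical placeholder for reading uninitialized SRAM on a device, and it omits the helper-data/error-correction ("fuzzy extractor") stage that real SRAM-PUF products need to cope with noisy start-up values.

```python
import hashlib
import secrets

# Toy stand-in for a chip's SRAM power-up pattern. On real silicon this
# would come from reading uninitialized SRAM cells on the device itself.
sram_fingerprint = secrets.token_bytes(128)

def derive_root_key(fingerprint: bytes) -> bytes:
    """Condense the chip-unique start-up pattern into a 256-bit root key.

    Real SRAM-PUF systems first pass the raw pattern through an
    error-correction stage using public helper data so that the same
    key is reproduced despite bit noise; that step is omitted here.
    """
    return hashlib.sha256(fingerprint).digest()

root_key = derive_root_key(sram_fingerprint)
# The root key is never stored in non-volatile memory: the same SRAM
# pattern regenerates the same key at every power-up.
print(f"root key length: {len(root_key)} bytes")
```

Because the key exists only transiently, there is nothing in flash or fuses for an attacker to read out, which is the property the article describes.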

QuiddiKey IP can be applied easily to almost any chip – from tiny microcontrollers (MCUs) to high-performance systems-on-chip (SoCs). QuiddiKey has been validated for NIST CAVP and has been deployed and proven in hundreds of millions of devices certified by EMVCo, Visa, CCEAL6+, PSA, ioXt, and many governments across the globe. Refer to the Figure below for the major functions that the QuiddiKey IP implements.

Enhanced Security of eFPGA Platforms

In the joint Flex Logix/Intrinsic ID solution, a cryptographic key derived from a chip-unique root key is used to encrypt and authenticate the bitstream of an eFPGA. If the chip is attacked or recovered in the field, the bitstream of the eFPGA cannot be altered, read, or copied to another chip, because the content is protected by a key that is never stored and is therefore invisible and unclonable to an attacker.
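The derive-and-authenticate idea can be sketched with standard primitives. The toy below is not the actual Flex Logix/Intrinsic ID implementation; it simply shows a purpose-specific key being derived from a chip-unique root key with an HMAC-based KDF, and a bitstream being tagged so that any tampering is detected at load time. The root key and bitstream contents are illustrative placeholders.

```python
import hashlib
import hmac
import secrets

# Assumed chip-unique root key, re-derived from the PUF at power-up
root_key = secrets.token_bytes(32)
bitstream = b"example eFPGA configuration bitstream"

def derive_key(root: bytes, label: bytes) -> bytes:
    """Derive a purpose-specific key from the root key (HMAC as a simple KDF).

    Different labels yield independent keys, so each use (and, in the
    supply-chain scenario, each user) gets its own chip-unique key.
    """
    return hmac.new(root, label, hashlib.sha256).digest()

auth_key = derive_key(root_key, b"bitstream-auth")
tag = hmac.new(auth_key, bitstream, hashlib.sha256).digest()

# At load time, recompute the tag and compare in constant time
loaded_ok = hmac.compare_digest(
    tag, hmac.new(auth_key, bitstream, hashlib.sha256).digest())

# A tampered bitstream fails verification
tampered_ok = hmac.compare_digest(
    tag, hmac.new(auth_key, bitstream + b"\x00", hashlib.sha256).digest())
print(f"genuine: {loaded_ok}, tampered: {tampered_ok}")
```

Since the derived keys never leave the chip and the root key is never stored, copying the tagged bitstream to another die is useless: that die's PUF regenerates a different root key and the verification fails.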

Nor is the concern about counterfeit chips being inserted into the supply chain valid any longer. Each QuiddiKey user can generate an unlimited number of chip-unique keys, enabling each party in the supply chain to derive its own. Each user can protect their respective secrets, as their cryptographic keys will not be known to the manufacturer or to other supply-chain users.

For more details about how the Flex Logix/Intrinsic ID partnership is “taking eFPGA security to the next level”, refer to this whitepaper.

To learn more about Flex Logix’s eFPGA solutions visit https://flex-logix.com/efpga/.

Also Read:

[WEBINAR] Secure your devices with PUF plus hardware root-of-trust

WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?

Enlisting Entropy to Generate Secure SoC Root Keys


Podcast EP99: How Cliosoft became the leading design data management company

Podcast EP99: How Cliosoft became the leading design data management company
by Daniel Nenni on 08-10-2022 at 10:00 am

Dan is joined by Srinath Anantharaman, who founded Cliosoft in 1997 and serves as the company’s CEO. He has over 40 years of software engineering and management experience in the EDA industry.

Dan and Srinath explore the original focus of Cliosoft and how it has expanded over the years. The future of Cliosoft, as well as its plans for DAC, is discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.