UVM Polymorphism is Your Friend
by Bernard Murphy on 08-17-2022 at 6:00 am

Rich Edelman of Siemens EDA recently released a paper on this topic. I’ve known Rich since our days together back at National Semi, and I’ve always been impressed by his ability to make a complex topic more understandable to us lesser mortals. He tackles a tough one in this paper – a complex concept (polymorphism) in a complex domain (UVM). As best I can tell, he pulls the trick off again, though this is a view from someone who is already wading out of his depth in UVM. Which got me thinking about a talk by Avidan Efody that I blogged on recently, “Why designers hate us.”

More capable technology, smaller audience

Avidan, a verification guy at Apple, is probably as expert as they come in UVM and all its capabilities. But he can also see some downsides of the standard, especially a narrowing audience, the limited value of what he calls “stupid checks,” and a few other areas. See HERE for more on his talk. His point is that as UVM has become more and more capable to meet the needs of its core audience (professional hardware verifiers), it has become less and less accessible to everyone else. RTL designers must either wait on testbenches for debug (weeks to months, not exactly shift left) or cook their own tests in SystemVerilog. They still need automation, so they start hooking Python to their SV, or better yet cocotb. Then they can do their unit-level testing without any need for the verification team or UVM.

Maybe this divergence between designer testing and mainstream verification is just the way it has to be. I don’t see a convergence being possible unless UVM crafts a simpler entry point for designers, perhaps some cocotb look-alike or a link to cocotb, without all the classes and factories and other complications.

But I digress.

Classes and Polymorphism

The production verification world needs and welcomes UVM with all its capabilities. This is Rich’s audience and here he wants to help those not already at an expert level to uplevel. Even for these relative experts, UVM is still a complex world, full of strange magic. Some of that magic is in reusability of an unfamiliar type, through polymorphism.

A significant aspect of UVM is its class-based structure. Classes allow you to define object types which encapsulate not only the parameters of an object (e.g., center, width, length for a geometric object) but also the methods that can operate on it. For a user of the object, all that internal complexity is abstracted away; they just need methods to draw, print, move, etc. the object.

Reuse enters by allowing a class to be defined as an extension of an existing class: all the same parameters and methods, with a few new ones added, and/or maybe a few of the existing parameters/methods overridden. And you can extend extended classes, and so on. This is polymorphism – variants on a common core. So far, so obvious. The standard examples, like the graphic in this article, don’t look very compelling.
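The article stays at the conceptual level, so here is a minimal sketch of the same idea, in Python rather than SystemVerilog and purely as an analogy: a base class that encapsulates parameters and methods, an extension that overrides one method, and a caller that treats both the same way. The class and method names are hypothetical; in actual UVM code you would extend classes such as uvm_sequence or uvm_component and typically register overrides with the UVM factory.

```python
# Minimal Python sketch of the class-extension idea described above.
# This is an analogy only, not UVM code.

class Shape:
    """Base class: encapsulates parameters and the methods that act on them."""
    def __init__(self, center, width, length):
        self.center = center
        self.width = width
        self.length = length

    def describe(self):
        return f"shape at {self.center}, {self.width} x {self.length}"

    def draw(self):
        print("drawing " + self.describe())


class HighlightedShape(Shape):
    """Extension: keeps everything from Shape, overrides one method."""
    def draw(self):                      # override
        print("drawing *highlighted* " + self.describe())


def render_all(shapes):
    # Polymorphism: the caller only knows about Shape; each object
    # supplies its own draw() behavior.
    for s in shapes:
        s.draw()


render_all([Shape((0, 0), 2, 3), HighlightedShape((1, 1), 4, 5)])
```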

Rich however uses polymorphism judiciously (his word) and selectively to define a few key capabilities, such as an interrupt sequence class. Reusing what is already defined in UVM to better meet a specific objective.

As I said, I’m way out of my depth on this stuff, but I do trust that Rich knows what he is talking about. You can read the white paper HERE.


Delivering 3D IC Innovations Faster
by Kalar Rajendiran on 08-16-2022 at 6:00 am

3D IC technology development started many years ago, well before the slowing of Moore’s law benefits became a topic of discussion. The technology was originally leveraged for stacking functional blocks with high-bandwidth buses between them. Memory manufacturers and other IDMs were typically the ones to leverage this technology during its early days. As the technology itself does not limit use to only such purposes, it has always had broader appeal and potential.

Over the years, 3D IC technology has progressed from its novelty stage to become an established mainstream manufacturing technology, and the EDA industry has introduced many tools and technologies to help design products that take the 3D IC path. More recently, complex SoC implementations have started leveraging 3D IC technology to balance performance/cost goals.

The slowing of Moore’s law has become a major driver of the chiplet approach to implementing SoCs. Chiplets are small ICs specifically designed and optimized for operation within a package in conjunction with other chiplets and full-sized ICs. More companies are turning to 3D stacking of ICs and chiplets, each implemented in the process node optimal for its function. Designers can also combine 3D memory stacks, such as high bandwidth memory, on a silicon interposer within the same package. 3D IC implementation will be a major beneficiary of the chiplet adoption wave.

When a new capability is ready for the mainstream, its mass adoption depends on how easily, quickly, effectively and efficiently a solution can be delivered. While the 3D IC manufacturing technology may have become mainstream, there are some foundational enablers needed for a successful heterogeneous 3D IC implementation. Siemens EDA recently published an eBook on this topic, authored by Keith Felton.

This post will highlight some salient points from the eBook. A follow-up post will cover methodology and workflow recommendations for achieving optimal results when implementing 3D IC designs.

Foundational Enablers For Successful Heterogeneous 3D IC Implementation

Any good design methodology always includes look-aheads for downstream effects in order to consider and address them early in the design process. While this is important for monolithic designs, it becomes paramount when designing 3D ICs.

System Technology Co-Optimization (STCO) approach

This approach involves starting at the architectural level to partition the system into various chiplets and packaged die based on functional requirements and form factor constraints. After this step, RTL or functional models are generated. This is followed by physical floor planning and validation all the way to detailed layout supported with in-process performance modeling.
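As a purely illustrative sketch (not a Siemens EDA flow, data model, or tool API), the partitioning step described above can be pictured as capturing each chiplet’s function, process node, and footprint, and then checking the assembly against a form-factor budget; every name and number below is hypothetical.

```python
# Illustrative only: a toy representation of the partitioning step, not a
# Siemens EDA data model. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    function: str
    process_node: str
    area_mm2: float

def fits_package(chiplets, package_area_mm2, utilization=0.7):
    """Crude form-factor check: total die area within a utilization budget."""
    total = sum(c.area_mm2 for c in chiplets)
    return total <= utilization * package_area_mm2

system = [
    Chiplet("cpu_die",   "compute",    "3nm",  60.0),
    Chiplet("io_die",    "SerDes/PHY", "12nm", 40.0),
    Chiplet("hbm_stack", "memory",     "DRAM", 110.0),
]

print(fits_package(system, package_area_mm2=350.0))  # True for this example
```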

STCO elements already exist in a number of Siemens EDA tools, allowing engineers to evaluate design decisions in the context of predictive downstream effects of routability, power, thermal and manufacturability. Predictive modeling is a fundamental component of the STCO methodology that leverages Siemens EDA modeling tools during physical planning to gain early insight into downstream performance.

Transition from design-based to systems-based optimization

A 3D IC design requires consistent system representation throughout the design and integration process with visibility and interoperability of all cross-domain content. This calls for tools and methodology capable of a full system perspective from early planning through implementation to design signoff and manufacturing handoff.

Expanding the supply chain and tool ecosystem

3D IC design efforts demand a higher level of tool interoperability and openness than the industry is used to. Sharing and updating design content in a multi-vendor and/or multi-tool environment must be supported. This places a greater demand on assembly level verification throughout the design process to ensure the different pieces of the system work together as expected.

Balancing design resources across multiple domains

STCO facilitates exploration of the 3D IC solution space for striking the ideal balance of resources across all domains and deriving the optimal product configuration. An early perspective enables better engineering decisions on resource allocation, resulting in higher performing, more cost effective products.

Tighter integration of the various teams

A new design flow is required to support the design, validation, and integration of multiple ASICs, chiplets, memory, and interposers within a 3D IC design. The silicon, packaging and PCB teams are more likely to be global, requiring even tighter integration with the system, RTL and ASIC design processes.

For more details on Siemens EDA 3D IC innovations, you can download the eBook published by Siemens EDA.

While the Siemens heterogeneous 3D IC solution is packed with powerful capabilities, fully benefitting from these capabilities depends on the implementation methodology put to use. Designing 3D IC products that deliver differentiation, profitability and time to market advantages will be the subject of a follow-on blog.

Also Read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC


ARC Processor Summit 2022: Your embedded edge starts here!
by Synopsys on 08-15-2022 at 10:00 am

As embedded systems continue to become more complex and integrate greater functionality, SoC developers are faced with the challenge of developing more powerful, yet more energy-efficient devices. The processors used in these embedded applications must be efficient to deliver high levels of performance within limited power and silicon area budgets.

Why Attend?

Join us for the ARC® Processor Summit to hear our experts, users and ecosystem partners discuss the most recent trends and solutions that impact the development of SoCs for embedded applications. This event will provide you with in-depth information from industry leaders on the latest ARC processor IP and related hardware/software technologies that enable you to achieve differentiation in your chip or system design. Sessions will be followed by a networking reception where you can see live demos and chat with fellow attendees, our partners, and Synopsys experts.

Who Should Attend?

Whether you are a developer of chips, systems or software, the ARC Processor Summit will give you practical information to help you meet your unique performance, power and area requirements in the shortest amount of time.

Automotive

Comprehensive solutions that help drive security, safety & reliability into automotive systems

AI

Power-efficient hardware/software solutions to implement artificial intelligence technologies in next-gen SoCs

Enabling Technologies

Solutions to accelerate SoC and software development to meet target performance, power and area requirements

We look forward to seeing you in person at ARC Processor Summit!

Make the Safe Choice

  • Over 20 years of innovation delivering silicon-proven processor IP for embedded applications – billions of chips shipped annually
  • Industry’s second-leading processor by unit shipment
  • The safe choice with significant investment in the development of safety and security processors

Industry’s Best Performance Efficiency for Embedded

  • Broad portfolio of proven 32-/64-bit CPU and DSP cores, subsystems and software development tools
  • Processor IP for a range of applications including ultra-low power AIoT, safety-critical automotive, and embedded vision with neural networks
  • Supported by a broad ecosystem of commercial and open-source tools, operating systems, and middleware

PPA Efficient, Configurable, Extensible

  • Optimized to deliver the best PPA efficiency in the industry for embedded SoCs
  • Highly configurable, allowing designers to optimize the performance, power, and area of each processor instance on their SoC
  • ARC Processor eXtension (APEX) technology to customize processor implementation

Rich ARC Ecosystem

  • Complete suite of development tools to efficiently build, debug, profile and optimize embedded software applications for ARC based designs
  • Broad 3rd party support provides access to ARC-optimized software and hardware solutions from leading commercial providers
  • Online access to a wide range of popular, proven free and open-source software and documentation

Register Today!

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random


Digital Twins Simplify System Analysis
by Dave Bursky on 08-15-2022 at 6:00 am

The ability to digitally replicate physical systems has been used to model hardware operations for many years, and more recently, digital twinning technology has been applied to electronic systems to better simulate and troubleshoot them. As explained by Bryan Ramirez, Director of Industries, Solutions & Ecosystems, Siemens EDA, one example of the early use of twin technology was in the Apollo 13 mission back in 1970. With a spacecraft 200,000 miles away, hands-on troubleshooting of a failing subsystem or system was not possible. Designers tackled the challenge by using a ground-based duplicate system (a physical twin) to replicate and then troubleshoot problems that arose.

However, such physical twins were both expensive and very large, and often had to be disassembled to reach the system that failed. By employing a digital twin of the system, designers can manipulate the software to do the analysis and develop a solution or workaround to the problem, saving time and money. A conceptual model of the digital twin was first proposed by Michael Grieves of the University of Michigan in 2002, explained Ramirez, and the first practical definition of the digital twin stemmed from work at NASA in 2010 to improve physical model simulations for spacecraft.

Digital twins allow designers to virtually test products before they build the systems and complete the final verification. They also allow engineers to explore their design space and even define their system of systems. For example, continued Ramirez, using digital twin technology to model autonomous driving can help shape the electronic control systems. Using the twin technology, designers can develop models that simulate the sensing, computations and actuations of the autonomous driving system, as well as support “shift-left” software development. This gives the designer the ability to go from chip to car to city validation without hardware. That reduces costs and design spins as well as allowing designers to optimize system performance.

Additional benefits of digital twin technology include the ability to include predictive maintenance, remote diagnostics, and even real-time threat monitoring. In industrial applications, real-time monitoring, feedback for continuous improvement, and feed-forward predictive insights are key benefits of leveraging the digital twin approach (see the figure). Factory automation can also benefit by using the digital twin capability for simulating autonomous guided vehicles, interconnected systems of systems, as well as examining security, safety, and reliability aspects.

Extrapolating future scenarios, Ramirez suggests that the digital twin capability can simulate the impossible. One such example is an underwater greenhouse dubbed Nemo’s Garden. In the simulation, the software can accelerate innovation by removing the limitations of weather conditions, seasonality, growing seasons, and diver availability.

All these simulation capabilities are the result of improved compute capabilities, which, in turn, are the result of higher-performance integrated circuits. Additionally, as the IC content in systems continues to increase, it becomes easier to simulate/emulate the systems as digital twins. However, as chip complexity continues to increase, so does cost – especially the cost of respins – and the need for digital twins grows, to better simulate the complex chips and thus avoid costly respins. The challenges that digital twin technology faces include creating models for the complex systems, developing multi-domain and mixed-fidelity simulations, setting standards for data consistency and sharing, and performance optimization. These are issues that the industry is working hard to address.

For more information go to Siemens Digital Industries Software

Also read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC


Time for NHTSA to Get Serious
by Roger C. Lanctot on 08-14-2022 at 10:00 am

In the final season of “The Sopranos,” Christopher Moltisanti (played by Michael Imperioli) and Anthony Soprano (James Gandolfini) lose control of their black Cadillac Escalade and go tumbling off a two-lane rural highway and down a hill. Christopher dies (spoiler alert) with an assist from Tony, before Tony calls “911” for help.

Connected car junkies will immediately cry foul given that the episode – which first aired in 2007 – falls well within the deployment window of General Motors’ OnStar system. But more vigilant devotees will recall that in an earlier season Tony says he “had all of that tracking shit removed” from his car. (Tony favored GM vehicles.)

I was reminded of this as I rewatched the series and pondered the National Highway Traffic Safety Administration’s crash reporting General Order issued last year. The reporting requirement raises a critical issue regarding privacy obligations or the relevance of privacy in the event of a crash. The shortcomings of the initial tranche of data reported out by NHTSA last month suggest a revision of the reporting requirement is in order.

When the NHTSA issued its Standing General Order in June of 2021 requiring “identified manufacturers and operators to report to the agency certain crashes involving vehicles equipped with automated driving systems (ADS) or SAE Level 2 advanced driver assistance systems (i.e. systems that simultaneously control speed and steering),” the expectation was that the agency would soon be awash in an ocean of data. The agency was seeking deeper insights into the causes of crashes, the mitigating effects of ADS and some ADAS systems, and some hint as to the future direction of regulatory actions.

Instead, the agency received reports of 419 crashes of ADAS-equipped vehicles and 145 crashes involving vehicles equipped with automated driving systems. What has emerged from the exercise is a batch of heterogeneous data with obvious results (human-driven vehicles with front-end damage and robot-driven vehicles with rear-end damage) and gaping holes.

The volume and type of data were insufficient to draw any significant conclusions and the varying ability of the individual car companies to collect and report the data produced inconsistent information. In fact, allowable redactions further impeded the potential for achieving useful insights.

To this, add NHTSA’s own caveats, described in great detail in NHTSA documents:

  • Access to Crash Data May Affect Crash Reporting
  • Incident Report Data May Be Incomplete or Unverified
  • Redacted Confidential Business Information and Personally Identifiable Information
  • The Same Crash May Have Multiple Reports
  • Summary Incident Report Data Are Not Normalized

The only car company that appears to be adequately prepared and equipped to report the sort of data that NHTSA is seeking, Tesla, stands out for having reported the most relevant crashes. In a report titled “Do Teslas Really Account for 70% of U.S. Crashes Involving ADAS? Of course Not,” CleanTechnica.com notes that Tesla is more or less “punished” for its superior data reporting capability. Competing auto makers are allowed to hide behind the limitations of their own ability to collect and report the required data.

It’s obvious from the report that there is a vast under-reporting of crashes. This is the most salient conclusion from the reporting and it calls for a radical remedy.

The U.S. does not have a mandate for vehicle connectivity, but nearly every single new car sold in the U.S. comes with a wireless cellular connection.  The U.S. does have a requirement that an event data recorder (EDR) be built into every car.

If NHTSA is serious about collecting crash data, the agency ought to mandate a connection between the EDR and the telematics system and require that in the event of a crash the data related to that crash be automatically transmitted to a government data collection point – and simultaneously reported to first responders connected to public service access points.

There are several crucial issues that will be remedied by this approach:

  • First responders will receive the fastest possible notification of potentially fatal crashes. Most automatic notifications are triggered by airbag deployments and too many of those notifications go to call centers that introduce delays and impede the transmission of relevant data.
  • A standard set of data will be transmitted to both the regulatory authority and first responders – removing inconsistencies and redactions. All such systems ought to be collecting and reporting the same set of data. European authorities recognized the importance of consistent data collection when they introduced the eCall mandate which took effect in April of 2018.
  • Manufacturers will finally lose plausible deniability – such as the ignorance that GM claimed during Congressional hearings in an attempt to avoid responsibility for fatal ignition switch failures.
  • Such a policy will recognize that streets and highways are public spaces where the drivers of cars that collide with inanimate objects, pedestrians, or other motorists have forfeited a right to privacy. The public interest is served by automated data reporting from crash scenes.

NHTSA administrators are political appointees with precious little time to influence policy in the interest of saving lives. It is time for NHTSA to act quickly to establish a timeline for automated crash reporting to cut through the redactions and data inconsistencies and excuses and pave a realistic path toward reliable, real-time data reporting suitable for realigning regulatory policy. At the same time, the agency will greatly enhance the timeliness and efficacy of local crash responses – Anthony Soprano notwithstanding.

Also Read:

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform

Accellera Update: CDC, Safety and AMS


Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists
by Fred Chen on 08-14-2022 at 8:00 am

There is growing awareness that EUV lithography is actually an imaging technique that heavily depends on the distribution of secondary electrons in the resist layer [1-5]. The stochastic aspects should be traced not only to the discrete number of photons absorbed but also to the electrons that are subsequently released. The electron spread function, in particular, should be quantified as part of resist evaluation [5]. The scale of the electron spread, or blur, is likely not a well-defined parameter, but itself has some distribution.

It is necessary to quantitatively assess this distribution of spread. A basic, direct approach is to pattern two features close enough to one another to show resist loss that depends on the electron blur. For example, two 20-25 nm spots separated by a 40 nm center-to-center distance will be significantly affected by a 3 nm electron blur scale length, but much less so by a 2 nm scale length (see the figure below).

The resist loss between two closely spaced exposed features following development depends on the scale length of the electron spread, a.k.a. blur [5].
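To make the 2 nm versus 3 nm comparison concrete, here is a small back-of-the-envelope sketch under assumptions that are not from the paper: each spot is idealized as a 20 nm top-hat dose profile, the electron blur is approximated as a Gaussian of the stated scale length, and the figure of merit is the relative dose leaking into the midpoint of the 40 nm-pitch pair. The real blur kernel may well have different (e.g., exponential) tails.

```python
# Back-of-the-envelope sketch: relative dose at the midpoint between two
# 20 nm top-hat spots on a 40 nm pitch, after convolving each spot with a
# Gaussian blur of scale length sigma. The Gaussian kernel is an assumption;
# real secondary-electron blur may have heavier (e.g., exponential) tails.
from math import erf, sqrt

def tophat_blurred(x, center, width, sigma):
    """Dose at position x (nm) from a top-hat spot of the given width,
    centered at `center`, convolved with a Gaussian of std dev sigma."""
    a, b = center - width / 2, center + width / 2
    return 0.5 * (erf((x - a) / (sigma * sqrt(2))) - erf((x - b) / (sigma * sqrt(2))))

def midgap_dose(sigma, width=20.0, pitch=40.0):
    # Two spots centered at +/- pitch/2; evaluate at the midpoint x = 0,
    # normalized to the dose at a spot center.
    mid = (tophat_blurred(0.0, -pitch / 2, width, sigma)
           + tophat_blurred(0.0, +pitch / 2, width, sigma))
    peak = tophat_blurred(-pitch / 2, -pitch / 2, width, sigma)
    return mid / peak

for sigma in (2.0, 3.0):
    print(f"blur {sigma} nm: mid-gap dose ~ {midgap_dose(sigma):.1e} of peak")
```

Under this toy model the mid-gap dose rises by roughly a factor of a thousand in going from a 2 nm to a 3 nm blur scale, consistent with the point above that the spot-pair response is a sensitive probe of the blur length.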

An EUV or electron-beam resist could be evaluated at a given thickness by exposing arrays of these spot pairs in sufficient numbers to get the distribution of electron blur scales down to the far-out tails, i.e., the ppb level or lower. This data would be necessary for better predictions of yield, due to CD variation, edge placement error (EPE), and even the occurrence of stochastic defects.
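How many spot pairs count as “sufficient numbers”? One standard way to size such an experiment (an illustrative assumption here, not a prescription from the article) is the rule of three: observing zero events in N trials bounds the event probability at roughly 3/N with about 95% confidence, so probing ppb-level tails implies on the order of billions of spot pairs.

```python
# Rule-of-three sizing estimate (illustrative assumption, not from the article):
# zero events in N trials bounds the event probability at ~3/N with ~95%
# confidence, so resolving a tail probability p needs N ~ 3/p spot pairs.
def pairs_needed(tail_probability, confidence_factor=3.0):
    return confidence_factor / tail_probability

print(f"{pairs_needed(1e-9):.1e} spot pairs to probe ppb-level tails")  # ~3.0e+09
```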

Interestingly enough, scanning probe electron lithography, such as using an STM, may have the advantage of having probe-to-sample bias limiting the lateral spread of electrons, thereby reducing blur [6]. Again, the spot pair pattern can be used to confirm whether this is true or not.

References

[1] J. Torok et al., “Secondary Electrons in EUV Lithography,” J. Photopolymer Sci. and Tech. 26, 625 (2013).

[2] R. Fallica et al., “Experimental estimation of lithographically-relevant secondary electron blur,” 2017 EUVL Workshop.

[3] M. I. Jacobs et al., “Low energy electron attenuation lengths in core-shell nanoparticles,” Phys. Chem. Chem. Phys. 19 (2017).

[4] F. Chen, “The Electron Spread Function in EUV Lithography,” https://www.linkedin.com/pulse/electron-spread-function-euv-lithography-frederick-chen

[5] F. Chen, “The Importance of Secondary Electron Spread Measurement in EUV Resists,” https://www.youtube.com/watch?v=deB0pxEwwvc

[6] U. Mickan and A. J. J. van Dijsseldonk, US Patent 7463336, assigned to ASML.

This article originally appeared in LinkedIn Pulse: Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

Also Read:

EUV’s Pupil Fill and Resist Limitations at 3nm

ASML- US Seeks to Halt DUV China Sales

ASML EUV Update at SPIE


What is Your Ground Truth?
by Roger C. Lanctot on 08-14-2022 at 6:00 am

When my son bought a 2020 Chevrolet Bolt EV a couple of years ago, I was excited. I wanted to see what the Xevo-supplied Marketplace (contextually driven ads and offers) looked like and I was also curious as to what clever navigation integration GM was offering.

I was swiftly disappointed to discover that Xevo Marketplace was not an embedded offering but rather an app to be projected from a connected smartphone. As for navigation, it wasn’t available for my son’s MY2020 Bolt even as an option.

I was stunned for two reasons. First, I thought the Marketplace app was intended as a brand-defining service which would surely be embedded in the vehicle and integrated as part of the home screen as an everyday driving application. Second, how could GM ship an EV that was unable to route the driver to the nearest compatible and available charging station via the on-board navigation system?

These manifestations made me wonder whether I was witnessing the appification of in-vehicle infotainment. What if and why shouldn’t cars ship with dumb terminals that draw all of their intelligence and content from the driver’s/user’s mobile device?

Further fueling this impression was GM’s subsequent introduction of Maps+, a Mapbox-sourced navigation app based on OpenStreetMap with a $14.99/month subscription. To be followed by the launch of Google Built-in (Google Maps, Google Play apps, Google Assistant) as a download with a Premium OnStar plan ($49.99/month) or Unlimited Data plan ($25/month) – seems confusing, right?

The truly confusing part, though, isn’t the variety of plans and pricing combos, it is the variable view of reality or ground truth. Ground truth is the elusive understanding of traffic conditions on the road ahead and how that might impact travel and arrival times. Ground truth can also apply to the availability of parking and charging resources. (Parkopedia is king of parking ground truth, according to a recent Strategy Analytics study: https://business.parkopedia.com/strategy-analytics-us-ground-truth-testing-2021?hsLang=en )

The owner of a “lower end” GM vehicle – with no embedded navigation option – will have access to at least three different views of ground truth: OnStar turn-by-turn navigation, Maps+ navigation, and Google- and/or Apple-based navigation. (Of course Waze, Here We Go, and TomTom apps might also be available from a connected smartphone.)

Each of these navigation solutions will have different views of reality and different routing algorithms driven by different traffic and weather data sources and assumptions. Am I, as the vehicle owner and driver, supposed to “figure out” which is the best source? When I bought the car wasn’t I paying GM for its expertise in vetting these different systems?

What about access to the data and the use of vehicle data? Is my driving info being hoovered up by some third party? And what is ground truth when I am being given varying lenses through which to grasp it?

Solutions are now available in the market – from companies such as TrafficLand in the U.S. – that are capable of integrating still images and live video from traffic cameras along my route allowing me to better understand the routing decisions my car or my apps are making for me. The optimum means for accessing this information would be through a built-in navigation system.

GM continues to offer built-in or “embedded” navigation across its product line with a handful of exceptions – such as entry-level models of the Bolt.  Embedded navigation – usually part of the integrated “infotainment” system – is big business, representing billions of dollars in revenue from options packages for auto makers.

More importantly, the modern day infotainment system – lately rendered on a 10-inch or larger in-dash screen – is a critical point of customer engagement. The infotainment system is the focal point of in-vehicle communications, entertainment, and navigation – as well as vehicle status reports.

Vehicle owners around the world tell my employer – Strategy Analytics – in surveys and focus groups that the apps that are most important to them while driving relate to traffic, weather, and parking. Traffic is the most important, particularly predictive traffic information, because this data is what determines navigation routing decisions.

Navigation apps do not readily disclose their traffic sources, but it is reasonable to assume that a navigation app with HERE or TomTom map data is using HERE- or TomTom-sourced traffic information. Google Maps has its own algorithms, as do Apple and Mapbox – but, of course, there is some mixing and matching among the providers of navigation, maps, and traffic data.

This is all the more reason why access to TrafficLand’s real-time traffic camera feeds is so important. Sometimes seeing is believing and TrafficLand’s traffic cameras are, by definition, monitoring the majority of known traffic hot spots across the country.

When the navigation system in my car wants to re-route me – requesting my approval – I’d like to see the evidence to justify a change in plans. Access to traffic camera info can provide that evidence.

I can understand why GM – and some other auto makers such as Toyota – have opted to drop embedded navigation availability from some cars as budget-minded consumers seek to pinch some pennies. But the embedded map represents the core of a contextually aware in-vehicle system.

There is extraordinary customer retention value in building navigation into every car – particularly an EV. The fundamental principles of creating safe, connected cars call for the integration of a location-aware platform including navigation.

Deleting navigation may be a practical consideration as attach rates decline, but it’s bad for business. In fact, there is a bit of a head-snapping irony in GM or Toyota or any auto maker deleting embedded navigation in favor of a subscription-based navigation experience from Mapbox or Google. These car makers are telling themselves that the customers least able to pay for built-in navigation will be willing to pay a monthly subscription for an app. I think not.

This is very short-term thinking. Location awareness is a brand-defining experience and auto makers targeting “connected services” opportunities will want to have an on-board, built-in navigation system. If not, the auto maker that deletes built-in navigation will be handing the customer relationship and the related aftermarket profits to third parties such as Apple, Amazon, and Google. That’s the real ground truth.

Also Read:

What’s Wrong with Robotaxis?

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform


Podcast EP100: A Look Back and a Look Ahead with Dan and Mike
by Daniel Nenni on 08-12-2022 at 10:00 am

Dan and Mike get together to reflect on the past and the future in this 100th Semiconductor Insiders podcast episode. The chip shortage, foundry landscape, Moore’s law, CHIPS Act and industry revenue trends are some of the topics discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Kai Beckmann, Member of the Executive Board at Merck KGaA
by Daniel Nenni on 08-12-2022 at 6:00 am

Kai Beckmann is a Member of the Executive Board at Merck KGaA, Darmstadt, Germany, and the CEO of Electronics. He is responsible for the Electronics business sector, which he has been leading since September 2017. In October 2018, Kai Beckmann also took over the responsibility for the Darmstadt site and In-house Consulting. In addition, he acts as the Country Speaker for Germany with responsibility for co-determination matters.

Prior to his current role, Kai Beckmann was Chief Administration Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Group Human Resources, Group Business Technology, Group Procurement, In-house Consulting, Site Operations and the company’s Business Services, as well as Environment, Health, Safety, Security, and Quality.

In 2007, he became the first Chief Information Officer of Merck KGaA, Darmstadt, Germany, with responsibility for Corporate Information Services. From 2004 to 2007, he served as Managing Director of Singapore and Malaysia, and prior to that he held senior executive responsibility for the Information Management and Consulting unit from 1999 to 2004. He began his career at Merck KGaA, Darmstadt, Germany in 1989 as an IT system consultant.

Kai Beckmann studied computer science at the Technical University of Darmstadt from 1984 to 1989. In 1998, he earned a doctorate in Economics while working. He is married and has one son.

Tell us about EMD Electronics

Merck KGaA, Darmstadt, Germany, operates across life science, healthcare, and electronics. More than 60,000 employees work to make a positive difference in millions of people’s lives every day by creating more joyful and sustainable ways to live. In 2021, Merck KGaA, Darmstadt, Germany, generated sales of € 19.7 billion in 66 countries. The company holds the global rights to the name and trademark “Merck” internationally. The only exceptions are the United States and Canada, where the business sectors of Merck KGaA, Darmstadt, Germany, operate as MilliporeSigma in life science, EMD Serono in healthcare, and EMD Electronics in electronics.

As EMD Electronics we are the company behind the companies advancing digital living. Our portfolio covers a broad range of products and solutions, including high-tech materials and solutions for the semiconductor industry, as well as liquid crystals and OLED materials for displays and effect pigments for coatings and cosmetics. We offer the broadest portfolio of innovative materials in the semiconductor industry and support our customers in creating industry-leading microchips. In the US, EMD Electronics alone has approximately 2,000 employees across the country, with more than a dozen manufacturing and R&D sites spanning the continental U.S.

Last year you announced a $1 billion investment in the US to support semiconductor customers. Can you tell us more about these investments?

These investments are part of our global program called “Level Up” for investing in R&D, capacity, and accelerating growth in the semiconductor and display markets. Over the next five years, we plan to spend around $2.5 billion globally in long-term fixed assets (capital expenditures) in Semiconductor and Display Solutions. In the U.S., EMD Electronics plans to invest primarily in its Arizona, California, Texas, and Pennsylvania sites. Last year, we announced global investments of more than $3.5 billion as part of our “Level Up” growth program. With this, we seek to capture the growth opportunities that come with the significantly accelerating global demand for innovative semiconductor and display materials. This demand is driven by exponential data growth and highly impactful technology trends that include remote working, the growth of AI, and soaring demand for electric vehicles. Our “Level Up” growth program focuses on four mutually reinforcing key priorities: Scale, Technology, Portfolio, and Capabilities. Further investing in these four areas builds the foundation of our ambitious growth targets, in conjunction with the strong demand for electronics materials, particularly semiconductors.

Sustainability is becoming increasingly important across the industry. What is Merck KGaA, Darmstadt, Germany, and especially the business of EMD Electronics doing to ensure a sustainable future?

We believe that we can harness science and technology to help tackle many global challenges. Always guided by a robust set of values, we approach all our actions and decisions with a sense of responsibility. Sustainability has therefore been vital to us for many generations. We can only ensure our own future success by also creating lasting added value for society.

In creating long-term added value for society, we have defined three goals within our sustainability strategy. In 2030, we will achieve progress for more than one billion people through sustainable science and technology, along with integrating sustainability into all our value chains. By 2040, we will be climate-neutral and reduce our resource consumption. Most of our greenhouse gas emissions stem from process-related emissions during the production of specialty chemicals for the electronics industry. With improved processes, Merck can significantly reduce those emissions in the future.

As a materials supplier for the electronics industry, we are a key enabler for sustainable innovation. We are addressing emissions through abatement and alternatives. For example, we are now designing a process for large-scale NF3 abatement as a prerequisite to meet our long-term GHG goals and to drive decarbonization in the industry. In addition to optimizing our own processes to find more sustainable solutions, we are also a trustworthy partner for our customers, supporting them on their sustainability journey. Just recently we announced our collaboration with Micron, testing an alternative low-GWP etch gas of ours, further aligning on our shared sustainability goals.

You recently attended Semicon West. What were your reactions to being back in person with customers at a trade show in the US, and what announcements or innovations were you most excited about?

I truly appreciated being able to re-connect with many great people from all over the world face-to-face. SEMICON West is the place of choice to exchange views on key topics in our industry, tackle industry challenges, and establish partnerships and collaborations. The vitality of innovation never stops and it’s wonderful to see the progress the industry is making. It is fascinating to see how the industry is driving innovations in new materials in fields such as 3D NAND, FinFET, Nanosheet or EUV, to continuously make devices more intelligent, power efficient and smaller. With the move to 3D, shrinking is no longer the most cost-effective way to increase density. Feature sizes for 3D chips will no longer shrink and may increase, as they already have for 3D NAND. I also heard several times that “etch could become the new litho.” Since Merck is supplying materials for all parts of the manufacturing process – litho, deposition, etch, CMP, cleans, you name it – we are well positioned to participate in the continued growth story that is Moore’s Law, 2nd edition. Additionally, we appreciate that sustainability is becoming more and more important in our industry, where we are a well-respected partner for our customers.

Finally, let me mention data analytics as one driving force for the industry. We combine a data-driven approach with a physics-based expertise. In December last year we formed the independent partnership Athinia together with Palantir to deliver a secure collaborative data analytics platform for the semiconductor industry. The Athinia platform will leverage AI and big data to solve critical challenges, improve quality and supply chain transparency, and time to market. At Semicon West Athinia announced that Micron Technology plans to use the data analytics platform to create a pioneering data collaboration ecosystem that will help lead a continued journey of digital transformation with Micron’s critical suppliers.

Advancing Digital Living has data at its core, data that will be continually leveraged in the coming decade. Our teams pioneer digital solutions that ensure we can deliver high-caliber, customized quality control that allows for optimal material performance. Our digital solutions team also serves customers in predictive quality analysis. Our approach starts at the production level, which is at the center of the supply chain, interacting with customers and partners. We gain learnings from the use of the right technology or system and then adapt, and scale as needed, ultimately allowing us to identify which characteristics led to the “golden batch”. This also helps to accelerate new material development in the future as we transfer the learnings in R&D in a systematic way periodically. By the way, minimizing quality-based excursions also offers sustainability benefits, minimizing wasted product and suboptimal paths through supply chains.

For more information click HERE.

Also read:

CEO Interview: Jaushin Lee of Zentera Systems, Inc.

CEO Interview: Shai Cohen of proteanTecs

CEO Interview: Barry Paterson of Agile Analog


Understanding Sheath Behavior Key to Plasma Etch
by Scott Kruger on 08-11-2022 at 10:00 am

Readers of SemiWiki will be well aware of the challenges the industry has faced in photolithography in moving to new nodes, which drove the development of new EUV light sources as well as new masking techniques. Plasma etching is another key step in chip manufacturing that has also faced new challenges in the development of sub-10nm processes.

Plasmas, the fourth state of matter, are formed by filling a vacuum chamber with a low-pressure gas and using electromagnetic energy to ionize the gas: electrons are stripped from the gas atoms and become unbound. Because electrons are more than a thousand times lighter than the ions, they move quickly relative to the ions. At the wafer surface, the electrons quickly strike the wafer and are depleted. A steady-state electric field region known as the sheath is formed to balance the current losses. It is this boundary layer that gives the plasma many of its useful properties in manufacturing, such as plasma vapor deposition, plasma ashing (to remove the photoresist), or what we will focus on here, plasma etching.

Plasma etching, also known as dry etching, was a breakthrough for achieving anisotropic etches for producing deep features. As seen below, the input gas type and volume, the applied voltage amplitude and waveforms, and the reactor geometry can all be varied to give considerable flexibility in plasma etching reactors. Common reactor types are capacitively coupled reactors, which use a single voltage source; reactive-ion etch (RIE) reactors, which have multiple electrodes to independently control the reactive ions for selective etching; and inductively coupled plasma RIE (ICP-RIE) reactors, which use higher frequencies to enable higher densities and a faster etch rate. Designing a plasma etch reactor thus involves a wide range of input parameters, giving a large design and operating space for solving a given manufacturing problem.

As the dimensions for semiconductor devices have become smaller, the implications for plasma etching have changed in multiple ways. Current advanced nodes have dramatically increased the film stack complexity for advanced logic. Other areas that will see increased challenges in scaling are the 3D NAND structures in advanced memory and advanced packaging with its complex routing needs. For example, in the figure below, a schematic for a proposed RDL from Lau et al. [1] is shown along with a scanning electron microscope image of a through-silicon via (TSV). This TSV, created using a Bosch-type deep reactive ion etch (DRIE), has an aspect ratio of 10.5, demonstrating the deep anisotropy that modern etch reactors are capable of. In these and other areas, the importance of understanding the details of the plasma etch has increased. For plasma etching, the critical issues are the degree of anisotropy (vertical versus horizontal etching), the shape of trenches (straight versus tapered or bowed, as seen in the figure below) and etch uniformity. This is in addition to traditional concerns such as etch rate and uniformity over the entire wafer, which are critical for high yields and economics.

Controlling plasmas is difficult because they are chemically reactive gases that interact with the semiconductor material in complex ways. Simulations have long been important in understanding plasma behavior in etch reactors. The three basic modeling paradigms are drift-diffusion, hydrodynamic (or fluid), and kinetic. These models are directly equivalent to the types of models used for electron transport in TCAD algorithms for studying semiconductor devices. A key difference here is that the ions move rather than sitting in a solid-state lattice, and chemical reactions are also critical for understanding the plasma formation, properties, and etching abilities.

Drift-diffusion and fluid models are widely used to determine the overall energy balance and basic plasma properties. However, kinetic codes are critical for understanding the details of the plasma etching process. The degree of etch anisotropy is determined fundamentally by the energy and angle of ions as they strike the wafer, quantities that are strongly dependent on the plasma sheath. The complexities of the sheath cannot be fully resolved with drift-diffusion and fluid models, but require a kinetic code. Kinetic modeling is especially useful for gaining insights into the plasma uniformity and the degree of anisotropy of the etching process.
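As a toy illustration of why the sheath matters (not a VSim workflow, and no substitute for a kinetic code), consider a collisionless sheath: an ion arrives with some thermal transverse energy and then gains the full sheath potential as directed energy, so the spread of impact angles narrows as the sheath drop grows. The plasma parameters below are hypothetical.

```python
# Toy collisionless-sheath estimate of ion impact angles at the wafer.
# Assumptions (not from the article): ions enter the sheath with a 2-D
# Maxwellian transverse energy at temperature T_i and a directed Bohm
# energy ~T_e/2, then gain e*V_sheath of vertical energy crossing the sheath.
import math
import random

def impact_angle_deg(T_i_eV, T_e_eV, V_sheath_V, rng):
    e_perp = rng.expovariate(1.0 / T_i_eV)   # transverse energy (eV), 2-D Maxwellian
    e_par = 0.5 * T_e_eV + V_sheath_V        # directed energy (eV) at the wafer
    return math.degrees(math.atan(math.sqrt(e_perp / e_par)))

rng = random.Random(0)
for v_sheath in (20.0, 200.0):
    angles = [impact_angle_deg(0.05, 3.0, v_sheath, rng) for _ in range(100_000)]
    print(f"V_sheath = {v_sheath:5.0f} V: mean impact angle {sum(angles)/len(angles):.2f} deg")
```

Even this crude estimate shows the angular spread tightening as the sheath voltage increases; a kinetic tool resolves the real collisional, time-varying sheath, where these distributions depart substantially from this idealized picture.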

Tech-X Corporation has developed VSim, a kinetic modeling tool for simulating plasma etch reactors. Tech-X Corporation, located in Boulder, Colorado, has worked in high performance computing for plasma physics for almost three decades. High-performance computing enables the details of ion and electron behavior to be computed across manufacturing-relevant spatial scales that are large relative to fundamental plasma length scales. With over a decade of experience serving the wafer equipment manufacturing market, Tech-X provides the leading plasma kinetic simulation capability. More information on VSim’s capabilities is available at http://www.txcorp.com/vsim. In our next article, we will highlight enhancements in VSim 12, which will be released on September 14.

[1] Lau, J., et al. “Redistribution layers (RDLs) for 2.5 D/3D IC integration.” International Symposium on Microelectronics. Vol. 2013. No. 1. International Microelectronics Assembly and Packaging Society, 2013.

Also Read:

Coverage Analysis in Questa Visualizer

Fast EM/IR Analysis, a new EDA Category

DSP IP for High Performance Sensor Fusion on an Embedded Budget