
SoC Verification Flow and Methodologies

by Sivakumar PR on 08-18-2022 at 6:00 am


New applications built on the latest technologies, such as AI, demand ever more complex chips and SoCs. For example, Apple's 5nm A14 SoC features a 6-core CPU, a 4-core GPU, and a 16-core neural engine capable of 11 trillion operations per second, all within 11.8 billion transistors, while AWS's 7nm 64-bit Graviton2 custom processor contains 30 billion transistors. Designing such complex chips demands a standard, proven verification flow that involves extensive verification at every level, from block to IP to sub-system to SoC, using various verification methodologies and technologies.

In this article, let me walk you through various verification methodologies we use for verifying IPs, Sub-systems, and SoCs and explain why we need new methodologies/standards like PSS.

Understanding how we build electronic systems using SoCs is essential for verification engineers who work across the SoC verification flow, whether doing white-box verification at the IP level, gray-box verification at the sub-system level, or black-box verification at the SoC level.

How do we build electronic systems using SoCs?

Any chip, whether a simple embedded microcontroller or a complex system-on-a-chip [SoC], will have one or more processors. Figure 1 shows a complex electronic system, composed of both hardware and software, of the kind needed for electronic devices like smartphones.


Figure 1: Electronic System and System-On-Chip

The hardware is built around a complex SoC that incorporates almost all the components the device needs. In the case of a smartphone, we integrate all the hardware components, called IPs [Intellectual Properties], such as CPUs, GPUs, DSPs, application processors, interface IPs like USB, UART, SPI, I2C, and GPIO, and subsystems like system controllers, memories with controllers, Bluetooth, and WiFi, to create the SoC. Using an SoC reduces the size and power consumption of the device while improving its performance.

The software is composed of application software and system software. The application software provides the user interface, and the system software gives the application software its interface to the hardware. In the smartphone case, the application software could be mobile apps like YouTube, Netflix, Google Maps, etc., and the system software could be an operating system [OS] like iOS or Android. The system software provides everything the application software needs to interface with the hardware, such as firmware and protocol stacks, along with the OS. The OS, as the central component of the system software, manages multiple application threads in parallel, memory allocation, and I/O operations.

Let me explain how an entire system like a smartphone works. For example, when you invoke an application like a calculator on a smartphone, the operating system loads the executable binary from storage memory into RAM. It then loads the binary's starting address into the program counter [PC] of the processor. The processor [ARM/x86/RISC-V] executes the binary in RAM/cache at the address pointed to by the PC. This precompiled binary is simply the machine language of the processor, so the processor executes the application in terms of its own instructions [ADD/SUB/MULT/LOAD] and computes the results.
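
The fetch-and-execute loop described above can be sketched in a few lines of Python. The four-instruction toy ISA and tuple encoding here are invented for illustration only; real processors decode binary machine code, not Python tuples.

```python
# Toy illustration of the fetch-decode-execute cycle described above.
# The ISA (LOAD/ADD/STORE) is a hypothetical stand-in, not ARM/x86/RISC-V.

def run(program, memory):
    regs = {"r0": 0, "r1": 0}
    pc = 0                                # program counter: next instruction index
    while pc < len(program):
        op, *args = program[pc]           # fetch + decode
        if op == "LOAD":                  # LOAD rX, addr
            regs[args[0]] = memory[args[1]]
        elif op == "ADD":                 # ADD rX, rY  (result into rX)
            regs[args[0]] += regs[args[1]]
        elif op == "STORE":               # STORE rX, addr
            memory[args[1]] = regs[args[0]]
        pc += 1                           # advance to the next instruction
    return regs, memory

# A "calculator" computing 2 + 3: operands at addresses 0 and 1, result at 2
mem = [2, 3, 0]
program = [("LOAD", "r0", 0), ("LOAD", "r1", 1),
           ("ADD", "r0", "r1"), ("STORE", "r0", 2)]
regs, mem = run(program, mem)
print(mem[2])  # → 5
```

The OS's job, in these terms, is to place `program` in memory and point the PC at its first entry.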

Understanding the processor-centric SoC design process helps verification engineers deal with any complex sub-system or chip verification at the system level. Over a long-term career, verification engineers may need to handle virtual prototyping for system modeling; IP, subsystem, and SoC functional verification; hardware-software co-verification; emulation; ASIC prototyping; post-silicon validation; and more. This demands cohesive, complete knowledge of both hardware and software, both to work independently as verification experts and, at times, to work closely with software teams on the RTOS, firmware, and protocol stacks used for chip- and system-level verification.

Now let us explore various verification methodologies.

IP Verification

IPs are the fundamental building blocks of any SoC. IP verification therefore demands exhaustive white-box verification using methodologies like formal verification and random simulation, especially for processor IPs, since the processor is the central component of any SoC and initiates and drives everything. Figure 2 shows how we verify a processor IP through exhaustive random simulation with a SystemVerilog-based UVM testbench [TB]. All the processor instructions can be simulated with various random values while generating functional, assertion, and code coverage. We use coverage to measure the progress and quality of the verification, and ultimately for verification sign-off. IP-level verification demands strong expertise in HVL programming, formal and dynamic ABV, simulation debugging, and the use of VIPs and EDA tools.

Figure 2: RISC-V UVM Verification Environment

ABV – Assertion-Based Verification; UVM – Universal Verification Methodology; UVC – UVM Verification Component; BFM – Bus Functional Model; VIP – Verification IP; RAL – Register Abstraction Layer
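
The constrained-random loop that drives such a testbench, randomize stimulus, sample coverage, stop at coverage closure, can be sketched illustratively in Python. The opcode list and the one-bin-per-opcode coverage model are invented stand-ins for what a real SystemVerilog covergroup would sample.

```python
# Illustrative sketch (not UVM itself) of constrained-random stimulus
# generation with functional-coverage-driven sign-off.
import random

OPCODES = ["ADD", "SUB", "MULT", "LOAD"]

def random_transaction():
    # constrained randomization: any opcode, operands limited to 8 bits
    return {"op": random.choice(OPCODES),
            "a": random.randrange(256),
            "b": random.randrange(256)}

def run_until_covered(max_txns=10_000):
    covered = set()                  # functional coverage bins: one per opcode
    for n in range(1, max_txns + 1):
        txn = random_transaction()   # generate stimulus
        covered.add(txn["op"])       # sample coverage
        if covered == set(OPCODES):  # coverage closure -> sign-off criterion
            return n, covered
    return max_txns, covered

txns, bins = run_until_covered()
print(f"all {len(bins)} opcode bins hit after {txns} random transactions")
```

A real flow would, of course, also check each transaction's result against a golden model and collect assertion and code coverage alongside the functional bins.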

Sub-System Verification

Sub-systems are composed mostly of pre-verified IPs, plus some newly built, chip-specific IPs like bridges and system controllers. Figure 3 shows how we build an SoC from a sub-system that integrates all the necessary interface IPs, bridges, and system controllers over an on-chip bus like AMBA. At this level, we prefer simulation-based gray-box verification, especially random simulation using verification IPs. All the VIPs, such as AXI, AHB, APB, GPIO, UART, SPI, and I2C UVCs [UVM Verification Components], are configured and connected to their respective interfaces. As shown in Figure 3, we create other TB components like reference models, scoreboards, and the UVM RAL to make the verification environment self-checking. We execute various VIP UVM sequences at the top level, verify the data flow, and measure the performance of the bus.

Figure 3: Sub-System UVM Verification Environment
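
The self-checking idea, a reference model predicting each expected result and a scoreboard comparing DUT output against the predictions, can be sketched as follows. The half-word-swapping "bridge" behavior is a hypothetical stand-in for a real IP's function.

```python
# Sketch of a reference-model + scoreboard pair, as in Figure 3's TB.
# The bridge behavior (swap the 16-bit halves of a 32-bit word) is invented.

def reference_model(word):
    # golden prediction of what the hypothetical bridge should output
    return ((word & 0xFFFF) << 16) | (word >> 16)

class Scoreboard:
    def __init__(self):
        self.expected = []
        self.mismatches = 0

    def predict(self, stimulus):
        # called when stimulus is driven: queue the golden expectation
        self.expected.append(reference_model(stimulus))

    def check(self, dut_output):
        # called when the DUT responds: in-order compare against prediction
        want = self.expected.pop(0)
        if dut_output != want:
            self.mismatches += 1

sb = Scoreboard()
for stim in [0x12345678, 0xDEADBEEF]:
    sb.predict(stim)
    sb.check(reference_model(stim))  # stand-in for the actual DUT response
print("mismatches:", sb.mismatches)  # → mismatches: 0
```

In a UVM environment, monitors on the input and output interfaces would feed `predict` and `check` through analysis ports; the comparison logic itself is no more than this.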

SoC Verification

SoCs are composed primarily of pre-verified third-party IPs and some in-house IPs. Usually, we prefer black-box verification using hardware emulation or simulation technologies at the SoC level. For example, you may come across a complex SoC verification environment like the one shown in Figure 4. The SoC testbench [TB] will have all kinds of testbench components: standard UVM verification IPs [USB/Bluetooth/WiFi and standard interfaces], legacy HDL TB components [JTAG agent] with UVM wrappers, custom UVM agents [firmware agents], and various monitors, in addition to the scoreboard and SystemC/C/C++ functional models. In this case, you will have to deal with both firmware and UVM sequences at the chip level. As a verification engineer, you need to know how to implement this kind of hybrid verification environment using standard VIPs, legacy HDL BFMs, and firmware code, and, more importantly, how to automate the simulation/emulation using EDA tools.

Figure 4: SoC Verification Environment 


Let me explain how it works. If the SoC uses an ARM processor, we usually replace the ARM RTL [encrypted netlist/RTL] with a functional model called a DSM [Design Simulation Model], which can use the firmware [written in C] as stimulus to initiate operations and drive the other peripherals [RTL IPs]. The SoC verification team writes UVM sequences to generate various directed scenarios through firmware testcases and verify the SoC functionality. During simulation, the firmware C source code is compiled into object code [an ARM machine-language binary], which is loaded into on-chip RAM. The ARM processor model [DSM] reads the object code from memory and initiates operations by configuring and driving all the RTL peripheral blocks [Verilog/VHDL]. This flow works for both simulation and emulation. If the SoC is very complex, hardware emulation is preferred to accelerate the verification process and achieve faster verification sign-off.
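
The firmware-as-stimulus flow can be sketched, under heavy simplification, in Python: compiled "object code" is loaded into an on-chip RAM model, and a processor-model stand-in executes it by writing memory-mapped peripheral registers. The register address and the two-word command format are invented for illustration.

```python
# Sketch of firmware-as-stimulus: load "object code" into a RAM model,
# then have the processor model configure peripherals from it.
# UART_CTRL and the (address, value) command encoding are hypothetical.

UART_CTRL = 0x4000_0000   # invented memory-mapped UART control register

class SoCModel:
    def __init__(self):
        self.ram = {}      # on-chip RAM model
        self.periph = {}   # peripheral register space

    def load_firmware(self, base, words):
        # analogous to loading the compiled C binary into on-chip RAM
        for i, w in enumerate(words):
            self.ram[base + i] = w

    def run(self, base, n):
        # processor-model stand-in: fetch (addr, value) pairs from RAM
        # and write them into the peripheral register space
        for pc in range(base, base + n, 2):
            addr, value = self.ram[pc], self.ram[pc + 1]
            self.periph[addr] = value   # configure the RTL peripheral

soc = SoCModel()
soc.load_firmware(0x0, [UART_CTRL, 0x1])  # "firmware": enable the UART
soc.run(0x0, 2)
print(hex(soc.periph[UART_CTRL]))  # → 0x1
```

A real DSM executes actual ARM machine code, but the shape of the flow, binary in RAM, processor fetches, peripherals get programmed, is exactly this.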

Why PSS?

Figure 5: IP, Sub-System, and SoC Verification Methodologies

PSS Definition: The Portable Test and Stimulus Standard defines a specification for creating a single representation of stimulus and test scenarios, usable by a variety of users across different levels of integration under different configurations, enabling the generation of different implementations of a scenario that run on a variety of execution platforms, including, but not necessarily limited to, simulation, emulation, FPGA prototyping, and post-Silicon. With this standard, users can specify a set of behaviors once, from which multiple implementations may be derived.

Figure 6: PSS flow

As shown in Figure 6, using PSS we can define test scenarios once and execute them at any level [IP/sub-system/SoC] using any verification technology. For example, we can define an IP's test scenarios in PSS. For IP-level verification, an EDA tool can generate assertions from the PSS specification for formal verification, and, if needed, we can generate UVM testcases from the same PSS specification for simulation or emulation at the SoC level. We no longer need to manually rewrite the IP/sub-system level testcases to migrate and reuse them at the SoC level. The PSS specification remains the same across all technologies; based on our choice of formal/simulation/emulation, the EDA tool can generate testcases from the PSS specification in languages or methodologies like C/C++/Verilog/SystemVerilog/UVM.
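
The retargeting idea can be sketched as follows. The dictionary scenario format and the two tiny generators are invented illustrations; real PSS tools consume the standardized PSS language and produce far richer output.

```python
# Sketch of the PSS "write once, retarget everywhere" idea: one abstract
# scenario, two invented generators emitting platform-specific testcases.

scenario = {"name": "dma_copy",
            "actions": ["config_dma", "start_dma", "wait_done"]}

def to_uvm_sequence(s):
    # emit a SystemVerilog/UVM-flavored testcase for simulation/emulation
    body = "\n".join(f"    `uvm_do({a}_seq)" for a in s["actions"])
    return f"task {s['name']}_seq::body();\n{body}\nendtask"

def to_c_test(s):
    # emit a firmware-flavored testcase for SoC-level / post-silicon runs
    body = "\n".join(f"    {a}();" for a in s["actions"])
    return f"void test_{s['name']}(void) {{\n{body}\n}}"

print(to_uvm_sequence(scenario))
print(to_c_test(scenario))
```

The point is that the scenario (the ordered actions and their intent) is authored once; only the mechanical back-ends differ per execution platform.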

Methodologies like formal verification and PSS are evolving, and at the same time EDA vendors are automating test generation and verification sign-off using technologies like ML. So, in the near future, the industry will need brilliant, skilled verification engineers who can collaborate with chip architects and drive the verification process toward first-time silicon success through a 'correct by construction' approach, beyond the traditional verification roles of writing testcases and managing regressions in black-box verification. Are you interested in chip verification and ready for this big job?

Also Read:

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing

Verification IP vs Testbench

CEO Interview: Sivakumar P R of Maven Silicon


Podcast EP101: Unlocking the True Potential of Wireless with Peraso Technologies mmWave Silicon

by Daniel Nenni on 08-17-2022 at 10:00 am

Dan is joined by Ron Glibbery, who co-founded Peraso Technologies in 2009 and serves as its chief executive officer. Prior to co-founding Peraso Technologies, Ron held executive positions at Kleer Semiconductor, Intellon, Cogency Semiconductor, and LSI Logic.

Dan and Ron explore the impact of Peraso’s unique mmWave silicon technology in delivering high-bandwidth communications for many applications, including 5G.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


UVM Polymorphism is Your Friend

by Bernard Murphy on 08-17-2022 at 6:00 am


Rich Edelman of Siemens EDA recently released a paper on this topic. I’ve known Rich since our days together back at National Semi. And I’ve always been impressed by his ability to make a complex topic more understandable to us lesser mortals. He tackles a tough one in this paper – a complex concept (polymorphism) in a complex domain (UVM). As best I can tell, he pulls the trick off again, though this is a view from someone who is already wading out of his depth in UVM. Which got me thinking about a talk by Avidan Efody that I blogged on recently, “Why designers hate us.”

More capable technology, smaller audience

Avidan, a verification guy at Apple, is probably as expert as they come in UVM and all its capabilities. But he also can see some of the downsides of the standard, especially in narrowing the audience, limited value in what he calls “stupid checks” and some other areas. See HERE for more on his talk. His point being that as UVM has become more and more capable to meet the needs of its core audience (professional hardware verifiers), it has become less and less accessible to everyone else. RTL designers must either wait on testbenches for debug (weeks to months, not exactly shift left) or cook their own tests in SystemVerilog. They still need automation, so they start hooking Python to their SV, or better yet cocotb. Then they can do their unit level testing without any need for the verification team or UVM.

Maybe this divergence between designer testing and mainstream verification is just the way it has to be. I don’t see a convergence being possible unless UVM crafts a simpler entry point for designers, or some cocotb look-alike or link to cocotb. Without all the classes and factories and other complications.

But I digress.

Classes and Polymorphism

The production verification world needs and welcomes UVM with all its capabilities. This is Rich’s audience and here he wants to help those not already at an expert level to uplevel. Even for these relative experts, UVM is still a complex world, full of strange magic. Some of that magic is in reusability of an unfamiliar type, through polymorphism.

A significant aspect of UVM is its class-based structure. Classes let you define object types that encapsulate not only the parameters of an object (e.g., center, width, length for a geometric object) but also the methods that operate on it. For a user of the object, this abstracts away all the internal complexity; they just need methods to draw, print, move, etc. the object.

Reuse enters through allowing a class to be defined as an extension of an existing class: all the same parameters and methods, with a few new ones added, and/or a few of the existing parameters/methods overridden. And you can extend extended classes, and so on. This is polymorphism – variants on a common core. So far, so obvious; the standard examples, like the graphic in this article, don’t look very compelling.

Rich however uses polymorphism judiciously (his word) and selectively to define a few key capabilities, such as an interrupt sequence class. Reusing what is already defined in UVM to better meet a specific objective.
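
The mechanism can be sketched in Python (the article's context is SystemVerilog/UVM, but the inheritance-plus-override pattern is identical). The interrupt-injecting sequence below is a hypothetical illustration, not the class from Rich's paper.

```python
# Polymorphism sketch: an extended class inherits the base class's behavior,
# overrides one method, and callers treat every variant identically.
# "Sequence" is an invented stand-in for a base uvm_sequence.

class Sequence:
    def body(self):
        return ["reset", "main_traffic"]

class InterruptSequence(Sequence):   # extension: reuse + selective override
    def body(self):
        # override: inject an interrupt in the middle of the base behavior
        base = super().body()
        return base[:1] + ["interrupt"] + base[1:]

def run(seq: Sequence):
    # polymorphism: the caller handles any Sequence variant the same way
    return seq.body()

print(run(Sequence()))            # → ['reset', 'main_traffic']
print(run(InterruptSequence()))   # → ['reset', 'interrupt', 'main_traffic']
```

The payoff is that `run` (like a UVM sequencer) never changes as new variants are added; that is the "reusability of an unfamiliar type" the paper is about.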

As I said, I’m way out of my depth on this stuff, but I do trust that Rich knows what he is talking about. You can read the white paper HERE.


Delivering 3D IC Innovations Faster

by Kalar Rajendiran on 08-16-2022 at 6:00 am


3D IC technology development started many years ago well before the slowing down of Moore’s law benefits became a topic of discussion. The technology was originally leveraged for stacking functional blocks with high-bandwidth buses between them. Memory manufacturers and other IDMs were the ones to typically leverage this technology during its early days. As the technology itself does not limit the use to only such purposes, there has always been a broader appeal and potential for this technology.

Over the years, 3D IC technology has progressed from its novelty stage to becoming an established mainstream manufacturing technology, and the EDA industry has introduced many tools and technologies to help design products that take the 3D IC path. Over the recent past, complex SoC implementations have started leveraging 3D IC technology to balance performance/cost goals.

The slowing of Moore’s law has become a major driver to the chiplets way of implementing SoCs. Chiplets are small ICs specifically designed and optimized for operation within a package in conjunction with other chiplets and full-sized ICs. More companies are turning to 3D stacking of ICs and chiplets implemented in different process nodes optimal for the respective chiplet’s function. Designers can also combine 3D memory stacks, such as high bandwidth memory, on a silicon interposer within the same package. The 3D IC implementation will be a major beneficiary of the chiplets adoption wave.

When a new capability is ready for mainstream, its mass adoption success depends on how easily, quickly, effectively and efficiently a solution can be delivered. While the 3D IC manufacturing technology may have become mainstream, there are some foundational enablers for a successful heterogeneous 3D IC implementation. Siemens EDA recently published an eBook on this topic, authored by Keith Felton.

This post will highlight some salient points from the eBook. A follow up post will cover methodology and workflows recommendations for achieving optimal results when implementing 3D IC designs.

Foundational Enablers For Successful Heterogeneous 3D IC Implementation

Any good design methodology always includes look-aheads for downstream effects in order to consider and address them early in the design process. While this is important for monolithic designs, it becomes paramount when designing 3D ICs.

System Technology Co-Optimization (STCO) approach

This approach involves starting at the architectural level to partition the system into various chiplets and packaged die based on functional requirements and form factor constraints. After this step, RTL or functional models are generated. This is followed by physical floor planning and validation all the way to detailed layout supported with in-process performance modeling.

STCO elements already exist in a number of Siemens EDA tools, allowing engineers to evaluate design decisions in the context of predictive downstream effects of routability, power, thermal and manufacturability. Predictive modeling is a fundamental component of the STCO methodology that leverages Siemens EDA modeling tools during physical planning to gain early insight into downstream performance.

Transition from design-based to systems-based optimization

A 3D IC design requires consistent system representation throughout the design and integration process with visibility and interoperability of all cross-domain content. This calls for tools and methodology capable of a full system perspective from early planning through implementation to design signoff and manufacturing handoff.

Expanding the supply chain and tool ecosystem

3D IC design efforts demand a higher level of tool interoperability and openness than the industry is used to. Sharing and updating design content in a multi-vendor and/or multi-tool environment must be supported. This places a greater demand on assembly level verification throughout the design process to ensure the different pieces of the system work together as expected.

Balancing design resources across multiple domains

STCO facilitates exploration of the 3D IC solution space for striking the ideal balance of resources across all domains and deriving the optimal product configuration. An early perspective enables better engineering decisions on resource allocation, resulting in higher performing, more cost effective products.

Tighter integration of the various teams

A new design flow is required to support the design, validation, and integration of multiple ASICs, chiplets, memory, and interposers within a 3D IC design. The silicon, packaging and PCB teams are more likely to be global, requiring even tighter integration with the system, RTL and ASIC design processes.

For more details on Siemens EDA 3D IC innovations, you can download the eBook published by Siemens EDA.

While the Siemens heterogeneous 3D IC solution is packed with powerful capabilities, fully benefitting from these capabilities depends on the implementation methodology put to use. Designing 3D IC products that deliver differentiation, profitability and time to market advantages will be the subject of a follow-on blog.

Also Read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC


ARC Processor Summit 2022: Your embedded edge starts here!

by Synopsys on 08-15-2022 at 10:00 am


As embedded systems continue to become more complex and integrate greater functionality, SoC developers are faced with the challenge of developing more powerful, yet more energy-efficient devices. The processors used in these embedded applications must be efficient to deliver high levels of performance within limited power and silicon area budgets.

Why Attend?

Join us for the ARC® Processor Summit to hear our experts, users and ecosystem partners discuss the most recent trends and solutions that impact the development of SoCs for embedded applications. This event will provide you with in-depth information from industry leaders on the latest ARC processor IP and related hardware/software technologies that enable you to achieve differentiation in your chip or system design. Sessions will be followed by a networking reception where you can see live demos and chat with fellow attendees, our partners, and Synopsys experts.

Who Should Attend?

Whether you are a developer of chips, systems or software, the ARC Processor Summit will give you practical information to help you meet your unique performance, power and area requirements in the shortest amount of time.

Automotive

Comprehensive solutions that help drive security, safety & reliability into automotive systems

AI

Power-efficient hardware/software solutions to implement artificial intelligence technologies in next-gen SoCs

Enabling Technologies

Solutions to accelerate SoC and software development to meet target performance, power and area requirements

We look forward to seeing you in person at ARC Processor Summit!

Make the Safe Choice

  • Over 20 years of innovation delivering silicon-proven processor IP for embedded applications – billions of chips shipped annually
  • Industry’s second-leading processor by unit shipment
  • The safe choice, with significant investment in the development of safety and security processors

Industry’s Best Performance Efficiency for Embedded

  • Broad portfolio of proven 32-/64-bit CPU and DSP cores, subsystems and software development tools
  • Processor IP for a range of applications including ultra-low power AIoT, safety-critical automotive, and embedded vision with neural networks
  • Supported by a broad ecosystem of commercial and open-source tools, operating systems, and middleware

PPA Efficient, Configurable, Extensible

  • Optimized to deliver the best PPA efficiency in the industry for embedded SoCs
  • Highly configurable, allowing designers to optimize the performance, power, and area of each processor instance on their SoC
  • ARC Processor eXtension (APEX) technology to customize processor implementation

Rich ARC Ecosystem

  • Complete suite of development tools to efficiently build, debug, profile and optimize embedded software applications for ARC based designs
  • Broad 3rd party support provides access to ARC-optimized software and hardware solutions from leading commercial providers
  • Online access to a wide range of popular, proven free and open-source software and documentation

Register Today!

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random


Digital Twins Simplify System Analysis

by Dave Bursky on 08-15-2022 at 6:00 am


The ability to digitally replicate physical systems has been used to model hardware operations for many years, and more recently, digital twinning technology has been applied to electronic systems to better simulate and troubleshoot them. As explained by Bryan Ramirez, Director of Industries, Solutions & Ecosystems, Siemens EDA, one early example of twin technology was the Apollo 13 mission back in 1970. With a spacecraft 200,000 miles away, hands-on troubleshooting of a failing subsystem or system was not possible. Designers tackled the challenge by using a ground-based duplicate system (a physical twin) to replicate and then troubleshoot the problems that arose.

However, such physical twins were both expensive and very large, and often had to be disassembled to reach the subsystem that failed. By employing a digital twin of the system instead, designers can manipulate the software to do the analysis and develop a solution or workaround to the problem, saving time and money. A conceptual model of the digital twin was first proposed by Michael Grieves of the University of Michigan in 2002, explained Ramirez, and the first practical definition of the digital twin stemmed from work at NASA in 2010 to improve physical-model simulations for spacecraft.

Digital twins allow designers to virtually test products before they build the systems and complete final verification. They also allow engineers to explore their design space and even define their system of systems. For example, continued Ramirez, using digital twin technology to model autonomous driving can help shape the electronic control systems. With twin technology, designers can develop models that simulate the sensing, computation, and actuation of the autonomous driving system, as well as support “shift-left” software development. This gives designers the ability to go from chip to car to city validation without hardware, which reduces costs and design spins while allowing them to optimize system performance.

Additional benefits of digital twin technology include the ability to include predictive maintenance, remote diagnostics, and even real-time threat monitoring. In industrial applications, real-time monitoring, feedback for continuous improvement, and feed-forward predictive insights are key benefits of leveraging the digital twin approach (see the figure). Factory automation can also benefit by using the digital twin capability for simulating autonomous guided vehicles, interconnected systems of systems, as well as examining security, safety, and reliability aspects.

Extrapolating future scenarios, Ramirez suggests that the digital twin capability can simulate the impossible. One such example is an underwater greenhouse dubbed Nemo’s Garden. In the simulation, the software can accelerate innovation by removing the limitations of weather conditions, seasonality, growing seasons, and diver availability.

All these simulation capabilities are the result of improved compute capabilities, which, in turn, are the result of higher-performance integrated circuits. Additionally, as the IC content in systems continues to increase, it becomes easier to simulate/emulate the systems as digital twins. However, as chip complexity grows, so do costs, especially the cost of respins, and with them the need for digital twins to better simulate complex chips and avoid those costly respins. The challenges that digital twin technology faces include creating models for complex systems, developing multi-domain and mixed-fidelity simulations, standardizing data consistency and sharing, and optimizing performance. These are issues the industry is working hard to address.

For more information go to Siemens Digital Industries Software

Also read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC


Time for NHTSA to Get Serious

by Roger C. Lanctot on 08-14-2022 at 10:00 am


In the final season of “The Sopranos,” Christopher Moltisanti (played by Michael Imperioli) and Anthony Soprano (James Gandolfini) lose control of their black Cadillac Escalade and go tumbling off a two-lane rural highway and down a hill. Christopher dies (spoiler alert) with an assist from Tony, before Tony calls “911” for help.

Connected car junkies will immediately cry foul given that the episode – which first aired in 2007 – falls well within the deployment window of General Motors’ OnStar system. But more vigilant devotees will recall that in an earlier season Tony says he “had all of that tracking shit removed” from his car. (Tony favored GM vehicles.)

I was reminded of this as I rewatched the series and pondered the National Highway Traffic Safety Administration’s crash reporting General Order issued last year. The reporting requirement raises a critical issue regarding privacy obligations or the relevance of privacy in the event of a crash. The shortcomings of the initial tranche of data reported out by NHTSA last month suggest a revision of the reporting requirement is in order.

When the NHTSA issued its Standing General Order in June of 2021 requiring “identified manufacturers and operators to report to the agency certain crashes involving vehicles equipped with automated driving systems (ADS) or SAE Level 2 advanced driver assistance systems (i.e. systems that simultaneously control speed and steering),” the expectation was that the agency would soon be awash in an ocean of data. The agency was seeking deeper insights into the causes of crashes, the mitigating effects of ADS and some ADAS systems, and some hint as to the future direction of regulatory actions.

Instead, the agency received reports of 419 crashes of ADAS-equipped vehicles and 145 crashes involving vehicles equipped with automated driving systems. What has emerged from the exercise is a batch of heterogeneous data with obvious results (human-driven vehicles with front-end damage, robot-driven vehicles with rear-end damage) and gaping holes.

The volume and type of data were insufficient to draw any significant conclusions and the varying ability of the individual car companies to collect and report the data produced inconsistent information. In fact, allowable redactions further impeded the potential for achieving useful insights.

To this add NHTSA’s own caveats, described in great detail in NHTSA documents:

  • Access to Crash Data May Affect Crash Reporting
  • Incident Report Data May Be Incomplete or Unverified
  • Redacted Confidential Business Information and Personally Identifiable Information
  • The Same Crash May Have Multiple Reports
  • Summary Incident Report Data Are Not Normalized

The only car company that appears to be adequately prepared and equipped to report the sort of data that NHTSA is seeking, Tesla, stands out for having reported the most relevant crashes. In a report titled “Do Teslas Really Account for 70% of U.S. Crashes Involving ADAS? Of Course Not,” CleanTechnica.com notes that Tesla is more or less “punished” for its superior data reporting capability. Competing automakers are allowed to hide behind the limitations of their own ability to collect and report the required data.

It’s obvious from the report that there is a vast under-reporting of crashes. This is the most salient conclusion from the reporting, and it calls for a radical remedy.

The U.S. does not have a mandate for vehicle connectivity, but nearly every new car sold in the U.S. comes with a wireless cellular connection. The U.S. does, however, require that an event data recorder (EDR) be built into every car.

If NHTSA is serious about collecting crash data, the agency ought to mandate a connection between the EDR and the telematics system and require that in the event of a crash the data related to that crash be automatically transmitted to a government data collection point – and simultaneously reported to first responders connected to public service access points.

There are several crucial issues that will be remedied by this approach:

  • First responders will receive the fastest possible notification of potentially fatal crashes. Most automatic notifications are triggered by airbag deployments and too many of those notifications go to call centers that introduce delays and impede the transmission of relevant data.
  • A standard set of data will be transmitted to both the regulatory authority and first responders – removing inconsistencies and redactions. All such systems ought to be collecting and reporting the same set of data. European authorities recognized the importance of consistent data collection when they introduced the eCall mandate which took effect in April of 2018.
  • Manufacturers will finally lose plausible deniability – such as the ignorance that GM claimed during Congressional hearings in an attempt to avoid responsibility for fatal ignition switch failures.
  • Such a policy will recognize that streets and highways are public spaces where the drivers of cars that collide with inanimate objects, pedestrians, or other motorists have forfeited a right to privacy. The public interest is served by automated data reporting from crash scenes.

NHTSA administrators are political appointees with precious little time to influence policy in the interest of saving lives. It is time for NHTSA to act quickly to establish a timeline for automated crash reporting, cutting through the redactions, data inconsistencies, and excuses, and paving a realistic path toward reliable, real-time data reporting suitable for realigning regulatory policy. At the same time, the agency will greatly enhance the timeliness and efficacy of local crash responses – Anthony Soprano notwithstanding.

Also Read:

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform

Accellera Update: CDC, Safety and AMS


Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists
by Fred Chen on 08-14-2022 at 8:00 am

There is growing awareness that EUV lithography is actually an imaging technique that depends heavily on the distribution of secondary electrons in the resist layer [1-5]. The stochastic aspects should be traced not only to the discrete number of photons absorbed but also to the electrons that are subsequently released. The electron spread function, in particular, should be quantified as part of resist evaluation [5]. The scale of the electron spread, or blur, is likely not a single well-defined parameter but itself has some distribution.

It is necessary to quantitatively assess this distribution of spread. A basic, direct approach is to pattern two features close enough to one another to show resist loss dependent on the electron blur. For example, two 20-25 nm spots separated by a 40 nm center-to-center distance will be significantly affected by a 3 nm electron blur scale length, but much less so by a 2 nm scale length (see the figure below).

The resist loss between two closely spaced exposed features following development depends on the scale length of the electron spread, a.k.a. blur [5].
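To make the geometry concrete, here is a toy 1-D sketch (my own illustration, not from the article) that convolves two top-hat exposure spots with an assumed exponential blur kernel and reports the dose landing at the midpoint between them relative to the peak dose. The 22 nm spot width, 40 nm pitch, and the exponential kernel shape are all illustrative assumptions.

```python
import numpy as np

def midpoint_dose(spot_width_nm, pitch_nm, blur_nm, n=4001, span=120.0):
    """1-D model: two top-hat exposure spots convolved with an exponential
    electron-blur kernel of the given scale length; returns the dose at the
    midpoint between the spots relative to the peak dose at a spot center."""
    x = np.linspace(-span, span, n)
    half = spot_width_nm / 2.0
    c = pitch_nm / 2.0
    # Absorbed dose before electron spread: two top-hat spots at +/- pitch/2
    dose = ((np.abs(x - c) <= half) | (np.abs(x + c) <= half)).astype(float)
    # Exponential kernel standing in for the electron spread function
    kernel = np.exp(-np.abs(x) / blur_nm)
    kernel /= kernel.sum()  # normalize so total dose is conserved
    blurred = np.convolve(dose, kernel, mode="same")
    mid = blurred[np.argmin(np.abs(x))]        # dose at x = 0
    peak = blurred[np.argmin(np.abs(x - c))]   # dose at a spot center
    return mid / peak

for blur in (2.0, 3.0):
    print(f"blur = {blur} nm -> midpoint/peak dose = {midpoint_dose(22.0, 40.0, blur):.4f}")
```

With these assumed numbers, the 3 nm scale length deposits several times the midpoint dose of the 2 nm scale length, consistent with the qualitative picture above.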

An EUV or electron-beam resist could be evaluated at a given thickness by exposing arrays of these spot pairs in sufficient numbers to sample the distribution of electron blur scales down to the far-out tails, i.e., the ppb level or lower. This data would be necessary for better prediction of yield loss due to CD variation, edge placement error (EPE), and even the occurrence of stochastic defects.
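As a rough sizing exercise (a back-of-the-envelope assumption on my part, not from the article), the exact binomial zero-event bound gives the number of independent spot-pair measurements needed before a tail probability at a given level can even be bounded:

```python
import math

def samples_for_tail(p, confidence=0.95):
    """Number of independent spot-pair measurements needed so that, if zero
    tail events are observed, the tail probability is below p at the given
    confidence.  Solves (1 - p)**n <= 1 - confidence for n (exact binomial);
    log1p keeps the computation accurate for very small p."""
    return math.ceil(math.log(1.0 - confidence) / math.log1p(-p))

# A ppb-level bound requires on the order of billions of spot pairs
print(samples_for_tail(1e-9))
```

This is the familiar "rule of three": bounding a tail at level p with 95% confidence takes roughly 3/p observations, which is why large exposed arrays would be needed to probe the far-out blur tails.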

Interestingly enough, scanning probe electron lithography, such as with an STM, may have the advantage that the probe-to-sample bias limits the lateral spread of electrons, thereby reducing blur [6]. Again, the spot pair pattern can be used to confirm whether this is the case.

References

[1] J. Torok et al., “Secondary Electrons in EUV Lithography,” J. Photopolymer Sci. and Tech. 26, 625 (2013).

[2] R. Fallica et al., “Experimental estimation of lithographically-relevant secondary electron blur,” 2017 EUVL Workshop.

[3] M. I. Jacobs et al., “Low energy electron attenuation lengths in core-shell nanoparticles,” Phys. Chem. Chem. Phys. 19 (2017).

[4] F. Chen, “The Electron Spread Function in EUV Lithography,” https://www.linkedin.com/pulse/electron-spread-function-euv-lithography-frederick-chen

[5] F. Chen, “The Importance of Secondary Electron Spread Measurement in EUV Resists,” https://www.youtube.com/watch?v=deB0pxEwwvc

[6] U. Mickan and A. J. J. van Dijsseldonk, US Patent 7463336, assigned to ASML.

This article originally appeared in LinkedIn Pulse: Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

Also Read:

EUV’s Pupil Fill and Resist Limitations at 3nm

ASML- US Seeks to Halt DUV China Sales

ASML EUV Update at SPIE


What is Your Ground Truth?
by Roger C. Lanctot on 08-14-2022 at 6:00 am

When my son bought a 2020 Chevrolet Bolt EV a couple of years ago, I was excited. I wanted to see what the Xevo-supplied Marketplace (contextually driven ads and offers) looked like and I was also curious as to what clever navigation integration GM was offering.

I was swiftly disappointed to discover that Xevo Marketplace was not an embedded offering but rather an app to be projected from a connected smartphone. As for navigation, it wasn’t available for my son’s MY2020 Bolt even as an option.

I was stunned for two reasons. First, I thought the Marketplace app was intended as a brand-defining service which would surely be embedded in the vehicle and integrated as part of the home screen as an everyday driving application. Second, how could GM ship an EV that was unable to route the driver to the nearest compatible and available charging station via the on-board navigation system?

These developments made me wonder whether I was witnessing the appification of in-vehicle infotainment. Why shouldn’t cars simply ship with dumb terminals that draw all of their intelligence and content from the driver’s/user’s mobile device?

Further fueling this impression was GM’s subsequent introduction of Maps+, a Mapbox-sourced navigation app based on OpenStreetMap with a $14.99/month subscription. That was followed by the launch of Google Built-in (Google Maps, Google Play apps, Google Assistant) as a download with a Premium OnStar plan ($49.99/month) or Unlimited Data plan ($25/month) – seems confusing, right?

The truly confusing part, though, isn’t the variety of plans and pricing combos, it is the variable view of reality or ground truth. Ground truth is the elusive understanding of traffic conditions on the road ahead and how that might impact travel and arrival times. Ground truth can also apply to the availability of parking and charging resources. (Parkopedia is king of parking ground truth, according to a recent Strategy Analytics study: https://business.parkopedia.com/strategy-analytics-us-ground-truth-testing-2021?hsLang=en )

The owner of a “lower end” GM vehicle – with no embedded navigation option – will have access to at least three different views of ground truth: OnStar turn-by-turn navigation, Maps+ navigation, and Google- and/or Apple-based navigation. (Of course Waze, Here We Go, and TomTom apps might also be available from a connected smartphone.)

Each of these navigation solutions will have a different view of reality and different routing algorithms driven by different traffic and weather data sources and assumptions. Am I, as the vehicle owner and driver, supposed to “figure out” which is the best source? When I bought the car, wasn’t I paying GM for its expertise in vetting these different systems?

What about access to the data and the use of vehicle data? Is my driving info being hoovered up by some third party? And what is ground truth when I am being given varying lenses through which to grasp it?

Solutions are now available in the market – from companies such as TrafficLand in the U.S. – that are capable of integrating still images and live video from traffic cameras along my route, allowing me to better understand the routing decisions my car or my apps are making for me. The optimum means for accessing this information would be through a built-in navigation system.

GM continues to offer built-in or “embedded” navigation across its product line with a handful of exceptions – such as entry-level models of the Bolt. Embedded navigation – usually part of the integrated “infotainment” system – is big business, representing billions of dollars in revenue from options packages for auto makers.

More importantly, the modern day infotainment system – lately rendered on a 10-inch or larger in-dash screen – is a critical point of customer engagement. The infotainment system is the focal point of in-vehicle communications, entertainment, and navigation – as well as vehicle status reports.

Vehicle owners around the world tell my employer – Strategy Analytics – in surveys and focus groups that the apps that are most important to them while driving relate to traffic, weather, and parking. Traffic is the most important, particularly predictive traffic information, because this data is what determines navigation routing decisions.

Navigation apps do not readily disclose their traffic sources, but it is reasonable to assume that a navigation app with HERE or TomTom map data is using HERE- or TomTom-sourced traffic information. Google Maps has its own algorithms, as do Apple and Mapbox – but, of course, there is some mixing and matching between the providers of navigation, maps, and traffic data.

This is all the more reason why access to TrafficLand’s real-time traffic camera feeds is so important. Sometimes seeing is believing and TrafficLand’s traffic cameras are, by definition, monitoring the majority of known traffic hot spots across the country.

When the navigation system in my car wants to re-route me – requesting my approval – I’d like to see the evidence to justify a change in plans. Access to traffic camera info can provide that evidence.

I can understand why GM – and some other auto makers such as Toyota – have opted to drop embedded navigation availability from some cars as budget-minded consumers seek to pinch some pennies. But the embedded map represents the core of a contextually aware in-vehicle system.

There is extraordinary customer retention value in building navigation into every car – particularly an EV. The fundamental principles of creating safe, connected cars call for the integration of a location-aware platform including navigation.

Deleting navigation may be a practical consideration as attach rates decline, but it’s bad for business. In fact, there is a bit of a head-snapping irony in GM or Toyota or any auto maker deleting embedded navigation in favor of a subscription-based navigation experience from Mapbox or Google. These car makers are telling themselves that the customers least able to pay for built-in navigation will be willing to pay a monthly subscription for an app. I think not.

This is very short-term thinking. Location awareness is a brand-defining experience and auto makers targeting “connected services” opportunities will want to have an on-board, built-in navigation system. If not, the auto maker that deletes built-in navigation will be handing the customer relationship and the related aftermarket profits to third parties such as Apple, Amazon, and Google. That’s the real ground truth.

Also Read:

What’s Wrong with Robotaxis?

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform


Podcast EP100: A Look Back and a Look Ahead with Dan and Mike
by Daniel Nenni on 08-12-2022 at 10:00 am

Dan and Mike get together to reflect on the past and the future in this 100th Semiconductor Insiders podcast episode. The chip shortage, foundry landscape, Moore’s law, CHIPS Act and industry revenue trends are some of the topics discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.