
An Accellera Update. COVID Accelerates Progress

by Bernard Murphy on 12-17-2020 at 6:00 am


Normally I would post this Accellera update during DVCon US but, no surprise, this year is weird, particularly with conferences going virtual. The last DVCon was in early March of this year, right on the cusp of the shutdown. I was there in person, as was Lu Dai (Chairman of Accellera). Both Synopsys and Cadence had dropped out, citing safety, though presentations continued (I'm not sure about the exhibits). Lu reminded me that, as thanks for our fortitude, DVCon was one of the best places to find hand sanitizer, out of stock everywhere else!

We talked about how the pandemic had affected standards development. Lu saw a net positive in members having to work from home and conference virtually. For an international organization like Accellera, it has been easier to get everyone together, even if meeting schedules weren’t always convenient. Attendance has been higher, and meetings more frequent. He said that when working groups (WGs) put together their plans, the board worried they were too aggressive. But it turned out they’ve been pretty close – more is getting done faster after all.

PSS 2.0 and UVM-AMS

PSS 2.0, now in public review, has certainly accelerated its schedule. What I find telling here is Lu’s view of the new release. He’s a user after all (at Qualcomm) as well as the chair of Accellera. Qualcomm is a major adopter of the standard. They saw 1.0 as a good start but not production-ready because they needed to do quite a lot of patching when building on proprietary implementations. In this new release they see a production-ready, vendor-neutral solution they’re ready to adopt in full. As I said, a telling viewpoint.

UVM-AMS is a very new effort, launched only late last year. The WG have already developed what they call a design objective document (DOD), all the capabilities they want to be covered in the standard. Next, they’re going to be voting on which of those capabilities should make it into the first release. According to Lu, this is a pretty fast pace, much faster than normal. Again, a silver lining from the pandemic.

IP security and functional safety

The IP Security assurance working group is also progressing. Lu clarified (for me at least) that this will be an annotation standard which should get to release potentially faster than some other working groups. They’re working quite closely with Mitre. Mitre is already well established as a centralized resource for Common Vulnerabilities and Exposures (CVE), originally in software, now also in hardware. IPSA is tying into Mitre's hardware security threat database. The objective then is to define how that threat information carries over into markup for use by EDA tools. Details here are still evolving.

On functional safety, the Accellera working group is working closely with the IEEE functional safety working group and they have agreed on a division of tasks. Accellera focuses more on the hardware side; IEEE works more on the software and higher layers. Yet another area moving at a fast pace, with internal email updates almost every day and regular joint meetings with IEEE. On coordination with broader standards activities (notably ISO 26262) Lu doesn’t see a problem. Given linkages between Accellera and IEEE, and already well established member linkages with ISO 26262, there’s a lot of interaction between standards activities. By design, a lot of the Accellera and IEEE work here is complementary to the ISO 26262 focus, and there are enough channels to cross-check.

DVCon logistics

On DVCon, the other main focus for Accellera, I already mentioned the US 2020 conference. DVCon China was cancelled since it was scheduled right in the middle of that country’s own battle with the pandemic. DVCon Europe had more time to prepare and was able to pull off a very impressive virtual conference. In fact, I attended and wrote up one of the talks, though apparently I didn’t take advantage of the full virtual experience at the show. Lu was so impressed he plans to use the same platform for any other virtual events. Honestly, from my perspective, I’d love to see all conferences go virtual even after the pandemic is over. It may be tough on the travel and convention center industries but way easier on the rest of us!

For more detail on the latest news from Accellera, check HERE.

Also Read:

DVCon 2020 Virtual Follow-Up Conference!

Accellera Tackles Functional Safety, Mixed-Signal

Functional Safety Comes to EDA and IP


Advanced Process Development is Much More than just Litho

by Tom Dillinger on 12-16-2020 at 10:00 am


The vast majority of the attention given to the introduction of each new advanced process node focuses on lithographic updates.  The common metrics quoted are the transistors per mm² or the (high-density) SRAM bit cell area.  Alternatively, detailed decomposition analysis may be applied using transmission electron microscopy (TEM) on a lamella sample, to measure fin pitch, gate pitch, and (first-level) metal pitch.

With the recent transition of the critical dimension layers from 193i to extreme ultraviolet (EUV) exposure, the focus on litho is understandable.  Yet, process development and qualification encompasses many more facets of materials engineering to achieve robust manufacturability, so that the full complement of product goals can be achieved.  Specifically, process development engineers are faced with increasingly stringent reliability targets, while concurrently achieving performance and power dissipation improvements.

At the recent IEDM conference, TSMC gave a technical presentation highlighting the development focus that enabled the N5 process node to achieve (risk production) qualification.  This article summarizes the highlights of that presentation. [1]

An earlier SemiWiki article introduced the litho and power/performance features of N5. [2]  One of the significant materials differences in N5 is the introduction of a “high mobility” device channel, or HMC.  As described in [2], the improved carrier mobility in N5 is achieved by the introduction of additional strain on the device channel region.  (Although TSMC did not provide technical details, the pFET hole mobility is also likely improved by the introduction of a moderate percentage of Germanium into the Silicon channel region, or Si(1-x)Ge(x).)

Additionally, the N5 process node incorporates an optimized high-K metal-gate (HKMG) dielectric stack between gate and channel, resulting in a stronger electric field.

A very significant facet of this “bandgap engineering” for carrier mobility and the gate oxide stack materials selection is to ensure that reliability targets are satisfied.  Several of the N5 reliability qualification results are illustrated below.

TSMC highlighted the following reliability measures from the N5 qualification test vehicle:

  • bias temperature instability (BTI)
      • both NBTI for pFETs and PBTI for nFETs, manifesting as a performance degradation over time from a device Vt shift (increasing in absolute value) due to trapped oxide charge
      • may also result in a degradation of VDDmin for SRAM operation
  • hot carrier injection (HCI)
      • an asymmetric injection of charge into the gate oxide near the drain end of the device (operating in saturation), resulting in degraded carrier mobility
  • time-dependent gate oxide dielectric breakdown (TDDB)
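As a rough illustration of how BTI degradation is commonly modeled, here is a generic empirical power-law sketch. All coefficient values are made-up placeholders for illustration, not TSMC data:

```python
import math

def bti_vt_shift_mv(t_hours, temp_k, vgs_v,
                    a=2000.0, ea_ev=0.1, gamma=3.0, n=0.2):
    """Generic empirical BTI model: delta_Vt = A * exp(-Ea/kT) * V^gamma * t^n.

    Coefficients a, ea_ev, gamma, n are illustrative placeholders, not
    foundry data.  Returns the magnitude of the Vt shift in mV.
    """
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    return a * math.exp(-ea_ev / (k_b * temp_k)) * vgs_v ** gamma * t_hours ** n
```

The qualitative behavior is what matters: the shift grows sub-linearly with stress time (n < 1) and accelerates with both temperature and gate voltage, which is why qualification stresses at elevated conditions and extrapolates to use conditions.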

Note that the N5 node is targeted to satisfy both high-performance and mobile (low-power) product requirements.  As a result, both performance degradation and maintaining an aggressive SRAM VDDmin are important long-term reliability criteria.

TDDB

The figure above illustrates that the TDDB lifetime is maintained relative to node N7, even with the increased gate electric field.

Self-heating

The introduction of FinFET device geometries substantially altered the thermal resistance paths from the channel power dissipation to the ambient.  New “self-heating” analysis flows were employed to more accurately calculate local junction temperatures, often displayed as a “heat map”.  As might be expected with the aggressive dimensional scaling from N7 to N5, the self-heat temperature rise is greater in N5, as illustrated below.

Designers of HPC products need to collaborate with both their EDA partners for die thermal analysis tools and their product engineering team for accurate (on-die and system) thermal resistance modeling.  For the on-die model, both active and inactive structures strongly influence the thermal dispersion.
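At its simplest, that thermal resistance modeling reduces to a lumped series network. The sketch below uses hypothetical resistance values purely for illustration:

```python
def junction_temp_c(ambient_c, power_w, rth_die_c_per_w, rth_sys_c_per_w):
    """Lumped self-heating estimate: junction temperature rises by the
    dissipated power times the series die-level and system-level
    thermal resistances.  Values are illustrative, not extracted data."""
    return ambient_c + power_w * (rth_die_c_per_w + rth_sys_c_per_w)

# Hypothetical numbers: 2 W through 10 + 5 C/W of thermal resistance
# above a 25 C ambient gives a 55 C junction estimate.
tj = junction_temp_c(25.0, 2.0, 10.0, 5.0)
```

Real self-heating flows solve a distributed version of this on a per-device heat map, but the series-resistance intuition is the same.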

HCI

Hot carrier injection performance degradation for N7 and N5 is shown below, for nFETs and pFETs.

Note that HCI is strongly temperature-dependent, necessitating accurate self-heat analysis.

BTI

The pMOS NBTI reliability analysis results are illustrated below, with the related ring oscillator performance impact.

In both cases, reliability analysis demonstrates improved BTI characteristics of N5 relative to N7.

SRAM VDDmin

The SRAM minimum operating voltage (VDDmin) is a key parameter for low-power designs, especially with the increasing demand for local memory storage.  Two factors that contribute to the minimum SRAM operating voltage (with sufficient read and write margins) are:

  • the BTI device shift, as shown above
  • the statistical process variation in the device Vt, as shown below (normalized to Vt_mean in N7 and N5)

Based on these two individual results, the SRAM reliability data after HTOL stress shows a reduced VDDmin impact for N5 versus N7.
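A first-order sketch of how those two factors stack up into a VDDmin estimate (this is an illustrative back-of-envelope model, not TSMC's methodology; all numbers are hypothetical):

```python
def sram_vddmin_mv(vt_mean_mv, vt_sigma_mv, n_sigma, bti_shift_mv, margin_mv):
    """First-order VDDmin estimate: worst-case bit-cell Vt (mean plus an
    n-sigma statistical variation tail) plus the end-of-life BTI shift
    plus a fixed design margin for read/write stability."""
    return vt_mean_mv + n_sigma * vt_sigma_mv + bti_shift_mv + margin_mv

# Hypothetical: 350 mV mean Vt, 20 mV sigma at 5 sigma, 30 mV BTI
# shift, 100 mV margin -> 580 mV VDDmin estimate.
vddmin = sram_vddmin_mv(350.0, 20.0, 5.0, 30.0, 100.0)
```

The point of the model is simply that tightening the Vt distribution or reducing the BTI shift (as the N5 data shows) directly lowers the achievable VDDmin.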

Interconnect

TSMC also briefly described the N5 process engineering emphasis on (Mx, low-level metal) interconnect reliability optimization.  With an improved damascene trench liner and a “Cu reflow” step, the scaling of the Mx pitch – by ~30% in N5 using EUV – did not adversely impact electromigration fails, nor line-to-line dielectric breakdown.  The figure below illustrates the line-to-line (and via) cumulative breakdown reliability fail data for N5 compared to N7 – N5 tolerates the higher electric field with the scaled Mx pitch.

Summary

The majority of the coverage associated with the introduction of TSMC’s N5 process node related to the broad adoption of EUV lithography to replace multipatterning for the most critical layers, enabling aggressive area scaling.  Yet, process engineers must also optimize materials selection and many individual fabrication steps, to achieve reliability targets.  TSMC recently presented how these reliability measures for N5 are superior to prior nodes.

-chipguy

References

[1]  Liu, J.C., et al, “A Reliability Enhanced 5nm CMOS Technology Featuring 5th Generation FinFET with Fully-Developed EUV and High Mobility Channel for Mobile SoC and High Performance Computing Application”, IEDM 2020.

[2]  https://semiwiki.com/semiconductor-manufacturers/tsmc/282339-tsmc-unveils-details-of-5nm-cmos-production-technology-platform-featuring-euv-and-high-mobility-channel-finfets-at-iedm2019/

 



Close the Year with Cliosoft – eBooks, Videos and a Fun Holiday Contest

by Mike Gianfagna on 12-16-2020 at 6:00 am


‘Tis the season, a time when a lot of companies summarize the year, send out holiday greetings and generally wind down until after the New Year. That’s not the case at Cliosoft.  Their marketing machine has been in full gear with lots of new, useful and compelling content. I’ll provide a round-up of what’s happening. You can close the year with Cliosoft – eBooks, videos and a fun holiday contest.

eBooks


eBooks aren’t something you see every day from an EDA vendor.  Cliosoft has published two, including one available in Chinese as well as English, with more on the way. The first one is Startup Best Practices. The eBook is written by Srinath Anantharaman, the CEO and founder of Cliosoft. This is a short eBook that hits some very important fundamental points. Here is the table of contents:

  • INTRODUCTION
  • WHAT ARE ‘BEST PRACTICES’?
  • WHY ADOPT BEST PRACTICES FROM THE START?
  • ARE DESIGN MANAGEMENT AND OTHER COLLABORATION TOOLS NEEDED?
  • KEEP IT SIMPLE
  • IT CONSIDERATIONS
  • CONCLUSION

This is a great read if you’re starting to build a design infrastructure or if you’re considering an upgrade to your existing flow. If you are in one of these situations, there is a sentence in the introduction that I think is worth repeating here.

“This eBook makes the case that adopting best practices and methodology early will lay the foundation to create a design team that is built to last.”

Design Methodology Guide

The next eBook is one chapter from a book called Design Methodology Guide, Advanced Methodology for AMS IP and SOC Design, Verification and Implementation. The chapter is entitled Data Management for Mixed-Signal Designs, and it’s authored by Michael Henrie and Srinath Anantharaman. Michael Henrie is the director of software engineering at Cliosoft. The chapter goes into a lot of detail. It begins with a discussion of the current mixed-signal design environment and traditional team design techniques and their pitfalls.  There is then a discussion of design management system requirements and how to manage projects with such a system in place. The impact a design management system has on global collaboration, analog design workflows, ECOs and release tracking is discussed.

Techniques to administer rules, roles, access and permission as well as how to reuse IP and PDKs across projects are also touched on. A lot of detail and examples are offered. I’m sure there’s something in this eBook for everyone.

There are two more eBooks in the works now. The next titles treat the ever-popular topic of moving to the cloud:

  • Best Practices for Deploying Design Management on Amazon AWS
  • Using Cliosoft SOS Design Management Platform in the Cloud

You can get your copy of Cliosoft’s eBooks here.

Videos

There are quite a few videos available on the Cliosoft website as well. The titles there include:

  • Designing on AWS
  • The New Trend in IP Traceability That IP Developers and Design Managers Rely On
  • Network Storage Optimization for IP
  • Challenges in IP reuse
  • Visualizing Differences in Analog Design
  • What’s in Your IP

The videos are a combination of webinar replays and “chalk talks”. The content covers a lot of very relevant topics.

I’ll focus on the content of one video, The New Trend in IP Traceability, presented by Karim Khalfan, director of applications engineering at Cliosoft. I originally covered this webinar on SemiWiki here. Karim begins by discussing why IP traceability is important. Some key benefits include:

  • Increased visibility
  • Improved quality
  • Reduced risk

The standards that demand traceability and how IP traceability addresses these requirements are also discussed (e.g., ISO 26262 and MIL-STD-882). Karim then sets up a series of live demonstration scenarios that illustrate the challenges several stakeholders face and how IP traceability helps. The webinar concludes with a Q&A session with questions from the original live audience. Karim manages to get through all this content in under 30 minutes. This one is definitely worth your time. You can access all of Cliosoft’s videos here.

Holiday Contest

This one is a lot of fun. If you’re not quick, it can become torture, so give it a try!  You have to spot a series of words in a “sea of letters” while working against the clock. It’s definitely worth the effort because you are entered for a chance to win a $250 Amazon gift card if you play. There will be drawings every Friday until December 25, so have a big cup of coffee and check it out. You can enter the game from a link at the top of the Cliosoft home page.  So that’s how you can close the year with Cliosoft – eBooks, videos and a fun holiday contest.

Happy Holidays to all!

Also Read

The History and Physics of Cliosoft’s Academic Program!

A tour of Cliosoft’s participation at DAC 2020 with Simon Rance

How to Grow with Poise and Grace, a Tale of Scalability from ClioSoft


Achronix Talks about FPGAs for Video Processing

by Tom Simon on 12-15-2020 at 10:00 am


The internet keeps adding users and connected devices. According to the numbers in a white paper from Achronix, by 2022 there will be 4.8 billion internet users and 28.5 billion connected devices. Internet traffic will reach 275 exabytes per month. Of this, a staggering 83 percent will be video traffic. Moving data from creators to consumers, editing video, and processing video for machine learning applications all require large amounts of video processing. The Achronix white paper, titled “FPGAs for Advanced Video Processing Solutions”, examines each of these tasks and the type of data processing required.

Need for Video Processing

We have all seen the message “Processing your video” when uploading videos to YouTube or Facebook. The conversion of video from one format to another for use on other platforms and devices is a critical step for sharing content. Originally this work was done on CPUs or sometimes using GPUs. While ASICs might also be an attractive processing solution, they are limited when it comes to the proprietary compression methods that are often used in transcoding.

In the case of video editing and content creation, desktop computers with GPU acceleration have been a mainstay. However, with the dawn of 4K and 8K video, these platforms are underpowered for the task. This work has been moving to the cloud; however, using traditional processors in the cloud has its limits.

Lastly, AI applications need images, not video, to perform inference. This entails converting H.264 or H.265 video streams into JPEG or PNG images that can be used by the AI processors. The conversion to an image file may also include converting image resolution or other processing to help the AI application.

Achronix makes the case that FPGAs, especially their Speedster7t, are well suited to all of these tasks. Both GPUs and FPGAs offer parallel processing, but FPGAs often come up as the preferred choice because of their power advantage over GPUs.

The Achronix white paper looks at each type of activity to analyze the effectiveness of their Speedster7t FPGA. When streaming and transcoding H.264 video many of the tasks are easily handled by CPUs. Yet, one task in the process, motion estimation, has been profiled to use around 21% of the entire processing load on CPUs. This is a task that can be moved to an FPGA for a big improvement in throughput.
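The payoff of offloading that ~21% can be estimated with Amdahl's law. The 21% figure is from the profile cited above; the acceleration factor in the usage example is hypothetical:

```python
def amdahl_speedup(offload_fraction, accel_factor):
    """Overall speedup when offload_fraction of the workload runs
    accel_factor times faster (e.g., moved to an FPGA) and the rest of
    the pipeline is unchanged."""
    return 1.0 / ((1.0 - offload_fraction) + offload_fraction / accel_factor)

# If motion estimation (~21% of CPU load) is accelerated 100x, the
# overall pipeline speeds up by roughly 1.26x.
speedup = amdahl_speedup(0.21, 100.0)
```

Note that even with effectively unlimited acceleration of this one task, the pipeline speedup is capped near 1/0.79 ≈ 1.27x, which is why FPGA vendors also target the surrounding encode/decode stages.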

Whether you are talking about working with RAW video data or compressing video using intra-frame structure, video editing and content creation have become unwieldy at resolutions such as 4K and 8K. Previously, with HD and 2K video, using CPUs was a feasible approach. The white paper includes benchmark data that supports the notion that CPUs must be supplanted at today’s higher resolutions.

For AI, there is a lot to be gained by combining the video decoder and the image encoder in the same processing unit. Frequently there is also a need for additional image processing as a prerequisite to the inference step. This too can easily be accommodated in an FPGA.

Achronix then moves to a discussion of the specific advantages found in their Speedster7t family. Their 2D Network on Chip (NoC) facilitates high speed transfers between the external interfaces in the Speedster7t FPGA and blocks in the FPGA fabric. It also provides rapid transfers among functional blocks on-chip. Because it is separate from the FPGA fabric, no FPGA resources are consumed when setting up pathways for data exchange. Likewise, because it uses a high-level protocol, FPGA designers do not need to put together routing and buffering logic. To transfer data, a user or consumer only needs to connect to a Network Access Point.

Speedster7t FPGAs come with a well thought out set of interfaces. The Speedster7t AC7t1500, for instance, offers fracturable Ethernet controllers (supporting rates up to 400G), PCIe Gen5 ports and up to 32 SerDes channels with speeds up to 112 Gbps. It also has multi-channel GDDR6 memory interfaces. With the NoC running at a much higher speed than the clocks usually associated with FPGA fabrics, it can transport in aggregate over 20 Tbps. The combination of the NoC and the high-speed interfaces means it is in a class by itself when it comes to meeting the needs of video processing.

The paper finishes with a discussion of the Machine Learning Processors (MLP) that Achronix has developed for use in the Speedster7t family. It is interesting reading about how the MLPs are optimized with local block RAM and math units that handle MAC operations needed for AI. Achronix has consistently been adding features for a wide range of complex applications to their FPGAs. Their white papers, such as this one, frequently make compelling cases for the use of their technology in system design. The full white paper on video processing is available on their website.

 

 


More on Bug Localization. Innovation in Verification

by Bernard Murphy on 12-15-2020 at 6:00 am


Mining assertions from constrained random simulations to localize bugs. Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. Feel free to comment.

The Innovation

This month’s pick is Symptomatic bug localization for functional debug of hardware designs. This paper was presented at the 2016 ICES. The authors are from the University of Illinois at Urbana-Champaign.

There’s wide agreement that tracing bugs to their root cause consumes more effort than any other phase of verification. Methods to reduce this effort are always worth a close look. The authors start with constrained random (CR) simulations. They mine failing simulations for likely invariant assertions, which they call symptoms. These they infer from multiple CR simulations based on common observed behaviors. They use an open-source miner developed in the same group, GoldMine, for this purpose.

Then for each failure they look for commonalities between assertions, looking for common execution paths among symptoms.  They claim that common symptoms signal a highly suspicious path which likely localizes the failure.

One way to determine what code is covered by an assertion is through static cone-of-influence analysis. These authors instead use simulations to determine dynamically which statements are relevant. These they assume are statements executed in the time slice between the left-hand side of the assertion becoming true and the right-hand side becoming true. They acknowledge dynamic analysis is incomplete for this purpose, though in their experiments they say simulation covered all statements in scope.
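A toy sketch of the intersect-and-map idea. The data structures (tuples of signal names, a map of symptoms to executed RTL lines) are simplified stand-ins for illustration, not GoldMine's actual representation, and the signal and file names are hypothetical:

```python
from functools import reduce

def common_symptoms(per_trace_assertions):
    """Intersect the 'A implies B' assertions mined from each failing
    trace, keeping only symptoms common to every failure."""
    return reduce(set.intersection, (set(s) for s in per_trace_assertions))

def suspicious_lines(symptoms, executed_between):
    """Union the RTL lines executed between A firing and B firing for
    each common symptom, as observed by re-running failing simulations."""
    lines = set()
    for sym in symptoms:
        lines |= executed_between.get(sym, set())
    return lines

# Hypothetical example: two failing traces share one mined assertion,
# which re-simulation maps onto two lines of a (made-up) arbiter module.
t1 = {("req", "grant"), ("full", "stall")}
t2 = {("req", "grant"), ("empty", "pop")}
shared = common_symptoms([t1, t2])
loc = suspicious_lines(shared, {("req", "grant"): {"arbiter.v:42", "arbiter.v:57"}})
```

The resulting line set is the "suspicious path" the paper reports as a fraction of the RTL code base.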

The authors ran experiments on a USB 2.0 core and were able to localize bugs to within 5% of the RTL source code lines in one case and within 15% on average.

Paul’s view

I first would like to acknowledge that this paper is one of a series of high-quality papers from Dr. Vasudevan’s team at the University of Illinois, building on their GoldMine assertion mining tool. Lots of inspiring research here which I very much enjoyed learning about!

The paper nicely brings together three techniques. In the first phase, they look for common patterns in failure traces using GoldMine. The patterns found are in the form of “A implies B” assertions. Each such assertion identifies some time slice in the trace where, whenever a sequence of events “A” happens, some other sequence of events “B” happens later.

Second, they intersect these assertions mined across all the failure traces to find the important signatures that apply to all failure traces.

Finally, they map the assertions back to lines in the source RTL by re-running the failing simulations and tracking which RTL lines are executed in the time between “A” and “B” happening in the trace.

Overall, this is a very elegant and scalable method to localize faults. On their USB controller example, they localized half of the 22 bugs inserted to within 15% of the source code base, which is impressive.

I struggled a bit with the importance/sensitivity analysis in Figure 4. I expected to see some visual correlation between code zone importance and the actual code zone where the bug was injected but this didn’t seem to be the case.

The other thought I have, which would be a fun and easy experiment: in the second phase, check the signature assertions against traces for good simulations that passed. Then prune any assertion that also matches a good simulation trace. This might improve the quality of the important signature list further.

Jim’s view

By default, I would normally look at this and think “part of verification, no new money there, not investable”. However, debug probably has the least EDA support today. It’s also the most expensive part of verification in time consumed by verification experts. And tools in this area can be very sticky. I’m thinking particularly of Verdi. There, users developed a strong loyalty, quite likely because they spend so much time in debug.

Now I think a new capability that could significantly enhance the value of a debugger – reducing total debug time – could attract a lot of interest. I’d want to see proof of course, but I think there might be a case.

My view

Before Atrenta was acquired by Synopsys, we acquired a company that did assertion mining. As a standalone capability, generating new assertions for direct verification, it seemed to struggle to find traction. This is a different spin on the technique. The assertions are not the end goal but a means to localize bugs. Clever idea and maybe more immediate market appeal.

You can read the previous Innovation blog HERE.


Alphawave IP is Enabling 224Gbps Serial Links with DSP

by Mike Gianfagna on 12-14-2020 at 10:00 am


Alphawave IP is a new member of the SemiWiki community. You can learn about the company and their CEO, Tony Pialis, in this interview by Dan Nenni. Design & Reuse did a virtual IP-SOC Conference recently and Tony presented. The D&R event had a very strong lineup of presenters. They supplemented the prepared video presentations with two live panels on Automotive and FDSOI. This created a nice balance of prepared and live material, a good ingredient for a virtual event. Alphawave IP has a very strong portfolio of DSP-based multi-standard connectivity silicon IP solutions. They recently won the 2020 TSMC OIP Partner of the Year award for high-speed SerDes IP, so they’re definitely a company to watch. I was anxious to hear how Alphawave IP is enabling 224Gbps serial links with DSP at the D&R event.

DSP SerDes Introduction

Tony started by discussing the differences between an analog SerDes and a digital, or DSP, SerDes. He explained that an analog SerDes can work reliably up to 36dB NRZ or 30dB PAM4. Since all equalization is implemented in the continuous time domain, this technology is sensitive to process variation. With a DSP-based design, most of the equalization is done digitally, allowing for more robust operation to 45dB NRZ and 36dB+ PAM4. This kind of design is also not very sensitive to process variation. Tony pointed out that the high-speed ADC required for a digital design like this is challenging to build.

Tony then went into some detail about analog linear equalization vs. DSP linear equalization. Clearly, the DSP approach is a better match for the demands of high-speed links.
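As a sketch of what "equalizing digitally" means, here is a minimal feed-forward equalizer: a plain FIR filter run over digitized samples. The channel and tap values are made up for illustration, and a production SerDes DSP is of course far more elaborate (adaptive taps, DFE, clock recovery):

```python
def ffe_equalize(samples, taps):
    """Feed-forward equalization: FIR-filter the ADC samples so that
    post-cursor inter-symbol interference is cancelled digitally."""
    out = []
    for k in range(len(samples)):
        acc = 0.0
        for i, tap in enumerate(taps):
            if k - i >= 0:
                acc += tap * samples[k - i]
        out.append(acc)
    return out

# A channel with a 0.5 post-cursor smears an isolated pulse; a 2-tap
# FFE of [1, -0.5] removes the first post-cursor, leaving a smaller
# residual that a longer equalizer would also cancel.
rx = [1.0, 0.5, 0.0]          # received pulse with post-cursor ISI
eq = ffe_equalize(rx, [1.0, -0.5])
```

Because the taps are just digital coefficients, they can be adapted per-lane after manufacturing, which is why the DSP approach is so much less sensitive to process variation.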

The Road to 200Gbps Serial Links

Next, Tony discussed the challenges of getting from current 112Gbps PAM4 SerDes to 224Gbps PAM4 devices. Keeping the architecture the same, one can see that the reach for the device is dramatically reduced – roughly one inch vs. one foot. This is a serious challenge. The data is summarized in the figure below.

Scaling Symbol Rates to 224Gbps

Given that package and board material aren’t likely to change much in the next couple of years, a new approach to increase data throughput for existing channels is needed. One that doesn’t suffer from the tradeoff issues shown above. Tony examined several alternative modulation schemes. Each has its own strengths and weaknesses relative to required channel bandwidth and signal-to-noise ratio (SNR). He focused on PAM8 as a good candidate given its low channel bandwidth requirements. The various modulation techniques and their requirements are summarized in the figure below.

High Capacity Modulation Schemes
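The bandwidth/SNR tradeoff behind that comparison follows from first-order modulation arithmetic. This is an idealized Nyquist-signaling sketch, not Alphawave's analysis; real channels need additional margin:

```python
import math

def pam_nyquist_bw_ghz(bit_rate_gbps, levels):
    """Minimum (Nyquist) bandwidth of PAM-N: half the symbol rate,
    where each symbol carries log2(N) bits."""
    return bit_rate_gbps / math.log2(levels) / 2.0

def pam_snr_penalty_db(levels):
    """Approximate SNR cost of PAM-N relative to NRZ (PAM-2): the eye
    opening shrinks by a factor of N - 1 at fixed swing."""
    return 20.0 * math.log10(levels - 1)
```

For a 224Gbps link, PAM8 needs only about 37GHz of channel bandwidth versus about 56GHz for PAM4, but pays roughly 7dB more SNR penalty, which is exactly the gap the sequence detection and FEC techniques below aim to close.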

The next challenge to tackle is how to manage the SNR degradation of PAM8. One step toward a solution is to use a “maximum likelihood sequence detector.”  This advanced DSP detector uses an approach called Viterbi detection to make data slicing decisions based on a sequence of symbols rather than on a single symbol, the typical approach. This minimizes error across a sequence of symbols and results in an SNR improvement of about 1-3dB.
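A stripped-down illustration of maximum-likelihood sequence detection: a textbook Viterbi search over a hypothetical 2-tap ISI channel with PAM4 symbols. This is nothing like a production 224Gbps datapath, but it shows why deciding over a sequence beats slicing one sample at a time:

```python
def viterbi_mlsd(received, levels, h, start):
    """Recover the most likely PAM symbol sequence given samples from a
    2-tap channel y[k] = h[0]*x[k] + h[1]*x[k-1] + noise.  The trellis
    state is the previous symbol; the path metric is squared error."""
    inf = float("inf")
    cost = {s: (0.0 if s == start else inf) for s in levels}
    path = {s: [] for s in levels}
    for y in received:
        new_cost = {s: inf for s in levels}
        new_path = {s: [] for s in levels}
        for prev in levels:
            if cost[prev] == inf:
                continue  # unreachable state
            for cur in levels:
                c = cost[prev] + (y - (h[0] * cur + h[1] * prev)) ** 2
                if c < new_cost[cur]:
                    new_cost[cur] = c
                    new_path[cur] = path[prev] + [cur]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]

# Noiseless sanity check: the detector recovers the transmitted PAM4
# sequence exactly despite the 0.5 post-cursor ISI.
pam4 = (-3, -1, 1, 3)
rx = [-0.5, -2.5, 1.5, 0.5]   # channel output for tx = [1, -3, 3, -1], x[-1] = -3
```

In noise, the sequence-level metric tolerates individual samples that a symbol-by-symbol slicer would misread, which is the source of the 1-3dB gain mentioned above.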

Next, Tony focused on forward error correction (FEC). Using new, third-generation soft FECs based on approaches such as block turbo codes, one can recover over 10dB of coding gain, further compensating for the challenges of PAM8.

Summary

Tony concluded with an overview of Alphawave’s world-leading portfolio of DSP-based PHYs covering many protocols and applications, short and long reach. The portfolio is available and silicon-proven on TSMC 7nm and 5nm processes. With this technology platform, Tony sees a path to 224Gbps. If you’d like to learn more about Alphawave IP’s assessment of the future and how its technology fits, you can see Tony’s complete D&R presentation by registering here. He goes into a lot of detail. You can also visit the Alphawave IP website to learn more and find out how Alphawave IP is enabling 224Gbps serial links with DSP.

Also Read:

Alphawave IP and the Evolution of the ASIC Business

Demand for High Speed Drives 200G Modulation Standards

CEO Interview: Tony Pialis of Alphawave IP


Design Considerations for 3DICs

by Tom Dillinger on 12-14-2020 at 6:00 am


The introduction of heterogeneous 3DIC packaging technology offers the opportunity for significant increases in circuit density and performance, with corresponding reductions in package footprint.  Yet, the implementation of a complex 3DIC product requires a considerable investment in methodology development for all facets of the design:

  • system architecture partitioning (among die)
  • I/O assignments for all die, both for signals and the power distribution network (PDN)
  • die floorplanning, driven by the I/O assignments
  • probe card design (with potential reuse between individual die and 3DIC assembly)
  • critical timing path analysis, assessing the tradeoffs between timing paths on-die versus the implementation of vertical paths between stacked die
  • IR drop analysis, a key facet of 3DIC planning due to the power delivery to stacked die using through-silicon or through-dielectric vias
  • a DFT architecture, suitable for 3DIC testing using individual known good die (KGD)
  • reliability analysis of the composite multi-die thermal package model
  • LVS physical verification of the multi-die connectivity model

Whereas 2.5D IC packaging technology has pursued “chiplet-based” die functionality (and potential electrical interface connectivity standards), the complexity of 3DIC implementations requires early and extensive investment in the design and analysis flows listed above – a higher risk than 2.5D IC implementations, for sure, but with a potentially greater reward.

At the recent IEDM 2020 conference, TSMC presented an enlightening paper describing their recent efforts to tackle these 3DIC implementation tradeoffs, using a very interesting testchip implementation.  This article summarizes the highlights of their presentation. [1]

SoIC Packaging Technology

Prior to IEDM, TSMC presented their 3DIC package offering in detail at their Technology Symposium – known as “System on Integrated Chip”, or SoIC.

A (low-temperature) die-to-die bonding technology provides the electrical connectivity and physical attach between die.  The figure below depicts available die attach options – i.e., face-to-face, face-to-back, and a complex combination including side-to-side assembly potentially integrating other die stacks.

For the face-to-face orientation, the backside of the top die receives the signal and PDN redistribution layers.  Alternatively, a third die on the top of the SoIC assembly may be used to implement the signal and PDN redistribution layers to package bumps – a design testcase from TSMC using the triple-stack will be described shortly.

A through-silicon via (TSV) in die #2 provides electrical connectivity for signals and power to die #1.  A through-dielectric via (TDV) is used for connectivity between the package and die #1 in the volumetric region outside of the smaller die #2.

Planning of the power delivery to the SoIC die requires consideration of several factors:

  • estimated power of each die (especially where die #1 is a high-performance, high-power processing unit)
  • TSV/TDV current density limits
  • distinct power domains associated with each die

The figure below highlights the design option of “number of TSVs per power/ground bump”.  To reduce IR drop and observe current density limits through a TSV, an array of TSVs may be appropriate – as an example, up to 8 TSVs are shown in the figure.  (Examples from both FF and SS corners are shown.)

The tradeoff of using multiple, arrayed TSVs is the impact on interconnect density.
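The TSV-per-bump sizing described above can be sketched as a back-of-the-envelope calculation. The function below is a hypothetical helper of my own, and every number in it is an illustrative assumption, not a TSMC process parameter:

```python
import math

# Back-of-the-envelope sizing of a TSV array per power/ground bump.
# All values here are illustrative assumptions, not TSMC process data.

def tsvs_required(bump_ma, tsv_imax_ma, tsv_mohm, ir_budget_mv):
    """TSV count per bump satisfying both the per-TSV current-density
    limit and the IR-drop budget (parallel TSVs divide the resistance)."""
    n_current = math.ceil(bump_ma / tsv_imax_ma)  # I / Imax per TSV
    # V = I * (R / n)  =>  n >= I * R / V_budget   (mA * mOhm = uV)
    n_ir = math.ceil(bump_ma * tsv_mohm / (ir_budget_mv * 1000))
    return max(n_current, n_ir)

# Hypothetical example: 400 mA per bump, 100 mA max per TSV,
# 50 mOhm per TSV, 5 mV IR budget
print(tsvs_required(400, 100, 50, 5))  # 4 TSVs per bump
```

Whichever constraint is tighter (current density or IR drop) sets the array size, which is why relaxing the IR budget does not always reduce the TSV count.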

As an illustration, TSMC pursued a unique SoIC implementation – a quad-core ARM A72 processor (die #1) where the L2$ cache arrays commonly integrated with each core have been re-allocated to die #2.  The CPU die in process node N5 maintains an L3$ array, while the SRAM die in process node N7 contains the full set of L2$ arrays.  A third die on top of die #2 provides the redistribution layers.  A total of 2700 connections are present between CPU die #1 and the L2$ arrays in die #2.

This is an example of how SoIC technology could have a major impact on system architectures, where a (large) cache memory is connected vertically to a core, rather than integrated laterally on a monolithic die.

PDN Planning

A key effort in the development of an SoIC is the concurrent engineering related to the assignment of bump, pad, and TSV/TDV locations throughout, for both signals and the PDN.

The figures above highlight the series of planning steps to develop the TSV configuration for the PDN – a face-to-face die attach configuration is used as an example.  The original “dummy” bond pads between die (for mechanical stability) are replaced with the signal and PDN TDV and TSV arrays.  (TSMC also pursued the goal of re-using the probe card, between die #1 testing and the final SoIC testing – that goal influenced the assignment of pad and TSV locations.)

The TSV implementations for the CPU die and SRAM die also need to be carefully chosen so as to meet IR goals, without adversely impacting overall die interconnect density.

LVS

Briefly, TSMC also highlighted the (multi-phase) LVS connectivity verification methodology and the unique DFT architecture selected for this SoIC test vehicle, as depicted below.

DFT

Another major consideration is the DFT architecture for the SoIC, and how connectivity testing will be accomplished using cross-die scan, as illustrated below.

 

TSMC demonstrated that the resulting (N5 + N7) SoIC design achieved a 15% performance gain (with suitable L2$ and L3$ hit rate and latency assumptions), leveraging a significant reduction in point-to-point distance afforded by the vertical connectivity between die.  The package areal footprint for the SoIC is reduced by ~50% from a monolithic 2D implementation.

3D SoIC packaging technology will offer system architects unique opportunities to pursue design partitioning across vertically stacked die. The density and electrical characteristics of the vertical bond connections may offer improved performance over lateral (monolithic or 2.5D chiplet-based) interconnects. (The additional power dissipation of “lite I/O” driver and receiver cells between die, versus on-chip signal buffering, is typically small.)

The tradeoff is the investment required to develop SoIC die floorplans incorporating TSVs and TDVs that provide the requisite signal count and a low-IR-drop PDN. Although 2.5D chiplet-based package offerings have been aggressively adopted, the performance and footprint advantages of a 3DIC are compelling. The TSMC test vehicle demonstrated at IEDM will no doubt generate considerable interest.

-chipguy

References

[1]  Cheng, Y.-K., et al., “Next-Generation Design and Technology Co-optimization (DTCO) of System on Integrated Chip (SoIC) for Mobile and HPC Applications”, IEDM 2020.

 


5 Things You Need to Plan for System Custom Silicon

by Raul Perez on 12-13-2020 at 10:00 am


I used to be part of the custom silicon management team at Apple. I’ve seen how great a challenge it is to pull off a custom silicon strategy within a one-year product cycle. Apple is the perfect example of this custom silicon model, since they develop the best mobile processors in the world for their products, along with other supporting system custom silicon.

Recently, Apple has even dropped Intel in favor of its own M1 processor for the Mac. Tesla has built its own AI processor and dropped Nvidia. Amazon AWS is about to release its own AI chip, Trainium. Google is rumored to be developing custom silicon for its next phone release. Many others, such as Facebook, are also known or rumored to be developing custom silicon as part of their products or services. This is a who’s who of the world’s best companies developing custom silicon to lead their categories, and saying NO to off-the-shelf silicon.

There are some basic steps that should always be taken to start on the path towards a successful custom silicon strategy. Here they are:

1. Decide where to integrate each type of circuit.

By ‘where’ I mean multiple things. First, there is the semiconductor process node. One common approach is to split the integration into two chips: one for analog and power circuits in a 5V process, and a second, digital chip in a low-voltage process. Some processes (such as certain 65 to 55 nm BCD-lite nodes) can offer a good performance and value compromise for integrating everything into one chip.

Second, there is the system physical location that needs to be considered. The charger chip will want to be close to the battery and the power input. The processor will want to be close to its peripherals. Trade-offs will need to be worked out to see if integration is acceptable or not.

Third, routability must be considered, to verify that routing congestion is not an issue once all the necessary passives for the chip(s) are included.

Fourth, some types of components, such as sensors, are made in very specialized technologies such as MEMS and are not suitable for integration into a custom chip in standard silicon processes. A chip co-packaging approach may still provide benefits here.

2. Decide what makes sense to integrate and what doesn’t.

Semiconductor processes don’t provide good enough cost and density to justify swapping an off-the-shelf power cap or power inductor for an integrated version. ESD protection and other diodes, signal FETs, and power FETs, on the other hand, can easily be absorbed into a custom silicon chip. For power FETs, keep in mind that some technologies are superior for high-power and high-voltage applications and are best kept as off-the-shelf components. Every design is different and requires some engineering analysis to decide what makes sense.

Most off-the-shelf components can be integrated into one or a few chips. This typically provides a BOM cost reduction and a board size reduction, each in the 50% range, along with better reliability, better anti-counterfeit security, a better fit to your PRD and more.

3. Determine which existing components could be used as the base IP for your desired custom chip.

Custom silicon is usually developed in parallel with the system development, as time to market is usually key for high-volume consumer electronics. Therefore, it is desirable to find off-the-shelf chips and base your custom chip project on them as IP. Once you have a list of off-the-shelf components that look attractive, you can contact the suppliers that make them to start a discussion about custom silicon.

4. Determine which suppliers are suitable for your project and how you will manage it.

First, decide how comfortable you feel about the suppliers you have listed in step 3. Do they have redundant manufacturing sites? Do they have a good track record delivering shipments on time? What is their overall financial health?

Second, you need to know how to manage the silicon suppliers from concept to mass production. It’s too risky to simply sign off on a chip spec and then wait 4, 5, 6 or more months to get your chips back. You need to mitigate that risk with a very thorough process that ensures continuous communication and alignment between all parties involved, with frequent checkpoints at which chip experts who work for your company review the supplier’s work to ensure it is of high quality. It is not an acceptable mitigation to split your manpower by having a ‘backup’ system designed with off-the-shelf components.

5. Determine the ROI for your particular situation.

Let’s explore an example. Acme Electronics ships an average of 6 million units per year, with a product life of about 4 years and an electronics BOM cost of $2 per unit. They’ve determined that one custom silicon chip can do everything they need for $1. They engage a supplier that quotes an NRE of $3 million USD in three payments: $1 million at kick-off, $1.5 million at tape-out and $0.5 million at mass-production ramp. So Acme Electronics needs to pay $3 million up front, but will save $1 on every unit shipped. After shipping 3 million systems they will recover their NRE investment; after that, they earn $1 of extra profit for every system they ship. Since they will sell 24 million systems over the product’s lifetime, after deducting the $3 million NRE, Acme Electronics makes $21 million USD of extra profit. The ROI for them was therefore $21 million / $3 million = 700%.
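The arithmetic in this example can be replayed as a small calculation. The function below is a hypothetical helper of my own, but the input figures are taken directly from the Acme example:

```python
# Replaying the Acme Electronics numbers from the example above.
# Function name and structure are my own; figures come from the text.

def custom_silicon_roi(units_per_year, life_years, savings_per_unit, nre):
    lifetime_units = units_per_year * life_years
    payback_units = nre / savings_per_unit           # units to recover NRE
    net_profit = lifetime_units * savings_per_unit - nre
    roi_pct = 100 * net_profit / nre
    return payback_units, net_profit, roi_pct

payback, profit, roi = custom_silicon_roi(6_000_000, 4, 1.0, 3_000_000)
print(payback, profit, roi)  # 3000000.0 21000000.0 700.0
```

Plugging in your own volumes and per-unit savings quickly shows whether the NRE pays back within the product’s lifetime.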

It’s also important to consider in this analysis the losses you may be incurring due to counterfeits, yield losses, etc… A custom silicon strategy can help you virtually eliminate counterfeit risks and losses. Therefore, that should be part of your cost benefit analysis.

 

About CustomSilicon.com by Digital Papaya Inc.

 

CustomSilicon.com is the leading consulting firm in the custom silicon strategy and project management space for AR/VR, automotive, mobile, server, crypto, sensors, security, medical, space and more.

Raul has 20 years of combined experience in the system electronics and silicon industries. He is currently responsible for a major system company’s custom silicon and sensor projects. Raul was the directly responsible silicon manager for 18 chips ramped to mass production at Apple for iPhone and iPad, and 23 total chips ramped to mass production counting projects where he was an expert reviewer. Raul was directly responsible for the development of mobile processor system PMICs for the iPad 2, the new iPad, iPad mini, iPad 4 and iPhone 5s. Other silicon included backlight/display power for iPhone 5 and iPhone 5s, Lightning connector silicon and video buffers. He managed supplier teams across the globe.

Our network of experts provides our clients with an A+ silicon management team from day one.


The Semiconductor Industry Has High Hopes That Biden Will Change Tracks

by Terry Daly on 12-13-2020 at 8:00 am


What is the “right track” for US-China trade relations?

The semiconductor industry has been squarely in the crosshairs of US-China trade tensions for four years. As the US faces a presidential leadership transition, will a Biden administration change the dynamic? The chip industry is counting on it, and China hopes so too.

In a recent address to the US-China Business Council, China’s foreign minister Wang Yi said China is open to and hoping for a renewed relationship. “We should strive to restart the dialogue, get back to the right track, and rebuild mutual trust in the next phase of Sino-US relations.”

China should not expect an immediate unwinding of the Trump agenda. In a recent New York Times interview, Biden stated that he intends to first review the existing US-China agreement and then develop a “coherent strategy” with traditional allies in Europe and Asia.  He wants trade policy that will “… actually produce progress on China’s abusive practices – that’s stealing intellectual property, dumping products, illegal subsidies to corporations” and forcing “tech transfers” from American companies to their Chinese counterparts. These goals could have been directly lifted from Trump’s US Trade Representative Section 301 Report (March 2018). Biden also wants to build leverage through bipartisan consensus for large scale investments in R&D, infrastructure, and education to compete with China. His view is that the US currently has neither the policy nor the leverage.

The Trump administration has been on a four-year campaign to redress trade imbalances, counter long-standing industry complaints regarding China’s trade practices, check China’s global cybertheft reach and deny advanced technology to its national security complex (military, intelligence, cyber and space). Trump levied tariffs, strengthened oversight of Chinese licensing and M&A activity with the signing of the Foreign Investment Risk Review Modernization Act, and expanded export controls targeting both denied parties and advanced technologies. In addition, he triggered the foreign direct product rule to restrict global companies (primarily TSMC) from product shipments to Chinese companies (primarily Huawei) using US-origin technology (notably from US EDA & IP firms and semiconductor equipment manufacturers). The Justice Department took on high profile litigation to prosecute IP theft (UMC and Fujian Jinhua). And the “Clean Networks” initiative formed alliances with more than 50 democracies dedicated to using only trusted vendors in their 5G networks.

The policy result: mixed. The trade deficit is higher today than at the outset. The US semiconductor industry reacted negatively to policies impacting market share, financial performance, and free trade, but positively to litigation addressing high profile IP theft. The semiconductor industry and many of its customers scrambled to revise global supply chains to mitigate risk. The impact to China’s technology industry was severe. Huawei was hit particularly hard by the denial of access to chips (resulting in the sale of its Honor smartphone business) and by a partial global boycott of its 5G communications systems. Huawei and SMIC are now essentially locked out of access to leading edge chip technology (7 nm and below). China retaliated with tariffs and its own denied parties list. It codified a new strategy to become self-sufficient across the entire semiconductor value stack.

The pending “de-coupling” threatens a bifurcation in global technology standards, inefficiency in R&D investment and a revival of economic nationalism. Industrial policy has (re)surfaced in the US, Europe, India and elsewhere as regions move to protect access to leading technologies, address cyber risks to national security and critical infrastructure, and secure the supply of key components. Taiwan announced an initiative to form its own semiconductor equipment industry to reduce dependence on US firms and mitigate the reach of US sanctions.

Many executives in the semiconductor industry desperately want to roll back the Trump agenda. They want unfettered access to China’s market and to global talent, but with protection of IP and freedom of action to operate globally. They want to avoid the balkanization of the industry. They acknowledge policy objectives of their countries of incorporation but want to extract the chip industry from being a lever of economic and national security policy. They do not want to be in the club long dominated by soybeans, oil, steel, airlines, and autos.

So should a Biden administration unwind, maintain, or modify policy to gain consensus and leverage? Will it acquiesce to China’s view of the “right track”? A geopolitical reality check regarding China must underpin potential policy revisions. The Biden team surely understands that there is already near bi-partisan consensus in the US Congress that China threatens global security, denies essential human rights, and disregards obligations taken under international agreements. These threat vectors will not disappear with Joe Biden in the White House.

China is a regional and global security threat with an increasingly aggressive military posture against neighbors in the South & East China Seas and on its border with India. It is prosecuting a rapid build-up of conventional and asymmetric military capability leveraged by a “civil-military fusion” policy that enables Chinese government access to any technology available in its commercial sector. It continues trade secret and IP theft through both traditional and cyber espionage. The Belt & Road Initiative and debt diplomacy through Chinese investment in overseas port facilities and raw materials personify economic strategies backing China’s goal of global hegemony. Is this being on the “right track”?

China’s election to the UN Human Rights Council belies an atrocious human rights record. It has imprisoned and forced into involuntary labor millions of Muslim Uighurs. It also persecutes Buddhist, Falun Gong and Christian communities. The Chinese Communist Party is the only acceptable orthodoxy. China abrogates obligations it has taken under international treaties. It refuses to accept the results of maritime disputes arbitrated under the UN Convention on the Law of the Sea. It unilaterally terminated its 50-year treaty on Hong Kong (one country, two systems) and imprisoned advocates for democracy. Its role in COVID-19 remains to be understood. Right track?

China has masterfully leveraged access to open societies and the international trading order since joining the WTO in 2001, lifting millions of people out of poverty. But it has not met its reciprocal obligations to free, fair, and transparent trade practices. China’s economic development playbook is extensive: subsidization of national champions; restrictions on foreign access to local markets; requirements on global corporations for licensing and/or minority ownership in joint ventures as the ante for market access; acquisition of global firms followed by repatriation of IP and production; cyber theft targeting commercial IP and technology critical to national security. Right track?

Finally, across the strait sits Taiwan, home to one of the most vibrant and strategic segments of the semiconductor industry. China has taken an unambiguous position on its ultimate sovereignty over Taiwan and its aim for reunification, positions not widely supported in the international community. China is using economic and military leverage to bend Taiwanese leadership and the international community toward that view. Right track?

Any US President who subordinates this threat profile in the quest for improved trade relations with China does so at the peril of the United States and its allies. Consensus and the use of leverage are central to the path forward.

Indeed, there is US bipartisan consensus in Congress on the China threat and the need to invest heavily in both research and manufacturing to keep US chip technology at the leading edge and assure security of chip supply. This consensus is exemplified in the CHIPS Act now integrated in the pending National Defense Authorization Act (NDAA). Internationally, there is consensus among more than 50 liberal democracies as to the threat posed to trusted communications networks by Huawei’s 5G platform and an associated commitment not to deploy Huawei.

Despite the revulsion toward “All Things Trump” among most leaders in technology, objective policymakers recognize this consensus and the substantial leverage bequeathed by the Trump administration on which to advance US objectives with China. How then should a Biden administration position trade policy and the semiconductor industry in this context? Should chips be exempt from use as a lever of US policy vis-à-vis China?

First, Biden should maintain all sanctions and tariffs and avoid the visceral instinct to immediately reverse the actions of the Trump administration. This would clearly signal to China that a new Biden administration shares in the US bi-partisan consensus that China is a threat to global security and that abuses of human rights and the abrogation of treaty obligations are not acceptable. For now, maintain the leverage that was painfully developed.

Next, Biden should task Katherine Tai on day one to lead the development of a “National Trade Strategy” to drive clarity of US objectives and approach on trade policy. This would guide consistency in US action and transparency for the American people, corporations, and trading partners. It should embody the high ground of “free, fair and open trade”, embrace international trade deals that expand the global economy, embody strong IP protection, provide national security carve-outs, and integrate “reciprocity and proportionality” as central tenets in countering trade treaty violations. It should support use of trade as a viable lever in achieving national policy priorities.

Third, coordinate China trade policy with liberal democratic trading partners. Those most critical from a semiconductor perspective are South Korea, Taiwan, Japan, Singapore, the EU, Israel, and India. Unilateral US action has at times disenfranchised traditional allies, but the Clean Networks alliance and the 42 nation Wassenaar Arrangement governing export control provide beachheads from which to expand. A Biden administration should evaluate conditions under which the US could join the Trans-Pacific Partnership and negotiate toward that end. It should reconcile open issues and re-engage the WTO. These actions will blunt China’s ability to further displace US global trade leadership following China’s win in finalizing the Regional Comprehensive Economic Partnership.

Fourth, unambiguously confirm support for the CHIPS Act as incorporated in the pending NDAA, or any revision needed in 2021. Extend the CHIPS Act to include multi-year funding for the comprehensive R&D imperatives in the “Decadal Plan for Semiconductors”, as recently published by the Semiconductor Research Corporation.

Finally, re-engage in trade negotiations with China with clear objectives and allied support. Establish as part of the talks a technical working group inclusive of US and Chinese entities of the SIA, SEMI and GSA. Charter the group to deliver recommendations for specific technical and governance methods of protecting IP and ensuring that the application of US-sourced technology be limited to commercial use and firewalled from China’s national security infrastructure. A robust verification regime must be the ante for lifting existing tariffs and sanctions. Phase tariffs and sanctions out in concert with China’s demonstrated acceptance of its international treaty obligations.

The Thucydides Trap posits the inevitability of military conflict between a current global hegemon and a rising power. Is war then pre-ordained for the US and China? Semiconductor technology is the key ingredient of the digital economy and is essential to the future of both countries, indeed the globe. An agreement on chips between the great powers might pave the way for resolution of other critical flash points and lead minimally to détente.

Joe Biden is right to seek US bi-partisan consensus and alignment with allies as he steps back onto the global stage. He should wisely use the multiple points of leverage passed along from the prior administration and assure that the “right track” is defined by the interests of the US and its allies, not solely those of Beijing.

Terry Daly is a retired semiconductor industry executive and senior fellow at The Council on Emerging Market Enterprises, The Fletcher School of Law & Diplomacy, Tufts University


Tesla: The Eyes Have It

by Roger C. Lanctot on 12-13-2020 at 6:00 am


David Zipper of Harvard’s Kennedy School writes in Slate that the incoming Biden Administration should “bring the hammer down” on Tesla Motors, in the interest of the general public, for its mis-labeled and therefore misleading Autopilot application and the recently updated Full Self-Driving (FSD) software beta. Zipper’s plan, apparently, is to “stop” Tesla and somehow put Federal regulators in charge of “guiding” the electric car company in its development and deployment of self-driving technology.

Slate: “The Biden Administration Needs to Do Something about Tesla”

Zipper is correct in highlighting the limitations of Tesla’s FSD software but his hysteria is misguided. FSD – launched this past fall as a beta for customers with suitably equipped vehicles and with an array of consumer caveats – is a potential menace. But a blunt force regulatory response of the sort Zipper is advocating is hardly in order and certainly nothing the Biden Administration should sign up for – especially given the fact that Tesla has become the poster child of global American automotive technological achievement.

Nevertheless, Zipper trots out fellow travelers supporting his cause, including the National Transportation Safety Board, the National Highway Traffic Safety Administration, Partners for Automated Vehicle Education (PAVE), the AAA, the Owner-Operator Independent Drivers Association (OOIDA), the Government Accountability Office, and a somewhat ambivalent Alliance for Automotive Innovation.

What’s the real problem? How did we arrive at this moment where an innovative EV startup has disrupted industry norms and traditions with a customer-pleasing driving automation solution that simultaneously promises life-saving technological advances and the potential for sudden death? Why has Tesla stirred up such passionate opposition?

We got here because A) the NHTSA ran out of passive safety regulatory solutions such as seat belts, airbags, stability control, and anti-lock braking to reduce highway fatalities; and B) the agency has been sidelined, de-emphasized and defunded at the very moment when it needs more attention and funding to take on the challenge of regulating active safety systems such as blind spot detection, lane departure warning, automatic emergency braking, cross-traffic warning, and adaptive cruise control.

The last major NHTSA safety initiative was a voluntary effort agreed to by the automotive industry to implement automatic emergency braking. Before that came the decade-long effort to mandate backup camera technology.

If it weren’t for the COVID-19 pandemic killing thousands of Americans on a daily basis, consumers might be more troubled by the 100 Americans dying every day on U.S. roadways. Tesla’s CEO Elon Musk argues that his vehicles and his technology are part of the solution, not the problem.

The fix for the Tesla FSD beta software is quite simple, and Zipper touches on it but fails to focus on it. The problem is the driver monitor built into Tesla vehicles: Zipper notes that it lacks an eye-tracker, thereby allowing it to be easily subverted by reckless or incautious users.

In reality, Tesla’s vehicles are already equipped with in-cabin driver and passenger monitors that may well be capable – with an over-the-air software update – of fulfilling the need for a more robust solution. Should such a monitor be required, Tesla could respond with the flip of a switch.

So, the solution appears to be simple. NHTSA ought to initiate an investigation of the efficacy of driver monitoring systems and develop a recommendation. Given the resources and time normally required by such an investigation, though, NHTSA and the public might be better served by the pursuit of the same voluntary path taken for encouraging the adoption of automatic emergency braking.

Zipper notes the advantages of Europe’s so-called “type approval” process for reviewing and approving systems to be introduced for European automobiles. He fails to mention that the separate European New Car Assessment Program likely has overriding relevance here due to the popularity of its five-star safety ratings based on rigorous and ongoing research.

Euro-NCAP will require driver monitoring as standard equipment on all new vehicles beginning with model year 2022. All indications are that this requirement – already evolving – will eventually integrate eye tracking solutions.

As noted by Strategy Analytics in a recent report on the subject: “However, members of the UNECE safety committee believe that, by 2022, the test protocols from Euro-NCAP will be tightened to include direct monitoring of the driver’s eyes and face movements – and thus could be beneficial for interior camera-based driver monitoring systems.”

Strategy Analytics: “European Mandate Boosts Interior Camera-Based Driver Monitoring, Winners Now Emerging”

In other words, nothing less than eye tracking will be required as standard equipment for European vehicles to achieve a five-star safety rating – roughly equivalent to NHTSA’s five-star safety rating in the U.S. It’s worth noting that Consumer Reports recently gave Comma.ai’s Comma Two Openpilot aftermarket driver assistance system a top rating, in part due to its integration of eye-tracking-based driver monitoring.

SOURCE: Consumer Reports

Consumer Reports: “Advanced Driver Assistance Systems – Test Results and Recommendations”

General Motors was a leader in integrating eye-tracking technology from Seeing Machines as part of its Super Cruise semi-automated driving system. Super Cruise took second place behind Comma Two in the Consumer Reports ranking. Tesla was third.

The greater significance behind the entire debate is the recognition of the efficacy of human-based driving. In its own literature, Euro-NCAP blames 90% of all crashes on human frailties. The reality is that if machines were doing all the driving today our transportation systems would fail miserably. Human beings are actually pretty good at driving cars – even if more than a million humans die every year in vehicle crashes.

The 100/day fatality rate in the U.S. actually represents great progress – but U.S. regulators are aware that they have reached an impasse. Transformative advances such as the adoption of seat belts and airbags are in the rearview mirror and active safety represents terra incognita. The path forward is literally and figuratively unclear.

The first step down this path, though, likely lies through driver monitoring, to better understand driver behavior and how to assist drivers. Rather than seeking to remove human beings from the driving task, auto makers, like Tesla, are seeking out ways to assist drivers.

The Consumer Reports report on advanced driver assist systems (ADAS) highlights the challenges of developing and refining effective and appropriate user interfaces that are helpful without being distracting, confusing, or annoying. Strategy Analytics conducts user experience research in this area as well and is on record criticizing Tesla’s FSD beta software.

Strategy Analytics: “Tesla Full Self Driving HMI – Not Useful, Not Usable, Not Safe”

We will not make progress by standing in the path of innovation. Developers, like drivers, need help and, maybe, some guidance. It may be time to appoint a proper Congressionally approved director of NHTSA and properly fund this essential organization so that it can take on its greatest challenge yet – helping machines to better assist humans in the task of safe driving.

At the very moment that the industry is poised to start removing steering wheels from cars, regulators are calling for driver monitors to make sure drivers are paying attention to the driving task. Suffice it to say there will be some very confusing messages for drivers to digest in the coming years. Let’s hope we get the messaging, the branding, and the regulations right in the interest of saving lives.