Formal for Post-Silicon Bug Hunting? Makes perfect sense
by Bernard Murphy on 03-31-2021 at 6:00 am

Figure: Bug hunting process for a DDR problem

You verified your product design against every scenario your team could imagine. Simulated, emulated, with constrained random to push coverage as high as possible. Maybe you even added virtualized testing against realistic external traffic. You tape out, wait with fingers crossed for first silicon to come back. Plug it into the test board and everything looks good – until an intermittent bug sneaks in. After much head scratching, you isolate the problem to read/write re-ordering misbehavior around the memory controller. Now you have to try to reproduce the problem in pre-silicon verification. Hunting for a bug you missed. But formal for post-silicon bug hunting? That’s not as strange as you might think.

Out of control

You know where this is going. There’s an interconnected set of state machines mediating interaction between the interface control unit, the buffer unit and the memory controller. In some overlooked and seemingly improbable interaction, old data can be read before a location has been updated. Not often. In the lab you only see a failure intermittently, once every 2 to 8 hours. Not surprising that you didn’t catch it in pre-silicon verification. I’ve seen similar issues crop up around cache coherence managers and modem controllers.

This is where formal methods can shine, finding obscure failures in complex state machine interactions. But in this application, you’re not setting out to prove there are no possible failures – that’s pre-silicon verification. Here you want to hunt for a bug you know must exist. That takes a different approach, one that won’t present any great challenge to formal experts but can be a frustrating needle-in-a-haystack search for the rest of us. Through much experience, Siemens EDA has developed a systematic approach they call a Spiral Refinement Methodology that should help you find that needle without losing your mind.

Spiraling through a radar chart

They graph this refinement in a radar chart (the image in this blog is an example). The search progresses through multiple objectives at several levels. They start by reducing complexity to make formal analysis possible. Since the debug approach is formal, you first need to localize where, functionally, in the design the failure is happening. This insight typically emerges through bug triage in the lab. Then you can eliminate big chunks of the design that should not be important. And perhaps (carefully) add constraints. You will need access to formal experts, internal or external, to guide you away from pitfalls. Particularly as you start to abstract or divide up the problem to further manage complexity.

Assertions and initial state

Another key objective is to refine assertions towards the failing condition. One technique they mention here is “formal goal-posting”. This is a method to progress towards a condition through a sequence of proofs, allowing you to step out through the state-space in digestible chunks rather than trying to do the whole thing in one impossible leap. Along similar lines, they stress the importance of finding a suitable initial state from which to start proving. For bugs that may not crop up for several hours, you’ll need to start close in time, not just in space (function). Simulation can get you there, to set up that initial state.
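To make the goal-posting idea concrete, here is a minimal toy sketch in Python, with a breadth-first search standing in for the formal engine’s cover proofs. The function names and the tiny state machine are purely illustrative; this is not a Siemens EDA API, just the shape of the technique.

```python
from collections import deque

def reach(successors, start, goal):
    """Breadth-first search standing in for a formal cover proof:
    find a trace from `start` to any state satisfying `goal`."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        trace = frontier.popleft()
        if goal(trace[-1]):
            return trace
        for nxt in successors(trace[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(trace + [nxt])
    return None

def goal_post(successors, init, waypoints):
    """Chain a sequence of shallow cover proofs. Each reached waypoint
    becomes the start state for the next proof, so no single search has
    to cross the whole state space in one leap."""
    trace, start = [init], init
    for wp in waypoints:
        leg = reach(successors, start, wp)
        if leg is None:
            return None           # waypoint unreachable: refine it and retry
        trace += leg[1:]
        start = leg[-1]           # restart the next proof closer to the bug
    return trace

# Toy 3-bit state machine and two waypoints stepping toward a "failure" state.
succ = lambda s: [(s + 1) % 8, (s + 3) % 8]
waypoints = [lambda s: s >= 4, lambda s: s == 7]
print(goal_post(succ, 0, waypoints))   # -> [0, 1, 4, 7]
```

In a real flow each `reach` call would be a bounded proof or cover run, and the simulation-derived initial state discussed above plays the role of `init`.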

Then they refine each of these objectives. Further abstractions, further tuning assertions, finding more suitable initial states. Zeroing in on a sequence or set of sequences that can lead to that failure detected in the lab. They describe application to three example failures, including this one. In each case localizing the problem through a very systematic search.

Very nice paper. You can read it HERE.

Also Read:

Library Characterization: A Siemens Cloud Solution using AWS

Smarter Product Lifecycle Management for Semiconductors

Observation Scan Solves ISO 26262 In-System Test Issues


WEBINAR: Pulsic’s Animate Makes Automated Analog Layout a Reality
by Tom Simon on 03-30-2021 at 10:00 am


Many years ago, digital and analog design flows diverged, with digital design benefiting from increasing levels of automation and, more importantly, separation between the front-end and back-end design processes. While digital design still requires linkages between the front end and back end, they are well defined and existing flows handle them in a straightforward manner. The same cannot be said for analog design. Despite the many advances in custom layout tools and improvements in the entire analog design flow, the dependencies between front end and back end remain challenging, as do the intricacies of analog layout itself.

Pulsic has a long history of providing design tools that help improve the quality of custom digital designs and has recently turned its focus to solving the long-standing challenges of automating the analog design process. Their Animate Preview product can be used right from inside the Cadence schematic editor to begin creating and understanding the circuit layout. Because layout considerations are critical to design success, having insight into and control of the physical design helps speed up the process and improve final design quality at the same time.

Paul Clewes of Pulsic gave me a detailed look at Animate Preview and talked about their upcoming webinar on April 15th. Animate is integrated with Virtuoso and when launched adds a preview window in the lower left corner of the schematic editing view. Animate will automatically detect when an analog circuit is loaded and then identify common structures such as differential pairs, current mirrors, matched pairs, etc. Animate will generate a layout on the fly and display it in the preview window.

Quite a lot happens when this layout is generated. Users do not need to specify constraints; the current technology information is used to create a DRC-correct layout that is fully compatible with and editable in Virtuoso. Because Animate is aware of the structures mentioned above, it is smart when it comes to placing devices. The webinar will show several examples of how Animate intelligently places devices to ensure optimal circuit operation.

Analog circuit designers can get quick and accurate area estimates and can then go into the Animate user interface to easily and graphically control device placement, guard ring configuration, dummy device location, etc. It is easy to modify the guard rings and dummy devices as well as control relative positions for devices. Each change made in the user interface triggers an update to the layout inside of Animate.

Users can also select from a variety of aspect ratios and assign pins to the desired edge of the cell. Under the hood, Animate is creating a DRC-correct layout with proper spacing. From the user’s perspective it is a bit like using a drag-and-drop editor, but one intended for analog layout design. My first thought was of how WYSIWYG HTML editors hide the underlying HTML but let you move blocks easily to achieve the results you desire.

After talking with Paul, it was clear that Pulsic is onto something with Animate Preview. Because the layout of analog circuits is so important during circuit design, giving the circuit designer a tool to see and control the layout is going to help immensely. A lot of companies have taken a run at solving this problem, but there is a subtle combination of automation and direct control required to come up with a feasible solution. To make your own assessment of how useful this might be, feel free to watch the video here.

Also Read:

CEO Interview: Mark Williams of Pulsic

Analog IC Layout Automation Benefits

Webinar: Boosting Analog IC Layout Productivity


Webinar: Rapid Exploration of Advanced Materials (for Ferroelectric Memory)
by Tom Dillinger on 03-30-2021 at 6:00 am


There are many unsung heroes in our industry – companies that provide unique services and expertise that enable the rapid advances in fabrication process development that we’ve come to rely upon.  Some of these companies offer “back-end” services, assisting semiconductor fabs with yield diagnostic engineering and failure analysis.  Some are “front-end” companies that pursue advanced research into promising new materials and processing techniques, and then assist with technology transfer to production manufacturing.  We tend to focus on the large semiconductor foundries and their process roadmaps, yet the underlying support from these engineering firms is fundamental to the industry as a whole.

An exemplary front-end services company is Intermolecular, the Silicon Valley science hub of EMD Electronics.  They offer process development research and characterization services spanning a wide range of materials – e.g., metal alloys, oxides/nitrides, and thin films for specialized applications.

With each new process node, extensive investigations into new materials are pursued, to determine the optimum stoichiometry and electrical properties.  This is especially evident in the pursuit of alternative memory technologies.  A specific example is the introduction of new ferroelectric materials for very high density, non-volatile data storage.

Background

A ferroelectric material is a special dielectric, in that it exhibits two (stable) remanent polarization states.  The figure below illustrates the hysteresis curve of the crystalline polarization when an applied voltage is cycled across the dielectric – note the two intersections of the curve with the vertical axis when the applied voltage is zero, representing the stored “state” of the material.

The polarization is the contribution of the electric dipoles in the material to the electric flux between the terminals in the presence of an electric field.  The permittivity of the material is:  epsilon = (epsilon_0 + P/E), where epsilon_0 is the permittivity of free space, P is the polarization in the material, and E is the applied field.  The curve for a conventional, non-ferroelectric material would be a straight line through the origin – i.e., no polarization when the applied voltage is removed.
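In display form, this is just the standard constitutive relation of electrostatics, with D the electric flux density:

$$D = \varepsilon_0 E + P \quad\Longrightarrow\quad \varepsilon = \frac{D}{E} = \varepsilon_0 + \frac{P}{E}$$

For a linear dielectric, P is proportional to E and the permittivity is a constant; in a ferroelectric, P(E) traces the hysteresis loop, leaving a non-zero P when E returns to zero.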

Note in the figure that the chemical bond orientation in the material differs slightly in the two states, resulting in the remanent polarization.

(The term ferroelectric is a bit misleading – there is no iron (Fe) constituent in the dielectric material.  The hysteresis curve of the dielectric polarization resembles the curve of a ferromagnetic material in the presence of an applied magnetic field.  After the external field is removed, the ferromagnetic material retains a magnetic polarization.  When the concept of remanent electrical polarization in a dielectric was first demonstrated, the term ferroelectricity was introduced, which has lasted.)

The two polarization states of the ferroelectric material suggest that it may be used as part of a memory bit circuit implementation.  The figure below illustrates a couple of potential implementations:

One depicts the ferroelectric material as a replacement for the storage capacitor in a 1T1C bitcell.  Unlike a conventional DRAM, note that no refresh of storage charge lost to leakage current is required – data storage is represented by a dielectric polarization state, not by the amount of charge stored on the bitcell capacitance.  The other implementation shows the ferroelectric material directly integrated in the dielectric gate stack of a field-effect transistor.  The two polarization states of the material will result in different threshold voltages for the device, representing the stored data value.

As you might imagine, the crystalline properties of the dielectric material are crucial to the magnitude of the polarization states and the opening of the hysteresis curve.

Intermolecular Webinar on Ferroelectric Materials Optimization

I had an opportunity to view an outstanding webinar from Intermolecular, describing their recent investigations into ferroelectric materials.  Their prototype fab capabilities provided atomic layer deposition (ALD) of a variety of hafnium oxide (HfO2) and zirconium oxide (ZrO2) films.

An image from the webinar is shown below, for the case of HfO2.  There are three crystalline phases for these oxides; however, only one demonstrates attractive ferroelectric behavior.

It is therefore necessary to ensure the process flow for depositing (and crystallizing) the film results in a very high percentage of material with the desired crystalline structure.

Another process experiment pursued by the Intermolecular team evaluated the ferroelectric properties of a stoichiometric mix of hafnium and zirconium oxides deposited as a single thin film, as well as of stacked, separately ALD-deposited HfO2 and ZrO2 layers.

Even if you’re not directly involved in advanced process development, I would encourage you to view this webinar presentation from Vijay Narasimhan at Intermolecular.  It is extremely informative, starting with the basics of ferroelectricity, and offering insights into the R&D flow for materials evaluation – e.g., deposition/annealing, crystalline spectroscopy, and electrical characterization.

Here is the webinar replay link.

Here is a link to more information on the front-end services provided by Intermolecular.

Also Read:

Executive Interview: Casper van Oosten of Intermolecular, Inc.

Integrating Materials Solutions with Alex Yoon of Intermolecular

Ferroelectric Hafnia-based Materials for Neuromorphic ICs


Library Characterization: A Siemens Cloud Solution using AWS
by Kalar Rajendiran on 03-29-2021 at 10:00 am


Pressing demands on compute speeds, storage capacity and rapid access to data are not new to the semiconductor industry. A desire for access to on-demand computing resources has always been there. During the pre-cloud-computing era, companies provisioned on-demand compute capacity by procuring high-performance computing equipment that could handle peak demand. This led to under-utilization of equipment during typical demand periods. Interestingly, larger companies, in spite of owning a lot of high-performance computing assets, sometimes also experienced the opposite situation: a lack of availability of the right kind of compute resources during extreme peak demand periods, when multiple large projects were going on concurrently.

Availability of outsourced cloud compute and storage services changed all this. The risks and costs of procuring the latest and greatest equipment were shifted to cloud services companies. Customers were able to convert large upfront fixed costs (capital expenditures) to use-based variable costs. Customers simply accessed what was needed, when it was needed and how (the resource mix) it was needed. Utilizing on-demand computing capability from an outsourced cloud-services provider started making sense for companies of all sizes.

Is shifting to outsourced cloud-based on-demand computing just about cost savings and converting fixed costs to variable costs? Depending on the compute application and a combination of the right tools and methodologies, the benefits can be a lot more than the obvious cost benefits.

A recently published whitepaper showcases the value to a customer of characterizing libraries in the cloud. The whitepaper was collaboratively authored by Baris Guler and Kenneth Chang of Amazon’s AWS division and Matthieu Fillaud and Wei-Lii Tan of Siemens EDA. Library characterization is the process of generating timing models for library elements that will be used for chip-level or block-level timing simulation and analysis. It is a task that lends itself well to the scalability offered by cloud platforms.

In this blog, I’ll touch on just some of the key aspects of the Siemens-AWS solution for library characterization.

Rapid Deployments with Repeatable Success

Just as a reference design or platform contains essential elements of a system that a user may modify and customize as required, an AWS CloudFormation template is a reference template that specifies the essential elements needed for the cloud service. Siemens has collaborated with Amazon’s AWS division to create a template that is an excellent starting point for library characterization purposes. From this starting point, customers can easily customize the template to the specific need at hand. The AWS CloudFormation service itself leverages the template to create and provision the resources in an orderly and predictable way.
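As a minimal sketch of that mechanism (not the actual Siemens/AWS reference flow), here is how a customized template could be turned into provisioned resources from Python with boto3; the stack name, bucket and template file are hypothetical placeholders.

```python
# Minimal sketch: launch a CloudFormation stack from a (customized) template.
# The bucket, template name and stack name below are made-up placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="char-cluster-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/char-template.yaml",
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM roles
)
# Block until every resource in the template has been provisioned.
cfn.get_waiter("stack_create_complete").wait(StackName="char-cluster-demo")
print("Cluster resources provisioned")
```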

Rapid deployments of characterization runs are enabled by AWS ParallelCluster, an AWS-supported open-source cluster management tool for quickly deploying and managing clusters (resources) in the AWS Cloud. It automatically sets up the required compute resources and shared filesystem.

Data Security

The Siemens-AWS solution includes security measures incorporating a user identification process and traceability for actions taken on the cloud. The solution also executes protocols to ensure data is transported, used and stored securely.

Predictability of Runtimes

A key benefit of moving to cloud-based on-demand computing would be lost if characterization runtimes became unpredictable. The Siemens-AWS collaboration has yielded a quick-to-set-up, easy-to-use solution that delivers predictable runtimes. Referring to Figure 1, users can adjust the resource provisioning in a predictable fashion, depending on how long a runtime their projects can tolerate.

Figure 1: Characterization runtime chart

Source: Siemens EDA

 

Efficient Scalability of CPUs

AWS ParallelCluster allows library characterization users to dynamically deploy and manage compute clusters. This allows invoking virtual machine instances on demand, as well as shutting down and deallocating virtual machine instances after use. This enables users to scale to large numbers of CPUs effectively during characterization runs. Referring to Figure 2, Siemens’ cloud characterization flow can achieve close-to-linear scalability up to 10,000 CPUs on AWS.

Figure 2: CPU Scalability chart

Source: Siemens EDA
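As a rough illustration of why this provisioning trade-off stays predictable, consider an Amdahl-style toy model: characterization is dominated by a large pool of independent per-cell simulation jobs plus a small serial setup cost. All numbers below are invented for illustration; they are not Siemens or AWS data.

```python
# Toy Amdahl-style model of characterization runtime vs. provisioned CPUs.
def runtime_hours(total_cpu_hours, serial_hours, n_cpus):
    """Fixed serial setup plus an embarrassingly parallel pool of jobs."""
    return serial_hours + total_cpu_hours / n_cpus

def cpus_for_deadline(total_cpu_hours, serial_hours, deadline_hours):
    """Invert the model: CPUs needed to hit a target runtime."""
    if deadline_hours <= serial_hours:
        raise ValueError("deadline is below the serial floor")
    return int(total_cpu_hours / (deadline_hours - serial_hours)) + 1

WORK = 50_000   # hypothetical library: 50k CPU-hours of simulation jobs
SETUP = 0.5     # hypothetical serial setup time, in hours
for n in (500, 2_000, 10_000):
    print(f"{n:>6} CPUs -> {runtime_hours(WORK, SETUP, n):7.2f} h")
print("CPUs for a 4 h turnaround:", cpus_for_deadline(WORK, SETUP, 4))
```

With the parallel fraction this high, runtime stays close to inversely proportional to CPU count, which is the near-linear scaling seen in Figure 2.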

 

As summarized in this blog, Siemens and Amazon have collaborated to offer a rapidly deployable, secure, cost-effective and scalable cloud characterization flow to accelerate library characterization with runtime predictability. For a detailed insight into the solution, please refer to the whitepaper and have exploratory discussions with Siemens EDA. You can download the whitepaper “Siemens Cloud Characterization on Amazon Web Services” here.

Also Read:

Smarter Product Lifecycle Management for Semiconductors

Observation Scan Solves ISO 26262 In-System Test Issues

Siemens EDA wants to help you engineer a smarter future faster


Why Would Anyone Perform Non-Standard Language Checks?
by Daniel Nenni on 03-29-2021 at 6:00 am


The other day, I was having one of my regular chats with Cristian Amitroaie, CEO and co-founder of AMIQ EDA. One of our subjects was a topic that we discussed last year, the wide range of languages and formats that chip design and verification engineers use these days. AMIQ EDA has put a lot of effort into adding support for many of these in their integrated development environment, DVT Eclipse IDE. I know that the list includes SystemVerilog, Universal Verification Methodology (UVM), Verilog, Verilog-AMS, VHDL, e, Property Specification Language (PSL), C/C++/SystemC, Unified Power Format (UPF), Common Power Format (CPF), Portable Stimulus Standard (PSS), and probably a few more.

All these languages and formats are standards of one kind or another, most from IEEE and/or Accellera. As we talked about supporting design and verification language standards, and checking code for compliance, Cristian made the intriguing comment that they also have almost 150 non-standard checks. I was rather puzzled by that term, so I asked him to explain. Cristian said that these are checks for language constructs that deviate from the standards but are supported by specific EDA tools and vendors. Why would vendors do this? It turns out that there are two common reasons:

  1. The vendors have older languages with constructs that their users like, so they add similar constructs on top of the standard to keep their users happy
  2. The vendors have ideas for extensions to the standard that they may propose for the next version but, in the meantime, they want their users to benefit

That led me to wonder why users would use non-standard constructs. Cristian mentioned five possible reasons:

  1. The users want to continue to use language constructs that they like from older languages but that are not in the new standard
  2. The users see high value in the non-standard constructs and are willing to deviate from the standard in order to get the benefits
  3. The vendors may not be entirely clear about what constructs in their examples and training are non-standard, so the users may not realize their deviations
  4. The users have already used non-standard constructs in their legacy code, and are reluctant to perturb and re-verify working code
  5. The users rely on a single EDA vendor for most of their tools, so they don’t worry too much about using non-standard constructs supported only by that vendor

I think that the last point is particularly important. One of the values of EDA standards is that users can code once and then work with any vendor, or any mix of vendors, without having to start from scratch. Relying on non-standard constructs can trap users with one vendor and make it expensive to switch to another. Unless they are making a deliberate choice to use these constructs, users want to know when they are deviating from the standard. In fact, it’s a good idea to warn them anyway.

Cristian said that’s exactly where AMIQ EDA comes in. DVT Eclipse IDE tells users when their code contains non-standard language constructs. These are warnings by default; users who want strict compliance to standards can choose to elevate these warnings to errors. These users will have a much easier time switching EDA vendors or adding new tools into their design and verification flow. On the other hand, users who have made a conscious decision to use certain non-standard constructs can disable or waive the related warnings.

Then I asked Cristian how they figure out when other vendors have non-standard support. Much of this information comes indirectly from their customers. Typically, a user runs DVT Eclipse IDE and sees an error for a language construct that is accepted by their simulator (or, occasionally, another tool). AMIQ EDA investigates and, if the construct is actually legal, updates their own tool. If the construct is non-standard as per the relevant Language Reference Manual, they add a check to issue a warning upon use of the construct.

Cristian noted that they have excellent partnerships with other EDA vendors, and have many tools in house so that they can easily cross-check how languages are handled. He stressed that they never reveal to users which tools support which non-standard constructs, since that would potentially be a violation of their partnership agreements. Users don’t require that information anyway; what they need to know is that they are using non-standard language constructs. Then they can decide whether they wish to continue this usage and accept the loss of vendor portability, or to conform strictly to the standard.

Most of the non-standard checks are for SystemVerilog, not surprising given the complexity of the language. These same checks are available in the AMIQ EDA Verissimo SystemVerilog Testbench Linter, useful for users who run lint in batch mode rather than from the IDE. VHDL is also noted for having vendor-specific extensions, and so DVT Eclipse IDE has checks for these deviations from the standard as well.

I found this whole conversation and topic to be quite interesting. The ability of AMIQ EDA’s tools to detect and report language compliance issues is clearly a benefit to users. It enables them to make fully informed decisions on whether to make use of non-standard language constructs specific to one or more vendors.

To learn more, visit https://www.dvteclipse.com. To see the list of AMIQ EDA’s non-standard checks, see https://dvteclipse.com/documentation/sv/Non_Standard_Checks.html.

Also Read

Does IDE Stand for Integrated Design Environment?

Don’t You Forget About “e”

The Polyglot World of Hardware Design and Verification


MRAM Magnetic Immunity – Empirical Study Summary
by Mads Hommelgaard on 03-28-2021 at 10:00 am


The main threat to the wide adoption of MRAM memories continues to be their lack of immunity to magnetic fields. MRAM magnetic immunity (MI) levels have seen significant research over the years, and new data is continuously published by the main MRAM vendors.

This data, however, is rarely compared to the magnetic field exposure scenarios that will occur in consumer applications. This study shows the state of magnetic immunity reported by the most prominent players, with a focus on Spin Transfer Torque MRAM (STT-MRAM). Then two specific exposure scenarios are evaluated, and the results are compared to the reported MI levels from suppliers. Finally, some improvements are proposed.

Embedded STT-MRAM Magnetic Immunity Overview

TSMC and GlobalFoundries have published a set of standby MI levels vs. exposure time and temperature for their most robust macros, and provided an extrapolation to 10-year exposure levels. These levels are plotted again below, adjusted to a 1 ppm bit error rate (BER).

Figure 1: MRAM MI levels from GlobalFoundries and TSMC with 10-year extrapolation

While both companies show the ability to withstand more than 1000 Oe @ RT in standby mode, they also show a significant degradation over temperature. Both have also published active-mode magnetic immunity levels, which are 2-4x lower (250-500 Oe) depending on conditions. Depending on your application, the active mode may be the worst-case threat scenario.

DC Field Exposure from Rare Earth Magnets

Exposure from powerful rare-earth magnets is regarded as the worst-case scenario, as these magnets are now widely used in various product cases and smartphone holders.

As an example of this scenario, we used data for two Neodymium magnets with a surface field strength of 5000 Oe (N52) and 3500 Oe (N48) and plotted the field strength at various distances.

Figure 2: Neodymium magnetic field vs. distance to components

Although the magnetic field decays quickly with distance, components within 2-3 mm of the magnet surface still experience field strengths above what MRAM technology is capable of handling today.
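To see where numbers like these come from, below is a small sketch using the standard closed-form expression for the on-axis field of a cylindrical magnet; the magnet dimensions and remanence are assumed for illustration and may differ from those used in the study.

```python
import math

def axial_field_oe(br_gauss, radius_mm, length_mm, z_mm):
    """On-axis field at distance z from the pole face of a cylindrical
    magnet (in air, B in gauss is numerically H in oersted)."""
    r, l, z = radius_mm, length_mm, z_mm
    return (br_gauss / 2.0) * ((z + l) / math.sqrt((z + l) ** 2 + r ** 2)
                               - z / math.sqrt(z ** 2 + r ** 2))

# Hypothetical N52 disc: Br ~ 14,400 G, 10 mm diameter, 5 mm thick.
for z in (0, 1, 2, 3, 5, 10):
    print(f"{z:>2} mm: {axial_field_oe(14_400, 5, 5, z):6.0f} Oe")
```

With these assumptions the surface field comes out near 5000 Oe, and the field still exceeds 1000 Oe several millimeters away, consistent with the concern above.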

AC Field Exposure from Wireless Charging Pads

Wireless chargers are becoming more powerful, to the point where they could threaten MRAM data integrity during charging.

The Federal Communications Commission (FCC) specifies a maximum permissible exposure (MPE) to magnetic fields generated by such devices, and specifies a compliance limit for devices at 50% of this MPE level. Below are plotted the converted exposure levels for a 15W wireless Qi charger and a 5W wireless Qi charger, as well as the FCC compliance limit, assuming square-law attenuation with distance.

Figure 3: Estimated magnetic field exposure from wireless chargers & FCC compliance limit

It is clear that the concerns for wireless chargers are much lower than was the case for static magnetic fields. Still, for some MRAM offerings, the level of 100-200 Oe at close range may impact memory reliability in active mode.

Conclusion

Judging by the data presented, STT-MRAM memories are not yet able to guarantee reliable performance in these common use cases. When integrating STT-MRAM, these effects must be taken into consideration and discussed with your memory vendor.

To fully mitigate the risk from these scenarios, the MRAM technology needs to improve current standby and active MI levels by 2-4x. MRAM suppliers should be encouraged to report MI levels in a uniform or standardized way and to develop standard reliability flows for quantifying MI levels for their customers.

As there is no good alternative for embedded memory in advanced nodes, the incentive for vendors to create such standardized data and mitigate this risk should continue to grow.

The full study including references to all material used is available at the resource page on MemXcell.com.


Can Our Privacy be Protected in Cars?
by Roger C. Lanctot on 03-28-2021 at 8:00 am


“Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” — Benjamin Franklin

I hope Ben Franklin was not opposed to enhancing driving safety, but he may have looked with a jaundiced eye at the proliferation of in-cabin driver monitoring technology. It’s clear that Consumer Reports does not approve.

Mere months after applauding Comma.ai’s aftermarket driver assistance device for its integration of driver monitoring technology, Consumer Reports has taken issue with Tesla Motors’ acknowledged use of in-cabin video to advance its development of self-driving technology. CR sees the activity as an undisclosed invasion of privacy.

I am no expert on privacy. But listening in on Morrison & Foerster’s webinar on the new Virginia Consumer Data Protection Act and how it compares and contrasts with California’s Consumer Privacy Act and the European Union’s General Data Protection Regulation, it became clear that if these three jurisdictions were unable to agree on a single path to privacy protection, this is not an easily resolved issue.

Virginia’s Consumer Data Protection Act: What Changes Does It Require, and How Does It Compare to CPRA 

The complexity of preserving privacy – which will now be left to attorneys and judges to sort out in the context of these new laws – is unfortunate given the proliferation of cameras in public spaces, on mobile devices, and in and around automobiles. This proliferation raises questions of access and control and, of course, privacy.

Making the matter even more difficult to resolve is the reality that privacy regulations are not confined by borders. A company or an individual based or living in the U.S. that does business in the E.U. – even without traveling there – is subject to GDPR, just as anyone transacting in or traveling through California or Virginia must be mindful of these new regulations. And all of these regulations have already seen revisions and will be forced to respond to legal interpretations.

The fundamentals are the same everywhere. Clear and concise disclosures. Require affirmative consumer opt-in. Data access and transparency. Disclosure of intended uses. Right to erasure. It’s the details that get thorny.

Jon Fasman’s “We See It All” chronicles the increasing role of technology in law enforcement and the many ways privacy is steadily being compromised in the pursuit of enhanced security and public safety. Early in the book he notes the use of facial recognition technology by airlines during boarding, and he advises readers to avoid this technology at all costs – even if doing so makes boarding less convenient.

Fasman’s message, which is conveyed throughout the book, is that if an intrusive, potentially privacy-violating technology can be abused, it will be. No Pollyanna, he goes on to note the range of negative collateral impacts from the use of “shotspotting,” body cameras, and widely dispersed closed-circuit video cameras, as well as the use of artificial intelligence for deploying police forces and in sentencing.

Fasman argues for improvements in the regulation of these technologies including such measures as limiting access to the data gathered by these systems and limiting the period of time allowed for their storage or retention for future use. But the moral of the story appears to be that the battle to preserve privacy must be fought continuously even though it already appears to be lost.

China is, of course, the worst-case scenario, as detailed in Kai Strittmatter’s “We Have Been Harmonized.” The author describes a test in which a city’s ubiquitous CCTV-based surveillance system, equipped with facial recognition technology, was able to locate him and allow local police officers to detain him in a matter of minutes.

Something similar is coming to the cabins of cars. In-cabin sensors are increasingly being used to detect driver drowsiness. But the transition to camera-based systems is being pioneered for solutions such as General Motors’ Super Cruise driver assistance system – which uses camera-based monitoring to ensure driver vigilance when the hands-free driving function is activated.

The European New Car Assessment Program (Euro-NCAP) – Europe’s protocol for granting five-star safety ratings for new cars – will require driver monitoring systems beginning sometime after 2022. Like local privacy policies that have global influence, Euro-NCAP’s requirement will have a global impact.

What remains unclear is how consumers will react. In the past few years, consumers have “discovered” far more passive monitoring systems in their cars – such as Daimler’s in-dash coffee cup icon when one has been driving too long uninterrupted – but inward-facing cameras are something new.

Seeing Machines, which provides in-cabin cameras for General Motors’ Super Cruise and for fleet operators, has been careful to note that its devices do not store video and that they neither transmit video nor are externally hackable. But cameras do represent both a privacy and a security vulnerability.

In its own research, Strategy Analytics has found a wide range of conflicting insights regarding consumer perceptions of privacy. Consumers have expressed concerns about protecting their privacy, but readily surrender that privacy when pressed by a manufacturer or service provider – somewhat more so in the U.S. than in the E.U.

Ironically, a global survey conducted by Strategy Analytics revealed that policies, such as the E.U.’s GDPR, have caused consumers to lower their privacy guard even further. Presumably the institution of the regulation instills a sense of security and safety rather than raising a sense of necessary vigilance.

Not all consumers are so sanguine. An Amazon driver recently created headlines when he quit as a result of the company’s deployment of Netradyne four-camera vehicle monitoring systems. Thomson Reuters quoted the man: “It was both a privacy violation, and a breach of trust, and I was not going to stand for it.”

It may well be that the price of access to semi-autonomous vehicle functions, like GM’s Super Cruise, will be a loss of consumer privacy manifest in cabin-mounted cameras. Car makers will surely promise not to store or transmit sensitive data, but the best consumers may be able to hope for is to have fun sending selfies while driving. That sounds like a reasonable tradeoff, right?

There is a bit of good news from Strategy Analytics research. In a world increasingly bereft of privacy protections in spite of new regulations, car makers stand out in the minds of consumers. According to Strategy Analytics research: “Though consumers have mixed feelings about trusting telecom and tech-centric hardware and software firms with their data, this concern clearly does not extend to automakers.” Time will tell whether auto makers can preserve this perception as they flirt with invasive monitoring technologies.

Consumers and the Data Trust Gaps Between Automakers and Big Tech

Data Privacy: Lack of Knowledge, Resignation, and Unfounded Confidence 

Survey Highlights Privacy Paradox 


SALELE Double Patterning for 7nm and 5nm Nodes
by Fred Chen on 03-28-2021 at 6:00 am


In this article, we will explore the use of self-aligned litho-etch-litho-etch (SALELE) double patterning for BEOL metal layers in the 7nm node (40 nm minimum metal pitch [1]) with DUV, and in the 5nm node (28 nm minimum metal pitch [2]) with EUV. First, there is evidence that this technique is already being used: Xilinx [3] disclosed its use in 7nm BEOL. Second, a minimum metal pitch as small as 28 nm forces restricted illumination (low pupil fill), reducing the transmitted source power by 50% [4]; throughput would be higher with two EUV tools in series for double patterning (each exposing a relaxed 56 nm minimum metal pitch), since the number of wafers per day is tied to one litho tool handing off to the next. More importantly, stochastic defects [5] are a serious issue for single exposure at pitches ~30 nm [6], and conventional pitch splitting, which prints the same-width feature twice at twice the pitch, exacerbates this [5]. Fortunately, SALELE [7] offers a way out, as will be explained below.

To achieve 14 nm features on a 28 nm pitch, for example, SALELE may start with 28 nm features, e.g., trenches, on a 56 nm pitch (Figure 1). This is advantageous over using 14 nm features on a 56 nm pitch or 28 nm pitch, due to the high incidence of EUV stochastic defects for the smaller features.

Figure 1. First patterned trenches (28 nm width on 56 nm pitch).

The trenches can be expanded, e.g., by photoresist trimming [8], to 42 nm width. Then a 14 nm sidewall spacer is deposited and etched back to leave a 14 nm liner on the trench walls, surrounding a 14 nm core feature filled within (Figure 2).

Figure 2. Trenches are expanded to 42 nm width, then sidewall liner of 14 nm formed on inside wall.

Outside and between two adjacent liners, an additional 14 nm trench may be patterned directly (actual width can be close to 28 nm); the liners help keep the latter trench aligned with the previous ones (hence, the self-aligned aspect) (Figure 3).

Figure 3. Additional trench patterned with alignment margin provided by the sidewall liners. The dotted line indicates the margin for printing or placing the feature.
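A quick arithmetic sanity check of the dimensions in Figures 1-3, as a toy bookkeeping sketch using the example numbers from this article:

```python
# Check that the SALELE steps tile a 56 nm first pitch into 14 nm lines/spaces.
PITCH1  = 56   # first-exposure trench pitch (nm)
TRENCH1 = 42   # first trench width after widening from 28 nm (nm)
SPACER  = 14   # sidewall liner thickness (nm)

core = TRENCH1 - 2 * SPACER   # first-stage line left inside the liner
gap  = PITCH1 - TRENCH1       # self-aligned room for the second trench
assert core == gap == SPACER == 14
# Interleaving first-stage cores and second-stage trenches halves the pitch:
final_pitch = PITCH1 // 2
print(f"{core} nm lines and {SPACER} nm spaces on a {final_pitch} nm pitch")
```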

The trenches patterned at the two different stages can be filled with two different materials which etch differently, such as oxide and nitride. This allows those trenches to be cut more safely (Figure 4), since a cutting line can extend over the neighboring trench.

Figure 4. Trenches from the two stages are cut separately.

In total, four masks are used [7], two for the trenches, and two for the separate trench cuts. Self-aligned quadruple patterning (SAQP) using only DUV immersion tools can bring this down to three masks, but requires further process control maturity in addressing pitch walking [9].

While the cuts could be performed with EUV, they would suffer the previously mentioned stochastic defects issue, so DUV is more likely to be used. This would mean two EUV tools and two DUV tools being set up for the SALELE flow, which would be preferable to binding four EUV tools to this flow. For the earlier 7nm process [1], four immersion tools would be allocated. A more conventional self-aligned double patterning (SADP) can also reduce this to three masks and three tools, but 20 nm features still pose a stochastic defect risk for EUV [5,6]. SALELE offers an easy transition from the LELE double patterning flow of the older 14/16/22nm nodes, but requires a substantial increase in lithography tooling.

References

[1] S-Y. Wu et al., “A 7nm CMOS platform technology featuring 4th generation FinFET transistors with a 0.027um2 high density 6-T SRAM cell for mobile SoC applications,” IEDM 2016.

[2] J. C. Liu et al., “A Reliability Enhanced 5nm CMOS Technology Featuring 5th Generation FinFET with Fully-Developed EUV and High Mobility Channel for Mobile SoC and High Performance Computing Application,” IEDM 2020.

[3] Q. Lin et al., “Improvement of SADP CD control in 7nm BEOL application,” Proc. SPIE 11327, 113270X (2020).

[4] D. Rio et al., “Extending 0.33 NA EUVL to 28 nm pitch using alternative mask and controlled aberrations,” Proc. SPIE 11609, 116090T (2021).

[5] P. de Bisschop and E. Hendrickx, “On the dependencies of the stochastic patterning-failure cliffs in EUVL lithography,” Proc. SPIE 11323, 113230J (2020).

[6] J. Church et al., “Fundamental characterization of stochastic variation for improved single-expose extreme ultraviolet patterning at aggressive pitch,” J. Micro/Nanolith. MEMS MOEMS 19, 034001 (2020).

[7] Y. Drissi et al., “SALELE process from theory to fabrication,” Proc. SPIE 10962, 109620V (2019).

[8] L. Jang et al., “SADP for BEOL using chemical slimming with resist mandrel for beyond 22nm nodes,” Proc. SPIE 8325, 83250D (2012).

[9] H. Ren et al., “Advanced process control loop for SAQP pitch walk with combined lithography, deposition and etch actuators,” Proc. SPIE 11325, 1132523 (2020).

This article first appeared in LinkedIn Pulse: SALELE Double Patterning for 7nm and 5nm Nodes

Related Lithography Posts


Podcast EP13: The Three Pillars of Verification with Adnan Hamid
by Daniel Nenni on 03-26-2021 at 10:00 am

Dan goes on a scenic tour of verification with Adnan Hamid, founder and CEO of Breker Verification Systems.  We discuss the rather unusual way Adnan got into semiconductors and SoC verification. Adnan then breaks down the verification task into its fundamental parts to reveal what the three pillars of verification are and why they are so important.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Breker Verification Systems


Foundry Fantasy- Deja Vu or IDM 2?
by Robert Maire on 03-26-2021 at 8:00 am


– Intel announced 2 new fabs & New Foundry Services
– Not only do they want to catch TSMC they want to beat them
– It’s a very, very tall order for a company that hasn’t executed
– It will require more than a makeover to get to IDM 2.0

Intel not only wants to catch TSMC but beat them at their own game

Intel announced that it was going to spend $20B on two new fabs in Arizona and establish Intel Foundry Services as part of re-imagining Intel into “IDM 2.0”. The stated goal would be to provide foundry services to customers much as TSMC does so well today.

This will not be easy. A lot of companies have died on that hill or been wounded. GlobalFoundries famously gave up. Samsung still spends oodles of money trying to keep within some sort of distance of TSMC. UMC, SMIC and many others just don’t hold a candle to TSMC’s capabilities and track record.

This all obviously creates a very strange dynamic where Intel is highly dependent upon TSMC’s production for the next several years but then thinks it can not only wean itself off of TSMC’s warm embrace but produce enough for itself as well as other customers to be a real foundry player.

If Pat Gelsinger can pull this off he deserves a billion dollar bonus

This goes beyond doubling down on Intel’s manufacturing and well into a Hail Mary type of play. This may turn out to be an aspirational type of goal in which everyone would be overjoyed if they just caught back up to TSMC.

Like Yogi Berra said “It’s Deja Vu all over again”- Foundry Services 2.0

Lest anyone conveniently forget, Intel tried this Foundry thing before and failed, badly. It just didn’t work. They were not at all competitive.

It could be that we are just past the point of remembering that it was a mistake and have forgotten long enough to try again.

We would admit that Intel’s prior attempt at being a foundry services provider seemed almost half-hearted at best. We sometimes thought that many long-time Intel insiders snickered at being a foundry, as they somehow thought it beneath them.

Trying to “ride the wave” of chip shortage fever?

It could also be that Intel is trying to take advantage of the huge media buzz about the current chips shortage by playing into that theme, and claiming to have the solution.

We would remind investors that the current chip shortage that has everyone freaked out will be long over, done and fixed and a distant memory before the first brick is even laid for the two new fabs Intel announced today. But it does make for good timing and PR.

Could Intel be looking for a chunk of “Chips for America” money?

Although Intel said on the call that government funding had nothing to do with whether or not they did the project we are certain that Intel will have its hand out and lobby big time to be the leader of Chips for America.

We would remind investors that the prior management of Intel was lobbying the prior White House administration hard to be put in charge of the “Chips for America” while at the exact same time negotiating to send more product (& jobs) to TSMC.

This is also obviously well timed, as is the current shortage. Taken together, the idea of Intel providing foundry services makes some sense, on the surface at least.

Intel needs to start with a completely clean slate with funding

We think it may be best for Intel to start as if it never tried being a foundry before. Don’t keep any of the prior participants as it didn’t work before.
Randhir Thakur has been tasked with running Intel Foundry Services. We would hope that enough resources are aimed at the foundry undertaking to make it successful. It needs to stand alone and apart.

Intel needs different “DNA” in foundry – two different companies in one

The DNA of a foundry provider is completely different from that of an IDM. Both make chips, but the similarity stops there.

The customer and customer mindset is completely different. Even the technology is significantly different, from the design of the chips to the process flows in the fabs to package and test. The design tools are different, the manufacturing tools are different, and so is packaging and test equipment.

While there is a lot of synergy between being a foundry and an IDM, it would be best to run this as two different companies under one corporate roof. It’s going to be very difficult to share: Who gets priority? Whose needs come first? One of the reasons Intel’s foundry previously failed was that the main Intel business seemed to take priority over foundry, and customers will not like that obvious conflict, which has to be managed.

Maybe Intel should hire a bunch of TSMC people

Much as SMIC hired a bunch of TSMC people when it first started out, maybe Intel would be well served to hire some people from TSMC to get a jump start on how to properly become a real foundry. It would be poetic justice of a US company copying an Asian company that made its bones copying US companies in the chip business.

We have heard rumors that TSMC is offering employees double pay to move from Taiwan to Arizona to start up its new fab there. Perhaps Intel should offer triple pay to TSMC employees to move and jump ship. It would be worth their while. Intel desperately needs the help.

Pat Gelsinger is bringing back a lot of old hands from prior years at Intel as well as others in the industry (including a recent hire from AMAT), but Intel needs people experienced in running a foundry and dealing with foundry customers. Intel has to hire a lot of new and experienced people: it not only needs people to catch up its internal capability, which is not easy, it needs more people to become a foundry company, and the skillsets, like the technology, are completely different. This is not going to be cheap or easy.

I don’t get the IBM “Partnership”

IBM hasn’t been a significant, real player in semiconductors in a very, very long time. It may have a bunch of old patents, but it has no significant current process technology that is of true value. It certainly doesn’t build at the current leading edge or anything close, nor does it bring anything to the foundry party.
It’s not like IBM helped GloFo a lot. They brought nothing to the table. GloFo still failed in the Moore’s Law race. In our view, IBM could be a net negative: Intel has to “think different” to be two companies in one; it needs to re-invent itself.

The IBM “partnership” is just more PR “fluff,” like the plug from Microsoft and the quotes from tech leaders that accompanied the press release. It’s nonsense.

Don’t go out and buy semi equipment stocks based on Intel’s announcements

Investors need to stop and think about how long it’s going to be before Intel starts ordering equipment for the two $10B fabs announced. It’s going to be years and years away.

The buildings have to be designed, then built, before equipment can even be ordered. Maybe, if we are lucky, the first shovel goes in the ground at the end of 2021 and equipment starts to roll in in 2023…maybe beginning production at reasonable scale by 2025 if lucky.

Zero impact on current shortage – even though Intel uses the current shortage as an excuse to restart foundry

The announcement has zero, none, nada impact on the current shortage, for two significant reasons:

First, as we have just indicated, it will be years before these fabs come on line, let alone become impactful in terms of capacity. The shortages will be made up for by TSMC, Samsung, SMIC, GloFo and others in the near term. The shortages will be ancient history by the time Intel gets the fabs on line.

Second, as we have previously reported, the vast majority of the shortages are in middle-of-the-road or trailing-edge capacity made in 10-20 year old fabs on old 8-inch equipment. You don’t make 25-cent microcontrollers for anti-lock brakes in bleeding-edge 7nm $10B fabs; the math doesn’t work. So the excuse of getting into the foundry business because of the current shortage just doesn’t fly, even though management pointed to it on the call.

Could Intel get Apple back?

As we have said before, if we were Tim Apple, a supply chain expert, and the entire being of our company was based on Taiwan and China we might be a little nervous. We also might push our BFF TSMC to build a gigafab in the US to secure capacity. The next best thing might be for someone else like Intel or Samsung to build a gigafab foundry in the US that I could use and go back to two foundry suppliers fighting for my business with diverse locations.

The real reason Intel needs to be a foundry is the demise of X86

Intel has rightly figured out that the X86 architecture is on a downward spiral. Everybody wants their own custom ARM, AI, ML, RISC, Tensor, or whatever silicon chip. No one wants to buy off the rack anymore; they all want their own bespoke silicon design to differentiate the Amazons from the Facebooks from the Googles.

Pat has rightly figured out that it’s all about manufacturing, just like it always was at Intel, and something TSMC never stopped believing. Yes, design still matters, but everybody can design their own chip these days and almost no one, except TSMC, can build them all.

Either Intel will have to start printing money or profits will suffer near term

We have been saying that Intel is going to be in a tight financial squeeze as they were going to have reduced gross margins by increasing outsourcing to TSMC while at the same time re-building their manufacturing, essentially having a period of almost double costs (or at least very elevated costs).

The problem just got even worse, as Intel is now stuck with “triple spending”: spending (or gross margin loss) on TSMC, re-building its own fabs, and now a third cost of building additional foundry capacity for outside customers.
We don’t see how Intel avoids a financial hit.

It’s not even certain that Intel can spend enough to catch up, let alone build foundry capacity, even if it has the cash

We would point out that TSMC has the ASML EUV scanner market virtually tied up for itself. They have more EUV scanners than the rest of the world put together.

Intel has been a distant third after Samsung in EUV efforts. If Intel wants to get cranking on 7nm and 5nm and beyond, it has a lot of EUV capacity to buy. It can’t multi-pattern its way out of it. Add on top of that a lot of EUV buying to become a foundry player, as the PDKs for foundry processes rely a lot less on the tricks that Intel can pull in its own in-house design and process to avoid EUV. TSMC and foundry flows are a lot more EUV friendly.

As we have previously pointed out, the supply of EUV scanners can’t be turned on like a light switch. They are like a 15-year-old single malt; it takes a very long time to ramp up capacity, especially for the lenses, which are a critical component.
I don’t know if Intel has done the math or called its friends at ASML to see if enough tools are available. ASML will likely start building now to be ready to handle Intel’s needs a few years from now, if Intel is serious.

Being a foundry is even harder now

Intel was asked on the call “what’s different this time” in terms of why foundry will work now when it didn’t years ago and their answer was that foundry is a lot different now.

We would certainly agree, and suggest that being a leading-edge foundry is much more difficult now. It’s far beyond just spending money and understanding technology. It’s mindset and process. It’s not making mistakes. To underscore both TSMC and Pat Gelsinger, it’s “execution, execution & execution.” We couldn’t agree more. Pat certainly “gets it”; the question is, can he execute?

The tough road just became a lot tougher

Intel had a pretty tough road in front of it to catch the TSMC juggernaut. The road just got a lot more difficult – to both catch them and beat them at their own game, that’s twice as hard.

However, we think that Pat Gelsinger has the right idea. Intel can’t just go back to being the technology leader it was 10 or 20 years ago; it has to re-invent itself as a foundry because that is what the market wants today (Apple told them so).

It’s not just fixing the technology, it’s fixing the business model as well, to match the new market reality.

It’s going to be very, very tough and challenging but we think that Intel is up for it. They have the strategy right and that is a great and important start.

All they have to do is execute….

Related:

Intel Will Again Compete With TSMC by Daniel Nenni 

Intel’s IDM 2.0 by Scotten Jones 

Intel Takes Another Shot at the Enticing Foundry Market by Terry Daly