
Library Characterization: A Siemens Cloud Solution using AWS

by Kalar Rajendiran on 03-29-2021 at 10:00 am


Pressing demands on compute speed, storage capacity, and rapid access to data are not new to the semiconductor industry, and the desire for on-demand computing resources has always been there. In the pre-cloud-computing era, companies provisioned on-demand compute capacity by procuring high-performance computing equipment sized for peak demand, which left that equipment under-utilized during typical demand periods. Interestingly, larger companies, despite owning large pools of high-performance computing assets, sometimes also experienced the opposite problem: a lack of the right kind of compute resources during extreme peaks, when multiple large projects were running concurrently.

The availability of outsourced cloud compute and storage services changed all this. The risks and costs of procuring the latest and greatest equipment were shifted to cloud services companies. Customers were able to convert large upfront fixed costs (capital expenditures) into usage-based variable costs, accessing what was needed, when it was needed, and in the resource mix it was needed. Utilizing on-demand computing from an outsourced cloud-services provider started making sense for companies of all sizes.

Is shifting to outsourced cloud-based on-demand computing just about cost savings and converting fixed costs to variable costs? Depending on the compute application and the right combination of tools and methodologies, the benefits can go well beyond the obvious cost savings.

A recently published whitepaper showcases the value of characterizing libraries in the cloud. The whitepaper was collaboratively authored by Baris Guler and Kenneth Chang of Amazon’s AWS Division and Matthieu Fillaud and Wei-Lii Tan of Siemens EDA. Library characterization is the process of generating timing models for library elements that will be used for chip-level or block-level timing simulation and analysis. It is a task that lends itself well to the scalability offered by cloud platforms.

In this blog, I’ll touch on just some of the key aspects of the Siemens-AWS solution for library characterization.

Rapid Deployments with Repeatable Success

Just as a reference design or platform contains the essential elements of a system that a user may modify as required, an AWS CloudFormation template specifies the essential elements needed for a cloud service. Siemens has collaborated with Amazon’s AWS division to create a template that is an excellent starting point for library characterization. From this starting point, customers can easily customize the template to their specific needs. The AWS CloudFormation service then uses the template to create and provision the resources in an orderly and predictable way.

Rapid deployment of characterization runs is enabled by AWS ParallelCluster, an AWS-supported open-source cluster management tool for quickly deploying and managing clusters of resources in the AWS Cloud. It automatically sets up the required compute resources and shared filesystem.
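For concreteness, here is a minimal sketch of what such a cluster definition might look like, expressed as a Python dict mirroring the ParallelCluster v3 YAML schema. The instance types, counts, queue name, and subnet IDs are illustrative assumptions, not values taken from the Siemens-AWS whitepaper:

```python
# Hypothetical ParallelCluster-style configuration for a
# characterization farm; field names follow the AWS
# ParallelCluster v3 schema, values are placeholders.
cluster_config = {
    "Region": "us-east-1",
    "Image": {"Os": "alinux2"},
    "HeadNode": {
        "InstanceType": "c5.xlarge",
        "Networking": {"SubnetId": "subnet-PLACEHOLDER"},
    },
    "Scheduling": {
        "Scheduler": "slurm",
        "SlurmQueues": [{
            "Name": "charlib",
            "ComputeResources": [{
                "Name": "compute",
                "InstanceType": "c5.24xlarge",  # 96 vCPUs each
                "MinCount": 0,    # scale down to zero when idle
                "MaxCount": 104,  # ~10,000 vCPUs at peak
            }],
            "Networking": {"SubnetIds": ["subnet-PLACEHOLDER"]},
        }],
    },
}
```

Serialized to YAML, a definition like this becomes the input to the `pcluster` CLI, which then provisions the head node, compute queues, and shared filesystem automatically.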

Data Security

The Siemens-AWS solution includes security measures incorporating a user identification process and traceability for actions taken in the cloud. The solution also executes protocols to ensure data is transported, used, and stored securely.

Predictability of Runtimes

A key benefit of moving to cloud-based on-demand computing would be lost if characterization runtimes became unpredictable. The Siemens-AWS collaboration has yielded a quick-to-set-up, easy-to-use solution with predictable runtimes. Referring to Figure 1, users can adjust resource provisioning in a predictable fashion, depending on how long a runtime their projects can tolerate.
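With predictable, near-linear scaling, resource provisioning reduces to simple arithmetic. A minimal sketch (the function and its efficiency factor are illustrative assumptions, not part of the Siemens flow):

```python
import math

def cpus_needed(total_cpu_hours, target_hours, efficiency=0.9):
    """Estimate how many CPUs to provision to finish a job of
    total_cpu_hours within target_hours of wall-clock time,
    assuming near-linear scaling at the given parallel efficiency."""
    return math.ceil(total_cpu_hours / (target_hours * efficiency))

# A characterization job worth 9,000 CPU-hours, due overnight (10 h):
print(cpus_needed(9000, 10))  # -> 1000
```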

Figure 1: Characterization runtime chart

Source: Siemens EDA

 

Efficient Scalability of CPUs

AWS ParallelCluster allows library characterization users to dynamically deploy and manage compute clusters, invoking virtual machine instances on demand and shutting down and deallocating them after use. This enables users to scale effectively to large numbers of CPUs during characterization runs. Referring to Figure 2, Siemens’ cloud characterization flow can achieve close-to-linear scalability up to 10,000 CPUs on AWS.
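Close-to-linear scaling at 10,000 CPUs implies the flow has almost no serial bottleneck, which an Amdahl's-law model makes concrete (the parallel fractions below are illustrative, not measured values from the whitepaper):

```python
def speedup(n_cpus, parallel_fraction):
    """Amdahl's-law speedup: the serial fraction (1 - p) caps
    scalability no matter how many CPUs are added."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

# 99.999% parallel work keeps 10,000 CPUs about 91% efficient;
# at 99.9% parallel, efficiency collapses to roughly 9%.
for p in (0.99999, 0.999):
    print(f"p={p}: {speedup(10_000, p):,.0f}x")
```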

Figure 2: CPU Scalability chart

Source: Siemens EDA

 

As summarized in this blog, Siemens and Amazon have collaborated to offer a rapidly deployable, secure, cost-effective, and scalable cloud characterization flow that accelerates library characterization with runtime predictability. For detailed insight into the solution, refer to the whitepaper and have exploratory discussions with Siemens EDA. You can download the whitepaper “Siemens Cloud Characterization on Amazon Web Services” here.

Also Read:

Smarter Product Lifecycle Management for Semiconductors

Observation Scan Solves ISO 26262 In-System Test Issues

Siemens EDA wants to help you engineer a smarter future faster


Why Would Anyone Perform Non-Standard Language Checks?

by Daniel Nenni on 03-29-2021 at 6:00 am


The other day, I was having one of my regular chats with Cristian Amitroaie, CEO and co-founder of AMIQ EDA. One of our subjects was a topic we discussed last year: the wide range of languages and formats that chip design and verification engineers use these days. AMIQ EDA has put a lot of effort into adding support for many of these in their integrated development environment, DVT Eclipse IDE. I know that the list includes SystemVerilog, Universal Verification Methodology (UVM), Verilog, Verilog-AMS, VHDL, e, Property Specification Language (PSL), C/C++/SystemC, Unified Power Format (UPF), Common Power Format (CPF), Portable Stimulus Standard (PSS), and probably a few more.

All these languages and formats are standards of one kind or another, most from IEEE and/or Accellera. As we talked about supporting design and verification language standards, and checking code for compliance, Cristian made the intriguing comment that they also have almost 150 non-standard checks. I was rather puzzled by that term, so I asked him to explain. Cristian said that these are checks for language constructs that deviate from the standards but are supported by specific EDA tools and vendors. Why would vendors do this? It turns out that there are two common reasons:

  1. The vendors have older languages with constructs that their users like, so they add similar constructs on top of the standard to keep their users happy
  2. The vendors have ideas for extensions to the standard that they may propose for the next version but, in the meantime, they want their users to benefit

That led me to wonder why users would use non-standard constructs. Cristian mentioned five possible reasons:

  1. The users want to continue to use language constructs that they like from older languages but that are not in the new standard
  2. The users see high value in the non-standard constructs and are willing to deviate from the standard in order to get the benefits
  3. The vendors may not make entirely clear which constructs in their examples and training are non-standard, so the users may not realize they are deviating
  4. The users have already used non-standard constructs in their legacy code, and are reluctant to perturb and re-verify working code
  5. The users rely on a single EDA vendor for most of their tools, so they don’t worry too much about using non-standard constructs supported only by that vendor

I think that the last point is particularly important. One of the values of EDA standards is that users can code once and then work with any vendor, or any mix of vendors, without having to start from scratch. Relying on non-standard constructs can trap users with one vendor and make it expensive to switch to another. Unless they are making a deliberate choice to use these constructs, users want to know when they are deviating from the standard. In fact, it’s a good idea to warn them anyway.

Cristian said that’s exactly where AMIQ EDA comes in. DVT Eclipse IDE tells users when their code contains non-standard language constructs. These are warnings by default; users who want strict compliance to standards can choose to elevate these warnings to errors. These users will have a much easier time switching EDA vendors or adding new tools into their design and verification flow. On the other hand, users who have made a conscious decision to use certain non-standard constructs can disable or waive the related warnings.
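To make the idea concrete, here is a toy version of such a check in Python (this is not AMIQ EDA's implementation; `$psprintf` is a well-known vendor system function, while IEEE 1800 SystemVerilog defines `$sformatf`):

```python
import re

# Map of non-standard construct patterns to explanatory messages.
NON_STANDARD = {
    r"\$psprintf\b": "non-standard $psprintf; IEEE 1800 defines $sformatf",
}

def check(source, strict=False):
    """Scan SystemVerilog source text for non-standard constructs.
    Returns (severity, message, line) tuples; strict=True elevates
    warnings to errors, mirroring the IDE option described above."""
    severity = "error" if strict else "warning"
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in NON_STANDARD.items():
            if re.search(pattern, line):
                findings.append((severity, message, lineno))
    return findings

print(check('msg = $psprintf("count=%0d", n);'))
```

A real linter parses the language rather than matching patterns, but the warn-by-default, elevate-or-waive policy is the same.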

Then I asked Cristian how they figure out when other vendors have non-standard support. Much of this information comes indirectly from their customers. Typically, a user runs DVT Eclipse IDE and sees an error for a language construct that is accepted by their simulator (or, occasionally, another tool). AMIQ EDA investigates and, if the construct is actually legal, updates their own tool. If the construct is non-standard as per the relevant Language Reference Manual, they add a check to issue a warning upon use of the construct.

Cristian noted that they have excellent partnerships with other EDA vendors, and have many tools in house so that they can easily cross-check how languages are handled. He stressed that they never reveal to users which tools support which non-standard constructs, since that would potentially be a violation of their partnership agreements. Users don’t require that information anyway; what they need to know is that they are using non-standard language constructs. Then they can decide whether they wish to continue this usage and accept the loss of vendor portability, or to conform strictly to the standard.

Most of the non-standard checks are for SystemVerilog, not surprising given the complexity of the language. These same checks are available in the AMIQ EDA Verissimo SystemVerilog Testbench Linter, useful for users who run lint in batch mode rather than from the IDE. VHDL is also noted for having vendor-specific extensions, and so DVT Eclipse IDE has checks for these deviations from the standard as well.

I found this whole conversation and topic to be quite interesting. The ability of AMIQ EDA’s tools to detect and report language compliance issues is clearly a benefit to users. It enables them to make fully informed decisions on whether to make use of non-standard language constructs specific to one or more vendors.

To learn more, visit https://www.dvteclipse.com. To see the list of AMIQ EDA’s non-standard checks, see https://dvteclipse.com/documentation/sv/Non_Standard_Checks.html.

Also Read

Does IDE Stand for Integrated Design Environment?

Don’t You Forget About “e”

The Polyglot World of Hardware Design and Verification


MRAM Magnetic Immunity – Empirical Study Summary

by Mads Hommelgaard on 03-28-2021 at 10:00 am


The main threat to wide adoption of MRAM memories continues to be their lack of immunity to magnetic fields. MRAM magnetic immunity (MI) levels have seen significant research over the years, and new data is continuously published by the main MRAM vendors.

This data, however, is rarely compared to the magnetic field exposure scenarios that occur in consumer applications. This study first surveys the state of magnetic immunity reported by the most prominent players, with a focus on Spin Transfer Torque MRAM (STT-MRAM). Two specific exposure scenarios are then evaluated, and the results are compared to the MI levels reported by suppliers. Finally, some improvements are proposed.

Embedded STT-MRAM Magnetic Immunity Overview

TSMC and GlobalFoundries have published standby MI levels versus exposure time and temperature for their most robust macros, along with an extrapolation to 10-year exposure levels. Below, these levels are re-plotted, adjusted to a 1 ppm bit error rate (BER).

Figure 1: MRAM MI levels from GlobalFoundries and TSMC with 10-year extrapolation

While both companies show the ability to withstand more than 1000 Oe at room temperature in standby mode, they also show significant degradation over temperature. Both have also published active magnetic immunity levels, which are 2-4x lower (250-500 Oe) depending on conditions. Depending on your application, active mode may be the worst-case threat scenario.

DC Field Exposure from Rare Earth Magnets

Exposure from powerful rare-earth magnets is regarded as the worst-case scenario, as these magnets are now widely used in product cases and smartphone holders.

As an example of this scenario, we used data for two neodymium magnets with surface field strengths of 5000 Oe (N52) and 3500 Oe (N48) and plotted the field strength at various distances.

Figure 2: Neodymium magnetic field vs. distance to components

Although the magnetic field falls off quickly, components within 2-3 mm of the magnet surface still experience field strengths above what MRAM technology can handle today.
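This falloff can be reproduced with the standard analytic expression for the on-axis field of a cylindrical magnet. The magnet grade and dimensions below are assumed for illustration and are not taken from the study (1 gauss is roughly 1 oersted in air):

```python
import math

def axial_field(br_gauss, radius_mm, length_mm, z_mm):
    """On-axis field (in Oe, since 1 G ~ 1 Oe in air) at distance
    z from the face of a cylindrical magnet with remanence Br."""
    term = lambda x: x / math.sqrt(radius_mm**2 + x**2)
    return 0.5 * br_gauss * (term(z_mm + length_mm) - term(z_mm))

# Assumed N52 disc magnet: Br ~ 14,500 G, 5 mm radius, 3 mm thick.
for z_mm in (0, 1, 2, 3, 5):
    print(f"{z_mm} mm: {axial_field(14500, 5, 3, z_mm):.0f} Oe")
```

Even with the field dropping steeply, the first couple of millimeters remain well above the reported active-mode MI levels.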

AC Field Exposure from Wireless Charging Pads

Wireless chargers are becoming powerful enough that they could threaten MRAM data integrity during charging.

The Federal Communications Commission (FCC) specifies a maximum permissible exposure (MPE) to magnetic fields generated by such devices, and sets a compliance limit for devices at 50% of this MPE level. Below, the converted exposure levels are plotted for a 15 W and a 5 W wireless Qi charger, together with the FCC compliance limit, assuming square-law attenuation of the field with distance.

Figure 3: Estimated magnetic field exposure from wireless chargers & FCC compliance limit

It is clear that the concerns regarding wireless chargers are much lower than for static magnetic fields. Still, for some MRAM offerings, levels of 100-200 Oe at close range may impact memory reliability in active mode.
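The square-law attenuation assumed for Figure 3 is a one-liner; the reference field level below is an illustrative assumption rather than a measured value:

```python
def charger_field(field_at_ref_oe, ref_mm, distance_mm):
    """AC field from a charging coil, scaled by square-law
    attenuation with distance from an assumed reference point."""
    return field_at_ref_oe * (ref_mm / distance_mm) ** 2

# If a 15 W pad produced ~150 Oe at 5 mm, doubling the distance
# would quarter the field:
print(charger_field(150, 5, 10))  # -> 37.5
```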

Conclusion

Judging by the data presented, STT-MRAM memories cannot yet guarantee reliable performance in these common use cases. When integrating STT-MRAM, these effects must be taken into consideration and discussed with your memory vendor.

To fully mitigate the risk from these scenarios, the MRAM technology needs to improve current standby and active MI levels by 2-4x. MRAM suppliers should be encouraged to report MI levels in a uniform or standardized way and to develop standard reliability flows for quantifying MI levels for their customers.

As there are no good alternatives for embedded memory in advanced nodes, the incentive for vendors to create such standardized data and mitigate this risk should continue to grow.

The full study including references to all material used is available at the resource page on MemXcell.com.


Can Our Privacy be Protected in Cars?

by Roger C. Lanctot on 03-28-2021 at 8:00 am


“Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” — Benjamin Franklin

I hope Ben Franklin was not opposed to enhancing driving safety, but he may have looked with a jaundiced eye at the proliferation of in-cabin driver monitoring technology. It’s clear that Consumer Reports does not approve.

Mere months after applauding Comma.ai’s aftermarket driver assistance device for its integration of driver monitoring technology, Consumer Reports has taken issue with Tesla Motors’ acknowledged use of in-cabin video to advance its development of self-driving technology. CR sees the activity as an undisclosed invasion of privacy.

I am no expert on privacy. But listening in on Morrison & Foerster’s webinar on the new Virginia Consumer Data Protection Act, and how it compares and contrasts with California’s Consumer Privacy Act and the European Union’s General Data Protection Regulation, made one thing clear: if these three jurisdictions were unable to agree on a single path to privacy protection, this is not an easily resolved issue.

Virginia’s Consumer Data Protection Act: What Changes Does It Require, and How Does It Compare to CPRA 

The complexity of preserving privacy – which will now be left to attorneys and judges to sort out in the context of these new laws – is unfortunate given the proliferation of cameras in public spaces, on mobile devices, and in and around automobiles. This proliferation raises questions of access and control and, of course, privacy.

Making the matter even more difficult to resolve is the reality that privacy regulations are not confined by borders. A company or an individual based or living in the U.S. that does business in the E.U. – even without traveling there – is subject to GDPR, just as anyone transacting in or traveling through California or Virginia must be mindful of these new regulations. And all of these regulations have already seen revisions and will be forced to respond to legal interpretations.

The fundamentals are the same everywhere: clear and concise disclosures, affirmative consumer opt-in, data access and transparency, disclosure of intended uses, and the right to erasure. It’s the details that get thorny.

Jon Fasman’s “We See It All” chronicles the increasing role of technology in law enforcement and the many ways privacy is steadily being compromised in the pursuit of enhanced security and public safety. Early in the book he notes the use of facial recognition technology by airlines during boarding, and he advises readers to avoid this technology at all costs – even if doing so makes boarding less convenient.

Fasman’s message, conveyed throughout the book, is that if an intrusive, potentially privacy-violating technology can be abused, it will be. No Pollyanna, he goes on to note the range of negative collateral impacts from the use of “shotspotting,” body cameras, and widely dispersed closed-circuit video cameras, as well as the use of artificial intelligence for deploying police forces and in sentencing.

Fasman argues for improvements in the regulation of these technologies including such measures as limiting access to the data gathered by these systems and limiting the period of time allowed for their storage or retention for future use. But the moral of the story appears to be that the battle to preserve privacy must be fought continuously even though it already appears to be lost.

China is, of course, the worst-case scenario, as detailed in Kai Strittmatter’s “We Have Been Harmonized.” The author describes a test in which the local police’s ubiquitous city-wide CCTV-based surveillance system, equipped with facial recognition technology, was able to locate him and allow officers to detain him in a matter of minutes.

Something similar is coming to the cabins of cars. In-cabin sensors are increasingly being used to detect driver drowsiness, but the transition to camera-based systems is being pioneered by solutions such as General Motors’ Super Cruise driver assistance system, which uses camera-based monitoring to ensure driver vigilance when the hands-free driving function is activated.

The European New Car Assessment Program (Euro-NCAP) – Europe’s protocol for granting five-star safety ratings for new cars – will require driver monitoring systems beginning sometime after 2022. Like local privacy policies that have global influence, Euro-NCAP’s requirement will have a global impact.

What remains unclear is how consumers will react. In the past few years, consumers have “discovered” far more passive monitoring systems in their cars – such as Daimler’s in-dash coffee-cup icon when one has been driving too long uninterrupted – but inward-facing cameras are something new.

Seeing Machines, which provides in-cabin cameras for General Motors’ Super Cruise and for fleet operators, has been careful to note that its devices do not store video and that they neither transmit video nor are externally hackable. But cameras do represent both a privacy and a security vulnerability.

In its own research, Strategy Analytics has found a wide range of conflicting insights regarding consumer perceptions of privacy. Consumers have expressed concerns about protecting their privacy, but readily surrender that privacy when pressed by a manufacturer or service provider – somewhat more so in the U.S. than in the E.U.

Ironically, a global survey conducted by Strategy Analytics revealed that policies, such as the E.U.’s GDPR, have caused consumers to lower their privacy guard even further. Presumably the institution of the regulation instills a sense of security and safety rather than raising a sense of necessary vigilance.

Not all consumers are so sanguine. An Amazon driver recently created headlines when he quit as a result of the company’s deployment of Netradyne four-camera vehicle monitoring systems. Thomson Reuters quoted the man: “It was both a privacy violation, and a breach of trust, and I was not going to stand for it.”

It may well be that the price of access to semi-autonomous vehicle functions, like GM’s Super Cruise, will be a loss of consumer privacy manifest in cabin-mounted cameras. Car makers will surely promise not to store or transmit sensitive data, but the best consumers may be able to hope for is to have fun sending selfies while driving. That sounds like a reasonable tradeoff, right?

There is a bit of good news from Strategy Analytics research. In a world increasingly bereft of privacy protections in spite of new regulations, car makers stand out in the minds of consumers. According to Strategy Analytics research: “Though consumers have mixed feelings about trusting telecom and tech-centric hardware and software firms with their data, this concern clearly does not extend to automakers.” Time will tell whether auto makers can preserve this perception as they flirt with invasive monitoring technologies.

Consumers and the Data Trust Gaps Between Automakers and Big Tech

Data Privacy: Lack of Knowledge, Resignation, and Unfounded Confidence 

Survey Highlights Privacy Paradox 


SALELE Double Patterning for 7nm and 5nm Nodes

by Fred Chen on 03-28-2021 at 6:00 am


In this article, we will explore the use of self-aligned litho-etch-litho-etch (SALELE) double patterning for BEOL metal layers at the 7nm node (40 nm minimum metal pitch [1]) with DUV, and at the 5nm node (28 nm minimum metal pitch [2]) with EUV. First, we note the evidence that this technique is already being used: Xilinx [3] disclosed its use in 7nm BEOL. Second, a minimum metal pitch as small as 28 nm requires restricted illumination (low pupil fill), reducing the transmitted source power by 50% [4]. Throughput would be faster with two EUV tools in series for double patterning (each exposure at a 56 nm minimum metal pitch), since the number of wafers per day is tied to one litho tool handing off to the next. More seriously, stochastic defects [5] are a severe issue for single exposure at pitches of ~30 nm [6], and pitch splitting by printing the same small feature at twice the pitch exacerbates this [5]. Fortunately, SALELE [7] offers a way out, as explained below.

To achieve 14 nm features on a 28 nm pitch, for example, SALELE may start with 28 nm features, e.g., trenches, on a 56 nm pitch (Figure 1). This is advantageous over using 14 nm features on a 56 nm or 28 nm pitch, due to the high incidence of EUV stochastic defects for such small features.

Figure 1. First patterned trenches (28 nm width on 56 nm pitch).

The trenches can be expanded, e.g., by photoresist trimming [8], to 42 nm width. Then a 14 nm sidewall spacer is deposited and etched back, leaving a 14 nm sidewall liner surrounding a 14 nm core feature filled within (Figure 2).

Figure 2. Trenches are expanded to 42 nm width, then sidewall liner of 14 nm formed on inside wall.

Outside and between two adjacent liners, an additional 14 nm trench may be patterned directly (its actual printed width can be close to 28 nm); the liners keep this trench aligned with the previous ones, hence the “self-aligned” aspect (Figure 3).

Figure 3. Additional trench patterned with alignment margin provided by the sidewall liners. The dotted line indicates the margin for printing or placing the feature.

The trenches patterned at the two different stages can be filled with two different materials which etch differently, such as oxide and nitride. This allows those trenches to be cut more safely (Figure 4), since a cutting line can extend over the neighboring trench.

Figure 4. Trenches from the two stages are cut separately.

In total, four masks are used [7]: two for the trenches, and two for the separate trench cuts. Self-aligned quadruple patterning (SAQP) using only DUV immersion tools can bring this down to three masks, but requires further process-control maturity to address pitch walking [9].
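The dimensions quoted in the steps above all follow from the first-pass pitch; here is a back-of-envelope sketch of that geometry (a schematic consistency check of the numbers in the figures, not a process recipe):

```python
def salele_dimensions(pitch_nm):
    """Derived SALELE dimensions for a final CD of pitch/4
    (the article's 14 nm lines from a 56 nm first-pass pitch)."""
    cd = pitch_nm // 4             # target line width (14 nm)
    first_trench = pitch_nm // 2   # printed trench (28 nm)
    expanded = pitch_nm - cd       # after trimming (42 nm)
    core = expanded - 2 * cd       # core inside 14 nm liners (14 nm)
    gap = pitch_nm - expanded      # space between trenches (14 nm)
    margin = gap + 2 * cd          # placement margin for 2nd trench (42 nm)
    return dict(cd=cd, first_trench=first_trench, expanded=expanded,
                core=core, gap=gap, second_margin=margin)

print(salele_dimensions(56))
```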

While the cuts could be performed with EUV, they would suffer from the previously mentioned stochastic defects, so DUV is more likely to be used. This would mean two EUV tools and two DUV tools being set up for the SALELE flow, which is preferable to binding four EUV tools to it. For the earlier 7nm process [1], four immersion tools would be allocated. A more conventional self-aligned double patterning (SADP) flow can also reduce this to three masks and three tools, but 20 nm features still pose a stochastic defect risk for EUV [5,6]. SALELE offers an easy transition from the LELE double patterning flow of the older 14/16/22nm nodes, but requires a substantial increase in lithography tooling.

References

[1] S-Y. Wu et al., “A 7nm CMOS platform technology featuring 4th generation FinFET transistors with a 0.027um2 high density 6-T SRAM cell for mobile SoC applications,” IEDM 2016.

[2] J. C. Liu et al., “A Reliability Enhanced 5nm CMOS Technology Featuring 5th Generation FinFET with Fully-Developed EUV and High Mobility Channel for Mobile SoC and High Performance Computing Application,” IEDM 2020.

[3] Q. Lin et al., “Improvement of SADP CD control in 7nm BEOL application,” Proc. SPIE 11327, 113270X (2020).

[4] D. Rio et al., “Extending 0.33 NA EUVL to 28 nm pitch using alternative mask and controlled aberrations,” Proc. SPIE 11609, 116090T (2021).

[5] P. de Bisschop and E. Hendrickx, “On the dependencies of the stochastic patterning-failure cliffs in EUVL lithography,” Proc. SPIE 11323, 113230J (2020).

[6] J. Church et al., “Fundamental characterization of stochastic variation for improved single-expose extreme ultraviolet patterning at aggressive pitch,” J. Micro/Nanolith. MEMS MOEMS 19, 034001 (2020).

[7] Y. Drissi et al., “SALELE process from theory to fabrication,” Proc. SPIE 10962, 109620V (2019).

[8] L. Jang et al., “SADP for BEOL using chemical slimming with resist mandrel for beyond 22nm nodes,” Proc. SPIE 8325, 83250D (2012).

[9] H. Ren et al., “Advanced process control loop for SAQP pitch walk with combined lithography, deposition and etch actuators,” Proc. SPIE 11325, 1132523 (2020).

This article first appeared in LinkedIn Pulse: SALELE Double Patterning for 7nm and 5nm Nodes

Related Lithography Posts


Podcast EP13: The Three Pillars of Verification with Adnan Hamid

by Daniel Nenni on 03-26-2021 at 10:00 am

Dan goes on a scenic tour of verification with Adnan Hamid, founder and CEO of Breker Verification Systems.  We discuss the rather unusual way Adnan got into semiconductors and SoC verification. Adnan then breaks down the verification task into its fundamental parts to reveal what the three pillars of verification are and why they are so important.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Breker Verification Systems


Foundry Fantasy- Deja Vu or IDM 2?

by Robert Maire on 03-26-2021 at 8:00 am


– Intel announced 2 new fabs & New Foundry Services
– Not only do they want to catch TSMC, they want to beat them
– It’s a very, very tall order for a company that hasn’t executed
– It will require more than a makeover to get to IDM 2.0

Intel not only wants to catch TSMC but beat them at their own game

Intel announced that it was going to spend $20B on two new fabs in Arizona and establish Intel Foundry Services as part of re-imagining Intel into “IDM 2.0”. The stated goal would be to provide foundry services to customers much as TSMC does so well today.

This will not be easy. A lot of companies have died on that hill or been wounded. GlobalFoundries famously gave up. Samsung still spends oodles of money trying to stay within some sort of distance of TSMC. UMC, SMIC, and many others just don’t hold a candle to TSMC’s capabilities and track record.

This all obviously creates a very strange dynamic in which Intel is highly dependent on TSMC’s production for the next several years, yet thinks it can not only wean itself off TSMC’s warm embrace but also produce enough for itself, as well as for other customers, to be a real foundry player.

If Pat Gelsinger can pull this off he deserves a billion dollar bonus

This goes beyond doubling down on Intel’s manufacturing and well into Hail Mary territory. It may turn out to be an aspirational goal, where everyone would be overjoyed if Intel just caught back up to TSMC.

Like Yogi Berra said “It’s Deja Vu all over again”- Foundry Services 2.0

Lest anyone conveniently forget, Intel tried this Foundry thing before and failed, badly. It just didn’t work. They were not at all competitive.

It could be that we are just past the point of remembering that it was a mistake and have forgotten long enough to try again.

We would admit that Intel’s prior attempt at being a foundry services provider seemed half-hearted at best. We sometimes thought that many long-time Intel insiders snickered at being a foundry, as they somehow thought it beneath them.

Trying to “ride the wave” of chip shortage fever?

It could also be that Intel is trying to take advantage of the huge media buzz about the current chip shortage by playing into that theme and claiming to have the solution.

We would remind investors that the current chip shortage that has everyone freaked out will be long over, fixed, and a distant memory before the first brick is even laid for the two new fabs Intel announced today. But it does make for good timing and PR.

Could Intel be looking for a chunk of “Chips for America” money?

Although Intel said on the call that government funding had nothing to do with whether or not it did the project, we are certain that Intel will have its hand out and lobby big time to be the leader of “Chips for America.”

We would remind investors that the prior management of Intel was lobbying the prior White House administration hard to be put in charge of the “Chips for America” while at the exact same time negotiating to send more product (& jobs) to TSMC.

This announcement, like the current shortage itself, is obviously well timed. Taken together, the idea of Intel providing foundry services makes some sense, on the surface at least.

Intel needs to start with a completely clean slate with funding

We think it may be best for Intel to start as if it had never tried being a foundry before, and not to keep any of the prior participants, since that approach didn’t work.

Randhir Thakur has been tasked with running Intel Foundry Services. We would hope that enough resources are aimed at the foundry undertaking to make it successful. It needs to stand alone and apart.

Intel needs different “DNA” in foundry- two different companies in one

The DNA of a foundry provider is completely different from that of an IDM. They both make chips, but the similarity stops there.

The customer and customer mindset is completely different. Even the technology is significantly different from the design of the chips, to the process flows in the fabs to package and test. The design tools are different, the manufacturing tools are different and so is packaging and test equipment.

While there is a lot of synergy between being a foundry and an IDM, it would be best to run this as two different companies under one corporate roof. It’s going to be very difficult to share: Who gets priority? Whose needs come first? One of the reasons Intel’s foundry previously failed was that the main Intel business seemed to take priority over foundry, and customers will not like the obvious conflict, which has to be managed.

Maybe Intel should hire a bunch of TSMC people

Much as SMIC hired a bunch of TSMC people when it first started out, maybe Intel would be well served to hire some people from TSMC to get a jump start on how to properly become a real foundry. It would be poetic justice: a US company copying an Asian company that made its bones copying US companies in the chip business.

We have heard rumors that TSMC is offering employees double pay to move from Taiwan to Arizona to start up its new fab there. Perhaps Intel should offer TSMC employees triple pay to jump ship. It would be worth their while; Intel desperately needs the help.

Pat Gelsinger is bringing back a lot of old hands from prior years at Intel, as well as others in the industry (including a recent hire from AMAT), but Intel needs people experienced in running a foundry and dealing with foundry customers. Intel has to hire a lot of new and experienced people: it needs people to catch up its internal capacity, which is not easy, and it needs more people to become a foundry company, where the skillsets, like the technology, are completely different. This is not going to be either cheap or easy.

I don’t get the IBM “Partnership”

IBM hasn’t been a significant, real player in semiconductors in a very, very long time. It may have a bunch of old patents, but it has no significant current process technology of true value. It certainly doesn’t build at the current leading edge, or anything close, nor does it bring anything to the foundry party.

It’s not like IBM helped GloFo a lot; they brought nothing to the table, and GloFo still failed in the Moore’s Law race. In our view, IBM could be a net negative: Intel has to “think different” to be two companies in one, and it needs to re-invent itself.

The IBM “partnership” is just more PR fluff, like the plug from Microsoft and the quotes from tech leaders that accompanied the press release. It’s nonsense.

Don’t go out and buy semi equipment stocks based on Intel’s announcements

Investors need to stop and think about how long it’s going to be before Intel starts ordering equipment for the two $10B fabs announced. It’s going to be years and years away.

The buildings have to be designed, then built, before equipment can even be ordered. Maybe, if we are lucky, the first shovel goes in the ground at the end of 2021 and equipment starts to roll in in 2023…maybe beginning production at reasonable scale by 2025 if lucky.

Zero impact on current shortage – even though Intel uses the current shortage as an excuse to restart foundry

The announcement has zero, none, nada impact on the current shortage, for two significant reasons:

First, as we have just indicated it will be years before these fabs come on line let alone are impactful in terms of capacity. The shortages will be made up for by TSMC, Samsung, SMIC, GloFo and others in the near term. The shortages will be ancient history by the time Intel gets the fabs on line.

Second, as we have previously reported, the vast majority of the shortages are at middle-of-the-road or trailing-edge capacity made in 10- to 20-year-old fabs on old 8-inch equipment. You don’t make 25-cent microcontrollers for anti-lock brakes in bleeding-edge 7NM $10B fabs; the math doesn’t work. So the excuse of getting into the foundry business because of the current shortage just doesn’t fly, even though management pointed to it on the call.

Could Intel get Apple back?

As we have said before, if we were Tim Apple, a supply chain expert, and the entire being of our company was based in Taiwan and China, we might be a little nervous. We also might push our BFF TSMC to build a gigafab in the US to secure capacity. The next best thing might be for someone else, like Intel or Samsung, to build a gigafab foundry in the US that we could use, getting back to two foundry suppliers with diverse locations fighting for our business.

The real reason Intel needs to be a foundry is the demise of X86

Intel has rightly figured out that the X86 architecture is on a downward spiral. Everybody wants their own custom ARM, AI, ML, RISC, Tensor, or whatever silicon chip. No one wants to buy off the rack anymore; they all want their own bespoke silicon design to differentiate the Amazons from the Facebooks from the Googles.

Pat has rightly figured out that it’s all about manufacturing, just like it always was at Intel, and something TSMC never stopped believing. Yes, design still matters, but everybody can design their own chip these days, and almost no one, except TSMC, can build them all.

Either Intel will have to start printing money or profits will suffer near term

We have been saying that Intel is going to be in a tight financial squeeze as they were going to have reduced gross margins by increasing outsourcing to TSMC while at the same time re-building their manufacturing, essentially having a period of almost double costs (or at least very elevated costs).

The problem just got even worse, as Intel is now stuck with “triple spending”: spending (or gross margin loss) on TSMC, re-building its own fabs, and now a third cost of building additional foundry capacity for outside customers. We don’t see how Intel avoids a financial hit.

It’s not even clear that Intel can spend enough to catch up, let alone build foundry capacity, even if it has the cash

We would point out that TSMC has the EUV ASML scanner market virtually tied up for itself. They have more EUV scanners than the rest of the world put together.

Intel has been a distant third behind Samsung in EUV efforts. If Intel wants to get cranking on 7NM, 5NM and beyond, it has a lot of EUV tools to buy; it can’t multi-pattern its way out of it. Add on top of that a lot of EUV buying to become a foundry player, as the PDKs for foundry processes rely a lot less on the in-house design and process tricks Intel can pull to avoid EUV. TSMC and foundry flows are a lot more EUV friendly.

As we have previously pointed out, the supply of EUV scanners can’t be turned on like a light switch. They are like a 15-year-old single malt: it takes a very long time to ramp up capacity, especially for the lenses, which are a critical component.

We don’t know if Intel has done the math or called its friends at ASML to see if enough tools are available. ASML would likely have to start building now to be ready to handle Intel’s needs a few years from now, if Intel is serious.

Being a foundry is even harder now

Intel was asked on the call “what’s different this time” in terms of why foundry will work now when it didn’t years ago and their answer was that foundry is a lot different now.

We would certainly agree, and suggest that being a leading-edge foundry is even more difficult now. It’s far beyond just spending money and understanding technology. It’s mindset and process. It’s not making mistakes. To underscore both TSMC and Pat Gelsinger, it’s “execution, execution & execution.” We couldn’t agree more. Pat certainly “gets it”; the question is, can he execute?

The tough road just became a lot tougher

Intel had a pretty tough road in front of it to catch the TSMC juggernaut. The road just got a lot more difficult: now Intel has to both catch them and beat them at their own game, which is twice as hard.

However we think that Pat Gelsinger has the right idea. Intel can’t just go back to being the technology leader it was 10 or 20 years ago, it has to re-invent itself as a foundry because that is what the market wants today (Apple told them so).

It’s not just fixing the technology, it’s fixing the business model as well, to match the new market reality.

It’s going to be very, very tough and challenging but we think that Intel is up for it. They have the strategy right and that is a great and important start.

All they have to do is execute….

Related:

Intel Will Again Compete With TSMC by Daniel Nenni 

Intel’s IDM 2.0 by Scotten Jones 

Intel Takes Another Shot at the Enticing Foundry Market by Terry Daly


Intel Takes Another Shot at the Enticing Foundry Market
by Terry Daly on 03-26-2021 at 6:00 am

Intel IDM 2.0

Intel made a big splash on March 23, 2021 by doubling down on manufacturing with the creation of Intel Foundry Services (IFS). The announcement, backed by an accompanying $20B investment, was supported by potential customers such as Qualcomm, Cisco, Ericsson, Google, Amazon, Microsoft, and IBM, and the EDA and equipment industries, policymakers and governors were highly supportive.

Financial and industry analysts were a bit more skeptical. Pundits acknowledged new leadership in CEO Pat Gelsinger but recalled that Intel has tried this play before and failed. They pointed to the moat that TSMC has created in this space with its maniacal customer focus and execution excellence.

The strategy for an IDM to offer its capability to external customers as a Foundry is not without precedent. IBM made a similar move in the early 1990s and competed for almost 25 years prior to its acquisition by GLOBALFOUNDRIES. Samsung leveraged a big win with Apple into the Foundry model, and prior to the Intel announcement was alone in competing with TSMC on the leading edge.

Intel has substantial capabilities to bring to market, is establishing IFS as an “independent organization” with a separate P&L and says it will leverage three key pillars that it views as underpinning a world-class Foundry: Committed Capacity, Advanced Technology and Design Enablement. It asserts that it will differentiate on Service, Solutions and Scale. Fair enough.

But announcements are one thing; execution is another. The Foundry space is hyper-competitive, and despite Intel’s legacy IDM assets, it faces an uphill battle. One hopes that Intel has combined some deep introspection from its prior efforts with extensive benchmarking in the Foundry segment. Here are some lessons learned from a battle-scarred veteran of both IBM’s experience and that of GLOBALFOUNDRIES as a start-up in the pure-play Foundry industry – call it a preview of a day in the life of IFS.

“You can’t live with them and you can’t live without them.” Here we are talking about customers. They are the reason for your business existence, the source of income for innovation, investment, and shareholder return. They must be highly valued, the focal point for your company. On the other hand, they are demanding, but understandably so. After months designing products central to their market competitiveness and business success, they want their baby instantiated in hardware – ASAP – with cycle times faster than the raw process time of your factory. Top priority! IFS will have been an integral part of that creation, providing the design environment, IP, and perhaps some design services.

They expect “first time right” with high yields out of the chute – after all, they designed into your flow and used your design tools and IP blocks. They expect a seamless relationship with the OSAT partner, both in logistics and yield; you marketed a pre-qualified combination of silicon and packaging. They want immediate burst capacity to scale production and get their product into the marketplace – at benchmark yield. They will only pay for known good die; yield shortfalls to commit are on you. They want the ability to turn volumes on and off like a faucet. You have a lot of customers, so fab utilization is your problem.

And that was just your first customer. Then there are a dozen, then 50, then 100 – developed in pursuit of full fab utilization. Should you be more selective? The high-volume customers set an extremely high bar for execution – they tell stories of how life is so much better with TSMC. They have strong negotiating leverage. The small-volume customers are similarly challenging. They all have a great growth story, but many are competing for the same end market. Are you double-booking demand? The same work is required to qualify their parts, yet the demand is small and frequently moves to the right. Pricing is higher, but are these small accounts profitable? Can you afford to be more selective, or will you risk missing out on the next Qualcomm? You can only enable a certain number of expedites without slowing everything down in the line and risking supply commitments (that must be made “to the piece to the day”) to all customers. Allocating precious capacity is a challenge – every customer expects to be #1.

And then comes the call from the Intel mother ship. The Intel product team has finally finished its landmark design and needs everything cleared out of the way to qualify and ramp (notwithstanding that independent, separate P&L thing). No excuses. Go manage any necessary re-commits to your external customers; the future of Intel rests on getting this product to the marketplace. What? But you have contractual commitments to these customers. A solution is going to be painful but essential.

Next the Head of IFS R&D comes in the door, furious about the lack of tool and line priority for the qualification of the base Intel technologies and all the new technology platforms just committed to the marketplace. Remember that ultra-low power version? And the one that integrates embedded memory, RF, and the other features for the IoT customers just signed up? She needs priority over everything else to qualify these processes and the new ecosystem partner IP per IFS commitments.

Yikes. And the Intel Corporate COO and CFO are clamoring for improvements in your deteriorating operational and cost metrics driven by all this complexity. Then your lead fab manager calls with news of a new excursion that will impact supply commits. Seriously? When are those experienced Foundry hires from TSMC and GLOBALFOUNDRIES coming on board? Will they fit into the Intel culture? Will the Intel culture open itself to them? Can you accelerate the transformations needed to win? What a day!

Well, you get the idea. But Intel probably knows all this and has it figured out. Or do they? TSMC certainly makes it all appear effortless. But beneath their execution is brutally hard work. Perhaps a checklist for the refrigerators of Pat Gelsinger and Randhir Thakur will be of use for the journey ahead.

Critical Success Factors for Intel’s new Foundry business:

Focus on initial target markets and customers, with a phased roadmap to expand over time.

Competitive offerings tuned to those markets, including technology platforms, design enablement (PDKs, IP, services), and early access for lead customers to drive product-process co-optimization.

Passion for the customer. Customer-centricity backed by organizational and business process re-design to translate passion to execution, including Product Management, Product Development, New Product Introduction, Supply Chain Management, and Customer Relationship Management.

World class execution. Be the benchmark in manufacturing cycle times, yield (process and product); transparency (lot status, in-line parametric performance), quality and delivery commitments.

Multi-tasking. Balancing the needs of internal and external customers and managing value chain conflicts, as Intel may be simultaneously competing with customers and outsourcing to other foundries.

Scale. Establishing and maintaining scale to afford large annual R&D and capital investments.

Consistently high factory utilization. There is nothing worse in chip manufacturing than an underloaded fab. Full utilization is required for profitability in this capital-intensive industry.

Cost competitiveness across the board: Capex/$k (beware built-in costs on tool acquisition, install and hook-up), process complexity (beware designed-in cost), wafers, chemicals, gases, raw materials and operating supplies, labor, power, water, and other infrastructure support (exercise Intel VPA leverage).

Intel will have the benefit of depreciated fabs in mature technologies, but not on the leading edge.

This will be a transformation challenge of the highest order: from single customer (Intel) to multiple customers; from few high-volume parts to many high/medium/low volume parts; from a focused process technology menu (supporting only Intel products) to a diverse menu. And Intel is taking this challenge while struggling to “regain the recipe” on R&D execution. The toughest challenge may well be transforming culture, from “Intel knows best” to “the customer is central to our success and survival”.

In the end this will be a leadership challenge.

Many are pulling for Intel’s success. Why? The US needs a healthy Intel for national security and strengthening the US semiconductor manufacturing base (see The CHIPS ACT). The global semiconductor industry needs Intel to be successful in process leadership and manufacturing to broaden geographic and supplier diversification beyond TSMC and Samsung, as great as they are. And competition is always a good thing – for innovation, for customers, and ultimately for shareholders.

Wishing Intel all the success! It will be the run of a lifetime!

Terry Daly is a retired semiconductor industry executive, independent consultant, and senior fellow at The Council on Emerging Market Enterprises, The Fletcher School of Law & Diplomacy, Tufts University

 

Intel Will Again Compete With TSMC by Daniel Nenni 

Intel’s IDM 2.0 by Scotten Jones 


Flex Logix Closes $55M in Series D Financing and Accelerates AI Inference and eFPGA Adoption
by Mike Gianfagna on 03-25-2021 at 10:00 am

Flex Logix Closes 55M in Series D Financing and Accelerates AI Inference and eFPGA Adoption

Flex Logix is a unique company. It is one of the few that supplies both FPGA and embedded FPGA technology, based on a proprietary programmable interconnect that uses half the transistors and half the metal layers of traditional FPGA interconnect. Their architecture provides some rather significant advantages. I wrote about their ground-breaking InferX X1 technology here. The company is gaining significant momentum in high-growth markets such as AI inference, and their low power consumption opens up a lot of possibilities at the edge. Recently, Flex Logix announced a $55 million oversubscribed Series D funding round. This is a significant round of investment and opens up new possibilities for the company. I spent some time with their CEO, Geoff Tate, to get some of the backstory behind this round. Read on to get the details about how Flex Logix closes $55M in Series D financing and accelerates AI inference and eFPGA adoption.

Geoff Tate

First, a bit about Geoff Tate. Geoff has had a storied career in semiconductors. After getting an MBA from Harvard Business School, Geoff did a stint as senior VP of microprocessors and logic at AMD. He then went on to lead Rambus as its CEO and chairman, from four people and $2 million in equity to a NASDAQ IPO and a multi-billion-dollar market cap. After several more CEO, board and advisory roles, he co-founded Flex Logix as its CEO. Geoff has a strong command of the semi market, its trends and customer needs, and those experiences come together in the new and innovative solutions being developed at Flex Logix.

During my discussion with Geoff, he explained that this funding round gives Flex Logix significant flexibility in how to grow the company. The plan is to double in size over the next year or so, with about 80 percent of the resources going to support the inference market. Geoff was quick to point out that he’s seeing substantial growth for the embedded FPGA product as well. This market seems to have reached something of a “tipping point”, with more and more customers now ready to integrate embedded FPGA technology into their advanced SoCs. Data center, 5G and base station designs are all moving toward embedded FPGAs. Geoff explained that these applications have always used discrete FPGAs; embedded FPGAs offer the opportunity to reduce cost and power, with power being a critical item for all these applications.

I asked Geoff what caused embedded FPGAs to finally start taking off. He reviewed three key developments:

1) Proof the technology works – the price of entry for pretty much any new technology

2) Validation in real applications by early adopters. In this case, it was folks like Sandia Labs, Boeing, Morning Core (Datung Telecom), and DARPA. There was a compelling need to develop a domestic supply of advanced chips with embedded FPGA technology and this work laid the foundation for what was to follow

3) A mainstream application that proves success in a high-volume application. Geoff cited Flex Logix’s win at Dialog Semiconductor as a very high-volume application

With these three events the stage is now set for substantial growth in the adoption of embedded FPGA technology – watch this space. Of course, Flex Logix uses its embedded FPGA technology for its stand-alone chip products, so more proof there.

Back to some of the details of the funding round. Mithril Capital Management led the round with significant participation by existing investors Lux Capital, Eclipse Ventures and the Tate Family Trust. “We are impressed with the very high inference-throughput/$ architecture that Flex Logix has developed based on unique intellectual property that gives it a sustainable competitive advantage in a very high growth market,” said Ajay Royan, managing general partner and founder of Mithril Capital Management.

I concluded by asking Geoff what the future looked like. Geoff sees significant growth across many markets, from AI inference at the edge to programmable networks in data centers, wireless networks and more. Geoff explained that, if you look at all the current and future opportunities, Flex Logix is essentially in the reconfigurable computing business. This was a great insight, and we ended our discussion on that note. Now you know some of the backstory that allows Flex Logix to close $55M in Series D financing and accelerate AI inference and eFPGA adoption.


Reducing Compile Time in Emulation. Innovation in Verification
by Bernard Murphy on 03-25-2021 at 6:00 am

Innovation image 2021

Is there a way to reduce cycle time in mapping large SoCs to an FPGA-based emulator? Paul Cunningham (GM, Verification at Cadence), Jim Hogan (RIP) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Improving FPGA-Based Logic Emulation Systems through Machine Learning, published in ACM Transactions on Design Automation of Electronic Systems in 2020. The authors are from Georgia Tech and Synopsys.

FPGA-based emulation, emulating a large SoC design through an array of FPGAs, is one way to model an SoC accurately yet fast enough to run significant software loads. But there’s a challenge: compiling a large design onto said array of FPGAs is not easy. A multi-billion-gate SoC must map onto hundreds of large FPGAs (300+ in some of the authors’ test cases), through a complex partitioning algorithm followed by multiple place and route (P&R) trials on “hard” partitions. P&R runs can be parallelized, but each still takes many hours. If any run fails, you must start over with a new partitioning or new P&R trials.
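As a concrete (and deliberately toy) illustration of that flow, here is a sketch of parallel P&R trials with a full restart when any partition fails all of its trials. The `hardness` field and the seed-based success rule are hypothetical stand-ins for real P&R behavior, not anything from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pr(partition, seed):
    """Deterministic stand-in for one place-and-route trial: this mock
    partition only succeeds once the trial seed reaches its 'hardness'."""
    return seed >= partition["hardness"]

def compile_design(partitions, max_attempts=5, trials_per_partition=3):
    """Run several P&R trials per partition in parallel; if any partition
    fails all of its trials, restart the whole compile with fresh seeds,
    mimicking the re-partition-and-retry loop described above."""
    for attempt in range(max_attempts):
        seeds = range(attempt * trials_per_partition,
                      (attempt + 1) * trials_per_partition)
        with ThreadPoolExecutor(max_workers=8) as pool:
            trials = {p["id"]: [pool.submit(run_pr, p, s) for s in seeds]
                      for p in partitions}
            ok = all(any(f.result() for f in futs) for futs in trials.values())
        if ok:
            return attempt + 1  # number of full compile passes used
    raise RuntimeError("compile failed after max_attempts")

# 40 mock partitions; every tenth one is "hard" and needs more trials
# than the first pass offers, forcing one full restart.
partitions = [{"id": i, "hardness": 4 if i % 10 == 0 else 0}
              for i in range(40)]
print("compile succeeded after", compile_design(partitions), "pass(es)")
```

In a real flow, `run_pr` would launch a vendor P&R job and a restart would trigger new partitioning, but the control structure is the same: parallelize the trials, and pay for a full restart whenever any one partition cannot close.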

Because designers use emulation to optimize cycle time through chip verification and debug, it is critical to minimize compile wall-clock time within reasonable compute resources. Figuring out the best partitioning and best P&R strategies requires design know-how and experience from previous designs, which makes this problem an appealing candidate for machine learning (ML) methods. The authors use ML to predict whether a P&R job will be easy or hard and use this prediction to apply different P&R strategies. They also use ML to estimate the best resourcing to optimize throughput and to fine-tune partitioning. They are able to show improvements in both total compute and wall-clock time with their methods.
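A minimal sketch of the easy-versus-hard prediction step, assuming two illustrative partition features (LUT utilization and a routing-congestion estimate); the features, labels, and model here are hypothetical, not the authors’ actual system. A tiny from-scratch logistic regression classifies each partition, and the prediction selects the P&R strategy:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(samples, labels, lr=0.5, epochs=2000):
    """Tiny batch-gradient-descent logistic regression (weights + bias)."""
    n_feat = len(samples[0])
    w, b = [0.0] * n_feat, 0.0
    m = len(samples)
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for x, y in zip(samples, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i in range(n_feat):
                gw[i] += err * x[i]
            gb += err
        for i in range(n_feat):
            w[i] -= lr * gw[i] / m
        b -= lr * gb / m
    return w, b

def predict_hard(w, b, x, threshold=0.5):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= threshold

def pick_strategy(is_hard):
    # Hard partitions get a high-effort flow with several parallel seed
    # trials; easy ones get a single fast default run.
    return ("high_effort", 4) if is_hard else ("default", 1)

# Synthetic training data: features = (LUT utilization, congestion
# estimate), label = 1 if the partition needed extra P&R effort.
random.seed(0)
train_x, train_y = [], []
for _ in range(200):
    util, cong = random.random(), random.random()
    train_x.append((util, cong))
    train_y.append(1 if util + cong > 1.1 else 0)

w, b = train_logreg(train_x, train_y)

easy = predict_hard(w, b, (0.30, 0.20))  # lightly loaded partition
hard = predict_hard(w, b, (0.92, 0.85))  # heavily loaded partition
print(pick_strategy(easy), pick_strategy(hard))
```

In practice one would use a much richer feature set and an off-the-shelf model, but the dispatch pattern, predict difficulty first and spend effort accordingly, is the core of the approach.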

Paul’s view

Optimizing throughput in emulation is very relevant today as we continue to chase exponential growth in verification complexity. For typical emulation usage, debug cycle time is critical, and so compile wall-clock time is very important. The authors have shown a 20% reduction in P&R wall-clock time, which is very good. This is a very well-written paper, presenting strong results based on using ML to predict whether a P&R job will be “easy” or “hard” and then using these predictions to optimize partitioning and determine P&R strategies.

As with any ML system, the choice of input feature set is critical. The authors have some great insights here, in particular the use of Lloyd Shapley’s Nobel Prize-winning game theory techniques for feature importance weighting. One thought on possible further improvement would be to add some local measures of P&R difficulty to their feature set; the features listed appear to all be global measures such as number of LUTs, wires, and clocks. However, a local hotspot on a small subset of a partition can still make P&R difficult, even if these global metrics for the overall partition look easy.
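To make the Shapley-weighting idea concrete, here is an exact computation over a small hypothetical feature set. The value function is an invented table of classifier accuracies per feature subset (the numbers are illustrative, not from the paper); each feature’s Shapley value is its weighted average marginal contribution across all coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, v):
    """Exact Shapley value of each feature for value function v over subsets."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                S = frozenset(subset)
                # Probability that exactly S precedes f in a random ordering.
                weight = (factorial(len(S)) * factorial(n - len(S) - 1)
                          / factorial(n))
                total += weight * (v(S | {f}) - v(S))
        phi[f] = total
    return phi

# Hypothetical validation accuracy of an easy/hard P&R classifier trained
# on each subset of three illustrative features (made-up numbers).
ACC = {
    frozenset(): 0.50,
    frozenset({"luts"}): 0.70,
    frozenset({"wires"}): 0.65,
    frozenset({"clocks"}): 0.55,
    frozenset({"luts", "wires"}): 0.82,
    frozenset({"luts", "clocks"}): 0.73,
    frozenset({"wires", "clocks"}): 0.68,
    frozenset({"luts", "wires", "clocks"}): 0.85,
}

phi = shapley_values(["luts", "wires", "clocks"], lambda s: ACC[frozenset(s)])

# Efficiency property: contributions sum to the full-model gain over baseline.
assert abs(sum(phi.values()) - (0.85 - 0.50)) < 1e-9
print(sorted(phi.items(), key=lambda kv: -kv[1]))
```

The exact computation is exponential in the number of features; with the dozens of netlist features a real model would use, one would rely on sampled or model-specific approximations (as the SHAP family of methods does), but the weighting scheme is the same.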

The paper builds up to a strong headline result of reducing wall clock time for the overall P&R stage of emulation compile significantly, from 15 hours to 12 hours. Nice.

Jim’s view

Jim, an inspiration to many of us, passed away while we were working on this blog. We miss you dearly Jim. Wherever you’re watching us from, we hope we’ve correctly captured what you had shared with us live on this paper:

This is some impressive progress by Synopsys on FPGA-based emulation compile times. If it was coming from a startup then for sure I’d invest. “Outside” ML to drive smarter P&R strategies makes total sense, not only for emulation, but also ASIC implementation.

My view

I agree with Jim. This method should also be applicable to other forms of implementation: FPGA prototyping as well as FPGA emulation, ASIC implementation, and even the custom emulation processors that Cadence and Mentor have. Even for prototyping large designs which will go to production in FPGA implementations. I’m thinking of large infrastructure basebands and switches for example. I’m also tickled that while ML more readily finds a home in implementation rather than verification, here verification builds on that strength in implementation!

We know Jim would want us to continue this blog, as do Paul and I. We’re working to find a new partner to join us for next month. Stay tuned!

Also Read

Cadence Underlines Verification Throughput at DVCon

TECHTALK: Hierarchical PI Analysis of Large Designs with Voltus Solution

Finding Large Coverage Holes. Innovation in Verification