
WEBINAR: Understanding TSN and its use cases for Aviation, Aerospace and Defence
by Daniel Nenni on 09-13-2023 at 10:00 am

Ethernet TSN Profiles

This webinar will introduce Time-Sensitive Networking (TSN) and unveil how TSN can provide value in aviation, aerospace and defence.

TSN is a new set of standard extensions based on the IEEE 802.1 and IEEE 802.3 Ethernet standards. It is designed to provide deterministic guarantees on Quality of Service (QoS) metrics and reliability in a switched Ethernet network. TSN is a broad concept with many features within four key areas: Time Synchronization, Reliability, Latency and Resource Management. TSN is applicable in multiple industries, all characterized by a need for real-time applications and determinism. One of several benefits of TSN is that the profiles and standards are open, making it possible for equipment from different manufacturers to interoperate.

To ensure an optimum feature set and configuration for each industry, IEEE working groups are defining profiles for each industry. The list below includes the six profiles currently defined. All six are available as drafts of varying maturity.

Figure 1: Illustration of the TSN profiles

This webinar will focus on TSN in aerospace, aviation and defence, as it is described in P802.1DP.

Benefits of TSN

TSN features make it possible to use Ethernet for applications where conventional Ethernet is not feasible due to its limitations in real-time communication and reliability. TSN makes it possible to use the same Ethernet network for both high-priority, time-critical messages and conventional best-effort Ethernet traffic.

This enables many different use cases across numerous industries. Below is a list of some of the benefits that TSN provides.

  • Time synchronization within a network
  • Efficiency, easier management, and cost-effectiveness
  • Reduced and deterministic latency
  • Improved reliability and possibility of redundancy
  • Scalability and simple expansion of the network
  • Possibility of converging networks, while ensuring vital process data is handled in a reliable and deterministic manner despite other traffic on the same network

Join Kim Schjøtler Sørensen, our Ethernet IP Product Manager, for a webinar where he will simplify TSN’s core concepts and present its applications in aerospace, aviation and defence.

Don’t miss out on this opportunity to unravel the potential of TSN technology within your business.

Register Now: https://www.comcores.com/webinar-time-sensitive-networking-tsn-usecases-aviation-aerospace-defence/

US & Europe: Tuesday, 03 October 2023, 11 AM EST, 8 AM PST, 5 PM CET

Asia & Europe: Wednesday, 04 October 2023, 3 PM China, 4 PM Japan & Korea, 9 AM CET

Also Read:

JESD204D: Expert insights into what we Expect and how to Prepare for the upcoming Standard

WEBINAR: O-RAN Fronthaul Transport Security using MACsec

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken


Scaling LLMs with FPGA acceleration for generative AI
by Don Dingee on 09-13-2023 at 6:00 am

Crucial to FPGA acceleration of generative AI is the 2D NoC in the Achronix Speedster 7t

Large language model (LLM) processing dominates many AI discussions today. The broad, rapid adoption of any application often brings an urgent need for scalability. GPU devotees are discovering that where one GPU may execute an LLM well, interconnecting many GPUs often doesn’t scale as hoped since latency starts piling up with noticeable effects on user experience. Achronix’s Bill Jenkins, Director of AI Product Marketing, has a better solution for scaling LLMs with FPGA acceleration for generative AI.

Expanding from conversational AI into transformer models

Most of us are familiar with conversational AI as a tool on our smartphones, TVs, or streaming devices, providing voice-based search capability to simple questions, usually returning the best result or a short list. Requests head to the cloud where data and search indexing live, and results come back within a few seconds, usually faster than typing requests. Behind the scenes of these queries are three steps, depending on the application: automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (SS).

Generative AI builds on this concept with more compute-intensive transformer models with billions of parameters. Complex, multi-level prompts can return thousands of words seemingly written from research across various short-form and long-form sources. Accelerating ASR, NLP, and text synthesis – using an LLM like ChatGPT – becomes crucial if response times are to stay bounded within reasonable limits.

A good LLM delivering robust results quickly can draw hundreds of simultaneous users, complicating a hardware solution. One popular approach, cloud-based GPU implementations with on-demand elastic resource expansion, avoids the long vendor lead times, the allocation that can lock out smaller customers, and the high capital costs of procuring high-end GPUs. But operating expenses can eat up the apparent advantages of rented GPUs at scale. “Spending millions of dollars in the cloud for GPU-based generative AI processing and still ending up with latency and inefficiency is not for everybody,” observes Jenkins.

FPGA acceleration for generative AI throughput and latency

The solution for LLMs is not bigger GPUs or more of them because the generative AI latency problem isn’t due to execution unit constraints. “When an AI model fits in a single high-end GPU, it will win in a contest versus an FPGA,” says Jenkins. But as models get larger, requiring multiple GPUs to increase throughput, the scale tips in favor of Achronix Speedster 7t FPGAs due to their custom-designed 2D network-on-chip (NoC) running at 2 GHz and built all the way out to the PCIe interfaces. Jenkins indicates they are seeing as much as 20 Tbps of bandwidth across the chip and up to 80 TOPS, essentially wiping out floor planning issues.

Achronix has been evangelizing that one FPGA accelerator card can replace up to 15 high-end GPUs for speech-to-text applications, reducing latency by 90%. Jenkins decided to study GPT-20B (an LLM named for its 20 billion parameters) to see how the architectures compare in accelerating generative AI applications. We’ll cut to the punchline: at 32 devices, Achronix FPGAs deliver 5 to 6 times better throughput and similarly reduced latency. The contrast is striking at INT8 precision, which also reduces power consumption in an FPGA implementation.

“Generative AI developers can choose Achronix FPGAs they can actually get their hands on quickly, getting 5x-6x more performance for the same device count, or using fewer parts and saving space and power,” Jenkins emphasizes. He continues to say that familiarity with high-level libraries has kept many developers on GPUs, but they may not realize how inefficient a GPU-based architecture is until they run into these larger generative AI models. Jenkins worked on the team that developed OpenCL, so he understands programming libraries. He shares that AI compilers and FPGA IP libraries have advanced so developers don’t need intimate knowledge of FPGA hardware details or hand-coding to get the performance advantages.

LLMs are not getting smaller, and high-end GPUs are not getting cheaper (although vendors are working on the lead time problems). As models develop and grow, FPGA acceleration for generative AI will be a more acute need. Achronix stands ready to help teams understand where GPUs become inefficient in generative AI applications, how to deploy FPGAs for scalability in real-world scenarios, and how to keep capital and operating expenses in check.

Learn more about the GPT-20B study in the Achronix blog post:
FPGA-Accelerated Large Language Models Used for ChatGPT

Also Read:

400 GbE SmartNIC IP sets up FPGA-based traffic management

eFPGA Enabled Chiplets!

The Rise of the Chiplet


Soitec is Engineering the Future of the Semiconductor Industry

by Mike Gianfagna on 09-12-2023 at 10:00 am

The crystalline structure of silicon delivers the incredible capabilities that have fueled the exponential increases defined by Moore’s Law. It turns out that silicon in its purest form falls short at times – power handling and speed are examples. In these cases, adding other materials to the silicon can enhance its capabilities for demanding requirements. Called compound semiconductors, these enhanced materials unlock many of the high-performance applications emerging today. Adding an epitaxial layer of new material to silicon, however, is very difficult, even unpredictable at times. An innovative company has changed all that. Read on to see how Soitec is engineering the future of the semiconductor industry.

Soitec – A Brief History

The semiconductor supply chain is a highly complex, multi-national web of organizations and capabilities. If we trace that supply chain back to its roots, we find the raw material used to manufacture semiconductor devices. This is where Soitec lives. Born out of Grenoble’s CEA-Leti (Atomic Energy Commission/Electronics and IT Technology Laboratory) in the 1990’s, Soitec has become a critical source of engineered substrate materials for the entire semiconductor industry.

With state-of-the-art manufacturing facilities in France, Belgium, Singapore, and China, Soitec has become a global leader in engineered substrates. Using its unique Smart Cut™ process, Soitec can reliably and cost-effectively insert an insulating oxide layer between two layers of silicon, creating silicon-on-insulator (SOI) wafers. One of these layers contains the differentiating materials that deliver the required improvements in system performance.

Depending on the materials used, these engineered substrates can deliver enabling performance for RF, power and optical communications as examples. Using its Smart Cut process, Soitec has an ambitious plan for heterogeneous material combinations to deliver an anything-on-anything roadmap.

The possibilities for such a roadmap have broad implications for the entire semiconductor industry. Let’s look at the impact silicon carbide (SiC) compound semiconductors have on the automotive market.

Connecting the Automotive Ecosystem with SiC Manufacturing

This was the title of a presentation Soitec gave at the recent Semicon West event in San Francisco. The presentation focused on the powertrain for EVs and the impact silicon carbide material can have there. Powertrain elements examined included:

  • Electric Motor (and e-transmissions)
  • Battery Pack (modules, cells, battery management)
  • Power Electronics (E-drive/inverter (DC/AC), DC/DC converter, on-board charger (AC/DC))

These elements can add up to over $10,000 of system cost, and the use of silicon carbide can have a big impact on them. Compared to traditional silicon-based insulated-gate bipolar transistors (IGBTs), the following substantial improvements are possible:

  • ~50 percent faster charging time
  • ~5% – 10% increased range
  • ~$500 – $1,000 reduced system cost

So, the question becomes: what is the best path to these improvements? It turns out silicon carbide compound semiconductor material is costly, energy-intensive and time-consuming to produce. Manufacturing a boule of SiC, which will yield 40-50 wafers, requires many process steps that must be carefully controlled. The whole process can take about two weeks at temperatures around 2,500°C, roughly half the temperature at the surface of the sun. Soitec presented the diagram below to summarize the requirements.

The presentation then gave a glimpse into how real Soitec’s anything-on-anything roadmap is. Using the fundamentals of its Smart Cut™ process, Soitec has created a SmartSiC™ engineered substrate. The Smart Cut™ process – think of it as an atomic scalpel – transfers an ultra-thin single-crystalline SiC layer from a so-called silicon carbide donor wafer onto an ultra-low-resistivity polycrystalline silicon carbide wafer, to which it is bonded. The donor wafer can then be reused 10 times, said Emmanuel Sabonnadière, vice president of automotive and industrial at Soitec, which makes this new engineered substrate unrivalled.

The benefits of this process are substantial and include:

  • 40,000 tons of CO2 reduction for each 1 million wafers
  • 200mm scalability to accelerate SiC adoption through 10x reusability
  • Enabling a new generation of SiC devices thanks to an RDSON improvement of up to 20%
  • ~8X improved conductivity compared to a conventional single crystal SiC
  • Reduced Capex & Opex

The figure below shows the details of the process.

SmartSiC Process

Strategic partnerships are being set up across the automotive supply chain to deliver on the substantial benefits of this approach.

Comments From the Presenter

Emmanuel Sabonnadière

Emmanuel Sabonnadière, Vice President, Division Automotive & Industrial at Soitec, was the presenter at Semicon West. I had the opportunity to chat with him for a bit about the work being done at Soitec and its implications.

He began by explaining that the automotive division at Soitec has grown by 80% over the past year. Impressive. Emmanuel clearly has a passion for the impact that silicon carbide can have on system cost and performance. He has a history dating back as CEO of CEA-Leti where a lot of the early innovation occurred.

He discussed the extreme efficiency of Soitec’s process: the silicon carbide donor wafer, which is complex and challenging to produce, can be reused many times to create engineered substrates.

Emmanuel also described the substantial investment being made by Soitec to build out the manufacturing infrastructure needed to broadly deploy its capabilities in the fast-growing EV market. An opening to celebrate first production is planned for the end of September 2023.

To Learn More

Soitec has developed a short, under-two-minute video that puts all the benefits of the SmartSiC process in perspective. I highly recommend having a look; you can find the Soitec video here. This will help you understand how Soitec is engineering the future of the semiconductor industry.


Chiplets and IP and the Trust Problem
by Bernard Murphy on 09-12-2023 at 6:00 am


Perforce recently hosted a webinar on “IP Lifecycle Management for Chiplet-Based SoCs”, presented by Simon Butler, the GM for the Methodics IPLM BU. The central theme was trust, for IPs as much as chiplets. How can an IP/chiplet consumer trust that what they receive has not been compromised somewhere in the value chain from initial construction to deployment in an OEM product?

What is the trust scope?

This feels like a big problem to tackle. On a quick search I see multiple proposed solutions to address different classes of attack:

  • Late-stage added hardware trojans, against which a physical inspection certification authority has been proposed,
  • Known-good-die tagging with a PUF, where the correct tag is not reproducible in a fake die,
  • Zero-trust chiplets, which assume they are operating in an insecure environment; good for them, but this doesn’t necessarily fix the total system,
  • In the pre-silicon part of the chain, mechanisms to fingerprint an IP component along with metadata for validation on receipt.

The Perforce approach to trust management

The last of these options is the area that Simon aims to address. This centers around the bill of materials (BOM) for the SoC. Each IP and the SoC itself can be characterized by multiple factors: version number, design configuration scripts, tool versions and configuration scripts, and embedded software. The last item can similarly be broken down into top-level code, libraries, packages, etc., also with version numbers.

Simon advises that, for each item in the BOM, version numbers should be automatically updated where appropriate throughout the development lifecycle. These version updates are important to support traceability – who made what change, when, and why. Metadata should be stored with the IP information to track open bugs in each release, the release in which they were fixed, and test results for the IP. I wonder if they could also include a fingerprint for the simulation input and output here? The results themselves would be too bulky to store, but a fingerprint (like a hash over the testbench and the sim output) would be a tricky thing to fake.
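A fingerprint of the kind mused about above could be as simple as a length-prefixed hash over the testbench sources and the simulation log. This is my own sketch, not a Methodics feature, and the artifact contents shown are hypothetical:

```python
import hashlib

def fingerprint(blobs: list[bytes]) -> str:
    """Digest several artifacts (e.g. testbench source, sim output) into one hash.

    Each blob is length-prefixed so that [b"ab"] and [b"a", b"b"]
    cannot produce the same digest by simple concatenation.
    """
    h = hashlib.sha256()
    for blob in blobs:
        h.update(len(blob).to_bytes(8, "big"))  # 8-byte length prefix
        h.update(blob)
    return h.hexdigest()

# Hypothetical artifacts: record this digest in the BOM metadata at release,
# then recompute it on receipt and compare.
recorded = fingerprint([b"testbench.sv contents", b"simulation.log contents"])
```

Anyone receiving the IP can recompute the digest from the shipped files; a mismatch means something in the testbench or results changed after release.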

Taken altogether, each component representation and the BOM itself should be immutable, ensuring traceability of any changes in the BOM. They should therefore also be easily checkable, so that if a change was introduced after the IP or soft SoC was shipped, that fact would become apparent immediately.

Blockchain as a ledger management system for provenance

Of course, if I am an experienced bad actor, I can learn all the ways you generated your metadata and fingerprints and update all the checks after I have inserted my malware. Simon’s suggestion to get around that problem is to use blockchain-managed signatures for important metadata. Here, blockchain should be integrated into the component management platform so that ledger entries can be made and signed on each release. This is a much more difficult thing to compromise. In fact, I wonder if blockchain couldn’t become part of the larger chiplet trust solution? Interesting idea.
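The ledger idea can be sketched minimally: each release record carries the hash of the previous record, so altering any earlier entry breaks the chain from that point on. This is a toy in-memory illustration under my own assumptions, not Perforce’s actual integration; a production system would also cryptographically sign entries and distribute the ledger.

```python
import hashlib
import json

def add_entry(ledger, component, version, metadata_digest):
    """Append a release record whose hash chains to the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "component": component,
        "version": version,
        "metadata": metadata_digest,  # e.g. a digest over the BOM metadata
        "prev": prev,                 # links this entry to the one before it
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

def verify(ledger):
    """Recompute every hash; any tampering breaks the chain from that point on."""
    prev = "0" * 64
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Editing any field of an earlier entry, say a version number, invalidates its hash and therefore every entry after it, which is exactly the tamper-evidence property being discussed.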

Methodics IP Lifecycle Management (IPLM) capabilities

Methodics provides a comprehensive range of IPLM capabilities, including:

  • A fully traceable ecosystem enforcing version immutability through design evolution
  • Release management
  • IP discovery and reuse, including automatic cataloging of all IP and metadata
  • Workspace management across design organizations
  • IP-centric planning support, enabling different teams to understand characteristics and challenges flagged by other teams, in planning and during development

You can register to view the webinar HERE.


Synopsys Expands Synopsys.ai EDA Suite with Full-Stack Big Data Analytics Solution
by Kalar Rajendiran on 09-11-2023 at 10:00 am


More than two years ago, Synopsys launched its AI-driven design space optimization (DSO.ai) capability. It is part of the company’s Synopsys.ai EDA suite, an outcome of its overarching AI initiative. Since then, DSO.ai has boosted designer productivity and has been leveraged for 270 production tape-outs. DSO.ai uses machine learning (ML) techniques to explore the design space and identify optimal solutions that meet the designer’s PPA targets. The DSO.ai capability was just the tip of the iceberg in terms of AI-driven technology from Synopsys. Since then, the company has been expanding its AI-driven tool offerings.

At its annual Synopsys Users Group (SNUG) conference back in March 2023, the company announced additional optimization capabilities. These capabilities include verification space optimization (VSO.ai), test space optimization (TSO.ai), analog design migration automation and lithography models development acceleration. Proof of rapid adoption of these tools and capabilities is the fact that Synopsys’ AI-driven revenue already makes up about 10% ($0.5 billion) of the company’s annual revenue.

I sat down this week with Shankar Krishnamoorthy, Synopsys’ GM of the EDA Group to learn about the company’s next expansion of its Synopsys.ai EDA suite with a full-stack big data analytics solution. The newly announced capabilities are made possible by applying AI/ML driven analysis to aggregated data.

“AI and data are two sides of the same coin, and our announcement today is to really augment that Synopsys.ai vision with an end-to-end EDA data analytics platform that we are introducing.” said Krishnamoorthy. “…There’s a tremendous opportunity to run AI/ML pipelines on this data to help customers build very useful applications.”

Aggregating Data is Key

While the DSO.ai, VSO.ai and TSO.ai capabilities are optimizer capabilities, this week’s announcement is about scalable data analytics. Design tools, testing tools and manufacturing tools all generate large amounts of data. By aggregating these data into big data stores and performing AI/ML driven analysis, the full-stack big data analytics solutions help customers build customized applications for various use cases. The big opportunity with data analytics is that once the data is aggregated, we can start building models to predict what can happen in the future, which is predictive analytics. We can also take it one step further to prescribe what needs to be changed to achieve improvements, which is prescriptive analytics. Both predictive and prescriptive analytics are key benefits of the end-to-end EDA analytics solution. Whether performing root cause analysis on issues, identifying anomalies, or optimizing workflows, customers benefit from improved results and increased productivity.

Generative AI

With AI, the data on which foundation models are trained determines the quality of the end applications to be enabled. That is why the data aspect of the Synopsys.ai initiative is so critical, and equally important is a data platform to aggregate relevant data. Synopsys is providing this platform to help customers aggregate their data, whether it be design data, silicon/product engineering data or fab data. Customers are enabled to build interesting GenAI models that allow them to drive a higher level of automation and increase efficiencies even more.

Design.da:

Relevant data generated from DSO.ai, VSO.ai and TSO.ai are aggregated, and AI/ML techniques are applied to enable customers to build interesting applications that improve productivity and efficiency. The result is accelerated design closure, optimized PPA and fast time to market.

Silicon.da:

Product engineering data already exist at fabless semiconductor companies (FSCs). The Silicon.da capability allows FSCs to aggregate the data, perform analytics, and build models for examining wafer test data and product test data. Customers benefit from rapid root cause analysis of failed dies and products.

Fab.da:

On the foundry side, process control data have largely gone unexploited in the past for lack of big data analytics capability. Digitizing the fab floor is a priority right now for all foundries. Fab.da capabilities help address this priority by analyzing the process control data and helping build models for achieving efficiencies. By applying AI/ML techniques to analyze the data generated by the various tools at the fab, the root cause of deviations and excursions can be quickly identified. The various fab tools and processes can be improved not only to increase yield but also for other objectives, such as reducing CO2 emissions.

Summary

No matter whether one is a foundry, an FSC or an integrated device manufacturer (IDM), customers are always interested in improving efficiencies and time to market. Depending on the customer type, one or more of the newly announced capabilities will be of appeal and value. An IDM will benefit from using all three (Design.da, Silicon.da and Fab.da) of Synopsys’ newly announced capabilities. An FSC will benefit from using the Design.da and Silicon.da capabilities. And a foundry will benefit from using the Fab.da capability.

You can read the full press release here. To learn more details, visit the data analytics page.

Also Read:

ISO 21434 for Cybersecurity-Aware SoC Development

Key MAC Considerations for the Road to 1.6T Ethernet Success

AMD Puts Synopsys AI Verification Tools to the Test


Stochastic Model for Acid Diffusion in DUV Chemically Amplified Resists
by Fred Chen on 09-11-2023 at 8:00 am


Recent articles have focused much effort on studying the stochastic behavior of secondary electron exposure of EUV resists [1-4]. Here, we consider the implications of extending similar treatments to DUV lithography.

Basic Model Setup

As before, the model uses pixel-by-pixel calculations of absorbed photon dose, followed by a quantum yield of acids (previously secondary electrons for EUV [1-2]) per pixel, with both absorbed photon number and acid generation values being subject to Poisson statistics. Gaussian blur is then applied per pixel; however, unlike conventional considerations, the blur scale parameter (often known as sigma) is itself another number randomly chosen from a range or distribution. Smoothing can be finally applied to give more visually realistic images.
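The pixel-by-pixel recipe above can be sketched in a few lines of NumPy/SciPy. The pixel size, mean photon count, yield, and blur parameters below are placeholder values for illustration, not the article's actual inputs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical inputs for illustration only
MEAN_PHOTONS = 30.0   # mean absorbed photons per pixel
ACID_YIELD = 0.33     # mean acids generated per absorbed photon
SIGMA_MEAN_PX = 10.0  # mean blur scale parameter, in pixels
SIGMA_STD_PX = 1.0    # spread of the randomly drawn blur sigma

def deprotection_image(shape=(80, 80)):
    # Absorbed photon count per pixel is Poisson-distributed
    photons = rng.poisson(MEAN_PHOTONS, size=shape)
    # Acid generation per pixel is Poisson-distributed on top of that
    acids = rng.poisson(ACID_YIELD * photons)
    # The blur sigma is itself randomly drawn from a distribution,
    # unlike the fixed sigma of a conventional Gaussian blur
    sigma = max(rng.normal(SIGMA_MEAN_PX, SIGMA_STD_PX), 0.0)
    return gaussian_filter(acids.astype(float), sigma=sigma)
```

Drawing sigma per realization is what distinguishes this from the conventional fixed-blur treatment; a final smoothing pass can be added for more visually realistic images.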

Acid Diffusion Length

Experimentally, it was found that secondary electron blur increased with dose and would itself follow an exponential or normal distribution [1-2,5]. Likewise, acid diffusion lengths should be considered to follow a similar distribution. From the literature, we note that (1) the acid diffusion length is not dependent on dose [6], and (2) it is, as expected, dependent on bake temperature and time [7-8]. Generally, the diffusion length is given as 2*sqrt(Dt), where D is the diffusion coefficient and t is the time elapsed (during bake). So, the range or distribution of acid diffusion lengths corresponds to that of the diffusion coefficient. From the values in the references [6,8], we can estimate a standard deviation of ~1 nm. The target value of the acid diffusion length should, of course, be sufficiently smaller than the target critical dimension (CD), e.g., 40 nm.
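As a quick numerical check of the 2*sqrt(Dt) relation (the diffusion coefficient below is a made-up value chosen only to land near a 10 nm target length):

```python
import math

def diffusion_length_nm(D_nm2_per_s: float, t_s: float) -> float:
    """Acid diffusion length L = 2*sqrt(D*t)."""
    return 2.0 * math.sqrt(D_nm2_per_s * t_s)

# Hypothetical D of 0.42 nm^2/s over a 60 s post-exposure bake gives ~10 nm
length = diffusion_length_nm(0.42, 60.0)
```

Since L scales as the square root of t, quadrupling the bake time only doubles the diffusion length; the spread in L therefore tracks the spread in D, as noted above.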

ArF Immersion Example (80 nm pitch)

Following [6], we may take a target diffusion length of 10 nm with a standard deviation of 1 nm; +/-7 standard deviations then span +/-7 nm, giving a range of 3-17 nm. The absorbed dose is taken to be 10% of the nominal dose of 30 mJ/cm2. The acid quantum yield is assumed to be 0.33. The worst case would be at +7 standard deviations, or 17 nm, with a probability of 1.28e-12. We examine the typical and worst cases below.
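The 1.28e-12 figure is simply the one-sided Gaussian tail probability at +7 standard deviations, which can be checked directly from the complementary error function:

```python
import math

def upper_tail_prob(n_sigma: float) -> float:
    """P(Z > n_sigma) for a standard normal variable."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p = upper_tail_prob(7.0)  # ~1.28e-12, the worst-case probability quoted above
```

Small as it is, this probability applies per realization (per pixel region per exposure), so across billions of features it still corresponds to a non-negligible defect rate.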

Figure 1. Typical acid deprotected image for 10nm acid diffusion length. 3 mJ/cm2 absorbed over 80 nm line pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Figure 2. +7 standard deviation deprotected image with 17 nm acid diffusion length. 3 mJ/cm2 absorbed over 80 nm pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

The trend seems to be that narrower exposed features are the most sensitive to stochastic defects, and edge roughness would be most commonly observed. Obvious means to address these issues would be higher doses and more absorptive resists. Increasing the absorbed dose to 8 mJ/cm2 (e.g., 20% absorbed from 40 mJ/cm2) gives us the following.

Figure 3. Typical acid deprotected image for 10nm acid diffusion length. 8 mJ/cm2 absorbed over 80 nm line pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Figure 4. +7 standard deviation deprotected image with 17 nm acid diffusion length. 8 mJ/cm2 absorbed over 80 nm pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Clearly, the higher dose helps to smooth out the roughness, but the narrow exposed feature is still vulnerable to becoming defective at a low rate. With brightfield attenuated phase-shift masks becoming a standard for improving NILS [9], narrow exposed features can be practically avoided anyway.

References

[1] F. Chen, Modeling EUV Stochastic Defects with Secondary Electron Blur, https://www.linkedin.com/pulse/modeling-euv-stochastic-defects-secondary-electron-blur-chen

[2] F. Chen, Secondary Electron Blur as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

[3] H. Fukuda, Localized and cascading secondary electron generation as causes of stochastic defects in extreme ultraviolet projection lithography, J. Microlith./Nanolith. MEMS MOEMS 18, 013503 (2019), https://www.spiedigitallibrary.org/journals/journal-of-micro-nanolithography-mems-and-moems/volume-18/issue-1/013503/Localized-and-cascading-secondary-electron-generation-as-causes-of-stochastic/10.1117/1.JMM.18.1.013503.full

[4] H. Fukuda, Cascade and cluster of correlated reactions as causes of stochastic defects in extreme ultraviolet lithography, J. Microlith./Nanolith. MEMS MOEMS 19, 024601 (2020), https://www.spiedigitallibrary.org/journals/journal-of-micro-nanolithography-mems-and-moems/volume-19/issue-2/024601/Cascade-and-cluster-of-correlated-reactions-as-causes-of-stochastic/10.1117/1.JMM.19.2.024601.full

[5] F. Chen, EUV Stochastic Defects from Secondary Electron Blur Increasing With Dose, https://www.youtube.com/watch?v=Q169SHHRvXE

[6] M. Yoshii et al., Influence of resist blur on resolution of hyper-NA immersion lithography beyond 45-nm half-pitch, J. Microlith./Nanolith. MEMS MOEMS 8, 013003 (2009).

[7] D. Van Steenwinckel et al., Lithographic importance of acid diffusion in chemically amplified resists, Proc. SPIE 5753, 269 (2005).

[8] M. D. Stewart et al., Acid catalyst mobility in resist resins, JVST B 20, 2946 (2002).

[9] F. Chen, “Phase-Shifting Masks for NILS Improvement – A Handicap for EUV?”, https://www.linkedin.com/pulse/phase-shifting-masks-nils-improvement-handicap-euv-frederick-chen

Also Read:

Advancing Semiconductor Processes with Novel Extreme UV Photoresist Materials

Modeling EUV Stochastic Defects with Secondary Electron Blur

Enhanced Stochastic Imaging in High-NA EUV Lithography


The TSMC Pivot that Changed the Semiconductor Industry!
by Daniel Nenni on 09-11-2023 at 6:00 am


During my research I found an interview with Don Brooks from February 2000. It was very interesting; it confirmed some of the things I knew about Don and brought up a few things I did not know. It’s an hour long, but it is a video of Don telling his story and is definitely worth a look. One thing that was not mentioned, however, is the pivot TSMC made while Don was president, which enabled the transformation of the fabless semiconductor ecosystem, absolutely.

https://exhibits.stanford.edu/silicongenesis/catalog/cj789gh7170

Here are notes from the interview:

Don Brooks, former Senior Vice President of Texas Instruments and President and CEO of Fairchild Industries, discusses his experiences in semiconductor manufacturing.

00:00:00 Interviewer introduces Brooks and his overall career.

00:01:00 Discussion of going to school at SMU in a co-op program with Texas Instruments (TI), and his experience working in fabs (semiconductor fabrication plant) and with Jack Kilby.

00:05:10 Discusses TI’s attempt to vertically integrate while horizontally expanding the product line, and Intel’s departure from the DRAM business.

00:11:06 Discusses management at TI and leaving Texas Instruments to be President at Fairchild.

00:15:37 Discusses the demise of Fairchild, sale to Schlumberger, subsequent sale to National, and the Fujitsu proposed merger.

00:27:28 Discusses his upbringing and how he came to work at TI.

00:31:35 Discusses his work in venture capital, becoming the President of Taiwan Semiconductor Manufacturing Company (TSMC), and the context for foundry business and product engineering in the semiconductor industry.

00:37:35 Discusses Morris Chang, the politics of being president at TSMC, and Europe, the U.S., and Taiwan’s cultural differences in the semiconductor industry.

00:47:15 Discussion of the lifetime of fabs, the cost of equipment, and profit sharing.

00:57:20 Discusses moving back to the US, working at UMC on the Board of Directors, and focusing on his work in venture capital.

Interviewed by Rob Walker, February 8, 2000, Sunnyvale, California.

In regards to the pivot, this is what I remember, but it is open for debate. When TSMC first started, the transition from using an ASIC company such as VLSI Technology or LSI Logic to using a pure-play foundry was very difficult. A serious amount of foundation IP, PDKs, and customer-owned tooling (COT) had to be in place before a fabless company could design to a new process.

When TSMC first arrived, a pure-play foundry was a very difficult sell to chip designers since there was no real ecosystem to support them. IDMs were the first targets since they had internal EDA and IP groups, but the bigger-margin markets were the emerging fabless companies, what would become the Qualcomms, Nvidias, and Broadcoms of the world.

The first couple of processes TSMC used were licensed from Philips, so some PDKs and IP were available. After that, TSMC developed its own processes in “greenfield” fabs, and the real work began.

Not long after assessing the TSMC sales strategy, Don Brooks sold the TSMC board on the idea of opening up TSMC’s design rules to the EDA and IP companies for quicker and broader adoption of TSMC process technologies. I don’t recall exactly which set of design rules TSMC released first, I believe it was 1.0µm, but I do recall the first commercial EDA/IP company to adopt them: Compass Design Automation, a spin-out of VLSI Technology that was later purchased by Avant! (I worked for Avant!). In fact, my good friend and favorite co-author Paul McLellan, a long-time VLSI Technology employee, was president of Compass.

To make a long story short, not only did all of the EDA and IP companies adopt TSMC PDKs, TSMC’s competitors did as well. A fabless company could design a chip for TSMC and take it to UMC, Chartered (now GF), or SMIC for second-source manufacturing. I experienced this firsthand many times. One tape-out I was involved in originated at TSMC and was manufactured by all four foundries during its lifetime. This “TSMC-like” process development strategy continued until the FinFET era (16nm). The PDK “accessibility” made TSMC what it is today, the highest-margin foundry the world has ever seen. But during the CMOS years (down to 28nm), TSMC’s margins were compressed by the smaller foundries, so this level of openness was a double-edged sword.

The bottom line: Morris Chang’s hands-off management style during Don’s tenure was a good thing. Had Don Brooks not opened up the TSMC design rules, the semiconductor ecosystem might not be what it is today, a true force of nature.

Also Read:

Former TSMC President Don Brooks

The First TSMC CEO James E. Dykes

How Philips Saved TSMC

Morris Chang’s Journey to Taiwan and TSMC

How Taiwan Saved the Semiconductor Industry


Podcast EP181: A Tour of yieldHUB’s Operation and Impact with Carl Moore

Podcast EP181: A Tour of yieldHUB’s Operation and Impact with Carl Moore
by Daniel Nenni on 09-08-2023 at 10:00 am

Dan is joined by Carl Moore, a semiconductor and yield management expert with a career spanning 40 years. Carl’s held technical management positions across product and test engineering, assembly, manufacturing, and design at established semiconductor companies. Carl is passionate about data analytics and has a reputation for building strong customer relationships.

Carl explains how yieldHUB helps its customers improve yield and the benefits and impact of the work. Many aspects of return on investment (ROI) are discussed, along with the resulting efficiency improvements across the entire organization. Carl explains the collaborative nature of yieldHUB and how this benefits its customers.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


SMIC N+2 in Huawei Mate 60 Pro

SMIC N+2 in Huawei Mate 60 Pro
by Scotten Jones on 09-08-2023 at 6:00 am

TechInsights Huawei SMIC

Up until last December I was president and owner of IC Knowledge LLC; at the end of November I sold IC Knowledge LLC to TechInsights. It has been interesting to become an insider at the world’s leading semiconductor reverse engineering and knowledge company. The latest SMIC N+2 analysis is an excellent example of TechInsights’ incredible capabilities in action. One of our salespeople in Asia was able to procure a Huawei Mate 60 Pro and hand-carried it back to the lab in Ottawa. Our analysts have now extracted the processor and started running analysis on it; you can see the announcement about it here.

When we analyzed the previous N+1 device, we found pitches in line with TSMC’s 10nm process but also more advanced features, such as single diffusion breaks and 6-track cells, not seen until TSMC’s 7nm/7nm+ processes. The overall density of the dense logic on N+1 was slightly less than TSMC 7nm, but close, and we called N+1 a 7nm-class device.

When I blogged about N+1 here, I noted that SMIC could further reduce the pitches even without EUV. TSMC’s original 7nm process was done entirely with optical multipatterning; all the pitches were achievable with double patterning except the fin pitch, which required quadruple patterning. SMIC should be able to produce the same pitches without EUV.

Now we have the N+2 device in the early stages of analysis.

The N+2 Contacted Poly Pitch (CPP) and Metal 2 Pitch (M2P) are both tighter than on N+1 but not as tight as TSMC 7nm; CPP in particular is relaxed relative to TSMC 7nm. CPP is made up of the gate length (Lg), the contact width (Wc), and the gate-to-contact spacer thickness (Tsp). Lg is limited by leakage, Wc by parasitic resistance, and Tsp by parasitic capacitance. This indicates to me that SMIC is still struggling to achieve low leakage and low parasitic resistance and capacitance; M2P is much closer to TSMC 7nm. The overall high-density logic transistor density for N+2 is intermediate between TSMC 7nm and 7nm+, making it a solid 7nm process. There is even some room to further shrink the pitches with double patterning to achieve something along the lines of TSMC 6nm densities in a future process (N+3?).
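The CPP decomposition above can be sketched as a simple sum. This is a minimal illustration only: the function name and the dimension values below are hypothetical round numbers for demonstration, not TechInsights measurements, and it assumes one gate-to-contact spacer on each side of the gate.

```python
# Hypothetical sketch of how Contacted Poly Pitch (CPP) decomposes.
# Values are illustrative round numbers, not measured data.

def contacted_poly_pitch(lg_nm: float, wc_nm: float, tsp_nm: float) -> float:
    """CPP = gate length + contact width + a spacer on each side of the gate.

    lg_nm  -- gate length (Lg), limited by leakage
    wc_nm  -- contact width (Wc), limited by parasitic resistance
    tsp_nm -- gate-to-contact spacer thickness (Tsp), limited by
              parasitic capacitance
    """
    return lg_nm + wc_nm + 2 * tsp_nm

# Illustrative dimensions in nanometers (made up, not from the analysis):
cpp = contacted_poly_pitch(lg_nm=20, wc_nm=16, tsp_nm=10)
print(cpp)  # 56
```

The point of the decomposition is that shrinking CPP requires shrinking at least one of the three components, each of which is held back by a different electrical limit, which is why a relaxed CPP suggests device-level, not patterning, constraints.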

N+2 is an incremental improvement over N+1, moving from a borderline 7nm process to a solid 7nm process. The process is still within the limits of what optical double patterning can achieve and even has some room for additional shrinks.

To get more details and continue to follow this story as it unfolds, please go here.

I would like to thank Rajesh Krishnamurthy for helpful discussions and the whole TechInsights team for their outstanding work on this analysis.

Also Read:

ASML Update SEMICON West 2023

Intel Internal Foundry Model Webinar

Applied Materials Announces “EPIC” Development Center


Podcast EP180: A New Silicon Catalyst Incubator Program in the UK with Sean Redmond

Podcast EP180: A New Silicon Catalyst Incubator Program in the UK with Sean Redmond
by Daniel Nenni on 09-07-2023 at 6:00 pm

Dan is joined by Sean Redmond, managing partner for Silicon Catalyst. Sean has nearly 40 years of experience in the semiconductor and software industries, including stints at VLSI Technology, Verisity Design, Cadence, ARC, and others. Sean has recently worked closely with the UK government on industrial digital strategy, co-chairing the ElecTech Council, and became a core member of the Secretary of State’s industrial digital leadership team.

Sean describes a unique collaboration between the UK government and Silicon Catalyst to create an incubator for pre-seed/early-stage semiconductor startups. The pilot program will be run by SiliconCatalyst.UK, an experienced start-up accelerator, and will nurture semiconductor start-ups from across the UK through an extensive nine-month incubator program. Sean covers all aspects of the new program, which will be starting soon.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.