Semiconductor Devices: 3 Tricks to Device Innovation
by Milind Welling on 09-22-2023 at 8:00 am

The semiconductor industry’s incredible juggernaut has been powered by device innovations at its very core. Present-day enterprises face immense competitive pressure, and innovation is a key differentiator in maintaining their competitive edge [1].

“It wasn’t that Microsoft was so brilliant or clever in copying the Mac, it’s that the Mac was a sitting duck for 10 years. That’s Apple’s problem: Their differentiation evaporated.” – Steve Jobs [1] in The Rolling Stone interview (1994)

Interestingly, despite innovation being such a key to differentiation and value creation, there is a wide range in how quickly new innovations are adopted, as seen in Fig. 1 [2].

Figure 1: Typical diffusion and adoption of innovation into industry

Having established how important innovation can be to successful enterprises, let us focus now on the topic at hand – what are those 3 tricks to semiconductor device innovation? Well, sorry to disappoint you, but there are actually no easy tricks. And now that I have your attention, let me start by debunking a few myths.

Device Innovation: some True Lies

First, there is nothing magical about semiconductor device innovation. A second myth is that innovation is some sort of a Eureka moment. For thousands of years, humans have believed in the fallacy that innovation occurs like a lightning-strike of brilliance. It is generally believed that: 1) a person must passively wait for breakthrough ideas to hit and cannot take direct control of the creative process; 2) any person lucky enough to receive a significant idea must grab the most benefit possible because lightning-strikes of brilliance may never reoccur; 3) finally, serial innovators and inventive geniuses are rare talents. All these concepts are flawed. Much like other innovations, semiconductor device successes have instead been a product of structured innovation at its best.

Device Innovation: the gift that keeps on giving

Device innovation is often a virtuous cycle of continuous co-optimization of 3 key ingredients: materials, stack/device structure and device electrical operation. You start with materials, which determine what is possible. Then you optimize the device structure to build what is manufacturable, and finally you tune the electrical operation to ensure that the device stays reliable over its product life. As an example, you can breathe on a wafer and create a native oxide device that can even switch between 2 memory states. The question is whether it will switch reliably over a billion-plus cycles and meet present-day performance, manufacturability and cost criteria. A structured innovation cycle of co-optimization of these 3 criteria is the methodology that needs to be repeated diligently until the device Key Parametric Indices (KPIs) are met. As an example, Intermolecular has successfully demonstrated use of its device innovation capabilities in such a virtuous cycle to realize many leading-edge memory and selector devices across various materials systems. This wheel of materials and device innovation is illustrated in Fig. 2 below:

Figure 2: Device innovation powered by co-optimization of materials, electrical operation, and device structure to meet device KPIs.

Its very foundation is the co-optimization of materials, device structure, and device operation, which is achieved by rapid combinatorial depositions, advanced physical and electrical characterization, data analysis to assess device performance and reliability, and an ongoing understanding of the mechanisms that drive device behavior.

Semiconductor Devices: The heat is on

Not so surprisingly, technical progress in the semiconductor industry follows a “method to the madness”. For semiconductor products, it starts with an application that drives software and system architecture, which in turn drives chip architecture to devices to process integration to materials. For successful device innovations, it is essential to understand the metrics that drive device behavior. Emerging and leading-edge logic and memory devices are a co-optimization of, and improvement on, the parameters listed in Table 1:

Table 1: Exemplary leading-edge parameters (KPIs) that drive device innovations

Device innovation: a case study is worth a thousand words

Next, let us review a case study that further underscores the co-optimization methodology described above, applied to achieve the KPIs in Table 1. A few years ago, a leading-edge memory maker approached Intermolecular to find a selector device that would have best-in-class performance for all the parameters in Table 1’s emerging selectors column. The material system for this Ovonic Threshold Switch (OTS) diode was expected to be a multinary chalcogenide of 3 to 7 elements. While each of those parameters is extremely difficult to meet, a major “stone wall” was the trade-off between leakage (IOFF) and thermal stability (Fig. 3).

Figure 3: Fundamental leakage versus thermal stability trade-off for OTS selectors

The technical team took on this challenge by simultaneously co-optimizing the materials system (based on coordination number and electrical bandgap), carefully managing electrical compliance during operation, leveraging the device structure’s thermal conduction properties, building an understanding of the underlying mechanisms and, last but not least, applying machine learning to leverage the diversity and quantity of the rich data set. As a result, over a 3-year period, as shown in Fig. 4, the device’s multinary material system was significantly improved to address device-level KPIs such as leakage and thermal stability, as well as a chip physical design parameter, threshold voltage (VTH) drift.

Figure 4: Optimizing multinary elements (A to E) for device and design KPIs

Systems to chip design to devices to materials: that is how the cookie crumbles

With device innovation at its core, present-day technology development focuses on emerging methodologies that extend device and materials technology co-optimization even further, to higher orders of abstraction. Such leading-edge technology development strategies include design interdependencies, aka DTCO (Design – Technology Co-optimization), and some stretch the optimization further to product- and system-level considerations with STCO (System – Technology Co-optimization). Following is what our key leading-edge customers are highlighting as their focus areas. Fig. 5 shows TSMC’s [3] estimates of the increasing DTCO contribution at each node versus traditional scaling that is independent of co-optimization with chip design.

Figure 5: Growing contribution of DTCO vs technology node

Similarly, Micron [4] expects improved R&D efficiency and value to its end customers when a holistic approach to technology optimization includes chip design, packaging and product-level interdependencies, as seen in Fig. 6.

Figure 6: Holistic approach that includes product, design, package, process and device interdependencies for improved R&D efficiency and value generation

Semiconductor Devices: to infinity and beyond

The global semiconductor industry is anticipated to grow to US$1 trillion in revenues by 2030, doubling in this decade [5]. This will be enabled by innovations in devices and materials at its core. The roadmaps of the electronics industry underscore this target-rich landscape and a bright future for semiconductor devices. So don’t stop thinking about tomorrow, and be a device innovator now and forever. Each one of us will be a contributor to this incredible progress, either as an innovator, a maker or perhaps even a user of semiconductor devices. As these emerging devices not just survive but actually thrive, I invite you to embrace structured innovation and leave Eureka to just being a coastal city in Humboldt County, California.

References:
  1. Steve Jobs: The Rolling Stone Interview, 1994, https://www.rollingstone.com/culture/culture-news/steve-jobs-in-1994-the-rolling-stone-interview-231132/
  2. Silicon Valley Engineering Council (SVEC) Journal, Vol. 2, 2010, pp. 38-71.
  3. Mark Liu, TSMC, ISSCC (International Solid-State Circuits Conference), 2021.
  4. S. DeBoer, Micron, Tech Roadmap, November 2020.
  5. https://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/articles/semiconductor-industry-outlook.html

About EMD Electronics
EMD Electronics is the U.S. and Canada electronics business of Merck KGaA, Darmstadt, Germany. EMD Electronics’ portfolio covers a broad range of products and solutions, including high-tech materials and solutions for the semiconductor industry as well as liquid crystals and OLED materials for displays and effect pigments for coatings and cosmetics. Today, EMD Electronics has approximately 2,000 employees around the country, with regional offices in Tempe (AZ) and Philadelphia (PA).
For more information, please visit www.emd-electronics.com.

About Merck KGaA, Darmstadt, Germany
Merck KGaA, Darmstadt, Germany, a leading science and technology company, operates across life science, healthcare, and electronics. More than 64,000 employees work to make a positive difference to millions of people’s lives every day by creating more joyful and sustainable ways to live. From providing products and services that accelerate drug development and manufacturing as well as discovering unique ways to treat the most challenging diseases to enabling the intelligence of devices – the company is everywhere. In 2022, Merck KGaA, Darmstadt, Germany, generated sales of € 22.2 billion in 66 countries. The company holds the global rights to the name and trademark “Merck” internationally. The only exceptions are the United States and Canada, where the business sectors of Merck KGaA, Darmstadt, Germany, operate as MilliporeSigma in life science, EMD Serono in healthcare, and EMD Electronics in electronics. Since its founding in 1668, scientific exploration and responsible entrepreneurship have been key to the company’s technological and scientific advances. To this day, the founding family remains the majority owner of the publicly listed company.

Also Read:

Investing in a sustainable semiconductor future: Materials Matter

LIVE WEBINAR: New Standards for Semiconductor Materials

Step into the Future with New Area-Selective Processing Solutions for FSAV


CEO Interview: Dr. Tung-chieh Chen of Maxeda
by Daniel Nenni on 09-22-2023 at 6:00 am

Dr. Tung-chieh Chen has been serving as the CEO of Maxeda Technology since 2015. In 2021, at DAC, the largest EDA conference, Dr. Chen was honored with the Under-40 Innovators Award in recognition of his exceptional achievements and contributions to EDA development. He is the infrastructure designer of NTUplace, a circuit placer that has won three top EDA contests: DAC, ICCAD, and ISPD.

In addition to his role at Maxeda, Dr. Chen has held positions as an R&D manager at SpringSoft and Synopsys. He has authored more than 30 EDA papers and holds 14 U.S. patents. Dr. Chen received his Ph.D. degree in Electrical Engineering and Computer Science (EECS) from National Taiwan University (NTU).

Tell us about Maxeda Technology
Maxeda Technology envisions pioneering AI-assisted EDA solutions for the optimization of next-generation chip design. Through close collaboration with partners, we develop validated floorplan and dataflow-analysis tools to support IC design engineers in overcoming design challenges, especially as the design complexity increases along with the macro quantities within the chip. Our clients include several global top 10 fabless companies and some well-known IC design service providers.

What keeps your customers up at night? What problems are you solving?
The semiconductor industry’s growth is driven by the chip requirements of AI/5G and high-performance computing applications, especially as Generative AI attracts increasing attention. Those chips contain millions of components, which makes the designs too complex for even experienced engineers to generate.

Therein lies the challenge: the optimized placement of these components is difficult given the huge number of possible placement states. More iterations are therefore required to optimize the design, and this is incredibly time-consuming.

As a consequence, a growing number of IC designers are now considering the incorporation of AI technology, particularly reinforcement learning, in their chip floorplan design process.

Even for a tech giant like Google, it is challenging to integrate Reinforcement Learning into the chip design flow. One reason is the need for more than 100,000 iterations to complete the learning process. Therefore it is an extremely time-consuming method that makes heavy demands on machine resources.

What is the solution Maxeda provided to address the problem and how do you differentiate?
A completely new approach is necessary to apply Reinforcement Learning to chip floorplan design. What is needed is ultra-fast placement and routing, ultra-fast reward calculation, and a high correlation to final results. Maxeda is collaborating with MediaTek and NTU to develop the MaxPlace™ RL (Reinforcement Learning) Reward Platform to address these demands. Through expedited placement and its strong correlation with rewards, reinforcement learning has proven highly effective in optimizing chip performance, reducing the physical design process from months to just days. What sets this platform apart is its demonstrated performance in actual production.
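To make the requirement concrete, here is a minimal, hypothetical sketch (in Python) of the loop structure such a reward platform sits inside. The function and variable names are illustrative assumptions, not Maxeda APIs, and random sampling stands in for a real learned policy; the point is simply that the fast reward estimate runs on every iteration, so its speed and its correlation with final results dominate total turnaround time.

```python
# Hypothetical sketch, not MaxPlace/Maxeda code: why reward-evaluation speed
# dominates an RL-style floorplanning loop. Random sampling stands in for a
# learned policy; a toy proxy metric stands in for fast place-and-route.

import random

def fast_reward(order):
    """Stand-in for an ultra-fast placement/routing estimate.
    It must correlate strongly with final post-route quality to be useful."""
    # Toy proxy: penalize how far each macro sits from its "ideal" slot.
    return -sum(abs(pos - macro) for pos, macro in enumerate(order))

macros = list(range(16))              # toy macro identifiers
best_order, best_score = None, float("-inf")

for iteration in range(10_000):       # real flows may need ~100,000 iterations
    candidate = macros[:]
    random.shuffle(candidate)         # placeholder for sampling from a policy
    score = fast_reward(candidate)    # evaluated every iteration: must be fast
    if score > best_score:            # placeholder for a policy-update step
        best_order, best_score = candidate, score
```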

Existing commercial place and route solutions, which take a completely different approach by aiming for precise placement and routing to meet chip tape-out criteria, are not well-suited for reinforcement learning due to their resource-intensive nature. Hence, no other vendor provides such an effective method for reward calculation.

Figure 1: The MaxPlace™ RL Reward Platform optimizes chip floorplan design.

What are Maxeda’s upcoming plans?
As an EDA company with a vision to develop innovative solutions, Maxeda Technology continues to collaborate closely with partners to develop validated AI-assisted EDA solutions. In Q3 of 2023, we proudly released DesignPlan™, an SoC floorplan exploration tool designed to facilitate block outline and location exploration during the early stages of chip design. Furthermore, we are targeting the development of a completely new AI-assisted verification tool by the end of 2024.

Moreover, we are actively partnering with tier-one foundries to meet the evolving demands of advanced process nodes and navigate the challenges of the post-Moore era. We aim to expand our success from Taiwan to customers worldwide by leveraging this robust partner ecosystem.

Also Read:

CEO Interview: Koen Verhaege, CEO of Sofics

CEO Interview: Harry Peterson of Siloxit

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success


Nvidia Number One in 2023
by Bill Jewell on 09-21-2023 at 8:00 pm

Nvidia will likely become the largest semiconductor company for the year 2023. We at Semiconductor Intelligence (SC-IQ) estimate Nvidia’s total 2023 revenue will be about $52.9 billion, passing previous number one Intel at an estimated $51.6 billion. Nvidia’s 2023 revenue will be almost double its 2022 revenue on the strength of its processors for artificial intelligence (AI). Intel has been the top semiconductor company for most of the last twenty-one years – except for 2017, 2018 and 2021 when Samsung was number one.

According to its website, Nvidia was founded 30 years ago, in 1993, to create 3D graphics ICs for gaming and multimedia. It created the graphics processing unit (GPU) in 1999. Nvidia became involved in artificial intelligence (AI) in 2012. The company went public in 1999. Its revenue for fiscal 1999 was $158 million. Three years later its revenue was $1,369 million, more than an eight-fold increase. In fiscal 2023, which ended in January, its $27 billion in revenue was split between $15.1 billion in compute & networking and $11.9 billion in graphics.

Despite the fast pace of the semiconductor industry and the numerous startup companies, the top ten companies in 2023 have all been in business at least 30 years. Nvidia is the youngest at 30. Number four Broadcom Inc. is the result of Avago Technologies acquiring Broadcom Corporation in 2015. However, the original Broadcom Corporation was founded 32 years ago. Avago was a spin-off of Hewlett-Packard which entered the semiconductor business 52 years ago.

38-year-old Qualcomm grew to number five primarily through cellphone ICs and licensing revenues. Only Qualcomm’s IC revenues are included in the rankings. Number ten STMicroelectronics was formed in 1987 through the merger of SGS Microelettronica of Italy with Thomson Semiconducteurs of France. The semiconductor businesses of SGS and Thomson both date back to the 1970s.

Two of the top ten companies were among the industry pioneers about 70 years ago. Texas Instruments was founded in 1930 and entered the semiconductor business in 1954. Infineon Technologies was originally part of Siemens AG, which was founded in 1847. Siemens began producing semiconductors in 1953. Infineon was spun out as a separate company in 1999.

The two South Korean companies, Samsung Electronics and SK Hynix, have over 40 years of semiconductor sales. They became dominant in the memory business after it was largely abandoned by U.S. and Japanese companies (except Micron Technology). SK Hynix was originally Hyundai Electronics which began making semiconductors in 1983. Hyundai merged with LG Semiconductor in 1999 to form Hynix, later SK Hynix.

Intel started 55 years ago and originally sold memory devices. AMD began 54 years ago producing logic ICs. Today the two companies primarily sell microprocessors, together accounting for over 95% of the market for computer microprocessors.

The relative stability of the top semiconductor companies can be seen by comparing the 2023 top ten with 1984, 39 years ago and the year the principal of Semiconductor Intelligence began in semiconductor market analysis. Of the top ten semiconductor companies in 1984, most are still in business today in one form or another. TI was number one in 1984. Since then, TI has narrowed its focus to become primarily an analog company. Number two Motorola split off its discrete business as ON Semiconductor in 1999. ON is now an $8 billion company and acquired industry pioneer Fairchild Semiconductor in 2016. Motorola spun off its IC business as Freescale Semiconductor in 2004. NXP Semiconductors was split off from number seven Philips in 2006. Freescale merged with NXP in 2015. NXP is currently a $13 billion company. Number five National Semiconductor was acquired by TI in 2011. Intel and AMD were number seven and eight, respectively, in 1984. They will be number two and number six in 2023.

Japanese companies were strong in the semiconductor industry in most of the 1980s and 1990s, especially in memory. They were all large, vertically integrated companies. Beginning in the late 1990s these companies began spinning off their semiconductor operations. Renesas Electronics was formed by the merger of the non-memory operations of Hitachi, Mitsubishi, and NEC. Renesas is now a $13 billion company. NEC and Hitachi split off their DRAM businesses in 1999 to form Elpida Memory. Elpida was acquired by Micron Technology in 2013. Toshiba spun off its flash memory business as Kioxia in 2016. Kioxia had over $11 billion in revenue in 2022. Toshiba continues to provide primarily discrete semiconductor devices. Fujitsu divested its IC foundry business in 2014 which was later acquired by UMC. Fujitsu formed a joint venture with AMD for flash memory, Spansion. Spansion merged with Cypress Semiconductor in 2014 and Cypress was acquired by Infineon in 2020.

The relative stability of the semiconductor industry is demonstrated by the market shares of the top ten companies in 1984 and 2023. In 1984 TI had a 9.3% share. In 2023 Nvidia will have about a 10.6% share. The combined market share of the top ten companies in 1984 was 63%. In 2023 it will be about 62%. Although the top companies are relatively stable, the industry has grown from $26 billion in 1984 to $500 billion in 2023, almost a 20-fold increase.

A significant trend since the 1980s has been the rise of fabless semiconductor companies. In 1984 all the top companies had their own wafer fabs. In 2023, three of the top ten (Nvidia, Broadcom and Qualcomm) were founded as fabless companies. AMD became fabless in 2008 by spinning off its wafer fabs to what is now GlobalFoundries. Intel, TI, Infineon, and STMicroelectronics all use outside foundries to provide some of their semiconductor manufacturing. The rise of fabless companies was enabled by the 1987 founding of the major wafer foundry TSMC, which currently has over 50% of the foundry market. Other significant wafer foundries are Samsung, GlobalFoundries, UMC, and SMIC.

Also Read:

Turnaround in Semiconductor Market

Has Electronics Bottomed?

Semiconductor CapEx down in 2023


Cadence Tensilica Spins Next Upgrade to LX Architecture
by Bernard Murphy on 09-21-2023 at 6:00 am

When considering SoC architectures it is easy to become trapped in simple narratives. These assume the center of compute revolves around a central core or core cluster, typically Arm, more recently perhaps a RISC-V option. Throw in an accelerator or two and the rest is detail. But for today’s competitive products that view is a dangerous oversimplification. Most products must tune for application-dependent performance, battery life, and unit cost. In many systems general purpose CPU cores may still manage control; however, the heavy lifting for the hottest applications has moved to proven mainstream DSPs or special purpose AI accelerators. In small, price-sensitive, power-sipping systems, DSPs can also handle control and AI in one core.

When only a DSP can do the job

While general purpose CPUs or CPUs with DSP extensions can handle some DSP processing, they are not designed to handle the high-throughput streaming data flows common in a wide range of communications protocols, high-quality audio applications, high-quality image signal processing, safety-critical radar and lidar processing, or the neural network processing common in object recognition and classification.

DSPs natively support the fixed- and floating-point arithmetic essential for handling the analog values that dominate signal processing, and they support massively parallel execution pipelines to accelerate the complex computation through which these values flow (think FFTs and filters, for example) while also supporting significant throughput for streaming data. Yet these DSPs are still processors, fully software programmable, therefore retaining the flexibility and futureproofing that application developers expect. Which is why, after years of Arm embedded processor ubiquity and the emerging wave of RISC-V options, DSPs still sit at the heart of devices you use every day, including communications, automotive infotainment and ADAS, and home automation. They also support the AI-powered functions within many compact power-sensitive devices – smart speakers, smart remotes, even smart earbuds, hearing aids, and headphones.

The Tensilica LX series and LX8

The Tensilica Xtensa LX series has offered a stable DSP platform for many years. A couple of stats that were new to me are that Tensilica counts over 60 billion devices shipped around their cores and they are #2 in processor licensing revenue (behind Arm), reinforcing how dominant their solutions are in this space.

Customers depend on the stability of the platform, so Tensilica evolves the architecture slowly; the last release, LX7, was back in 2016. As you might expect, Tensilica ensures that platforms remain compatible with all major OSes, debug tools and ICE solutions, supported by an ecosystem of third-party software/dev tools. The ISA has been extensible from the outset, long before RISC-V emerged, while offering the same opportunities for differentiation that are now popular in RISC-V. The platform is aimed very much at embedded applications, delivering high performance at low power.

The latest version in this series, LX8, was released recently and adds two major features to the architecture in support of growing intelligence at the edge: a new L2 memory subsystem and an integrated DMA. I always like looking at features like this in terms of how they enable larger system objectives, so here is my take.

First, the L2 cache will improve performance on L1 misses, which should translate to higher frames-per-second rates for object recognition applications, as one example. The L2 can also be partitioned into cache and fixed memory sections, offering application flexibility by optimizing the L2 memory for a variety of workloads. The integrated DMA, among other features, supports 1D, 2D and 3D transfers, very important in AI functions. 1D could support a voice stream, 2D an image, and 3D would be essential for radar/lidar data cubes. This hardware support will further accelerate frame rates. Also, the iDMA in LX8 supports zero-value decompression, a familiar need when transferring trained network weights: significant stretches of values may be zeroed through quantization or pruning and are stored compressed, as something like <12:0> rather than a string of twelve zeroes. This is good for compression, but the expanded structure must be recovered before tensor operations can be applied in inference. Again, hardware assist accelerates that task, reducing latency between updates to the weight matrix.
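To make the zero-run idea concrete, here is a minimal sketch of the kind of decompression the iDMA offloads. The <count:value> pair encoding and the Python names below are purely illustrative assumptions, not the actual LX8 iDMA format or API.

```python
# Illustrative zero-run decompression only; not the actual LX8 iDMA format.
# Weights are stored as (count, value) pairs, so a run of twelve zeroes
# collapses to a single (12, 0) pair instead of twelve stored values.

def decompress(pairs):
    """Expand (count, value) pairs back into a flat list of weights."""
    weights = []
    for count, value in pairs:
        weights.extend([value] * count)
    return weights

# A pruned/quantized weight stream: twelve zeroes compressed to one pair.
compressed = [(1, 0.5), (12, 0), (1, -0.25), (3, 0)]
expanded = decompress(compressed)
assert len(expanded) == 17   # the full structure is recovered for tensor ops
```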

These are not revolutionary changes, but they are essential to product builders who must stay on the leading edge of performance while preserving a low power footprint. Both SK Hynix and Synaptics have provided endorsements. You can read the press release HERE.


Water Sustainability in Semiconductor Manufacturing: Challenges and Solutions
by Kalar Rajendiran on 09-20-2023 at 10:00 am

Water, the planet’s lifeblood, remains a finite and precious resource. The Earth’s total water supply has remained relatively constant over millennia. However, it is the uneven distribution of freshwater and the challenges of providing access to clean water that are causing stress in various parts of the world. Coupled with the growing demands of both human consumption and industrial use, the imperative for quite some time has been to find innovative ways to balance and sustainably manage water.

Industries are significant water users and, consequently, contributors to environmental stress. For example, semiconductor fabs require substantial amounts of water, with some facilities using as much as 460 cubic meters per hour for manufacturing processes. But the semiconductor industry is already a leader in water reclamation and recycling due to its critical need for ultrapure water (UPW). Another reason fabs invest heavily in water recycling is to reduce the demand on freshwater resources and minimize the discharge of pollutants into the environment. This is not to say that the industry does not face challenges in implementing cost-efficient and effective solutions. Continuous innovation is needed to keep up with advances in semiconductor manufacturing processes.

Reclaiming water involves treating and purifying wastewater generated during semiconductor manufacturing processes to restore it to a quality suitable for reuse. Recycling water involves collecting and treating various wastewater streams generated within a facility and then repurposing this treated water for use in other processes or areas within the same facility. Reusing water refers to the practice of using treated wastewater for non-critical purposes unrelated to semiconductor manufacturing processes.

Mettler-Toledo recently published a whitepaper that goes into the details of the challenges faced in reclaiming wastewater and recommends solutions to enable water recycling and reuse.

Challenges to Water Reclamation, Recycling and Reuse

Semiconductors are manufactured in a highly controlled environment that demands ultrapure water (UPW) with extremely low levels of impurities. Semiconductor wastewater is characterized by wide disparities in pH, dissolved oxygen (DO), conductivity, total organic carbon (TOC), suspended solids content, and metallic contamination. Finding the right technology to treat such wastewater and ensuring consistent and reliable operation can be challenging. In addition, the industry faces several unique challenges when attempting to implement water reclamation, recycling, and reuse practices due to its stringent water quality requirements and sensitivity to contamination. Even minor variations in water composition can impact the performance and reliability of the equipment, potentially leading to product defects or yield losses. The semiconductor industry also operates under strict environmental regulations, requiring companies to comply with various standards and guidelines.

Continuous innovation and collaboration with technology providers are key to overcoming these challenges and ensuring sustainable water management in semiconductor manufacturing.

Mettler-Toledo’s Solutions

Mettler-Toledo’s analytical measurements provide the semiconductor industry with the critical sensors needed to help continuously measure and control water quality. Conductivity, TOC, temperature, pH, and DO are all measured and controlled continuously. Continuous, real-time monitoring with multi-parameter analytical process sensors is pivotal in achieving effective measurement and control during the water reclaim process. The following figure shows the typical in-line sensor monitoring and measuring points.

TOC & Conductivity Measurement

Traditionally, semiconductor facilities have used conductivity, pH, and DO to measure and control the waste stream. However, recent advancements in analytical technology, such as Mettler Toledo’s Thornton 6000TOCi Total Organic Carbon sensor and the new UPW UniCond resistivity sensor, have revolutionized the process. TOC measurement is critical for controlling the varying waste streams in real time, as it immediately detects excursions and allows for quick corrective action. There has been a need for improvement and innovation in resistivity monitoring in UPW when it comes to temperature compensation and signal stability. The new UPW UniCond sensor is the breakthrough the industry has been waiting for. It delivers next-level stability and accuracy, well beyond current industry standards for resistivity.

To learn more details, visit www.mt.com/6000TOCi

To learn more details, visit www.mt.com/upwUniCond

Summary

As the semiconductor industry continues to evolve and develop more advanced technologies, the burden on local water resources and support infrastructures intensifies. This not only poses environmental challenges but also impacts the long-term viability of semiconductor manufacturing in water-stressed regions of the world. However, sustainable water management practices and responsible water use can help mitigate these challenges.

Mettler Toledo’s whitepaper provides valuable insights and recommendations to guide this transformation. By prioritizing measurement, control, and improvement in water reclamation, recycling, and reuse, semiconductor manufacturers can reduce their environmental impact, minimize waste, and contribute to a greener future.

Also Read:

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

The TSMC OIP Backstory

Podcast EP182: The Alphacore/Quantum Leap Solutions Collaboration Explained, with Ken Potts and Mike Ingster


Has U.S. already lost Chip war to China? Is Taiwan’s silicon shield a liability?
by Robert Maire on 09-20-2023 at 6:00 am

  • Huawei’s 7NM chip? This wasn’t supposed to happen
  • Are Chips a weapon for U.S. or China? Role reversal?
  • Will Taiwan turn from protected asset to unwanted liability?
  • Are sanctions so porous that US has already lost to China?
While EUV is critical to advanced chips, there are workarounds

Many people either thought or assumed that lacking EUV scanners would act as a complete roadblock for Chinese semiconductor companies seeking to go beyond 14NM technology. After all, this is obviously the case with GlobalFoundries in the US, which, after voluntarily abandoning EUV & R&D, has been stuck in the technological dark ages of 14NM.

This has clearly proven not to be the case, as Huawei has a new 7NM chip that has (surprisingly) shocked many people. Even without EUV, SMIC has been able to do what GlobalFoundries (and others) seemingly can’t: produce 7NM chips.

You don’t need EUV for 7NM

The mistaken assumption on the part of many in the industry and the US government is that blocking access to EUV scanners would by default limit further progress on Moore’s Law beyond 14NM or 10NM. This is patently untrue…..

The reality is quite different. Back when 7NM was being developed, years ago, EUV technology was a lot less certain than it is today. There were still many questions about its readiness for HVM, and whether it would work as needed at the costs hoped for.

All the major chip makers (TSMC, Intel, Samsung, etc.) had a “dual path” approach to 7NM that they worked on in parallel. One path was multi-patterning, using dual and quad patterning without EUV at all, and the other path was using EUV. Work on 7NM processes started way back in about 2013, long before EUV was a settled issue.

Even after EUV was proven as a viable technology, the dirty little secret in the industry is that a number of chip makers still used multi-patterning at 7NM.

Obviously EUV will be the eventual winner as we progress down Moore’s law so everyone wants to get on board and start using it at 7NM and below.

ASML also made a very strong case that EUV was cheaper and it was obviously less complex with fewer steps in the process flow than multi-patterning…..so the choice to transition to EUV seemed clear.

While it’s quite clear that EUV has a better, simpler process flow, we are not so sure about it actually being significantly cheaper, as ASML suggests, since a number of public papers suggest that multi-patterning at 7NM is cheaper (by the time we get to 5NM, EUV is definitely cheaper).

SMIC can produce 7NM without EUV

Given that a lot of engineers have left TSMC to go to SMIC, likely taking with them all that they learned at TSMC, it’s no surprise that SMIC has been able to take the non-EUV fork of the dual path approach. Also, when you look at the cost basis, it’s likely not a significant cost hit to make the chips without EUV. After all, a 193nm scanner is less than a quarter of the cost of an EUV scanner.

An ex-TSMC engineer left TSMC to help SMIC’s 7NM effort

The only thing we don’t know is how good the yields are…..However, with lots of metrology and inspection tools made by KLAC, NVMI, ONTO, etc., which are still shipping into China in huge volumes, they can likely figure out the process over time.

Don’t be surprised when SMIC does 5NM

Yes, you can do 5NM without EUV, which means that SMIC can do 5NM. The process flow does, however, get quite complex, and it will certainly cost more than EUV with likely lower yields. But it is indeed “doable” at some high cost & lower yield.

If you have no other choice and need the technology you will do whatever it takes to get access to that technology.

Given that SMIC has figured out multi-patterning for 7NM they can likely figure it out for 5NM.

Blocking EUV scanners is clearly not enough

SMIC has clearly proven that it can get around the EUV ban with multi-patterning and enough advanced deposition (ALD) tools, etch tools and metrology/inspection tools.

Applied Materials, Lam, KLA and others are still shipping tons of tools to China, which is their largest market by far and growing. As memory has shrunk and TSMC has slowed, China is still buying anything not nailed down and is obviously getting enough advanced dep, etch and metrology tools to do 7NM.

As we have suggested in the past, the current sanctions are likely very porous. The proof of the porosity is SMIC’s ability to do 7NM, which would not be possible without advanced dep, etch & metrology….it’s just that simple.

In many cases older generation tools are simply no longer made by tool makers and current generation tools may be just “software restricted” to older technology nodes. In many cases the difference between an advanced tool and a less capable tool is just a “software switch”.

In lithography there is a clear, crisp line between EUV and 193; in other tools, not so much. As we have mentioned in the past, the only sure way to limit technology is to limit tools to 200mm (8 inch) wafers rather than 300mm, as that restriction is not porous and is easily verifiable.

So if we truly want to limit China we need to get serious about sanctions and not put it all on ASML and the scanners.

It would be a lot easier for China to just develop a new litho tool than to have to replicate litho, dep, etch, metrology and everything else needed to do 7NM, so the real sanction would have to be across the board.

Has the US already lost the Chip war?

If SMIC is at 7NM, they are likely about 5 years or so behind TSMC and maybe a couple of years behind Intel & Samsung. That is already close enough for many applications, such as 5G, and going to 5NM will get them firmly into AI applications.

So if the goal was to keep China out of 5G and AI, by definition, we have already lost the war.

We lost the war due to lack of resolve and bad technology assumptions….

Will Taiwan become a liability?

We have suggested in prior notes that China taking over Taiwan would be a “hollow victory”, as all someone has to do is drop a grenade or satchel charge into the EUV scanners while leaving the fab during an invasion. China would thus be left with useless fabs and a somewhat hollow victory.

We think that logic may have already been turned on its head…..

The real question is: who needs Taiwan more, the US or China? China now has 7NM (not too far behind Intel). They will likely get 5NM in the not too distant future. They can do 5G and AI with that.

Intel isn’t yet doing real AI and doesn’t have 5G like TSMC does. So if Taiwan were to go away tomorrow, the US would have no domestic fabs that could produce a foundry-based AI device, nor would it have a 5G foundry device…..China, in contrast, now has a 5G-capable 7NM process and will probably have an AI-capable 5NM process in the future.

China has been ramping semiconductor capacity in a huge way, while the US still hasn’t figured out who gets CHIPS Act money, TSMC’s Arizona fab is delayed, and Intel doesn’t yet have its foundry act together.

Right now it would be China that would have the advantage in semiconductors. All China would have to do is launch a few low-yield missiles into TSMC’s Taiwan fabs; the US and the rest of the world would be screwed, while China would not be that bad off, as it is essentially cut off from TSMC anyway (so why let the rest of the world get the chips that it can’t have?). So who needs Taiwan more?

Once the fabs are knocked out, so goes the Taiwanese “silicon shield”: with no semiconductors left to protect, Taiwan would become a liability rather than an asset to the US, whose government likely cares less about the Taiwanese people than about the strategic value of semiconductors to the US and global economies.

You may say, “But wait, there’s still Samsung!”….and I would say that Samsung’s fabs are just about within artillery or short-range rocket range of North Korea (China’s puppet & buddy), which would then have leverage similar to China’s, under the control of someone even worse than Xi……

There are not too many good options and no quick fixes; the US is likely a decade or two, or more, away from regaining its long-lost semiconductor independence, even if we tripled the CHIPS Act.

For the politically and intellectually challenged, like Ramaswamy, who think the US will be semiconductor independent by 2028, I have a bridge in Brooklyn for sale, cheap……

The stocks

We think that the latest news out of SMIC increases the odds of sanctions being tightened even further, and not just on ASML, as 7NM has proven that enough sufficiently advanced dep, etch & metrology/inspection equipment is getting into China to produce advanced devices. That means AMAT, KLAC, LRCX in the US and TEL, ASMI and others. We are nearing the one-year anniversary of the October 2022 sanctions and so far it’s a big fail……as SMIC & Huawei have thumbed their noses at the US.

The down cycle is far from over, as TSMC’s recent delay of tools underscores. Memory still sucks; although pricing seems to have bottomed, we are a very, very long way from needing to increase memory chip production.

However, the stocks are still near all-time highs, and the recent ARM IPO was a raging success that likely carried semiconductor valuations, which were already high, even further.

We still see a lot of risk everywhere and not much of it reflected in semiconductor stocks. We think the ARM IPO, while great, was more a sign of “cabin fever” being released on the first big tech IPO in a while, with everyone wanting a piece at any price.

We’ll see if the apparent failure of the sanctions at their one-year anniversary draws any reaction…..and what that may be….

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice; our opinions are very direct and honest and offer an unbiased view compared to other sources.

Also Read:

SMIC N+2 in Huawei Mate Pro 60

ASML-Strong Results & Guide Prove China Concerns Overblown-Chips Slow to Recover

SEMICON West 2023 Summary – No recovery in sight – Next Year?

Micron Mandarin Memory Machinations- CHIPS Act semiconductor equipment hypocrisy


Hyperstone Webinar – There’s More to a Storage System Than Meets the Eye
by Mike Gianfagna on 09-19-2023 at 10:00 am

Founded in 1990, Hyperstone is a fabless NAND flash memory controller company enabling safe, reliable and secure storage systems. The company designs, develops and delivers high-quality, innovative semiconductor solutions to enable its customers to produce world-class products for industrial, embedded, automotive and global data storage applications. With a pedigree like this, you can bet the company has helped a lot of companies navigate the many choices associated with storage systems. Hyperstone will be presenting an informative webinar on the topic; a link to register is included below. Read on to understand why there’s more to a storage system than meets the eye.

The Webinar Presenter

Steffen Allert is the webinar presenter. He heads the global sales organization of Hyperstone, orchestrating the company’s worldwide engagements with clients. For almost two decades he has adeptly bridged the gap between customers and engineers, amassing a profound understanding of the intricate nuances and demands intrinsic to storage design.

This experience has given him many insights into navigating the conversation around trade-offs between reliability, security, performance, price, and endurance, which are critical for customers to ensure an optimal storage module. During the webinar, Steffen shares his substantial experience, explaining how storage solution choices can have a profound impact on the overall success of a product.

Steffen covers many topics, and the actual webinar title provides a clue to the breadth of the discussion.

Issues in Data Storage? Cyber Security, Data Privacy, AI, Boot Storage, IoT, and Mission Critical Data and Autonomous Driving

The Webinar Topics

Steffen frames the initial discussion using the “iceberg” graphic shown above. He touches on the clear choices that lie above the water line:

Performance – This is one of the first criteria. Speed is key, but how long can the module hold that performance?

Price – Another top-of-mind item. But what exactly are you paying for?

As we venture below the water line, the topics become more subtle and far-reaching.

Use Case & Reliability – Where will the storage module be integrated? Into what kind of an application? How much storage is needed? Will it be reading or writing data? Is it in operation 24/7 or only occasionally?

Longevity & Supply – Realistically, how long should the end-application be in operation? Are you looking for long-term chip supply (7+ years) or is 2 years before EOL OK?

Your Data’s Value – Have you considered the true value of your data in relation to security or potential power failures? If your company was hacked, or a power outage resulted in a loss of data, what would the consequences be?

What Are You Willing to Trade Off? – You have heard it before, but you can’t have it all. To achieve the optimal solution, you must be prepared to sacrifice unnecessary functionality, which in turn allows optimization of other features.

With this setup, Steffen goes through several application examples and use cases to illustrate the options available and how to navigate the choices for an optimal result for the specific product and deployment being developed.  Environmental and quality considerations are also discussed.

Steffen provides a very useful set of questions to be asking as you make your storage system choices and a detailed view of the key trade-offs that impact the final product. Anyone who requires an optimized storage system for their product will get substantial benefit from this webinar.

To Learn More

The webinar will be broadcast on Wednesday, Oct 11, 2023 10:00 AM – 10:30 AM Pacific time. You can register for the webinar here. I highly recommend it. Hyperstone also has a rich download library on their website here. You can find lots of great storage design topics to dig into there. Clearly, there’s more to a storage system than meets the eye.

Also Read: 

Selecting a flash controller for storage reliability

CEO Interview: Jan Peter Berns from Hyperstone


Inference Efficiency in Performance, Power, Area, Scalability
by Bernard Murphy on 09-19-2023 at 6:00 am

Support for AI at the edge has prompted a good deal of innovation in accelerators, initially in CNNs, evolving to DNNs and RNNs (convolutional neural nets, deep neural nets, and recurrent neural nets). Most recently, the transformer technology behind the craze in large language models is proving to have important relevance at the edge for better reasons than helping you cheat on a writing assignment. Transformers can increase accuracy in vision and speech recognition and can even extend learning beyond a base training set. Lots of possibilities but an obvious question is at what cost? Will your battery run down faster, does your chip become more expensive, how do you scale from entry-level to premium products?

Scalability

One size might be able to fit all, but does it need to? A voice-activated TV remote can be supported by a CNN-based accelerator, keeping cost down and extending battery life in the remote. Smart speakers must support a wider range of commands, and surveillance systems must be able to trigger on suspicious activity, not a harmless animal passing by; both cases demand a higher level of inference, perhaps through DNNs or RNNs.

Vision transformers (ViT) are gaining popularity through higher accuracy in classification than CNNs. This is further enhanced by the global attention nature of transformer algorithms, allowing them to consider a whole scene in classification. That said, ViT is commonly paired with CNNs since CNNs can recognize objects much faster. Together, performance and vision accuracy demand both a CNN and a ViT. In natural language processing, we have all seen how large language models can provide eerily high accuracy in recognition, now also practical at the edge. For these applications you must use transformer-based algorithms.

But wait… before you can use AI, an edge device employing voice-based control also needs a voice pickup front-end for audio beamforming, noise/echo cancellation, and wake word recognition. Image-based systems need a computer vision front end for image signal processing, de-mosaicing, noise reduction, dynamic range scaling, and so on.

These smart systems demand a lot of functionality, adding complexity and concerns to hardware development, silicon and margin costs, and battery lifetimes, together with software development and maintenance for a family of products spanning a range of capabilities. How do you build a range of edge inference solutions to meet competitive inference rates, cost, and energy goals?

Configurable DSPs plus an optional configurable AI accelerator

Cadence has been in the DSP IP business for a long time, offering among other options their HiFi DSPs for audio, voice, and speech (popular in always-on very low power home and automotive infotainment) and their vision DSPs (used in mobile, automotive, VR/AR, surveillance, and drones/robots). In all of these they have established hardware and software solutions for audio/video pre-processing and AI. Intelligence extends from always-on functions – voice or visual activity detection for example – running at very low power to more complex neural net (NN) models running on the same DSP.

Higher performance recognition or classification requires a dedicated AI engine to run a specialized NN model, offloaded from the DSP processor. Cadence’s NNE 110 core handles full-featured convolutional models to provide this acceleration, supporting up to 256 GOPS per core. They have now announced a next-generation neural net accelerator, the Neo® NPU, raising performance significantly to 80 TOPS per core, also with support for multi-core.

The Neo NPU and the NeuroWeave SDK

Neo NPUs are targeted at a wide range of edge applications: from hearables, wearables and IoT, to smart speakers, smart TVs, AR/VR and gaming, all the way up to automotive infotainment and ADAS.

The hardware architecture for such cores is becoming familiar. A tensor control unit manages accessing models, downloading/uploading data, and feeding operations to the 3D engine for tensor operations or a planar unit for scalar/vector operations. In an LLM the 3D engine might be used for self-attention operations, the planar engine for normalizations. For a CNN, the 3D engine would perform convolution operations and the planar engine would handle pooling.
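As a rough illustration of the two operation classes described above, here is a toy NumPy sketch. It is not the Neo NPU API, and the function names are my own assumptions: it simply pairs a dense convolution, the kind of tensor work a 3D engine targets, with a 2x2 max pooling, the kind of lighter planar work a planar engine targets.

```python
# Toy illustration (NumPy), not Neo NPU code: a tensor-style convolution
# (3D-engine class of work) followed by planar-style 2x2 max pooling
# (planar-engine class of work), as in one CNN layer.

import numpy as np

def conv2d_valid(image, kernel):
    """Naive valid convolution: dense multiply-accumulate work."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2x2(feature_map):
    """2x2 max pooling: lighter elementwise/reduction work."""
    h, w = feature_map.shape
    h, w = h - h % 2, w - w % 2          # trim to even dimensions
    fm = feature_map[:h, :w]
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
pooled = max_pool2x2(conv2d_valid(image, kernel))   # conv -> pool
```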

Control and both engines are closely coupled through unified memory, again common in these state-of-art accelerators, to minimize not only off-chip memory accesses but even out-of-core memory accesses.

The SDK for this platform is called NeuroWeave, providing a unified development kit not only for Neo NPUs but also for Tensilica DSPs and the NNE 110. Scalability is important not only for hardware and models but also for model developers. With the NeuroWeave SDK, model developers have one development kit to map trained models to any of the full range of Cadence DSP/AI platforms. NeuroWeave supports all of the standard (and growing number of) network development interfaces to develop compiled networks, as well as interpreted delegate options such as TensorFlow Lite Micro and the Android Neural Networks API, continuing compatibility with flows for existing NNE 110 users. In all cases, I am told, translation to a target platform is code-free. It is only necessary to dial in optimization options as needed.

Back to efficiency. Cadence has particularly emphasized both power and area efficiency in combined DSP + Neo solutions. In benchmarking (same 7nm process, 1.25GHz clock for Neo, 1GHz for NNE 110), comparing HiFi 5 alone versus HiFi 5 with NNE 110, they show 5X to 60X improvement in IPS (inferences per second) per microjoule, and 5X to 12X on top of that when replacing NNE 110 with Neo. When comparing IPS/mm2 between NNE 110 and Neo, they show an average 2.7X improvement. In other words, you can get much better inference performance at the same energy and in the same area using Neo, or you can get the same performance at lower energy and in a smaller area. Cadence provides lots of knobs to configure both the DSPs and Neo as you tune for IPS, power and area, helping you dial down to the targets you need to meet.

Availability

Cadence already has early customers and is planning their official release for Neo NPU and NeuroWeave in December. You can learn more HERE.


Intel Ushers a New Era of Advanced Packaging with Glass Substrates
by Mike Gianfagna on 09-18-2023 at 10:00 am

Intel recently issued a press announcement that has significant implications for the future of semiconductors.  The release announces Intel’s new glass substrate technology. The headline states: Glass substrates help overcome limitations of organic materials by enabling an order of magnitude improvement in design rules needed for future data centers and AI products. This should definitely get your attention. I had the opportunity to get a pre-briefing that went into a bit of the backstory on this new development. Read on to understand how Intel ushers a new era of advanced packaging with glass substrates and why it matters.

The Briefing

Details say a lot. As I logged into the pre-briefing presentation on Intel’s glass substrate technology, I looked at the attendee list. It was a virtual Who’s Who for just about every high-power market analyst and researcher. When Intel talks, the world listens.  The briefing was presented by Rahul Manepalli, Intel Fellow & Senior Director of Substrate TD Module Engineering. Rahul has almost 24 years of tenure at Intel. During his introduction, it was explained that Rahul and his team are responsible for the development of the next generation of materials, processes and equipment for Intel’s package substrate pathfinding and development efforts. Quite a lot of responsibility. Rahul has a Ph.D. in Chemical Engineering from the Georgia Institute of Technology. He has a substantial command of what’s happening at Intel, and his presentation was simultaneously easy to understand and very rich in technical details. This is a rare set of skills.

The highlights of Rahul’s presentation were:

  • Intel’s breakthrough achievement enables continued scaling and advances Moore’s Law
  • Glass substrates enable an order of magnitude improvement in design rules needed for future data center and AI products
  • Chip architects can pack more “chiplets” in a smaller footprint on one package
  • Improved density and performance properties will lead to lower overall cost and power usage

Glass substrates took center stage for the discussion. The resulting packaging technology will enter the mainstream later this decade. Given the long lead time, don’t underestimate the impact this technology will have. There is currently an electrically functional, assembled MCP test vehicle with three layers of RDL and TGV of 75um. The photo at the top of this post is the test vehicle.

Some Details

It turns out Intel has been leading the way in advanced package design for quite a while. In 1995, the company led the transition to organic substrates. That was followed by Intel’s invention of embedded multi-die interconnect bridge, or EMIB. Intel leads the way again with the introduction of glass core substrates.

Glass core substrates offer substantial improvements in packaging technology when compared with organic substrates. Like organic materials, glass can be fabricated in a variety of sizes. Rahul explained that organic substrates are a composite material. Glass, on the other hand, is a homogenous amorphous material. This allows Intel to tune the properties of the glass substrate to bring it closer to the properties of silicon. This opens up the opportunity for many performance and density enhancements – the order of magnitude improvement in design rules mentioned earlier.

Benefits exist along both electrical and mechanical axes. Rahul provided the following summary:

  • Tunable modulus and CTE closer to silicon → large form factor enabling:
    • Dimensional stability → Improved feature scaling
    • High (~10x) through-hole density → improved routing and signaling
    • Low loss → high speed signaling
    • Higher temperature capability → advanced integrated power delivery

Rahul also shared some more details about the improvements glass substrates deliver over organic:

  • Tolerance for higher temperatures offers 50% less pattern distortion
  • Glass substrates offer ultra-flat surfaces for improved depth of focus in lithography
  • Dimensional stability needed for extremely tight layer to layer interconnect overlay
  • Up to 10x increase in interconnect density possible with glass
  • Improved mechanical properties of glass enable ultra-large form-factor packages with very high assembly yields
  • Glass provides improved flexibility in setting design rules for power delivery and signal routing
  • Ability to seamlessly integrate optical interconnects, as well as embed inductors and capacitors into the glass, thanks to higher-temperature processing
  • Better power delivery solutions while achieving the high-speed signaling needed at much lower power

Glass clearly opens the door to a new level of integration and performance.  Rahul shared some information about how this is all done at Intel. The company’s work on glass goes back a decade. There is a fully integrated glass R&D line with over $1B investment in Chandler, AZ. Intel is working closely with equipment and materials partners to enable the ecosystem. To support demanding AI and data center applications, filled through glass vias with a ~20:1 aspect ratio for 1mm core thickness have been fabricated. As mentioned, there is an electrically functional, assembled MCP test vehicle. And Intel has over 600 inventions related to architecture, process, equipment, and materials. This is an impressive summary.
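
As a quick back-of-the-envelope check of the via geometry those numbers imply (my own arithmetic, assuming the ~20:1 aspect ratio means via depth, i.e. core thickness, to via diameter):

```python
# Back-of-the-envelope: implied via diameter for a ~20:1 aspect ratio through a 1 mm core.
core_thickness_um = 1000   # 1 mm glass core, per the briefing
aspect_ratio = 20          # ~20:1 filled through glass vias

via_diameter_um = core_thickness_um / aspect_ratio
print(f"Implied via diameter: ~{via_diameter_um:.0f} µm")   # ~50 µm
```

That puts the vias in the same ballpark as the 75 µm TGVs on the three-RDL-layer test vehicle mentioned earlier.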

To Learn More

The press release has a lot of good information. In addition, there is a 3.5-minute video on Intel’s packaging pedigree that is definitely worth a look here. And that’s how Intel ushers a new era of advanced packaging with glass substrates.

Also Read: 

How Intel, Samsung and TSMC are Changing the World

Intel Enables the Multi-Die Revolution with Packaging Innovation

Intel Internal Foundry Model Webinar


The TSMC OIP Backstory

The TSMC OIP Backstory
by Daniel Nenni on 09-18-2023 at 6:00 am

TSMC OIP 2023

This is the 15th anniversary of the TSMC Open Innovation Platform (OIP). The OIP Ecosystem Forum will kick off on September 27th in Santa Clara, California and continue around the world for the next two months in person and on-line in North America, Europe, China, Japan, Taiwan, and Israel. These are THE most attended semiconductor ecosystem networking events! I hope to see you there!

For more information check TSMC.com.

Growing up in Silicon Valley with a 40-year career in the semiconductor industry/ecosystem has been an amazing experience. Working with the most intelligent people around the world, solving some of the most complex problems, and seeing the fruits of our labor change the world: there is nothing like being a semiconductor professional.

This next passage is an updated chapter from our book “Fabless: The Transformation of the Semiconductor Industry”. It captures the OIP backstory quite nicely but there is just one thing I would like to add. The amount of money invested by TSMC and the OIP partners in the ecosystem every year is billions of dollars. The total ecosystem investment is most certainly more than a trillion dollars and I must say we certainly are getting our money’s worth, absolutely.

In Their Own Words: TSMC and Open Innovation Platform

TSMC, the largest and most influential pure-play foundry, has many fascinating stories to tell. In this section, TSMC covers some of their basic history and explains how creating an ecosystem of partners has been key to their success, and to the growth of the semiconductor industry.

The history of TSMC and its Open Innovation Platform (OIP®) is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course, ICs started more than 50 years ago at Fairchild, very close to where Google is headquartered today (these things go in circles). The planar process, whereby a wafer (just 1” originally) went through each process step as a whole, led to mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed and started the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of ASICs with LSI Logic and VLSI Technology as the pioneers. This was the first step of separating design from manufacturing. Although the physical design was still done by the semiconductor company, the concept was executed by the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but rather the idea for the design and the responsibility for using it to build a successful business rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacture and design was complete. One missing piece of the puzzle was good physical design tools. Fortunately, Cadence was created in 1988 from the merger of SDA and ECAD (and soon after, Tangent). Cadence was the only supplier of design tools for physical place and route at the time. It was now possible for a system company to buy design tools, design their own chip and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end-product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging and logistics too).

At the time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters for the designers; the design would be given back to the foundry as a GDSII file and a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or from Artisan or other library vendors (bearing an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag between when TSMC wanted to get designs into high volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.

At 65 nm TSMC started the Open Innovation Platform (OIP) program. It began at a relatively small scale, but from 65 nm to 40 nm to 28 nm the amount of manpower involved went up by a factor of 7. By 16 nm FinFET, half of the design effort was IP qualification and physical design because IP is used so extensively in modern SoCs. OIP actively collaborated with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP were ready early. In this way, designs would tape out just in time as the fab was starting to ramp, so that the demand for wafers was well-matched with the supply.

In some ways the industry has come full circle, with the foundry and the design ecosystem together operating as a virtual IDM. The existence of TSMC’s OIP program further sped up disaggregation of the semiconductor supply chain. This was enabled partly by the existence of a healthy EDA industry and an increasingly healthy IP industry. As chip designs had grown more complex and entered the SoC era, the amount of IP on each chip was beyond the capability or the desire of each design group to create. But, especially in a new process, EDA and IP qualification was a problem.

On the EDA side, each new process came with some new discontinuous requirements that required more than just expanding the capacity and speed of the tools to keep up with increasing design size. Strained silicon, high-k metal gate, double patterning and FinFETs each required new support in the tools and designs to drive the development and test of the innovative technology.

On the IP side, design groups increasingly wanted to focus all their efforts on parts of their chip that differentiated them from their competition, and not on re-designing standard interfaces. But that meant that IP companies needed to create the standard interfaces and have them validated in silicon much earlier than before.

The result of OIP has been to create an ecosystem of EDA and IP companies, along with TSMC’s manufacturing, to speed up innovation everywhere. Because EDA and IP groups need to start work before everything about the process is ready and stable, the OIP ecosystem requires a high level of cooperation and trust.

When TSMC was founded in 1987, it really created two industries. The first, obviously, was the foundry industry that TSMC pioneered before others entered. The second was the fabless semiconductor industry, where companies did not need to invest in fabs.

The foundry/fabless model largely replaced the IDM and ASIC models. An ecosystem of co-operating specialist companies innovates fast. The old model of having process, design tools and IP all integrated under one roof has largely disappeared, along with the “not invented here” syndrome that slowed progress, since ideas from outside the IDMs had a tough time penetrating. Even some of the earliest IDMs from the “Real men have fabs” era have gone “fab lite” and use foundries for some of their capacity, typically at the most advanced nodes.

Legendary TSMC Chairman Morris Chang’s “Grand Alliance” is a business model innovation of which OIP is an important part, gathering all the significant players together to support customers—not just EDA and IP, but also equipment and materials suppliers, especially for high-end lithography.

Digging down another level into OIP, there are several important components that allow TSMC to coordinate the design ecosystem for their customers.

  • EDA: the commercial design tool business flourished when designs got too large for hand-crafted approaches and most semiconductor companies realized they did not have the expertise or resources in-house to develop all their own tools. This was driven more strongly in the front end with the invention of ASICs, especially gate arrays, and then in the back end with the invention of foundries.
  • IP: this used to be a niche business with a mixed reputation, but now is very important with companies like ARM, Imagination, CEVA, Cadence, and Synopsys, all carrying portfolios of important IP such as microprocessors, DDRx, Ethernet, flash memory and so on. In fact, third-party IP now makes up over 50%, and sometimes as much as 80%, of the content of a large SoC.
  • Services: design services and other value-chain services calibrated with TSMC process technology help customers maximize efficiency and profit, getting designs into high-volume production rapidly.
  • Packaging: TSMC expanded the OIP ecosystem to include a 3D Fabric Alliance.
  • People: More than 3,000 TSMC employees are part of OIP, plus 10,000 people from the more than 100 OIP partners. The OIP now includes 50,000 IP titles, 43,000 tech files, and 2,800 PDKs.

Processes are continuing to get more advanced and complex, and the size of a fab that is economical also continues to increase. This means that collaboration needs to increase as the only way to both keep costs in check and ensure that all the pieces required for a successful design are ready just when they are needed.

TSMC has been building an increasingly rich ecosystem for over 30 years and feedback from partners is that they see benefits sooner and more consistently than when dealing with other foundries. Success comes from integrating usage, business models, technology and the OIP ecosystem so that everyone succeeds. There are a lot of moving parts that all have to be ready. It is not possible to design a modern SoC without design tools. More and more SoCs involve more and more 3rd party IP, and, at the heart of it all, the process and the manufacturing ramp with its associated yield learning all need to be in place at TSMC.

Bottom line: The OIP ecosystem has been a key pillar in enabling this sea change in the semiconductor industry.

Also Read:

How Taiwan Saved the Semiconductor Industry

Morris Chang’s Journey to Taiwan and TSMC

How Philips Saved TSMC

The First TSMC CEO James E. Dykes

Former TSMC President Don Brooks

The TSMC Pivot that Changed the Semiconductor Industry!