Water Sustainability in Semiconductor Manufacturing: Challenges and Solutions
by Kalar Rajendiran on 09-20-2023 at 10:00 am

Typical in-line sensor monitoring points in the semiconductor industry

Water, the planet’s lifeblood, remains a finite and precious resource. The Earth’s total water supply has remained relatively constant over millennia. However, the uneven distribution of freshwater and the challenge of providing access to clean water are causing stress in various parts of the world. Coupled with the growing demands of both human consumption and industrial use, the imperative for quite some time has been to find innovative ways to balance and sustainably manage water.

Industries are significant water users and, consequently, significant contributors to environmental stress. For example, semiconductor fabs require substantial amounts of water, with some facilities using as much as 460 cubic meters per hour for manufacturing processes. But the semiconductor industry is already a leader in water reclamation and recycling due to its critical need for ultrapure water (UPW). Another reason fabs invest heavily in water recycling is to reduce the demand on freshwater resources and minimize the discharge of pollutants into the environment. This is not to say that the industry does not face challenges in implementing cost-efficient and effective solutions. Continuous innovation is needed to keep up with advances in semiconductor manufacturing processes.

Reclaiming water involves treating and purifying wastewater generated during semiconductor manufacturing processes to restore it to a quality suitable for further use. Recycling water involves collecting and treating various wastewater streams generated within a facility and then repurposing this treated water for other processes or areas within the same facility. Reusing water refers to the practice of using treated wastewater for non-critical purposes unrelated to semiconductor manufacturing processes.

Mettler-Toledo recently published a whitepaper that details the challenges faced when reclaiming wastewater and recommends solutions to enable water recycling and reuse.

Challenges to Water Reclamation, Recycling and Reuse

Semiconductors are manufactured in a highly controlled environment that demands ultrapure water (UPW) with extremely low levels of impurities. Semiconductor wastewater is characterized by wide disparities in pH, dissolved oxygen (DO), conductivity, total organic carbon (TOC), suspended solids content, and metallic contamination. Finding the right technology to treat such wastewater and ensuring consistent and reliable operation can be challenging. In addition, the industry faces several unique challenges when attempting to implement water reclamation, recycling, and reuse practices due to its stringent water quality requirements and sensitivity to contamination. Even minor variations in water composition can impact the performance and reliability of the equipment, potentially leading to product defects or yield losses. The semiconductor industry also operates under strict environmental regulations, requiring companies to comply with various standards and guidelines.

Continuous innovation and collaboration with technology providers are key to overcoming these challenges and ensuring sustainable water management in semiconductor manufacturing.

Mettler-Toledo’s Solutions

Mettler-Toledo’s analytical measurements provide the semiconductor industry with the critical sensors needed to continuously measure and control water quality. Conductivity, TOC, temperature, pH, and DO are all measured and controlled continuously. Continuous, real-time monitoring with multi-parameter analytical process sensors is pivotal to achieving effective measurement and control during the water reclaim process. The figure at the top of this post shows the typical in-line sensor monitoring and measuring points.
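
As a rough illustration of what continuous multi-parameter control can look like on the software side, the hypothetical sketch below checks each in-line measurement against alarm limits and flags an excursion. The parameter set mirrors the ones named above; the structure and limit values are illustrative assumptions, not Mettler-Toledo specifications.

```c
/* Hypothetical sketch: continuous multi-parameter water quality check.
 * Parameters mirror the article (pH, DO, conductivity, TOC, temperature);
 * the alarm limits are illustrative placeholders, not vendor specs. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    const char *name;
    double      value;  /* latest in-line sensor reading */
    double      low;    /* lower alarm limit */
    double      high;   /* upper alarm limit */
} channel_t;

static bool excursion(const channel_t *ch)
{
    return ch->value < ch->low || ch->value > ch->high;
}

int main(void)
{
    /* Example readings and limits -- purely illustrative values */
    channel_t ch[] = {
        { "pH",                    6.8,  6.0,  9.0 },
        { "DO (ppb)",             45.0,  0.0, 50.0 },
        { "Conductivity (uS/cm)",  1.2,  0.0,  5.0 },
        { "TOC (ppb)",            12.0,  0.0, 50.0 },
        { "Temperature (degC)",   24.5, 20.0, 30.0 },
    };

    for (size_t i = 0; i < sizeof ch / sizeof ch[0]; i++) {
        if (excursion(&ch[i]))
            printf("EXCURSION: %s = %.2f -> divert stream, corrective action\n",
                   ch[i].name, ch[i].value);
        else
            printf("OK: %s = %.2f\n", ch[i].name, ch[i].value);
    }
    return 0;
}
```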

TOC & Conductivity Measurement

Traditionally, semiconductor facilities have used conductivity, pH, and DO to measure and control the waste stream. However, recent advancements in analytical technology, such as Mettler-Toledo’s Thornton 6000TOCi Total Organic Carbon sensor and the new UPW UniCond resistivity sensor, have revolutionized the process. TOC measurement is critical for controlling varying waste streams in real time, as it immediately detects excursions and allows for quick corrective action. Resistivity monitoring in UPW has long needed improvement and innovation in temperature compensation and signal stability. The new UPW UniCond sensor is the breakthrough the industry has been waiting for, delivering next-level stability and accuracy well beyond current industry standards for resistivity.

To learn more details, visit www.mt.com/6000TOCi

To learn more details, visit www.mt.com/upwUniCond

Summary

As the semiconductor industry continues to evolve and develop more advanced technologies, the burden on local water resources and supporting infrastructure intensifies. This not only poses environmental challenges but also impacts the long-term viability of semiconductor manufacturing in water-stressed regions of the world. However, sustainable water management practices and responsible water use can help mitigate these challenges.

Mettler-Toledo’s whitepaper provides valuable insights and recommendations to guide this transformation. By prioritizing measurement, control, and improvement in water reclamation, recycling, and reuse, semiconductor manufacturers can reduce their environmental impact, minimize waste, and contribute to a greener future.

Also Read:

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

The TSMC OIP Backstory

Podcast EP182: The Alphacore/Quantum Leap Solutions Collaboration Explained, with Ken Potts and Mike Ingster


Has U.S. already lost Chip war to China? Is Taiwan’s silicon shield a liability?
by Robert Maire on 09-20-2023 at 6:00 am

SMIC 7nm
  • Huawei’s 7NM chip? This wasn’t supposed to happen
  • Are Chips a weapon for U.S. or China? Role reversal?
  • Will Taiwan turn from protected asset to unwanted liability?
  • Are sanctions so porous that US has already lost to China?
While EUV is critical to advanced chips, there are workarounds

Many people either thought or assumed that lacking EUV scanners would act as a complete roadblock to Chinese semiconductor companies seeking to go beyond 14NM technology. After all, this is obviously the case with Global Foundries in the US, which, after voluntarily abandoning EUV and leading-edge R&D, has been stuck in the technological dark ages of 14NM.

This has clearly proven not to be the case, as Huawei has a new 7NM chip which has (surprisingly) shocked many people. Even without EUV, SMIC has been able to do what Global Foundries (and others) seemingly can’t: produce 7NM chips.

You don’t need EUV for 7NM

The mistaken assumption on the part of many in the industry and the US government is that blocking access to EUV scanners would by default limit further progress on Moore’s Law beyond 14NM or 10NM. This is patently untrue…..

The reality is quite different. Back when 7NM was being developed, years ago, EUV technology was a lot less certain than it is today. There were still many questions about its readiness for HVM, and whether it would work as needed at the costs hoped for.

All the major chip makers (TSMC, Intel, Samsung, etc.) had a “dual path” approach to 7NM that they worked on in parallel. One path was multi-patterning, using dual and quad patterning without EUV at all, and the other path was using EUV. Work on 7NM processes started way back in about 2013, long before EUV was a settled issue.

Even after EUV was proven as a viable technology, the dirty little secret in the industry is that a number of chip makers still used multi-patterning at 7NM.

Obviously EUV will be the eventual winner as we progress down Moore’s law so everyone wants to get on board and start using it at 7NM and below.

ASML also made a very strong case that EUV was cheaper and it was obviously less complex with fewer steps in the process flow than multi-patterning…..so the choice to transition to EUV seemed clear.

While it’s quite clear that EUV has a better, simpler process flow, we are not so sure about it actually being significantly cheaper, as ASML suggests; there have been a number of public papers suggesting that multi-patterning at 7NM is cheaper (when we get to 5NM, EUV is definitely cheaper).
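
To make that cost argument concrete, here is a back-of-the-envelope sketch comparing exposure passes for one critical layer. Apart from the single EUV exposure, every number in it (pass count for quad patterning treated as four exposures, relative cost per pass) is a hypothetical assumption chosen for illustration, not published fab economics.

```c
/* Back-of-the-envelope litho cost comparison for one critical layer.
 * All cost figures are hypothetical placeholders, not real fab data. */
#include <stdio.h>

int main(void)
{
    /* Exposure passes per critical layer */
    int euv_passes  = 1;  /* single EUV exposure                  */
    int quad_passes = 4;  /* quad patterning with 193i immersion  */

    /* Assumed relative cost per exposure pass (193i pass = 1.0 unit).
     * The EUV multiplier is an illustrative guess. */
    double cost_193i = 1.0;
    double cost_euv  = 3.0;

    /* Quad patterning also needs extra deposition/etch steps, ignored here. */
    double quad_layer_cost = quad_passes * cost_193i;
    double euv_layer_cost  = euv_passes  * cost_euv;

    printf("Quad patterning:     %.1f cost units per layer\n", quad_layer_cost);
    printf("EUV single exposure: %.1f cost units per layer\n", euv_layer_cost);
    /* Under these assumptions the two come out close, which is why published
     * 7NM comparisons can land on either side of the argument. */
    return 0;
}
```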

SMIC can produce 7NM without EUV

Given that a lot of engineers have left TSMC to go to SMIC, and likely taken with them all that they learned at TSMC, it’s no surprise that SMIC has been able to take the non-EUV fork of the dual path approach. Also, when you look at the cost basis, it’s likely not a significant cost hit to make the chips without EUV. After all, a 193nm scanner is less than a quarter of the cost of an EUV scanner.

Ex TSMC engineer left TSMC to help SMIC 7NM effort

The only thing we don’t know is how good the yields are….. However, with lots of metrology and inspection tools made by KLAC, NVMI, ONTO, etc., which are still shipping into China in huge volumes, they can likely figure out the process over time.

Don’t be surprised when SMIC does 5NM

Yes, you can do 5NM without EUV, which means that SMIC can do 5NM. The process flow does however get quite complex and it will certainly cost more than EUV with likely lower yields. But it is indeed “doable” at some high cost & lower yield.

If you have no other choice and need the technology you will do whatever it takes to get access to that technology.

Given that SMIC has figured out multi-patterning for 7NM they can likely figure it out for 5NM.

Blocking EUV scanners is clearly not enough

SMIC has clearly proven that it can get around the EUV ban with multi-patterning and enough advanced deposition (ALD), etch, and metrology/inspection tools.

Applied Materials, Lam, KLA and others are still shipping tons of tools to China, which is by far their largest market and growing. As memory has shrunk and TSMC has slowed, China is still buying anything not nailed down and is obviously getting enough advanced dep, etch and metrology tools to do 7NM.

As we have suggested in the past, the current sanctions are likely very porous. The proof of the porosity is SMIC’s ability to do 7NM, which would not be possible without advanced dep, etch & metrology…..it’s just that simple.

In many cases older generation tools are simply no longer made by tool makers and current generation tools may be just “software restricted” to older technology nodes. In many cases the difference between an advanced tool and a less capable tool is just a “software switch”.

In lithography there is a clear, crisp line between EUV and 193; in other tools, not so much. As we have mentioned in the past, the only sure way to limit technology is to limit tools to 200MM (8 inch) rather than 300MM, as that restriction is not porous and is easily verifiable.

So if we truly want to limit China we need to get serious about sanctions and not put it all on ASML and the scanners.

It would be a lot easier for China to just develop a new litho tool than to have to copy litho, dep, etch & metrology and everything else needed to do 7NM, so the real sanction would need to be across the board.

Has the US already lost the Chip war?

If SMIC is at 7NM, they are likely about 5 years or so behind TSMC and maybe a couple of years behind Intel & Samsung. Already close enough for many applications such as 5G and going to 5NM will get them firmly into AI applications.

So if the goal was to keep China out of 5G and AI, by definition, we have already lost the war.

We lost the war due to lack of resolve and bad technology assumptions….

Will Taiwan become a liability?

We have suggested in prior notes that China taking over Taiwan would be a “hollow victory”, as all someone has to do is drop a grenade or satchel charge into the EUV scanners while leaving the fab during a Chinese invasion. China would thus be left with useless fabs and a somewhat hollow victory.

We think that logic may have already been turned on its head…..

The real question is who needs Taiwan more? The US or China? China now has 7NM (not too far behind Intel). They will likely get 5NM in the not too distant future. They can do 5G and AI with that.

Intel isn’t yet doing real AI and doesn’t have 5G like TSMC does. So if Taiwan were to go away tomorrow, the US has no domestic fabs that can build a foundry-based AI device, nor does it have a 5G foundry device…..China, in contrast, now has a 5G-capable 7NM process and will probably have an AI-capable 5NM process in the future.

China has been ramping semiconductor capacity in a huge way, the US still hasn’t figured out who gets CHIPS Act money, TSMC’s Arizona fab is delayed, and Intel doesn’t yet have its foundry act together.

Right now it would be China that has the advantage in semiconductors. All China would have to do is launch a few low-yield missiles into TSMC’s Taiwan fabs, and the US and the rest of the world would be screwed, while China would not be that bad off, as it is essentially cut off from TSMC anyway (so why let the rest of the world get the chips that it can’t have). So who needs Taiwan more?

After the fabs are knocked out, so goes the Taiwanese “silicon shield”, as there would be nothing left to protect. Taiwan would become a liability rather than an asset to the US, since the US government likely doesn’t care about the Taiwanese people, just the strategic value of semiconductors to the US and global economies.

You may say….but wait!, there’s still Samsung….and I would say that Samsung’s fabs are just about in artillery or short rocket range of North Korea (China’s puppet & buddy) which would then have similar leverage to China under the control of someone even worse than Xi ……

Not too many good options, no quick fixes; the US is likely a decade or two, or more, away from regaining its long-lost semiconductor independence, even if we tripled the CHIPS Act.

For the politically and intellectually challenged, like Ramaswamy, who think the US will be semiconductor independent by 2028, I have a bridge in Brooklyn for sale, cheap……

The stocks

We think that the latest news out of SMIC increases the odds of sanctions being tightened even further, and not just on ASML, as 7NM has proven that enough advanced dep, etch & metrology/inspection equipment is getting into China to produce advanced devices. That means AMAT, KLAC, LRCX in the US, and TEL, ASMI and others. We are nearing the one-year anniversary of the October 2022 sanctions and so far it’s a big fail……as SMIC & Huawei have thumbed their noses at the US.

The down cycle is far from over, as TSMC’s recent delay of tools underscores. Memory still sucks; although pricing seems to have bottomed, we are a very, very long way from needing to increase memory chip production.

However, the stocks are still near all time highs and the recent ARM IPO was a raging success and likely carried semiconductor valuations which were already high even further.

We still see a lot of risk everywhere and not much of it reflected in semiconductor stocks. We think the ARM IPO, while great, was more a sign of “cabin fever” being released on the first big tech IPO in a while, with everyone wanting a piece at any price.

We’ll see if the apparent failure of sanctions on the one year anniversary has any reaction…..and what that may be….

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

SMIC N+2 in Huawei Mate Pro 60

ASML-Strong Results & Guide Prove China Concerns Overblown-Chips Slow to Recover

SEMICON West 2023 Summary – No recovery in sight – Next Year?

Micron Mandarin Memory Machinations- CHIPS Act semiconductor equipment hypocrisy


Hyperstone Webinar – There’s More to a Storage System Than Meets the Eye
by Mike Gianfagna on 09-19-2023 at 10:00 am

Hyperstone Webinar There's More to a Storage System Than Meets the Eye

Founded in 1990, Hyperstone is a fabless NAND flash memory controller company enabling safe, reliable and secure storage systems. The company designs, develops and delivers high-quality, innovative semiconductor solutions to enable its customers to produce world-class products for industrial, embedded, automotive and global data storage applications. With a pedigree like this, you can bet the company has helped a lot of companies navigate the many choices associated with storage systems. Hyperstone will be presenting an informative webinar on the topic; a link to register is included below. Read on to understand why there’s more to a storage system than meets the eye.

The Webinar Presenter

Steffen Allert

Steffen Allert is the webinar presenter. He heads the global sales organization of Hyperstone, orchestrating the company’s worldwide engagements with clients. For almost two decades he has adeptly bridged the gap between customers and engineers, amassing a profound understanding of the intricate nuances and demands intrinsic to storage design.

This experience has given him many insights into navigating the conversation around trade-offs between reliability, security, performance, price, and endurance – critical for customers to ensure an optimal storage module. During the webinar, Steffen shares his substantial experience, explaining how storage solution choices can have a profound impact on the overall success of a product.

Steffen covers many topics, and the webinar title itself provides a clue about the breadth of the discussion.

Issues in Data Storage? Cyber Security, Data Privacy, AI, Boot Storage, IoT, and Mission Critical Data and Autonomous Driving

The Webinar Topics

Steffen frames the initial discussion using the “iceberg” graphic shown above. He touches on the clear choices that lie above the water line:

Performance – This is one of the first criteria. Speed is key, but how long can the module hold that performance?

Price – Another top-of-mind item. But what exactly are you paying for?

As we venture below the water line, the topics become more subtle and far-reaching.

Use Case & Reliability – Where will the storage module be integrated? Into what kind of an application? How much storage is needed? Will it be reading or writing data? Is it in operation 24/7 or only occasionally?

Longevity & Supply – Realistically, how long should the end-application be in operation? Are you looking for long-term chip supply (7+ years) or is 2 years before EOL OK?

Your Data’s Value – Have you considered the true value of your data in relation to security or potential power failures? If your company was hacked, or a power outage resulted in a loss of data, what would the consequences be?

What Are You Willing to Trade Off? – You have heard it before, but you can’t have it all. To achieve the optimal solution, you must be prepared to sacrifice unnecessary functionality, which in turn allows other features to be optimized.

With this setup, Steffen goes through several application examples and use cases to illustrate the options available and how to navigate the choices for an optimal result for the specific product and deployment being developed.  Environmental and quality considerations are also discussed.

Steffen provides a very useful set of questions to be asking as you make your storage system choices and a detailed view of the key trade-offs that impact the final product. Anyone who requires an optimized storage system for their product will get substantial benefit from this webinar.

To Learn More

The webinar will be broadcast on Wednesday, Oct 11, 2023 10:00 AM – 10:30 AM Pacific time. You can register for the webinar here. I highly recommend it. Hyperstone also has a rich download library on their website here. You can find lots of great storage design topics to dig into there. Clearly, there’s more to a storage system than meets the eye.

Also Read: 

Selecting a flash controller for storage reliability

CEO Interview: Jan Peter Berns from Hyperstone


Inference Efficiency in Performance, Power, Area, Scalability
by Bernard Murphy on 09-19-2023 at 6:00 am

AI graphic

Support for AI at the edge has prompted a good deal of innovation in accelerators, initially in CNNs, evolving to DNNs and RNNs (convolutional neural nets, deep neural nets, and recurrent neural nets). Most recently, the transformer technology behind the craze in large language models is proving to have important relevance at the edge for better reasons than helping you cheat on a writing assignment. Transformers can increase accuracy in vision and speech recognition and can even extend learning beyond a base training set. Lots of possibilities but an obvious question is at what cost? Will your battery run down faster, does your chip become more expensive, how do you scale from entry-level to premium products?

Scalability

One size might be able to fit all, but does it need to? A voice activated TV remote can be supported by a CNN-based accelerator, keeping cost down and extending battery life in the remote. Smart speakers must support a wider range of commands and surveillance systems must be able to trigger on suspicious activity, not a harmless animal passing by; both cases demand a higher level of inference, perhaps through DNNs or RNNs.

Vision transformers (ViT) are gaining popularity through higher classification accuracy than CNNs. This is further enhanced through the global attention nature of transformer algorithms, allowing them to consider a whole scene during classification. That said, ViT is commonly paired with CNNs since CNNs can recognize objects much faster; meeting both performance and vision accuracy goals demands a CNN and a ViT together. In natural language processing, we have all seen how large language models can provide eerily high accuracy in recognition, now also practical at the edge. For these applications you must use transformer-based algorithms.

But wait… before you can use AI, an edge device employing voice-based control also needs a voice pickup front-end for audio beamforming, noise/echo cancellation, and wake word recognition. Image-based systems need a computer vision front end for image signal processing, de-mosaicing, noise reduction, dynamic range scaling, and so on.

These smart systems demand a lot of functionality, adding complexity and concerns around hardware development, silicon and margin costs, and battery lifetimes, together with software development and maintenance for a family of products spanning a range of capabilities. How do you build a range of edge inference solutions to meet competitive inference rates, cost, and energy goals?

Configurable DSPs plus an optional configurable AI accelerator

Cadence has been in the DSP IP business for a long time, offering among other options their HiFi DSPs for audio, voice, and speech (popular in always-on very low power home and automotive infotainment) and their vision DSPs (used in mobile, automotive, VR/AR, surveillance, and drones/robots). In all of these they have established hardware and software solutions for audio/video pre-processing and AI. Intelligence extends from always-on functions – voice or visual activity detection for example – running at very low power to more complex neural net (NN) models running on the same DSP.

Higher performance recognition or classification requires a dedicated AI engine to run a specialized NN model, offloaded from the DSP processor. Cadence’s NNE 110 core handles full-featured convolutional models to provide this acceleration, supporting up to 256 GOPS per core. They have now announced a next-generation neural net accelerator, the Neo® NPU, raising performance significantly to 80 TOPS per core, also with support for multi-core.

The Neo NPU and the NeuroWeave SDK

Neo NPUs are targeted at a wide range of edge applications: from hearables, wearables and IoT, to smart speakers, smart TVs, AR/VR and gaming, all the way up to automotive infotainment and ADAS.

The hardware architecture for such cores is becoming familiar. A tensor control unit manages accessing models, downloading/uploading data, and feeding operations to the 3D engine for tensor operations or a planar unit for scalar/vector operations. In an LLM the 3D engine might be used for self-attention operations, the planar engine for normalizations. For a CNN, the 3D engine would perform convolution operations and the planar engine would handle pooling.
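
As a conceptual sketch of that split, the code below routes layer types to either a tensor ("3D") engine or a planar engine, roughly as described. The enum names and dispatch function are hypothetical illustrations of the architecture, not Cadence's actual programming interface.

```c
/* Conceptual sketch of routing NN operations to the two engine types
 * described above. All names are illustrative, not the Cadence Neo API. */
#include <stdio.h>

typedef enum {
    OP_CONVOLUTION,     /* CNN convolution       */
    OP_POOLING,         /* CNN pooling           */
    OP_SELF_ATTENTION,  /* transformer attention */
    OP_NORMALIZATION    /* layer/batch norm      */
} op_type_t;

typedef enum { ENGINE_3D_TENSOR = 0, ENGINE_PLANAR = 1 } engine_t;

/* Tensor-heavy ops go to the 3D engine; scalar/vector ops to the planar unit. */
static engine_t dispatch(op_type_t op)
{
    switch (op) {
    case OP_CONVOLUTION:
    case OP_SELF_ATTENTION:
        return ENGINE_3D_TENSOR;
    default:
        return ENGINE_PLANAR;
    }
}

int main(void)
{
    op_type_t model[] = { OP_CONVOLUTION, OP_POOLING,
                          OP_SELF_ATTENTION, OP_NORMALIZATION };
    const char *op_name[]  = { "conv", "pool", "attention", "norm" };
    const char *eng_name[] = { "3D tensor engine", "planar engine" };

    /* Hypothetical "tensor control unit" loop: feed each op to an engine. */
    for (int i = 0; i < 4; i++)
        printf("%-9s -> %s\n", op_name[model[i]], eng_name[dispatch(model[i])]);
    return 0;
}
```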

Control and both engines are closely coupled through unified memory, again common in these state-of-art accelerators, to minimize not only off-chip memory accesses but even out-of-core memory accesses.

The SDK for this platform is called NeuroWeave, providing a unified development kit not only for Neo NPUs but also for Tensilica DSPs and the NNE 110. Scalability is important not only for hardware and models but also for model developers. With the NeuroWeave SDK, model developers have one development kit to map trained models to any of the full range of Cadence DSP/AI platforms. NeuroWeave supports all the standard (and growing) set of network development interfaces for developing compiled networks, as well as interpreted delegate options such as TensorFlow Lite Micro and the Android Neural Networks API, continuing compatibility with flows for existing NNE 110 users. In all cases, I am told, translation to a target platform is code-free; it is only necessary to dial in optimization options as needed.

Back to efficiency. Cadence has particularly emphasized both power and area efficiency in combined DSP + Neo solutions. In benchmarking (same 7nm process, with a 1.25GHz clock for Neo and 1GHz for NNE 110), comparing HiFi 5 alone versus HiFi 5 with NNE 110, they show a 5X to 60X improvement in IPS (inferences per second) per microjoule, and 5X to 12X on top of that when replacing NNE 110 with Neo. When comparing IPS/mm2 between NNE 110 and Neo, they show an average 2.7X improvement. In other words, you can get much better inference performance at the same energy and in the same area using Neo, or you can get the same performance at lower energy and in a smaller area. Cadence provides lots of knobs to configure both the DSPs and Neo as you tune for IPS, power and area, helping you dial down to the targets you need to meet.
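
For readers who want to relate those ratios to raw numbers, the sketch below shows how the two efficiency metrics are derived from throughput, power, and area. The input figures are invented placeholders, not Cadence's benchmark results.

```c
/* How the efficiency metrics above are computed. The input numbers are
 * invented placeholders, not Cadence's published benchmark results. */
#include <stdio.h>

int main(void)
{
    double inferences_per_sec = 2000.0;  /* assumed throughput (IPS)  */
    double power_watts        = 0.5;     /* assumed average power (W) */
    double area_mm2           = 1.2;     /* assumed accelerator area  */

    /* 1 W = 1 J/s, so microjoules consumed per second = watts * 1e6 */
    double ips_per_uj  = inferences_per_sec / (power_watts * 1.0e6);
    double ips_per_mm2 = inferences_per_sec / area_mm2;

    printf("IPS/uJ  = %.4f inferences per microjoule\n", ips_per_uj);
    printf("IPS/mm2 = %.1f inferences per second per mm2\n", ips_per_mm2);
    return 0;
}
```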

Availability

Cadence already has early customers and is planning their official release for Neo NPU and NeuroWeave in December. You can learn more HERE.


Intel Ushers a New Era of Advanced Packaging with Glass Substrates
by Mike Gianfagna on 09-18-2023 at 10:00 am

Intel Ushers a New Era of Advanced Packaging with Glass Substrates


Intel recently issued a press announcement that has significant implications for the future of semiconductors.  The release announces Intel’s new glass substrate technology. The headline states: Glass substrates help overcome limitations of organic materials by enabling an order of magnitude improvement in design rules needed for future data centers and AI products. This should definitely get your attention. I had the opportunity to get a pre-briefing that went into a bit of the backstory on this new development. Read on to understand how Intel ushers a new era of advanced packaging with glass substrates and why it matters.

The Briefing

Rahul Manepalli

Details say a lot. As I logged into the pre-briefing presentation on Intel’s glass substrate technology, I looked at the attendee list. It was a virtual Who’s Who of just about every high-powered market analyst and researcher. When Intel talks, the world listens. The briefing was presented by Rahul Manepalli, Intel Fellow & Senior Director of Substrate TD Module Engineering. Rahul has almost 24 years of tenure at Intel. During his introduction, it was explained that Rahul and his team are responsible for the development of the next generation of materials, processes and equipment for Intel’s package substrate pathfinding and development efforts. Quite a lot of responsibility. Rahul has a Ph.D. in Chemical Engineering from the Georgia Institute of Technology. He has a substantial command of what’s happening at Intel, and his presentation was simultaneously easy to understand and very rich in technical details. This is a rare set of skills.

The highlights of Rahul’s presentation were:

  • Intel’s breakthrough achievement enables continued scaling and advances Moore’s Law
  • Glass substrates enable an order of magnitude improvement in design rules needed for future data center and AI products
  • Chip architects can pack more “chiplets” in a smaller footprint on one package
  • Improved density and performance properties will lead to lower overall cost and power usage

Glass substrates took center stage for the discussion. The resulting packaging technology will enter the mainstream later this decade. Given the long lead time, don’t underestimate the impact this technology will have. There is currently an electrically functional, assembled MCP test vehicle with three layers of RDL and 75um TGVs (through-glass vias). The photo at the top of this post is the test vehicle.

Some Details

It turns out Intel has been leading the way in advanced package design for quite a while. In 1995, the company led the transition to organic substrates. That was followed by Intel’s invention of embedded multi-die interconnect bridge, or EMIB. Intel leads the way again with the introduction of glass core substrates.

Glass core substrates offer substantial improvements in packaging technology when compared with organic substrates. Like organic materials, glass can be fabricated in a variety of sizes. Rahul explained that organic substrates are a composite material. Glass, on the other hand, is a homogenous amorphous material. This allows Intel to tune the properties of the glass substrate to bring it closer to the properties of silicon. This opens up the opportunity for many performance and density enhancements – the order of magnitude improvement in design rules mentioned earlier.
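
To see why CTE matching matters for dimensional stability, the sketch below computes the thermal expansion mismatch across a package span (mismatch = ΔCTE × ΔT × length). The CTE values and package dimensions are rough, illustrative assumptions (silicon a few ppm/K, a typical organic build-up substrate in the mid-teens, glass tunable toward silicon), not figures from Intel's announcement.

```c
/* Illustrative thermal-mismatch arithmetic: displacement across a package
 * span = (substrate CTE - silicon CTE) * temperature swing * length.
 * CTE values are rough illustrative assumptions, not Intel data. */
#include <stdio.h>

int main(void)
{
    double cte_si      = 3.0;    /* ppm/K, approx. silicon          */
    double cte_organic = 16.0;   /* ppm/K, typical organic build-up */
    double cte_glass   = 4.0;    /* ppm/K, glass tuned toward Si    */

    double length_mm = 50.0;     /* assumed package span            */
    double delta_t   = 100.0;    /* assumed temperature swing, K    */

    /* ppm * mm = 1e-6 mm = 1e-3 um, so scale by 1e-3 to get micrometers */
    double mismatch_organic_um = (cte_organic - cte_si) * delta_t * length_mm * 1e-3;
    double mismatch_glass_um   = (cte_glass   - cte_si) * delta_t * length_mm * 1e-3;

    printf("Organic vs. silicon: ~%.0f um expansion mismatch\n", mismatch_organic_um);
    printf("Glass vs. silicon:   ~%.0f um expansion mismatch\n", mismatch_glass_um);
    return 0;
}
```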

Benefits exist along both electrical and mechanical axes. Rahul provided the following summary:

  • Tunable modulus and CTE closer to silicon → large form factor enabling:
    • Dimensional stability → Improved feature scaling
    • High (~10x) through-hole density → improved routing and signaling
    • Low loss → high speed signaling
    • Higher temperature capability → advanced integrated power delivery

Rahul also shared some more details about the improvements glass substrates deliver over organic:

  • Tolerance for higher temperatures offers 50% less pattern distortion
  • Glass substrates have ultra-low flatness for improved depth of focus for lithography
  • Dimensional stability needed for extremely tight layer to layer interconnect overlay
  • Up to 10x increase in interconnect density possible with glass
  • Improved mechanical properties of glass enable ultra-large form-factor packages with very high assembly yields
  • Glass provides improved flexibility in setting design rules for power delivery and signal routing
  • Ability to seamlessly integrate optical interconnects, as well as embed inductors and capacitors into the glass at higher temperature processing
  • Better power delivery solutions while achieving high-speed signaling that is needed at much lower power

Glass clearly opens the door to a new level of integration and performance. Rahul shared some information about how this is all done at Intel. The company’s work on glass goes back a decade. There is a fully integrated glass R&D line with over $1B of investment in Chandler, AZ. Intel is working closely with equipment and materials partners to enable the ecosystem. To support demanding AI and data center applications, filled through-glass vias with a ~20:1 aspect ratio at 1mm core thickness have been fabricated. As mentioned, there is an electrically functional, assembled MCP test vehicle. And Intel has over 600 inventions related to architecture, process, equipment, and materials. This is an impressive summary.

To Learn More

The press release has a lot of good information. In addition, there is a 3.5-minute video on Intel’s packaging pedigree that is definitely worth a look here. And that’s how Intel ushers a new era of advanced packaging with glass substrates.

Also Read: 

How Intel, Samsung and TSMC are Changing the World

Intel Enables the Multi-Die Revolution with Packaging Innovation

Intel Internal Foundry Model Webinar


The TSMC OIP Backstory
by Daniel Nenni on 09-18-2023 at 6:00 am

TSMC OIP 2023

This is the 15th anniversary of the TSMC Open Innovation Platform (OIP). The OIP Ecosystem Forum will kick off on September 27th in Santa Clara, California and continue around the world for the next two months in person and on-line in North America, Europe, China, Japan, Taiwan, and Israel. These are THE most attended semiconductor ecosystem networking events! I hope to see you there!

For more information check TSMC.com.

Growing up in Silicon Valley with a 40-year career in the semiconductor industry/ecosystem has been an amazing experience. Working with the most intelligent people around the world, solving some of the most complex problems, and seeing the fruits of our labor change the world – there is nothing like being a semiconductor professional.

This next passage is an updated chapter from our book “Fabless: The transformation of the Semiconductor Industry“. It captures the OIP backstory quite nicely but there is just one thing I would like to add. The amount of money invested by TSMC and the OIP partners in the ecosystem every year is billions of dollars. The total ecosystem investment is most certainly more than a trillion dollars and I must say we certainly are getting our money’s worth, absolutely.

In Their Own Words: TSMC and Open Innovation Platform
TSMC, the largest and most influential pure-play foundry,
has many fascinating stories to tell. In this section, TSMC
covers some of their basic history, and explains how creating
an ecosystem of partners has been key to their success, and to
the growth of the semiconductor industry.

The history of TSMC and its Open Innovation Platform (OIP)® is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course, ICs started 50 years ago at Fairchild, very close to where Google is headquartered today (these things go in circles). The planarization approach, whereby a wafer (just 1” originally) went through each process step as a whole, led to mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed and started the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of ASICs with LSI Logic and VLSI Technology as the pioneers. This was the first step of separating design from manufacturing. Although the physical design was still done by the semiconductor company, the concept was executed by the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but rather the idea for the design and the responsibility for using it to build a successful business rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacture and design was complete. One missing piece of the puzzle was good physical design tools. Fortunately, Cadence was created in 1988 from the merger of SDA and ECAD (and soon after, Tangent). Cadence was the only supplier of design tools for physical place and route at the time. It was now possible for a system company to buy design tools, design their own chip and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end-product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging and logistics too).

At the time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters for the designers; the design would be given back to the foundry as a GDSII file and a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or from Artisan or other library vendors (bearing an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag between when TSMC wanted to get designs into high volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.

At 65 nm TSMC started the Open Innovation Platform (OIP) program. It began at a relatively small scale but from 65 nm to 40 nm to 28 nm the amount of manpower involved went up by a factor of 7. By 16 nm FinFET, half of the design effort is IP qualification and physical design because IP is used so extensively in modern SoCs. OIP actively collaborated with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP were ready early. In this way, designs would tape-out just in time as the fab was starting to ramp so that the demand for wafers was well-matched with the supply.

In some ways the industry has gone a full circle, with the foundry and the design ecosystem together operating as a virtual IDM. The existence of TSMC’s OIP program further sped up disaggregation of the semiconductor supply chain. This was enabled partly by the existence of a healthy EDA industry and an increasingly healthy IP industry. As chip designs had grown more complex and entered the SoC era, the amount of IP on each chip was beyond the capability or the desire of each design group to create. But, especially in a new process, EDA and IP qualification was a problem.

On the EDA side, each new process came with some new discontinuous requirements that required more than just expanding the capacity and speed of the tools to keep up with increasing design size. Strained silicon, high-K metal gate, double patterning and FinFETs each require new support in the tools and designs to drive the development and test of the innovative technology.

On the IP side, design groups increasingly wanted to focus all their efforts on parts of their chip that differentiated them from their competition, and not on re-designing standard interfaces. But that meant that IP companies needed to create the standard interfaces and have them validated in silicon much earlier than before.

The result of OIP has been to create an ecosystem of EDA and IP companies, along with TSMC’s manufacturing, to speed up innovation everywhere. Because EDA and IP groups need to start work before everything about the process is ready and stable, the OIP ecosystem requires a high level of cooperation and trust.

When TSMC was founded in 1987, it really created two industries. The first, obviously, is the foundry industry that TSMC pioneered before others entered. The second was the fabless semiconductor industry where companies did not need to invest in fabs.

The foundry/fabless model largely replaced IDMs and ASIC companies. An ecosystem of co-operating specialist companies innovates fast. The old model of having process, design tools and IP all integrated under one roof has largely disappeared, along with the “not invented here” syndrome that slowed progress, since ideas from outside the IDMs had a tough time penetrating. Even some of the earliest IDMs from the “Real men have fabs” era have gone “fab lite” and use foundries for some of their capacity, typically at the most advanced nodes.

Legendary TSMC Chairman Morris Chang’s “Grand Alliance” is a business model innovation of which OIP is an important part, gathering all the significant players together to support customers—not just EDA and IP, but also equipment and materials suppliers, especially for high-end lithography.

Digging down another level into OIP, there are several important components that allow TSMC to coordinate the design ecosystem for their customers.

  • EDA: the commercial design tool business flourished when designs got too large for hand-crafted approaches and most semiconductor companies realized they did not have the expertise or resources in-house to develop all their own tools. This was driven more strongly in the front-end with the invention of ASIC, especially gate-arrays, and then in the back end with the invention of foundries.
  • IP: this used to be a niche business with a mixed reputation, but now is very important, with companies like ARM, Imagination, CEVA, Cadence, and Synopsys all carrying portfolios of important IP such as microprocessors, DDRx, Ethernet, flash memory and so on. In fact, large SoCs now contain over 50%, and sometimes as much as 80%, third-party IP.
  • Services: design services and other value-chain services calibrated with TSMC process technology help customers maximize efficiency and profit, getting designs into high volume production rapidly.
  • Packaging: TSMC expanded the OIP ecosystem to include a 3D Fabric Alliance.
  • People: More than 3,000 TSMC employees are part of OIP, plus 10,000 people from the more than 100 OIP partners. The OIP now includes 50,000 IP titles, 43,000 tech files, and 2,800 PDKs.

Processes are continuing to get more advanced and complex, and the size of a fab that is economical also continues to increase. This means that collaboration needs to increase as the only way to both keep costs in check and ensure that all the pieces required for a successful design are ready just when they are needed.

TSMC has been building an increasingly rich ecosystem for over 30 years, and feedback from partners is that they see benefits sooner and more consistently than when dealing with other foundries. Success comes from integrating usage, business models, technology and the OIP ecosystem so that everyone succeeds. There are a lot of moving parts that all have to be ready. It is not possible to design a modern SoC without design tools. More and more SoCs involve more and more 3rd party IP, and, at the heart of it all, the process and the manufacturing ramp with its associated yield learning all need to be in place at TSMC.

Bottom line: The OIP ecosystem has been a key pillar in enabling this sea change in the semiconductor industry.

Also Read:

How Taiwan Saved the Semiconductor Industry

Morris Chang’s Journey to Taiwan and TSMC

How Philips Saved TSMC

The First TSMC CEO James E. Dykes

Former TSMC President Don Brooks

The TSMC Pivot that Changed the Semiconductor Industry!


Podcast EP182: The Alphacore/Quantum Leap Solutions Collaboration Explained, with Ken Potts and Mike Ingster
by Daniel Nenni on 09-15-2023 at 10:00 am

Dan is joined by Mike Ingster from Quantum Leap Solutions (QLS) and Ken Potts from Alphacore. Mike and Ken explain how QLS and Alphacore collaborate to provide industry-leading IP and system solutions to their mutual customers. The markets served by both QLS and Alphacore are discussed and the synergies are explained in this informative podcast.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Koen Verhaege, CEO of Sofics
by Daniel Nenni on 09-15-2023 at 6:00 am

CEO Interview Koen Verhaege, CEO of Sofics

Koen Verhaege, CEO of Sofics (“Solutions for ICs”), has developed his career first as an engineer, later as a business leader and entrepreneur, working on IP development and valorisation. Koen’s technical accomplishments, publications and patents are in the field of on-chip ESD protection design.

Today, Koen leverages his problem-solving skills in shaping corporate strategy, evolving business models, and forging strategic deals. His unwavering commitment revolves around delivering Distinct Recurring Value in every facet of Sofics’ operations.

Tell us about your company?
We are and aspire to remain a premium physical design IP provider delivering device and circuit solutions to IC manufacturers and IC design companies by leveraging our extensive on-chip EOS/ESD/EMC IP portfolio.

Our motto is to consistently deliver Distinct Recurring Value in all our engagements, upholding the highest standards of excellence, quality, integrity, and social responsibility. Our strategy involves staying at the forefront of next-gen technologies and IC challenges, attracting high-value customers, and retaining top talent.

What problems are you solving?
We offer on-chip robustness solutions that outperform standard options or achieve normal robustness at lower costs or with less constraints, while ensuring first-time right silicon. Our library includes ESD cells and specialty I/Os and PHYs for all silicon processes from 0.18um down to 3nm today.

What application areas are your strongest?
We excel in addressing over-voltage and over-current hazards for a wide range of applications, as well as high-speed, high-frequency, and automotive designs. We cater to those who design their interfaces and those seeking circuit-ready solutions.

What keeps your customers up at night?
Customers worry about meeting conflicting reliability and normal operation specifications, the availability of foundry or 3rd party IP, and the risks of EOS/ESD/EMC failures or IP infringement causing them delays and impacting ROI and market share.

What does the competitive landscape look like and how do you differentiate?
We focus on providing Distinct Recurring Value and as such we will not engage in direct competition with low-cost service providers.

Our extensive patent portfolio in robust device solutions and interface circuits sets us apart. We lead in technology innovation, distinct recurring value solutions, and we support our solutions across a wide range of processes and foundries.

Half of our engineering time is reserved for and dedicated to research and development: constant innovation – in this, too, we are unique. This ensures that our solutions are ready when customer needs arise.

We are an IP company, not a service company – but don’t be mistaken: we deliver the best service to our customers.

What new features/technology are you working on?
We focus on leading-edge, high-value opportunities. This requires access to technology and to challenges. Our strong relationships with leading foundries, like TSMC and Samsung, grant us early access to new technology. Our customer base, which includes more than 100 fabless companies, secures access to the engineering challenges in future products.

Today, we see opportunities in automotive integrated circuits and legacy compatible interfaces in advanced CMOS and FinFET technologies.

How do customers normally engage with your company?
Customers discover us through our personal business network and via our digital presence (LinkedIn, blog, web-site). We developed a structured onboarding process with fixed-price customization and proven solutions delivery via license agreements, reducing upfront risk for customers while ensuring Distinct Recurring Value for both parties.

How do you make a difference for engineers in this time of resource shortages?
We prioritize Distinct Recurring Value for our employees, offering opportunities to work with advanced technologies and challenges in the IC field. Our engineers are constantly gathering building blocks for a great career, and at Sofics we pave the path for that career.

Our flexible work arrangements and our energy-efficient, employee-designed office provide an ideal hybrid working environment.

Also Read:

CEO Interview: Harry Peterson of Siloxit

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success

CEO Interview: Rob Gwynne of QPT


Deeper RISC-V pipeline plows through vector-scalar loops
by Don Dingee on 09-14-2023 at 10:00 am

Atrevido 423 + V16 Vector Unit with its deeper RISC-V pipeline technology, Gazillion

Many modern processor performance benchmarks rely on as many as three levels of cache staying continuously fed. Yet, new data-intensive applications like multithreaded generative AI and 4K image processing often break conventional caching, leaving the expensive execution units behind them stalled. A while back, Semidynamics introduced us to their new highly customizable RISC-V core, Atrevido, with its Gazillion memory retrieval technology designed to solve more big data problems with a different approach to parallel fetching. We recently chatted with CEO and Founder Roger Espasa for more insight into what the deeper RISC-V pipeline and customizable core can do for customers.

Minimize taking your foot off the vector accelerator

We start with a deeper dive into the vector capability. It’s easy to think of cache misses as causing an outright pipeline stall, where all operations must wait until data movement refills the pipeline. A better-fitting metaphor for a long data-intensive pipeline, such as in Atrevido, may be a Formula 1 racecar. Wild hairpin corners may still require braking, but gentler turns around most circuits present an opportunity to stay on the accelerator, backing off as little as possible.

Few applications use vector math exclusively; scalar instructions sprinkled in the loop can cause a finely-tuned vector pipeline to sputter without proper handling. “Our obsession is to keep a deeper RISC-V pipeline busy at all times,” says Espasa. “So, we do whatever the memory pipeline needs, and in some cases, that may be a little bit more scalar performance.”

The Atrevido 423 core adds a 4-wide decode, rename, and issue/retire architecture designed to speed up mostly vector math with some scalar math mixed in. “The out-of-order pipeline coupled with 128 simultaneous fetches really helps get scalar instructions out of the way fast –  4-wide helps with that extra last bit of performance,” continues Espasa. “We can get back to the top of the loop, find more vector loads and start pulling those in while the scalar stuff at the tail end finishes.”
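
The workload shape being described, a vector-friendly body with scalar bookkeeping mixed in, can be pictured with the plain-C sketch below. It is only an illustration: a compiler targeting the RISC-V vector extension would map the per-element work onto vector instructions, while the data-dependent updates at the end of each iteration remain scalar.

```c
/* Illustrative mixed vector/scalar loop. The inner element loop is the
 * vectorizable part; the running-max bookkeeping at the end of each row
 * is the scalar "tail" the article describes getting out of the way. */
#include <stdio.h>
#include <stddef.h>

float process_rows(const float *a, const float *b, float *out,
                   size_t rows, size_t cols)
{
    float global_max = 0.0f;            /* scalar state carried across rows */

    for (size_t r = 0; r < rows; r++) {
        const float *pa = a + r * cols;
        const float *pb = b + r * cols;
        float *po = out + r * cols;

        /* Vector-friendly body: independent per-element work a vector
         * unit can stream through while loads are in flight. */
        for (size_t c = 0; c < cols; c++)
            po[c] = pa[c] * pb[c] + 1.0f;

        /* Scalar instructions "sprinkled in the loop": a reduction and a
         * data-dependent update before moving on to the next row. */
        float row_max = po[0];
        for (size_t c = 1; c < cols; c++)
            if (po[c] > row_max)
                row_max = po[c];
        if (row_max > global_max)
            global_max = row_max;
    }
    return global_max;
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    printf("max = %.1f\n", process_rows(a, b, out, 2, 4));
    return 0;
}
```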

It’s worth noting everything happens without managing the ordering in software; the code just issues instruction primitives, and execution occurs when the data arrives. Espasa points out that one of the strengths of the RISC-V community is that his firm doesn’t need to work on a compiler; plenty of experts are working on that side, and the code is standard.

Vector units may appear a lot smaller than they are

After seeing that vector unit in the diagram, we couldn’t resist asking Espasa one question: how big is the Atrevido vector unit in terms of area? Die size is a your-mileage-may-vary question with so much customizability and different process nodes. And when they say customizability, they mean it. Instead of one configuration – say, ELEN=64 and eight Vcores for a 512-bit DLEN engine standard in some other high-end CPU architectures – customers can pick their vector scale. The vector register length is also customizable from 1x up to 8x.
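
A quick way to see how those knobs combine is the arithmetic sketch below, which assumes DLEN is the element width (ELEN) times the number of Vcores and that the 1x–8x register-length multiplier applies to DLEN. The listed configurations are illustrative, not a Semidynamics product matrix.

```c
/* Illustrative vector-unit sizing arithmetic: DLEN = ELEN x Vcores, with
 * vector register length assumed to be a 1x-8x multiple of DLEN. The
 * example configurations are illustrative, not a Semidynamics product list. */
#include <stdio.h>

int main(void)
{
    struct { int elen_bits; int vcores; int vlen_mult; } cfg[] = {
        { 64,  4, 1 },   /* smaller vector unit                    */
        { 64,  8, 2 },   /* the 512-bit DLEN case mentioned above  */
        { 64, 16, 8 },   /* larger, HPC-leaning configuration      */
    };

    for (int i = 0; i < 3; i++) {
        int dlen = cfg[i].elen_bits * cfg[i].vcores;
        int vlen = dlen * cfg[i].vlen_mult;
        printf("ELEN=%d, Vcores=%d -> DLEN=%d bits, VLEN=%dx -> %d bits\n",
               cfg[i].elen_bits, cfg[i].vcores, dlen, cfg[i].vlen_mult, vlen);
    }
    return 0;
}
```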

“We don’t disclose die area publicly, but our larger vector unit configurations are taking up something like 2/3rds of the area,” says Espasa. “We’ve started calling them Vcores because it’s easier to transition customer thinking from CUDA cores in GPUs.” He then interjects some customers are asking for more than one vector unit connected to each Atrevido core (!). The message remains the same: Semidynamics can configure and size elements of a RISC-V Atrevido to meet the customer’s performance requirements more efficiently than tossing high-end CPUs or GPUs at big data scenarios.

Some emerging use cases for a deeper RISC-V pipeline

We also asked Espasa what has happened that maybe he didn’t expect with early customer engagements around the Atrevido core. His response indicates a use case taking shape: lots of threads running on simpler models.

“We continuously get requests for new data types, and our answer is always yes, we can add that with some engineering time,” Espasa points out. int4 and fp8 additions say a lot about the type of application they are seeing: simpler, less training-intensive AI inference models, but hundreds or thousands of concurrent threads. Consider something like a generative AI query server where users hit it asynchronously with requests. One stream is no big deal, but 100 can overwhelm a conventional caching scheme. Gazillion fetches help achieve a deeper RISC-V pipeline scale not seen in other architectures.

There’s also the near-far imaging problem – having to blast through high frame rates of 4K images looking for small-pixel fluctuations that may turn into targets of interest. Most AI inference engines are good once regions of interest take shape, but having to process the entire field of the image slows things down. When we mentioned one of the popular AI inference IP providers and their 24-core engine, Espasa blushed a bit. “Let’s just say we work with customers to adapt Atrevido to what they need rather than telling them what it has to look like.”

It’s a recurring theme in the Semidynamics story: customization within the boundaries of the RISC-V specification takes customers where they need to go with differentiated, efficient solutions. And the same basic Atrevido architecture can go from edge devices to HPC data centers with deeper RISC-V pipeline scalability choices, saving power or adding performance. Find out more about the recent Semidynamics news at:

https://semidynamics.com/newsroom

Also Read:

Deeper RISC-V pipeline plows through vector-scalar loops

RISC-V 64 bit IP for High Performance

Configurable RISC-V core sidesteps cache misses with 128 fetches

 


Successful Inter-Op Verification of Enterprise Flash Controller with ONFI 5.1 PHY IP
by Kalar Rajendiran on 09-14-2023 at 6:00 am

Mobiveil EFC

In an era defined by digital transformation and data-intensive applications, the solid-state drive (SSD) market has emerged as a critical player in reshaping storage solutions. While there are several types of non-volatile memories, each with its own unique characteristics and use cases, Flash memory is increasingly overtaking the other types. This is due to its unique combination of characteristics and advantages that align with the evolving needs of modern computing and storage applications. While Flash memories gained their popularity in consumer applications, they are making significant inroads into enterprise applications. In addition to the obvious benefits over hard disk drives (HDD), Flash memories can be scaled up to accommodate larger capacities better than alternative non-volatile solutions. This scalability allows for high-density storage solutions and is vital for data centers, cloud storage, and other enterprise-level applications. Flash memory, particularly NAND Flash, is being increasingly adopted in enterprise storage systems to provide high performance and scalability for mission-critical applications.

Enterprise Flash Controller (EFC) and ONFI PHY

Enterprise Flash Controller IP refers to the core component responsible for managing the data flow between a host system and Enterprise NAND flash memory storage. Enterprise NAND flash devices are a type of NAND flash memory-based storage solution designed specifically for enterprise-level applications. These devices are optimized to meet the high-performance, reliability, and endurance demands of data center environments, servers, and mission-critical enterprise applications. An enterprise-grade flash controller is optimized for high-speed data transfer, error correction, wear leveling, and other crucial functions, ensuring the seamless operation of flash storage in demanding applications.

The ONFI PHY IP (Open NAND Flash Interface Physical Layer Intellectual Property) is a critical component of NAND flash memory systems. It refers to the implementation of the physical layer interface as defined by the ONFI specification, which governs the communication between a NAND flash memory controller and the NAND flash memory devices. The ONFI specification standardizes how data is transferred to and from NAND flash memory chips, ensuring compatibility and interoperability between different manufacturers’ products.

The ONFI 5.1 PHY specification is the latest revision of the ONFI PHY and extends NV-DDR3 and NV-LPDDR4 I/O speeds up to 3600MT/s. To support the faster data rates, ONFI 5.1 introduces Write Duty Cycle Adjustment (WDCA), Per-Pin VrefQ Adjustment, Equalization, and Unmatched DQS options for NAND vendors. ONFI 5.1 also adds ESD specifications, makes adjustments to the tDQSRE and tDQSRH specifications, and relaxes data input/output pre-amble timings for NV-DDR2/3 to tWPRE2/tRPRE2.

Empowering the Future

Enterprise flash controllers and ONFI 5.1 PHY IP are an ideal match for several fast-growing markets that prioritize high-speed data storage and transfer solutions.

Mobiveil’s EFC Design IP

Mobiveil is the first company to develop an extremely configurable EFC. Mobiveil’s EFC Design IP offers seamless access to external NAND flash memory, enabling high-speed transactions that capitalize on the pipeline performance of modern enterprise NAND flash devices. Among its many features, the Mobiveil EFC supports the ONFI 5.1 specification with the NV-LPDDR4 mode of operation, supports various datapath widths, and offers robust support for volume addressing, suspend and resume functions, multi-plane and asynchronous plane read commands, and more. Its configurable features adapt to diverse device needs, while its independent and pipelined interfaces streamline command, data, and report phases. Its architecture allows flexible control of ONFI 5.1 and Toggle devices, ensuring compatibility and efficient addressing schemes. With its versatile architecture, the EFC Design IP delivers high performance while allowing software-defined control over device command sequences.

InPsytech’s ONFI 5.1 PHY IP

InPsytech is an IP company focused on high-speed source synchronous DDR architecture, SerDes interfaces, special I/Os and high-speed and low power standard cell offerings. Currently, InPsytech’s ONFI 5.1 PHY IP has undergone silicon validation across processes from N6/N7 to N28 and has already been delivered to customers. Volume production of customer products incorporating the ONFI PHY IP is expected to commence in 2H2023.

Successful Inter-Op Verification

Mobiveil and InPsytech recently announced the successful Inter-Op verification of Mobiveil’s EFC IP with InPsytech’s ONFI 5.1 PHY IP. Inter-op verification, short for interoperability verification, is the process of ensuring that two or more different components, systems, or technologies can work seamlessly together as intended. In this case, it involves testing the compatibility and interaction between Mobiveil’s EFC Design IP (which manages data transfers to and from NAND flash memory) and InPsytech’s ONFI 5.1 PHY IP (which handles the physical layer communication to NAND flash memory chips). The successful Inter-Op verification signifies that an integrated solution can effectively address the demands of the enterprise flash storage market while delivering the promised performance, reliability and compatibility.

Summary

While this post is about an SSD-focused solution, Mobiveil also recently announced a partnership with Winbond to deliver a HyperRAM controller IP solution for SoC designs. HyperRAM offers much higher density than embedded SRAM and lower power compared to typical DRAMs. Since Mobiveil already offers a PSRAM controller solution, it was easy to adapt to HyperRAM. An earlier post on SemiWiki covered Mobiveil’s PSRAM controller solution in partnership with AP Memory.

Mobiveil is a fast-growing technology company that specializes in the development of Silicon Intellectual Properties, platforms and solutions for various fast growing markets. Its strategy is to grow with fast burgeoning markets by offering its customers valuable IPs that are easy to integrate into SoCs. It offers a wide range of IP solutions for various market and application needs.

To learn more, visit www.mobiveil.com.