A Recipe for Performance Optimization in Arm-Based Systems
by Bernard Murphy on 05-16-2024 at 6:00 am

Around the mid-2000’s the performance component of Moore’s Law started to tail off. That slack was nicely picked up by architecture improvements which continue to march forward but add a new layer of complexity in performance optimization and verification. Nick Heaton (Distinguished Engineer and Verification Architect at Cadence) and Colin Osbourne (Senior Principal System Performance Architect and Distinguished Engineer at Arm) have co-written an excellent book, Performance Cookbook for Arm®, explaining the origins of this complexity and how best to attack performance optimization/verification in modern SoC and multi-die designs. This is my takeaway based on a discussion with Nick and Colin, complemented by reading the book.

Who needs this book?

It might seem that system performance is a problem for architects, not designers or verification engineers. Apparently this is not entirely true; once an architecture is delivered to the design team, those architects move on to the next design. From that point on, or so I thought, design team responsibilities are to assemble all the necessary IPs and connectivity as required by the architecture spec, to verify correctness, and to tune primarily for area, power, and timing closure.

There are a couple of fallacies in this viewpoint. The first is the assumption that the architecture spec alone locks down most of the performance, and the design team need not worry about performance optimizations beyond implementation details defined in the spec. The second is that real performance measurement, and whatever optimization is still possible at that stage, must be driven by real workloads – perhaps applications running on an OS, running on firmware, running on the full hardware model.

But an initial architecture is not necessarily perfect, and there are still many degrees of freedom left to optimize (or get wrong) in implementation. Yet many of us, certainly junior engineers, have insufficient understanding of how microarchitecture choices can affect performance and how to find such problems. Worse still, otherwise comprehensive verification flows lack structured methods to regress performance metrics as a design evolves, which can lead to nasty late surprises.

The book aims to inform design teams on the background and methods in design and verification for high performance Arm-based SoCs, especially around components that can dramatically impact performance: the memory hierarchy, CPU cores, system connectivity, and the DRAM interface.

A sampling of architecture choices affecting performance

A Modern SoC Architecture (Courtesy Cadence Design Systems)

I can’t do justice to the detail in the book, but I’d like to give a sense of the big topics. First, the memory hierarchy has a huge impact on performance. Anything that isn’t idling needs access to memory all the time. Off-chip/chiplet DRAM can store lots of data but has very slow access times. On-chip memory is much faster but is expensive in area. Cache memory, relying on typical locality of reference in memory addresses, provides a fast on-chip proxy for recently accessed addresses, needing to update from DRAM only on a cache miss or a main memory update. All this is generally understood; however, sizing and tuning these memory options is a big factor in performance management.
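
To make the sizing tradeoff concrete, here is a minimal sketch (my own illustration, not taken from the book) of the classic average memory access time calculation, showing how cache hit rate and DRAM latency dominate effective memory performance; all latency numbers are assumed, not measured.

```python
# Classic average memory access time (AMAT) model:
#   AMAT = hit_time + miss_rate * miss_penalty
# The latencies below are assumed values purely for illustration.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time for a single cache level, in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

CACHE_HIT_NS = 2.0     # assumed on-chip cache hit latency
DRAM_ACCESS_NS = 80.0  # assumed off-chip DRAM access latency

for miss_rate in (0.02, 0.05, 0.10, 0.20):
    print(f"miss rate {miss_rate:4.0%} -> AMAT {amat(CACHE_HIT_NS, miss_rate, DRAM_ACCESS_NS):5.1f} ns")
```

A few percentage points of miss rate, set largely by cache sizing and workload locality, swing effective memory latency by a large factor, which is why the memory hierarchy is treated as a first-order performance lever.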

Processors are running faster, outpacing even fast memories. To hide latencies in fetching they take advantage of tricks like pre-fetch and branch prediction to request more instructions ahead of execution. In a multi-core system this creates more memory bandwidth demand. Virtual memory support also adds to latency and bandwidth overhead. Each can impact performance.

On-chip connectivity is the highway for all inter-IP traffic in the system and should handle target workloads with acceptable performance through a minimum of connections. This is a delicate performance/area tradeoff. For a target workload, some paths must support high bandwidth, some low latency, while others can allow some compromise. Yet these requirements are at most guidelines in the architecture spec. Topology will probably be defined at this stage: crossbar, distributed NoC, or mesh, for example. But other important parameters can also be configured, say FIFO depths in bridges and regulator options to prioritize different classes of traffic. Equally, endpoint IPs connected to the network often support configurable buffer depths for read/write traffic. All these factors affect performance, making connectivity a prime area where implementation is closely intertwined with architecture optimization.

Taking just one more example, interaction between DRAM and the system is also a rich area for performance optimization. Intrinsic DRAM performance has changed little over many years, but there have been significant advances in distributed read-write access to different banks/bank groups allowing for parallel controller accesses, and prefetch methods where the memory controller guesses what range of addresses may be needed next. Both techniques are supported by continually advancing memory interface standards (e.g., in DDR) and continually more intelligent memory controllers. Again, these optimizations have proven critical to continued advances in performance.

A spec will of course suggest IP choices and initial settings for configurable parameters, but these are based on high-level simulations; it can’t forecast the detailed consequences that emerge in implementation. Performance testing on the implementation is essential to check that performance remains within spec, and quite likely tuning will at times be needed to stay within that window. That requires some way to figure out whether you have created a problem, some way to isolate a root cause, and finally an understanding of how to correct the problem.

Finding, fixing, and regressing performance problems

Key performance metrics

First, both authors stress that performance checking should be run bottom-up. This should be a no-brainer, but the obvious challenge is what you use for test cases in IP or subsystem testing, even perhaps as a baseline for full system testing. Real workloads are too difficult to map to low-level functions, come with too much OS and boot overhead, and lack any promise of coverage, however coverage should be defined. Synthetic tests are a better starting point.

Also you need a reference TLM model, developed and refined by the architect. This will be tuned especially to drive architecture optimization on the connectivity and DDR models.

Then bottom-up testing can start, say with a UVM testbench driving the interconnect IP connected to multiple endpoint VIPs. Single path tests (one initiator, one target) provide a starting point for regression-ready checks on bandwidth and latencies. Also important is a metric I hadn’t considered, but which makes total sense: Outstanding Transactions (OT). This measures the amount of backed up traffic. Cadence provides their System Testbench Generator to automate building these tests, together with Max Bandwidth, Min Latency and Outstanding Transaction Sweep tests, more fully characterizing performance than might be possible through hand-crafted tests.
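
As a rough illustration of what such regression-ready checks measure, the sketch below derives bandwidth, average latency, and peak outstanding transactions from a tiny, invented transaction trace and compares them against assumed spec limits; the record format and numbers are hypothetical and are not Cadence tool output.

```python
# Hypothetical sketch: deriving bandwidth, latency, and outstanding-transaction (OT)
# metrics from a simple transaction trace, then applying assumed spec limits as a
# regression-style pass/fail check.
from dataclasses import dataclass

@dataclass
class Txn:
    start_ns: float  # request issued
    end_ns: float    # response completed
    nbytes: int      # payload size

trace = [Txn(0, 120, 64), Txn(10, 150, 64), Txn(20, 260, 64), Txn(200, 300, 64)]

window_ns = max(t.end_ns for t in trace) - min(t.start_ns for t in trace)
bandwidth_gbps = sum(t.nbytes for t in trace) * 8 / window_ns           # bits per ns == Gb/s
avg_latency_ns = sum(t.end_ns - t.start_ns for t in trace) / len(trace)

# Peak OT: maximum number of transactions in flight at any instant.
events = sorted([(t.start_ns, +1) for t in trace] + [(t.end_ns, -1) for t in trace])
peak_ot = in_flight = 0
for _, delta in events:
    in_flight += delta
    peak_ot = max(peak_ot, in_flight)

# Assumed spec window for the regression check.
assert bandwidth_gbps >= 5.0, "bandwidth below spec"
assert avg_latency_ns <= 200.0, "latency above spec"
assert peak_ot <= 8, "too much backed-up traffic"
print(f"{bandwidth_gbps:.2f} Gb/s, {avg_latency_ns:.0f} ns average latency, peak OT {peak_ot}")
```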

The next level up is subsystem testing. Here the authors suggest using Cadence System VIP and their Rapid Adoption Kits (RAKs). These are built around the Cadence Perspec System Verifier augmented by the System Traffic Library, AMBA Adaptive Test Profile (ATP) support and much more. Perspec enables bare metal testing (without need for drivers etc.), with easy system-level scenario development. Very importantly, this approach makes extensive test reuse possible (as can be seen in available libraries). RAKs leverage these capabilities for out-of-the-box test solutions and flows, for an easy running start.

The book ends with a chapter on a worked performance debug walkthrough. I won’t go into the details other than to mention that it is based on an Arm CMN mesh design, for which a performance regression test exhibits a failure because of an over-demanding requester forcing unnecessary retries on a cache manager.

My final takeaway

This is a very valuable book, and very readable. These days I have a more theoretical than hands-on perspective, yet it opened my eyes to both the complexity of performance optimization and verification and, for the same reasons, made it seem more tractable. Equally important, this book charts a structured way forward to make performance a first-class component in any comprehensive verification/regression plan, with all the required elements: traffic generation, checks, debug, scoreboarding, and the beginnings of coverage.

You can buy the book on Amazon – definitely worth it!


Synopsys Accelerates Innovation on TSMC Advanced Processes
by Mike Gianfagna on 05-15-2024 at 10:00 am

We all know that making advanced semiconductors is a team sport. TSMC can innovate the best processes, but without the right design flows, communication schemes and verified IP it becomes difficult to access those processes. Synopsys recently announced some details on this topic. It covers a lot of ground. The graphic at the top of this post will give you a feeling for the breadth of what was discussed. I’ll examine the announcement and provide a bit more information from a conversation with a couple of Synopsys executives. Let’s see how Synopsys accelerates innovation on TSMC advanced processes.

The Big Picture

Advanced EDA tools, silicon photonics, cutting edge IP and ecosystem collaboration were all touched on in this announcement. Methods for creating new designs as well as migrating existing designs were also discussed.

Sanjay Bali, vice president of strategy and product management for the EDA Group at Synopsys had this to say:

“The advancements in Synopsys’ production-ready EDA flows and photonics integration with our 3DIC Compiler, which supports the 3Dblox standard, combined with a broad IP portfolio enable Synopsys and TSMC to help designers achieve the next level of innovation for their chip designs on TSMC’s advanced processes. The deep trust we’ve built over decades of collaboration with TSMC has provided the industry with mission-critical EDA and IP solutions that deliver compelling quality-of-results and productivity gains with faster migration from node to node.”

And Dan Kochpatcharin, head of Design Infrastructure Management Division at TSMC said:

“Our close collaboration with Open Innovation Platform (OIP)® ecosystem partners like Synopsys has enabled customers to address the most challenging design requirements, all at the leading edge of innovation from angstrom-scale devices to complex multi-die systems across a range of high-performance computing designs. Together, TSMC and Synopsys will help engineering teams create the next generation of differentiated designs on TSMC’s most advanced process nodes with faster time to results.”

Digital and Analog Design Flows

It was reported that Synopsys’ production-ready digital and analog design flows for TSMC N3P and N2 process technologies have been deployed across a range of AI, high-performance computing, and mobile designs.

To get access to new processes faster, the AI-driven analog design migration flow enables rapid migration from one process node to another. Also discussed was a new flow for TSMC N5 to N3E migration.  This adds to the established flows from Synopsys for TSMC N4P to N3E and N3E to N2 processes.

Interoperable process design kits (iPDKs) and Synopsys IC Validator™ physical verification run sets were also presented. These capabilities allow efficient transition of designs to TSMC advanced process technologies. Using Synopsys IC Validator, full-chip physical signoff can be accomplished. This helps deal with the increasing complexity of physical verification rules. It was announced that Synopsys IC Validator is now certified on TSMC N2 and N3P process technologies.

Photonic ICs

AI training requires low-latency, power-efficient, and high-bandwidth interconnects for massive data sets. This is driving the adoption of optical transceivers and near-/co-packaged optics using silicon photonics technology.  Delivering these capabilities requires ecosystem collaboration.

Synopsys and TSMC are developing an end-to-end multi-die electronic and photonic flow solution for TSMC’s Compact Universal Photonic Engine (COUPE) technology to enhance system performance and functionality. This flow spans photonic IC design with Synopsys OptoCompiler™ and integration with electrical ICs utilizing Synopsys 3DIC Compiler and Ansys multiphysics analysis technologies.

Broad IP Portfolio for N2 and N2P

Design flows and communication strategies are critical for a successful design, but the entire process is really enabled by verified IP for the target process. Synopsys announced the development of a broad portfolio of Foundation and Interface IP for the TSMC N2 and N2P process technologies to enable faster silicon success for complex AI, high-performance computing, and mobile SoCs.

Getting into some of the details, high-quality PHY IP on N2 and N2P, including UCIe, HBM4/3e, 3DIO, PCIe 7.x/6.x, MIPI C/D-PHY and M-PHY, USB, DDR5 MR-DIMM, and LPDDR6/5x, allows designers to benefit from the PPA improvements of TSMC’s most advanced process nodes. Synopsys also provides a silicon-proven Foundation and Interface IP portfolio for TSMC N3P, including 224G Ethernet, UCIe, MIPI C/D-PHY and M-PHY, USB/DisplayPort and eUSB2, LPDDR5x, DDR5, and PCIe 6.x, with DDR5 MR-DIMM in development.

Synopsys reported this IP has been adopted by dozens of leading companies to accelerate their development time. The figure below illustrates the breadth and performance of this IP portfolio for the TSMC N3E process. 

The Backstory

I was able to speak with two Synopsys experts –  Arvind Narayanan, Executive Director, Product Management and Mick Posner, Vice President, Product Management, High Performance Computing  IP Solutions.

Arvind Narayanan

I know both Arvind and Mick from my time working at Synopsys and I can tell you together they have a very deep understanding of Synopsys design technology and IP.

Arvind began by explaining how seamlessly Synopsys 3DIC Compiler, OptoCompiler, and the Ansys Multiphysics technology work together. This tightly integrated tool chain does an excellent job of supporting the TSMC COUPE technology. A well-integrated flow that is solving substantial data communication challenges.

It’s difficult to talk about communication challenges without discussing the growing deployment of multi-die strategies.  In this area, Mick explained that there is now an integration of 3DIC Compiler with the popular UCIe standard. This creates a complete reference flow for die-to-die interface connectivity.

Mick Posner

Arvind touched on the role DSO.ai plays in the design migration process. For the digital portion, the models and knowledge DSO.ai builds for a design allow re-targeting of that design to a new process node with far less learning, simulation, and analysis. For the analog portion, the circuit and layout optimization capabilities of DSO.ai become quite useful.

Mick said he believes that Synopsys has the largest analog design team in the world. After thinking about it a bit, I believe he’s right. It is a very large team across the world working in many areas. Mick went on to point out that the significant design work going on at advanced nodes across that team becomes a substantial proving ground for new technology and flows. This is part of the reason why Synopsys tools are so well integrated.

To Learn More

You can access the full content of the Synopsys announcement here. In that announcement, you will find additional links to dig deeper on the various Synopsys technologies mentioned. And that’s how Synopsys accelerates innovation on TSMC advanced processes.


Podcast EP223: The Impact Advanced Packaging Will Have on the Worldwide Semiconductor Industry with Bob Patti
by Daniel Nenni on 05-15-2024 at 8:00 am

Dan is joined by Bob Patti, the owner and President of NHanced Semiconductors. Previously, Bob founded ASIC Designs Inc., an R&D company specializing in high-performance systems and ASICs. During his 12 years with ASIC Designs he participated in more than 100 tapeouts. Tezzaron Semiconductor grew from that company, with Bob as its CTO, and became a leading force in 3DIC technology. Tezzaron built its first working 3DICs in 2004. NHanced Semiconductors was spun out of Tezzaron to further advance and develop 2.5D/3D technologies, chiplets, die and wafer stacking, and other advanced packaging. Bob holds 21 US patents, numerous foreign patents, and many more pending patent applications in deep sub-micron semiconductor chip technologies.

In this broad analysis of the semiconductor industry, Bob discusses the significant impact advanced packaging is having and will have on innovation. The investments being made to bolster US capability in semiconductors are discussed, with an evaluation of which areas of advanced packaging are opportunities for US growth.

Bob examines the various 2.5D/3D and mixed material assembly technologies on the horizon. He talks about a future semiconductor industry where “super OSATs” will play a major role in innovation and advanced technology sourcing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Tools for Chips and Dips an Overview of the Semiconductor Tools Market
by Claus Aasholm on 05-15-2024 at 7:00 am

From humble beginnings in military applications, the semiconductor industry has been fundamental to all societal growth, and everything that grows exponentially depends on semiconductors.

It is not a gentle industry. Products over two years old are unsellable, and there is either too much supply or none. Semiconductor scarcity can kill companies faster than anything else, and components bought at the wrong price can make a product unsellable.

While the industry’s supply channels are incredibly complex, they are also the key to understanding what is happening. A change somewhere in the supply network causes ripples that move backwards and forwards in the network. Because of these propagations, you can predict what will happen in other areas of the supply network, and you can peek into the future.

Your future might be somebody else’s past

The first question in all strategy development is, “What is going on?” We make a living by dissecting the industry and its supply chain and providing strategic input to our clients.

Of course, you can join all the stock analysts at the investor calls and hear the Charismatic Earnings Overpromiser (CEO) tell you what’s really going on! We are still on the calls, but we know everybody had a good quarter and gained market share. We check the blah-blah through analysis. It is not always right, but it is always neutral and independent.

All Semiconductor companies had a great quarter and gained market share!

No—it does not start with a grain of sand! It begins with a tool. Some tools are more advanced than any other manmade object and constantly evolve. From a quiet existence in the university cities of Northern Europe, they are now vital assets in the geopolitical chess game between China and the Western world.

This makes analysing the Semiconductor Tools market one of the most exciting things we do (we don’t have a life).

The key companies in the tool market

Contrary to what should be expected when listening to all the noise about semiconductor subsidies, grants, and new factories being built, the revenue for tools is not growing. That is not to say it will not happen, only that it is unexpected. Also, with the US trying to regain chip manufacturing supremacy in the most advanced logic nodes, you would expect growth from ASML, which delivers the most advanced lithography tools for the industry.

The opposite is happening right now, with ASML revenue declining by almost 30% while its largest competitor, Tokyo Electron, grew 17%.

AMAT, the leading deposition company, returned as the most prominent tool company in Q1-2024, with a 23% market share.

While the Chips Act and subsidies undoubtedly will eventually impact tool sales, it has not happened yet. The target of the subsidies is independence from Chinese influence, be it military or economic. Still, the Chinese are not sitting with their hands in their laps while money is being distributed in the West.

Since the US Chips Act was signed, overall tool sales have decreased while tool sales to China have increased. China now buys nearly 45% of the tools the top Western Tool manufacturers sell.

The Chips Act has substantially impacted tool sales, although not in the expected direction. Significantly lower tool sales to the USA, Taiwan, and Korea have been recorded, while China’s appetite for Western Semiconductor tools has increased.

It is estimated that 75% of all Capital Expenditures (CapEx) associated with the construction of fabs are Tool expenses, with Lithography tools accounting for the largest share of the costs.

Below is an overview of CapEx spending for the large Semiconductor Fab owners. CapEx crossed $40B a quarter just after the Chips Act was signed but has been declining ever since. Not surprisingly, Tools revenue follows the CapEx development with two interesting deviations:

  1. Tools take a larger share of CapEx over time
  2. Except in the period immediately after the Chips Act was signed.

Undoubtedly, the Chips Act and other subsidies drive this change but not in the expected direction. We are not trying to imply that the Chips Act will not significantly impact the US Semiconductor Industry other than saying it might have some unintended side effects.

First, the Chips Act has changed the timing of Semiconductor investments as much as it has impacted their location.

Second, it has changed what CapEx is spent on. Greatly simplified, the Semiconductor factory owners spend the big money on four activities:

As semiconductor equipment is delicate and ages quickly, a significant amount of CapEx is spent on maintenance and one-for-one replacements to prevent capacity decline. Memory companies failed to do this during the downturn.

Right now, the Semiconductor fab owners have down-prioritised upgrading the existing factories to get subsidies. The US government and others want you to place your big box in their backyard to access the candy jar.

Currently, there is ample capacity outside memory and the smallest logic nodes, but we are early in the Semiconductor cycle. We are tinkering with the supply-demand equation, which can dramatically affect the industry. Soon, we will be capacity-constrained again, and this time around, we will not have built the capacity through upgrade activities. It could become nasty with potential AI upgrade cycles of PCs and Smartphones.

And suddenly, all of the capacity from the subsidies will come online at the beginning of 2026, creating a potential Mother of all Semiconductor cycles.

But what about China?

All this money-spraying was about stopping the Chinese attempt to dominate the Semiconductor Industry. This has been happening for decades since the Chinese authorities understood that the value was not in assembly but locked in the assembled semiconductors. Significant investments have been made, but it proved more difficult than expected.

As geopolitical tensions increased, the Chinese authorities shifted focus from building Semiconductor manufacturing capacity to investing in Semiconductor tools, as the most advanced equipment was embargoed.

Even though the Chinese semiconductor tools industry was small compared to Western manufacturers, it grew at an astonishing rate of nearly 50% CAGR until it reached above $2B in the last quarter of 2023.
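
For readers who want to sanity-check a growth figure like that, compound annual growth rate is a one-line formula; the sketch below uses invented start and end revenue values purely to show the arithmetic, not actual Chinese tool-maker data.

```python
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start_revenue_b: float, end_revenue_b: float, years: float) -> float:
    return (end_revenue_b / start_revenue_b) ** (1.0 / years) - 1.0

# Invented example: quarterly revenue growing from $0.6B to $2.0B over three years.
print(f"CAGR: {cagr(0.6, 2.0, 3.0):.1%}")  # roughly 49%
```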

Growth stopped in Q1-24—the decline of 21.9% is more than can be explained by the Chinese New Year. This can be seen by comparing this year’s holiday effects to the prior year’s.

It looks like congestion in the supply chain. Fortunately, our little box of numbers can add flavour to the discussion.

The Chinese Tool Manufacturers have grown rapidly but also amassed enormous inventory.

Closing in on three years of inventory is significant, especially if demand is collapsing.

Through production unit statistics, we can get some insights into the local demand in China. As can be seen, most of the significant Semiconductor products are not growing as rapidly as before. Combined, they have been flat over the last year.

The exception is the manufacturing of semiconductors itself: a massive 44.6% growth from a year ago, supplying a market that is virtually flat.

The Chinese are not flooding the western markets with cheap products because they want to – it is because they need to. The Chinese semiconductor market is in complete oversupply.

The Chinese are already struggling with tariffs that were just made worse by the Biden Administration this week. Semiconductor tariffs will increase from 25% to 50%, and electric cars will be slapped with a 100% tariff. This will not make the oversupply in China any easier.

As you have seen, there is a lot going on in the market right now and it will be difficult to accurately predict what strategic decisions will prove to be right. We will keep monitoring all the supply pipes looking for hot water and leaks.

Should you be interested in using our research for your strategy development, please do not hesitate to contact us here.

We hope you have enjoyed our data story on the Semiconductor Tools market and that our insights will help you make decisions. We would appreciate a like or a comment on our work.

As ours are not the only insights that can be drawn, feel free to add to the discussion if you have knowledge that can add flavour to it.

Also Read:

Oops, we did it again! Memory Companies Investment Strategy

Nvidia Sells while Intel Tells

Real men have fabs!


How Samtec Helps Achieve 224G PAM4 in the Real World
by Mike Gianfagna on 05-15-2024 at 6:00 am

224 Gbps PAM4 gets attention for applications such as data center, AI/ML, accelerated computing, instrumentation and test and measurement. The question is how real is it and what are the challenges that need to be overcome to implement reliable channels at that data rate? If you wonder about these kinds of topics for your next design, there was recently a webinar that will help you find the answers you’re looking for. A link is coming, but let’s first examine what was discussed and how Samtec helps achieve 224G PAM4 in the real world.

About the Webinar

The webinar was presented by Signal Integrity Journal and sponsored by Rohde & Schwarz. In case you aren’t familiar with the company, Rohde & Schwarz is a family-owned company based in Germany that has been pioneering RF engineering practices for over 90 years with a worldwide footprint. The company offers state-of-the-art signal and power integrity measurement equipment, so they are also helping to facilitate 224G PAM4 channels. You can learn more about this important company here.

Matt Burns

The webinar was presented by Matt Burns, Global Director, Technical Marketing at Samtec. Matt develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 20+ years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. Matt is a frequent guest on SemiWiki, you can see some of his contributions here.

Matt set up the discussion with a direct observation – future system performance will depend on BOTH silicon speed and the speed of the channels to communicate data, be it an electron, a photon or an electromagnetic wave. The communication channel part of the system is where Samtec lives. The company refers to its capabilities as silicon-to-silicon solutions, underscoring the breadth of capabilities that are available. The diagram below gets to the main point, channel speed must never slow the system down.

Samtec’s Silicon to Silicon Solutions

If these points resonate, you must watch this webinar. A link is coming, but let’s examine some of the topics Matt discusses.

Topics Discussed

The graphic below is a good outline of the topics Matt covers in the webinar.

224 Gbps PAM4 Design Challenges

Matt spent time discussing all the transitions a signal will undergo in its silicon-to-silicon journey. There are many signal integrity challenges to be addressed here. Matt shared measurements of insertion loss across various frequencies on the PCB. He then showed insertion loss for Samtec’s Twinax cable technology, which delivers a far superior result as compared to the PCB.
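
To give a feel for why that difference matters, here is a toy loss-budget sketch at roughly the Nyquist frequency of a 224 Gbps PAM4 link (about 56 GHz for a 112 GBd signal); every per-segment loss number and the overall budget are invented for illustration and are not Samtec or Rohde & Schwarz measurements.

```python
# Toy end-to-end insertion-loss budget comparison at ~56 GHz.
# All dB figures are invented; they only illustrate how a cable-based path
# can leave more margin than a long PCB trace against an assumed budget.
BUDGET_DB = 30.0  # assumed total channel loss allowance

paths = {
    "PCB path":    {"package": 4.0, "pcb_trace": 22.0, "connector": 2.5},
    "Twinax path": {"package": 4.0, "twinax_cable": 8.0, "connectors": 3.0},
}

for name, segments in paths.items():
    total = sum(segments.values())
    margin = BUDGET_DB - total
    print(f"{name}: {total:.1f} dB loss, {margin:+.1f} dB margin against a {BUDGET_DB:.0f} dB budget")
```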

You begin to see what an impact advanced transmission cables and connectors can have on system performance. Products that utilize Twinax were discussed, such as Samtec’s Flyover technology. There is a new product, Thinax, that delivers a 40% smaller cross-sectional area, and a third cable design, Eye Speed AIR, that further improves performance and control.

If you thought cable design was simple, this webinar will enlighten you regarding how complex it really is when you are trying to achieve 224G PAM4 data rates. Samtec leads the industry in this area with some very unique technologies.

Matt gets into the details of how Samtec designs and manufactures its world-class cable assemblies. The complexity and technology involved are truly spectacular. He also gets into system design considerations – when to use copper and when to use optical communication for example. Beyond these decisions are many more regarding how the medium will interface to the chip, either directly as a co-packaged solution or as a near-chip connector.

There are many choices, and Samtec seems to have all of them covered. Signal density was also discussed. High-density Twinax connectors can route 1,024 signals from a large ASIC, a surprising statistic for me. Matt also gets into the thermal management techniques that can be applied to these high-density connectors.

Detailed performance data is provided for the newest technologies from Samtec. The results are quite impressive. An example design is also presented, with details of its performance.

To Learn More

Matt explained that Samtec goes beyond the products discussed in the webinar.  The company collaborates with its customers to build high performance communication channels that meet system demands across the entire design, from front panel to backplane and rack-to-rack. This is a partner you want for your next design.

If 224 Gbps PAM4 is in your future, this is a must-see event. You can access the full webinar here. You can also access a new white paper from Samtec entitled Achieving 224 Gbps PAM4: Interconnect Challenges, Advantages and Solutions. And that’s how Samtec helps achieve 224G PAM4 in the real world.


SoC Power Islands Verification with Hardware-assisted Verification
by Lauro Rizzatti on 05-14-2024 at 10:00 am

The ever-growing demand for longer battery life in mobile devices and energy savings in general have pushed power optimization to the top of designers’ concerns. While various techniques like multi-VT transistors and clock gating offer power savings at gate-level design, the real impact occurs at system level, where hardware and software collaborate.

A key power savings feature in modern SoC designs is the ability to dynamically deactivate design sections temporarily not in use. By enabling or disabling isolated design blocks, referred to as power islands, through hardware, firmware, or software applications, substantial energy savings can be achieved, as evidenced by hundreds of power islands found in state-of-the-art mobile devices. Notwithstanding its effectiveness, this approach poses new verification challenges, especially when validation necessitates hardware-assisted verification (HAV) to handle massive software workloads.

Functional Intent versus Power Intent

In contrast to analog design, which deals with every detail of the physical world, digital design embraces abstraction, and abstraction simplifies complexity via assumptions. At the core of the design abstraction hierarchy lies Register-Transfer Level (RTL). With RTL, functional intent, that is, design’s behavior, is captured in a hardware-description language (HDL) based on a set of rules that abstracts design functionality.

Similarly, power intent, that is, power management strategies, is captured in a unified power format (UPF) consisting of a set of rules that abstracts the design’s power distribution architecture, including power domain definitions and their interactions.

RTL and UPF coding reside in distinct files, merged during implementation and verification.

Power Islands Verification Challenges

In a monolithic power structure, it is reasonable to assume that if power and ground aren’t connected, no activity happens. But departing from uniform power distribution introduces complexity that can lead to functional errors.

When dealing with a design incorporating power islands, a verification plan must extend beyond verifying design logic to include power management. Merely testing functionality in static powered-up and -down states is insufficient. Transitions between on and off states present abundant opportunities for unexpected functional issues.

When an island is powered up, all registers and memories retain a defined, unambiguous logic state determined by RTL. Conversely, when an island is powered down, the circuitry’s state becomes unknown. Unknown states also arise during power transitions. Isolation cells and retention registers, as defined by UPF, prevent the corruption of design functionality caused by these indeterminate states.

A comprehensive analysis of a UPF-defined low-power design via low-level circuit simulation, while potentially very accurate, is not applicable on more than a few thousand transistors at best. Simulating software execution at RTL would be excruciatingly slow. For instance, booting an OS via an HDL simulator can take many months, making it unworkable.

Hardware-assisted platforms offer a viable alternative for verifying the interaction of software with hardware as it runs on SoC embedded processors. Several orders of magnitude faster than HDL simulators, HAV platforms can boot Linux in a matter of minutes instead of months. They also provide full visibility into the design-under-test (DUT) for thorough debugging.

Power Islands Validation Via Hardware-assisted Verification

Hardware-assisted verification can ensure that the logic of an entire design can still function when an island is deactivated or is transitioning from one power state to the other.

Low-power validation via HAV can identify several issues, such as:

  • IP integration bugs
  • Incorrect retention power management that leads to functional bugs
  • Incorrect isolation sequencing and connection of isolation control that lead to functional bugs
  • Incorrect power gating that leads to functional bugs
  • Missing isolation
  • Incorrect power-on reset sequencing

Unknown States in HAV

In the simulation world, the “unknown state” is represented by an “X” symbol. HAV platforms are optimized to process two-state logic and do not handle “X” states due to the inefficiency of modeling 4-state behavior with an inherently 2-state engine. During HAV every node holds either a logic “0” or “1”, even when its true value is unknown. An approach to sidestep this conundrum stems from the concept of “state corruption” applied to the island ports and core registers.

Figure 1: A powered-down island assumes an isolated interior and corrupted periphery (Source: Author)

Figure 1 shows a powered down island. Since the internal core logic is assumed to be isolated from the rest of the chip, only the ports matter, hence their values are corrupted (represented by “C” in the figure). For a given application, the designer may know what the powered-down values may be, thus giving him/her control to implement the corruption.

Figure 2: An island in power transition has an active periphery and a corrupted interior. (Source: Author)

Figure 2 shows the same circuit while transitioning from one power state to the other. The assumption here is that neither the logic states of the ports, nor those of the internal registers are known. The worst case would be when the periphery is still active, propagating the results of the unknown state of the core logic. The solution here is to corrupt the core register values while leaving the ports in an active state.

This assumption empowers a verification engineer to execute a target software workload controlling all power islands iteratively multiple times using corrupted values. The process instills confidence in the correct design functionality during power up/down operations.

Modeling Corruption: Scrambling and Randomization

The concept of corruption can be modeled by randomizing the generation of logic states of either the I/O ports or the core registers via built-in hardware instrumentation running at high frequency, much faster than the design frequency. Core registers are randomized only during power state transition.

I/O Ports Randomization

In a powered-down domain all port states are pseudo-randomly assigned a “1” or a “0” at a high frequency until the domain is turned back on, unless isolation cells are placed on those ports.

Scrambling of Internal States

During power switching of an island, all internal states are pseudo-randomly assigned a “1” or a “0.”
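
As a conceptual illustration only (this is not how any particular HAV platform implements it, and all names are hypothetical), the sketch below mimics the two mechanisms in software: pseudo-random corruption of island ports while the domain is powered down, with isolation cells exempted, and scrambling of internal registers during a power transition.

```python
# Conceptual two-state "corruption" model for a power island. In a real HAV
# platform this is done by built-in hardware instrumentation clocked much
# faster than the design; Python is used here only to show the idea.
import random

class PowerIsland:
    def __init__(self, port_names, reg_names):
        self.ports = {p: 0 for p in port_names}  # boundary I/O
        self.regs = {r: 0 for r in reg_names}    # internal state
        self.state = "ON"                        # ON, OFF, or TRANSITION

    def corrupt_ports(self, isolated_ports=()):
        # Powered-down domain: ports toggle pseudo-randomly unless an
        # isolation cell clamps them to a known value.
        for p in self.ports:
            if p not in isolated_ports:
                self.ports[p] = random.randint(0, 1)

    def scramble_registers(self):
        # Power transition: internal registers take randomized values while
        # the periphery stays active.
        for r in self.regs:
            self.regs[r] = random.randint(0, 1)

    def tick(self, isolated_ports=()):
        if self.state == "OFF":
            self.corrupt_ports(isolated_ports)
        elif self.state == "TRANSITION":
            self.scramble_registers()

island = PowerIsland(["req", "ack", "data"], ["fsm_state", "count"])
island.state = "OFF"
island.tick(isolated_ports=("ack",))  # 'ack' assumed clamped by an isolation cell
island.state = "TRANSITION"
island.tick()
print(island.ports, island.regs)
```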

UPF Impact on HAV Platform Capacity, Compilation, Runtime

Compilation of UPF power intent attributes onto RTL generates unique hardware structures that enable the HAV platform to verify the correct operation of the power islands. The ensuing UPF hardware instrumentation impacts three HAV deployment characteristics to varying extents, contingent upon the specific UPF attributes involved.

Capacity Impact

When compiling UPF attributes on top of RTL, the design size expands in a range of 5% to 30%. The lower range applies to designs with rather few power domains. The higher limit affects fine-grained UPF/RTL designs where every instance is instrumented with UPF.

Compilation Time Impact

UPF instrumentation increases only the front end of the compilation process, by up to 10% of the total compilation time.

Runtime Speed Impact

The impact on execution speed of a UPF-instrumented design can be marginal. Larger clock cones, more I/O cuts, and optimizations such as constant propagation that may not fully propagate can cause a speed degradation of less than 20% relative to non-UPF-based design execution.

Low-Power Design Debug and Coverage

While an HAV platform is not the ideal verification engine to debug X-propagation generated by low-power corruption, it becomes indispensable for debugging UPF issues that surface only after executing long test sequences and processing heavy software workloads.

In such instances, debugging UPF instrumented designs demands enhanced capabilities on top of what is already requested for accurate RTL debug. Specific waveform notations ought to provide insight on the activity of UPF control signals and the status of UPF cells in power islands.

By adopting waveform reconstruction engines, notations that highlight powered-off, isolated, randomized, or scrambled state conditions can help verification engineers identify:

  • Wrong results in low-power boundaries (see figure 3)
  • Value mismatch in HAV results (no X)
  • Wrong results in power islands output even if isolated
  • Wrong power domain shut-down
  • Wrong retention behavior

Figure 3: Example of wrong results in low-power boundary (Source: Synopsys)

Assertions and Coverage

As important as waveform debugging, ensuring assertions and coverage of UPF construct activity is essential to accelerate the identification of incorrect UPF behavior.

Examples of assertions include power-activity messages, warning/error messages, and assertions covering incorrect power sequences and reset/clock behavior.

Examples of critical coverage encompass control signals for activating power switching, isolation, and retention; power state tables, power domain simstate, and port state coverage.
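
As a simple illustration of what power-state coverage bookkeeping might look like (the states, transitions, and trace below are hypothetical and not a Synopsys format), the sketch records which states and transitions of one power domain a workload actually exercised.

```python
# Hypothetical power-state coverage bookkeeping for one power domain.
legal_states = {"ON", "RETENTION", "OFF"}
legal_transitions = {("ON", "RETENTION"), ("RETENTION", "ON"),
                     ("ON", "OFF"), ("OFF", "ON")}

observed = ["ON", "RETENTION", "ON", "OFF"]  # invented trace of domain states

hit_states = set(observed) & legal_states
hit_transitions = set(zip(observed, observed[1:])) & legal_transitions

print(f"state coverage: {len(hit_states)}/{len(legal_states)}")
print(f"transition coverage: {len(hit_transitions)}/{len(legal_transitions)}")
print("never exercised:", legal_transitions - hit_transitions)
```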

UPF Operational Modes in HAV

Although low-power SoC designs have gained popularity, few designers possess the expertise to verify power islands through HAV. To aid understanding of this emerging low-power verification methodology, a leading EDA company conceived a series of operational modes that gradually activate UPF constructs, ranging from none to all, in incremental steps.

NOHARM UPF Mode

In NOHARM mode, UPF constructs are compiled into the DUT but remain dormant at run-time. This operational mode serves to rule out low-power instrumentation / netlist connectivity issues without the need to learn UPF functionality.

ALWAYS-ON UPF Mode

In ALWAYS-ON mode, UPF functionality is activated at runtime. Isolation and retention cells are active and will respond to the corresponding isolation control and retention save/restore signals. No corruption is performed on either power domain boundary ports or inside a powered-down domain. All power domains are active all the time for the entire HAV session, and the chip consumes full power.

FULL-WEIGHT UPF mode

In POWER-AWARE EMULATION (PAE) or FULL-WEIGHT UPF mode, all UPF attributes (isolation, corruption, scrambling, randomization) are operational. See Table I.

Runtime mode and power management behavior:

NOHARM
  • Low-power cells are transparent
  • Power Gating, Isolation, and Retention are inactive

ALWAYS-ON
  • All power domains are turned on and no power switching happens
  • Isolation/Retention are active based on control/supplies
  • No Corruption/Scrambling

PAE (Power Aware mode)
  • Corruption, Scrambling, Power Gating, Isolation, and Retention are active

Table I: Comparison of power management behavior in NOHARM, ALWAYS-ON and FULL-WEIGHT UPF operational modes.
(Source: Synopsys)

LIGHT-WEIGHT UPF Mode

Finally, LIGHT-WEIGHT UPF is optimized to activate only certain UPF attributes, such as isolation and retention functionality but not corruption propagation. The benefits include lower capacity requirements, faster compile time, and better runtime performance.

It is worth mentioning that according to user surveys, UPF isolation and retention are used by about 80% of test cases, while the remaining 20% of test cases require full PAE mode.

Table II captures the differences between FULL-WEIGHT UPF and LIGHT-WEIGHT UPF modes.

UPF Feature | PAE (FULL-WEIGHT) | LWE (LIGHT-WEIGHT)
Isolation | Yes | Yes (simplified)
Retention | Yes | Yes (simplified)
Corruption/Scrambling | Yes (Random / All 1 / All 0) | No
Power Domain Switching | Yes | No

Table II: Comparison of power management behavior in FULL-WEIGHT and LIGHT-WEIGHT UPF operational modes.
(Source: Synopsys)

Conclusions

Proven by its ability to extend smartphone battery life to a full day of use through efficient power island implementation, UPF-based low-power technology is rapidly gaining momentum throughout the semiconductor industry.

The widespread adoption of UPF low-power technology underscores the necessity of hardware-assisted verification to ensure comprehensive and accurate validation of modern low-power SoC designs by executing real-world software workloads. Successful UPF validation flows leverage the performance and capacity of hardware-assisted verification together with capacity efficient UPF modeling technologies and sophisticated debug technologies.

It’s a safe bet to foresee that low-power design will emerge as a ubiquitous approach in System-on-Chip development.

Also Read:

Lifecycle Management, FuSa, Reliability and More for Automotive Electronics

Early SoC Dynamic Power Analysis Needs Hardware Emulation

Synopsys Design IP for Modern SoCs and Multi-Die Systems

Synopsys Presents AI-Fueled Innovation at SNUG 2024


Anirudh Fireside Chats with Jensen and Cristiano
by Bernard Murphy on 05-14-2024 at 6:00 am

At CadenceLIVE 2024 Anirudh Devgan (President and CEO of Cadence) hosted two fireside chats, one with Jensen Huang (President and CEO of NVIDIA) and one with Cristiano Amon (President and CEO of Qualcomm). As you would expect both discussions were engaging and enlightening. What follows are my takeaways from those chats.

Anirudh and Jensen

NVIDIA and Cadence are tight. As one example, NVIDIA have been using Palladium for functional verification for 20 years and Jensen asserted that their latest Blackwell AI chip would not have been possible without Palladium. For him Palladium is emblematic of a general trend to accelerated computing. In many if not most compute-intensive applications perhaps only 3% of the application code accounts for the great majority of the runtime, for which brute-force CPU-based parallelism is the wrong tool for the job.

No big surprise there but that 3% of the code is typically domain specific, especially in important modern applications such as AI, EDA/SDA (SDA is System Design Analysis), aerodynamic and turbine design, and molecular engineering. Boosting performance through CPU parallelism provides rapidly diminishing returns for large workloads at a heavy premium in cost and power. Domain specific accelerators like Cadence Palladium, Protium and Millennium, and the NVIDIA AI accelerators deliver 1000X or more performance boost at a tiny fraction of the cost and power of 1,000 CPUs. This is what makes today’s big AI and big chip design possible.

A very revealing comment from Jensen was that NVIDIA designs their circuits, chips, PCBs, systems, and datacenters with Cadence. While I’m sure NVIDIA also uses tools from other EDA/SDA vendors in a variety of functions, the span of collaboration between NVIDIA and Cadence is much wider than I had realized. That collaboration clearly is tight in emulation and in formal verification, and it also extends to the Reality Digital Twin Platform for datacenter design, now integrated with NVIDIA’s Omniverse™ platform. I expect this collaboration will blossom further through Cadence multiphysics analytics for power/heating and computational fluid dynamics for cooling as datacenter power management imperatives grow.

Details on the GPU behind the Cadence Millennium accelerator are hard to find but at least one press release suggests this involved collaboration between Cadence and NVIDIA. Further, Cadence’s Orion molecular design platform is integrated with NVIDIA’s BioNeMo generative AI platform. Looks to me like the Cadence/NVIDIA partnership is already more than a conventional customer/supplier relationship.

Anirudh and Cristiano

Qualcomm and Cadence have also been partnered for a long time, though here I infer a more traditional if also successful relationship. Cristiano stressed appreciation of Cadence’s evolving AI-based design capabilities and performance/capacity advances in the product line, to help them keep pace with growing and ever more demanding roadmap targets. My takeaways from this talk are more around the Cristiano/Qualcomm vision of trends in mobile and mobile-adjacent opportunities, themselves very interesting.

Qualcomm continues to see their differentiated advantage in communication and high-performance low power mobile platforms. Their strategy is to build around these core strengths in the markets they consider most promising. One such market he describes as the merger of physical and digital worlds in VR/AR, more generally XR where he sees glasses as a big opportunity, today partnering with Meta, Samsung, and others. Not a new idea of course but backed by Qualcomm communication superiority perhaps this area could be ripe for rapid growth.

A second opportunity is in the car (the auto electronics market is high on everyone’s list). Qualcomm is apparently doing very well with automakers, building on their established Snapdragon track record in mobile performance, an immersive experience, and AI capability. I have heard from another source that Qualcomm is now a common choice for the infotainment chip. Cristiano hinted that Qualcomm’s role may already extend further. For example, he talks of training an LLM model on the entire car digital operating manual, enabling you to directly ask (voice based I guess) a question if something goes wrong. Better yet, Qualcomm is partnered with Salesforce so you can ask the LLM to schedule a service appointment, which will then channel straight through to the dealer.

He mentioned several other opportunities, but I’ll call out one I find striking, which he calls convergence of mobile and the PC. Imagine you see on your phone an email that requires an involved reply. Today you might wait till you get back to your laptop, but with Copilot maybe you have it draft a reply with prompts to include points you consider important to stress. I can certainly see this being a more appealing mobile use-model for Copilot than on a laptop. Allowing for iteration on prompts, such a use-model could be a pretty convenient way to reply to a lot of email. Maybe even to handle other tasks from your phone.

In the same vein, he sees an opportunity to redefine the PC market with their new Snapdragon X Elite which he believes will push Windows laptops back into the lead in performance, in battery life and (I would assume) in native connectivity. All on a common laptop/handset platform (looking at you Apple). Microsoft is apparently very enthusiastic about this direction.

My summary takeaways

Both talks highlighted for me all 3 companies transitioning from established market roles (EDA/SDA and semiconductor) to larger systems roles. They also stressed the importance of co-design with market leaders to guide growth in unfamiliar territory. Exciting times for all 3 companies. You can learn more about Cadence Reality Digital Twin Platform HERE, and the Cadence/NVIDIA collaboration in molecular science HERE.

Also Read:

Anirudh Keynote at CadenceLIVE 2024. Big Advances, Big Goals

Fault Sim on Multi-Core Arm Platform in China. Innovation in Verification

Cadence Debuts Dynamic Duo III with a Basket of Goodies


ARC-V portfolio plus mature software IP targets three tiers
by Don Dingee on 05-13-2024 at 10:00 am

ARC-V portfolio from Synopsys

Synopsys is bridging its long-running ARC® processor IP strategy into a RISC-V architecture – Bernard Murphy introduced the news here on SemiWiki last November. We’re getting new insight from Synopsys on its ARC-V portfolio and how they see RISC-V IP plus their mature software development toolchain IP fitting customer needs in automotive, consumer, IoT, networking, storage, and other applications. The portfolio, unveiled at a RISC-V Summit Silicon Valley keynote in November, shows Synopsys scaling the RISC-V ISA smoothly across three tiers of 32- and 64-bit host-based, real-time, and embedded processing needs.

Bringing more RISC-V resources for developers

ARC-V is more than the latest choice for RISC-V processor IP – Synopsys provides more chip and software development stack pieces in a comprehensive approach. Rich Collins, Director of Product Management at Synopsys, shows a powerful visual indicating that developers tend to see RISC-V processor IP as just the ISA but maybe not the whole set of implicit resources lurking beneath the surface that can make or break a RISC-V implementation.

“The implementation is where the rubber meets the road,” says Matt Gutierrez, Sr. Director of Marketing for Processor & Security IP and Tools at Synopsys. “The fact that RISC-V is an open standard does not mean it’s free – it comes down to the investment and expertise required to implement it.” Synopsys is aiming to help its customers bring ARC-V processor IP to life with two thrusts: customizing and optimizing the processor core for an optimal power-performance-area (PPA) footprint with better odds for first-pass silicon success, and bringing application software up on the new part.

RISC-V designers with limited experience creating chips will likely need help placing advanced cores in chip designs destined for advanced process nodes. Synopsys has its Fusion QuickStart Implementation Kit, multicore-enabled architectural exploration tools, FPGA-based prototyping for early software development, and a portfolio of IP to surround an ARC-V core in a real system-on-chip.

Synopsys also offers the MetaWare Development Toolkit, updated for RISC-V support, with a software development toolchain including compilers, profilers, and simulators. Synopsys immediately has proven SDKs for ARC-V and RISC-V software developers by porting its mature and optimized ARC tools and libraries. One interesting item in the toolkit is ISO 26262 functional safety (FuSa) support and certification, which appears prominently across the ARC-V families.

ARC-V portfolio highlights scalability, FuSa, and vector math

Collins introduced the ARC-V portfolio by positioning the new IP series in the same three tiers – embedded, real-time, and host – as the existing ARC IP series. Illustrating why they are making this move, the ARC-V offering sets up several clean choices in scalability with a choice of core IP starting points.

  • The embedded tier focus is power efficiency in single 32-bit RMX cores, with an ultra-low-power 3-stage pipeline or a still efficient 5-stage pipeline for more performance. There is also a clean break for either FuSa or DSP support.
  • The real-time tier features 32-bit RHX engines with a deeper 10-stage pipeline and multicore support, adding a similar clean break between FuSa or vector math support available in RISC-V with RISC-V Vector (RVV) extensions.
  • The host tier offers 64-bit RPX engines supporting complete memory management, multi-cluster cache coherency, and a clear choice between FuSa and RVV support.

The ARC-V portfolio and the surrounding software strategy are strong statements from Synopsys. While they emphasize that the existing ARC families continue their availability, the new emphasis on the ARC-V processor family makes it easier for customers to choose where they land and move back and forth between choices if needed. RISC-V flexibility means customers can customize and extend ARC-V cores to add their proprietary value, a carryover of the philosophy behind ARC since its introduction. The ARC-V RMX embedded processor series should be available soon, with the RHX real-time and RPX host processor series to follow.

See more about Synopsys’ ARC-V strategy in an article from Rich Collins:
How the RISC-V ISA Offers Greater Design Freedom and Flexibility

Also, listen to the SemiWiki podcast episode with host Daniel Nenni:
Podcast EP212: A View of the RISC-V Landscape with Synopsys’ Matt Gutierrez


Siemens EDA Makes 3D IC Design More Accessible with Early Package Assembly Verification
by Mike Gianfagna on 05-13-2024 at 6:00 am

2.5D and 3D ICs present special challenges since these designs contain multiple chiplets of different materials integrated in all three dimensions. This complexity demands full assembly verification of the entire stack, considering all the subtle electrical and physical interactions of the complete system. Identifying the right stack configuration as early as possible in the design process minimizes re-work and significantly improves the chances of success. Siemens Digital Industries Software recently published a comprehensive white paper on how to address these problems. A link is coming, but first let’s examine the array of challenges that are presented in the white paper to see how Siemens EDA makes 3D IC design more accessible with early package assembly verification.

2.5/3D IC Design Challenges

2.5D and 3D ICs are composed of multiple chiplets, each of which may be fabricated on separate process nodes. The white paper talks about some of the challenges for this kind of design methodology. For example, options for connecting chiplets are reviewed. A partial list:

  • Chiplets connected via interposer with bump connections and through-silicon-vias (TSVs)
  • Chiplets on package
  • Chiplets on packages with discrete and thinned interposers embedded without TSVs
  • Chiplets stacked on chiplets through direct bonding techniques
  • Chiplets stacked on chiplets with TSVs or copper pillars and more bumps

There is a lot to consider.

The 3D IC Assembly Flow

Challenges here include some method to disaggregate the components of a design into appropriate chiplets. Each chiplet must then be assigned to an appropriate foundry technology. The specific approach to assemble the design is critical. Material choices and chiplet placement will induce thermal and mechanical stresses that can impact the intended electrical behavior of the full assembly design. This phase may require many iterations.

3D IC Physical Verification

There are a lot of challenges and new methods discussed in this section. For example, the most common approach for checking physical and electrical compliance for a 3D IC requires the use of separate rule decks for design rule checking (DRC), LVS, etc. for each interface within the package (chip-to-chip, chip-to-interposer, chip-to-package, interposer-to-package, etc.). These rule decks typically use pseudo-devices, commonly in the form of 0-ohm resistors, to identify the connections across each interface while still preserving the individual chiplet-level net names.
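
To make the pseudo-device idea concrete, here is a toy sketch (entirely hypothetical net names and bump coordinates, not a Calibre rule deck) of how a 0-ohm pseudo-device can stitch a chiplet-level net to an interposer-level net at a shared bump location, preserving both names while letting an assembly-level check flag opens.

```python
# Toy assembly-level connectivity check across a chip-to-interposer interface.
# Each "pseudo-device" acts like a 0-ohm resistor tying a chiplet net to an
# interposer net at a shared bump coordinate. Names and coordinates are invented.

chiplet_bumps = {(0, 0): "cpu_vdd", (0, 1): "cpu_clk", (1, 0): "cpu_d0"}
interposer_bumps = {(0, 0): "int_vdd", (0, 1): "int_clk", (1, 1): "int_d0"}

pseudo_devices = []  # (chiplet_net, interposer_net) pairs, i.e. 0-ohm connections
errors = []

for xy, chip_net in chiplet_bumps.items():
    if xy in interposer_bumps:
        # Both net names are preserved; the pseudo-device just ties them together.
        pseudo_devices.append((chip_net, interposer_bumps[xy]))
    else:
        errors.append(f"open: chiplet bump {chip_net} at {xy} has no interposer landing")

for xy, int_net in interposer_bumps.items():
    if xy not in chiplet_bumps:
        errors.append(f"open: interposer bump {int_net} at {xy} is unconnected")

print("stitched interface nets:", pseudo_devices)
print("connectivity errors:", errors or "none")
```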

Problems here include the fact that designers must associate the many individual rule decks to the corresponding interfaces within the assembly layout, which may not always be intuitive. As errors are identified, designers must be able to highlight them at the proper interfaces (with proper handling of rotations and magnifications) to help identify the appropriate fixes.

Many more challenges are discussed in this section. For example, without a holistic assembly approach, it is impossible to verify ESD protection when the ESD circuits exist in one chip and the protection devices exist in another.

Shift Left IC Design and Verification

In this section, the benefits of a “shift left” or early verification approach are reviewed. Reduced design time and a higher quality result are some of the benefits. How the Calibre nmPlatform and other Siemens EDA tools can be used to implement a shift left approach are discussed.

Shift Left for 3D IC Physical Verification

This section begins by pointing out that 3D IC verification of physical and electrical constraints requires a holistic assembly-level approach. A holistic approach requires full knowledge of both the 3D IC assembly and the individual chiplet processes.  Many tools must be integrated in the correct flow and emerging standards such as 3Dblox must be used correctly.

Approaches to handle thermal and mechanical stress analysis are also detailed.  This is also a complex process requiring many tools used in the correct way. The importance of a holistic approach is again stressed. For example, thermal and mechanical issues cannot be treated in isolation. Mechanical stresses induce heat. Thermal impacts create mechanical stress, and so on. A correct approach here can avoid unwanted surprises by the time final iteration is performed.

To Learn More

This white paper covers a lot of aspects of package assembly verification for multi-die designs. The benefits of a shift left, or early approach are clearly defined, along with a description of how the current flow must be modified to accommodate these techniques.

If you are considering a multi-die design, I highly recommend reading this white paper. It will save you a lot of time and effort. You can get your copy of this important document here. And that’s how Siemens EDA makes 3D IC design more accessible with early package assembly verification.


Podcast EP222: The Importance of Managing and Preserving Ultrapure Water in Semiconductor Fabs with Jim Cannon
by Daniel Nenni on 05-10-2024 at 10:00 am

Dan is joined by Jim Cannon, Head of OEM and Markets at Mettler-Toledo Thornton. Jim has over 35 years of experience managing, designing, and developing ultrapure water treatment and technology. Jim is currently involved in the standards and regulatory organizations including the Facilities and Liquid Chemicals Committee, Reclaim/Reuse/Recycle Task Force, and the UPW Task Force.

Jim focuses on a unique challenge the semiconductor industry faces. Rather than the power or performance of the end device, he discusses the substantial challenge of reducing water usage for fabs, both new and existing facilities. It turns out semiconductor fabs use ultra-pure water as the universal solvent for all process steps. When you consider that multiple rinses are required for many steps, the amount of water used by large fabs can literally drain the water table of the surrounding area.

Jim discusses the ways the industry is focusing on this problem, both from a regulatory perspective as well as employing advanced technology to both sense water purity and develop new methods to reclaim water in the process. The techniques discussed will have substantial impact on the growth of the semiconductor industry.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.