
The latest ideas on time-sensitive networking for aerospace
by Don Dingee on 05-06-2024 at 10:00 am

Aircraft domain requirements for time sensitive networking in aerospace

Time-sensitive networking for aerospace and defense applications is receiving renewed attention as a crop of standards and profiles approaches formal release, anticipated before the end of 2024. CAST, partnering with Fraunhofer IPMS, has developed a suite of configurable IP for time-sensitive networking (TSN) applications, with an endpoint IP core now running at 10Gbps and switched endpoint and multiport switch IP cores available at 1Gbps, extending to 10Gbps soon. They have just released a short white paper on TSN in aerospace applications, providing an overview of TSN standards and how they map to aerospace network architectures.

Standards come, standards go, but Ethernet keeps getting better

A couple of decades ago, at a naval installation far, far away – NSWC Dahlgren, Virginia, to be exact – I had the privilege of accompanying Motorola board-level computer architects in a customer research conversation with Dahlgren’s system architects. The topic was high-bandwidth backplane communication, with the premise of moving from the parallel VMEbus to a faster serial interface.

The Dahlgren folks politely listened to a presentation overviewing throughput and latency differences between Ethernet, InfiniBand, and RapidIO. Ethernet was just transitioning to 1Gbit speeds. Our architects leaned toward RapidIO for its faster throughput with low latency but were open to considering input from real-world customers. When the senior Dahlgren scientist started speaking after the last slide, he gave an answer that stuck with me all these years.

His well-entrenched position was that niche use cases notwithstanding, Ethernet would always win by satisfying more system interoperability requirements and positioning deployed systems to survive in long life-cycle applications via upgrades. Just as Token Ring and FDDI appeared and were subsequently displaced by Ethernet, he projected that InfiniBand and RapidIO would eventually fall to Ethernet as standards work improved performance.

The discussion is still relevant today. Real-time applications demand determinism and reliability, especially in mission-critical contexts like aerospace and defense. Enterprise Ethernet technology provides robust interoperability and throughput but exhibits occasional non-deterministic behavior. It only takes one late-arriving packet to throw off an application depending on that data within a fixed time window. Bandwidth covers up many sins, but as applications demand more data from more sources in large systems, the margin for error shrinks.

Addressing four TSN concepts and common aerospace profiles

Quoting the white paper: “TSN consists of a set of standards that extend Ethernet network communication with determinism and real-time transmission.” IEEE TSN standards address four key concepts not available in enterprise-class Ethernet (a brief illustrative sketch of the latency concept follows the list):

  • Time synchronization establishes a common perception of time across all devices in a network, building on concepts from IEEE 1588 and adding resilience through multiple time domains.
  • Latency control establishes the idea of high-priority traffic versus best-effort traffic, applying time-aware and credit-based shaping.
  • Resource management provides protocols to set up switches, determine topology, and request and reserve network bandwidth.
  • Reliability focuses on recovering from defective paths while minimizing redundant transmissions and protecting networks from propagating incorrect data.
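
To make the latency concept concrete, below is a minimal sketch of an IEEE 802.1Qbv-style time-aware shaper: a cyclic gate control list, evaluated against the synchronized network time, decides which traffic classes may transmit at any instant. The cycle time, window sizes, and class names are illustrative assumptions, not parameters of the CAST/Fraunhofer IPMS cores.

```python
# Minimal sketch of an IEEE 802.1Qbv-style time-aware shaper.
# Cycle time, windows, and traffic classes are hypothetical.

CYCLE_NS = 1_000_000  # 1 ms schedule cycle (illustrative)

# (window duration in ns, traffic classes whose gates are open)
GATE_CONTROL_LIST = [
    (200_000, {"scheduled"}),                # protected window: high priority only
    (800_000, {"scheduled", "best_effort"}), # shared window
]

def open_classes(t_ns: int) -> set:
    """Return the traffic classes allowed to transmit at time t_ns.
    Every node evaluates the same schedule against the same
    IEEE 1588/802.1AS-synchronized clock, which is what bounds
    worst-case latency by construction rather than by statistics."""
    offset = t_ns % CYCLE_NS
    for duration, classes in GATE_CONTROL_LIST:
        if offset < duration:
            return classes
        offset -= duration
    return set()

# A scheduled frame at t = 50 us meets an open gate; best-effort
# traffic must wait for the shared window, so it can never delay
# the high-priority stream.
assert "scheduled" in open_classes(50_000)
assert "best_effort" not in open_classes(50_000)
```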

Profiles are being developed for using TSN in specific industries, such as aerospace and defense, industrial automation, automotive, audio-video bridging systems, and others. Existing aerospace communication standards like ARINC-664, MIL-STD-1553, and SpaceWire fail to handle all of the concepts above that drove the design of TSN.

CAST/Fraunhofer IPMS offer a concise table of domains and key requirements for networking, noting that a big problem is isolated network islands using various technologies inside a large system. They propose TSN as a unifying network architecture that can meet all the requirements with less expensive components and simpler cabling.

The paper concludes with a brief overview of the CAST/Fraunhofer IPMS TSN IP cores available in RTL source or FPGA-optimized netlists, describing features designed for real-time networking with determinism and low latency.

To get the whole story on time-sensitive networking in aerospace and defense with more detail on the emerging IEEE TSN standards and profiles and aerospace network requirements, download the CAST/Fraunhofer IPMS white paper:

White Paper — Time Sensitive Networking for Aerospace


Analog Bits Continues to Dominate Mixed Signal IP at the TSMC Technology Symposium
by Mike Gianfagna on 05-06-2024 at 6:00 am


The recent TSMC Technology Symposium in the Bay Area showcased the company’s leadership in areas such as solution platforms, advanced and specialty technologies, 3D enablement, and manufacturing excellence. As always, the TSMC ecosystem was an important part of the story as well, and that topic is the subject of this post. Analog Bits came to the event with three very strong demonstrations of enabling IP on multiple fronts. Let’s examine how Analog Bits continues to dominate mixed signal IP at the TSMC Technology Symposium.

Demo One – New LDO, High Accuracy PVT Sensors, High Performance Clocks, Droop Detectors, and more in TSMC N3P

As more designs move to multicore architectures, managing power for all those cores becomes important. The new LDO macro can be scaled, arrayed, and shared adjacent to CPU cores while simultaneously monitoring power supply health. With Analog Bits’ detector macros, power can be balanced in real time. Mahesh Tirupattur, Executive Vice President at Analog Bits, said, “Just as our PLLs maintain clocking stability, we are now able to offer IPs that maintain power integrity in real time.”

Features of the new LDO macro include:

  • Integrated voltage reference for precision stand-alone operation
  • Easy to integrate, use, and configure with no additional components or special power requirements
  • Scalable for multiple output currents
  • Programmable output level
  • Trimmable
  • Implemented with Analog Bits’ proprietary architecture
  • Requires no additional on-chip macros, minimizing power consumption

Taking a look at one more IP block for power management, Analog Bits’ Droop Detector addresses SoC power supply and other voltage droop monitoring needs. The Droop Detector macro includes an internal bandgap-style voltage reference circuit, used as a trimmed reference against which the sampled input voltage is compared.

The part is synchronous with a latched output. Only when the monitored voltage input has exceeded a user-selected voltage level does the Droop Detector output signal indicate that a violation was detected.
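
As a way to picture that latched behavior, here is a minimal behavioral sketch in Python. The threshold, rail voltages, and interface are hypothetical; the actual macro performs this comparison in analog hardware against its trimmed bandgap-derived reference.

```python
# Behavioral sketch of a synchronous droop detector with a latched
# output. All values and the interface are hypothetical.

class DroopDetector:
    def __init__(self, threshold_v: float):
        self.threshold_v = threshold_v  # user-selected violation level
        self.violation = False          # latched output

    def clock(self, sampled_v: float) -> bool:
        """One synchronous sample: latch a violation once the
        monitored rail droops past the selected level. Latching
        ensures a brief transient droop is not missed between reads."""
        if sampled_v < self.threshold_v:
            self.violation = True
        return self.violation

    def clear(self) -> None:
        self.violation = False

det = DroopDetector(threshold_v=0.68)         # e.g., on a 0.75 V nominal rail
outputs = [det.clock(v) for v in (0.74, 0.66, 0.73)]
assert outputs == [False, True, True]         # the droop at sample 2 stays latched
```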

Below is a block diagram of an implementation. The composite Droop Detector comprises a primary Droop Detector, which includes a bandgap, plus additional Droop Detectors as the application requires, connected by abutment.

Droop Detector Block Diagram

Demo Two – Patented Pinless PLLs and Sensors in TSMC N3, N4 and N5

As discussed in this post, for gate-all-around architectures there will be only one gate oxide thickness available to support the core voltage of the chip. Other oxide thicknesses to support higher voltages are simply no longer available. In this scenario, the Pinless Technology invented by Analog Bits becomes even more critical for migration below 3nm, as all of the pinless IP works directly from the core voltage.

Examining the Pinless PVT Sensor at TSMC N5 and N3, this device provides full analog process, voltage, and temperature measurements with no external pin access required, running off the standard core power supply. This approach delivers many benefits, including:

  • No on-chip routing of the analog power supply
  • No chip bumps
  • No package traces or pins
  • No PCB power filters

For voltage measurements, the device delivers excellent linearity, as shown in the diagram.

Pinless PVT Voltage Linearity

Demo Three – Automotive Grade SERDES, PLLs, Sensors, and IOs in TSMC N5A

As the electronic content in automobiles continues to increase, the need for a complete library of IPs that meet the stringent requirements of this operating environment becomes more important. In this demo, Analog Bits showcased a wide range of IP that meets automotive requirements on the TSMC N5A process.

I’ll take a look at Analog Bits’ Wide Range PLL.  This IP addresses a large portfolio of applications, ranging from simple clock de-skew and non-integer clock multiplication to programmable clock synthesis for multi-clock generation.  This IP is designed for AEC-Q100 Automotive Grade 2 operation.
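
For a sense of what programmable, non-integer synthesis means in practice, below is the generic textbook frequency relation behind such a PLL. The divider names and example values are illustrative assumptions, not Analog Bits’ actual programming model.

```python
# Generic clock-synthesis arithmetic for a wide-range PLL.
# Divider names and ranges are textbook illustrations only.

def pll_fout(f_ref_mhz: float, fbdiv: float, refdiv: int, postdiv: int) -> float:
    """f_out = f_ref * fbdiv / (refdiv * postdiv). A fractional
    feedback divider (fbdiv) gives the non-integer multiplication
    mentioned above; an integer fbdiv gives simple multiplication."""
    return f_ref_mhz * fbdiv / (refdiv * postdiv)

print(pll_fout(25.0, fbdiv=48, refdiv=1, postdiv=2))       # 600.0 MHz
print(pll_fout(25.0, fbdiv=49.152, refdiv=1, postdiv=10))  # 122.88 MHz (non-integer)
```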

The PLL macro is implemented in Analog Bits’ proprietary architecture that uses core and IO devices. To minimize noise coupling and maximize ease of use, the PLL incorporates a proprietary ESD structure proven across several generations of processes. Eliminating bandgaps and integrating all on-chip components, such as capacitors and the ESD structure, significantly improves jitter performance and reduces stand-by power.

The diagram below shows the block diagram for this IP.

PLL Block Diagram

Stepping back a bit, the figure below shows the various Analog Bits IP showcased in the N3P test chip demo at the TSMC Technology Symposium.

N3P Test Chip Demo

Executive Perspective

Mahesh Tirupattur

I had the opportunity to chat with Mahesh Tirupattur to get his comments on the recent announcements at the TSMC Technology Symposium. He said:

“The Analog Bits team is always innovating leading edge novelty IP solutions to solve customer design challenges on the latest processes.  Our mission is to enable integration of many off-chip components that reside on the board to on-die and soon on-chiplets. The benefits are significant by enabling embedded clocks, PMIC, LDO on-die – reduced form factor, costs, and improved performance with lower power. What is not to like about this? Our approach sets us apart in the marketplace as we truly add value by amalgamating system knowledge with leading edge mixed signal designs in advanced processes to enable new AI architectures.”

To Learn More

All of the Analog Bits demos from the TSMC Technology Symposium are now available to see online. If you missed the event, you can catch up on the Analog Bits demos here. You can also see all the TSMC processes Analog Bits supports here. It’s quite a long list. And that’s how Analog Bits continues to dominate mixed signal IP at the TSMC Technology Symposium.



Why NA is Not Relevant to Resolution in EUV Lithography
by Fred Chen on 05-05-2024 at 8:00 am


The latest significant development in EUV lithography technology is the arrival of High-NA systems. Theoretically, by increasing the numerical aperture (NA) from 0.33 to 0.55, the absolute minimum half-pitch is reduced by 40%, from 10 nm to 6 nm. However, for EUV systems, we need to recognize that EUV light (consisting of photons) is ionizing, i.e., it releases photoelectrons in absorbing materials. A 92 eV EUV photon, once absorbed, kicks off a ~70-80 eV photoelectron which gradually deposits most of the energy of the original photon [1]. The image information originally in the EUV photon density is replaced by the final migrated photoelectron density. The difference between minimum and maximum resist exposure is defined by where all the photoelectrons finally reach a particular energy cutoff. We also need to add photoelectron number randomness, which stems from photon absorption randomness. The resulting stochastic effects lead to edge roughness, unpredictable edge-position shifts, and even defects. All these considerations taken together require us to revisit the actual practical resolution of EUV lithography.

Photoelectron Model details

EUV light arriving at the wafer is a mixture of two components, one polarized parallel to the plane of incidence (TM), and one polarized perpendicular to the plane of incidence (TE). The photoelectron is emitted predominantly along the direction of polarization [2]. Purely unpolarized light should be a 50-50% mixture, but we may expect some departure because the mirrors in the EUV system may reflect near the Brewster angle [3]. With regard to lines being imaged, the photoelectrons along the TE polarization move along the lines, while the photoelectrons along the TM polarization move perpendicular to the lines. It is only the latter which degrades the image. Photoelectrons moving laterally effectively shift the image. Figure 1 shows the relative probability for a photoelectron to migrate a given distance to a 3 eV cutoff. This cutoff corresponds to the resist thickness loss following exposure and development [1].

Figure 1. Probability density for EUV photoelectron travel distance for an open-source resist, to a 3 eV energy cutoff [1].

The resist exposure at a given point will be affected by photoelectrons from a given distance away, weighted by the probability density for that distance, only for the TM case. The TE portion is taken to be unaffected by photoelectron migration as the photoelectron travels along the lines [3].
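
A minimal numerical sketch of that weighting is shown below, assuming an idealized sinusoidal line/space image and a stand-in exponential spread function rather than the measured PDF of Figure 1. Only the TM fraction is blurred; the TE fraction passes through unchanged.

```python
# Sketch: TM image component blurred by the photoelectron
# travel-distance PDF, TE component unchanged. The image and
# PDF are stand-ins, not the measured data of Figure 1.
import numpy as np

pitch_nm, px = 40.0, 1.0
x = np.arange(0, 4 * pitch_nm, px)
aerial = 0.5 * (1 + np.cos(2 * np.pi * x / pitch_nm))  # idealized line/space image

# Stand-in spread PDF: exponential with an assumed ~5 nm
# characteristic travel distance to the 3 eV cutoff.
r = np.arange(-20, 21) * px
pdf = np.exp(-np.abs(r) / 5.0)
pdf /= pdf.sum()

tm_fraction = 0.5                                # unpolarized: 50-50 TM/TE mix
tm = np.convolve(tm_fraction * aerial, pdf, mode="same")
te = (1 - tm_fraction) * aerial                  # travels along the lines: no blur
exposure = tm + te

# Image contrast is degraded only through the TM term.
print(f"swing before: {aerial.max() - aerial.min():.2f}, "
      f"after: {exposure.max() - exposure.min():.2f}")
```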

Picturing Photoelectron Spread

In a previous study [3], it was found that as pitch decreased below 40 nm, the image contrast, indicated by the normalized image log-slope (NILS), would degrade due to the photoelectron spread. As a reference, the photoelectron spread at 40 nm pitch is pictured in Figure 2. The absorbed photon dose is subject to shot noise, which directly affects the number of photoelectrons generated. At 5 mJ/cm2 absorbed, roughly what a 40 nm thick organic chemically amplified resist (CAR) absorbs at an incident dose of 30 mJ/cm2, photoelectrons penetrate the nominally unexposed region, potentially printing defects, while gaps in photoelectron coverage thoroughly penetrate the nominally exposed region. Raising the dose reduces this stochastic severity.

Figure 2. Photoelectron spread for 20 nm half-pitch vs absorbed dose. The orange portion indicates where the photoelectron density exceeds the half-pitch printing threshold. The pixel size is 1/40 of the pitch.

When the pitch is increased to 50 nm (Figure 3), the photoelectrons do not appear to spread as far stochastically, especially at the higher dose. This is due to the increased contrast, i.e., separation between maximum and minimum photoelectron densities in the image.

Figure 3. Photoelectron spread for 25 nm half-pitch vs absorbed dose. The orange portion indicates where the photoelectron density exceeds the half-pitch printing threshold. The pixel size is 1/40 of the pitch.

Redirecting the Determination of EUV Lithography Resolution

The EUV photoelectron spread probability density function shown in Figure 1 leads to a practical resolution limit of ~50 nm pitch at ~30 mJ/cm2 and ~40 nm pitch at ~90 mJ/cm2 for a typical CAR. This is far above the expected resolution limit from 0.33 or 0.55 NA. The resolution limit should therefore not be primarily associated with the optics of the EUV system but tied instead to photoelectron as well as secondary electron migration in the EUV resist. Moreover, the resolution is closely tied to the dose absorbed by the resist; a higher dose enables better resolution. This leads to a throughput tradeoff [4], requiring higher source power to compensate. The resolution of a given EUV resist must be characterized by calibrating low-energy electron scattering simulations [1,5] with resist thickness loss vs. electron dose measurements [1]. It must be kept in mind that while metal-containing resists are known for their enhanced EUV absorption [6], SnOx-based resists do not necessarily have an advantage in photoelectron spread distance over organic CARs [4]. Restrictions in elemental composition will prevent much deviation in the photoelectron spread function. As the resist will be the main determinant of EUV lithography resolution, less attention should be paid to the marketing of High-NA.

References

[1] A. Narasimhan et al., “What We Don’t Know About EUV Exposure Mechanisms,” J. Photopolym. Sci. and Tech. 30, 113 (2017).

[2] M. Kotera et al., “Extreme Ultraviolet Lithography Simulation by Tracing Photoelectron Trajectories in Resist,” Jpn. J. Appl. Phys. 47, 4944 (2008).

[3] F. Chen, “Resolution Limit From EUV Photoelectron Spread,” 2024, https://www.youtube.com/watch?v=3BIGo9UsIEA

[4] H. J. Levinson, Jpn. J. Appl. Phys. 61 SD0803 (2022).

[5] P. L. Theofanis et al., “Modeling photon, electron, and chemical interactions in a model hafnium oxide nanocluster EUV photoresist,” Proc. SPIE 11323, 113230I (2020).

[6] http://euvlsymposium.lbl.gov/pdf/2015/Posters/P-RE-06_Fallica.pdf

This article first appeared in LinkedIn Pulse: Why NA is Not Relevant to Resolution in EUV Lithography

Also Read:

Intel High NA Adoption

Huawei’s and SMIC’s Requirement for 5nm Production: Improving Multipatterning Productivity

ASML- Soft revenues & Orders – But…China 49% – Memory Improving


Podcast EP221: The Importance of Design Robustness with Mayukh Bhattacharya
by Daniel Nenni on 05-03-2024 at 10:00 am

Dan is joined by Mayukh Bhattacharya, Engineering Executive Director at Synopsys. Mayukh has been with Synopsys since 2003. For the first 14 years, he made many technical contributions to PrimeSim XA. Currently, he leads R&D teams for the PrimeSim Design Robustness and PrimeSim Custom Fault products. He was one of the early adopters of AI/ML in EDA. He led the development of a FastSPICE option tuner – Customizer – as a weekend hobby, which later became the inspiration behind the popular DSO.ai product. He has 11 granted (and 4 pending) patents, 7 journal papers, and 20 conference publications.

Dan explores the concept of design robustness with Mayukh. Design robustness is a measure of how sensitive a design is to variation – less sensitivity means a more robust design. For advanced nodes where there is significant potential for variation, design robustness becomes very important.

Mayukh explains the many dimensions of robustness, with a particular focus on memory design. He describes the methods required and how the Synopsys PrimeSim portfolio supports those methods. How AI fits into the process is also discussed, along with the benefits of finding problems early, the importance of adaptive flows, and the overall impact on reliability.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Harish Mandadi of AiFA Labs
by Daniel Nenni on 05-03-2024 at 6:00 am


Harish Mandadi is the CEO and Founder of AiFA Labs, a service-based IT company that provides best-in-class solutions for clients across various industries. With over 20 years of experience in IT sales and delivery, he brings a unique blend of entrepreneurial vision and hands-on expertise to the dynamic landscape of technology.

Tell us about your company?
Sure. So, AiFA Labs is an IT solutions company that guides clients through digital transformations as they update their business processes to incorporate generative AI, machine learning, optical character recognition, and other technologies.

In the past, we were a service-based company. However, we just launched our first product, Cerebro AI, on March 30th, 2024. It’s an all-in-one generative AI platform with more tools, features, and integrations than any other AI platform on the market. It will change the way companies do business and we are very proud of it.

What problems are you solving?
We are solving problems related to scalability, high overhead, and time to market. Cerebro has the ability to expand reach globally, reduce labor costs by 30-40%, and speed up content creation by 10x. One of Cerebro’s main features is SAP AI Code Assist, which automates the SAP ABAP development SDLC and reduces the effort by 30 to 50% with the click of a button.

We also have an AI Prompt Marketplace, where users can buy and sell AI prompts to make their interactions with generative AI more efficient and effective. Knowledge AI collects all company data and trains AI based on it. It allows business users to interact with their business data in natural language. In total, Cerebro is 17 tech products in one and we expect that number to grow. It is truly a one-stop shop for everything AI.

What application areas are your strongest?
Our strongest application areas are IT, life sciences, consumer, marketing, customer service, HR, and education. We are hoping to break into law, entertainment, and a few other areas.

What keeps your customers up at night?
I don’t know for sure, but I think missed opportunities keep our customers up at night. Experiencing increased demand without the means to meet client expectations is every business owner’s nightmare. With Cerebro, no customer inquiry goes unanswered, every hot topic is covered in print within a few hours, and software solutions are delivered in half the amount of time it usually takes.

What does the competitive landscape look like and how do you differentiate?

Right now, the competitive landscape is flooded with new generative AI products, and we expect to see many more of them come onto the market in the next few years. Most of them have a singular mode of operation, similar to ChatGPT or Gemini.

Our product is special because it incorporates all of the most popular large language models and allows users to choose which ones they use. It also integrates with Amazon AWS, Microsoft Azure, Google, SAP, and more. Cerebro possesses almost any AI functionality you can think of and then some. 

What new features/technology are you working on?
Our latest features are AI Test Automation and an AI Data Synthesizer. The first feature runs tests on SAP ABAP code to gauge performance and identify potential issues. The second feature processes data with missing information and fills in the gaps based on context.

How do customers normally engage with your company?
Customers engage with us on LinkedIn, Twitter/X, or our company website.

Also Read:

CEO Interview with Clay Johnson of CacheQ Systems

CEO Interview: Khaled Maalej, VSORA Founder and CEO

CEO Interview with Ninad Huilgol of Innergy Systems


Self-heating and trapping enhancements in GaN HEMT models
by Don Dingee on 05-02-2024 at 10:00 am

RTH0 extraction

High-fidelity models incorporating real-world, cross-domain effects are essential for accurate RF system simulation. The surging popularity of gallium nitride (GaN) technology in 5G base stations, satellite communication, defense systems, and other applications raises the bar for transistor modeling. Keysight dives deeply into two GaN effects – self-heating and trapping – in enhanced ASM-HEMT 101.4 and MVSG_CMC 3.2.0 GaN HEMT models shipping in the latest release of Advanced Design System (ADS), developed using its advanced parameter extraction package in IC-CAP.

A quick intro to ASM-HEMT and MVSG_CMC

The Compact Model Coalition (CMC), a Silicon Integration Initiative (Si2) working group,  continues refining two industry-leading GaN transistor model specifications, ASM-HEMT and MVSG_CMC.

  • ASM-HEMT (Advanced SPICE Model for High Electron Mobility Transistors) is a computationally efficient, surface-potential-based model for terminal current and charge, accounting for various secondary device effects, including self-heating and trapping.
  • MVSG_CMC (MIT Virtual Source GaNFET Compact Model Coalition) is a self-consistent charge-based model with versatile field plate current and charge configurations. It also includes effects like leakage, noise, bias dependencies, and self-heating and trapping.

Both models provide analytical solutions for GaN device behavior that are suitable for accurate simulation in frequency and time domains. They each use an R-C network with thermal resistance and capacitance to model self-heating effects. Both also provide parameter selections for various trapping scenarios, including the latest versions modeled with R-C networks incorporating variable drain-lag and gate-lag.

Self-heating parameter extraction

The increased power density of GaN devices concentrates self-heating in a smaller area, reducing mobility, increasing signal delays, and potentially shortening a device’s lifespan. The extraction of self-heating parameters using IC-CAP is similar for either ASM-HEMT or MVSG_CMC GaN HEMT models.

Modeling thermal resistance RTH0 works well using drain current Id with varying drain and gate voltage under both static and pulsed stimulation. First, a static Id-Vd curve taken at room temperature provides a baseline. Then, short Id pulses applied with Vd0 and Vg0 held at 0V to minimize trapping and self-heating provide response curves at various temperatures. Overlaying the static curve with the pulsed curves yields intersections where Id is the same. Power is calculated and plotted versus temperature, and the slope of the line is RTH0.

Using the pulsed Id approach provides a more straightforward extraction method than extracting RTH0 from DC static characteristics alone.
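
The arithmetic at the end of that flow is a straight-line fit. A minimal sketch, with made-up (power, temperature) intersection points standing in for measured data:

```python
# Sketch of the RTH0 fit: each static/pulsed intersection gives a
# (dissipated power, junction temperature) pair, since matching Id
# implies the static device has self-heated to that pulsed curve's
# chuck temperature. The numbers below are invented for illustration.
import numpy as np

power_w = np.array([0.5, 1.0, 1.5, 2.0])      # P = Vd * Id at the intersections
temp_c = np.array([40.0, 55.0, 70.0, 85.0])   # chuck temps of the matching pulsed curves

rth0, t_amb = np.polyfit(power_w, temp_c, 1)  # T = RTH0 * P + T_ambient
print(f"RTH0 = {rth0:.1f} K/W, ambient = {t_amb:.1f} C")  # 30.0 K/W, 25.0 C
```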

Trapping parameter extraction

Trapping effects in GaN devices also factor heavily into performance and reliability. Charge trapping in buffer and interface layers reduces 2DEG channel charge density and dynamic ION, increases dynamic RON and cut-off voltage, and modulates Id.

Again, the methodology for parameter extraction is similar between ASM-HEMT and MVSG_CMC, even with the differences in the implementation of the R-C network between the models. Trapping parameter extraction is done after the DC, IV, thermal, and S-parameter extraction. Gate-lag trapping extraction happens first since it affects the initial transistor response and overall behavior, activating only surface traps. With gate-lag behavior analyzed, drain-lag trapping extraction is more accurate, activating both surface and buffer traps.

ASM-HEMT Trapping Model 4 uses two R-C circuits to model drain-lag and gate-lag.

MVSG_CMC Trapping Model 2 uses a similar network with a slightly different physical model, accounting for variable trapping (capture) and de-trapping (emission) time.

Parameter extraction pulses Vg while holding Vd constant for gate-lag, and pulses Vd while holding Vg constant for drain-lag. A representative drain-lag plot for MVSG_CMC illustrates the difference between capture and emission effects.
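
The capture/emission asymmetry behind both models can be pictured with a first-order occupancy equation; the time constants below are hypothetical illustrations, not extracted ASM-HEMT or MVSG_CMC parameters.

```python
# First-order R-C-style trap occupancy: fast capture while the
# stress pulse is high, slow emission afterward. Time constants
# are hypothetical.
import math

def trap_occupancy(steps, tau_capture_ns=50.0, tau_emission_ns=5000.0):
    """Evolve occupancy theta over (duration_ns, pulse_high) steps.
    The slow emission is what shows up as gate-lag / drain-lag in Id."""
    theta, trace = 0.0, []
    for dt_ns, high in steps:
        target, tau = (1.0, tau_capture_ns) if high else (0.0, tau_emission_ns)
        theta = target + (theta - target) * math.exp(-dt_ns / tau)
        trace.append(theta)
    return trace

# A 200 ns stress pulse nearly fills the traps; 200 ns later they
# have barely emptied: the measured "lag".
print(trap_occupancy([(200, True), (200, False)]))  # ~[0.98, 0.94]
```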

IC-CAP keeps pace with the latest GaN HEMT models

The automated parameter extraction flow in IC-CAP simplifies the process for any developer of GaN device models, whether they are a CMC member or not. Keysight’s experience with these industry-leading models also helps IC-CAP customers apply extraction strategies for their process, improving GaN device model fidelity.

IC-CAP also supports ADS users with the latest GaN HEMT models shipped in each successive release. Self-heating and trapping are good examples of adding more complex effects to improve RF circuit simulation results. As the CMC continues improving its models, Keysight keeps pace with tools for foundry and RF design customers.

Further information on the CMC and its upcoming meetings is available at:

https://si2.org/cmc/

Two application notes explain Keysight’s automated parameter extraction strategy for robust GaN HEMT models in more detail:

How to Extract the ASM-HEMT Model for GaN RF Devices Including Thermal Effects

Trapping Extraction of GaN HEMTs


KLAC- Past bottom of cycle- up from here- early positive signs-packaging upside
by Robert Maire on 05-02-2024 at 8:00 am


– KLA reported a good QTR but more importantly passing the bottom
– Lead times mean KLA gets orders early in up cycle-just behind ASML
– Potential upside in upcycle as packaging needs more process control
– 2024 2nd half weighted with stronger recovery likely in 2025

A solid quarter as expected with good guide

KLAC reported revenues of $2.36B and EPS of $5.26 versus street estimates of $2.31B and $5.01, a modest beat. Guidance is for $2.5B ±$125M and EPS of $6.07 ±$0.60 versus street estimates of $2.42B and $5.68.

So all around a decent report….

Past the bottom in March

The most important comment is that the company has clearly put a stake in the ground, calling March the bottom of the down cycle, with subsequent quarters up from here. This is certainly a much more definitive answer than what we heard last night from Lam and sets a much more positive tone going forward. It does not sound like 2024 will be a barn burner, but at least we will see steady recovery from here and a second-half-weighted year going into a stronger 2025.

The stronger 2025 agrees with Lam’s comments and other comments we have heard, as we still have a number of issues that remain headwinds for the industry, such as NAND oversupply and trailing-edge weakness.

But it does sound like a lot of the other issues will resolve or reduce by the end of the year. This has been one of the more extended down cycles we have been through and KLA has done a good job through it all.

KLA tools tend to be early in the order cycle

KLA tools tend to be ordered early in the cycle for two main reasons: 1) you need KLA tools to get other tools and the overall fab process up to speed on the next node, and 2) KLA tools have longer lead times than process tools, which are typically more of a turns business, whereas KLA tools have lead times of multiple quarters. The only tools with longer lead times that precede KLA tools are litho tools from ASML.

We would imagine that the order book will likely start to fill throughout 2024 for delivery starting in 2025 and beyond.

Investors need to remember that there are a lot of new greenfield fabs that need building construction to be finished before equipment can be received.

Exiting flat panel business is a good move

The flat panel business sometimes made the semiconductor business look stable by comparison. We never saw flat panel as being strongly inside KLA’s wheelhouse. It makes a lot more sense for KLA to focus on things closer to home or adjacent to home. Back end is obviously adjacent to front end…

Packaging finally gets some respect

We have been talking about the back end of the business needing more process control for a number of years now and it seems as if it has been very late in coming but may finally be getting somewhere. The days of rows of “sewing machine” wire bonders under bare light bulbs in an ugly, dirty factory in Taiwan are behind us.

Packaging is now a front-end-style process business, working at the micron-level dimensions the front end saw a while ago.

We think this could be a significant opportunity for growth outside of KLA’s core wafer and reticle inspection markets, closer to KLA’s wheelhouse than the Orbotech acquisition, and it is obviously lower-cost, organic growth to boot.

While the back end is notoriously cheap and expense-averse, we think the complexity has gotten to the point where there is an overwhelming need for front-end-like process control.

The Stocks

We would expect a more positive investor response to KLA than what we saw and heard from Lam. The only tempering factor may be the Intel earnings report released at the same time, which seems somewhat underwhelming and may throw a wet blanket on the overall industry momentum.

It is still clear to us that we are far from being out of the woods of the downcycle, but at least KLA is past the bottom and sees upside from here. We would repeat again that this is going to be a long, slow recovery; 2024 isn’t going to be great and will probably look a lot like a mirror image of 2023, but it’s a start.

We still remain cautious that the stocks don’t get too far out over the tip of their surfboard as they had gotten in past months and perhaps the retraction we have seen will keep expectations and stock prices in check a bit more.

On the positive side, we think downside disappointment is likely limited going forward, so we primarily have to pay attention to valuation.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.

We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

LRCX- Mediocre, flattish, long, U shaped bottom- No recovery in sight yet-2025?

ASML- Soft revenues & Orders – But…China 49% – Memory Improving

ASML moving to U.S.- Nvidia to change name to AISi & acquire PSI Quantum


One Step Ahead of the Quantum Threat
by admin on 05-02-2024 at 6:00 am


When it comes to the security of tomorrow, the time to prepare is today, and at PQShield, we’re focused on shaping the way the digital world is protected from the inevitable quantum threat. We deliver real-world, quantum-safe hardware and software upgrades, and it’s our mission to help modernize the legacy security systems and components of the world’s technology supply chain.

Based in the UK, PQShield began as a spin-out from the University of Oxford, and is now the largest collaboration of post-quantum cryptographers under one roof, anywhere in the world. We’re also world leaders in advanced hardware side-channel protection, and we’re a source of truth, providing clarity to our stakeholders at every level. With teams across 10 countries, covering the EU, UK, US, and Japan, we’ve been involved in the PQC conversation globally, working with industry, academia, and government.

Within a decade, the mathematical defenses that currently keep online information safe will be at risk from a cryptographically relevant quantum computer sufficiently powerful to break them. In fact, even before such a machine exists, there’s a significant risk of ‘harvest-now-decrypt-later’ attacks, poised to extract stolen information when the technology to decrypt it becomes available. We believe it’s critical that industries, organizations, governments, and manufacturers are aware of the threat, and follow the best roadmap to quantum resistance.

This is a critical moment. With the recent push for legislation in the US, such as NSM-10 and HR.7535, as well as CNSA 2.0 and the National Cybersecurity Strategy, federal agencies and government departments are now mandated to prepare and budget for migration to full PQC by 2033. Meanwhile in Europe, organizations such as ANSSI (French Cybersecurity Agency) and BSI (German Federal Office for Information Security) have published key recommendations on deployment scenarios, and in the UK, the National Cyber Security Centre (NCSC) are recommending next steps in preparing for post-quantum cryptography. International influence is also growing quickly. We recently presented at the European Parliament, attended a roundtable discussion at the White House, and we’ve been key contributors to the World Economic Forum on regulation for the financial sector. There’s no doubt that the world is waking up to the quantum threat.

PQC is also finding its way into major applications. Recently, Apple unveiled a major update, introducing their PQ3 protocol for post-quantum secure iMessaging. This follows Signal’s large-scale update to post-quantum messaging (referencing PQShield’s research in this domain), as well as Cloudflare’s deployment of post-quantum cryptography on outbound connections. Google Chrome version 116 also includes hybrid PQC support for browsing, and AWS Key Management service now includes support for post-quantum TLS. Other providers are certain to follow.
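
These deployments share a common “hybrid” construction: a classical key-exchange secret and a post-quantum KEM secret are combined through a key-derivation function, so a session stays safe unless both schemes are broken. A minimal conceptual sketch, with random placeholder bytes standing in for real X25519 and ML-KEM outputs:

```python
# Conceptual hybrid key derivation. The "secrets" are placeholders;
# a real stack would obtain them from X25519 and ML-KEM implementations.
import hashlib, hmac, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): HMAC the input keying material."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

classical_secret = os.urandom(32)  # stand-in for an X25519 shared secret
pq_secret = os.urandom(32)         # stand-in for an ML-KEM shared secret

# Concatenate-then-extract: recovering the session key requires
# breaking BOTH inputs, even with a future quantum computer.
session_key = hkdf_extract(b"hybrid-handshake", classical_secret + pq_secret)
print(session_key.hex())
```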

In addition, with the publication of the finalized NIST PQC standards, 2024 is set to kickstart even more widespread awareness and adoption. It’s certainly a point that the team at PQShield have been working towards; our ‘think openly, build securely’ ethos has helped us contribute directly to the NIST project, and we’ve created a portfolio of forward-thinking solutions using the expected algorithms. Our products are already in the hands of key customers such as Microchip, AMD, Raytheon, Tata Consulting Services, and many more.

PQShield’s goal is to stay one step ahead of the attackers, and we believe our security portfolio can help. With our FIPS 140-3-ready software libraries, our side-channel protected hardware solutions, and our embedded IP for microcontrollers, we’re aiming to provide configurable products that maximise high performance and high security for the technology supply chain. We’ve understood the reality of the quantum threat, and at PQShield we’re focused on helping the world to defend against it.

Also Read:

Crypto modernization timeline starting to take shape

WEBINAR: Secure messaging in a post-quantum world

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography


WEBINAR: Navigating the Power Challenges of Datacenter Infrastructure
by Mike Gianfagna on 05-01-2024 at 10:00 am


We all know power and energy management is a top-of-mind item for many, if not all new system designs. Optimizing system power is a vexing problem. Success requires coordination of many hardware and software activities. The strategies to harmonize operation for high performance and low power are often not obvious. Much work is going on here. Data center design is at the heart of the problem as the cloud has created massive processing capability with a massive power bill. You need to look at the problem from multiple points of view to make progress. A recent webinar from proteanTecs brought together a panel of experts who look at this problem from multiple points of view. The insights they offer are significant, some quite surprising and counterintuitive. Read on to get a better understanding of navigating the power challenges of datacenter infrastructure.

The Webinar

proteanTecs is a unique company that focuses on deep data analytics through on-chip monitoring. You can learn about the company’s technology here. Recently, proteanTecs announced a power reduction solution that leverages its existing sensing and analytics infrastructure. I covered that announcement here. So, this combination of solutions around power optimization gives the company a broad and unique perspective.

In a recent webinar, proteanTecs brought together a group of distinguished experts from A-list companies who are involved in various aspects of data center design. The insights offered by this group are quite valuable. A link is coming so you can get all the information from the panel directly. Let’s first review some of the key take-aways.

The Panel

The panelists are shown in the graphic at the top of this post. I will review each panelist’s opening remarks to set the stage for what follows. Going from right to left:

Mark Potter moderated the panel. He has been involved in the data center industry for over 30 years, much of that time at Hewlett Packard Enterprise and Hewlett Packard Labs, where he was Global CTO and Director. Today, Mark is involved in venture capital and is on the advisory board of proteanTecs. The moderator is a key role in an event like this. Mark is clearly up to the challenge of guiding the discussion in the right direction, uncovering significant insights along the way.

Evelyn Landman, Co-Founder and CTO at proteanTecs. Evelyn has broad responsibility for all proteanTecs solutions across the markets the company serves. She pointed out that new workloads are creating new demands at the chip and system level. While advanced technology remains important, density requirements are forcing a move to chiplet-based design. There is also a focus on reducing operating voltage to save power, but this brings a new set of challenges.

Eddie Ramirez, VP Go-To-Market, Infrastructure Line of Business at Arm. Eddie focuses on working with Arm’s substantial ecosystem to build efficient semiconductor solutions for the cloud, networking, and edge markets. Eddie discussed the exploding size of the user base as driving compute power challenges. Everyone wants to do more with larger data sets, and AI is driving a lot of that.

Artur Levin, VP AI Silicon Engineering at Microsoft. Artur and his team focus on developing the most efficient and effective AI solutions. Artur also sees unprecedented growth in compute demands. Thanks to new AI algorithms, he is seeing new forms of compute infrastructure that previously did not exist. Cooling becomes a system and silicon challenge. The mandate for sustainability will also impact the approaches taken.

Shesha Krishnapuria, Fellow and IT CTO at Intel. Shesha has the broad charter of advancing data center design at Intel and throughout the industry for energy and space efficiency. He has focused on this area for over 20 years. Shesha pointed out that over the past 20 years Intel chip computing for data centers has grown over 140,000 percent. An incredible statistic. Looking ahead, this growth is likely to accelerate due to the widespread use of GPUs for AI applications.

With this backdrop of power and cooling problems that are difficult and getting worse, Mark began exploring strategies and potential solutions with the panel. What followed was a series of insightful and valuable comments. You should invest an hour of your time to hear it live.

To whet your appetite, I will leave you with one insight offered by Shesha. He pointed out that data center infrastructure is still built with the same design parameters that have existed since the days of the mainframe.  That is, use extensive refrigeration systems to maintain a 68 degree Fahrenheit ambient. Looking at the operating characteristics of contemporary technology suggests an ambient of 91 degrees Fahrenheit should work fine. This suggests you can remove all the expensive and power-hungry cooling infrastructure and instead use the gray water provided by water utilities at a reduced price to drive simple heat exchangers, significantly simplifying the systems involved and lowering the cost.

To Learn More

There are many more useful insights discussed in the webinar. You can access the replay here. You can also access an informative white paper from proteanTecs on Application-Specific Power Performance Optimizer Based on Chip Telemetry. If you’d like to reach out and explore more details about this unique company you can do that here. All this will help you understand navigating the power challenges of datacenter infrastructure.


Nvidia Sells while Intel Tells
by Claus Aasholm on 05-01-2024 at 8:00 am


AMD’s Q1-2024 financial results are out, prompting us to delve into the Data Center Processing market. This analysis, usually reserved for us Semiconductor aficionados, has taken on a new dimension. The rise of AI products, now the gold standard for semiconductor companies, has sparked a revolution in the industry, making this analysis relevant to all.

Jensen Huang of Nvidia is called the “Taylor Swift of Semiconductors” and just appeared on CBS 60 Minutes. He found time for this between autographing Nvidia AI systems and suppliers’ memory products.

Lisa Su of AMD, who has turned the company’s fortunes around, is now one of only 26 self-made female billionaires in the US. She has also been named CEO of the Year by Chief Executive magazine and has appeared on the cover of Forbes. Lisa Su is, however, still not famous in Formula 1.

Hock Tan of Broadcom, desperately trying to avoid critical questions about the change in VMware licensing, would rather discuss the company’s strides in AI accelerator products for the Data Center, which have been significant.

An honorable mention goes to Pat Gelsinger of Intel, the former owner of the Data Center processing market. He has relentlessly been in the media and on stage, explaining the new Intel strategy and his faith in the new course. He has been brutally honest about Intel’s problems and the monumental challenges ahead. We deeply respect this refreshing approach but also deal with the facts. The facts do not look good for Intel.

AMD’s reporting

While the AMD result was challenging from a corporate perspective, the Data Center business, the topic of this article, did better than the other divisions.

The gaming division took a significant decline, leaving the Data Center business as the sole division likely to deliver robust growth in the future. As can be seen, the Data Center business delivered a solid operating profit. Still, it was insufficient to take a larger share of the overall profit in the Data Center Processing market. The 500-pound gorilla in the AI jungle is not challenged yet.

The Data Center Processing Market

Nvidia’s Q1 numbers have been known for a while (our method is to allocate all of the quarterly revenue in the quarter of the last fiscal month), together with Broadcom’s, the newest entry into the AI processing market. With Intel and AMD’s results, the Q1 overview of the market can be made:

Despite a lower growth rate in Q1-24, Nvidia kept gaining market share, keeping the other players away from the table. Nvidia’s Data Center Processing market share increased from 66.5% to 73.0% of revenue. In comparison, its share of operating profit declined from 88.4% to 87.8%, as Intel managed to get better operating profits from its declining revenue in Q1-24.

Intel has decided to stop hunting low-margin businesses while AMD and Broadcom maintain reasonable margins.

As good consultants, we are never surprised by any development in our area once presented with numbers. That will not stop us from diving deeper into the Data Center Processing supply chain. This is where all energy in the Semiconductor market is concentrated right now.

The Supply Chain view of the Data Center Processing

A CEO I used to work for used to remind me: “When we discuss facts, we are all equal, but when we start talking about opinions, mine is a hell of a lot bigger than yours.”

Our consultancy is built on a foundation of not just knowing what is happening but also being able to demonstrate it. We believe in fostering discussions around facts rather than imposing our views on customers. Once the facts are established, the strategic starting point becomes apparent, leading to more informed decisions.

“There is nothing more deceptive than an obvious fact.” – Sherlock Holmes

Our preferred tool of analysis is our Semiconductor Market model, seen below:

The model has several different categories that have proven helpful for our analysis and are described in more detail here:

We use a submodel to investigate the Data Center supply chain. This is also an effective way of presenting our data and insights (the “Rainbow” supply and demand indicators) and adding our interpretations as text. Our interpretations can undoubtedly be challenged, but we are okay with that.

Our current finding that the supply chain is struggling to get sufficient CoWoS packaging technology and High Bandwidth Memory is not a controversial view and is shared by most who follow the Semiconductor Industry.

This will not stop us from taking a deeper dive to be able to demonstrate what is going on.

The Rainbow bars between the different elements in the supply chain represent the current status.

The interface between Materials & Foundry shows that the supply is high, while the demand from TSMC and other foundries is relatively low.

Materials situation

This supply/demand situation should create a higher inventory position until the two bars align again in a new equilibrium. The materials inventory index does show elevated inventory, and the materials markets are likely some distance away from recovery.

Semiconductor Tools

The recent results of the semiconductor tools companies show that revenues are going down, and the appetite of IDMs and foundries alike indicates that investment is saturated. The combined result can be seen below, along with essential semiconductor events:

The tools market has flatlined since the Chips Act was signed, and there can certainly be a causal effect (something we will investigate in a future post). Even though many new factories are under construction, these activities have not yet affected the tools market.

A similar view of the subcategory of logic tools, which TSMC uses, shows an even more depressed revenue situation. Tools revenue is back to late-2021 levels, at a time of unprecedented expansion of the semiconductor manufacturing footprint:

This situation is confirmed on the demand side as seen in the TSMC Capital Investments chart below.

Right after the Chips Act was signed, TSMC lowered the capex spend to close to half, making life difficult for the tools manufacturers.

The tools-foundry interface has high supply and low demand, as can be seen in the supply chain model. The tools vendors are not the limiting factor for GPU AI systems.

The Foundry/Fabless interface

To investigate the supply-demand situation between TSMC and its main customers, we selected AMD and Nvidia, as they have the simplest relationship with TSMC: the bulk of their business is processors made by TSMC.

The inventory situation of the three companies can be seen below.

TSMC’s inventory building up slightly does not indicate a supply problem; however, this is TSMC’s total inventory, so there could be other moving parts. The Nvidia peak aligns with the introduction of the H100.

TSMC’s HPC revenue aligns with the Cost of Goods sold of AMD and Nvidia.

As should be expected, there are no surprises in this view. As TSMC’s HPC revenue is growing faster than the COGS of Nvidia and AMD, we can infer that a larger part of revenue comes from customers other than Nvidia and AMD. This is a good indication that TSMC is not supply limited from an HPC silicon perspective. Still, demand is outstripping supply at the gates of the data centers.

The Memory, IDM interface

That the sky-high demand for AI systems is supply limited can be seen in the wild operating profit Nvidia is enjoying right now. The supply chain of AI processors looks smooth, as we saw before. This is confirmed by TSMC’s passivity in buying new tools. If there were a production bottleneck, TSMC would have taken action from a tools perspective.

An analysis of memory production tools hints at the current supply problem.

The memory companies put the brakes on investments right after the last downcycle began. Over the last two quarters, demand has increased in anticipation of the High Bandwidth Memory needed for AI.

Hynix, in their recent investor call, confirmed that they had been underinvesting and will have to limit standard DRAM manufacturing in order to supply HBM. This is very visible in our Hynix analysis below.

Apart from the limited supply of HBM, there is also a limitation of advanced packaging capacity for AI systems. As this market is still embryonic and developing, we have not yet developed a good data method to be able to analyze it but are working on it.

While our methods do not prove everything, we can bring a lot of color to your strategy discussions should you decide to engage with our data, insights, and models.


Also Read:

Real men have fabs!

Intel High NA Adoption

Intel is Bringing AI Everywhere

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production