
Podcast EP248: The Far-Reaching Impact of Finwave Technology With Dr. Pierre-Yves Lesaicherre
by Daniel Nenni on 09-20-2024 at 10:00 am

Dan is joined by Dr. Pierre-Yves Lesaicherre. Before joining Finwave as CEO, Dr. Lesaicherre was the president, CEO, and a director of Nanometrics, a leading provider of advanced process control metrology and software analytics. He also served as CEO of Lumileds, an integrated manufacturer of LED components and automotive lighting lamps. Dr. Lesaicherre previously held senior executive positions at NXP and Philips Semiconductors, and served as chairman of the board of Silvaco Group, a leading supplier of TCAD, EDA software, and design IP.

Dan explores with Pierre-Yves the unique technology invented by Finwave and its impact on the industry. They discuss the founding of Finwave, including the founders’ invention of a novel type of gallium nitride (GaN) transistor based on a FinFET architecture while working at MIT.

Pierre-Yves describes how Finwave’s 3DGaN FinFET technology significantly enhances power amplifier linearity, meeting the demands of advanced communication systems. He touches on several other innovations, including the world’s first high-speed, broadband, high-power RF switch, and discusses the superior speed and power handling capabilities of Finwave’s technology. Pierre-Yves also describes the recently announced partnership with GlobalFoundries on RF GaN-on-Si technology for cellular handset applications.

Future plans for the company, including product ramp and fund-raising, are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Adam Khan of Diamond Quanta
by Daniel Nenni on 09-20-2024 at 6:00 am


Adam Khan is a pioneer in diamond semiconductor technology, renowned for his foresight and expertise. As the founder of AKHAN Semiconductor, he played a crucial role in innovating lab-grown diamond thin-films for various applications, such as enhancing smartphone screens and lenses with Miraj Diamond Glass® and improving aircraft survivability with Miraj Diamond Optics®. In 2019, under Adam’s leadership, AKHAN partnered with the U.S. Army and a major Defense Contractor to showcase the robust capabilities of its diamond technology, highlighting its importance in national security and defense. His vision secured a $20 million A-Round investment, accelerating the scaling of their consumer device technology to meet growing demand. Adam’s leadership and achievements have been widely recognized, including being named a Forbes 30 Under 30 honoree in Energy & Industry.

Tell us about your company?

Diamond Quanta is here to bring about the future of diamond technology. For us, that means applying our groundbreaking fabrication processes to advance the material’s potential, allowing it to confidently handle the long-term, modern issues facing semiconductors and quantum technology.

Through our novel fabrication and doping techniques, we are pushing diamond’s limits beyond what was previously thought possible of the material, expanding its efficiency, durability, and sustainability to provide real-world solutions to challenges facing a wide spectrum of prominent industries.

What problems are you solving?

Power electronics face a major issue with energy loss during power conversion, which leads to inefficiency, malfunction, and poor sustainability, among other things. Diamond Quanta is harnessing and advancing diamond’s exceptional wide-bandgap properties to tackle this issue head-on. Our approach creates a transformed diamond material that minimizes power conversion energy loss, enabling devices that are far more efficient while also being more compact and lightweight.

With these advantages in mind, our technology brings about power electronics that can withstand higher temperatures, operate at increased voltages and deliver superior performance across a broader range of frequencies, with a reduced carbon footprint on top of all of that.

What application areas are your strongest?

Right now, our technology is well-suited to address the issues plaguing data centers, the electric vehicle (EV) transition and aerospace manufacturing.

As data centers expand, their computational needs are growing, making it more and more difficult to efficiently power them. Using our diamond-based technology, semiconductors will have a much easier time handling these lofty power loads, as their superior efficiency and heat dissipation will allow them to confidently meet the computational needs that are asked of them, without requiring the inefficient cooling processes currently used to keep today’s chips functional in these applications.

That same efficiency and heat dissipation is vital to our technology’s application in the EV industry. Diamond-based electronics enable more compact and efficient power electronics, which directly translates to weight savings that are crucial for improving EV range. Range anxiety is one of the key consumer apprehensions about EV adoption; alleviating that anxiety will be crucial in further proliferating the electric transition.

In the ever-evolving worlds of commercial and defense aerospace technology, the application of diamond-based power electronics will similarly bring about major advantages through increased efficiency, weight reduction, and heat dissipation, all leading to vehicles with longer lifespans and lower maintenance and replacement costs.

What keeps your customers up at night?

Our customers, which include leaders in the semiconductor, aerospace, automotive, and consumer electronics industries, are often concerned about staying ahead of technological advancements and managing costs. They seek solutions that improve efficiency and performance while reducing energy consumption and thermal management issues. The reliability and scalability of new technologies are critical considerations that keep them up at night, as these factors directly impact their competitive edge and market presence.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape in advanced materials, particularly diamond-based semiconductors, is evolving with several key players focusing on innovative applications of materials science. Diamond Quanta differentiates itself through our proprietary “Unified Diamond Framework,” which allows for precise doping and manipulation of diamond structures at the molecular level. This enables us to enhance mechanical, optical, and electronic properties to meet specific industry needs, which is not commonly offered by competitors. Our approach not only delivers superior performance but also ensures greater durability and efficiency in high-power and high-frequency, photonic and quantum transport applications.

What new features/technology are you working on?

Currently, Diamond Quanta is developing new techniques for integrating our diamond-based materials into existing semiconductor processes without the need for extensive retooling. We are also enhancing our quantum photonic devices, which promise to drastically improve data processing speeds and energy efficiency for applications in quantum computing and advanced AI systems. Furthermore, we are exploring the potential of our materials in supporting sustainable technologies, particularly in electric vehicles, data centers, and renewable energy systems.

How do customers normally engage with your company?

Customers typically engage with Diamond Quanta through direct inquiries via our website, at industry conferences, and through our business development team. We also see significant engagement through collaborations in research and development projects where we work closely with customer engineering teams to tailor our materials and technologies to their specific needs.

Also Read:

Executive Interview: Michael Wu, GM and President of Phison US

CEO Interview: Wendy Chen of MSquare Technology

CEO Interview: BRAM DE MUER of ICsense


TSMC OIP Ecosystem Forum Preview 2024
by Daniel Nenni on 09-19-2024 at 10:00 am


The 2024 live conferences have been well attended thus far and there are many more to come. The next big event in Silicon Valley is the TSMC Global OIP Ecosystem Forum on September 25th at the Santa Clara Convention Center. I expect a big crowd filled with both customers and partners.

This is the 16th year of OIP and it has been an honor to be a part of it. The importance of semiconductor ecosystems is greatly understated as is the importance of the TSMC OIP Ecosystem.

The big change I have seen over the last few years is momentum. The FinFET era has gained an incredible amount of ecosystem strength and the foundation of course is TSMC. When we hit 5nm the tide changed in TSMC’s favor with a huge amount of TSMC N5 EDA, IP, and ASIC services support. In fact, there were a record setting number of tape-outs on this node. This momentum has increased at 3nm with TSMC N3 (the final FinFET node) having the strongest ecosystem support and tape-outs in the history of the fabless ecosystem in my experience.

The momentum is continuing with N2 which will be the first GAA node for TSMC. Rumor has it N2 will have comparable tape-outs with N3. It is too soon to say what will happen with the angstrom era but my guess is that semiconductor innovation and Moore’s Law will continue in one form or another.

A final thought on the ecosystem: while it appears that IDM foundries have more R&D strength than pure-play foundries, I can assure you that is not the case. The TSMC OIP Ecosystem, for example, includes the largest catalog of silicon verified IP in the history of the semiconductor industry. IP companies first develop IP in partnership with TSMC to leverage the massive TSMC customer base. In comparison, the IDM foundries pay millions of dollars to port select IP to each of their processes to encourage customer demand.

Throughout the FinFET era, foundries, customers, and partners have spent hundreds of billions of R&D dollars in support of the fabless semiconductor ecosystem, which will get the semiconductor industry to the one trillion dollar mark by the end of this decade, absolutely.

Here is the event promo:

Get ready for a transformative event that will spark innovations of today and tomorrow’s semiconductor designs at the 2024 TSMC Global Open Innovation Platform (OIP) Ecosystem Forum!

This year’s forum is set to ignite excitement with a focus on how AI is transforming chip design and the latest advances in 3DIC system design. Join industry trailblazers and TSMC’s ecosystem partners for an inside look at the latest innovations and breakthroughs.

Through a series of compelling, multi-track presentations, you’ll witness firsthand how the ecosystem is collaborating to address critical design challenges and leverage AI in chip design processes.

Engage with thought leaders and innovators at this unique event, available both in-person and online across major global locations, including North America, Japan, Taiwan, China, Europe, and Israel.

Don’t miss out on this opportunity to connect with the forefront of semiconductor technology.

Get the latest on:
• Emerging challenges in advanced node design and corresponding design flows and methodologies for N3, N2, and A16 processes.

• The latest updates on TSMC’s 3DFabric chip stacking and advanced packaging technologies including InFO, CoWoS®, and TSMC-SoIC®, 3DFabric Alliance, and 3Dblox standard, along with innovative 3Dblox-based design enablement technologies and solutions, targeting HPC, AI/ML, and mobile applications.

• Comprehensive design solutions for specialty technologies, enabling ultra-low power, ultra-low voltage, analog migration, RF, mmWave, and automotive designs, targeting 5G, automotive, and IoT applications.

• Ecosystem-specific AI-assisted design flow implementations for enhanced productivity and optimization in 2D and 3D IC design.

• Successful, real-life applications of design technologies, IP solutions, and cloud-based designs from TSMC’s Open Innovation Platform® Ecosystem members and TSMC customers to speed up time-to-design and time-to-market.

REGISTER NOW

Also Read:

TSMC’s Business Update and Launch of a New Strategy

TSMC Foundry 2.0 and Intel IDM 2.0

What if China doesn’t want TSMC’s factories but wants to take them out?


Linear pluggable optics target data center energy savings
by Don Dingee on 09-19-2024 at 6:00 am

Conceptual diagram of a retimed OSFP versus a linear direct drive solution using an advanced SerDes IP solution and linear pluggable optics

Data center density continues growing, driving interconnect technology to meet new challenges. Two of the largest are signal integrity and power consumption. Optical interconnects can solve many signal integrity issues posed by copper cabling and offer support for higher frequencies and bandwidths. Still, through sheer numbers in a data center – with projected 10x interconnect growth in racks for applications like AI – optical interfaces add up quickly to pose power consumption problems. Retiming circuitry provides flexibility at the cost of added power. New linear direct-drive techniques simplify interfaces, saving energy and helping close the interconnect scalability gap. Here, we highlight Synopsys’ efforts to usher in more efficient linear pluggable optics with their 1.6T Ethernet and PCIe 7.0 IP solutions.

What’s using most of the power in a pluggable optical interface?

Pluggable modules emerged years ago as an easier way to configure (and, in theory, upgrade within controller limits) physical network interfaces. Instead of swapping motherboards or expansion cards inside a server to get different network ports, designs accommodating SFPs let IT teams choose modules and mix and match them for their needs. SFPs also helped harmonize installations with varying types of network interfaces in different platforms across the enterprise network.

The latest form factor for high-speed Ethernet is OSFP. Density increases have fostered new types of OSFPs, which gang lower-speed lanes into a faster interface. A high-level view of an OSFP pluggable optical module shows there is more than just electrical to optical conversion – analog amplifiers team with an MCU and DSP for signal processing and retiming.

Because a high-speed network interface is likely continuously transferring a data stream, the PHY is continuously retiming the incoming signal. In a single OSFP, this power use may not seem like a lot. However, in a dense data center with aggregate transport bandwidth beyond 25T switches, projections show optical pluggable modules become one of the largest power consumers in the networking subsystem. With data center energy usage a crucial consideration, more efficient pluggable optical modules become essential to attain new levels of interconnect scalability.

New SerDes technology enables direct-drive optical interfaces

The complexity in an optical module arises from the onboard (or, more accurately, on-chip) PHY’s inability to compensate for a range of optical impairments, which worsen as speeds increase. What seemed like a good idea to move retiming into the optical module now merits rethinking as power efficiency bubbles up to the top of the list of concerns. A linear direct-drive (LDD) or linear pluggable optical (LPO) interface retools the electrical circuitry, usually in a network switch ASIC inside a server or network appliance, to handle the required compensation. One result is a simpler OSFP that deals only with electrical-to-optical conversion, significantly reducing the power consumption of the retiming function in the PHY.

The tradeoff is handling direct drive functionality efficiently in next-generation, optical-ready PHY IP. Moving the logic into a network controller ASIC requires careful attention to signal integrity and dealing with reflections, crosstalk, noise, dispersion, and non-linearities. High-speed digital circuitry in a compact footprint generates significant heat, requiring sound thermal management. Shared resources in the host ASIC supporting the SerDes IP provide power management advantages over the retimed implementation.

Synopsys is carving a path toward more efficient linear pluggable optics using co-simulation techniques to develop advanced SerDes IP solutions for faster Ethernet and PCI Express. At higher data rates, simplified models of photonic behavior through electrical equivalents provide inaccurate performance estimates. With more robust electro-optical modeling, simulating IP solutions in a system context offers better results. Synopsys IP solutions first appeared in demonstrations at OFC2024 using OpenLight’s PICs.

These Synopsys IP solutions enable scale-out and scale-up SoC designs:

  • A 1.6T Ethernet IP solution with multi-rate, multi-channel 1.6T Ethernet MAC and PCS controllers, 224G Ethernet PHY IP, and verification IP for easier SoC integration.
  • A PCIe 7.0 IP solution with a PHY, controller, IDE security module, and verification IP providing secure data transfers up to 512 GB/sec bidirectional in x16 configurations.

The Synopsys PHY IPs for 224G Ethernet and PCIe 7.0/6.x have demonstrated capabilities for linear direct drive, and the 224G Ethernet PHY also works with retimed RX and TX.
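
As a quick sanity check on the 512 GB/sec figure quoted above (my own back-of-envelope arithmetic, assuming the PCIe 7.0 target rate of 128 GT/s per lane and ignoring flit/FEC encoding overhead, so this is a raw upper bound):

```python
# Back-of-envelope check of the PCIe 7.0 x16 bandwidth figure quoted above.
# Assumes 128 GT/s per lane (the PCIe 7.0 target rate, PAM4 signaling) and
# ignores encoding overhead, so the result is a raw upper bound.
gt_per_lane = 128                  # GT/s per lane
lanes = 16                         # x16 configuration

raw_gbps_per_dir = gt_per_lane * lanes       # 2048 Gb/s per direction
gbytes_per_dir = raw_gbps_per_dir / 8        # 256 GB/s per direction
bidirectional = 2 * gbytes_per_dir           # 512 GB/s bidirectional

print(f"{gbytes_per_dir:.0f} GB/s per direction, "
      f"{bidirectional:.0f} GB/s bidirectional")
```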

Learning more about LDD and LPO solutions

Once the industry sees the possibilities for LDD/LPO in SoC designs for server and networking hardware, the ecosystem for linear pluggable optics solutions should develop rapidly to recapture as much as 30% of the energy used in a high-interconnect density data center. Synopsys is discussing more details of its unified electronic and photonic design approach and the optical direct drive IP solutions at two real-world events:

European Conference on Optical Communication (ECOC2024)

Optica Photonic-Enabled Cloud Computing Industry Summit at Synopsys

An on-demand Synopsys webinar also offers more insight into the rising interconnect demands, the evolution of OSFPs, LDD technology, and electro-optical co-simulation techniques:

To retime or not to retime? Getting ready for PCIe and Ethernet over Linear Pluggable Optics


Smarter, Faster LVS using Calibre nmLVS Recon
by Daniel Payne on 09-18-2024 at 10:00 am


Back in the 1970s we did Layout Versus Schematic (LVS) checks manually, so when internal EDA tools arrived in the 1980s it was a huge time saver to use LVS to find the differences between layout and schematics. One premise before running LVS is that both layout and schematics are complete and ready for comparison. Fast forward to today, and SoC design sizes can number in the billions of transistors. If a design team waits until signoff verification to start running LVS, the first runs will report far too many errors, creating many iterations to fix them and tending to delay the project.

The clever engineers in the Calibre team at Siemens have developed an approach that allows engineers to start running LVS much earlier in the design process, even when netlists are not complete. With the Calibre nmLVS Recon Compare tool you can start running early LVS comparisons, saving valuable time and effort.

This tool automates two things: incomplete blocks are black-boxed, and ports are mapped automatically. The traditional Calibre flow and the Recon flow are compared below to highlight the four areas where Recon comes into play.

Using the Recon flow, your verification engineers can find and fix circuits that are not finalized early on. Marvell used this flow and presented at the annual User2User Conference earlier this year; you can watch that presentation online. Users have reported an average 10X improvement in run times, plus 3X less RAM usage, when using Calibre nmLVS Recon.

Large IC design teams divide up the work to conquer the project, and each block designed is typically in a different state of completion. Simply waiting for all blocks to be equally complete before running any LVS results in long project delays. You really want to start checking top-level connectivity early in the project to avoid delays and fix connectivity issues earlier. With the Recon methodology you are running early LVS at multiple points throughout the complete design phase, instead of just at the end.

Black Boxing

Incomplete blocks are automatically marked for black boxing, so that their internal details are not traced or compared, only the inputs and outputs to the block. This approach finds interconnect issues between the blocks more quickly.

Port Mapping

Ports on each block are automatically mapped between layout and schematic so that the comparison is consistent. The mapping captures how each block connects to all other blocks, even when the details inside a block are incomplete.

Comparison Engine

Both black boxing and port mapping are done first; then Calibre nmLVS Recon Compare evaluates the layout and schematic information. If there are any missing connections or mismatched components, they get reported.
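
Conceptually, and strictly as an illustration in plain Python rather than Calibre syntax or its actual interface, the compare step reduces each incomplete block to its port list and checks only the top-level connectivity; the netlist format and helper names below are hypothetical:

```python
# Conceptual illustration of a black-box compare: incomplete blocks are
# reduced to their port lists, so only top-level connectivity is checked,
# never the block internals. Not Calibre's data model, just the idea.

def blackbox(block):
    """Reduce a block to its name and port set, discarding internals."""
    return {"name": block["name"], "ports": set(block["ports"])}

def compare_toplevel(layout_blocks, schematic_blocks):
    """Report port mismatches between layout and schematic black boxes."""
    errors = []
    for name, sch in schematic_blocks.items():
        lay = layout_blocks.get(name)
        if lay is None:
            errors.append(f"block {name} missing from layout")
            continue
        missing = blackbox(sch)["ports"] - blackbox(lay)["ports"]
        extra = blackbox(lay)["ports"] - blackbox(sch)["ports"]
        if missing:
            errors.append(f"{name}: ports missing in layout: {sorted(missing)}")
        if extra:
            errors.append(f"{name}: extra ports in layout: {sorted(extra)}")
    return errors

# Example: the ADC block is still incomplete internally, but its ports can
# already be checked against the schematic.
schematic = {"adc0": {"name": "adc0", "ports": ["vin", "clk", "dout", "vss"]}}
layout    = {"adc0": {"name": "adc0", "ports": ["vin", "clk", "dout"]}}
print(compare_toplevel(layout, schematic))   # reports the missing 'vss' port
```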

The Recon Compare flow is detailed below; the tool reads the input data and completes the comparison quickly.

An intuitive UI lets you specify comparison parameters and then see the results, so that you can find and fix any LVS errors.

Your IC design team benefits from using Recon Compare by identifying LVS issues much earlier in the design process, so you reach full-chip LVS clean much more quickly than if you wait for all blocks to be completed. Start running Recon Compare as soon as you have the first top level of interconnect defined. Each design team working on their blocks will know sooner if there are any port connectivity issues. Reaching your goal of LVS clean happens sooner with this shift-left methodology, giving you higher confidence of first-silicon success.

It’s a best practice to run Recon Compare with your version control system, so that you track all revisions to each block.

Summary

LVS tools have been around since the 1980s, yet today’s large SoCs require updated methodologies to reduce turnaround time (TAT) and ensure that schedules are met. Calibre nmLVS Recon Compare is a new approach that uses black boxing and port mapping to make early LVS runs possible on designs with incomplete blocks.

This shift-left approach has verification engineers running LVS much earlier in the project to find and fix connectivity errors more quickly than before. Debugging the LVS errors is intuitive, saving you time. Read the complete White paper online.

Related Blogs


Bird’s Eye View Magic: Cadence Tensilica Product Group Pulls Back the Curtain
by Bernard Murphy on 09-18-2024 at 6:00 am


Even for experienced technologists some technologies can seem almost indistinguishable from magic. One example is the bird’s eye camera view available on your car’s infotainment screen. This view appears to be taken from a camera hovering tens of feet above your car. As an aid to parallel parking, it’s a brilliant invention; you can see how close you are to the car in front, the car behind, and to the curb. Radar helps up to a point with the first two, not so much with close positioning or the curb. A bird’s eye view (BEV) makes all this easy and, better yet, intuitive. No need for you to integrate unfamiliar sensory inputs (radar warnings) with what you can see (incompletely) from your car seat. A BEV provides an immediately understandable and precise view everywhere around the car – no blind spots.

Image courtesy of Jeff Miles

The basic idea has its origins in early advances in art: an understanding of perspective developed in the 15th century, from which projective geometry emerged in the 17th century. Both are concerned with accurately rendering 3D images on a plane from a fixed perspective. In BEV the input to this process starts with wide-angle images from around the car, stitched together for a 360° view, and projectively transformed onto the focal plane of an imaginary camera sitting 25 feet above the car. This is the heart of the BEV trick. I offer a highly condensed summary below.

First capture surround view images

Most modern cars have at least one camera in front and one in the rear, plus cameras in the (external) side mirrors. These commonly use fisheye lenses to get a wide-angle view. Each image is highly distorted and must be processed through a non-linear transformation, a process known as de-warping, to recover an undistorted wide-angle image.

The full BEV flow is pictured below starting with de-warping (un-distortion) and projection (homography). Cameras are organized so that images have some overlap. Here let’s assume that the cameras are labeled north, south, east and west, so north has some overlap with west, a different overlap with east and so on.

These overlaps allow for calibration of the system, since a key point that appears in, say, the north and west images should map to a common point in the top-view plane. Calibration is accomplished by imaging with the car parked on a simple pattern like a grid. Based on this grid, common key points between de-warped images can easily be matched, allowing projection matrices to be computed between the top plane and each of the (de-warped) camera planes.
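
To make the de-warp and projection steps concrete, here is a minimal sketch using OpenCV as a stand-in; the intrinsics, distortion coefficients, and matched grid points are placeholder values, and this is not how the Tensilica DSPs are programmed, just the underlying math:

```python
# Minimal sketch of the de-warp + homography step using OpenCV as a stand-in.
# Camera intrinsics, distortion coefficients, and the matched grid points are
# placeholders; a real system would obtain them from factory calibration.
import numpy as np
import cv2

# Fisheye intrinsics and distortion for one camera (placeholder values).
K = np.array([[400.0, 0, 640.0], [0, 400.0, 360.0], [0, 0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])

def dewarp(fisheye_img):
    """Undistort a fisheye image to a wide-angle pinhole view."""
    h, w = fisheye_img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)

# Grid key points seen in the de-warped camera image and their known
# positions in the top-view (bird's eye) plane, in pixels.
cam_pts = np.float32([[350, 600], [930, 610], [300, 700], [980, 705]])
top_pts = np.float32([[200, 200], [440, 200], [200, 420], [440, 420]])

H, _ = cv2.findHomography(cam_pts, top_pts)      # projection matrix

def to_topview(dewarped_img, out_size=(640, 640)):
    """Project a de-warped camera image onto the top-view plane."""
    return cv2.warpPerspective(dewarped_img, H, out_size)
```

The same homography is computed once per camera during calibration and then simply reapplied per frame, which is why the heavy lifting at runtime is de-warping, warping, and blending rather than any per-frame geometry solving.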

So far this develops a reliable top-view image through calibration at the factory, but the system should self-check periodically and fine-tune where needed (or report the need for service). We’ll get to that next. First, since images overlap and may have different lighting conditions, those overlaps must be “blended” to provide seamless transitions between images. This is a common function in computer vision (CV) libraries.

In-flight self-checking is a common capability in ASIL-D designs and here extends beyond low-level logic checks to checking continued consistency with the original calibration. Very briefly, this works by identifying common image features seen in overlaps between cameras. If the calibration is not completely accurate, artifacts will appear as misaligned edges, blurring, or ghost images. The self-checking flow will (optionally) find and correct such cases. Amol Borkar (Automotive Segment Director, DSP Product Management & Marketing at Cadence) tells me that such checks are run periodically as you would expect, but the frequency at which they are run varies between applications.

All these transformations, from de-warping through to blending, are ideally suited to CV platforms. The Cadence Tensilica Product Group has released a white paper on how they implement BEV in their Tensilica Vision product family (namely the Vision 240 and Vision 341 DSPs).

Also interesting is that AI is expected to play an increasing role in building the 3D view around the car, not only in analyzing the view once built. The BEV concept could also extend to car guidance perhaps with AR feedback to the driver. Exciting stuff!

You can read the Cadence white paper HERE.

Also Read:

Intel and Cadence Collaborate to Advance the All-Important UCIe Standard

Bug Hunting in NoCs. Innovation in Verification

Overcoming Verification Challenges of SPI NAND Flash Octal DDR


Serving their AI Masters
by admin on 09-17-2024 at 10:00 am


The Impact of the AI Revolution on the Server Manufacturers

While some will designate my research as market research, I view it differently. Having done and bought plenty of market sizing research, I have not seen it lead to any change in behaviour or strategy. It has been used to confirm a strategy already decided and the “great” performance of divisions and vice presidents.

If it pointed towards lower market share or lower divisional performance, you would come under attack, and more appropriate (confirming) research would be selected so people could get back to executing the strategy decided.

There’s a prevailing sentiment: ‘Don’t disrupt our strategy with facts’

Also, most market research is an Excel exercise done by an entry-level employee in isolation from other research. It is already outdated when sold, and calling it data would be a stretch.

For most companies, a strategy is a fixed plan that spans decades and gets adjusted now and then. This approach is straightforward for people to understand and adapt to but ignores that strategy is a response, and in a business setting, strategy is a response to a change in the marketplace.

Strategy is a response to a change in the marketplace

Even if the strategy is market-defining as Nvidia’s entry into AI, it is still a response to the marketplaces from which the new market is rising.

I do strategic market research, not market-sizing research. The size of the market rarely matters. Would you do anything different if you knew you had 10.8% market share or 14.3%? Market size does not change your strategy, but market change should, especially if the change in the market right now is as disruptive as the AI change.

Market change should drive strategy, not market size

A good strategy starts with the question: What is going on? What is changing? It is then built around a response that is within the company’s capabilities and protects it against change or exploits new opportunities arising from the change.

In other words, strategy is all about timing. But you already knew that. You have done things too early and too late before. Like most other strategies, a sound investment strategy relies on the proper timing of trades. The same goes for business strategies.

“A fairytale remains a fairytale only when you close the book at the happiest moment; Timing is everything.”

Intel’s strategy was incredibly successful until it was not. Intel’s response to EUV and the AI revolution in the data centre came too late, as Intel has not been used to responding to changes in the market. The company is now in a situation where its fate is in other people’s hands.

The current AI revolution is not creating soft waves in the fabric of the semiconductor supply chain; it is a tsunami that changes everything. This will decide winners and losers in all areas of the supply chain for years to come. Ride the tsunami with us and gain insights for your strategic response.

The Screwdriver Circus

Years ago, a good friend in deep Russia commented on an electronics subcontractor: “They are just a screwdriver circus!”

While I don’t think negatively about electronic subcontractors, they are certainly a different game than semiconductors. Electronic Manufacturing Services (EMS) must live on small margins while managing significant purchasing risk. EMS is very exposed to market changes, and its response is usually immediate.

The AI revolution, in general, and Nvidia, in particular, has significantly changed all aspects of the Semiconductor supply chain and also impacted the manufacturing side of it.

Nvidia is no longer only selling chips; the company sells a mixture of chips, GPU subsystems, and complete server racks. They are no longer just buying silicon but also memory, components, power, chassis, and assembly.

In the good old days (a year ago), TSMC sold silicon to Nvidia, Nvidia sold Chips to a server manufacturer that also bought memory, power, network, chassis and other stuff to make a server.

This has all changed

The new supply chain is significantly more complex and lacks traditional chips. These have been replaced by GPU Subsystems, including high-bandwidth memory and networking.

Nvidia sells its own servers manufactured at the EMS/ODMs. At the same time, it sells its GPU subsystems to Server manufacturers, which make them into branded servers.

Lastly, they sell their GPU subsystems to end customers who use EMS/ODMs to create server systems that fit their needs.

The largest server customers are all designing their own accelerator chips for the different workloads at which the Nvidia servers excel. These are developed with companies like Broadcom, Marvell, and Qualcomm using silicon from the logic foundries. Like the GPU/CPU subsystems, these accelerator subsystems include memory and networking components.

These subsystems are installed into custom server systems controlled by the principal owner of the architecture. The best-known principals are:

  1. Amazon (Inferentia, Trainium)
  2. Google (TPU)
  3. Microsoft (Maia)
  4. Meta (MTIA)

The Cloud of Increasing Complexity

The increasingly complex supply chain makes it difficult to understand what is happening in the AI Supply chain. Most stakeholders can brush this off and state: “Fortunately we are not in the AI game”.

The problem with that attitude is that the AI supply chain is disrupting all other semiconductor and electronic manufacturing chains at the moment:

The most important foundry is transforming into a leading-node-only supplier.

The memory companies are moving capacity to HBM, making it difficult to maintain capacity for standard memory.

The manufacturing sector is prioritising AI servers with their higher margins.

The AI embargoes are making China the go-to place for mature tech.

We believe the AI disruption of the supply chain will affect everybody, and they will need to pay attention to what is changing and what strategy they need to adopt.

As always, I enjoy this complexity and take the opportunity to take a deeper dive into areas of the supply chain to uncover insights that can be used for strategy formation. The focus in this post will be on the server manufacturers.

The Server manufacturers

The top 5 server manufacturers are in the middle of the AI storm and are no longer surprised by the rapid need for higher-power, liquid-cooled AI servers. You would expect to see it in their overall revenue, but you will be disappointed.

There is no visible change in revenue or in gross and operating profits, other than confirming that this is a low-margin business.

So we could stop the analysis here and conclude that nothing dramatic is going on. But “nothing” is not what I would expect so I continued the analysis.

While the overall revenue (including products other than servers) reveals no impact from the AI revolution, both the increasing cost of goods sold (COGS) and the increasing inventory position indicate that something is going on.

This is likely because the BOM for AI systems is higher, impacting the working capital position and tying up more inventory for the AI servers.

The revenue by company reveals that there is some movement between the top 5 Server companies.

Super Micro Computer and Inspur EII, both pure-play server companies, are outgrowing the three larger competitors that have other revenue streams.

The Server based view

Isolating the server-based revenue shows that the server business is indeed growing significantly, and Super Micro in particular is in rapid growth mode.

Super Micro has moved from number 5 to number 2 in a little over a year and is very close to Dell in terms of server revenue.

Revisiting COGS and inventory for the server business alone shows the dramatic increase in inventory and COGS. If the increase in revenue were from standard servers, COGS and inventory would follow the revenue growth. So the acceleration we are seeing is from AI servers, and the inventory points to future rapid growth of AI.

In order to understand the AI element of the server business we exclude Inspur from the following analysis as there is no credible data on their AI business.

The top 4 Server companies

Without Inspur, the top 4 server companies have a total server revenue of 17.5B$ in Q2-24. As Dell and HPE have already reported Q3 numbers that showed 29% growth in total, it would be surprising if the top 4 server revenue is not a significant new record. If Lenovo and Super Micro have similar growth numbers, the top 4 revenue will grow to 22.5B$.

The revenue growth is totally dominated by the AI server revenue as can be seen below. Dell and HPE are showing close to 70% quarterly growth from Q2 to Q3. We are seeing a significant inflexion point in the business of the server companies.

The non-AI part of the server revenue is contracting, even though Dell reported a Q3 increase in non-AI server-based revenue.

The AI server share of server revenue reached 43%, up from 4% at the beginning of 2023.

The last two charts are important for Nvidia’s growth thesis, which is described in the post below:

Nvidia Pulled out of the Black Well

There have been serious concerns about the ROI on AI and yield problems with Blackwell, but Nvidia pulled it off again and delivered a result significantly above guidance.

Jensen Huang believes that the AI server revenue will not only come on top of the existing CPU-based data center infrastructure but also replace a significant part of it. Our analysis shows that Jensen’s growth thesis could be valid, as the non-AI server revenue is indeed declining not only in share but also in absolute dollar terms.

It is still too early to conclude, but this is something I will follow from now on.

Conclusion

Hidden by flat overall revenues, the AI revolution is now also showing a massive impact on the numbers of the server companies. AI now represents 43% of the overall server revenue, and the traditional server revenue is in decline.

The server revenue is driven by the CPU/GPU architectures from Nvidia, Intel, and AMD, while the accelerator revenue will materialise through the EMS/ODMs.

With the increasing AI share, the server companies need to handle more power and direct liquid cooling (DLC), and the increasing costs are pushing up the COGS and the inventory position. This has limited the profit growth of the AI business up until now. Once the companies are fully adapted to AI and DLC, the profits will begin to increase.

While China seems to be able to get both high-end and low-end AI, the channel(s) are not yet revealed, although we have some pretty good candidates. The flow of AI to China has also impacted the internal development of GPUs in China. The Chinese GPU leader Xiangdixian Computing Technology is in trouble and has had to scale down operations significantly.


Also Read:

The Semiconductor Business will find a way!

Nvidia Pulled out of the Black Well

The State of The Foundry Market Insights from the Q2-24 Results


Siemens EDA Offers a Comprehensive Guide to PCIe® Transport Security
by Mike Gianfagna on 09-17-2024 at 6:00 am


It is well known that more data is being generated all the time. The need to store and process that data with less power and higher throughput dominates design considerations for virtually all systems. There is another dimension to the problem – ensuring the data is secure as all this movement and processing occurs. Within computing systems, the Peripheral Component Interconnect Express (PCIe) standard is the de facto method to move data. This standard has gained tremendous momentum. If you’d like to peruse the various versions of the standard, I recommend you visit the PCI-SIG website. The considerations for how to secure PCIe channels and how to verify the robustness of those channels are the subject of this post. The options to consider are many, as are the technical requirements to design and validate a robust architecture. The good news is that a market leader has published a white paper to help guide you. Let’s see how Siemens EDA offers a comprehensive guide to PCIe transport security.

Framing the Problem

The concept of a secure PCIe link is easy to imagine. Making it work reliably with real-world constraints is not as easy, however. It turns out there are many tradeoffs to face and many decisions to make. And once you’ve done that, verifying that the whole thing will work reliably is yet another challenge. As I read the white paper from Siemens EDA, I got an appreciation for the complexity of this task. If you plan to use PCIe channels in your next design, you’ll want to get a copy. A link is coming, but first let’s look at some of the items covered.

Suprio Biswas

The white paper is written by Suprio Biswas, an IP Verification Engineer at Siemens EDA. He has been working in the field of digital design and communication at Siemens EDA for over four years and has presented his work at a recent PCI-SIG conference. Suprio has a knack for explaining complex processes in an approachable way. I believe his efforts on this new white paper will help many design teams.

Before we get into some details, I need to define two key terms that will pop up repeatedly in our discussion:

  • Security protocol and data model (SPDM) specification – defines a message-based protocol to offer various security processes for authentication and setting up a secure session for the flow of encrypted packets.
  • Component measurement and authentication (CMA) – defines a mapping of the SPDM specification for PCIe implementation.

With that out of the way, let’s look at some topics covered in the white paper.

Some Details

The white paper begins with an overview of the topics to consider and the decisions that need to be made. Authentication, access control, data confidentiality/integrity and nonrepudiation are discussed. This last item prevents either the sender or the receiver from denying the transmission of a message. There is a lot of coordination to consider among these topics.

The aspects of implementation are then covered. This discussion centers on the various approaches to encryption and decryption and how keys are handled. The design considerations to be made are interrelated. For example, there can be a single key (secret key) or a pair of keys (public key and private key), depending on the chosen cryptographic algorithm.
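
To make the single-key versus key-pair distinction concrete, here is a small, generic illustration using Python’s cryptography package; it is my own example of symmetric versus asymmetric encryption, not code from the white paper or the SPDM/CMA specifications:

```python
# Illustration of the two key arrangements mentioned above, using the Python
# 'cryptography' package. This is generic cryptography, not SPDM/CMA itself.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"encrypted PCIe payload (illustrative)"

# 1) Single secret key: symmetric AES-GCM, the same key encrypts and decrypts.
secret_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(secret_key).encrypt(nonce, message, None)
assert AESGCM(secret_key).decrypt(nonce, ciphertext, None) == message

# 2) Key pair: asymmetric RSA, the public key encrypts, only the private
#    key can decrypt (typically used to establish or wrap session keys).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(wrapped, oaep) == message
```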

Getting back to the terms defined above, there is a very useful discussion about implementing security through the CMA/SPDM flow. There are many considerations to weigh here and trade-offs to be made. It is best to read the white paper and get the direct guidance of Suprio. To whet your appetite, below is a high-level CMA/SPDM flow for establishing a secure connection.

CMA/SPDM flow for establishing a secure connection

Suprio then covers the Siemens Verification IP (VIP) for PCIe. This VIP verifies that designs successfully establish a secure connection through CMA/SPDM before starting the flow of encrypted packets. The IP is compliant with the CMA Revision 1.1 specification and the SPDM version 1.3.0 specification.

Many more details are provided in the white paper.

To Learn More

If you’d like to learn more about PCIe Gen6 verification, you can find that here. And finally, download your own copy of this valuable white paper here. You will find it to be a valuable asset for your next design. And that’s how Siemens EDA offers a comprehensive guide to PCIe transport security.


Semiconductor Industry Update: Fair Winds and Following Seas!
by Daniel Nenni on 09-16-2024 at 10:00 am


Malcolm Penn did a great job on his semiconductor update call. This is about the whole semiconductor industry (logic and memory) versus what I track, which is mostly logic, based on design starts and the foundries. Malcolm has been doing this a lot longer than I have and he has a proven methodology, but even then, semiconductors are more of a rollercoaster than a carousel, so predictability is a serious challenge.

Malcolm feels that we are at the bottom of the downturn that followed the pandemic boom. He calls it the Golden Cross breach, which should lead to a good period of growth. The Golden Cross breach occurs when the green 3/12 curve crosses above the blue 12/12 curve. Again, this is memory and logic. Inventory is a much bigger factor with memory, and that is a big part of hitting bottom: depleting the excess inventory from the pandemic shortage scare.
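
For readers unfamiliar with those curves, here is a rough sketch of the crossover idea, assuming the commonly used definitions (a trailing three-month or twelve-month window of sales compared with the same window a year earlier); it is an illustration, not Malcolm’s exact methodology:

```python
# Sketch of the 3/12 vs 12/12 growth curves behind the "Golden Cross".
# Assumes the common definitions: each ratio compares a trailing window of
# monthly sales with the same window one year earlier.

def trailing_ratio(sales, i, window):
    """Trailing `window`-month sum ending at month i vs a year earlier."""
    recent = sum(sales[i - window + 1:i + 1])
    prior = sum(sales[i - 12 - window + 1:i - 12 + 1])
    return recent / prior

def golden_cross(sales):
    """Yield months where the 3/12 curve crosses above the 12/12 curve."""
    for i in range(24, len(sales)):
        r3_prev = trailing_ratio(sales, i - 1, 3)
        r12_prev = trailing_ratio(sales, i - 1, 12)
        r3, r12 = trailing_ratio(sales, i, 3), trailing_ratio(sales, i, 12)
        if r3_prev <= r12_prev and r3 > r12:
            yield i, r3, r12

# Example with made-up monthly sales: a downturn followed by a recovery.
sales = [50] * 12 \
      + [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 26] \
      + [26, 26, 28, 30, 33, 36, 40, 44, 48, 52, 56, 60]
for month, r3, r12 in golden_cross(sales):
    print(f"Golden Cross at month {month}: 3/12={r3:.2f}, 12/12={r12:.2f}")
```

In the made-up data the short 3/12 curve reacts to the recovery well before the slower 12/12 curve, which is exactly why the crossover is read as an early upturn signal.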

Remember, at the end of 2023 forecasters suggested double-digit growth for 2024. TSMC also predicted double-digit industry growth (10%) and TSMC revenue growth of more than double the industry rate. Today TSMC is at a 30% revenue increase, and I see that continuing for the rest of the year, with 2025 being even better when more customers hit N3 HVM, absolutely.

Unfortunately, after the new year the semiconductor industry crashed, going from a +8.4% growth rate in Q4 2023 to a -5.7% growth rate in Q1 2024, sending the forecasters back to the magic 8-ball for revised predictions. Q2 2024 came back with a vengeance with a +6.5% growth rate, giving forecasters whiplash. We have been very forecast positive since then, with double-digit revenue growth for 2024.

Malcolm’s forecasting formula looks at four things: the economy, unit shipments, ASPs, and fab capacity.

As Malcolm explained, the economy determines what we can buy. This means consumers and suppliers (CAPEX). Unit shipments are critical in my opinion because that is what we actually buy, but that number depends on inventory in the financial forecasting sense. According to Malcolm, we still have excess inventory right now which is still liquidating. Unit shipments are a big indicator for me, much bigger than ASPs, which are the prices we sell chips for (supply versus demand). Given the AI boom and the excessive GPU prices (Nvidia), this number is artificially inflated in my opinion. Fab capacity is also a big one for me. The semiconductor industry generally runs with fab utilization averaging 80%-90%. During the pandemic, orders were cancelled then restarted again, so some fabs rebounded with 100%+ utilization then fell back to 60%-70%. Today I have read that average capacity utilization is edging back up to 80%-90%, which I believe will be the case for the rest of 2024 and 2025.

My big concern, which I have mentioned in the past, is overcapacity. If you look at the press releases in 2022 and 2023, the fab build plans were out of control. It really was an arms race type of deal. I blame Intel for that since the IDM 2.0 plan included huge growth and fab builds, and the rest of the foundries followed suit. We also have re-shoring going on around the world, which is more of a political issue in my opinion. Reality has now hit, so the fab builds will scale down, but China is still overspending (more than 50% of the total worldwide CAPEX) on semiconductor equipment. Malcolm covered that in more detail in his update.

Moving forward, Malcolm updated his forecast for 2024 to 15% growth for the semiconductor industry and 8% growth in 2025. We will hear from other forecasters in Q3, but I would guess that they will follow Malcolm’s double-digit number this year and back down to the normal semiconductor industry single-digit growth for 2025, absolutely.

Malcolm’s presentation had 50+ slides with a Q&A at the end. For more information give him a ring:

Future Horizons Ltd
Blakes Green Cottage
Sevenoaks, Kent
TN15 0LQ, England
T: +44 (0)1732 740440
E: mail@futurehorizons.com
W: https://www.futurehorizons.com/

Also Read:

Robust Semiconductor Market in 2024

Semiconductor CapEx Down in 2024, Up Strongly in 2025

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth


Samsung Adds to Bad Semiconductor News
by Robert Maire on 09-16-2024 at 6:00 am

  • Samsung follows Intel in staff reductions due to weakness in chips
  • Chip industry split between haves & have nots (AI & rest of chips)
  • Capital spend under pressure – Facing Eventual China issues
  • Stick with monopolies, avoid commodities

Samsung announces layoffs amid weak chip business and outlook
Samsung announced staff reductions across the company, with some areas seeing a potential reduction of up to 30% of staff. In addition, the Taylor, Texas fab appears to be in trouble, with likely further delays on the horizon.

Samsung Cuts staff and Texas Fab

Samsung changes Chip leader & worker issues

Samsung CHIPS Act funding in jeopardy just like Intel

As with Intel, CHIPS Act grants and loans are milestone based, and if Samsung doesn’t hit the milestones it may not get the money.

We remain concerned about the progress of CHIPS Act projects and Intel and Samsung are already at risk.

Given that the memory market is not in great shape we are also very concerned about Micron’s future progress in CHIPS Act fabs. We have stated from the beginning that the planned fabs in Clay NY would likely take a while given the volatile conditions in the memory market.

TSMC appears to be on track, more or less, but is still having issues getting qualified operators in the US.

GlobalFoundries will likely spend CHIPS Act money on its existing fab but certainly doesn’t need a second fab in New York when there isn’t enough demand for the first and China-based competition is breathing down its neck.

DRAM pricing dropping like a stone in market share fight

DRAM pricing has been dropping over the past few months as it appears to be a typical market share fight that we have seen in the past……

In past cycles, Samsung has used its cost of manufacture advantage to try and drive the market away from weaker competitors by cutting pricing.

This time around it’s a bit different, as Samsung does not appear to have the price advantage it has previously enjoyed, so cutting pricing doesn’t gain market share; it just becomes a race to the bottom which benefits no one.

Unseasonal weakness even more concerning

We are at a point in the annual seasonality where memory pricing should be at its strongest, as we have new iPhones coming out and products being built in anticipation of the holiday selling season…..but not so……

Memory pricing is going down when it should usually be going up….not good.

We hear that there is a lot of product/excess inventory in the channel……

HBM not to the rescue

As we have said a number of times in the past, HBM and AI are nothing short of fantastic, but HBM memory is a single-digit percentage of the overall memory market.

When we had just SK Hynix supplying HBM, prices were obviously high due to a monopoly. Now that Samsung and Micron are adding to the mix, not so much a monopoly anymore……

HBM is a commodity just like every other type of memory…..don’t forget that fact and act accordingly

Memory makers becoming unhinged

Everyone for the past couple of years had been complimenting the memory makers for their “rational” behavior….well not so anymore. Perhaps the world of politics is infecting the memory industry with irrational, unhinged behavior. It feels as if memory makers are back to their old ways of irrational spend, pricing and market share expectations.

As we have seen in prior times this type of behavior suggests they are just shooting themselves in their own foot and creating their own oversupply/declining price driven downcycle.

We think memory maker stocks should likely reflect this irrational behavior much as their stock prices were previously rewarded for prior rational behavior…it means the recent stock price declines are well justified and will likely continue.

The Stocks
Commodities & Monopolies

As always, we would avoid commodity chip producers (AKA memory) unless there is an extended shortage (which we are obviously over) for demand or technology based reasons.

We prefer monopoly-like companies in both chips as well as chip equipment.

In chips, the best monopoly is clearly Nvidia as no one else seems to come close in AI devices (at least not yet).

In equipment companies, we continue to prefer the monopoly of ASML despite the China issues and regulatory problems.

In foundries, TSMC has a virtual monopoly as Samsung’s foundry business appears to have fallen even further behind TSMC in technology and yield. There is no other foundry within striking distance of TSMC, the rest are behind Samsung or not in the same universe.

We have been repeating for quite some time now that the chip industry is a one trick pony (AI) and the rest of the industry, which is the majority, is not in great shape and memory looks to be in decline.

Stock prices seem to finally have figured out what we have been saying.

It’s equally hard to come up with a recovery scenario for semiconductor equipment stocks given the likely negative bias of Intel & Samsung (and others soon to follow).

If CHIPS Act related projects start to unravel, due to industry downturns, in Ohio, Texas, New York or similar supplemented projects in Germany, Israel, Korea etc; capital spending will also unravel.

If we can’t take advantage of essentially “free money” in a capital intensive industry, something’s wrong…..

Then, on top of everything else we have the 800 pound gorilla that is China, both in Chip production as well as equipment purchases.

Rising China production is an existential threat to second tier foundries and the 40% of all equipment that continues to flow to China is keeping the equipment industry in the black.

Sooner or later, all the equipment that China has purchased will come on line. Sooner or later China will slow its non China based equipment purchases.

Things are shaky and getting shakier in the overall chip industry. Hardly a confidence inspiring situation as the news flow seems to be more negative when it should be getting more positive on a seasonal basis.

We still love AI and all related things and continue to own Nvidia, but the headwinds in the rest of the semiconductor industry may be building………

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

AMAT Underwhelms- China & GM & ICAP Headwinds- AI is only Driver- Slow Recovery

LRCX Good but not good enough results, AMAT Epic failure and Slow Steady Recovery

The China Syndrome- The Meltdown Starts- Trump Trounces Taiwan- Chips Clipped