
Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges

by Robert Maire on 01-31-2023 at 2:00 pm


-Lam Research chops guidance, outlook & headcount sharply
-Further declines as 2023 will be H1 weighted- No end in sight
-System sales cut by more than half as even service is cut
-Memory is the culprit as expected-Forcing business “reset”

A sad sounding conference call….

Lam reported a good December quarter, as expected by us and others, coming in at $5.28B in revenue and non-GAAP EPS of $10.71 versus expectations of $5.08B and $9.96.

The real problem is guidance going into 2023: $3.8B ± $300M in revenue and EPS of $6.50 ± $0.75, versus street expectations of $4.38B and $7.88, which had already been sharply lowered.

The even bigger problem is that “real” results are much, much worse after you back out deferred revenue from incomplete units in the field waiting on parts.
In the December quarter Lam benefited to the tune of $700M, so “real” revenue would have been $4.58B. Worse yet, with guidance of $3.8B for March, backing out deferred revenue suggests it drops below $3B. The deferred revenue balance was down from September’s $2.75B to $2B.
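The back-out above is simple arithmetic and can be sketched quickly. The December figures come from the article; the March drawdown is our own illustrative assumption, not a reported number:

```python
def real_revenue(reported_bn, deferred_drawdown_bn):
    """Back out the deferred-revenue drawdown to estimate 'real' revenue, in $B."""
    return reported_bn - deferred_drawdown_bn

# December quarter: $5.28B reported, with a ~$0.7B benefit from deferred revenue
dec_real = real_revenue(5.28, 0.7)  # ~$4.58B

# March guidance midpoint is $3.8B; a similar-sized drawdown (hypothetical figure
# here) would push "real" revenue below $3B
mar_real = real_revenue(3.8, 0.9)
```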

Also remember that deferred revenue comes in at higher margins, so it’s way worse than it looks at first blush. The call had a very downbeat tone overall, with management using words like “reset”, “decline meaningfully”, “well below” and “unprecedented”.

Perhaps most telling was CEO Tim Archer saying in the Q&A that there was “no timeframe on recovery”. So it sounds like there is no end in sight, no hope of a second half recovery.

The company also said that revenue would be first half weighted, which suggests a weaker, not better, second half due largely to taking down the deferred revenue.

The company will be taking about $250M in charges.

Headcount cuts signal bad/long downturn

We haven’t seen layoffs in the semiconductor equipment business for quite some time. Lam announced headcount cuts of 1,300 full-time employees plus 700 part-time/contract workers on top of earlier cuts, so well over 2,000 cuts, or a bit over 10% of its workforce.

Even service/support dropped- previously sacrosanct

Lam had previously spoken about service/support revenue as being bulletproof and not vulnerable to variations. That turns out not to be true, as service/support was down from September’s $1.9B to December’s $1.7B as tools were idled and did not need service.

Worse still, if we back out the declining service revenue, could “system” sales fall below $2B and approach a low of $1B in Q1? This is really off a cliff and explains the actions taken.

Memory, especially NAND, is hardest hit

It’s no surprise that memory is hardest hit, as we have heard for months that the memory industry was in sharp decline. Utilization is way down, tools are idled and new projects are being pushed way out. It sure sounds like we are not going to see a memory recovery any time soon, certainly not this year.

Tim Archer said that “memory is at levels we haven’t seen in 25 years”. If we turn the clock back 25 years, memory spending was a very small fraction of where it has been over the last year - probably single digit percentages.
Memory is obviously off the proverbial cliff without skid marks……

March quarter not likely the bottom- Bottom may be H2

It sounds as if we are in a situation where Lam will see declines over the course of the year, especially if their view that 2023 is “first half weighted” is accurate. This suggests a bottom in H2 (or beyond?), certainly not the H2 recovery that bullish investors are expecting.

Welcome to reality

We have been clear in our view of the negative impact we expected from Lam, and we now have the proof in black and white. We suggested that Lam was a short while every other analyst on the street had at least a neutral, and most had buys, despite the very clear signals.
In our most recent note:

Where there’s smoke there’s fire

We pointed out that pre-announcements from both UCTT & ICHR clearly telegraphed a horrible outlook from Lam. How could everyone miss this?

The stocks

Lam was down sharply, about 4% in the aftermarket, as the call went on. As all those bullish analysts cut their numbers and do a “reset”, we would also assume a few ratings changes after the cows have left the barn.

There is obviously no reason to own the stock if we haven’t yet hit bottom nor have any idea where the bottom is. We can just wait on the sidelines and watch it get cheaper.

There may be some temptation to buy on a relief rally, on the notion that it could have been even worse, but obviously that’s not a very good reason to own a stock.
Things are clearly much worse than most (not all) expected.

There is likely some collateral damage, as sub-suppliers to Lam will see the effects of Lam’s inventory reductions talked about on the call as Lam appropriately cuts back on parts. Obviously supply chain constraints are less of an issue in a sharp downturn.

We would expect AMAT to sing a similar tune but with slightly less impact, as Lam remains the memory poster child. KLAC is obviously negatively impacted in China but has historically been the foundry/logic poster child and is less impacted by memory.

As we stated in our note on ASML this morning, ASML is almost completely unaffected and almost invulnerable as they remain head and shoulders above the dep and etch business which is reverting back into a very competitive “turns” business.

This reminds me of an old book, A Tale of Two Cities: “It was the best of times (for ASML), it was the worst of times (for LRCX).”

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken


Weebit ReRAM: NVM that’s better for the planet

by Eran Briman on 01-31-2023 at 10:00 am


Together with our R&D partner CEA-Leti, we recently completed an environmental initiative in which we analyzed the environmental impact of Weebit’s Resistive Random-Access Memory (ReRAM / RRAM) technology compared to Magnetoresistive Random Access Memory (MRAM) – another emerging non-volatile memory (NVM) technology. The results were extremely positive for Weebit’s Oxide-based ReRAM (OxRAM), which was jointly developed with Leti, showing the environmental impact of ReRAM is much lower than that of MRAM.

A bit of background

The overall contribution of the semiconductor industry to global greenhouse gas (GHG) emissions is increasing as demand for semiconductors continues to grow. To mitigate negative impacts, environmental programs are extremely important for all players in the semiconductor ecosystem. In addition to CO2 emissions, semiconductor manufacturing can use a significant amount of energy, water, rare natural resources, and chemicals, which can contribute to global warming. The choices semiconductor companies make in design and specification phases, including their memory technology choices, are key to reducing a company’s overall carbon footprint.

MRAM is effectively the only other kind of emerging NVM that is commercially available today at foundries. It stores data as resistance using magnetic fields (versus ReRAM which stores it as resistance of a solid dielectric material, and flash which stores data as electric charges). MRAM has high endurance and is more often used as a replacement for embedded SRAM than for embedded flash. Still, there are companies using MRAM today as a replacement for embedded flash that do so because until now there hasn’t been a production-ready alternative at smaller geometries.

Compared to MRAM, Weebit ReRAM is the logical choice for embedded applications, with the number one reason being ease of manufacturing. Weebit ReRAM requires significantly fewer layers and masks and doesn’t use exotic materials or special equipment, so it can be manufactured in the standard CMOS production line and doesn’t require designated cleanroom facilities. All this translates to lower costs. MRAM adds an estimated 30-40% to wafer cost, compared to ReRAM’s 5-7%. We will go into more depth on MRAM in a future article, but for now, suffice it to say that ReRAM has a long list of advantages over MRAM, and in our new study, we’ve outlined yet another advantage – ReRAM is much more ecologically friendly! 
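The cost adders above can be put in concrete terms with a small sketch. Only the percentage adders come from the comparison; the base wafer cost is a purely hypothetical figure for illustration:

```python
def wafer_cost_with_adder(base_cost, adder_pct):
    """Wafer cost after applying an NVM integration cost adder, given in percent."""
    return base_cost * (1 + adder_pct / 100.0)

base = 10_000.0  # hypothetical base wafer cost, in dollars
mram_range = (wafer_cost_with_adder(base, 30), wafer_cost_with_adder(base, 40))
reram_range = (wafer_cost_with_adder(base, 5), wafer_cost_with_adder(base, 7))
# On this assumed base, MRAM adds roughly $3,000-$4,000 per wafer; ReRAM $500-$700
```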

What we looked at

The team at CEA-Leti estimated the contribution of both OxRAM and MRAM to climate change, focusing on the production flows of each technology. To enable a fair comparison, the study looked at each technology in an identical die area in a similar process node and considered only the memory cell portion. They looked at raw materials and manufacturing processes (cradle to gate) without including infrastructure and abatement. Scroll to the end of the article to learn more about the data collection for the study*.

Key results

The study found that on all measured parameters, OxRAM demonstrated a better GHG related profile than MRAM. Below we’ve listed some of the key results.

ReRAM demonstrated the following benefits over MRAM:

  • 30% reduction in GHG emissions
  • 41% reduction in water use
  • 53% reduction in use of minerals and metals
  • 36% less electricity to process

The importance of critical materials

One of the key study findings is that the MRAM flow contains 2X more critical raw materials than the OxRAM flow. As defined by the European Union, the two main factors that define the criticality of a material are supply risk and economic importance. Supply risk is determined by criteria including supply concentration, import reliance, governance performance of suppliers, trade restrictions and criticality of substitute materials. Economic importance is based on a material’s added value, importance in end use applications, and the performance of any substitute materials. In the below chart you can see the criticality of various materials used in semiconductor manufacturing.

Many of the materials required for MRAM are at high supply risk, and some – like magnesium, platinum and cobalt – are critical in terms of both supply risk and economic importance. Any disruption of access to such materials, whether from political challenges, extreme weather, COVID lock-downs, or other issues can put a project at risk. In addition, the borates that are used in MRAM manufacturing have a very poor recycling input rate (less than 1%) – yet another consideration when looking at environmental impacts.

The bigger picture

There are many environmental considerations that come into play for semiconductor technologies such as NVMs. In our study, we specifically looked at the memory cells and circuits themselves, without accounting for the rest of the chip (e.g., microcontrollers) or the environmental impacts of the product lifecycle, such as power consumption during its usage and end-of-life recycling.

The results we’ve shown here can provide customers with confidence that when they are choosing an alternative to flash for their next design, they can not only count on the many known advantages of ReRAM, but they now know that Weebit ReRAM has a lower environmental impact and less supply chain risk than MRAM.

* Notes about the study

  • Primary data: All data about the steps of the production flow came from internal collection by Leti, which has broad expertise in both MRAM and ReRAM. Quantity and types of materials used (metals, chemicals and gases), water consumption, energy consumption, and air/water emissions were measured by Leti.
  • Secondary data: All raw materials data came from the Eco Invent database.
  • Production is in France and therefore the energy mix is the French mix.
Also Read:

How an Embedded Non-Volatile Memory Can Be a Differentiator

CEO Interview: Coby Hanoch of Weebit Nano


Model-Based Design Courses for Students

by Bernard Murphy on 01-31-2023 at 6:00 am


Amid the tumult of SoC design advances and accompanying verification and implementation demands, it can be easy to forget that all this activity is preceded by architecture design. At the architecture stage the usual SoC verification infrastructure is far too cumbersome for quick turnaround modeling. Such platforms also tend to be weak on system-wide insight. Think about modeling an automotive Ethernet to study tradeoffs between zonal and other system architectures. Synopsys Platform Architect is one possible solution though still centered mostly on SoC designers rather than system designers. MATLAB/Simulink offers a system-wide view, but you have to build your own model libraries.

Mirabilis VisualSim Architect offers a model-based design (MBD) system with ready-to-use libraries for popular standards and components in electronic design. They have now added a cloud-based subset of this system plus collateral to universities as a live, actionable training course. Called “Semiconductor and Embedded Systems Architecture Labs” (SEAL), the course provides hands-on training in system design to supplement MBD/MBSE courses.

Mirabilis VisualSim and MBD

Deepak Shankar (Founder at Mirabilis) makes the point that for a university or training center to develop a training platform, they must procure and maintain prototypes and tool platforms and build training material and lab tutorials. This is extremely time-consuming and expensive, and the material quickly drifts out of date.

VisualSim is a self-contained system plus model library requiring no integration with external hardware, tools or libraries. Even more important the full product is in active use today for production architecture design across an A-list group of semiconductor, systems, mil-aero, space and automotive companies who expect accuracy and currency in the model library. As one recent example, the library contains a model for UCIe, the new standard for coherent communication between chiplets.

Hardware models support a variety of abstractions, from SysML down to cycle accurate, and analog (with linear/differential equation solvers) as well as digital functionality. Similarly, software can evolve from a task-graph model to more fully elaborated code.

The SEAL Program

The lab is offered on the VisualSim Cloud Graphical Simulation Platform, together with training collateral in the form of questions and answer keys. The initial release covers 67 standards and 85 applications. Major applications supported by SEAL include AI, SoC, ADAS, Radars, SDR, IoT, Data Center, Communication, Power, HPC, multi-core, cache coherency, memory, Signal/Image/Audio Processing and Cyber Physical Systems. Major standards supported are UCIe, PCIe6.0, Gigabit Ethernet, AMBA AXI, TSN, CAN-XL, AFDX, ARINC653, DDR5 and processors from ARM, RISC-V, Power and x86.

Examples of labs and questions posed include:

  • What is the throughput degradation of multi-die UCIe based SoC versus an AXI based SoC?
  • How do autonomous driving timing deadlines change between multi-ECUs vs single HPC ECU?
  • How much power is consumed in different orbits of a multi-role satellite?
  • Which wired communication technology is more suitable for a flight avionics system – PCIe or Ethernet?

Course work can be graded by university teaching or training staff. Alternatively, Mirabilis is willing to provide certification at two levels. A basic level offers a Certificate of Completion for a student who works through a module and completes the Assessment Questions. More comprehensive options include a Professional Certificate for a student who successfully completes 6 modules, or a Mini Masters in Semiconductor and Embedded Systems for a student who completes 20 modules.

What’s Next?

While an MBD system of this type obviously needs some pretty sophisticated underlying technology to manage the multiple different types of simulation needed and stitching required between different modeling styles and abstractions, the practical strength of the system clearly rests on the strength of the library. Deepak tells me their commercial business splits evenly between semiconductor and systems clients, all doing architecture simulation. Working with both types of client keeps their model library tuned to the latest needs.

Semiconductor clients are constantly optimizing or up-revving SoC architectures. Systems clients are doing the same for more distributed system architectures – an automotive network, an O-RAN system, an avionics system, a multi-role satellite system. Which makes me wonder. We all know that system companies are now more heavily involved in SoC design, in support of their distributed systems. Some form of MBD must be the first step in that flow. A platform with models well-tuned (though not limited) to the SoC world might be interesting to such architects I would think?

You can learn more about the SEAL program HERE.

Also Read:

CEO Interview: Deepak Shankar of Mirabilis Design

Architecture Exploration with Mirabilis Design

Rethinking the System Design Process


Counter-Measures for Voltage Side-Channel Attacks

by Daniel Payne on 01-30-2023 at 2:00 pm


Nearly every week I read in the popular press another story of a major company being hacked: Twitter, Slack, LastPass, GitHub, Uber, Medibank, Microsoft, American Airlines. What is less reported, yet still important, are hardware-oriented hacking attempts at the board level that target a specific chip using voltage Side-Channel Attacks (SCA). To delve deeper into this topic I read a white paper from Agile Analog, which describes IP to detect when a voltage side-channel attack is happening so that the SoC logic can take appropriate security counter-measures.

Approach

Agile Analog has created a rather crafty IP block that plays the role of security sensor by measuring critical parameters like voltage, clock and temperature. Here’s the block diagram of the agileGLITCH monitor, comprised of several components:

agileGLITCH

The bandgap component provides a voltage reference and operates across a wide voltage span to enable glitch monitoring. Accuracy can optionally be increased using production trimming.

Each reference selector provides a configurable input voltage to the programmable comparators, allowing you to adjust the glitch thresholds - for example, if your core uses Dynamic Voltage Frequency Scaling (DVFS).

There are two programmable comparators, one for positive voltage glitches, and the other for negative glitch detection. You get to configure the thresholds for glitch detection, and the level-shifters enable the IOs to use the core supply.

The logic following each comparator provides control of enables based on the digital inputs, latching momentary events on the output of comparators, disabling outputs while testing, and 3-way majority voting on the latched outputs.
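The latching and 3-way majority voting described above can be modeled with a minimal sketch. This is our own illustrative logic, not Agile Analog's actual implementation:

```python
def latch(latched, comparator_out):
    """Hold a momentary comparator event until explicitly cleared."""
    return latched or comparator_out

def majority_vote(a, b, c):
    """3-way majority vote: flag a glitch only if at least two lanes agree."""
    return (a and b) or (a and c) or (b and c)

# A momentary glitch seen by two of three redundant lanes is flagged,
# while a single noisy lane is ignored.
glitch_flag = majority_vote(True, True, False)  # two lanes agree
noise_flag = majority_vote(True, False, False)  # single lane only
```

The vote makes the detector robust against a single flaky comparator, while the latch ensures a glitch too brief for software polling is still captured.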

Not shown in the block diagram is an optional ADC component to measure the supply value, something useful for lifetime issues, or measuring performance degradation.

Use Cases

Consider an IoT security device like a wireless door lock for a home, where a malicious person gains access to the lock and uses voltage SCA to enter the device’s debug mode, reading all of the authorized keys for the lock. With agileGLITCH embedded, the IoT device detects and records the voltage glitch, alerting the cloud system of an attack and noting the date and time.

IOT WiFi lock

A security camera has been compromised using voltage SCA to get around the boot-signing sequence, allowing agents to reflash it with hacked firmware. This kind of exploit lets the hacker view the video and audio stream, violating privacy and setting up a blackmail scenario. Using the agileGLITCH counter-measure, the camera system detects voltage glitch events, then stops any unknown code from being flashed, and it could report to the consumer that the device was compromised before they purchased it.

Security Camera

An automotive supply regulator tests OK at the factory, but over time, during high-load conditions, the voltage degrades and eventually fails. The agileGLITCH sensor is a key component of a system that could measure voltage degradation over time (using an ADC and digital data monitor) and report back to the automotive vendor so that they can issue a recall to repair or replace the supply regulator. The trend is to provide remote automotive fixes, over the air.

Supply Regulator

A hacker wants to remove Digital Rights Management (DRM) from a satellite system, installing a voltage glitcher on the HDMI controller supply to reset the HDMI output to be non-HDCP validated. Counter-measures in agileGLITCH detect voltage glitching, safeguarding the HDMI controller from tampering.

Satellite Receiver System

Summary

Hacking is happening every day, all around the world, and the exploits continue to grow in complexity and penetration. Voltage SCA is a hacking technique used when the bad actors have physical access to the electronics and they use supply glitching techniques to put the system into a vulnerable state, but this approach only works if there are no built-in counter-measures. With an approach like agileGLITCH embedded inside an electronic device, then these voltage SCA hacking attempts can be identified and thwarted, before any unwanted changes are made. An ounce of prevention is worth a pound of cure, and that applies to SCA mitigation.

To download and read the entire white paper, visit the Agile Analog site and complete a short registration process.

Related Blogs

 


Achronix on Platform Selection for AI at the Edge

by Bernard Murphy on 01-30-2023 at 10:00 am

Edge compute

Colin Alexander (Director of product marketing at Achronix) released a webinar recently on this topic. At only 20 minutes, the webinar is an easy watch and a useful update on data traffic and implementation options. Downloads are still dominated by video (over 50% for Facebook), which now depends heavily on caching at or close to the edge. Which of these applies depends on your definition of “edge”. The IoT world sees itself as the edge; the cloud and infrastructure world apparently sees the last compute node in the infrastructure, before those leaf devices, as the edge. Potato, potahto. In any event, the infrastructure view of the edge is where you will find video caching, serving the most popular downloads as efficiently and as quickly as possible.

Compute options at the edge (and in the cloud)

Colin talks initially about infrastructure edge where some horsepower is required in compute and in AI. He presents the standard options: CPU, GPU, ASIC or FPGA. A CPU-based solution has the greatest flexibility because your solution will be entirely software based. For the same reason, it will also generally be the slowest, most power hungry and longest latency option (for round trip to leaf nodes I assume). GPUs are somewhat better on performance and power with a bit less flexibility than CPUs. An ASIC (custom hardware) will be fastest, lowest power and lowest latency, though in concept least flexible (all the smarts are in hardware which can’t be changed).

He presents FPGA (or embedded FPGA/eFPGA) as a good compromise between these extremes. Better on performance, power and latency than CPU or GPU and somewhere between a CPU and a GPU on flexibility. While much better than an ASIC on flexibility because an FPGA can be reprogrammed. Which all makes sense to me as far as it goes, though I think the story should have been completed by adding DSPs to the platform line up. These can have AI-specific hardware advantages (vectorization, MAC arrays, etc) which benefit performance, power, and latency. While retaining software flexibility. The other important consideration is cost. This is always a sensitive topic of course but AI capable CPUs, GPUs and FPGA devices can be pricey, a concern for the bill of materials of an edge node.

Colin’s argument makes most sense to me at the edge for eFPGA embedded in a larger SoC. In a cloud application, constraints are different. A smart network interface card is probably not as price sensitive and there may be a performance advantage in an FPGA-based solution versus a software-based solution.

Supporting AI applications at the compute edge through an eFPGA looks like an option worth investigating further. Further out towards leaf nodes is fuzzy for me. A logistics tracker or a soil moisture sensor for sure won’t host significant compute, but what about a voice activated TV remote? Or a smart microwave? Both need AI but neither need a lot of horsepower. The microwave has wired power, but a TV remote or remote smart speaker runs on batteries. It would be interesting to know the eFPGA tradeoffs here.

eFPGA capabilities for AI

Per the datasheet, Speedster 7t offers fully fracturable integer MACs, flexible floating point, native support for bfloat and efficient matrix multiplications. I couldn’t find any data on TOPS or TOPS/Watt. I’m sure that depends on implementation but examples would be useful. Even at the edge, some applications are very performance sensitive – smart surveillance and forward-facing object detection in cars for example. It would be interesting to know where eFPGA might fit in such applications.

Thought-provoking webinar. You can watch it HERE.

Also Read:

WEBINAR: FPGAs for Real-Time Machine Learning Inference

WEBINAR The Rise of the SmartNIC

A clear VectorPath when AI inference models are uncertain


Taming Physical Closure Below 16nm

by Bernard Murphy on 01-30-2023 at 6:00 am

NoC floorplan

Atiq Raza, well known in the semiconductor industry, has observed that “there will be no simple chips below 16nm”. By which he meant that only complex and therefore high value SoCs justify the costs of deep submicron design.  Getting to closure on PPA goals is getting harder for such designs, especially now at 7nm and 5nm. Place and route technologies and teams are not the problem – they are as capable as ever. The problem lies in increasingly strong coupling between architectural and logic design and physical implementation. Design/physical coupling at the block level is well understood and has been addressed through physical synthesis.  However, below 16nm it is quite possible to design valid SoC architectures that are increasingly difficult to place and route, causing project delays or even SoC project cancellations due to missed market windows.

Why did this get so hard?

Physical implementation is ultimately an optimization problem: finding a placement of interconnect components and connections between blocks in the floorplan that delivers an optimum in performance and area, while also conforming to a set of constraints and meeting target specs within a reasonable schedule. The first goal is always possible if you are prepared to compromise on what you mean by “optimum”. The second goal depends heavily on where optimization starts and how much time each new iteration consumes in finding an improved outcome. Start too far away from a point which will deliver required specs, or take too long to iterate through steps to find that point, and the product will have problems.

This was always the case, but SoC integrations in advanced processes are getting much bigger. Hundreds of blocks and tens of thousands of connections expand the size of the optimization space. More clock and power domains add more dimensions, and constraints. Safety requirements add logic and more constraints, directly affecting implementation. Coherent networks add yet more constraints since large latencies drag down guaranteed performance across coherent domains. In this expanding, many-dimensional and complex constrained optimization space with unpredictable contours, it’s not surprising that closure is becoming harder to find.

A much lower risk approach would start place and route at a point reasonably close to a good solution, without depending on long iteration cycles between design and implementation.

Physically aware NoC design

The integration interconnect in an SoC is at the heart of this problem. Long wires create long delays which defeat timing closure. Many wires running through common channels create congestion which forces chip area to expand to reduce congestion. Crossbar interconnects with their intrinsically congested connectivity were replaced long ago by network on chip (NoC) interconnects for just this reason. NoC interconnects use network topologies which can more easily manage congestion, threading network placement and routing though channels and white space in a floorplan.

But still the topology of the NoC (or multiple NoCs in a large design) must meet timing goals; the NoC design must be physically aware. All those added constraints and dimensions mentioned earlier further amplify this challenge.

NoC design starts as a logical objective, to connect all IP communication ports as defined by the product functional specification while assuring a target quality of service. And meeting power, safety and security goals. Now it is apparent that we must add a component of physical awareness to these logical objectives. Estimation of timing between IP endpoints and congestion based on a floorplan in early stages of RTL development, to be refined in later stages with a more accurate floorplan.

With such a capability, a NoC designer could run multiple trials very quickly, re-partitioning the design as needed, to deliver a good starting point for the place and route team. That team would then work their magic to fully optimize the design, confident that the optimum they are searching for is reasonably close to that starting point and that they will not need to send the design back for restructuring and re-synthesis.

Additional opportunities

Physically aware NoC design could offer additional advantages. By incorporating floorplan information in the design stage, a NoC designer can build a better NoC. Understanding latencies, placements and channel usage while still building the NoC RTL, they may realize opportunities to use a different topology (see the topology above as one example). Perhaps they can use narrower or longer connections on latency-insensitive paths, avoiding congestion without expanding area.

Ultimately, physical awareness might suggest changes to the floorplan which may deliver an even better implementation than originally considered.

Takeaway

Charlie Janac, CEO at Arteris, stressed this point in a recent SemiWiki podcast:

Physical awareness is helpful for back-end physical layout teams to understand the intent of the front-end architecture and RTL development teams.  Having a starting point that has been validated for latency and timing violations can significantly accelerate physical design and improve SoC project outcomes.  This is particularly important in scenarios where the architecture is being done by one company and the layout is being done by another. Such cases often arise between system houses such as automotive OEMs and their semiconductor design partners. Physical awareness is beneficial all around. It’s a win-win for all involved.

Commercial interconnect providers need to step up to make their NoC IP physically aware out of the box. This is becoming a minimum requirement for NoC design in advanced technologies. You might want to give Arteris a call, to understand how they are thinking about this need.

Also Read:

Arteris IP Acquires Semifore!

Arm and Arteris Partner on Automotive

Coherency in Heterogeneous Designs


Podcast EP141: The Role of Synopsys High-Speed SerDes for Future Ethernet Applications

by Daniel Nenni on 01-27-2023 at 10:00 am

Dan is joined by Priyank Shukla, Staff Product Manager for the Synopsys High Speed SerDes IP portfolio. He has broad experience in analog and mixed-signal design, with a strong focus on high-performance compute, mobile and automotive SoCs, and he holds a US patent on low-power RTC design.

Dan explores the use of high-speed SerDes with Priyank. Applications that enable high-speed Ethernet for data center and 5G systems are discussed. The performance, latency and power requirements for these systems are quite demanding; how Synopsys' advanced SerDes IP is used to address these challenges is also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CTO Interview: John R. Cary of Tech-X Corporation

by Daniel Nenni on 01-27-2023 at 6:00 am


John R. Cary is professor of physics at the University of Colorado at Boulder and CTO of Tech-X Corporation. He received his PhD from the University of California, Berkeley, in Plasma Physics.  Prof. Cary worked at Los Alamos National Laboratory and the Institute for Fusion Studies at the University of Texas, Austin, prior to joining the faculty at the University of Colorado. At the University of Colorado, Professor Cary has served as department chair, center director, and faculty mentor.

In 1994, he co-founded Tech-X Corporation, which concentrates on computational applications across a wide variety of science and engineering fields. Prof. Cary has researched multiple areas related to beam and plasma physics and the electromagnetics of structures. He is a fellow of the American Physical Society, past chair of its Division of Plasma Physics, the 2015 recipient of the John Dawson Prize for Numerical Simulation of Plasmas, the 2016 recipient of the NPSS Charles K. Birdsall Award for Contributions to Computational Nuclear and Plasma Sciences, and the recipient of the 2019 IEEE Nuclear and Plasma Sciences Section Particle Accelerator Science and Technology Award.

What is the Tech-X backstory? 
The folks here at Tech-X have been working in high-performance computing, specifically as it relates to physical simulation, since the early 90's. Distributed memory parallelism, where a calculation is split effectively over many separate computers, was in its infancy. Tech-X was bringing the power of parallelism to plasma computations. Specifically, we excelled at computations of laser wakefield acceleration, in which electrons are accelerated to high energies by the wake fields that incident laser pulses generate in plasmas. This work supported experiments at multiple national laboratories, fulfilling their needs for very large simulations. Following these successes, Tech-X branched out over many areas of plasma physics, including magnetic fusion. We further broadened our capabilities to include the electromagnetics of structures, such as cavities, antennas, and photonics.

In the process, Tech-X built an experienced cadre of high-performance computing experts. These experts constructed a software stack for efficient computational scaling, which means that the computation does not bog down when performed on a large number of processors. This software, VSim, is licensed for use on our customers' own hardware. In addition, Tech-X engages in consulting projects and partnerships staffed by its 30 employees and multiple long-term consultants.

More recently Tech-X has devoted increasing effort to democratizing High-Performance Computing (HPC), by building out an easy-to-use Graphical User Interface. Known as Composer, it allows users to build and run simulations as well as analyze and visualize the results.  Composer abstracts the process of job submission on HPC clusters so that to the user it is just like working on a desktop.  Tech-X is also developing a cloud strategy, so expect more announcements later this year.

What areas are you targeting for future growth?
Our mission is to provide specific capabilities in two areas. We currently provide VSimPlasma software and consulting services for the modeling of plasmas in semiconductor chambers. We are also in the early phases of productizing software for modeling of nano-photonics for photonic integrated circuits (PICs). Both of these applications present unique challenges because the feature sizes of interest are small compared to the overall system size, which makes them computationally intensive. The range of feature scales is large, requiring fine resolution over a large region. For example, in semiconductor chambers there are small features at the wafer surface, but even if the wafer is uniform the plasma forms sheaths, which represent drops in the electric potential at the edge of the wafer. These sheaths are much smaller than the size of the chamber.

In nano-photonics, the PIC components being designed are typically measured in microns – but manufacturing causes roughness in the sidewalls that is much smaller, on the order of nanometers. In either of these applications the grid must be very fine to resolve these small features and provide accurate results, and it must also span a large region, leading to the requirement for many billions, or even trillions, of cells. This is where Tech-X software excels.
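To see why cell counts explode, here is a rough back-of-the-envelope calculation (the dimensions are illustrative, not a specific Tech-X case): resolving nanometer-scale sidewall roughness across a component tens of microns on a side requires on the order of (size/resolution)³ cells.

```python
# Back-of-the-envelope grid size for a multiscale photonics simulation.
# Illustrative, assumed numbers only.
domain_um = 20.0       # component extent per axis, in microns
resolution_nm = 5.0    # grid spacing needed to resolve sidewall roughness

cells_per_axis = domain_um * 1000.0 / resolution_nm   # 4,000 cells per axis
total_cells = cells_per_axis ** 3
print(f"{total_cells:.0e} cells")  # 6e+10 -> tens of billions of cells
```

Tighten the resolution or widen the domain even slightly and the count moves into the trillions, which is the regime the interview refers to.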

What makes VSimPlasma software unique?
Plasma chambers involve many different spatial scales, from the scale of the chamber itself down to the details of the plasma close to the wafer. The larger scales have traditionally been modeled with fluid codes. However, to compute the details of the plasma sheath (and consequently the distribution of particles hitting the wafer, which determines, e.g., whether one can etch narrow channels sufficiently deep), one must use a particle-in-cell (PIC) method, as provided by VSimPlasma from Tech-X. For such problems VSimPlasma is the leader due to its extensive physics, including its capability to handle large sets of collisions, its many electromagnetic and electrostatic field solvers, and its multiple algorithms for particle-field interactions. VSim also has the ability to model particle-surface interactions, including the generation of secondary particles and the reactions of particles on the surface. These are crucial for accurately modeling plasma discharges. In semiconductor etching, deep vias require the ions to hit the wafer at a near-vertical angle. VSim models that critical distribution extremely well, and we continue to refine our code with each release based on feedback from our customers in the semiconductor industry.
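For readers unfamiliar with the method, the core particle-in-cell loop (deposit charge onto a grid, solve for the field, gather the field back to particles, push the particles) can be sketched in a few lines. This is a bare-bones 1D electrostatic toy in normalized units, not VSimPlasma's implementation:

```python
import numpy as np

# Bare-bones 1D electrostatic PIC loop in normalized units (toy only).
rng = np.random.default_rng(0)
ng, n_part, L, dt = 64, 10_000, 1.0, 0.05
dx = L / ng
x = rng.random(n_part) * L          # particle positions
v = rng.standard_normal(n_part)     # particle velocities

for step in range(50):
    # 1. Deposit: nearest-grid-point charge weighting onto the grid.
    cells = (x / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng).astype(float)
    rho = rho / rho.mean() - 1.0    # subtract neutralizing background

    # 2. Field solve: periodic Poisson equation via FFT.
    k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)
    rho_k = np.fft.rfft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = -np.gradient(np.fft.irfft(phi_k, ng), dx)

    # 3. Gather and push: interpolate E to particles, advance v then x.
    v += E[cells] * dt
    x = (x + v * dt) % L            # periodic boundary
```

Production codes like VSimPlasma add collisions, surface chemistry, higher-order weighting and far more sophisticated solvers, but the deposit/solve/gather/push cycle above is the skeleton they share.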

An additional uniqueness of VSim in plasma modeling is fitting into commercial workflows. It has an easy-to-use interface and integrates with CAD.  VSim further allows the development of analyzer plugins so that the user can analyze both the fields and the particles within the plasma.

What keeps your customers up at night?
As everyone knows, moving to smaller critical dimensions is making the problems harder and driving up capex, which causes all kinds of business problems.  There are too many variables in advanced plasma processing to optimize with a pure experimental approach.  Semiconductor companies are augmenting prototyping with simulation. Plasma etch is a difficult area involving many variables, including geometries of the etch, wafer and chamber, the plasma energy and chemistry in the chamber, and the wafer surface and etch profile. Our semiconductor customers’ interests are to reduce time and cost by reducing experimental iterations when tackling an advanced process etching recipe. The ROI from use of simulation is measured in reduced time to production, development cost and machine utilization.

How do customers engage with you?
There are several ways our customers engage with us including directly phoning or emailing our sales team or requesting an evaluation license through our website.  An application engineer (AE) will then contact the customer to determine how our software might best fit their needs.  The AE sets up the download and walks the customer through the software.  Several of our customers have independently set up simulations using the software on their own.  VSim comes with a rich set of examples for modeling of plasmas, vacuum electronics devices, and electromagnetics for antennas and cavities.  In addition, we provide various levels of consulting services, ranging from an AE setting up your problem and guiding you to the solution, to an AE completely solving your problem, including data analysis, and then providing the direct result.

What is next for Tech-X?
We have a number of skunk-works projects under way that will bring exciting new capabilities to plasma and photonics modeling. We are looking at GPU and cloud computing with the aim of making computations fast to reduce development time, the number of fabrication cycles and the need for capital expenditures. We expect to have improved capabilities for modeling the latest plasma etch reactors, which will be unique in the industry. We have an upcoming webinar showing our current capabilities, and will soon have a series of webinars that demonstrate our latest features and plans.

Webinar: Learn More About VSim 12.0
Built on the powerful Vorpal physics engine that researchers and engineers have used for over 20 years, VSim 12 offers new reaction capabilities, user examples, and numerous improvements designed to increase user efficiency.

Also Read:

Understanding Sheath Behavior Key to Plasma Etch


ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

by Robert Maire on 01-26-2023 at 10:00 am


-Demand far exceeds supply & much longer than any downturn
-Full speed ahead-$40B in solid backlog provides great comfort
-ASP increase shows strength- China is non issue
-In a completely different league than other equipment makers

Reports a good beat & Guide

Revenues were Euro6.4B, with system sales making up Euro4.7B of that. EPS was Euro4.6 per share. All beat expectations. 18 EUV systems were shipped and 13 systems were recognized. Most importantly, order intake was Euro6.3B, of which EUV was Euro3.4B. In essence, ASML's book to bill ratio remains very strong at better than 1.3.
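For the arithmetic behind that ratio, book-to-bill is simply order intake divided by system sales recognized in the period:

```python
# Book-to-bill from the quarter's reported figures (Euro billions).
order_intake = 6.3   # total order intake
system_sales = 4.7   # system revenue recognized
book_to_bill = order_intake / system_sales
print(f"{book_to_bill:.2f}")  # 1.34 -> "better than 1.3"
```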

ASML has a huge, multi year backlog of Euro40.4B, which keeps them very warm at night. Reassuringly, the backlog continues to build.

Backlog timeframe well exceeds any possible downturn length

With Euro40.4B in backlog and continuing strong orders, ASML has a multi year backlog. The bottom line is that customers never get off the order queue and the queue keeps growing in length.

Customers understand the long term growth model of semiconductors and are clearly ignoring a short term weakness, whether it's 6 months, a year, or more. ASML will ride over any expected weak period.

Other equipment makers, who compete for business with quick lead times, are not so fortunate and will revert to a "turns" business and see orders fall off, as customers can easily get out of the order queue and get back on when the industry picks up again.

ASP increases demonstrate strength

ASML mentioned that its EUV ASPs are increasing from 160M to 165-170M, which further indicates the level of strength that being a virtual monopoly brings. ASML is the only EUV game in town and can price to market. DUV pricing has also increased. Both increases are based on productivity parameters.

We highly doubt that other semiconductor equipment segments are able to push through price increases in the face of falling orders, even with increased performance, which they usually give away for free.

This is one of the keys that separates ASML from others in the semi equipment market and puts them in a league of their own. ASML is looking at an up 2023 while others are talking about WFE being down 20%.

This also implies that if lithography spend is actually up in 2023, non-litho spend is actually down more than 20%, further separating ASML from other semi equipment makers.

Full speed ahead with high NA and production capacity increases

ASML has been under a lot of pressure to increase production and has spent a huge amount of both money and effort with suppliers, most notably Zeiss, to increase production to an expected 60 EUV and 375 DUV systems in 2023.
ASML will continue to spend, as the job is not done and they need more capacity. Another major expense is the high NA product, which is seeing a large development spend in advance of any revenue.

This all suggests that ASML's results might be even better without the "headwinds" of the additional spend they currently have. Clearly the spend is relatively minor; with a Euro7.4B cash balance and strong earnings, they are very comfortably awash in cash.

Results will still vary as to mix and lumpiness

Given the high ASP of systems and the differential between ASPs of DUV & EUV, we expect lumpiness in quarters depending upon what is shipped in which quarter and where customer near term demand goes. ASML is expecting a slightly weak Q1, which appears to be due primarily to mix and normal lumpiness; we are not in the least concerned.

China remains a non-issue as semiconductors are a global zero sum game

We have repeated many times that the semiconductor industry is a zero sum game. That is, chip demand remains the same regardless of where the chips are made. If chips are not made in China (due to the embargo) they will be made elsewhere by others, and those others will need the same litho tools that China would have otherwise bought. The only impact is that China is kept out of the leading edge that other countries have access to.

ASML will still sell the same number of EUV tools just shipping them to other places. Although politically sensitive and much talked about, the actual impact on ASML is near zero.

ASML remains above the near term fray maintaining focus on long term

Management, while certainly cautious about near term issues, is rightly more focused on long term issues of capacity and technology. This 5 to 10 year focus is very appropriate given the business that they are in. EUV took decades of development as ASML struggled through advances, but the company was rewarded in the long term for its dedication to the cause of technology. Building capacity is a long term and costly struggle, as is technology, and ASML is investing for the future.

The stocks

We continue to view ASML's valuation as well above the rest of the semi equipment makers, in a league of their own. They are also unique in that their view is of an up year versus everyone else's expectation of a down year.

Although ASML talked about a potential recovery of the industry in H2 2023, we are a bit more cautious given the depth of this downturn being one of the worst we have seen in a long time. But none of this matters to ASML given their horizon.

We would remain an owner/buyer of ASML stock but would remain light on the rest of the group especially LRCX and AMAT given their shorter term equipment model in the face of the widespread weakness coupled with China issues, a double whammy that ASML does not see.

As with the lenses and focal lengths that ASML is well acquainted with, being focused on the long term means the short term is out of focus and less relevant to them…..

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken

Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus


Designing a ColdADC ASIC For Detecting Neutrinos

by Kalar Rajendiran on 01-26-2023 at 6:00 am

The DUNE Experiment

Cliosoft recently hosted a webinar where Carl Grace, a scientist from Lawrence Berkeley National Laboratory (LBNL), talked about a cutting-edge project for detecting neutrinos: the Deep Underground Neutrino Experiment (DUNE) project. Many of us know what a neutron is, but what is a neutrino? Before we get to that, here is some background on Cliosoft and some insights into LBNL.

Cliosoft: The company has been serving the semiconductor industry for more than 25 years. Its product offerings fall into three main categories: hardware design data management, IP reuse, and highlighting differences between two designs directly on a schematic or layout. The relevance of Cliosoft to the DUNE project is directly tied to Cliosoft's Hardware Design Data Management tool suite. This tool suite empowers multi-site design teams to efficiently collaborate on complex hardware designs, and the DUNE project is quite a complex one with demanding requirements. The project involves collaboration among the national labs LBNL, Fermilab and Brookhaven National Laboratory.

LBNL: Many of us have heard of LBNL but may not be aware of its expertise, excellence and diversity. With 3,500 employees and 1,000 students, it is much larger than many would have imagined, and it hosts 1,750 visiting researchers. With this much brain power directed at the physical sciences, computing, biosciences, earth and energy sciences, and material and nanotechnologies, it is the most diverse US National Laboratory. It offers the following user facilities for researchers to tap into: the Advanced Light Source, National Energy Research Scientific Computing Center, Energy Sciences Network, Joint Genome Institute, and Molecular Foundry, including the National Center for Electron Microscopy. The Lab has 14 Nobel prizes to its credit, the most recent in chemistry by Prof. Jennifer Doudna (co-discoverer of CRISPR gene editing).

Whether one is in particle physics and/or semiconductors, there is something of interest and value in this webinar. To watch this on-demand webinar, go here.

What is a Neutrino and why study them?

Neutrinos are fundamental particles that have very low mass, travel close to the speed of light, and interact only through gravity and the weak nuclear force. Neutrinos could help answer questions such as: why is there matter in the universe, do isolated protons decay, and how can we witness the birth of a black hole?

How to detect Neutrinos?

Neutrinos travel at almost the speed of light and can pass through 60 light years of water on average before interacting with any matter. This makes it very difficult to detect them. The solution is the DUNE detector, the largest cryogenic particle detector ever made, which can detect neutrinos from an intense neutrino beam directed toward it from 800 miles away. A tight neutrino beam is developed when protons from an accelerator at Fermilab hit a target. This tight neutrino beam is sent over 800 miles through solid underground rock and earth to a detector sitting one mile under the ground. This setup prevents cosmic rays from having any impact on the experiment. The detector itself is an extremely large tank filled with liquid Argon. Liquid Argon, being very dense, provides a lot of targets for the neutrinos to potentially hit. Being chemically inert, Argon does not cause any chemical reactions that would disturb the experiment and pollute the collected data.

 

When a neutrino interacts

When a neutrino interacts with an atom of Argon, the atom is ionized. The freed electrons form an electric charge that travels through the liquid Argon in the tank. The tank is placed under an enormous electric field that drifts this charge onto planes of wires. When the charge reaches those wires, it induces very small currents that can then be recorded. Reading out and digitizing these tiny currents induced in the wires is a key part of the experiment, and a key function of the detection electronics is the analog to digital conversion (ADC). Immersing the detector electronics in liquid Argon greatly reduces the cabling capacitance, allowing lower achievable noise, and serves as an enabling technology for the DUNE project.

Cold ADC Requirements

  • 2 MS/s sampling rate per channel, 16 channels, at 12-bit resolution
  • Sub-LSB noise performance
  • 30-year reliability in a cold environment (-184°C)
  • Operation at both room temperature and cold, for testing purposes

Readily available off-the-shelf ADCs cannot meet the above requirements. Custom ADCs need to be built and integrated into ASICs implementing the detection electronics.
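A quick sanity check on the scale these requirements imply (the channel counts and rates come from the article; the per-ASIC data-rate figure is simple arithmetic, not a published spec):

```python
# Scale implied by the ColdADC requirements described in the article.
asics = 40_000               # ColdADC ASICs deployed at the Far Detector
channels_per_asic = 16
sample_rate_hz = 2_000_000   # 2 MS/s per channel
bits = 12                    # resolution per sample

total_channels = asics * channels_per_asic
raw_bits_per_asic = channels_per_asic * sample_rate_hz * bits

print(total_channels)            # 640000 wire channels
print(raw_bits_per_asic / 1e6)   # 384.0 Mbit/s of raw samples per ASIC
```

The 640,000-channel total matches the deployment figure quoted later in the article.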

Collaboration among teams from the three labs

A small team from each of LBNL, Fermilab and Brookhaven National Laboratory collaborated to design the detection electronics for the DUNE project. With different pieces of the required design IP developed by geographically separated teams, the Cliosoft data management solution enabled automated, design-aware, surgical data synchronization. It allowed fine-grained access controls for each participating national lab and provided a way to optimize network storage at each participating site.

Summary

The three-lab team has successfully developed the ColdADC ASICs to instrument the neutrino detector immersed in liquid Argon. Approximately 40,000 ColdADC ASICs will be deployed at the DUNE Far Detector complex, immersed in liquid Argon. Each ColdADC will read out 16 channels, for a total of 640,000 wire channels. The detector electronics can be operated over a 250°C range and have achieved better noise performance than the commercial ADC solution used in the Short Baseline Neutrino Detector (SBND) experiment. The DUNE experiment will be conducted over a 30-year period.

Also Read:

Design to Layout Collaboration Mixed Signal

Webinar: Beyond the Basics of IP-based Digital Design Management

Agile SoC Design: How to Achieve a Practical Workflow