ASIL B Certification on an Industry-Class Root of Trust IP
by Bernard Murphy on 02-08-2023 at 6:00 am

ASIL B requirements

I have always been curious about how Austemper-based safety methodologies (from Siemens EDA) compare with conventional safety flows. Siemens EDA together with Rambus recently released a white paper on getting a root of trust IP to ASIL B certification. This provides a revealing insight, beyond the basics of fault simulation, into a detailed campaign on an industrial-scale IP. Siemens EDA and Rambus describe using the Austemper toolset on the Rambus RT-640 root of trust IP, and the steps they went through to achieve the functional safety metrics required for ASIL B certification.

The Austemper toolkit

In a typical FMEDA flow you would first use spreadsheets and engineering judgment to decide where you should insert safety mitigation techniques in an RTL design. Then you would run a fault campaign using fault simulation to determine how effectively your mitigation techniques have worked, as measured by the appropriate ASIL requirements. This could lead to lengthy loops to reach targets such as those for ASIL B.
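
For context on those targets: ISO 26262 defines two hardware architectural metrics, the single-point fault metric (SPFM) and the latent fault metric (LFM), and ASIL B requires SPFM ≥ 90% and LFM ≥ 60%. Here is a minimal sketch of that bookkeeping; the FIT numbers are illustrative, not taken from the paper:

```python
# Minimal ISO 26262 metric bookkeeping. The FIT numbers below are
# illustrative only, not from the Rambus/Siemens white paper.

def spfm(total_fit, single_point_or_residual_fit):
    """Single-point fault metric: fraction of safety-related faults
    that are neither single-point nor residual."""
    return 1.0 - single_point_or_residual_fit / total_fit

def lfm(total_fit, single_point_or_residual_fit, latent_fit):
    """Latent fault metric: of the faults that are not single-point or
    residual, the fraction that are not latent multi-point faults."""
    return 1.0 - latent_fit / (total_fit - single_point_or_residual_fit)

total = 1000.0   # total safety-related FIT
sp_rf = 80.0     # single-point + residual FIT (dangerous, undetected)
latent = 150.0   # latent multi-point FIT

print(f"SPFM = {spfm(total, sp_rf):.1%}  (ASIL B target >= 90%)")
print(f"LFM  = {lfm(total, sp_rf, latent):.1%}  (ASIL B target >= 60%)")
```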

In the Austemper flow, SafetyScope will estimate FIT rates and FMEDA metrics, and suggest a preliminary fault list before safety insertion. It can then be run again after fault simulation to provide a summary report with final metrics and detection coverage. Kaleidoscope runs fault simulation, categorizing faults as detected, not detected, not triggered, or not observed (at an observable point).

Faults modeled in the analysis

Following the standard, the Austemper flow models three types of faults:

Transient. These are, as the name suggests, temporary faults and may result from cosmic rays, electromagnetic interference, or other transitory stimuli. The flow runs a quick pseudo-synthesis to find state elements, putting these into the fault list. During analysis such a fault is enabled at the outset and remains active until it is removed at the end of a time window. The length of the window is configurable.

Permanent. These are durable faults and may result from design errors, configuration errors, deadlocks or other influences which can create a stuck state. Candidates include state and non-state elements and are modeled using stuck-at-1 or 0 values, just as in DFT analyses. These errors persist throughout a fault simulation.

Latent. These faults are very tricky to find and to mitigate because they result from a failure depending on two or more faults in the system, especially when one of them occurs in safety mitigation logic. Austemper models latent faults with one stuck-at in the functional logic and one in the corresponding safety system. (Latent faults depending on 3+ simultaneous failures have very low probability.)
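
To make the three models concrete, here is a toy fault-injection sketch of my own (not the Austemper implementation): a hypothetical 2-bit counter duplicated in lockstep as its safety mechanism, with a mismatch raising the alarm. Note how the latent case, the same stuck-at in both the functional and safety copies, never raises an alarm.

```python
# Toy fault-injection sketch for the three fault models above. The
# "design" is a hypothetical 2-bit counter with a duplicated (lockstep)
# copy acting as its safety mechanism; any mismatch raises an alarm.

def step(state, stuck=None):
    """One clock of a 2-bit counter; 'stuck' = (bit, value) forces a
    stuck-at fault onto the next state."""
    nxt = (state + 1) % 4
    if stuck is not None:
        bit, val = stuck
        nxt = (nxt & ~(1 << bit)) | (val << bit)
    return nxt

def simulate(cycles, func_fault=None, safety_fault=None, window=None):
    """Run functional and safety copies in lockstep. If 'window' is set,
    func_fault is transient: active only for cycles [0, window), i.e.
    enabled at the outset and removed after a configurable interval."""
    f = s = 0
    for cyc in range(cycles):
        active = func_fault if (window is None or cyc < window) else None
        f = step(f, active)
        s = step(s, safety_fault)
        if f != s:
            return f"alarm at cycle {cyc}"
    return "no alarm"

print("permanent:", simulate(8, func_fault=(0, 1)))                       # detected
print("transient:", simulate(8, func_fault=(0, 1), window=2))             # detected
print("latent   :", simulate(8, func_fault=(0, 1), safety_fault=(0, 1)))  # masked!
```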

Practical considerations in the fault campaign

Fault simulation of many faults over a large circuit could consume a huge amount of time without careful planning. The Siemens and Rambus teams describe several techniques they used to keep this manageable.

First, they don't always work with the full fault list. They strategically evaluate subsets of faults at different stages to slim down the set before working on the hardest cases. For example, they first analyze around known safety-critical areas. Then they (temporarily) reduce the fault-tolerant time interval (FTTI) to determine which faults can be detected quickly. With similar intent, they temporarily treat sequential elements as observable points, allowing them to filter out any faults which reach a primary output without triggering an alarm.

This ultimately leaves them with a subset of undetected faults which must be analyzed for the full FTTI to determine if any escape to an output without raising an alarm. These are the most expensive to evaluate since they can fan out through multiple cycles, creating multiple simultaneously active faulty traces before ultimately registering as detected or otherwise.
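
In code terms, the staged triage might look something like the sketch below; the fault records and their fields are hypothetical, not the tool's actual data model:

```python
# Hypothetical staged triage of a fault list. Each record carries results
# from a cheap short-window simulation: alarm latency in cycles (or None)
# and whether the fault reached an observable flop without an alarm.

faults = [
    {"id": "f0", "alarm_latency": 3,    "escaped_to_flop": False},
    {"id": "f1", "alarm_latency": None, "escaped_to_flop": True},
    {"id": "f2", "alarm_latency": 40,   "escaped_to_flop": False},
    {"id": "f3", "alarm_latency": None, "escaped_to_flop": False},
]

SHORT_FTTI = 10  # temporarily reduced fault-tolerant time interval

# Stage 1: faults detected quickly under the reduced FTTI are classified.
quick = [f for f in faults
         if f["alarm_latency"] is not None and f["alarm_latency"] <= SHORT_FTTI]

# Stage 2: treating flops as observable points filters out faults that
# propagate without an alarm; they go straight to mitigation review.
escapes = [f for f in faults if f["escaped_to_flop"]]

# Whatever is left needs the expensive full-FTTI simulation.
remaining = [f for f in faults if f not in quick and f not in escapes]

print("cheaply classified:", [f["id"] for f in quick])
print("escapes (filtered):", [f["id"] for f in escapes])
print("need full-FTTI sim:", [f["id"] for f in remaining])
```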

Fault simulation depends on stimulus vectors, which may not trigger a fault, or may trigger it without it raising an alarm or being observed at a primary output. These faults they consider unclassified. Improving the stimulus may help, but there are limits to that option for a heavily parallelized software-based fault simulation. They suggest a couple of options to reduce the number of unclassified faults. In bus simplification, they assert that if a fault in one bit is detected, then all bits get the same classification. They make a similar assertion for duplicated instances of a module: if all faults within one instance are successfully classified, then the corresponding faults in the other instances are also deemed classified. Finally, they set an empirical threshold for the number of stimuli against which they test, a level at which they feel they tried "hard enough". Arbitrary, yes, but I don't know how I would do any better.
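
Here is a minimal sketch of those two propagation heuristics, assuming a simple naming convention for instances and bus bits; the fault names and the core0/core1 twin mapping are invented for illustration:

```python
# Sketch of the two propagation heuristics. Fault names are hypothetical,
# formatted "<instance>/<net>" with bus bits as "<name>[<bit>]"; core1 is
# assumed to be a duplicate of core0 whose faults are fully classified.
import re

classified = {
    "core0/alu/data[3]": "detected",
    "core0/alu/carry":   "detected",
}
unclassified = ["core0/alu/data[0]", "core1/alu/data[5]", "core1/alu/carry"]

def bus_root(net):
    """'data[3]' -> 'data'; non-bus nets are unchanged."""
    return re.sub(r"\[\d+\]$", "", net)

results = dict(classified)
for fault in unclassified:
    inst, _, net = fault.rpartition("/")
    for known, verdict in classified.items():
        kinst, _, knet = known.rpartition("/")
        # bus simplification: one classified bit classifies the whole bus
        same_bus = kinst == inst and bus_root(knet) == bus_root(net)
        # duplicated-instance rule: inherit from the classified twin
        twin = (kinst.replace("core0", "core1") == inst
                and bus_root(knet) == bus_root(net))
        if same_bus or twin:
            results[fault] = verdict
            break

for f in unclassified:
    print(f, "->", results.get(f, "still unclassified"))
```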

Nice paper. You can read it HERE.

Also Read:

The State of IC and ASIC Functional Verification

3DIC Physical Verification, Siemens EDA and TSMC


3DIC Physical Verification, Siemens EDA and TSMC
by Daniel Payne on 02-07-2023 at 10:00 am

At SemiWiki we’ve written four times now about how TSMC is standardizing on a 3DIC physical flow with their approach called 3Dblox, so I watched a presentation from John Ferguson of Siemens EDA to see how their tool flow supports this with the Calibre tools. With a chiplet-based packaging flow there are new physical verification challenges, so the response at Siemens EDA was to develop Calibre 3DSTACK, which supports 3DIC and enables thermal analysis.

2.5DIC Interconnect

Physical checks for DRC ensure that substrate interfaces are correct with respect to alignment, overlaps, scaling and die-to-die spacings. LVS checking determines whether connectivity through the interposer or package RDL is correct, compared to the golden netlist. Even the parasitics formed through the packaging interconnect need to be extracted and analyzed, as they impact signal integrity and timing margins.
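
To give a flavor of what the alignment and spacing checks compute, here is a toy sketch with hypothetical pad coordinates and rule values; a real rule deck encodes far more than this:

```python
# Toy die-to-die interface check with hypothetical pad coordinates (um)
# and assumed rule values; a production deck encodes much more.
import math

top_die_pads    = {"D0": (0.0, 0.0), "D1": (40.0, 0.0), "D2": (80.0, 0.0)}
bottom_die_pads = {"D0": (0.3, 0.1), "D1": (40.2, 0.0), "D2": (81.5, 0.0)}

MAX_MISALIGN = 1.0   # um, max pad-center offset (assumed rule)
MIN_PITCH    = 35.0  # um, min pad-to-pad spacing on a die (assumed rule)

# alignment: each bonded pad pair must line up within tolerance
for name, (x1, y1) in top_die_pads.items():
    x2, y2 = bottom_die_pads[name]
    off = math.hypot(x1 - x2, y1 - y2)
    status = "ok" if off <= MAX_MISALIGN else "ALIGNMENT VIOLATION"
    print(f"{name}: offset {off:.2f} um -> {status}")

# spacing: neighboring pads on the same die must honor minimum pitch
pads = sorted(top_die_pads.values())
for (xa, ya), (xb, yb) in zip(pads, pads[1:]):
    if math.hypot(xa - xb, ya - yb) < MIN_PITCH:
        print(f"spacing violation between ({xa},{ya}) and ({xb},{yb})")
```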

3D DRC and LVS

An early approach to LVS verification for 3DIC was to run it separately for each die-to-die interface, but that is impractical. Instead, the approach used with Calibre 3DSTACK is to check the full assembly, both DRC and LVS, with one deck in one run.

To actually design and plan your 3DIC package assembly there’s another Siemens EDA tool called Xpedition Substrate Integrator (XSI), and that allows you to create the heterogeneous rule file, plus generate the source netlist. 3DIC package design and verification tools are shown below:

XSI and Calibre 3DSTACK

TSMC supplies the Assembly Design Kits (ADK) to support their 3Dblox tool flow, which is like a LEF/DEF flow, but now in three dimensions.

3Dblox package

Physical verification checking using the 3Dblox format is automated in this tool flow with Calibre 3DSTACK, and is independent of which tool creates the 3Dblox data.

3Dblox to Calibre 3DSTACK tool flow

In addition to 3DIC physical verification, there are new reliability issues like thermal, as the chiplets are placed in closer proximity. Temperature increases slow down silicon switching times and shorten semiconductor lifespan, which could lead to a timing or reliability failure. To find out how the package assembly impacts each chiplet, there's another Siemens EDA tool, Simcenter Flotherm, to support the development of a thermal digital twin. With this you get fast analysis early in the planning stages. Analysis results as static or dynamic heat maps are shown at the assembly, die or IP level. You can even get a post-layout netlist with the temperature coefficients of each device, which is used for signal integrity and timing analysis.
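
As a back-of-envelope illustration of the thermal-to-timing link: a 1-D thermal stack gives T_junction = T_ambient + P × ΣRθ, and gate delay grows roughly linearly with temperature. The resistances and the delay coefficient below are assumed values for illustration, not Flotherm data:

```python
# 1-D thermal stack and first-order timing derate. The thermal
# resistances and delay coefficient are assumed illustrative values,
# not Flotherm output.

ambient_C = 45.0
power_W = 6.0
r_theta = [0.8, 0.5, 0.3, 1.4]   # die, TIM, lid, heatsink (C/W)

tj = ambient_C + power_W * sum(r_theta)   # T_j = T_amb + P * sum(R_theta)
print(f"junction temperature ~= {tj:.1f} C")

# Gate delay grows roughly linearly with temperature; 0.1%/C assumed.
delay_ps_25C = 100.0
k_per_C = 0.001
delay = delay_ps_25C * (1 + k_per_C * (tj - 25.0))
print(f"derated delay ~= {delay:.1f} ps")
```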

Simcenter Flotherm flow

Starting from a 3Dblox file, this thermal flow uses the 3DSTACK syntax, creating individual chip power maps across the assembly. Engineers can view waveforms or animated heat maps of the temperatures, or power can be shown at the chip or assembly level. Constraints can be specified, and then during thermal simulation any warnings or failures are noted.

Calibre and 3Dblox thermal flow

Adding thermal capabilities to support 3DIC packaging at Siemens EDA required close collaboration with TSMC.

Summary

The market excitement around 3DIC design also brings new technology challenges, like how to perform physical verification with DRC and LVS in the most efficient manner. TSMC has standardized the physical stacking and logic connectivity information in one format, calling it 3Dblox. Siemens EDA with Calibre 3DSTACK fully supports the 3Dblox format in their DRC and LVS flows. Designing and planning 3D package assemblies is done with XSI, and thermal analysis also uses the 3Dblox format, allowing products to be designed to meet reliability goals.

The EDA, foundry and IP communities have collaborated to face the new 3DIC design and verification challenges, allowing our economy to enjoy a steady stream of new products that are now reaching 100 billion transistors. The future of 3DIC is bright indeed.

Related Blogs


Advances in Physical Verification and Thermal Modeling of 3DICs
by Peter Bennet on 02-07-2023 at 6:00 am

Fig 1 3DIC

If, like me, you’ve been paying too little attention to historically less glamorous areas of chip design like packaging, you’ll wake up one day and realize just how much things have changed and continue to advance and how interesting it’s become.

One of the main drivers here is the increasing use of chiplets to counter the decreasing – indeed vanishing – cost gains from the latest process shrinks, by allowing finer-grain mapping of large sub-system blocks to their optimal process technology and optimising block reuse and design resources.

This is the sort of package scenario we’re dealing with (let’s call this an assembly of components).

The expanding world of 2.5D and 3D packaging falls between monolithic chip and PCB design, so both EDA and system-level tools must be brought together to automate the process: tasks like properly automating intra-package connectivity, checking vertical plane connections and more precise thermal modeling.

As with almost everything else in EDA these days, that means ever closer cooperation between EDA tool vendors, manufacturing and designers.

Siemens and TSMC's joint work on 3DIC – developing a unified design ecosystem around the TSMC 3Dblox standard, based on Siemens' Calibre 3DSTACK and companion tools – is a good example. John Ferguson's presentation at TSMC OIP last October covered the advances here in both logical and physical verification and thermal analysis. Let's take a closer look.

Closing the gaps in 3D Physical Verification

There are some obvious challenges here with 3DIC structures.

  • Processes may share layer names, whilst having different characteristics
  • Pin and pad names on components may be equivalent, but use different names
  • Tools need to create a combined PV deck, netlist and physical DB to verify the entire assembly and still maintain the correct rules for individual components
  • Potentially different input file formats for the components.

Packaging with heterogeneous process die creates new challenges for physical verification (PV), mainly in preparing a complete and accurate DB. Calibre 3DSTACK (see diagram below) already handled much of this PV prep – tasks like compiling the assembly physical DB with a single PV deck and computing the correct coupling between stacked die.

Adding Siemens' Xpedition Substrate Integrator (XSI) planning tool closes the remaining gaps of describing the required components and connectivity (analogous to a spec or custom schematic), creating a merged netlist and managing the design DB; even automating the Calibre 3DSTACK verification.

One thing remains – finding a way to create adequate "library models" and "design rules" for the components. TSMC's new 3Dblox approach does this with Assembly Design Kits (ADKs) to describe the connectivity, process and assembly characteristics and design rules for each component.

Putting this all together, we get a flow where we can prepare, run and debug the full assembly PV.
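
Conceptually, the netlist-merge step boils down to uniquifying each per-die netlist and collapsing bonded die-to-die nets into single nodes. A toy sketch under those assumptions (die names, nets and the data model are invented; XSI and Calibre 3DSTACK obviously do far more):

```python
# Hypothetical sketch of the netlist-merge step: per-die netlists are
# uniquified with an instance prefix and stitched together through the
# assembly's die-to-die connection map.

die_netlists = {
    "cpu_die": {"clk": ["u_core.clk"], "d2d_tx0": ["u_phy.tx0"]},
    "mem_die": {"clk": ["u_ctl.clk"],  "d2d_rx0": ["u_phy.rx0"]},
}
# assembly-level connectivity: (die, net) pairs that are the same node
bonds = [(("cpu_die", "d2d_tx0"), ("mem_die", "d2d_rx0"))]

merged = {}
for die, nets in die_netlists.items():
    for net, pins in nets.items():
        merged[f"{die}.{net}"] = [f"{die}.{p}" for p in pins]

for (da, na), (db, nb) in bonds:
    a, b = f"{da}.{na}", f"{db}.{nb}"
    merged[a].extend(merged.pop(b))   # collapse bonded nets into one node

for net, pins in merged.items():
    print(net, "->", pins)
```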

Thermal Analysis

3D packaging also creates greater thermal challenges including:

  • greater interaction between die
  • tougher heat dissipation challenge – greater power density due to 3D stacking
  • modeling vertical thermal gradients becomes necessary
  • modeling heatsink interaction for 3D die

Transistor performance strongly depends on temperature, so such thermal effects cannot be ignored. And these aren’t just signoff checks – we need good thermal and power modeling very early in the design and integrated into the ASIC design flow, since late changes here will create major rework.
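
A quick worked example of the power density and vertical gradient points from the list above, with assumed numbers:

```python
# Worked example with assumed numbers: two 5 W dies, each 1 cm^2,
# side by side vs stacked on the same footprint.
die_power_W, die_area_cm2 = 5.0, 1.0
side_by_side = die_power_W / die_area_cm2        # 5 W/cm^2
stacked = 2 * die_power_W / die_area_cm2         # 10 W/cm^2
print(f"power density: {side_by_side:.0f} vs {stacked:.0f} W/cm^2")

# Vertical gradient across the lower die via 1-D Fourier conduction,
# dT = q'' * t / k, with silicon k ~ 1.5 W/(cm*K) and t = 100 um.
t_cm, k_si = 0.01, 1.5
dT = stacked * t_cm / k_si
print(f"vertical dT across 100 um of Si ~= {dT * 1000:.0f} mK per die")
```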

With physical verification, the challenge was more one of verifying the top-level and component interfaces. Here it’s more about understanding the impact of the overall system on the components – and then how that feeds back into the top-level system.

A further collaboration with TSMC extended a flow built around the existing Siemens Calibre 3DSTACK and Simcenter Flotherm tools, reusing much of the infrastructure from the PV flow.

Analysis, including static and dynamic heat maps, can be carried out at assembly, die or IP level and power analysis run using mPower. Device temperature coefficients can be extracted for more precise signal and timing analysis.

Summary

Siemens and TSMC have put together a design methodology and flow to support current and future 3DICs based on proven tools (Calibre 3DSTACK and Simcenter Flotherm), with particular attention on simplifying configuration and modeling (3Dblox) and early design use. It's something that should continue to scale as increasingly sophisticated 3D packaging technology arrives.

It’s also noteworthy that Siemens won a TSMC OIP Partner of the Year award for this work.

Further Information

The TSMC OIP presentation (“TSMC 3Dblox™ simplifies Calibre verification and analysis”) is available until May for readers with the original event registration link and code provided by TSMC.

Find out more about Calibre physical verification and 3DSTACK here:

https://eda.sw.siemens.com/en-US/ic/calibre-design/physical-verification/

https://eda.sw.siemens.com/en-US/ic/calibre-design/physical-verification/3DSTACK/

A white paper “Taking 2.5D/3DIC physical verification to the next level” is also available.

For Siemens Flotherm thermal analysis check here:

https://www.plm.automation.siemens.com/global/en/products/simcenter/flotherm.html

Also Read:

Achieving Faster Design Verification Closure

Siemens Aspires to AI in PCB Design

Building better design flows with tool Open APIs – Calibre RealTime integration shows the way forward


Privacy? What Privacy?
by Roger C. Lanctot on 02-06-2023 at 10:00 am

Automotive Privacy and Security

It seems as if every day brings news of yet another company that is using artificial intelligence to leverage smartphone data for “non-invasive” analytics of human movement. Our smartphones and smartwatches and fitbits can detect whatever activity we are doing, how well or poorly we are doing it, how it is affecting our mood, and whether we are drowsy, drunk, or suffering some undiagnosed infirmity.

Television commercials from Apple and Google tout the benefits of this non-invasive invasion of privacy – not unlike the privacy annihilation of Google Search. It reminds me of Fullpower founder (and reputed smartphone inventor) Philippe Kahn’s description of a human being’s movements being the equivalent of a signature or fingerprint. Kahn has leveraged biosensing analytics associated with smartbeds to diagnose ailments and predict menstrual cycles.

The implications of this work can as easily apply to automobiles. At the Drive TLV startup event on the Georgia Tech campus in Atlanta this week, a ConActions executive described how the company leverages steering wheel data to assess a driver’s cognitive state – fatigue, inattention, anxiety, or inebriation.

ConActions' work is being subjected to validation and testing by Volkswagen and Hyundai, among others. The implications are considerable. If the mere act of steering a car can reveal such a wide range of conditions, what further conclusions can be drawn from a deeper dive into the full suite of sensors in a typical vehicle?

Of course, regulators are increasingly requiring driver monitoring systems – literally driver-facing cameras – and passenger detection systems including infrared and radar sensors inside vehicles. We all let our guards down for Google to peer into some of our deepest (darkest?) thoughts, yet no one stops to consider what a privacy-defeating operating environment the average car represents.

This is one of many reasons why vehicle data protection and consent management are so essential for the future of connected cars. Vehicle data companies such as Aiden, for one, have gone out of their way to introduce in-vehicle consent agreements before sharing data.

One industry that is drooling at the prospect of turning small bits of vehicle data into huge chunks of corporate value is insurance. Just as smartphone makers are happy to tout their analytical chops, insurance company experts are pressing hard to integrate smartphones and car connections into the insurance underwriting process.

More than 10 years after Progressive Insurance introduced its Pay As You Drive program around vehicle tracking technology, the insurance industry continues to press the car connectivity button with a combination of smartphone apps (to track driving as well as to file claims) and so-called OBD-II plug-in devices and other aftermarket add-ons. Insurance industry visionaries routinely highlight the merits of connected car data for enhancing insurance underwriting and fundamentally rewiring the industry.

The enthusiasm for connected car-based insurance is driven by the reality that insurers are typically forced to rely on historical driving records and credit reports to build their current underwriting models – especially after regulators have denied their ability to use location or gender metrics that might impact protected classes of consumers. Devices such as Progressive’s Snapshot that focus on the amount of driving, time of day, and harsh braking or acceleration have been found to be effective tools to lower the cost of customer acquisition and extend the period of customer retention and lifetime customer value – i.e. reduce customer churn.

All of the reasons that make connected car-insurance so attractive to the underwriters are precisely the things that ought to give customers anxiety and agita. Does anyone really trust an insurance company to properly handle and protect customer data? Really? Count me out.

This is why I find the value proposition posed by local Atlanta startup Mile Auto so attractive. I visited with founder Fred Blumer while I was in town for the Drive TLV event.

Just a few years old, but already being offered directly to consumers by OEMs such as Porsche, Honda, and Hyundai, Mile Auto is a mileage-focused car insurance product perfectly suited to low mileage drivers. The application requires “no hardware and no app,” according to its founder, and currently has 30 full-time employees.

Mile Auto is neither the first nor the only auto insurance startup to attempt to leverage mileage-focused underwriting. MetroMile was pursuing this path prior to its acquisition by Lemonade last year. Root is another. Both Root and MetroMile consider driver behavior as well. Both companies struggled to build value with MetroMile launching a SPAC at a billion-dollar valuation that ultimately spiraled downward. Root took the IPO route, with a similar poor outcome.

Mile Auto bases its underwriting on the user literally taking a picture of their car’s odometer every month. It doesn’t get much simpler and less invasive than that. The result has been – with limited advertising or promotion – 15,000 insured customer vehicles across 11 states with exceptionally low customer acquisition costs and superior lifetime value metrics.

I personally never understood the appeal of connected car insurance – especially after my experience with State Farm Drive Safe and Save many years ago. While State Farm insisted on its customer portal that it was saving me money, it was painfully clear to me at the time that it was not. Today, my wife and I and our three sons are all on Geico – and no one is being digitally monitored by their insurance company.

Mile Auto can’t save you from the privacy violation zone represented by your smartphone, your car, or your search, but it can keep your insurance company at bay. I think it’s a great idea – if you are a low mileage driver. Post-COVID, aren’t we ALL low mileage drivers?

Also Read:

ATSC 3.0: Sleeper Hit of CES 2023

ZoneCast Blast Misses Its Mark

To EV or NOT to EV?


KLAC- Weak Guide-2023 will “drift down”-Not just memory weak, China & logic too
by Robert Maire on 02-06-2023 at 6:00 am

KLA-Tencor SemiWiki

-Business will “drift down” over the course of 2023
-Not just memory is weak- China issue, foundry/logic slowing
-March guide worse than expected (Like Lam)
-Backlog likely saw push outs & cancelations but still long

Good quarter but weak guide

Much as we saw with Lam, KLA reported a beat on the December quarter but a weaker than expected guide on the March quarter as the industry is falling faster than most believe. Revenue came in at $3B with EPS of $7.38 versus street of $2.82B and EPS of $7.10. Guidance was for revenues of $2.35B +/- $150M and EPS of $5.22 +/- $0.70 versus street of $2.55B and $5.89, similar to the miss in guide that Lam also reported.

2023 will be H1 weighted

Management said that business would likely drift down through the year. Sounds like projects may have been pushed out and backlog will get reduced as we go through the year. It obviously takes some time for the high rate of spend to slow.

Backlog still high but will drop

Management said that 45-50% of backlog was over 12 months in length. How stable the backlog is may be open to question as we will see pushes and pulls, with more pushes than pulls, as schedules get adjusted. While KLA's backlog is second only to ASML's, they are more vulnerable to push outs and delays or cancellations of quicker-turn products.

It's not just memory issues but China and foundry/logic as well

China business looks to be cut nearly in half from levels prior to the embargo. While Lam may be the poster child for weak memory, KLA may be more impacted by the China embargo. The weak quarter out of Intel reminds us that foundry/logic is also weak, though perhaps not down as sharply as memory or China. This "triple whammy" of memory, China & foundry/logic is obviously impacting all semi equipment makers, with different segments impacting each participant differently.

Length and depth of down cycle a great unknown

Management did not want to comment on the length or depth of the downturn other than to say that KLA should do better than most of their peers (with the obvious exception of ASML).

As we said a while ago, we think 2023 is looking a lot like a write-off with no real recovery until 2024 at the earliest. While KLA tends to have a strong backlog, it could run out if the downturn lasts too long, and then results will be a bit less predictable.

Welcome to reality

We think many investors do not believe the industry is in as bad a downturn as actually exists. Many ignored the dire guidance from Lam coupled with the very serious actions taken by the company to reduce costs which obviously wouldn’t have been taken unless there was a concern of a prolonged downturn.

While we didn't hear specifically about layoffs from KLA, we are sure that they are selectively cutting expenses as they said they would "stabilize" spending, which means reduce. This sounds like things, although not great, are not as bad as they are at Lam.

The stocks

With KLAC adding to Lam's view that 2023 will not only be down but will be H1 weighted, with weakness continuing to increase through the year, we are hardly motivated to buy any equipment stock; it would be akin to catching a falling knife. We may see some false rallies as people think the worst is over, or that Q1 is the bottom, or hold other falsely optimistic views, but we are in the midst of a good, old fashioned down cycle of the sort we haven't seen in quite a while and some less experienced investors and analysts have never seen.

The long term secular trends remain as positive as ever, but when they will return is anyone's guess. In the meantime we could see this bouncing drift along the bottom of the cycle. Because this down cycle is not caused by a singular event, it is likely that we will need at least two of the three factors to improve before we see things move upward again.

KLA remains a nice house in a declining neighborhood but that doesn’t make us feel comfortable.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Hynix historic loss confirms memory meltdown-getting worse – AMD a bright spot

Samsung- full capex speed ahead, damn the downturn- Has Micron in its crosshairs

Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges


Podcast EP142: The Drive Toward a More Sustainable Semiconductor Industry with EMD Electronic’s Anand Nambiar
by Daniel Nenni on 02-03-2023 at 10:00 am

Dan is joined by Anand Nambiar, Executive Vice President and Global Head of Semiconductor Materials at EMD Electronics, the North American Electronics business of Merck KGaA, Darmstadt, Germany. Anand has over 23 years' experience in the semiconductor industry. His previous roles include Associate Director – Quality at Nikon Inc., Vice President of Operations at Cascade Microtech, Operations Director at AZ Electronic Materials, and Managing Director of the Optronics Division at AZ Electronic Materials up to its acquisition by EMD Electronics. He has headed the semiconductor materials business for the past four years, overseeing its high-profile acquisition of Versum Materials. Anand has also led EMD Electronics' Biopharma and Consumer Health business in India.

Dan explores with Anand the far-reaching impact EMD Electronics is having on the semiconductor industry. Current programs and their impact on power and greenhouse gas emissions are discussed, as well as new initiatives, with an eye toward building a cleaner and more sustainable semiconductor industry.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Hynix historic loss confirms memory meltdown-getting worse – AMD a bright spot
by Robert Maire on 02-03-2023 at 8:00 am

Memory Meltdown

-Hynix reports worst downturn in 10yrs – Already in red ink
-If the #2 memory maker is already negative what does it say?
-Confirms our view of 2023 write off- maybe 2024 better?
-Micron Mangled? & Toshiba Toast?- Buyers advantage

Hynix posts record $1.4B loss- worst in 10 years

Not all that surprisingly, Hynix reported a loss-making quarter. For the second largest memory chip maker after Samsung, the drop was rather rapid. The company also said the current memory situation is getting worse in the first quarter.

Hopes are for a 2024 recovery. Right now there is no firm evidence to point to, other than the typical refrain from analysts in prior cycles saying that things will be better in 6 months (just wishful thinking at this point). Hynix has obviously cut capex and output.

Memory makers can slow but not stop output

Unlike OPEC and oil wells, you just don't hit the stop button at a fab. The vast majority of the cost of making any semiconductors is the depreciation of the equipment and bricks and mortar. The variable costs of consumables & labor are relatively small. This means that once a fab is built and complete you tend to run it at maximum capacity for the rest of its life, as the marginal cost is low.

That marginal cost is also quite low as compared to the fully loaded cost, so memory makers can get pushed to very low, loss-making levels before they would ever consider stopping production. They can, however, slow production by a small amount, but all that means is that they will likely lose share to other memory makers who don't slow.
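
A toy model makes the point: with fixed depreciation dominating, the fully loaded cost per wafer climbs steeply as utilization falls, while the marginal cost stays flat, which is why fabs keep running. All numbers below are invented:

```python
# Toy fab cost model, all numbers invented: fixed depreciation dominates,
# so fully loaded cost per wafer climbs as utilization falls while the
# marginal (variable) cost stays flat.

fixed_cost_per_qtr = 900e6        # depreciation + bricks and mortar, $
variable_cost_per_wafer = 800     # consumables + labor, $
capacity_wafers = 300_000         # wafer starts per quarter

for utilization in (1.0, 0.8, 0.5):
    wafers = capacity_wafers * utilization
    fully_loaded = fixed_cost_per_qtr / wafers + variable_cost_per_wafer
    print(f"util {utilization:>4.0%}: marginal ${variable_cost_per_wafer}, "
          f"fully loaded ${fully_loaded:,.0f} per wafer")
```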

Samsung can push pricing to the edge

All this suggests that dominant players, like Samsung, can push memory pricing down to a point that they can still tolerate (not happily) but which is below the level at which other competitors are profitable, thereby choking off the oxygen in the room. Obviously Hynix, as the number two memory maker, and Micron, as a more distant fourth or fifth, can get quickly pushed into the red.

This situation can persist for a long time if demand doesn’t recover.
As we have previously explained capacity can still increase without large capital expenditures due to technology advancement.

What is really needed to end the memory meltdown is for demand to increase…..the industry will never be able to cut production enough to get supply and demand back into balance….it will just not happen.

This suggests that the current memory issue is more of a macroeconomic demand recovery issue than an oversupply issue….meaning that the resolution is not in the hands of the memory makers. The best they can do is take advantage of the situation, which Samsung is doing.

A memory buyers market

It has been a memory sellers market for a very long time as makers have set pricing.

The tables have now turned

We have heard, from several different sources, that large buyers of memory are dictating terms and making deals at attractive pricing and terms. Memory makers desperate for buyers are willing to cut deals for large orders of memory at fixed terms to try to hold onto market share or gain share from others.

This implies that we may see some market share shifts created by the downturn, as some makers will make deals and others will not.

AMD a minor bright spot in a dark industry

AMD posted better than expected results as they continue to do well and gain share. This stands in obvious contrast to Intel not that long ago. While this is good for AMD, it is really just further proof of the importance of TSMC, which produces the chips that are successful.

It says that TSMC continues to do a great job of execution as it completely dominates the industry.

This is not to say that AMD has nothing to do with its own success, but just that TSMC remains "the man behind the curtain" for most successful tech companies such as AMD, Apple, Nvidia, etc.

Micron, Toshiba & Hynix need further cuts

While memory makers may not be able to avoid red ink in the current memory meltdown, they need to reduce the hemorrhaging as much as possible to extend the runway beyond the length of a long downturn.

This likely means more layoffs, more capex cuts, and project cancelations.

It is also important for companies to have gas left in the tank for when the industry does finally recover so they can participate and not be left permanently wounded or dead. Even Yangtze memory in China has reported a 10% headcount reduction.

The stocks

Obviously Hynix is just more proof of what we already knew since June. The main difference is the underscoring of exactly how long and deep the memory downturn will be. We haven't seen significant red ink in a very long time, in many cases longer ago than many investors can remember or have experience with, so many may be in uncharted territory. We continue to warn investors that this will not be a "snap back" type of short-lived downcycle as we have seen in brief respites of an otherwise bull tech market.

Companies that continue to ride above the fray include ASML and TSMC, although they are not cheap, for a reason. We also warn investors that we have yet to know the bottom, at least in the memory market, and it is wrong to assume it's in the next quarter or two.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory


Alphawave Semi at the Chiplet Summit
by Daniel Nenni on 02-03-2023 at 6:00 am

Alphawave Semi Chiplet Summit

The first annual Chiplet Summit was held last week in San Jose and I must say it exceeded my expectations, but I have some advice for the participating speakers and sponsoring companies. A good portion of the content was on WHY chiplets and not HOW. I think we have progressed past this point, and if we keep dwelling on it we will delay the HOW, which is critical in moving a new technology forward.

Otherwise I was very impressed and will attend again next year, absolutely.

In regards to content, I would like to call out a company that I admire: Alphawave Semi. Not only were they a gold sponsor, Alphawave presented some of the best content, and not only can they explain HOW chiplets work, they can actually implement chiplets for you in the form of a completed ASIC.

Even though the event has passed you can speak directly to the Alphawave people on the topics they covered. We will be writing more about it as well once we go through the materials they have provided.

Alphawave Semi will be a Gold Sponsor at the inaugural Chiplet Summit 2023 located in San Jose, CA on January 24-26, 2023! Catch us at our booth to learn more about our industry leading D2D (Die-to-Die) IP along with our custom silicon expertise integrated into a foundation for prebuilt connectivity chiplets delivering connectivity at a higher bandwidth and lower power than traditional infrastructure solutions.

Chiplet experts from Alphawave Semi will also be participating at Chiplet Summit on panels covering: high-speed on-chip interfaces that achieve high performance while avoiding high latency; considerations of cost, chip area, throughput, and support that are key in making an interface flexible, comprehensive, and easy to integrate for chiplet interoperability; and how to create a business-friendly structure for chiplet development for a viable marketplace.

Alphawave Semi is a contributing member of the Universal Chiplet Interface Express (UCIe) group and will be discussing the benefits UCIe brings to the ecosystem and market in panel discussions.

Alphawave Semi's AresCORE16 D2D Connectivity IP is a market-leading, extremely low-power, low-latency interface IP designed by Alphawave Semi for very high bandwidth connections between two dies on the same package, and is just one of the ways Alphawave is accelerating the critical data infrastructure at the heart of our digital world.

Panel: Chiplet Interfaces

Letizia Giuliano
Tuesday, January 24th | 08:30-Noon

High-speed on-chip interfaces are the key to making the chiplet idea work. High data rates are essential to achieve high performance and avoid high latency. The interfaces also must consume little chip area to avoid reducing the total level of integration, and they must add little to power or thermal budgets. Example buses such as Bunch-of-Wires (BoW) and Universal Chiplet Interface Express (UCIe) are already available. Designers must consider cost, chip area, throughput, and support when deciding which one to use for their specific applications. The interface must be flexible, comprehensive, and easy to integrate with a wide variety of chiplets.

Best Packaging for Chiplets Today

Daniel Lambalot
Thursday, January 26th | 9:00 – 10:00 AM

Packaging is one of the most difficult areas for chiplet designers. Packages must be capable of handling power and heat dissipation, be reasonably priced and small, and be rugged enough for standard applications. Issues of concern include who selects the package and how, which packages are best-suited to chiplet-based designs, what breakthroughs we can expect in packaging over the next few years, and what are the best tradeoffs among size, performance, features, and cost for the many types of packages available today.

Tutorial: Chiplet Interfaces

Letizia Giuliano
Thursday, January 26th | 9:00-10:00 AM

The interface connecting chiplets is critical to chiplet-based design. It must be extremely fast, highly reliable, and very flexible. It must also be low-power and take little chip area. There are two major contenders: Universal Chiplet Interface Express (UCIe) from the UCIe Consortium and Bunch-of-Wires (BoW) from the Open Compute Project Foundation. Designers must determine which fits best in their applications, and which is most likely to develop a large support ecosystem.

How To Make Chiplets A Viable Market

Clint Walker
Thursday, January 26th | 2:00-3:30 PM

Many articles have discussed how chiplet-based design could become a drop-in business in which designers select the chiplets they want from a marketplace. Obviously, such a concept depends on a viable market in which chiplet designers could make a reasonable return on their investment. Clearly there would have to be standards for chiplets so chip designers would know what they're getting and how it would integrate into their devices. The chiplet would need to have a specification sheet that lists its connections and its characteristics in a specific manner. The chiplet would also have to pass both security and interoperability tests. Clearly such a marketplace will take time to develop and will require an organization to oversee it.

About Alphawave Semi

Alphawave Semi is a global leader in high-speed connectivity for the world’s technology infrastructure. Faced with the exponential growth of data, Alphawave Semi’s technology services a critical need: enabling data to travel faster, more reliably and with higher performance at lower power. We are a vertically integrated semiconductor company, and our IP, custom silicon, and connectivity products are deployed by global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles, and storage. Founded in 2017 by an expert technical team with a proven track record in licensing semiconductor IP, our mission is to accelerate the critical data infrastructure at the heart of our digital world. To find out more about Alphawave Semi, visit: awavesemi.com

Also Read:

Alphawave IP is now Alphawave Semi for a very good reason!

High-End Interconnect IP Forecast 2022 to 2026

Integration Methodology of High-End SerDes IP into FPGAs

Die-to-Die IP enabling the path to the future of Chiplets Ecosystem


Samsung- full capex speed ahead, damn the downturn- Has Micron in its crosshairs
by Robert Maire on 02-02-2023 at 2:00 pm

Samsung Electronics

-Samsung said it's not reducing its capex despite the downturn
-A clear indication they want to take share/kill Micron & others
-Is the US government subsidizing predatory chip behavior?
-The last US memory chip maker is clearly threatened

Samsung announces worst results in 8 years

Samsung released its earnings which were the worst in eight years. But the news was not how bad the earnings were because we already knew the chip industry and specifically memory is in a sharp downturn.

The real news is confirmation of previous statements that Samsung is not slowing its record capital spending of $39B despite the fact that the industry is flooded with oversupply.

This is akin to OPEC drilling new wells when the price of oil is plummeting. OPEC is clearly smart enough to know that when you are already in a hole, you stop digging.

That is unless you want to take advantage of the situation and be a predator.

We have seen this movie several times before

We mentioned several newsletters ago that we have been in the chip business long enough to remember when the US had 7 memory manufacturers, including notably Intel and IBM. It is also notable that we have lost memory manufacturers usually at the bottom of a memory cycle, when the weaker players can't cut it and collapse.

It takes nerves of steel and an aggressive attitude to be in the memory business. We pointed out many months ago that Micron bailed out of the game of "chicken" with Samsung, as Samsung has kept the pedal to the metal of its 18-wheeler of memory manufacturing versus Micron's pickup truck.

The fact that Samsung is not slowing even though Micron caved in a long time ago can only mean that Samsung is out for blood and market share….that's the only rational answer….

Samsung can read balance sheets

If you read Micron's balance sheet, they are in a net debt position going into a downturn with prices and profitability collapsing. What better time for Samsung to press its advantage than when a competitor is financially weak in an industry that requires rivers of cash (which Samsung still has).

This same movie has played out in prior cycles as larger memory makers drive out weaker competitors who can’t keep up. We don’t know what Micron’s access to cash will look like if we have a prolonged downturn which it seems we are about to see given that Samsung may not cooperate and slow down.

Is the US government subsidizing predatory behavior with the CHIPS Act?

Samsung is planning new fabs in the US and will likely get CHIPS Act money because of it. They have been promised both federal and local money for new fabs in Texas. Given that the CHIPS Act has a limited lifetime, Samsung might as well get government money while the getting is good.

The government is clearly incentivizing those with money in the semiconductor industry to spend it as you don’t get CHIPS Act money unless you ante up your money first and Samsung is one of the few with money to spend as Micron is under water and Intel just reported a very bad quarter.

So in effect, what we have is the US government subsidizing and incentivizing Samsung to spend money in a downturn, to the detriment of US-based competitors such as Micron who don't have the money to spend in order to get CHIPS Act subsidies.

The US government is helping Samsung run the last remaining US memory maker out of business. Sounds like the exact opposite of what the CHIPS Act was supposed to do.

Even though Samsung is a friendly and the fabs are being built in Texas, it would still be nice to have a US domiciled memory maker left. Certainly the same goes for Samsung’s foundry business and Intel. Subsidizing Samsung when Intel is in a world of pain and resorting to accounting tricks to shore up its balance sheet is not a great idea.

A long downturn could get even longer and deeper

We have been talking about an unusually deep and long downturn, and we have been criticized for that view. We are now in the typical rosy analyst view where "the recovery is coming in 6 months". We have heard this before, only to have the can kicked down the road again by another 6 months. The view of an H2 recovery seems widely held because of this fallacy.

There is no firm evidence that supports an H2 recovery other than hope and a prayer. Samsung's capex behavior reinforces the view of a longer and deeper downturn unless we see them slow down. A longer, deeper downturn, long enough to mortally wound competitors, may be what Samsung really wants.

Yangtze memory in China is a survivor and beneficiary

China has taken the memory market by storm and already garnered significant share. One thing that is an absolute certainty is that the Chinese government will do anything and everything to ensure their success and growth. China would very easily subsidize Yangtze no matter how long and deep the downturn in memory, thus ensuring its survival. The government has Yangtze's back.

This suggests that Samsung is doing Yangtze a favor by going after the competition as Yangtze will also be able to take share when the dust clears.

It's not just Micron but a threat to Toshiba

With Toshiba looking to be sold off and broken up, their appetite for capex in the face of a weak industry is also near zero. Even though they have cash where Micron doesn't, they are in a similar leaky boat along with Micron. Japan has long been a strong supplier of memory but could potentially lose another player here.

The stocks

This is obviously pretty bad for Micron. It is an existential threat. At the very least it's very damaging and severely crimps their plans and future. We already knew that Micron cut its capex to the bone, so it's not much of a further loss to equipment makers.

It could be a positive for Lam if Samsung is serious about continuing to spend on capex (unless this turns out to be a big bluff), as Samsung is their biggest customer. It doesn't help Toshiba's valuation, nor Hynix and others. Irrational behavior in a closely balanced commodity market is always bad for all involved.

We would hope that someone in the US government has the sense to pick up the phone and call Samsung about their clearly predatory behavior against the US chip industry. We saw that Korea was noticeably absent from the triumvirate of the US, Japan and Netherlands against China even though Korea does make semiconductor equipment and is supposedly a partner with the US.

Maybe Korea/Samsung wants it both ways……..

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken

Samsung Versus TSMC Update 2022

A Memorable Samsung Event


Trends and Challenges in Quantum Computing
by Ahmed Banafa on 02-02-2023 at 10:00 am

Quantum computing is the area of study focused on developing computer technology based on the principles of quantum theory. Tens of billions of dollars of public and private capital are being invested in quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses [1].

A Comparison of Classical and Quantum Computing

Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra. Data must be processed in an exclusive binary state at any point in time, in units we call bits. While the time that each transistor or capacitor needs to be in a 0 or 1 state before switching is now measurable in billionths of a second, there is still a limit as to how quickly these devices can be made to switch state.

As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply. Beyond this, the quantum world takes over. In a quantum computer, a number of elementary particles such as electrons or photons can be used, with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and behavior of these particles form the basis of quantum computing [2]. Classical computers use transistors as the physical building blocks of logic, while quantum computers may use trapped ions, superconducting loops, quantum dots or vacancies in a diamond [1].
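
A minimal way to see the bit/qubit difference in code: an idealized, noise-free qubit state is just a pair of complex amplitudes, and measurement is probabilistic:

```python
# A classical bit is 0 or 1; an idealized qubit is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1, collapsing on measurement.
import math, random

def measure(a, b):
    """Return 0 with probability |a|^2, else 1 (state collapse)."""
    return 0 if random.random() < abs(a) ** 2 else 1

a = b = 1 / math.sqrt(2)   # equal superposition of |0> and |1>

counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print("measurement counts for (|0> + |1>)/sqrt(2):", counts)  # ~50/50
```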

Challenges in Quantum Computing

  • Building scalable and stable quantum hardware: One of the main challenges in quantum computing is building a device that can handle a large number of qubits while maintaining stability and coherence.
  • Dealing with noise and errors in quantum systems: Quantum systems are highly sensitive to noise and errors, which can disrupt computation and lead to inaccurate results.
  • Developing efficient algorithms for quantum computation: As the capabilities of quantum computers are expanding, so is the need for new algorithms that can take advantage of the unique properties of quantum systems.
  • Implementing error correction and error mitigation methods: Error correction and error mitigation are crucial for building a useful quantum computer, but the methods used to accomplish this are still in the early stages of development.
  • Designing and implementing quantum communication and networking: Quantum communication and networking technologies, such as quantum key distribution and quantum teleportation, are still in the early stages of development, and there are many challenges to be overcome before they can be implemented on a large scale.
  • Addressing the lack of skilled professionals: The field of quantum computing is relatively new and there is a shortage of professionals with the necessary skills and knowledge to work with quantum devices and software.
  • Addressing the lack of integration of quantum technology with classical technology: It is still a challenge to seamlessly integrate quantum technology with existing classical technology, making it difficult to use quantum computing for practical applications.
  • Developing robust software and programming languages for quantum computing: There are currently limited software and programming languages that can be used for quantum computing, and these are still in the early stages of development.
  • Addressing the lack of standardization: There is currently a lack of standardization in the field of quantum computing, which makes it difficult to compare different devices and technologies.
  • Addressing the cost-effectiveness of quantum computing: Building and operating a quantum computer is still very expensive, and this is a major barrier to the widespread adoption of quantum computing [3].

Trends in Quantum Computing

  • Increasing qubit count and coherence times in quantum devices: The number of qubits (quantum bits) in a quantum computer is an important metric of its power. As the number of qubits increases, so does the computational power of the device. Coherence times refer to how long qubits can maintain their quantum state before decohering, and longer coherence times enable more complex computations.
  • Development of new quantum algorithms and optimization techniques: As the capabilities of quantum computers expand, so does the development of new algorithms and techniques to take advantage of the unique properties of quantum computing. These include quantum machine learning, quantum error correction, and quantum optimization algorithms.
  • Emergence of quantum-inspired classical algorithms and hardware: Researchers are studying the properties of quantum systems to develop new classical algorithms and hardware that mimic some of the advantages of quantum computing.
  • Growing interest and investment in quantum computing from industry and government: As the potential applications of quantum computing become more apparent, there is growing interest and investment in the field from both industry and government.
  • Increased collaboration and sharing of resources among quantum research institutions and companies: As quantum computing becomes more important, there is an increasing amount of collaboration and sharing of resources among quantum research institutions and companies.
  • The use of quantum machine learning and quantum artificial intelligence: Researchers are exploring the use of quantum computing to develop new machine learning and artificial intelligence algorithms that can take advantage of the unique properties of quantum systems.
  • Rise of Quantum Cloud Services: With increasing qubit counts and coherence times, many companies are now offering quantum cloud services to users, which allows them to access the power of quantum computing without the need to build their own quantum computer.
  • Advancement in Quantum Error Correction: To make a quantum computer practically useful, it is necessary to have quantum error correction techniques to minimize the errors that occur during computation. Many new techniques are being developed to achieve this goal (see the sketch after this list).
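
To give a feel for that last trend, the simplest error-correction idea is the 3-qubit bit-flip repetition code, which a few lines of classical simulation can demonstrate (bit-flip errors only; real quantum codes must also handle phase errors):

```python
# Sketch of the simplest error-correction idea: a 3-qubit bit-flip
# repetition code, simulated classically (bit flips only).
import random

def encode(bit):
    return [bit, bit, bit]            # logical 0 -> 000, logical 1 -> 111

def noisy(codeword, p):
    return [b ^ (random.random() < p) for b in codeword]  # flip each w.p. p

def decode(codeword):
    return int(sum(codeword) >= 2)    # majority vote corrects one flip

p, trials = 0.05, 100_000
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
print(f"uncoded error rate ~ {raw_errors / trials:.3%}")         # ~5%
print(f"3-qubit-code error rate ~ {coded_errors / trials:.3%}")  # ~3p^2 ~ 0.7%
```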

The Future?

In the near future, it is likely that quantum computing will continue to be developed for specific applications such as optimization, machine learning and cryptography. Researchers are also working on developing more stable and reliable qubits, which are the building blocks of quantum computers. As the technology matures and becomes more accessible, it is expected to be increasingly used in industries such as finance and healthcare, where it can be used to analyze large amounts of data and make more accurate predictions.

In the long term, quantum computing has the potential to revolutionize many industries and change the way we live and work. However, it is still a relatively new technology, and much research and development is needed before it can be fully realized [3].

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

1. https://www.linkedin.com/pulse/quantum-technology-ecosystem-explained-steve-blank/?

2. https://www.bbvaopenmind.com/en/technology/digital-world/quantum-computing-and-ai/

3. ChatGPT

Also Read:

10 Impactful Technologies in 2023 and Beyond

9 Trends of IoT in 2023

9 Trends Will Dominate Blockchain Technology In 2023