
Samsung- full capex speed ahead, damn the downturn- Has Micron in its crosshairs

by Robert Maire on 02-02-2023 at 2:00 pm


-Samsung said it's not reducing its capex despite the downturn
-A clear indication they want to take share/kill Micron & others
-Is the US government subsidizing predatory chip behavior?
-The last US memory chip maker is clearly threatened

Samsung announces worst results in 8 years

Samsung released its earnings, which were the worst in eight years. But the news was not how bad the earnings were; we already knew the chip industry, and memory specifically, is in a sharp downturn.

The real news is confirmation of previous statements that Samsung is not slowing its record capital spending of $39B, despite the fact that the industry is flooded with oversupply.

This is akin to OPEC drilling new wells when the price of oil is plummeting. OPEC is clearly smart enough to know that when you are already in a hole, you stop digging.

That is unless you want to take advantage of the situation and be a predator.

We have seen this movie several times before

We mentioned several newsletters ago that we have been in the chip business long enough to remember when the US had 7 memory manufacturers, including notably Intel and IBM. It is also notable that we have usually lost memory manufacturers at the bottom of a memory cycle, when the weaker players can't cut it and collapse.

It takes nerves of steel and an aggressive attitude to be in the memory business. We pointed out many months ago that Micron bailed out of the game of "chicken" with Samsung, as Samsung has kept the pedal to the metal of its 18-wheeler of memory manufacturing versus Micron's pickup truck.

The fact that Samsung is not slowing even though Micron caved in a long time ago can only mean that Samsung is out for blood and market share…that's the only rational answer.

Samsung can read balance sheets

If you read Micron's balance sheet, they are in a net debt position going into a downturn, with prices and profitability collapsing. What better time for Samsung to press its advantage than when a competitor is financially weak in an industry that requires rivers of cash (which Samsung still has).

This same movie has played out in prior cycles, as larger memory makers drive out weaker competitors who can't keep up. We don't know what Micron's access to cash will look like if we have a prolonged downturn, which it seems we are about to see, given that Samsung may not cooperate and slow down.

Is the US government subsidizing predatory behavior with the CHIPS Act?

Samsung is planning new fabs in the US and will likely get CHIPS Act money for them. They have been promised both federal and local money for new fabs in Texas. Given that the CHIPS Act has a limited lifetime, Samsung might as well get government money while the getting is good.

The government is clearly incentivizing those with money in the semiconductor industry to spend it, as you don't get CHIPS Act money unless you ante up your own money first. Samsung is one of the few with money to spend, as Micron is under water and Intel just reported a very bad quarter.

So in effect, the US government is subsidizing and incentivizing Samsung to spend money in a downturn, to the detriment of US-based competitors such as Micron, which don't have the money to spend in order to qualify for CHIPS Act subsidies.

The US government is helping Samsung run the last remaining US memory maker out of business. That sounds like the exact opposite of what the CHIPS Act was supposed to do.

Even though Samsung is a friendly and the fabs are being built in Texas, it would still be nice to have a US-domiciled memory maker left. Certainly the same goes for Samsung's foundry business and Intel. Subsidizing Samsung when Intel is in a world of pain and resorting to accounting tricks to shore up its balance sheet is not a great idea.

A long downturn could get even longer and deeper

We have been talking about an unusually deep and long downturn, and we have been criticized for that view. We are now in the typical rosy analyst phase where "the recovery is coming in 6 months." We have heard this before, only to have the can kicked down the road by another 6 months. The view of an H2 recovery seems widely held because of this fallacy.

There is no firm evidence supporting an H2 recovery other than hope and a prayer. Samsung's capex behavior reinforces the view of a longer and deeper downturn unless we see them slow down. A longer, deeper downturn, long enough to mortally wound competitors, may be what Samsung really wants.

Yangtze Memory in China is a survivor and beneficiary

China has taken the memory market by storm and already garnered significant share. One thing that is an absolute certainty is that the Chinese government will do anything and everything to ensure their success and growth. China would very easily subsidize Yangtze no matter how long and deep the downturn in memory, thus ensuring its survival. The government has Yangtze's back.

This suggests that Samsung is doing Yangtze a favor by going after the competition as Yangtze will also be able to take share when the dust clears.

It's not just Micron but also a threat to Toshiba

With Toshiba looking to be sold off and broken up, its appetite for capex in the face of a weak industry is also near zero. Even though Toshiba has cash where Micron doesn't, they are in a similar leaky boat. Japan has long been a strong supplier of memory but could potentially lose another player here.

The stocks

This is obviously pretty bad for Micron. It is an existential threat. At the very least it's very damaging and severely crimps their plans and future. We already knew that Micron has cut its capex to the bone, so there is not much further loss to equipment makers.

It could be a positive for Lam if Samsung is serious about continuing to spend on capex (unless this turns out to be a big bluff), as Samsung is their biggest customer. It doesn't help Toshiba's valuation, nor Hynix and others. Irrational behavior in a closely balanced commodity market is always bad for all involved.

We would hope that someone in the US government has the sense to pick up the phone and call Samsung about their clearly predatory behavior against the US chip industry. We saw that Korea was noticeably absent from the triumvirate of the US, Japan and the Netherlands against China, even though Korea does make semiconductor equipment and is supposedly a partner with the US.

Maybe Korea/Samsung wants it both ways…

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken

Samsung Versus TSMC Update 2022

A Memorable Samsung Event


Trends and Challenges in Quantum Computing

by Ahmed Banafa on 02-02-2023 at 10:00 am


Quantum Computing is the area of study focused on developing computer technology based on the principles of quantum theory. Tens of billions of dollars of public and private capital are being invested in quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses [1].

A Comparison of Classical and Quantum Computing

Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra. Data must be processed in an exclusive binary state at any point in time, in units we call bits. While the time each transistor or capacitor needs to be in a 0 or 1 state before switching is now measurable in billionths of a second, there is still a limit to how quickly these devices can be made to switch state.

As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold at which the classical laws of physics apply. Beyond this, the quantum world takes over. In a quantum computer, elemental particles such as electrons or photons can be used, with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and behavior of these particles form the basis of quantum computing [2]. Classical computers use transistors as the physical building blocks of logic, while quantum computers may use trapped ions, superconducting loops, quantum dots or vacancies in a diamond [1].
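
The contrast above can be made concrete with a small sketch. A qubit's state is described by two complex amplitudes, one for 0 and one for 1, and measurement collapses it to a classical bit with probabilities given by the squared magnitudes. The function name below is our own illustration, not part of any quantum library:

```python
import math

# A qubit is a pair of complex amplitudes (alpha, beta) for the |0> and |1>
# states, with |alpha|^2 + |beta|^2 = 1. Measuring it yields a classical
# 0 or 1 with those squared-magnitude probabilities.
def measurement_probabilities(alpha: complex, beta: complex) -> tuple[float, float]:
    norm = abs(alpha) ** 2 + abs(beta) ** 2   # re-normalize defensively
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# A classical bit corresponds to all amplitude on one state: definitely 0.
p0, p1 = measurement_probabilities(1, 0)       # (1.0, 0.0)

# Equal superposition: 50/50 odds of reading 0 or 1 -- something no
# classical bit can represent before measurement.
h = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(h, h)
```

The superposition case is the "0 and/or 1" behavior described above: the qubit carries both amplitudes at once, and only measurement forces a binary answer.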

Challenges in Quantum Computing

  • Building scalable and stable quantum hardware: One of the main challenges in quantum computing is building a device that can handle a large number of qubits while maintaining stability and coherence.
  • Dealing with noise and errors in quantum systems: Quantum systems are highly sensitive to noise and errors, which can disrupt computation and lead to inaccurate results.
  • Developing efficient algorithms for quantum computation: As the capabilities of quantum computers are expanding, so is the need for new algorithms that can take advantage of the unique properties of quantum systems.
  • Implementing error correction and error mitigation methods: Error correction and error mitigation are crucial for building a useful quantum computer, but the methods used to accomplish this are still in the early stages of development.
  • Designing and implementing quantum communication and networking: Quantum communication and networking technologies, such as quantum key distribution and quantum teleportation, are still in the early stages of development, and there are many challenges to be overcome before they can be implemented on a large scale.
  • Addressing the lack of skilled professionals: The field of quantum computing is relatively new and there is a shortage of professionals with the necessary skills and knowledge to work with quantum devices and software.
  • Addressing the lack of integration of quantum technology with classical technology: It is still a challenge to seamlessly integrate quantum technology with existing classical technology, making it difficult to use quantum computing for practical applications.
  • Developing robust software and programming languages for quantum computing: There are currently limited software and programming languages that can be used for quantum computing, and these are still in the early stages of development.
  • Addressing the lack of standardization: There is currently a lack of standardization in the field of quantum computing, which makes it difficult to compare different devices and technologies.
  • Addressing the cost-effectiveness of quantum computing: Building and operating a quantum computer is still very expensive, and this is a major barrier to the widespread adoption of quantum computing [3].
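
The noise and error-correction challenges above rest on one core idea: redundancy. As a loose classical analogy (real quantum codes such as the surface code are far more involved, since qubits cannot simply be copied), a three-copy repetition code with majority-vote decoding corrects any single bit-flip:

```python
# Simplest error-correction idea, shown classically: encode one logical bit
# as three physical copies; decode by majority vote. Any single bit-flip
# error is then corrected. This is an analogy only -- quantum codes must
# protect superpositions without copying them outright.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def decode(codeword: list[int]) -> int:
    return 1 if sum(codeword) >= 2 else 0   # majority vote

def flip(codeword: list[int], index: int) -> list[int]:
    noisy = list(codeword)
    noisy[index] ^= 1                       # inject a single bit-flip error
    return noisy

assert decode(flip(encode(0), 1)) == 0      # the error is corrected
assert decode(flip(encode(1), 2)) == 1
```

The cost is visible even here: three physical bits per logical bit. Quantum error correction pays a much steeper overhead, which is one reason scalable hardware remains such a challenge.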

Trends in Quantum Computing

  • Increasing qubit count and coherence times in quantum devices: The number of qubits (quantum bits) in a quantum computer is an important metric of its power. As the number of qubits increases, so does the computational power of the device. Coherence times refer to how long qubits can maintain their quantum state before decohering, and longer coherence times enable more complex computations.
  • Development of new quantum algorithms and optimization techniques: As the capabilities of quantum computers expand, so does the development of new algorithms and techniques that take advantage of the unique properties of quantum computing. These include quantum machine learning, quantum error correction, and quantum optimization algorithms.
  • Emergence of quantum-inspired classical algorithms and hardware: Researchers are studying the properties of quantum systems to develop new classical algorithms and hardware that mimic some of the advantages of quantum computing.
  • Growing interest and investment in quantum computing from industry and government: As the potential applications of quantum computing become more apparent, there is growing interest and investment in the field from both industry and government.
  • Increased collaboration and sharing of resources among quantum research institutions and companies: As quantum computing becomes more important, there is an increasing amount of collaboration and sharing of resources among quantum research institutions and companies.
  • The use of quantum machine learning and quantum artificial intelligence: Researchers are exploring the use of quantum computing to develop new machine learning and artificial intelligence algorithms that can take advantage of the unique properties of quantum systems.
  • Rise of Quantum Cloud Services: With increasing qubit counts and coherence times, many companies now offer quantum cloud services to users, allowing them to access the power of quantum computing without having to build their own quantum computer.
  • Advancement in Quantum Error Correction: To make a quantum computer practically useful, quantum error correction techniques are needed to minimize the errors that occur during computation. Many new techniques are being developed to achieve this goal.
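
Why coherence time matters can be seen from a common first-order model: phase coherence decays roughly as exp(-t / T2), where T2 is the coherence time. The numbers below are illustrative, not measurements from any particular device:

```python
import math

# First-order decoherence model: the fraction of phase coherence remaining
# after running a circuit for time t is roughly exp(-t / T2).
# T2 and circuit times here are purely illustrative.
def coherence_remaining(t_us: float, t2_us: float) -> float:
    return math.exp(-t_us / t2_us)

T2 = 100.0                                   # coherence time, microseconds
short = coherence_remaining(10.0, T2)        # ~0.90 -- short circuit, usable
long = coherence_remaining(300.0, T2)        # ~0.05 -- deep circuit, mostly noise
```

Doubling T2 thus more than doubles the depth of circuit that finishes before the quantum state is lost, which is why coherence time is tracked alongside raw qubit count.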

The Future?

In the near future, it is likely that quantum computing will continue to be developed for specific applications such as optimization, machine learning and cryptography. Researchers are also working on developing more stable and reliable qubits, which are the building blocks of quantum computers. As the technology matures and becomes more accessible, it is expected to be increasingly used in industries such as finance and healthcare, where it can be used to analyze large amounts of data and make more accurate predictions.

In the long term, quantum computing has the potential to revolutionize many industries and change the way we live and work. However, it is still a relatively new technology, and much research and development is needed before it can be fully realized [3].

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

 References

 1. https://www.linkedin.com/pulse/quantum-technology-ecosystem-explained-steve-blank/?

2. https://www.bbvaopenmind.com/en/technology/digital-world/quantum-computing-and-ai/

3. ChatGPT

Also Read:

10 Impactful Technologies in 2023 and Beyond

9 Trends of IoT in 2023

9 Trends Will Dominate Blockchain Technology In 2023


Careful Who You Work for

by Roger C. Lanctot on 02-02-2023 at 6:00 am


When one is looking for a job and that hunt extends from weeks into months or even years one is inclined to default to an any-port-in-a-storm mindset. Some recent experiences suggest to me that that mentality may need a reevaluation.

I was surprised to learn recently, from conversations with industry acquaintances, that one’s future employment prospects can be colored unpredictably by one’s previous employment – or let’s say one’s previous employer. One acquaintance found that an association with two previous employers – tenures that had been marked by professional success and measurably positive outcomes – had marked this person as unemployable.

The description this person gave me was that employment by one particular company at a senior level had placed this person on a blacklist within a particular industry echelon. A headhunter let this executive know that doors were closed to potential positions merely as a result of having worked for a particular company.

The company in question had engaged in strategies that had led to immense financial losses to investors and created the appearance of fraud. The executive in question, my acquaintance, had nothing to do with strategic or financial decisions at the company, but it didn’t appear to matter. Just having worked for the company at a senior level during the period in question was disqualifying for future employers.

This executive went on to work for a much much larger public company in the IT industry leading a team of dozens of executives in launching a hugely successful business-to-business marketing campaign. Following this campaign, due to unrelated strategic decisions at the company, this executive’s department was massively downsized and the executive was let go.

This experience, too, proved a negative to potential future employers. In this case, it was the renowned toxic culture of the company – a major Fortune 500 IT firm – that tainted this executive’s reputation. It was as if to say that simply having worked at this company – famous for its attention-getting CEO – this executive was now infected and unhire-able.

I am happy to say that this highly talented individual has not been held down by these reputational impediments and has found a new home for their particular set of skills.

Another acquaintance of mine, whom I originally met about three years earlier at CES 2020, recently found a new home, and we reconnected. In this case, when I met this executive they were working for a company with a horrible industry reputation – largely related to the behavior of the company's CEO, who was verbally abusive to colleagues and customers.

When I first met this executive I was immediately sympathetic to their plight, knowing the company’s and the CEO’s reputation, which were likely unknown to this executive at this early stage of their employment. Having escaped this company and now working elsewhere, the executive had a quite different experience from that of the previously-described executive.

Having left their previously toxic work environment this executive discovered widespread sympathy from the new employer and elsewhere in the industry. Future employers were aware of the dysfunction at the previous employer and were more than happy to rescue a talented individual to join their team.

The bottom line is that most industries are not in fact Industries – with a capital “I.” Most industries are neighborhoods. Everybody knows everybody else. There are few secrets.

Industry colleagues tend to share information as employees migrate from company to company, and customers, too, share their impressions of how their suppliers interact. Reputations form organically; having worked for a company can be used against you or can work in your favor, and a company's reputation can influence one's decision to apply for or accept an offer from it.

It’s often difficult to see this reputational background radiation. It can be hard to understand how your organization or any organization is perceived. But these two experiences suggest that internal corporate culture has external consequences and relevance.

Who you work for matters. I am currently reading Emmis Communications CEO Jeff Smulyan’s “Never Ride a Rollercoaster Backwards” in which Smulyan talks about how Emmis’ reputation for being a great place to work contributed to the company’s ability to hire (and sometimes steal) great talent and may have even made acquisitions less expensive, though even Smulyan expresses skepticism on this point.

It is not always possible to choose who we work for. But my recent experiences suggest that it matters a lot. With massive layoffs spreading across the technology industry, plenty of folks will be pondering their next steps. As the weeks and months slide by that any-port-in-a-storm mindset may kick in, but remember that it does matter who you work for and how your organization treats its employees and customers.

P.S.

For the record, I work for a great organization. No complaints. How about you?

Also Read:

10 Impactful Technologies in 2023 and Beyond

Effective Writing and ChatGPT. The SEMI Test

All-In-One Edge Surveillance Gains Traction


U.S., Japan & Dutch versus China Chips & Memory looks to be in a long downturn

by Robert Maire on 02-01-2023 at 2:00 pm


-US, Japan & Dutch agree to embargo some China chip equip
-Goes beyond just leading edge & will increase negative impact
-China might catch up in decades or invade Taiwan tomorrow
-Why the memory downturn could be longer than expected

Ganging up on China

It appears that the US has put together a coalition of the US, Japan and the Netherlands all of which will agree to stop selling certain semiconductor equipment to China.

This unified front against China is an obvious slap in the face but most importantly is likely a very effective way to shut China out of advanced semiconductor manufacturing.

Those three countries taken together make the vast majority of semiconductor equipment and more critically an even higher percentage of the leading edge equipment.

China would be unable to make even the most rudimentary semiconductors if it couldn't buy any equipment from all three. Chinese semiconductor equipment makers are still in the early stages and fundamentally rely on copying other manufacturers' basic designs, with little home-grown R&D.

It will take decades to copy US, Japanese & Dutch tools

While China may be able to physically copy some deposition and etch tools, it does not have the very deep and complex supply chain to source the amazingly complex lenses made by Zeiss or Nikon. Nor does China have the millions of lines of code in a KLA tool for defect analysis.

Copying will be much more difficult than the blatant rip off of US military plane designs as semiconductors and the tools that make them are way more complex.

Also missing are the decades of human capital and infrastructure such as exist in Silicon Valley, where the expertise ranges from artisan welders of stainless steel piping to decades of experience with plasma processing. EUV has been over 35 years in the making, having started in Japan and the US before moving to the Netherlands.

No matter how much money is thrown at the technology issues, it will take a very, very long time. As the saying goes, "nine women can't make a baby in one month" (pardon the analogy), and so it goes with semiconductor technology advancement.

By the time China is able to copy existing technology, the rest of the world will be decades further along. This is not to suggest they can never catch up, but likely not in most of our lifetimes.

Beyond restricting EUV

It appears from news reports that the restrictions are now even broader than what was talked about in October in the US, with 193nm "immersion" litho systems mentioned. Restricting 193 immersion would push the Chinese back even further, to a point where they couldn't reasonably do double, quadruple, or other multiple patterning tricks to try to get EUV-like dimensions even at ridiculously low yields. It would push China back to 28nm-class technology.

Even more impact than the October embargo

If ASML and Nikon are not allowed to sell immersion scanners, then it would make sense that KLA would be prohibited from selling the matching generation of reticle and wafer inspection tools, not just EUV-capable tools. This could suggest that metrology sales from the likes of KLA & AMAT etc. will be further restricted in China.

TEL will likely not be able to sell EUV track tools or even immersion track tools. No dry resist for Lam nor high aspect ratio etch tools.

ASMI would likely not be able to sell ALD tools…the list goes on and on; the impact is much deeper if we go back to immersion technology.

From a political perspective, going back to embargoing immersion is likely not only more punitive and effective, but it also more evenly shares the economic pain among those doing the embargoing, rather than just restricting the most advanced scanners.

We have yet to see the details, but the few points of information point to a deeper embargo.

A very serious escalation with no response (so far)

We are surprised about the seriousness and level of effort the current administration has put in versus prior efforts to contain China through tax policy. Forming a coalition is a significant escalation of tensions and effectiveness.

We are also surprised that China has so far not responded to the October embargo by cutting off rare earth elements or pharmaceuticals or some other critical export.

All we can imagine is that it just makes Taiwan that much more attractive to China….

Taiwan the “hollow” prize

While China may have dreams of taking over Taiwan, and with it the semiconductor industry, the reality is that TSMC's fabs would quickly become unusable without support from equipment makers and would cease to function in a relatively short time, as we saw when Jinhua was abandoned overnight.

But it would be a really neat way of depriving the rest of the world of the semiconductors they need. Perhaps with the thought that "if I can't have them, you won't have them either."

Don’t be surprised if China puts one very small missile into each fab in Taiwan thereby taking them all off line.

Memory – Deeper and longer downturn than expected

Also in the news media (Wall Street Journal) is the realization that the memory downturn is going to be longer and deeper than previously thought…DUH!

As we have pointed out numerous times, capacity keeps increasing through technology shrinks even without significant capital investment. Memory companies may slow the purchase of new equipment or new fabs but they will keep up R&D to get to more layers of NAND or the next generation of DRAM which gets the industry more bits without more (significant) bucks.
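
The "more bits without more bucks" point is easy to quantify. The layer counts below are illustrative round numbers, not any vendor's actual roadmap:

```python
# Illustrative arithmetic: converting a NAND line from 128 to 232 layers grows
# bits per wafer roughly in proportion to layer count, with no new wafer-start
# capacity added. Numbers are examples, not any vendor's roadmap.
wafers_per_month = 100_000            # hypothetical fab output, unchanged
bits_per_wafer_128l = 1.0             # normalized baseline
bits_per_wafer_232l = 232 / 128       # ~1.81x from the layer-count step alone

old_bits = wafers_per_month * bits_per_wafer_128l
new_bits = wafers_per_month * bits_per_wafer_232l
growth = new_bits / old_bits - 1      # ~81% more bits, zero extra wafer capacity
```

An ~81% bit-supply increase from a technology transition alone is why oversupply can persist even while equipment purchases and new fab construction are on hold.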

Technology marches on even in a down turn.

Although it's clear that Micron and even Samsung are cutting back on production, there is still a lot of excess capacity, and it is getting worse; pricing is not recovering, and buyers likely know that.

The profitability of Samsung and Micron is already suffering and will get worse.
This all suggests a longer, deeper memory downturn than many people were expecting.

This means that capital spending by memory makers will be very significantly delayed and new fab construction will be even further delayed. We would guess that projects that Samsung had outside of Korea, especially those in China will be delayed or canceled.

Micron will without doubt push back its new Boise fab, with New York even further behind that. It may be that most of the bit growth needed in slow times can be handled with existing fabs via technology tweaks rather than new incremental capacity…at least for the next few years.

The stocks

The escalation of the embargo with China is negative for every equipment company as it may increase the amount of equipment covered under the ban.

The coalition is positive in that US companies such as AMAT and LRCX don’t have to worry about TEL or ASMI or other Japanese or Dutch companies eating their lunch in China.

The escalation is bad in that it begs a retaliation from China which will likely not be good for those involved.

As we have said many times, the equipment industry in the long run is a zero-sum game, especially for ASML: equipment not sold to China will be sold elsewhere, as someone, somewhere will make the chips if there is demand for them (even though right now demand is down).

Overall this just potentially adds to the woes the equipment industry already has, the triple whammy of China, weak economy and horrible memory. There is not likely a good resolution to this all as we very, very highly doubt that the administration will loosen the sanctions as it has shown that it is so far unwilling to unwind sanctions in other situations. This means that chip equipment sales to China might not ever recover and most likely could worsen.

It will take time for others to displace China’s huge spending spree, but we could see India, Vietnam, Singapore and of course the re-shoring to the US and Europe to start to make up for a loss of China. Those investors hoping for a quick snap back in the chip industry may be disappointed.

We would try to minimize China exposure in our Chip portfolio, in both directions, either as a supplier or customer. We would be aware of those who could be in the way of a retaliatory strike by China such as those dependent upon rare earth elements.


Also Read:

Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory


Achieving Faster Design Verification Closure

Achieving Faster Design Verification Closure
by Daniel Payne on 02-01-2023 at 10:00 am


On big chip design projects the logic verification effort can be larger than the design effort, taking up to 70% of the project time based on data from the 2022 Wilson Research Group findings. Sadly, the first-silicon success rate has declined from 31% to just 24% over the past 8 years, forcing another spin to correct the flaws, costing companies time to market and certainly hurting their revenue plans. Better verification would certainly improve first-silicon success, but that is easier said than done.

Some other sobering numbers from the Wilson Research Group study:

  • ASIC – 24% first time success, 36% finish on time
  • FPGA  – 16% achieve zero bug escapes, 30% finish on time

Design verification has many difficult chores: debugging, creating tests then running engines, testbench development and test planning. Ideally your team wants to minimize turn-around times, reach verification closure with the fewest people and compute resources, meet safety compliance, and know when the design quality is high enough to stop verifying, while meeting the project schedule.

I recently got an update from design verification expert Darron May at Siemens EDA to hear about something just announced, called Questa Verification IQ. Their approach is all about data-driven verification formed around using traceability, collaboration and analytics powered by AI/ML. Traditional analytics provided limited productivity and insight into just describing and diagnosing logic behavior, while big data-driven analytics using AI/ML offer predictive and prescriptive actions for verification. Software and hardware teams are becoming more productive by collaborating through the use of CI (Continuous Integration), Agile methods, ALM (Application Lifecycle Management), cloud-based design, and applying AI/ML techniques. Safety critical industries have a need for traceability between requirements, implementation and verification, as defined in industry standards like ISO 26262 and DO-254.

Here’s the big picture of how Questa Verification IQ connects all of the data from the various verification engines into a data-driven flow, along with an ALM tool.

Questa Verification IQ

The coverage data is gathered from logic simulation (Questa), Emulation and Prototyping (Veloce), AMS (Symphony), Formal (OneSpin), Static and FuSa. The ML feature analyzes all of this data in order to predict patterns and reveal any holes, point out root causes, then prescribe action to improve coverage. The ALM shown is Polarion from Siemens, although you could use another ALM, just like you can use your favorite verification engines.
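To make the idea of merging coverage data and revealing holes concrete, here is a rough sketch in Python. Everything in it — the bin names, the merge logic, the hole-ranking heuristic — is invented for illustration; Questa Verification IQ's actual data model and ML algorithms are not public.

```python
from collections import defaultdict

def merge_coverage(runs):
    """Merge per-run coverage counts (bin name -> hit count) across engines."""
    merged = defaultdict(int)
    for run in runs:
        for cov_bin, hits in run.items():
            merged[cov_bin] += hits
    return dict(merged)

def find_holes(merged, goal=1):
    """A coverage 'hole' is any bin hit fewer times than its goal."""
    return sorted(b for b, hits in merged.items() if hits < goal)

# Hypothetical coverage from three engines: simulation, emulation, formal.
sim    = {"fifo.full": 3,  "fifo.empty": 5,  "fifo.overflow": 0, "fifo.underflow": 0}
emu    = {"fifo.full": 12, "fifo.empty": 40, "fifo.overflow": 0, "fifo.underflow": 0}
formal = {"fifo.overflow": 1}

merged = merge_coverage([sim, emu, formal])
print(find_holes(merged))  # ['fifo.underflow'] -- the one hole remaining across all engines
```

The point of merging first is that a bin covered by any engine (here, `fifo.overflow` closed by formal) is no longer a hole, so effort focuses only on what no engine has reached.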

Questa Verification IQ is a browser-based framework that includes a process guide so that you can build a safety critical flow using lifecycle management to plan and track all requirements. The regression navigator enables your team to create and execute tests, monitor the results, and have a complete verification history. With the coverage analyzer you know how complete your coverage is for code, functional blocks and test plans. Finally, the data analytics presented provide you with a metric platform, using project dashboards and providing cross analytics.

The web-based framework scales for any size of electronics project, and you won’t have to install any software or be concerned about keeping your OS updated. It also supports public, private or hybrid cloud setups. With AI/ML being applied the verification closure process is sped up, while debug effort quickens as root cause analysis helps pinpoint where to improve.

I asked Darron May a few clarifying questions.

Q: Can I mix and match Questa Verification IQ with any EDA vendor tool and ALM?

A: Questa Verification IQ supports ALM tools and engines via a standards-based approach. It interfaces with ALM tools using Open Services for Lifecycle Collaboration (OSLC), so any tool supporting the standard, like DOORS Next or Siemens Polarion and Teamcenter, can be used. Any engine can be launched by Questa Verification IQ, and again we have support for coverage via the Unified Coverage Interoperability Standard (UCIS).

Q: How does this approach compare to Synopsys DesignDash?

A: Synopsys DesignDash is focused on ML for design data whereas Questa Verification IQ is focused on data driven verification using analytics, including ML, to accelerate verification closure, reduce turn-around times and provide maximum process efficiency. Questa Verification IQ provides applications needed for team-based collaborative verification management in a browser-based framework with centralized access to data.

Q: How does this approach compare to Cadence Verisium?

A: Cadence Verisium focuses only on ML-assisted verification. In comparison, Siemens Questa Verification IQ provides a complete data-driven verification solution powered by analytics, collaboration and traceability. Verification management is provided in a browser-based tool with applications built around collaboration. Coverage Analyzer brings the industry’s first collaborative coverage closure tool using analytical navigation assisted by ML. Questa Verification IQ interfaces with Siemens Polarion using OSLC and provides tight digital-thread traceability with Application Lifecycle Management with no UI context change, bringing the power of ALM to hardware verification.

Summary

I’m always impressed with new EDA tools that make a complex task easier by working smarter, not by requiring engineers to put in more hours of manual effort. With early endorsements of Questa Verification IQ from familiar companies like Arm and Nordic Semiconductor, it looks like Siemens EDA has added something compelling for verification teams to consider.

Related Blogs


Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

by Fred Chen on 02-01-2023 at 6:00 am


For a leading-edge lithography technology, EUV (extreme ultraviolet) lithography is still plagued by some fundamental issues. While stochastically occurring defects probably have been the most often discussed, other issues, such as image shifts and fading [1-5], are an intrinsic part of using reflective EUV optics. However, as long as these non-stochastic issues can be systematically modeled, effectively as aberrations, corrective approaches may be applied.

Image shifts are an unavoidable part of EUV lithography for a variety of reasons, including feature position on mask and mask position [6]. However, at any given position of and on the mask, image shifts occur because the image is actually composed of sub-images from smaller and larger angles of reflection from the EUV mask. The larger angles are generally smaller amplitude and shift one way with defocus, while the smaller angles are generally larger amplitude and shift the opposite direction with defocus. The combined effect is to have a small net shift with defocus (Figure 1). If the amplitudes for the smaller and larger angles were the same, there would be no shift [3].
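The amplitude-weighted cancellation described above can be sketched numerically. The amplitudes and per-wave shifts below are made-up values chosen only to show the behavior, and the intensity-weighted average is a simplification of the full interference calculation:

```python
def net_image_shift(amplitudes, shifts_nm):
    """Approximate the combined image shift as the intensity-weighted
    average of the individual sub-image shifts (intensity ~ amplitude^2)."""
    weights = [a * a for a in amplitudes]
    return sum(w * s for w, s in zip(weights, shifts_nm)) / sum(weights)

# Smaller-angle wave: larger amplitude, shifts -2 nm with defocus.
# Larger-angle wave: smaller amplitude, shifts +2 nm with defocus.
print(net_image_shift([1.0, 0.6], [-2.0, +2.0]))  # small net shift, pulled toward the stronger wave
print(net_image_shift([1.0, 1.0], [-2.0, +2.0]))  # equal amplitudes -> zero net shift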

Figure 1. A net image shift results from different amplitude waves moving in opposite directions due to defocus.

The measured shifts and the best focus position are both nontrivial functions of both the illumination angle and the pitch [1]. From Figure 2, based on these measurements on a 0.33 NA system, we can also pick out illuminations which are best suited for particular pitches.

Figure 2. 0.8/0.5 dipole is suited for 32 nm horizontal line pitch, while 0.7/0.4 dipole is more suited for 37.3 nm.

For example, the 32 nm horizontal line pitch is best matched with the 0.8/0.5 dipole shape (45 deg span, 0.5 inner sigma, 0.8 outer sigma). On the other hand, the 0.7/0.4 dipole shape seems best matched with around 37 nm horizontal line pitch, or closer to 37.3 nm. So, ideally, a pattern containing these two pitches should be printed in two parts, one with 0.8/0.5 illumination for the part containing 32 nm pitch, and one with 0.7/0.4 illumination for the part containing 37.3 nm pitch. This would solve both the best focus difference and defocus image shift issues for these two pitches.

However, one other shift-related issue remains. The image position itself at best focus is different for different pitches. This can fortunately be corrected in a straightforward manner by the method suggested in Ref. 4. The shift can be directly compensated as different exposure positions. Moreover, the fading can be further eliminated by splitting the dipole illumination up as two exposures, one for each monopole [4]. This allows the perfect overlap of the images from each of the two poles (Figure 3). This would mean a total of four exposures for the 32 nm and 37.3 nm pitches. In addition, overlay needs to be tight for the shifts to be cancelled (<1nm). The dose would be reduced to 1/4 of the original dose for each exposure. However, the throughput may still suffer from the lower pupil fill (<20%) of the monopole. One alleviating possibility is to expand the monopole width to increase pupil fill, at least for some of the pitches being targeted.
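The exposure-splitting arithmetic above can be tallied directly. This is bookkeeping only; scanner throughput and pupil-fill penalties are not modeled here:

```python
# Each pitch uses a dipole split into two monopole exposures.
pitches = ["32 nm (0.8/0.5 dipole)", "37.3 nm (0.7/0.4 dipole)"]
monopoles_per_dipole = 2
total_exposures = len(pitches) * monopoles_per_dipole
dose_fraction_per_exposure = 1.0 / total_exposures

print(total_exposures)             # 4
print(dose_fraction_per_exposure)  # 0.25 -> each exposure gets 1/4 of the original dose
```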

Figure 3. Compensating exposure positions for each monopole exposure can lead to a zero dipole image shift.

This multiple exposure approach can be generalized to two-dimensional patterns, covering more pitches. In combination with adjustments for feature position on the mask and for mask position, it is the only truly rigorous way to fully correct the image shift aberrations in EUV lithography.

References

[1] F. Wittebrood et al., “Experimental verification of phase induced mask 3D effects in EUV imaging,” 2015 International Symposium of EUVL – Maastricht.

[2] T. Brunner et al., “EUV dark field lithography: extreme resolution by blocking 0th order,” Proc. SPIE 11609, 1160906 (2021).

[3] F. Chen, “Defocus Induced Image Shift in EUV Lithography,” https://www.youtube.com/watch?v=OXJwxQK4S8o

[4] J-H. Franke, T. A. Brunner, E. Hendrickx, “Dual monopole exposure strategy to improve extreme ultraviolet imaging,” J. Micro/Nanopattern. Mater. Metrol. 21, 030501 (2022).

[5] J-H. Franke et al., “Improving exposure latitudes and aligning best focus through pitch by curing M3D phase effects with controlled aberrations,” Proc. SPIE 11147, 111470E (2019).

[6] F. Chen, “Pattern Shifts in EUV Lithography,” https://www.youtube.com/watch?v=udF9Dw71Krk

This article first appeared in LinkedIn Pulse: Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

Also Read:

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Application-Specific Lithography: Sub-0.0013 um2 DRAM Storage Node Patterning

Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects


Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges

by Robert Maire on 01-31-2023 at 2:00 pm


-Lam Research chops guidance, outlook & headcount sharply
-Further declines as 2023 will be H1 weighted- No end in sight
-System sales cut by more than half as even service is cut
-Memory is the culprit as expected-Forcing business “reset”

A sad sounding conference call….

Lam reported a good December quarter, as we and others expected, coming in at $5.28B in revenues and non-GAAP EPS of $10.71 versus expectations of $5.08B and EPS of $9.96.

The real problem is guidance going into 2023: $3.8B ±$300M in revenue and EPS of $6.50 ±$0.75, versus street expectations of $4.38B and $7.88 EPS, which had already been sharply lowered.

The bigger problem is that "real" results are much, much worse once you back out deferred revenue from incomplete systems in the field waiting on parts.
In the December quarter Lam benefited to the tune of $700M, so "real" revenue would have been $4.58B. Worse yet, against expectations of $3.8B in March, backing out deferred revenue suggests revenue drops below $3B. Deferred revenue fell from September's $2.75B to $2B.
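The back-out arithmetic is simple enough to check directly. The December figures come from the text; the March draw-down of $0.9B is our assumption, chosen to be consistent with the "below $3B" observation:

```python
def real_revenue(reported_bn, deferred_drawdown_bn):
    """Revenue ($B) excluding the draw-down of previously deferred revenue."""
    return reported_bn - deferred_drawdown_bn

# December quarter: $5.28B reported, ~$700M of it from deferred revenue.
print(round(real_revenue(5.28, 0.70), 2))  # 4.58

# March guidance midpoint of $3.8B with an assumed similar draw-down:
print(real_revenue(3.8, 0.9) < 3.0)        # True -> "real" revenue under $3B
```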

Also remember that deferred revenue comes in at higher margins, so it's far worse than it looks at first blush. The call had a very downbeat tone overall, with management using words like "reset", "decline meaningfully", "well below" and "unprecedented".

Perhaps most telling was CEO Tim Archer saying on Q&A that there was “no timeframe on recovery”. So it sounds like no end in sight, no hope of a second half recovery.

The company also said that revenue would be first-half weighted, which suggests a weaker, not better, second half, due largely to drawing down the deferred revenue.

The company will also take about $250M in charges.

Headcount cuts signal bad/long downturn

We haven’t seen layoffs in the semiconductor equipment business for quite some time. Lam announced headcount cuts of 1,300 full-time employees plus 700 part-time/contract workers on top of earlier cuts, so well over 2,000 in total, or a bit over 10%.

Even service/support dropped- previously sacrosanct

Lam had previously spoken about service/support revenue as being bulletproof and not vulnerable to downturns. That turns out not to be true, as service/support fell from September's $1.9B to December's $1.7B as tools were idled and did not need service.

Even worse still, if we back out the declining service revenue, could "system" sales fall below $2B and approach a low of $1B in Q1? This is really off a cliff and explains the actions taken.

Memory, especially NAND, is hardest hit

It's no surprise that memory is hardest hit, as we have heard for months that the memory industry was in sharp decline. Utilization is way down, tools are idled and new projects are being pushed way out. It sure sounds like we are not going to see a memory recovery any time soon, and not this year.

Tim Archer said that "memory is at levels we haven't seen in 25 years". If we turn the clock back 25 years, memory spending was a very small fraction of what it has been over the last year, probably single-digit percentages.
Memory is obviously off the proverbial cliff without skid marks...

March quarter not likely the bottom- Bottom may be H2

It sounds as if Lam will see declines over the course of the year, especially if their view that 2023 is "first half weighted" is accurate. This suggests a bottom in H2 (or beyond?), certainly not the H2 recovery that bullish observers are expecting.

Welcome to reality

We have been clear in our view of the negative impact we expected from Lam, and we now have the proof in black and white. We suggested that Lam was a short while every other analyst on the street had at least a neutral, and most had buys, despite the very clear signals.
In our most recent note:

Where there’s smoke there’s fire

We pointed out that pre-announcements from both UCTT & ICHR clearly telegraphed a horrible outlook from Lam. How could everyone miss this?

The stocks

Lam was down sharply, 4% in the aftermarket, as the call went on. As all those bullish analysts cut their numbers and do a "reset", we would also expect a few ratings changes after the cows have left the barn.

There is obviously no reason to own the stock if we haven't yet hit bottom, nor have any idea where the bottom is. We can just wait on the sidelines and watch it get cheaper.

There may be some temptation to buy on a relief rally, on the view that it could have been even worse, but obviously that's not a very good reason to own a stock.
Things are clearly much worse than most (not all) expected.

There is likely some collateral damage, as sub-suppliers to Lam will feel the effects of the inventory reductions talked about on the call as Lam appropriately cuts back on parts. Obviously supply chain constraints are less of an issue in a sharp downturn.

We would expect AMAT to sing a similar tune but with slightly less impact, as Lam remains the memory poster child. KLAC is obviously negatively impacted in China but has historically been the foundry/logic poster child and is less impacted by memory.

As we stated in our note on ASML this morning, ASML is almost completely unaffected and nearly invulnerable, as it remains head and shoulders above the dep and etch business, which is reverting to a very competitive "turns" business.

This reminds me of an old book, A Tale of Two Cities: "it was the best of times (for ASML), it was the worst of times (for LRCX)".

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken


Weebit ReRAM: NVM that’s better for the planet

by Eran Briman on 01-31-2023 at 10:00 am


Together with our R&D partner CEA-Leti, we recently completed an environmental initiative in which we analyzed the environmental impact of Weebit’s Resistive Random-Access Memory (ReRAM / RRAM) technology compared to Magnetoresistive Random Access Memory (MRAM) – another emerging non-volatile memory (NVM) technology. The results were extremely positive for Weebit’s Oxide-based ReRAM (OxRAM), which was jointly developed with Leti, showing the environmental impact of ReRAM is much lower than that of MRAM.

A bit of background

The overall contribution of the semiconductor industry to global greenhouse gas (GHG) emissions is increasing as demand for semiconductors continues to grow. To mitigate negative impacts, environmental programs are extremely important for all players in the semiconductor ecosystem. In addition to CO2 emissions, semiconductor manufacturing can use a significant amount of energy, water, rare natural resources, and chemicals, which can contribute to global warming. The choices semiconductor companies make in design and specification phases, including their memory technology choices, are key to reducing a company’s overall carbon footprint.

MRAM is effectively the only other kind of emerging NVM that is commercially available today at foundries. It stores data as resistance using magnetic fields (versus ReRAM which stores it as resistance of a solid dielectric material, and flash which stores data as electric charges). MRAM has high endurance and is more often used as a replacement for embedded SRAM than for embedded flash. Still, there are companies using MRAM today as a replacement for embedded flash that do so because until now there hasn’t been a production-ready alternative at smaller geometries.

Compared to MRAM, Weebit ReRAM is the logical choice for embedded applications, with the number one reason being ease of manufacturing. Weebit ReRAM requires significantly fewer layers and masks and doesn’t use exotic materials or special equipment, so it can be manufactured in the standard CMOS production line and doesn’t require designated cleanroom facilities. All this translates to lower costs. MRAM adds an estimated 30-40% to wafer cost, compared to ReRAM’s 5-7%. We will go into more depth on MRAM in a future article, but for now, suffice it to say that ReRAM has a long list of advantages over MRAM, and in our new study, we’ve outlined yet another advantage – ReRAM is much more ecologically friendly! 

What we looked at

The team at CEA-Leti estimated the contribution of both OxRAM and MRAM to climate change, focusing on the production flows of each technology. To enable a fair comparison, the study looked at each technology in an identical die area in a similar process node and considered only the memory cell portion. They looked at raw materials and manufacturing processes (cradle to gate) without including infrastructure and abatement. Scroll to the end of the article to learn more about the data collection for the study*.

Key results

The study found that on all measured parameters, OxRAM demonstrated a better GHG related profile than MRAM. Below we’ve listed some of the key results.

ReRAM demonstrated the following benefits over MRAM:

  • 30% reduction in GHG emissions
  • 41% reduction in water use
  • 53% reduction in use of minerals and metals
  • 36% less electricity to process

The importance of critical materials

One of the key study findings is that the MRAM flow contains 2X more critical raw materials than the OxRAM flow. As defined by the European Union, the two main factors that define the criticality of a material are supply risk and economic importance. Supply risk is determined by criteria including supply concentration, import reliance, governance performance of suppliers, trade restrictions and criticality of substitute materials. Economic importance is based on a material’s added value, importance in end use applications, and the performance of any substitute materials. In the below chart you can see the criticality of various materials used in semiconductor manufacturing.

Many of the materials required for MRAM are at high supply risk, and some – like magnesium, platinum and cobalt – are critical in terms of both supply risk and economic importance. Any disruption of access to such materials, whether from political challenges, extreme weather, COVID lock-downs, or other issues can put a project at risk. In addition, the borates that are used in MRAM manufacturing have a very poor recycling input rate (less than 1%) – yet another consideration when looking at environmental impacts.

The bigger picture

There are many environmental considerations that come into play for semiconductor technologies such as NVMs. In our study, we specifically looked at the memory cells and circuits themselves, without accounting for the rest of the chip (e.g., microcontrollers) or the environmental impacts of the product lifecycle, such as power consumption during its usage and end-of-life recycling.

The results we’ve shown here can provide customers with confidence that when they are choosing an alternative to flash for their next design, they can not only count on the many known advantages of ReRAM, but they now know that Weebit ReRAM has a lower environmental impact and less supply chain risk than MRAM.

* Notes about the study

  • Primary data: All data about the steps of the production flow came from internal collection by Leti, which has broad expertise in both MRAM and ReRAM. Quantity and types of materials used (metals, chemicals and gases), water consumption, energy consumption, and air/water emissions were measured by Leti.
  • Secondary data: All raw materials data came from the Eco Invent database.
  • Production is in France and therefore the energy mix is the French mix.

Also Read:

How an Embedded Non-Volatile Memory Can Be a Differentiator

CEO Interview: Coby Hanoch of Weebit Nano


Model-Based Design Courses for Students

by Bernard Murphy on 01-31-2023 at 6:00 am


Amid the tumult of SoC design advances and accompanying verification and implementation demands, it can be easy to forget that all this activity is preceded by architecture design. At the architecture stage the usual SoC verification infrastructure is far too cumbersome for quick turnaround modeling. Such platforms also tend to be weak on system-wide insight. Think about modeling an automotive Ethernet to study tradeoffs between zonal and other system architectures. Synopsys Platform Architect is one possible solution though still centered mostly on SoC designers rather than system designers. MATLAB/Simulink offers a system-wide view, but you have to build your own model libraries.

Mirabilis VisualSim Architect offers a model-based design (MBD) system with ready-to-use libraries for popular standards and components in electronic design. They have now added a cloud-based subset of this system plus collateral to universities as a live, actionable training course. Called “Semiconductor and Embedded Systems Architecture Labs” (SEAL), the course provides hands-on training in system design to supplement MBD/MBSE courses.

Mirabilis VisualSim and MBD

Deepak Shankar (Founder at Mirabilis) makes the point that for a university or training center to develop its own training platform, it must procure and maintain prototypes and tool platforms and build training material and lab tutorials. This is extremely time-consuming and expensive, and quickly drifts out of date.

VisualSim is a self-contained system plus model library requiring no integration with external hardware, tools or libraries. Even more important, the full product is in active use today for production architecture design across an A-list group of semiconductor, systems, mil-aero, space and automotive companies who expect accuracy and currency in the model library. As one recent example, the library contains a model for UCIe, the new standard for coherent communication between chiplets.

Hardware models support a variety of abstractions, from SysML down to cycle accurate, and analog (with linear/differential equation solvers) as well as digital functionality. Similarly, software can evolve from a task-graph model to more fully elaborated code.

The SEAL Program

The lab is offered on the VisualSim Cloud Graphical Simulation Platform, together with training collateral in the form of questions and answer keys. The initial release covers 67 standards and 85 applications. Major applications supported by SEAL include AI, SoC, ADAS, Radars, SDR, IoT, Data Center, Communication, Power, HPC, multi-core, cache coherency, memory, Signal/Image/Audio Processing and Cyber Physical Systems. Major standards supported are UCIe, PCIe6.0, Gigabit Ethernet, AMBA AXI, TSN, CAN-XL, AFDX, ARINC653, DDR5 and processors from ARM, RISC-V, Power and x86.

Examples of labs and questions posed include:

  • What is the throughput degradation of multi-die UCIe based SoC versus an AXI based SoC?
  • How do autonomous driving timing deadlines change between multi-ECUs vs single HPC ECU?
  • How much power is consumed in different orbits of a multi-role satellite?
  • Which wired communication technology is more suitable for a flight avionics system – PCIe or Ethernet?

Course work can be graded by university teaching or training staff. Alternatively, Mirabilis is willing to provide certification at two levels. A basic level offers a Certificate of Completion for a student who works through a module and completes the Assessment Questions. More comprehensive options include a Professional Certificate for a student who successfully completes 6 modules, or a Mini Masters in Semiconductor and Embedded Systems for a student who completes 20 modules.

What’s Next?

While an MBD system of this type obviously needs some pretty sophisticated underlying technology to manage the multiple different types of simulation needed and stitching required between different modeling styles and abstractions, the practical strength of the system clearly rests on the strength of the library. Deepak tells me their commercial business splits evenly between semiconductor and systems clients, all doing architecture simulation. Working with both types of client keeps their model library tuned to the latest needs.

Semiconductor clients are constantly optimizing or up-revving SoC architectures. Systems clients are doing the same for more distributed system architectures – an automotive network, an O-RAN system, an avionics system, a multi-role satellite system. Which makes me wonder: we all know that system companies are now more heavily involved in SoC design, in support of their distributed systems. Some form of MBD must be the first step in that flow. A platform with models well-tuned (though not limited) to the SoC world might be interesting to such architects, I would think.

You can learn more about the SEAL program HERE.

Also Read:

CEO Interview: Deepak Shankar of Mirabilis Design

Architecture Exploration with Miribalis Design

Rethinking the System Design Process


Counter-Measures for Voltage Side-Channel Attacks

by Daniel Payne on 01-30-2023 at 2:00 pm


Nearly every week I read in the popular press another story of a major company being hacked: Twitter, Slack, LastPass, GitHub, Uber, Medibank, Microsoft, American Airlines. What is less reported, yet still important, are hardware-oriented hacking attempts at the board level that target a specific chip using voltage Side-Channel Attacks (SCA). To delve deeper into this topic I read a white paper from Agile Analog; they provide IP to detect when a voltage side-channel attack is happening, so that the SoC logic can take appropriate security counter-measures.

Approach

Agile Analog has created a rather crafty IP block that plays the role of security sensor by measuring critical parameters like voltage, clock and temperature. Here’s the block diagram of the agileGLITCH monitor, comprised of several components:

agileGLITCH

The bandgap component provides a voltage reference and operates across a wide voltage span to support glitch monitoring. Accuracy can optionally be increased using production trimming.

Each reference selector provides a configurable input voltage to the programmable comparators, allowing you to adjust the glitch detection thresholds. You would adjust the thresholds if your core uses Dynamic Voltage Frequency Scaling (DVFS).

There are two programmable comparators, one for positive voltage glitches, and the other for negative glitch detection. You get to configure the thresholds for glitch detection, and the level-shifters enable the IOs to use the core supply.
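The positive/negative comparator pair can be modeled behaviorally as a sketch. The threshold values and names below are invented for illustration, not Agile Analog's actual parameters:

```python
def glitch_flags(v_supply, v_nominal, pos_thresh, neg_thresh):
    """Return (positive_glitch, negative_glitch) detection flags.
    Thresholds are configurable, e.g. widened when DVFS moves v_nominal."""
    positive = v_supply > v_nominal + pos_thresh
    negative = v_supply < v_nominal - neg_thresh
    return positive, negative

# Nominal 0.9 V core supply with +/-100 mV glitch thresholds (assumed values).
print(glitch_flags(1.05, 0.90, 0.10, 0.10))  # (True, False): positive glitch
print(glitch_flags(0.75, 0.90, 0.10, 0.10))  # (False, True): negative glitch
print(glitch_flags(0.92, 0.90, 0.10, 0.10))  # (False, False): within limits
```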

The logic following each comparator provides enable control based on the digital inputs, latching of momentary events on the comparator outputs, output disabling during test, and 3-way majority voting on the latched outputs.
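The 3-way majority vote on the latched comparator outputs can be sketched behaviorally. This is a simplified model of the voting idea only; the actual agileGLITCH logic is not public:

```python
def latch(prev, pulse):
    """A momentary comparator pulse is held until explicitly cleared."""
    return prev or pulse

def majority_vote(a, b, c):
    """2-of-3 vote: one corrupted latch alone cannot raise (or mask) an alarm."""
    return (a and b) or (a and c) or (b and c)

# A real glitch trips all three redundant latches...
print(majority_vote(True, True, True))    # True -> alarm raised
# ...while a single soft-error upset of one latch is out-voted.
print(majority_vote(True, False, False))  # False -> no false alarm
```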

Not shown in the block diagram is an optional ADC component to measure the supply value, something useful for lifetime issues, or measuring performance degradation.

Use Cases

Consider an IOT security device like a wireless door lock to a home, where a malicious person gains access to the lock and uses voltage SCA to enter debug mode of the device, reading all of the authorized keys for the lock. With agileGLITCH embedded, the IOT device detects and records the voltage glitch, alerting the cloud system of an attack, noting the date and time.

IOT WiFi lock

A security camera has been compromised using voltage SCA to get around the boot-signing sequence, allowing agents to reflash it with hacked firmware. This kind of exploit lets the hacker view the video and audio stream, violating privacy and setting up a blackmail scenario. Using the agileGLITCH counter-measure, the camera system detects voltage glitch events and stops any unknown code from being flashed, and it could also report to the consumer that the device was compromised before they purchased it.

Security Camera

An automotive supply regulator tests OK at the factory, however over time, during high load conditions, the voltage degrades and eventually fails. The agileGLITCH sensor is a key component of a system that could measure voltage degradation over time (using an ADC and digital data monitor), and report back to the automotive vendor so that they can issue a recall in order to repair or replace the supply regulator. The trend is to provide remote automotive fixes, over the air.

Supply Regulator

A hacker wants to remove Digital Rights Management (DRM) from a satellite system, installing a voltage glitcher on the HDMI controller supply to reset the HDMI output to be non-HDCP validated. Counter-measures in agileGLITCH detect voltage glitching, safeguarding the HDMI controller from tampering.

Satellite Receiver System

Summary

Hacking is happening every day, all around the world, and the exploits continue to grow in complexity and penetration. Voltage SCA is a hacking technique used when the bad actors have physical access to the electronics and they use supply glitching techniques to put the system into a vulnerable state, but this approach only works if there are no built-in counter-measures. With an approach like agileGLITCH embedded inside an electronic device, then these voltage SCA hacking attempts can be identified and thwarted, before any unwanted changes are made. An ounce of prevention is worth a pound of cure, and that applies to SCA mitigation.

To download and read the entire white paper, visit the Agile Analog site and complete a short registration process.

Related Blogs