Careful Who You Work for
by Roger C. Lanctot on 02-02-2023 at 6:00 am

When one is looking for a job and the hunt extends from weeks into months or even years, one is inclined to default to an any-port-in-a-storm mindset. Some recent experiences suggest to me that that mentality may need a reevaluation.

I was surprised to learn recently, from conversations with industry acquaintances, that one’s future employment prospects can be colored unpredictably by one’s previous employment – or let’s say one’s previous employer. One acquaintance found that an association with two previous employers – tenures marked by professional success and measurably positive outcomes – had rendered this person unemployable in some eyes.

As this person described it, senior-level employment at one particular company had placed them on a blacklist within a certain industry echelon. A headhunter let this executive know that doors to potential positions were closed merely as a result of having worked for that company.

The company in question had engaged in strategies that had led to immense financial losses to investors and created the appearance of fraud. The executive in question, my acquaintance, had nothing to do with strategic or financial decisions at the company, but it didn’t appear to matter. Just having worked for the company at a senior level during the period in question was disqualifying for future employers.

This executive went on to work for a much much larger public company in the IT industry leading a team of dozens of executives in launching a hugely successful business-to-business marketing campaign. Following this campaign, due to unrelated strategic decisions at the company, this executive’s department was massively downsized and the executive was let go.

This experience, too, proved a negative to potential future employers. In this case, it was the renowned toxic culture of the company – a major Fortune 500 IT firm – that tainted this executive’s reputation. It was as if simply having worked at this company – famous for its attention-getting CEO – had left this executive infected and unhireable.

I am happy to say that this highly talented individual has not been held down by these reputational impediments and has found a new home for their particular set of skills.

Another acquaintance of mine, whom I originally met about three years earlier at CES 2020, recently found a new home, and I reconnected with them. In this case, when I met this executive they were working for a company which had a horrible industry reputation – largely related to the behavior of the company’s CEO, who was verbally abusive to colleagues and customers.

When I first met this executive I was immediately sympathetic to their plight, knowing the company’s and the CEO’s reputation, which were likely unknown to this executive at this early stage of their employment. Having escaped this company and now working elsewhere, the executive had a quite different experience from that of the previously-described executive.

Having left their previously toxic work environment, this executive discovered widespread sympathy from the new employer and elsewhere in the industry. Future employers were aware of the dysfunction at the previous employer and were more than happy to rescue a talented individual and bring them onto their team.

The bottom line is that most industries are not in fact Industries – with a capital “I.” Most industries are neighborhoods. Everybody knows everybody else. There are few secrets.

Industry colleagues tend to share information as employees migrate from company to company, and customers, too, share their impressions of how their suppliers behave. Reputations are formed organically. Having worked for a company can be used against you or can work in your favor, and a company’s reputation can influence one’s decision to apply for or accept an offer from it.

It’s often difficult to see this reputational background radiation. It can be hard to understand how your organization or any organization is perceived. But these two experiences suggest that internal corporate culture has external consequences and relevance.

Who you work for matters. I am currently reading Emmis Communications CEO Jeff Smulyan’s “Never Ride a Rollercoaster Backwards” in which Smulyan talks about how Emmis’ reputation for being a great place to work contributed to the company’s ability to hire (and sometimes steal) great talent and may have even made acquisitions less expensive, though even Smulyan expresses skepticism on this point.

It is not always possible to choose who we work for. But my recent experiences suggest that it matters a lot. With massive layoffs spreading across the technology industry, plenty of folks will be pondering their next steps. As the weeks and months slide by that any-port-in-a-storm mindset may kick in, but remember that it does matter who you work for and how your organization treats its employees and customers.

P.S.

For the record, I work for a great organization. No complaints. How about you?

Also Read:

10 Impactful Technologies in 2023 and Beyond

Effective Writing and ChatGPT. The SEMI Test

All-In-One Edge Surveillance Gains Traction


U.S., Japan & Dutch versus China Chips & Memory looks to be in a long downturn

U.S., Japan & Dutch versus China Chips & Memory looks to be in a long downturn
by Robert Maire on 02-01-2023 at 2:00 pm

-US, Japan & Dutch agree to embargo some China chip equip
-Goes beyond just leading edge & will increase negative impact
-China might catch up in decades or invade Taiwan tomorrow
-Why the memory downturn could be longer than expected

Ganging up on China

It appears that the US has put together a coalition of the US, Japan and the Netherlands, all of which have agreed to stop selling certain semiconductor equipment to China.

This unified front against China is an obvious slap in the face but, more importantly, is likely a very effective way to shut China out of advanced semiconductor manufacturing.

Taken together, those three countries produce the vast majority of semiconductor equipment and, more critically, an even higher percentage of leading-edge equipment.

China would be unable to make even the most rudimentary semiconductors if it couldn’t buy any equipment from all three. Chinese semiconductor equipment makers are still in the early stages and fundamentally rely on copying other manufacturers’ basic designs, with little homegrown R&D.

It will take decades to copy US, Japanese & Dutch tools

While China may be able to physically copy some dep and etch tools, it does not have the very deep and complex supply chain to source the amazingly complex lenses made by Zeiss or Nikon. Nor does China have the millions of lines of code in a KLA tool for defect analysis.

Copying will be much more difficult than the blatant rip-off of US military plane designs, as semiconductors and the tools that make them are far more complex.

Also missing are the decades of human capital and infrastructure found in places like Silicon Valley, where expertise ranges from the artisan welders of stainless steel piping to decades of experience with plasma processes. EUV has been over 35 years in the making, having started in Japan and the US before moving to the Netherlands.

No matter how much money is thrown at the technology issues, it will take a very, very long time. As the saying goes (pardon the analogy), “nine women can’t make a baby in one month” – and so it goes with semiconductor technology advancement.

By the time China is able to copy existing technology, the rest of the world will be decades further along. This is not to suggest they can never catch up, but likely not in most of our lifetimes.

Beyond restricting EUV

It appears from news reports that the restrictions are now even broader than what was discussed in the US in October, with 193nm “immersion” litho systems now mentioned. Restricting 193nm immersion would push the Chinese back even further, to a point where they couldn’t reasonably do multiple patterning, quadruple patterning or other tricks to try to get EUV-like dimensions even at ridiculously low yields. It would push China back to 28nm-class technology.
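
To put rough numbers behind that argument, here is a back-of-the-envelope sketch (my own illustration, not from the article; the k1, NA and wavelength values are typical textbook assumptions and real processes vary) of the minimum line pitches each approach can reach:

# Rough resolution arithmetic: half-pitch ~= k1 * wavelength / NA.
# Illustrative values only; actual limits depend on k1, illumination and process tricks.
def min_pitch_nm(wavelength_nm: float, na: float, k1: float = 0.28) -> float:
    """Approximate minimum printable pitch (2x half-pitch) in nm."""
    return 2 * k1 * wavelength_nm / na

p_193i = min_pitch_nm(193, 1.35)   # single-exposure 193nm immersion
p_euv = min_pitch_nm(13.5, 0.33)   # single-exposure 0.33 NA EUV

print(f"193i single exposure  : ~{p_193i:.0f} nm pitch")      # ~80 nm
print(f"193i double patterning: ~{p_193i / 2:.0f} nm pitch")  # ~40 nm
print(f"193i quad patterning  : ~{p_193i / 4:.0f} nm pitch")  # ~20 nm
print(f"EUV single exposure   : ~{p_euv:.0f} nm pitch")       # ~23 nm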

Even more impact than the October embargo

If ASML and Nikon are not allowed to sell immersion scanners, then it would make sense that KLA would be prohibited from selling the matching generation of reticle and wafer inspection tools, not just EUV-capable tools. This suggests that metrology sales from the likes of KLA, AMAT, etc. will be further restricted in China.

TEL will likely not be able to sell EUV track tools or even immersion track tools. No dry resist for Lam nor high aspect ratio etch tools.

ASMI would likely not be able to sell ALD tools…the list goes on and on. The impact is much deeper once the embargo reaches back to immersion-era technology.

From a political perspective, extending the embargo back to immersion is likely not only more punitive and effective, it also spreads the economic pain more evenly among those doing the embargoing, since it does not restrict only the most advanced scanners.

We have yet to see the details, but the few points of information point to a deeper embargo.

A very serious escalation with no response (so far)

We are surprised by the seriousness and level of effort the current administration has put in versus prior efforts to contain China through tariff policy. Forming a coalition is a significant escalation, in both tensions and effectiveness.

We are also surprised that China has so far not responded to the October embargo by cutting off rare earth elements or pharmaceuticals or some other critical export.

All we can imagine is that it just makes Taiwan that much more attractive to China….

Taiwan the “hollow” prize

While China may have dreams of taking over Taiwan, and along with it the semiconductor industry, the reality is that TSMC’s fabs would quickly become unusable without support from the equipment makers and would cease to function in relatively short order, much as we saw when Jinhua was abandoned overnight.

But it would be a really neat way of depriving the rest of the world of the semiconductors it needs – perhaps with the thought that “if I can’t have them, you won’t have them either”.

Don’t be surprised if China puts one very small missile into each fab in Taiwan thereby taking them all off line.

Memory – Deeper and longer downturn than expected

Also in the news media (the Wall Street Journal) is the realization that the memory downturn is going to be longer and deeper than previously thought… DUH!

As we have pointed out numerous times, capacity keeps increasing through technology shrinks even without significant capital investment. Memory companies may slow the purchase of new equipment or new fabs, but they will keep up R&D to get to more layers of NAND or the next generation of DRAM, which gets the industry more bits without (significantly) more bucks.
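
As a purely illustrative calculation (the layer counts and the assumption that bits scale with layer count are mine, not the author's), here is how a NAND technology migration adds bits without adding wafer capacity:

# Toy illustration: a NAND layer-count transition grows bit output with no new
# wafer capacity. Assumes bits per wafer scale roughly with layer count, which
# ignores string-stacking and periphery overheads - illustrative only.
layers_current, layers_next = 176, 232   # hypothetical generation transition
bit_growth = layers_next / layers_current - 1
print(f"Bit growth from the technology migration alone: ~{bit_growth:.0%}")   # ~32%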

Technology marches on even in a downturn.

Although it’s clear that Micron and even Samsung are cutting back on production, there is still a lot of excess capacity – and it is getting worse – pricing is not recovering, and buyers likely know that.

The profitability of Samsung and Micron is already suffering and will get worse.
This all suggests a longer, deeper memory downturn than many people were expecting.

This means that capital spending by memory makers will be very significantly delayed and new fab construction will be pushed out even further. We would guess that projects Samsung had outside of Korea, especially those in China, will be delayed or canceled.

Micron will without doubt push back its new Boise fab, with New York even further behind that. It may be that most of the bit growth needed in slow times can be handled with existing fabs and tweaks in technology rather than new incremental capacity…at least for the next few years.

The stocks

The escalation of the embargo with China is negative for every equipment company as it may increase the amount of equipment covered under the ban.

The coalition is positive in that US companies such as AMAT and LRCX don’t have to worry about TEL or ASMI or other Japanese or Dutch companies eating their lunch in China.

The escalation is bad in that it invites retaliation from China, which will likely not be good for those involved.

As we have said many times, the equipment industry in the long run is a zero-sum game – especially for ASML – as equipment not sold to China will be sold elsewhere, since someone, somewhere will make the chips if there is demand for them (even though right now demand is down…).

Overall, this just adds to the woes the equipment industry already has – the triple whammy of China, a weak economy and horrible memory. There is not likely a good resolution to all this, as we very highly doubt that the administration will loosen the sanctions; it has so far shown itself unwilling to unwind sanctions in other situations. This means that chip equipment sales to China might never recover and could well worsen.

It will take time for others to replace China’s huge spending spree, but we could see India, Vietnam, Singapore and of course re-shoring to the US and Europe start to make up for the loss of China. Those investors hoping for a quick snap-back in the chip industry may be disappointed.

We would try to minimize China exposure in our chip portfolio in both directions, whether as a supplier or a customer. We would also be wary of those who could be in the path of a retaliatory strike by China, such as companies dependent upon rare earth elements.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory


Achieving Faster Design Verification Closure
by Daniel Payne on 02-01-2023 at 10:00 am

On big chip design projects the logic verification effort can be larger than the design effort, taking up to 70% of the project time based on data from the 2022 Wilson Research Group study. Sadly, the first-silicon success rate has declined from 31 percent to just 24 percent over the past 8 years, forcing another spin to correct the flaws, costing companies time to market and certainly hurting their revenue plans. Better verification would certainly improve first-silicon success, but that is easier said than done.

Some other sobering numbers from the Wilson Research Group study:

  • ASIC – 24% first time success, 36% finish on time
  • FPGA  – 16% achieve zero bug escapes, 30% finish on time

Design verification has many difficult chores: debugging, creating tests then running engines, testbench development and test planning. Ideally your team wants to minimize turn-around times, reach verification closure with the fewest people and compute resources, meet safety compliance, and know when the design quality is high enough to stop verifying, while meeting the project schedule.

I recently got an update from design verification expert Darron May at Siemens EDA to hear about something just announced, called Questa Verification IQ. Their approach is all about data-driven verification formed around using traceability, collaboration and analytics powered by AI/ML. Traditional analytics provided limited productivity and insight into just describing and diagnosing logic behavior, while big data-driven analytics using AI/ML offer predictive and prescriptive actions for verification. Software and hardware teams are becoming more productive by collaborating through the use of CI (Continuous Integration), Agile methods, ALM (Application Lifecycle Management), cloud-based design, and applying AI/ML techniques. Safety critical industries have a need for traceability between requirements, implementation and verification, as defined in industry standards like ISO 26262 and DO-254.

Here’s the big picture of how Questa Verification IQ connects together all of the data from various verification engines into a data-driven flow, along with an ALM tool.

Questa Verification IQ

The coverage data is gathered from logic simulation (Questa), emulation and prototyping (Veloce), AMS (Symphony), formal (OneSpin), static checks and FuSa. The ML feature analyzes all of this data in order to predict patterns and reveal any holes, point out root causes, then prescribe actions to improve coverage. The ALM shown is Polarion from Siemens, although you could use another ALM, just like you can use your favorite verification engines.
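
As a rough illustration of the kind of data-driven analysis being described (a generic Python sketch, not the product's actual API; the bin names, hit counts and thresholds are invented for the example), merging coverage from several engines and ranking what remains uncovered might look like this:

# Generic sketch: merge functional-coverage results from several engines and
# separate true coverage holes from weakly covered bins that need more tests.
from collections import defaultdict

# Hypothetical per-engine coverage results: {coverage bin: hit count}
engine_results = {
    "simulation": {"pkt.len_max": 12, "pkt.len_min": 0, "err.crc": 3, "err.timeout": 0},
    "emulation": {"pkt.len_max": 40, "pkt.len_min": 1, "err.crc": 0},
    "formal": {"err.crc": 1},   # e.g. proven reachable once
}

merged = defaultdict(int)
for results in engine_results.values():
    for cov_bin, hits in results.items():
        merged[cov_bin] += hits

holes = sorted(b for b, hits in merged.items() if hits == 0)
weak = sorted(b for b, hits in merged.items() if 0 < hits < 5)

print("uncovered bins :", holes)   # ['err.timeout'] - true coverage holes
print("weakly covered :", weak)    # ['err.crc', 'pkt.len_min'] - candidates for more tests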

Questa Verification IQ is a browser-based framework that includes a process guide so that you can build a safety critical flow using lifecycle management to plan and track all requirements. The regression navigator enables your team to create and execute tests, monitor the results, and have a complete verification history. With the coverage analyzer you know how complete your coverage is for code, functional blocks and test plans. Finally, the data analytics presented provide you with a metric platform, using project dashboards and providing cross analytics.

The web-based framework scales for any size of electronics project, and you won’t have to install any software or be concerned about keeping your OS updated. It also supports public, private or hybrid cloud setups. With AI/ML applied, the verification closure process speeds up, and debug goes faster as root cause analysis helps pinpoint where to improve.

I asked Darron May a few clarifying questions.

Q: Can I mix and match Questa Verification IQ with any EDA vendor tool and ALM?

A: Questa Verification IQ supports ALM tools and engines via a standards-based approach. It interfaces with ALM tools using Open Services for Lifecycle Collaboration (OSLC), so any tool supporting the standard, like DOORS Next or Siemens Polarion and Teamcenter, can be used. Any engine can be launched by Questa Verification IQ, and again we have support for coverage via the Unified Coverage Interoperability Standard (UCIS).

Q: How does this approach compare to Synopsys DesignDash?

A: Synopsys DesignDash is focused on ML for design data whereas Questa Verification IQ is focused on data driven verification using analytics, including ML, to accelerate verification closure, reduce turn-around times and provide maximum process efficiency. Questa Verification IQ provides applications needed for team-based collaborative verification management in a browser-based framework with centralized access to data.

Q: How does this approach compare to Cadence Verisium?

A: Cadence Verisium focuses only on ML-assisted verification. In comparison, Siemens Questa Verification IQ provides a complete data-driven verification solution powered by analytics, collaboration and traceability. Verification management is provided in a browser-based tool with applications built around collaboration. Coverage Analyzer brings the industry’s first collaborative coverage closure tool using analytical navigation assisted by ML. Questa Verification IQ interfaces with Siemens Polarion using OSLC and provides tight digital thread traceability with Application Lifecycle Management with no UI context change, bringing the power of ALM to hardware verification.

Summary

I’m always impressed with new EDA tools that make a complex task easier by working smarter, not requiring engineers to put in more hours of manual effort. With early endorsements of Questa Verification IQ from familiar companies like Arm and Nordic Semiconductor, it looks like Siemens EDA has added something compelling for verification teams to consider.

Related Blogs


Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?
by Fred Chen on 02-01-2023 at 6:00 am

For a leading-edge lithography technology, EUV (extreme ultraviolet) lithography is still plagued by some fundamental issues. While stochastically occurring defects probably have been the most often discussed, other issues, such as image shifts and fading [1-5], are an intrinsic part of using reflective EUV optics. However, as long as these non-stochastic issues can be systematically modeled, effectively as aberrations, corrective approaches may be applied.

Image shifts are an unavoidable part of EUV lithography for a variety of reasons, including feature position on the mask and mask position [6]. However, at any given position of and on the mask, image shifts occur because the image is actually composed of sub-images from smaller and larger angles of reflection from the EUV mask. The sub-images from the larger angles generally have smaller amplitude and shift one way with defocus, while those from the smaller angles generally have larger amplitude and shift the opposite direction with defocus. The combined effect is a small net shift with defocus (Figure 1). If the amplitudes for the smaller and larger angles were the same, there would be no shift [3].

Figure 1. A net image shift results from different amplitude waves moving in opposite directions due to defocus.
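
A toy numerical model (my own illustration, not taken from the referenced papers; the pitch, amplitudes and per-sub-image displacement are arbitrary) makes the point concrete: the peak of the combined fringe moves only when the two sub-images carry unequal weight.

# Toy model of defocus-induced image shift: two sub-images (from the smaller and
# larger reflection angles) shift in opposite directions with defocus; their sum
# shows a net shift only when their amplitudes differ.
import numpy as np

pitch = 32.0   # nm, line pitch
shift = 1.5    # nm, displacement of each sub-image at some defocus
x = np.linspace(-pitch / 2, pitch / 2, 10_001)   # position within one period, nm

def fringe(amplitude, dx):
    # simple sinusoidal aerial-image fringe displaced by dx
    return amplitude * (1 + np.cos(2 * np.pi * (x - dx) / pitch))

for a_small_angle, a_large_angle in [(1.0, 0.6), (1.0, 1.0)]:
    combined = fringe(a_small_angle, +shift) + fringe(a_large_angle, -shift)
    peak = x[np.argmax(combined)]
    print(f"amplitudes {a_small_angle}/{a_large_angle}: net image shift ~ {peak:+.2f} nm")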

The measured shifts and the best focus position are both nontrivial functions of the illumination angle and the pitch [1]. From Figure 2, based on these measurements on a 0.33 NA system, we can also pick out illuminations which are best suited for particular pitches.

Figure 2. 0.8/0.5 dipole is suited for 32 nm horizontal line pitch, while 0.7/0.4 dipole is more suited for 37.3 nm.

For example, the 32 nm horizontal line pitch is best matched with the 0.8/0.5 dipole shape (45 deg span, 0.5 inner sigma, 0.8 outer sigma). On the other hand, the 0.7/0.4 dipole shape seems best matched with around 37 nm horizontal line pitch, or closer to 37.3 nm. So, ideally, a pattern containing these two pitches should be printed in two parts, one with 0.8/0.5 illumination for the part containing 32 nm pitch, and one with 0.7/0.4 illumination for the part containing 37.3 nm pitch. This would solve both the best focus difference and defocus image shift issues for these two pitches.

However, one other shift-related issue remains. The image position itself at best focus is different for different pitches. This can fortunately be corrected in a straightforward manner by the method suggested in Ref. 4. The shift can be directly compensated as different exposure positions. Moreover, the fading can be further eliminated by splitting the dipole illumination up as two exposures, one for each monopole [4]. This allows the perfect overlap of the images from each of the two poles (Figure 3). This would mean a total of four exposures for the 32 nm and 37.3 nm pitches. In addition, overlay needs to be tight for the shifts to be cancelled (<1nm). The dose would be reduced to 1/4 of the original dose for each exposure. However, the throughput may still suffer from the lower pupil fill (<20%) of the monopole. One alleviating possibility is to expand the monopole width to increase pupil fill, at least for some of the pitches being targeted.

Figure 3. Compensating exposure positions for each monopole exposure can lead to a zero dipole image shift.

This multiple exposure approach can be generalized to two-dimensional patterns, covering more pitches. In combination with adjustments for mask position and feature position on the mask, it is the only truly rigorous way to fully correct the image shift aberrations in EUV lithography.

References

[1] F. Wittebrood et al., “Experimental verification of phase induced mask 3D effects in EUV imaging,” 2015 International Symposium of EUVL – Maastricht.

[2] T. Brunner et al., “EUV dark field lithography: extreme resolution by blocking 0th order,” Proc. SPIE 11609, 1160906 (2021).

[3] F. Chen, “Defocus Induced Image Shift in EUV Lithography,” https://www.youtube.com/watch?v=OXJwxQK4S8o

[4] J-H. Franke, T. A. Brunner, E. Hendrickx, “Dual monopole exposure strategy to improve extreme ultraviolet imaging,” J. Micro/Nanopattern. Mater. Metrol. 21, 030501 (2022).

[5] J-H. Franke et al., “Improving exposure latitudes and aligning best focus through pitch by curing M3D phase effects with controlled aberrations,” Proc. SPIE 11147, 111470E (2019).

[6] F. Chen, “Pattern Shifts in EUV Lithography,” https://www.youtube.com/watch?v=udF9Dw71Krk

This article first appeared in LinkedIn Pulse: Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

Also Read:

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Application-Specific Lithography: Sub-0.0013 um2 DRAM Storage Node Patterning

Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects


Lam chops guidance, outlook, headcount- an ugly, long downturn- memory plunges
by Robert Maire on 01-31-2023 at 2:00 pm

-Lam Research chops guidance, outlook & headcount sharply
-Further declines as 2023 will be H1 weighted- No end in sight
-System sales cut by more than half as even service is cut
-Memory is the culprit as expected-Forcing business “reset”

A sad sounding conference call….

Lam reported a good December quarter, as expected by us and others, coming in at $5.28B in revenues and non-GAAP EPS of $10.71 versus expectations of $5.08B and EPS of $9.96.

The real problem is the guidance going into 2023. Guidance is for $3.8B ±$300M and EPS of $6.50 ±$0.75 versus street expectations – already sharply lowered – of $4.38B and $7.88 EPS.

The real problem is that “real” results are much, much worse after you back out deferred revenue from incomplete units in the field waiting on parts. In the December quarter Lam benefited to the tune of $700M, so the “real” revenue would have been $4.58B. Worse yet, with guidance of $3.8B for March, backing out deferred revenue suggests it drops below $3B. Deferred revenue was down from September’s $2.75B to $2B.

Also remember that deferred revenue comes in at higher margins, so it’s way worse than it looks at first blush. The call had a very downbeat tone overall, with management using words like “reset”, “decline meaningfully”, “well below” and “unprecedented”.

Perhaps most telling was CEO Tim Archer saying on Q&A that there was “no timeframe on recovery”. So it sounds like no end in sight, no hope of a second half recovery.

The company also said that revenue would be first-half weighted, which suggests a weaker, not better, second half, due largely to the drawdown of deferred revenue.

The company will also be taking about $250M in charges.

Headcount cuts signal bad/long downturn

We haven’t seen layoffs in the semiconductor equipment business for quite some time. Lam announced headcount cuts of 1,300 full-time employees plus 700 part-time/contract workers on top of earlier cuts, so well over 2,000 cuts, or a bit over 10%.

Even service/support dropped- previously sacrosanct

Lam had previously spoken about service/support revenue as being bulletproof and not vulnerable to variations. That turns out not to be true, as service/support was down from September’s $1.9B to December’s $1.7B as tools were idled and did not need service.

Even worse, if we back out the declining service revenue, could “system” sales fall below $2B and approach a low of $1B in Q1? This is really off a cliff and explains the actions taken.

Memory, especially NAND, is hardest hit

It’s no surprise that memory is hardest hit, as we have heard for months that the memory industry is in sharp decline. Utilization is way down, tools are idled and new projects are being pushed way out. It sure sounds like we are not going to see a memory recovery any time soon, and not this year.

Tim Archer said that “memory is at levels we haven’t seen in 25 years”. If we turn the clock back 25 years, memory spending was a very small fraction of where it has been over the last year – probably single-digit percentages.
Memory is obviously off the proverbial cliff without skid marks…

March quarter not likely the bottom- Bottom may be H2

It sounds as if we are in a situation where Lam will see declines over the course of the year, especially if their view that 2023 is “first half weighted” is accurate. This suggests a bottom in H2 (or beyond?) – certainly not the H2 recovery that the bulls are expecting.

Welcome to reality

We have been clear in our view of the negative impact we expected from Lam, and we now have the proof in black and white. We suggested that Lam was a short while every other analyst on the street had at least a neutral, and most had buys, despite the very clear signals.
In our most recent note:

Where there’s smoke there’s fire

We pointed out that pre-announcements from both UCTT & ICHR clearly telegraphed a horrible outlook from Lam. How could everyone miss this?

The stocks

Lam was down sharply, 4%, in the aftermarket as the call went on. As all those bullish analysts cut their numbers and do a “reset”, we would also expect a few ratings changes after the cows have left the barn.

There is obviously no reason to own the stock if we haven’t yet hit bottom nor have any idea where the bottom is. We can just wait on the sideline and watch it get cheaper.

There may be some temptation to buy on a relief rally – it could have been even worse – but obviously that’s not a very good reason to own a stock.
Things are clearly much worse than most (not all) expected.

There is likely some collateral damage, as sub-suppliers to Lam will see the effects of the inventory reductions talked about on the call as Lam appropriately cuts back on parts. Obviously, supply chain constraints are less of an issue in a sharp downturn.

We would expect AMAT to sing a similar tune but with slightly less impact, as Lam remains the memory poster child. KLAC is obviously negatively impacted in China but has historically been the foundry/logic poster child and is less impacted by memory.

As we stated in our note on ASML this morning, ASML is almost completely unaffected and almost invulnerable, as they remain head and shoulders above the dep and etch business, which is reverting to a very competitive “turns” business.

This reminds me of an old book, A Tale of Two Cities: “it was the best of times (for ASML), it was the worst of times (for LRCX)”.

Also Read:

ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken


Weebit ReRAM: NVM that’s better for the planet
by Eran Briman on 01-31-2023 at 10:00 am

Together with our R&D partner CEA-Leti, we recently completed an environmental initiative in which we analyzed the environmental impact of Weebit’s Resistive Random-Access Memory (ReRAM / RRAM) technology compared to Magnetoresistive Random Access Memory (MRAM) – another emerging non-volatile memory (NVM) technology. The results were extremely positive for Weebit’s Oxide-based ReRAM (OxRAM), which was jointly developed with Leti, showing the environmental impact of ReRAM is much lower than that of MRAM.

A bit of background

The overall contribution of the semiconductor industry to global greenhouse gas (GHG) emissions is increasing as demand for semiconductors continues to grow. To mitigate negative impacts, environmental programs are extremely important for all players in the semiconductor ecosystem. In addition to CO2 emissions, semiconductor manufacturing can use a significant amount of energy, water, rare natural resources, and chemicals, which can contribute to global warming. The choices semiconductor companies make in design and specification phases, including their memory technology choices, are key to reducing a company’s overall carbon footprint.

MRAM is effectively the only other kind of emerging NVM that is commercially available today at foundries. It stores data as resistance using magnetic fields (versus ReRAM which stores it as resistance of a solid dielectric material, and flash which stores data as electric charges). MRAM has high endurance and is more often used as a replacement for embedded SRAM than for embedded flash. Still, there are companies using MRAM today as a replacement for embedded flash that do so because until now there hasn’t been a production-ready alternative at smaller geometries.

Compared to MRAM, Weebit ReRAM is the logical choice for embedded applications, with the number one reason being ease of manufacturing. Weebit ReRAM requires significantly fewer layers and masks and doesn’t use exotic materials or special equipment, so it can be manufactured in the standard CMOS production line and doesn’t require designated cleanroom facilities. All this translates to lower costs. MRAM adds an estimated 30-40% to wafer cost, compared to ReRAM’s 5-7%. We will go into more depth on MRAM in a future article, but for now, suffice it to say that ReRAM has a long list of advantages over MRAM, and in our new study, we’ve outlined yet another advantage – ReRAM is much more ecologically friendly! 

What we looked at

The team at CEA-Leti estimated the contribution of both OxRAM and MRAM to climate change, focusing on the production flows of each technology. To enable a fair comparison, the study looked at each technology in an identical die area in a similar process node and considered only the memory cell portion. They looked at raw materials and manufacturing processes (cradle to gate) without including infrastructure and abatement. Scroll to the end of the article to learn more about the data collection for the study*.

Key results

The study found that on all measured parameters, OxRAM demonstrated a better GHG related profile than MRAM. Below we’ve listed some of the key results.

ReRAM demonstrated the following benefits over MRAM:

  • 30% reduction in GHG emissions
  • 41% reduction in water use
  • 53% reduction in use of minerals and metals
  • 36% less electricity to process

The importance of critical materials

One of the key study findings is that the MRAM flow contains 2X more critical raw materials than the OxRAM flow. As defined by the European Union, the two main factors that define the criticality of a material are supply risk and economic importance. Supply risk is determined by criteria including supply concentration, import reliance, governance performance of suppliers, trade restrictions and criticality of substitute materials. Economic importance is based on a material’s added value, importance in end use applications, and the performance of any substitute materials. In the below chart you can see the criticality of various materials used in semiconductor manufacturing.

Many of the materials required for MRAM are at high supply risk, and some – like magnesium, platinum and cobalt – are critical in terms of both supply risk and economic importance. Any disruption of access to such materials, whether from political challenges, extreme weather, COVID lock-downs, or other issues can put a project at risk. In addition, the borates that are used in MRAM manufacturing have a very poor recycling input rate (less than 1%) – yet another consideration when looking at environmental impacts.

The bigger picture

There are many environmental considerations that come into play for semiconductor technologies such as NVMs. In our study, we specifically looked at the memory cells and circuits themselves, without accounting for the rest of the chip (e.g., microcontrollers) or the environmental impacts of the product lifecycle, such as power consumption during its usage and end-of-life recycling.

The results we’ve shown here can provide customers with confidence that when they are choosing an alternative to flash for their next design, they can not only count on the many known advantages of ReRAM, but they now know that Weebit ReRAM has a lower environmental impact and less supply chain risk than MRAM.

* Notes about the study

  • Primary data: All data about the steps of the production flow came from internal collection by Leti, which has broad expertise in both MRAM and ReRAM. Quantity and types of materials used (metals, chemicals and gases), water consumption, energy consumption, and air/water emissions were measured by Leti.
  • Secondary data: All raw materials data came from the Eco Invent database.
  • Production is in France and therefore the energy mix is the French mix.

Also Read:

How an Embedded Non-Volatile Memory Can Be a Differentiator

CEO Interview: Coby Hanoch of Weebit Nano


Model-Based Design Courses for Students
by Bernard Murphy on 01-31-2023 at 6:00 am

Amid the tumult of SoC design advances and accompanying verification and implementation demands, it can be easy to forget that all this activity is preceded by architecture design. At the architecture stage the usual SoC verification infrastructure is far too cumbersome for quick turnaround modeling. Such platforms also tend to be weak on system-wide insight. Think about modeling an automotive Ethernet to study tradeoffs between zonal and other system architectures. Synopsys Platform Architect is one possible solution though still centered mostly on SoC designers rather than system designers. MATLAB/Simulink offers a system-wide view, but you have to build your own model libraries.

Mirabilis VisualSim Architect offers a model-based design (MBD) system with ready-to-use libraries for popular standards and components in electronic design. They have now added a cloud-based subset of this system plus collateral to universities as a live, actionable training course. Called “Semiconductor and Embedded Systems Architecture Labs” (SEAL), the course provides hands-on training in system design to supplement MBD/MBSE courses.

Mirabilis VisualSim and MBD

Deepak Shankar (Founder at Mirabilis) makes the point that developing a training platform requires a university or training center to procure and maintain prototypes and tool platforms and to build training material and lab tutorials. This is extremely time-consuming and expensive, and the result quickly drifts out of date.

VisualSim is a self-contained system plus model library requiring no integration with external hardware, tools or libraries. Even more important the full product is in active use today for production architecture design across an A-list group of semiconductor, systems, mil-aero, space and automotive companies who expect accuracy and currency in the model library. As one recent example, the library contains a model for UCIe, the new standard for coherent communication between chiplets.

Hardware models support a variety of abstractions, from SysML down to cycle accurate, and analog (with linear/differential equation solvers) as well as digital functionality. Similarly, software can evolve from a task-graph model to more fully elaborated code.
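
To give a feel for what a task-graph software model means at this level of abstraction (a generic sketch of the concept, not VisualSim's modeling language; the task names and latencies are invented), end-to-end latency falls out of a simple dependency walk:

# Minimal task-graph latency model: each task has a processing delay and a list
# of upstream dependencies; end-to-end latency is the longest dependency path.
from functools import lru_cache

# Hypothetical camera-to-actuation pipeline, delays in microseconds
delays = {"capture": 100, "preprocess": 40, "detect": 300, "plan": 120, "actuate": 20}
deps = {
    "capture": [],
    "preprocess": ["capture"],
    "detect": ["preprocess"],
    "plan": ["detect"],
    "actuate": ["plan"],
}

@lru_cache(maxsize=None)
def finish_time(task: str) -> float:
    start = max((finish_time(d) for d in deps[task]), default=0.0)
    return start + delays[task]

print(f"End-to-end latency: {finish_time('actuate'):.0f} us")   # 580 us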

The SEAL Program

The lab is offered on the VisualSim Cloud Graphical Simulation Platform, together with training collateral in the form of questions and answer keys. The initial release covers 67 standards and 85 applications. Major applications supported by SEAL include AI, SoC, ADAS, Radars, SDR, IoT, Data Center, Communication, Power, HPC, multi-core, cache coherency, memory, Signal/Image/Audio Processing and Cyber Physical Systems. Major standards supported are UCIe, PCIe6.0, Gigabit Ethernet, AMBA AXI, TSN, CAN-XL, AFDX, ARINC653, DDR5 and processors from ARM, RISC-V, Power and x86.

Examples of labs and questions posed include:

  • What is the throughput degradation of multi-die UCIe based SoC versus an AXI based SoC?
  • How do autonomous driving timing deadlines change between multi-ECUs vs single HPC ECU?
  • How much power is consumed in different orbits of a multi-role satellite?
  • Which wired communication technology is more suitable for a flight avionics system – PCIe or Ethernet?

Course work can be graded by university teaching or training staff. Alternatively, Mirabilis is willing to provide certification at two levels. A basic level offers a Certificate of Completion for a student who works through a module and completes the Assessment Questions. More comprehensive options include a Professional Certificate for a student who successfully completes 6 modules, or a Mini Masters in Semiconductor and Embedded Systems for a student who completes 20 modules.

What’s Next?

While an MBD system of this type obviously needs some pretty sophisticated underlying technology to manage the multiple different types of simulation needed and stitching required between different modeling styles and abstractions, the practical strength of the system clearly rests on the strength of the library. Deepak tells me their commercial business splits evenly between semiconductor and systems clients, all doing architecture simulation. Working with both types of client keeps their model library tuned to the latest needs.

Semiconductor clients are constantly optimizing or up-revving SoC architectures. Systems clients are doing the same for more distributed system architectures – an automotive network, an O-RAN system, an avionics system, a multi-role satellite system. Which makes me wonder. We all know that system companies are now more heavily involved in SoC design, in support of their distributed systems. Some form of MBD must be the first step in that flow. A platform with models well-tuned (though not limited) to the SoC world might be interesting to such architects I would think?

You can learn more about the SEAL program HERE.

Also Read:

CEO Interview: Deepak Shankar of Mirabilis Design

Architecture Exploration with Mirabilis Design

Rethinking the System Design Process


Counter-Measures for Voltage Side-Channel Attacks
by Daniel Payne on 01-30-2023 at 2:00 pm

Nearly every week I read in the popular press another story of a major company being hacked: Twitter, Slack, LastPass, GitHub, Uber, Medibank, Microsoft, American Airlines. What is less reported, yet still important, are hardware-oriented hacking attempts at the board level that target a specific chip using voltage side-channel attacks (SCA). To delve deeper into this topic I read a white paper from Agile Analog, who provide IP to detect when a voltage side-channel attack is happening, so that the SoC logic can take appropriate security counter-measures.

Approach

Agile Analog has created a rather crafty IP block that plays the role of a security sensor by measuring critical parameters like voltage, clock and temperature. Here’s the block diagram of the agileGLITCH monitor, which is composed of several components:

agileGLITCH

The bandgap component provides a voltage reference and operates across a wide voltage span to support glitch monitoring. Accuracy can optionally be increased with production trimming.

Each reference selector provides a configurable input voltage to the programmable comparators, allowing you to adjust the glitch size that gets detected. You would adjust the thresholds if your core is using Dynamic Voltage Frequency Scaling (DVFS).

There are two programmable comparators, one for positive voltage glitches, and the other for negative glitch detection. You get to configure the thresholds for glitch detection, and the level-shifters enable the IOs to use the core supply.

The logic following each comparator provides control of enables based on the digital inputs, latching of momentary events on the comparator outputs, disabling of outputs while testing, and 3-way majority voting on the latched outputs.
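
As a behavioral illustration of that detection logic (a simplified software model written for this article, not Agile Analog's implementation; the thresholds and sample values are invented), the sketch below applies window comparators, latches any momentary excursion, and uses 3-way majority voting across redundant detectors:

# Behavioral model of a supply-glitch detector: window comparators, event
# latching, and 3-of-3 majority voting across redundant detector instances.
POS_THRESHOLD = 0.88   # flag positive glitches above this level (V)
NEG_THRESHOLD = 0.72   # flag negative glitches below this level (V)

def out_of_band(sample: float) -> bool:
    """Window comparator: True if the sample is outside the allowed band."""
    return sample > POS_THRESHOLD or sample < NEG_THRESHOLD

class GlitchDetector:
    """One detector instance: latches the first out-of-band sample it sees."""
    def __init__(self):
        self.latched = False

    def sample(self, volts: float) -> None:
        if out_of_band(volts):
            self.latched = True   # momentary events stay latched until cleared

def majority(votes) -> bool:
    """3-way majority vote over the latched outputs of redundant detectors."""
    return sum(votes) >= 2

# Three redundant detectors observing the same (noisy) supply trace
detectors = [GlitchDetector() for _ in range(3)]
trace = [0.80, 0.81, 0.79, 0.65, 0.80, 0.80]   # one negative glitch at 0.65 V

for v in trace:
    for d in detectors:
        d.sample(v)

print("glitch alarm:", majority([d.latched for d in detectors]))   # True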

Not shown in the block diagram is an optional ADC component to measure the supply value, something useful for lifetime issues, or measuring performance degradation.

Use Cases

Consider an IoT security device like a wireless door lock on a home, where a malicious person gains physical access to the lock and uses voltage SCA to enter the device’s debug mode, reading all of the lock’s authorized keys. With agileGLITCH embedded, the IoT device detects and records the voltage glitch, alerting the cloud system of an attack and noting the date and time.

IOT WiFi lock

A security camera has been compromised using voltage SCA to get around the boot-signing sequence, allowing agents to reflash it with hacked firmware. This kind of exploit lets the hacker view the video and audio stream, violating privacy and setting up a blackmail scenario. Using the agileGLITCH counter-measure, the camera system detects voltage glitch events, then stops any unknown code from being flashed, and it could report to the consumer that the device was compromised before they purchased it.

Security Camera

An automotive supply regulator tests OK at the factory; however, over time, under high load conditions, the voltage degrades and eventually fails. The agileGLITCH sensor is a key component of a system that could measure voltage degradation over time (using an ADC and digital data monitor) and report back to the automotive vendor so that it can issue a recall to repair or replace the supply regulator. The trend is to provide automotive fixes remotely, over the air.

Supply Regulator

A hacker wants to remove Digital Rights Management (DRM) from a satellite system, installing a voltage glitcher on the HDMI controller supply to force the HDMI output into a non-HDCP-validated state. Counter-measures in agileGLITCH detect the voltage glitching, safeguarding the HDMI controller from tampering.

Satellite Receiver System

Summary

Hacking is happening every day, all around the world, and the exploits continue to grow in complexity and penetration. Voltage SCA is a hacking technique used when the bad actors have physical access to the electronics and they use supply glitching techniques to put the system into a vulnerable state, but this approach only works if there are no built-in counter-measures. With an approach like agileGLITCH embedded inside an electronic device, then these voltage SCA hacking attempts can be identified and thwarted, before any unwanted changes are made. An ounce of prevention is worth a pound of cure, and that applies to SCA mitigation.

To download and read the entire white paper, visit the Agile Analog site and complete a short registration process.

Related Blogs

 


Achronix on Platform Selection for AI at the Edge
by Bernard Murphy on 01-30-2023 at 10:00 am

Colin Alexander (Director of product marketing at Achronix) recently released a webinar on this topic. At only 20 minutes the webinar is an easy watch and a useful update on data traffic and implementation options. Downloads are still dominated by video (over 50% for Facebook), which now depends heavily on caching at or close to the edge. Which of these applies depends on your definition of “edge”. The IoT world sees itself as the edge; the cloud and infrastructure world apparently sees the last compute node in the infrastructure, before those leaf devices, as the edge. Potato, potahto. In any event the infrastructure view of the edge is where you will find video caching, to serve the most popular downloads as efficiently and as quickly as possible.

Compute options at the edge (and in the cloud)

Colin talks initially about infrastructure edge where some horsepower is required in compute and in AI. He presents the standard options: CPU, GPU, ASIC or FPGA. A CPU-based solution has the greatest flexibility because your solution will be entirely software based. For the same reason, it will also generally be the slowest, most power hungry and longest latency option (for round trip to leaf nodes I assume). GPUs are somewhat better on performance and power with a bit less flexibility than CPUs. An ASIC (custom hardware) will be fastest, lowest power and lowest latency, though in concept least flexible (all the smarts are in hardware which can’t be changed).

He presents FPGA (or embedded FPGA/eFPGA) as a good compromise between these extremes: better on performance, power and latency than a CPU or GPU, somewhere between a CPU and a GPU on flexibility, and much better than an ASIC on flexibility because an FPGA can be reprogrammed. Which all makes sense to me as far as it goes, though I think the story should have been completed by adding DSPs to the platform lineup. These can have AI-specific hardware advantages (vectorization, MAC arrays, etc.) which benefit performance, power, and latency, while retaining software flexibility. The other important consideration is cost. This is always a sensitive topic of course, but AI-capable CPUs, GPUs and FPGA devices can be pricey, a concern for the bill of materials of an edge node.

Colin’s argument makes most sense to me at the edge for eFPGA embedded in a larger SoC. In a cloud application, constraints are different. A smart network interface card is probably not as price sensitive and there may be a performance advantage in an FPGA-based solution versus a software-based solution.

Supporting AI applications at the compute edge through an eFPGA looks like an option worth investigating further. Further out towards leaf nodes is fuzzy for me. A logistics tracker or a soil moisture sensor for sure won’t host significant compute, but what about a voice activated TV remote? Or a smart microwave? Both need AI but neither need a lot of horsepower. The microwave has wired power, but a TV remote or remote smart speaker runs on batteries. It would be interesting to know the eFPGA tradeoffs here.

eFPGA capabilities for AI

Per the datasheet, Speedster7t offers fully fracturable integer MACs, flexible floating point, native support for bfloat16 and efficient matrix multiplications. I couldn’t find any data on TOPS or TOPS/Watt. I’m sure that depends on implementation, but examples would be useful. Even at the edge, some applications are very performance sensitive – smart surveillance and forward-facing object detection in cars for example. It would be interesting to know where eFPGA might fit in such applications.
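
For readers unfamiliar with bfloat16, a quick sketch of what that format means in practice (my own illustration, independent of the Achronix hardware): bfloat16 is a float32 with the low 16 mantissa bits dropped, keeping float32's dynamic range while halving storage and multiplier width.

# Illustrate bfloat16 as truncated float32, plus a multiply-accumulate on the
# reduced-precision inputs. Simple truncation is shown; real hardware typically
# rounds to nearest even.
import numpy as np

def to_bfloat16(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

a = np.array([3.14159265, -0.001234], dtype=np.float32)
b = np.array([2.71828182, 1234.567], dtype=np.float32)

acc = np.float32(0.0)
for ai, bi in zip(to_bfloat16(a), to_bfloat16(b)):
    acc += ai * bi   # MAC on bfloat16-truncated operands

print("bfloat16 inputs:", to_bfloat16(a), to_bfloat16(b))
print("MAC result     :", acc)
print("float32 result :", float(np.dot(a, b)))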

Thought-provoking webinar. You can watch it HERE.

Also Read:

WEBINAR: FPGAs for Real-Time Machine Learning Inference

WEBINAR The Rise of the SmartNIC

A clear VectorPath when AI inference models are uncertain


Taming Physical Closure Below 16nm
by Bernard Murphy on 01-30-2023 at 6:00 am

Atiq Raza, well known in the semiconductor industry, has observed that “there will be no simple chips below 16nm”, by which he meant that only complex and therefore high-value SoCs justify the costs of deep submicron design. Getting to closure on PPA goals is getting harder for such designs, especially now at 7nm and 5nm. Place and route technologies and teams are not the problem – they are as capable as ever. The problem lies in increasingly strong coupling between architectural and logic design and physical implementation. Design/physical coupling at the block level is well understood and has been addressed through physical synthesis. However, below 16nm it is quite possible to design valid SoC architectures that are increasingly difficult to place and route, causing project delays or even SoC project cancellations due to missed market windows.

Why did this get so hard?

Physical implementation is ultimately an optimization problem: finding a placement of interconnect components and connections between blocks in the floorplan which will deliver an optimum in performance and area, while also conforming to a set of constraints and meeting target specs within a reasonable schedule. The first goal is always possible if you are prepared to compromise on what you mean by “optimum”. The second goal depends heavily on where optimization starts and how much time each new iteration consumes in finding an improved outcome. Start too far away from a point which will deliver required specs, or take too long to iterate through steps to find that point, and the product will have problems.

This was always the case, but SoC integrations in advanced processes are getting much bigger. Hundreds of blocks and tens of thousands of connections expand the size of the optimization space. More clock and power domains add more dimensions, and constraints. Safety requirements add logic and more constraints, directly affecting implementation. Coherent networks add yet more constraints since large latencies drag down guaranteed performance across coherent domains. In this expanding, many-dimensional and complex constrained optimization space with unpredictable contours, it’s not surprising that closure is becoming harder to find.

A much lower risk approach would start place and route at a point reasonably close to a good solution, without depending on long iteration cycles between design and implementation.

Physically aware NoC design

The integration interconnect in an SoC is at the heart of this problem. Long wires create long delays which defeat timing closure. Many wires running through common channels create congestion, which forces chip area to expand to relieve it. Crossbar interconnects, with their intrinsically congested connectivity, were replaced long ago by network-on-chip (NoC) interconnects for just this reason. NoC interconnects use network topologies which can more easily manage congestion, threading network placement and routing through channels and white space in a floorplan.

But still the topology of the NoC (or multiple NoCs in a large design) must meet timing goals; the NoC design must be physically aware. All those added constraints and dimensions mentioned earlier further amplify this challenge.

NoC design starts as a logical objective: connect all IP communication ports as defined by the product functional specification while assuring a target quality of service and meeting power, safety and security goals. Now it is apparent that we must add a component of physical awareness to these logical objectives – estimation of timing between IP endpoints and of congestion based on a floorplan in the early stages of RTL development, to be refined in later stages with a more accurate floorplan.
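
To make floorplan-based estimation concrete, here is a simplified sketch (my own illustration, not Arteris's algorithm; the block coordinates, wire delay and clock budget are invented numbers): estimate each NoC link's flight time from Manhattan distance on the floorplan, then see where pipeline stages must be inserted for the link to close timing.

# Simplified floorplan-aware NoC link estimation: Manhattan distance -> wire
# delay -> number of pipeline (register) stages needed to meet the clock budget.
import math

WIRE_DELAY_PS_PER_MM = 150.0   # assumed repeated-wire delay
CLOCK_BUDGET_PS = 500.0        # assumed cycle budget for a 2 GHz NoC clock

# Hypothetical block placements on the floorplan, in mm
blocks = {"CPU": (1.0, 1.0), "GPU": (9.0, 1.5), "DDR": (5.0, 8.0), "NoC_sw0": (4.5, 2.0)}
links = [("CPU", "NoC_sw0"), ("GPU", "NoC_sw0"), ("NoC_sw0", "DDR")]

def manhattan_mm(a, b):
    (xa, ya), (xb, yb) = blocks[a], blocks[b]
    return abs(xa - xb) + abs(ya - yb)

for src, dst in links:
    dist = manhattan_mm(src, dst)
    delay = dist * WIRE_DELAY_PS_PER_MM
    stages = max(0, math.ceil(delay / CLOCK_BUDGET_PS) - 1)   # extra registers needed
    print(f"{src:>7} -> {dst:<7} {dist:4.1f} mm {delay:6.0f} ps "
          f"pipeline stages: {stages} (adds {stages} cycle(s) of latency)")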

With such a capability, a NoC designer could run multiple trials very quickly, re-partitioning the design as needed, to deliver a good starting point for the place and route team. That team would then work their magic to fully optimize from the physically aware estimate, confident that the optimum they are searching for is reasonably close to that starting point and that they will not need to send the design back for restructuring and re-synthesis.

Additional opportunities

Physically aware NoC design could offer additional advantages. By incorporating floorplan information in the design stage, a NoC designer can build a better NoC. Understanding latencies, placements and channel usage while still building the NoC RTL, they may realize opportunities to use a different topology (see the topology above as one example). Perhaps they can use narrower or longer connections on latency-insensitive paths, avoiding congestion without expanding area.

Ultimately, physical awareness might suggest changes to the floorplan which may deliver an even better implementation than originally considered.

Takeaway

Charlie Janac, CEO at Arteris, stressed this point in a recent SemiWiki podcast:

Physical awareness is helpful for back-end physical layout teams to understand the intent of the front-end architecture and RTL development teams.  Having a starting point that has been validated for latency and timing violations can significantly accelerate physical design and improve SoC project outcomes.  This is particularly important in scenarios where the architecture is being done by one company and the layout is being done by another. Such cases often arise between system houses such as automotive OEMs and their semiconductor design partners. Physical awareness is beneficial all around. It’s a win-win for all involved.

Commercial interconnect providers need to step up to make their NoC IP physically aware out of the box. This is becoming a minimum requirement for NoC design in advanced technologies. You might want to give Arteris a call, to understand how they are thinking about this need.

Also Read:

Arteris IP Acquires Semifore!

Arm and Arteris Partner on Automotive

Coherency in Heterogeneous Designs