LRCX down from here – 2023 down more than 20% due to China and Downcycle
by Robert Maire on 10-27-2022 at 10:00 am


- Lam reports good quarter that is the high water mark for the cycle
- 2023 WFE to be down more than 20%; cuts spending & hiring
- If we back out deferred revenue, the outlook would be down for December
- Memory already down & China down $2B-$2.5B in 2023

Lam reports good quarter but likely peak of current cycle with clear drop

Lam reported revenues of $5.1B and non-GAAP EPS of $10.42. Guidance was essentially flat at $5.1B +/- $300M, with EPS slightly down at $10.00 +/- $0.75. It seems quite clear that September 2022 represents the high water mark for the current cycle, and we will likely see declines from here, with 2023 WFE down over 20%.

Deferred revenue softens the peak and delays the revenue decline

Deferred revenues increased another $500M, from $2.2B to $2.7B. This represents, in part, incomplete systems that have been shipped and will be completed later, when parts are available.

This tends to reduce the peak revenue and mask or delay the actual decline in business. In our view, the December outlook would likely already be down, rather than appearing somewhat flat, without the benefit of deferred revenue.

China and memory/down cycle impacts are similar

Lam said that the loss of China customers would represent a $2B to $2.5B hit for 2023. Against the more than 20% decline in overall WFE that Lam projects, the down cycle and memory weakness likely account for more than half of the expected 2023 decline, with China making up the rest.

China was one of the biggest regions for Lam’s business and is obviously not coming fully back from here, if ever. It will take a while for the China business to stabilize, and we won’t know how it settles out for some time. Lam also commented that “China would be significantly lower” and that Korea would be down as well.

Lam commented on the call that the down cycle could be longer than a typical down cycle given the amount of inventory that had been built up.

“We know how to operate in a down cycle” Lam cuts hiring & manages spending

The company clearly admitted that we are in a down cycle (for all those doubters out there who said it’s no longer a cyclical industry). The bigger question at hand is how long? This was a very overheated cycle, and China was a big part of that overheating. Now a big slug of China business is gone and not likely coming back any time soon.

Lam is taking actions by “managing spending” and “slowing hiring to critical hires only”.

Lam remains mainly a turns business with strong competition

As we get into the down cycle, business for Lam will likely take a sharper downturn than for others, as it is mainly a turns type of business where orders are fulfilled relatively quickly in a normal environment. Lam does not usually have the luxury of the huge order book and backlog that ASML has. Customers can more easily cancel orders with Lam without fear of having to get back at the end of a long waiting line, as is the case with ASML. This could potentially lead to sharper business erosion, as memory customers tend to hit the brakes very hard, as evidenced by Samsung’s prior cutbacks.

Pricing could erode as competition for remaining business heats up

In addition, there are obviously many strong competitors to Lam, such as Applied Materials, Tokyo Electron, ASMI and others, who will all be going after a much smaller pie. It’s highly likely that pricing, and therefore margins, will be under pressure as it becomes more of a buyer’s market rather than the seller’s market it has been for quite a while.

While Lam does have some unique market niches, such as high aspect ratio etch used in NAND, the vast majority of its products face stronger competition and are far from ASML’s monopoly position.

The stocks

We have been very clearly telling investors to avoid Lam due to the risks and the clear downturn we are now in. It’s going to be quite a while, perhaps through 2023 and maybe beyond, before we get an indication of when the cycle will bottom and start to turn.

The typical time to buy these stocks is just before the business hits rock bottom, and we are certainly far from that right now; a bottom is not likely until sometime in 2023 at the earliest.

We would expect collateral damage to AMAT, as they are very much in a similar boat with a similar outgoing tide, as well as to KLA, though to a slightly lesser extent due to its more unique products.

The main problem is that the tide is going out for everyone with perhaps the main exception being ASML due to its supply limitation and monopoly. Sub suppliers to Lam, Applied and others tend to be at the end of the whip that is about to be cracked. Most all of them have been through many down cycles before and also know how to batten down the hatches as well as Lam does.

We warned it was going to get ugly and here it is

We also warned about the China risk for many years, and it clearly will have a long-running negative impact. The semiconductor industry and the US will likely be better for it in the long run, but the short term, over the next couple of years, will obviously be painful.

But no pain means no gain and upcycles always follow downcycles

 

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Is ASML Immune from China Impact?

Chip Train Wreck Worsens

Semiconductor China Syndrome Meltdown and Mayhem


Machine Learning Applications in Simulation
by Daniel Nenni on 10-27-2022 at 6:00 am


Machine learning (ML) is finding its way into many of the tools in silicon design flows, to shorten run times and improve the quality of results. Logic simulation seemed an obvious target for ML, though it resisted apparent benefits for a while. I suspect this was because we all assumed the obvious application should be to use ML to refine constrained random tests for higher coverage, which turned out not to be such a great starting point. My understanding of why is that those typical constraints are far too low-level to exhibit meaningful trends in learning; the data is just too noisy. Interestingly, we covered an Innovation in Verification topic some time ago in which (simulation) command-line parameters were used instead in learning. Such parameters represent system-level constraints, ultimately controlling lower-level constraints. That approach was more effective but is not so easy to productize, since such parameters tend to be application specific. While it sounds simple, many state-of-the-art ML solutions end up with ineffective results: there are so many randomizations in practical designs that finding key control points to steer test sequences, while leaving abundant randomness to stress designs, is a challenge.

Cadence has an innovation to address this challenge and recently posted a TechTalk on the progress that has been made.

Regression compression and bug hunting

Increasing coverage is important, but just as important is accelerating how quickly you reach a target coverage. Cut that time in half and you have more time in your schedule to find difficult bugs and to further increase coverage. There is an additional benefit: increased focus on rarely hit bins can improve the verification of rare scenarios. If a bug is found in some rare scenario, that area becomes suspect, possibly containing more bugs. Focus on such scenarios increases the likelihood that additional bugs in that area will be found. Naturally, it will also increase general testing around other bins, potentially improving coverage and bug exposure in those cases as well.

Overall, the tool improves the hit rate in areas that are difficult to hit and significantly improves testing of the environment around challenging areas, where holes may be correlated to targeted bins. A targeted attack on such cases may provide extra benefit in increasing coverage.

When can you apply learning?

Xcelium ML distinguishes between augmentation runs and optimization runs. Augmentation runs focus on specific areas of the design or on rare bins, with the goal of improving overall verification quality. Optimization runs compress the regression suite to do the same essential work with a fraction of the resources. Early in a project, while the tool is still actively learning and the coverage model is not yet mature, the recommendation is to stick to full regressions but use Xcelium ML to augment those runs to find bug signatures earlier.

By the middle of the project, the coverage model should be sufficient to start depending on compression plus augmentation for nightly regressions. These runs can be complemented periodically with a full regression, for example, running a full regression once a week and an ML-generated regression nightly. Later in the project you can depend even more on compressed runs with only occasional cross-checks against a full regression.

The results across a representative set of designs are impressive: better than 3X, and up to 5X, compression with negligible loss in coverage – as little as 0.1% in several instances, to a worst case of roughly 1%. Compression may be lower for test suites that have already been manually optimized, but even in these cases 2X compression is still typical. This provides good hints on where Xcelium ML will help most. For example, effectiveness has little to do with the design type and more to do with the testbench methodology. In general, the more randomization the testbench supports, the better the results you will see.
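To make the idea of regression compression concrete, here is a minimal sketch of the underlying concept: selecting a subset of tests that still covers the same coverage bins, using a greedy set-cover heuristic. To be clear, this is not Xcelium ML's algorithm, which learns from coverage data across runs; the test names and bin data below are hypothetical.

```python
# Minimal sketch of regression compression as greedy set cover.
# Not Xcelium ML's algorithm -- just an illustration of keeping
# coverage while dropping redundant tests. All data is hypothetical.

def compress_regression(test_bins):
    """test_bins: dict mapping test name -> set of coverage bins it hits.
    Returns an ordered subset of tests that covers the same bins."""
    remaining = set().union(*test_bins.values())  # bins not yet covered
    selected = []
    while remaining:
        # Pick the test that covers the most still-uncovered bins.
        best = max(test_bins, key=lambda t: len(test_bins[t] & remaining))
        gained = test_bins[best] & remaining
        if not gained:
            break  # no remaining test adds coverage
        selected.append(best)
        remaining -= gained
    return selected

# Hypothetical nightly suite: 5 tests, 6 coverage bins.
suite = {
    "t_rand_smoke": {"b0", "b1", "b2"},
    "t_dma_stress": {"b2", "b3"},
    "t_irq_corner": {"b3", "b4", "b5"},
    "t_reset_loop": {"b0", "b1"},        # redundant with t_rand_smoke
    "t_cache_mix":  {"b1", "b2", "b3"},  # redundant with others
}

kept = compress_regression(suite)
print(kept)  # ['t_rand_smoke', 't_irq_corner'] -> same bins, 2.5X fewer tests
```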

More detail

An excellent technical talk follows the introduction, explaining in more detail how the tool works and how to use it most effectively. One example they explain is a methodology to hit coverage holes.

You can learn more HERE.

Also Read:

Post-Silicon Consistency Checking. Innovation in Verification

New Cadence Joint Enterprise Data and AI Platform Dramatically Accelerates AI-Driven Chip Design Development

Test Ordering for Agile. Innovation in Verification


Bespoke Silicon Requires Bespoke EDA
by Michiel Ligthart on 10-26-2022 at 10:00 am


When I first heard the term ‘bespoke silicon,’ I had to get my dictionary out. Well versed in the silicon domain, I did not know what bespoke meant. It turns out to be a rather old-fashioned term for tailor made and seems to be very much British English. The word dates from 1583 and is the past participle of bespeak, according to the Oxford English Dictionary. American English by contrast more commonly uses the word custom. By now, custom silicon has been rebranded to bespoke silicon.

All the same, with plenty of attention in the industry [1], academia [2] and at conferences [3], I am now convinced bespoke silicon is here to stay.

But it seems that most participants in the bespoke silicon conversations miss out on one important aspect. Bespoke silicon requires bespoke EDA (electronic design automation), because one size does not fit all in EDA either. When I say bespoke EDA, I’m not talking about heavy-duty workhorses like place-and-route tools, static timing analyzers, or HDL simulators. These are fine-tuned to the hilt and support a wide variety of design styles. However, in the first steps of design creation, at the register transfer level (RTL), there are many side steps that can be made to improve the quality of a bespoke silicon design.

An interesting aspect is that it is difficult to provide tangible examples of bespoke EDA. System design houses that we interact with do not tell us what kind of bespoke EDA they are creating. After all, they do not want that information shared with their competitors, although one can easily guess the domains that get the most attention. Low power design tricks, design for test circuitry, intellectual property (IP) customization, and debug functionality quickly come to mind.

Standardization plays an important role in bespoke EDA. Without the likes of SystemVerilog, VHDL, UVM, UPF, and encryption standards, it would be difficult to move design content around different EDA tools.

Traditionally, EDA tools are written in C and C++ but those are not languages of choice for semiconductor design teams implementing bespoke EDA. Python, with its vast infrastructure and freely available open source packages, seems to be the front-runner here.

Another advantage that bespoke EDA can bring is the equal treatment of SystemVerilog and VHDL. Rather than dealing with different APIs to obtain, for example, all output ports in a VHDL entity or SystemVerilog module (and ending up with two scripts at the end), it is a lot more productive to have a single API aptly named all_outputs(object) and be done with it. Bespoke EDA’s building blocks will take care of it under the hood.
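To illustrate the kind of building block that implies, here is a small Python sketch of a language-agnostic all_outputs() helper. The classes and fields are hypothetical, not Verific's actual API; the point is simply that one call works whether the object came from a VHDL entity or a SystemVerilog module.

```python
# Hypothetical sketch of a language-agnostic helper; not Verific's API.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    direction: str  # "in", "out", "inout"

@dataclass
class DesignUnit:
    """A VHDL entity or SystemVerilog module after parsing (hypothetical)."""
    name: str
    language: str          # "vhdl" or "systemverilog"
    ports: list = field(default_factory=list)

def all_outputs(obj: DesignUnit) -> list[str]:
    """Return output port names, regardless of source language."""
    return [p.name for p in obj.ports if p.direction == "out"]

# One script works for both languages -- no duplicated VHDL/SV code paths.
ent = DesignUnit("fifo", "vhdl",
                 [Port("clk", "in"), Port("full", "out"), Port("dout", "out")])
mod = DesignUnit("fifo", "systemverilog",
                 [Port("clk", "in"), Port("full", "out"), Port("dout", "out")])
assert all_outputs(ent) == all_outputs(mod) == ["full", "dout"]
```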

As I mentioned before, hard-core EDA tools are not going to be replaced by home-grown bespoke EDA applications. But don’t be surprised if real innovation will come from inside semiconductor design organizations that focus significant efforts on their in-house bespoke EDA applications.

About Verific Design Automation

Verific Design Automation is the leading provider of SystemVerilog, Verilog, VHDL and UPF Parser Platforms that enable project groups worldwide to develop advanced electronic design automation (EDA) products quickly and cost-effectively. With offices in Alameda, Calif., and Kolkata, India, Verific has shipped more than 60,000 copies of its software, used worldwide by the EDA and semiconductor industries, since it was founded in 1999.

[1] https://www.ansys.com/blog/behold-the-dawning-of-the-era-of-bespoke-silicon

[2] https://www.bsg.ai/

[3] https://www.linkedin.com/pulse/designcon-bespoke-silicon-paul-mclella

Also Read:

Verific Sharpening the Saw

COO Interview: Michiel Ligthart of Verific


Podcast EP116: A Look at the Future of EDA Research With This Year’s Kaufman Award Winner, Dr. Giovanni De Micheli
by Daniel Nenni on 10-26-2022 at 8:00 am

Dan is joined by Dr. Giovanni De Micheli, a research scientist in electronics and computer science credited with inventing the network-on-chip (NoC) design automation paradigm and creating EDA algorithms and design tools. Before serving as Professor and Director of the Integrated Systems Laboratory at EPFL, he was Professor of Electrical Engineering at Stanford University. He holds a Nuclear Engineering degree from Politecnico di Milano, and Master of Science and Ph.D. degrees in Electrical Engineering and Computer Science from the University of California, Berkeley.

Dr. De Micheli is also this year’s recipient of the ESD Alliance and IEEE CEDA Phil Kaufman Award.

Dan explores some of the pioneering work being done by Dr. De Micheli and his students. Details on how the NoC paradigm was developed are discussed, along with other work such as superconducting electronics. The discussion ends with an assessment by Dr. De Micheli of what the future of EDA and semiconductors holds.

The views, thoughts, and opinions expressed in these podcasts belong to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Post-Silicon Consistency Checking. Innovation in Verification
by Bernard Murphy on 10-26-2022 at 6:00 am


Many multi-thread consistency problems emerge only in post-silicon testing. Maybe we should take advantage of that fact. Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Threadmill: A Post-Silicon Exerciser for Multi-Threaded Processors. The authors presented the paper at DAC 2011, and it is published in both the ACM and IEEE digital libraries. At the time of publication the authors were at IBM Research in Haifa, Israel.

The authors’ goal is to generate multi-threaded tests to run on first silicon, requiring only a bare metal interface. This exerciser is a self-contained program with all supporting data and library functions. A compile step pre-computes and optimizes as much as possible to minimize compute requirements in the exerciser. On-silicon, the exerciser generates multiple threads with shared generated addresses to maximize potential collisions. The exerciser can run indefinitely using new randomization choices for each set of threads.

When consistency mismatches are found, they are taken back to emulation for debug. These emulation runs start with the same random seeds used in the exerciser, rolled back a few tests before the point where the mismatch was detected.

Paul’s view

Great paper on a neat tool from IBM Research in Israel. A hot topic in our industry today is how to create “synthetic” post-silicon tests that stress hardware beyond what is possible pre-silicon, while also being composed of many short, modular, and distinct semi-randomized auto-generated sequences. Hardware bugs found by running real software workloads post-silicon typically require billions of cycles to replicate, making them impractical to port back to debug-friendly pre-silicon environments. But bugs found using synthetic tests can be easily replicated by replaying only the relevant synthetic sequence that triggered the bug, making them ideally suited to replication in pre-silicon environments.

Threadmill is a valuable contribution to the field of post-silicon synthetic testing of multi-threaded CPUs. It takes a high-level pseudo-randomized test definition as input and compiles it into a self-contained executable to run on silicon. This executable generates randomized instruction sequences conforming to the test definition. Sequence generation is “online” in the silicon, without the need to stream any data from an offline tool, allowing generation of massive amounts of synthetic content at full silicon performance.

Lacking a golden reference for tests, Threadmill runs each test several times and checks that the final memory and register values at the end of each run are the same. This approach catches only non-determinism-related bugs, e.g. those related to concurrency across multiple CPU threads – hence the tool’s name. The authors rightly point out that such concurrency-related bugs are some of the most common bugs that escape pre-silicon verification of modern CPUs. Threadmill can offer high value even with this somewhat limited scope.

The bulk of the paper is devoted to several techniques the authors deploy to make Threadmill’s randomization more potent. These techniques are clever and make for a wonderful read. For example, one trick is to use different random number seeds on each CPU thread for the random instruction generator, but the same random number seed across all CPU threads for random memory address generation. This trick has the effect of creating different randomized programs running concurrently on each CPU, but with each of these programs having a high probability of catching bugs related to memory consistency in the presence of read/write race conditions. Nice!
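As a rough sketch of that seeding trick, and of the multi-pass consistency check described above, consider the toy Python fragment below. It is not the authors' implementation; it just shows per-thread instruction seeds combined with a shared address seed, so that threads run different programs yet collide on the same memory locations.

```python
# Sketch of the Threadmill seeding idea and multi-pass check; hypothetical,
# not the authors' code. Each thread picks instructions from its own RNG but
# draws target addresses from an identically seeded RNG, so threads collide
# on the same memory locations while executing different programs.
import random
from threading import Thread

ADDRESSES = [0x100 + 0x10 * n for n in range(16)]   # shared collision window
OPS = ["load", "store", "add", "cas"]

def run_test(seed_base, n_threads=4, n_ops=1000):
    memory = {a: 0 for a in ADDRESSES}              # final state we compare

    def worker(tid):
        instr_rng = random.Random(seed_base + tid)  # different seed per thread
        addr_rng = random.Random(seed_base)         # same seed for all threads
        for _ in range(n_ops):
            op = instr_rng.choice(OPS)
            addr = addr_rng.choice(ADDRESSES)       # high collision probability
            if op == "store":
                memory[addr] = tid                  # racy on real hardware
            # loads/adds/cas elided in this sketch

    threads = [Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return memory

# Multi-pass consistency check: same seeds, two passes, compare final state.
pass1 = run_test(seed_base=42)
pass2 = run_test(seed_base=42)
# On real silicon a mismatch flags a concurrency bug; in this toy, the
# unsynchronized stores make scheduling-dependent mismatches possible too.
print("consistent" if pass1 == pass2 else "mismatch -> roll back, replay on emulation")
```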

Raúl’s view

IBM’s functional verification methodology for the POWER7 processor (2011) consisted of a unified methodology including pre- and post-silicon verification. Differences between pre- and post-silicon include speed, observability, and the need for a lightweight exerciser they can load into the bare-metal chip. Both Threadmill and the pre-silicon platform (Genesys-Pro) use templates like “Store R5 into addresses 0x100+n*0x10 for addresses <0x200” to generate testcases. The key to the unified methodology is using the same verification plan, the same languages, templates, etc.

The authors describe Threadmill at a high level, with many interesting details. One example is the need to run coverage analysis on an accelerator rather than on the chip, because the limited observability of silicon does not allow coverage to be measured there. The exerciser executes the same test case multiple times and compares results; multi-pass comparison is limited but has proven effective in exposing control-path bugs and bugs at the interface between the control and data paths. Branches are generated statically, using illegal-instruction interrupts before a branch to force taking one particular branch. Data for floating point instructions is generated as a combination of tables of interesting data for a given instruction and random values. Generation of concurrent tests (multiple threads) relies on shared random number generators, e.g., to select random collision memory locations. They debug failing tests by restarting the exerciser on the acceleration platform a few tests before the failure.

Coverage results indicate at least one high-impact bug exposed on an accelerator before tape-out. Also, “Results of this experience confirm our beliefs about the benefits of the increased synergy between the pre- and post-silicon domains and of using a directable generator in post-silicon validation.”

The papers are easy to follow and fun to read, with lots of common sense. The shared coverage and collision experiment results are hard to judge; one must rely on the authors’ comments on the contribution of post-silicon validation to their methodology. Post-silicon validation is a critical component of processor design, ultimately intertwined with design itself. Every group designing a complex processor will use its own methodology. It continues to be a fertile publication area; in 2022 alone, Google Scholar lists over 70 papers on this subject.

My view

I’m not sure about the relevance today of the post-silicon floating point tests. The memory consistency tests make a lot of sense to me.

 


Higher-order QAM and smarter workflows in VSA 2023
by Don Dingee on 10-25-2022 at 10:00 am

Higher-order QAM and smarter workflows in a 5G NR example from the Keysight 89600 VSA

3GPP Release 17 and 802.11be (Wi-Fi 7) extend modulation complexity, raising the bar for conformance testing and test instrumentation to new levels. The latest release of Keysight PathWave 89600 Vector Signal Analysis 2023 (VSA 2023) software takes on higher-order QAM and smarter workflows, many of them customer-requested, for advanced signal analysis. We’ll focus on four areas of enhancements: 5G NR (and, for research, 6G) measurements, cross-correlated EVM, multi-measurements combining EVM with ACP and SEM, and event-based user actions.

Accelerating 5G, O-RAN, and 6G research

In tracking the 3GPP specification and aligned specifications such as O-RAN WG4 fronthaul and leaning into 6G research, Keysight’s objective with the 89600 VSA is accelerating complex RF system research and development workflows. A visual example shows the range of measurements possible from one IQ data acquisition.

3GPP Release 17 brings 1024QAM, a much denser constellation than the 256QAM in previous releases. The VSA 2023 release supports I/Q reference values (all zeroes or PN23) for PDSCH EVM calculations, avoiding the problem of obtaining “ideal” constellation points from estimation in high noise or system impairments.

Another enhancement for 5G NR in the VSA 2023 release is expanding component carrier views. Even though the 3GPP specification still calls for 16 component carriers, researchers are exploring using up to 32. VSA 2023 users can display data from 32 component carriers with a simple pull-down menu, allowing a side-by-side view of all component carriers for easier performance comparison.

With design teams already looking ahead to 6G but its specifics still in flux, Keysight has also incorporated user-defined constellations and modulation formats in VSA 2023. “We’re trying to be as aggressive as we know how about 6G research topics,” says Raj Sodhi, Keysight 89600 VSA Product Manager.

Several dBs of improvement in EVM measurements

EVM is a critical spec for 5G NR, Wi-Fi 7, and other advanced wireless systems. “If we could get 3 to 5 dB better in EVM measurements, who would care? The answer is everyone,” says Sodhi. With higher-order constellations and broader bandwidths, margins are already tight. For accurate EVM readings, test equipment needs to be distortion-free and much quieter than the specification by a substantial margin.

Cross-correlated EVM (ccEVM) is a Keysight algorithm that eliminates instrument-induced uncorrelated noise using two independent analyzer channels, lowering the EVM noise floor. Depending on how many points the ccEVM measurement in VSA 2023 uses, improvements of 5 dB or more are straightforward, and theoretically, up to 20 dB improvement is possible.
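The principle is easy to sketch in a few lines of NumPy. This is an illustration of the math only, not Keysight's ccEVM implementation, and the symbol count and noise levels are invented: both channels see the same transmitter error, but their receiver noise is independent, so the cross term converges to the true error power while the uncorrelated noise averages toward zero.

```python
# Sketch of the cross-correlated EVM idea; illustrative math only,
# not Keysight's ccEVM implementation. Symbol counts and noise levels
# are made-up values.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# 16QAM-like reference symbols (unit average power) plus a small true
# transmitter error that both analyzer channels observe identically.
ref = (rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)) / np.sqrt(10)
tx_error = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def channel_noise(scale=0.03):
    return scale * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

meas1 = ref + tx_error + channel_noise()   # analyzer channel 1
meas2 = ref + tx_error + channel_noise()   # analyzer channel 2, independent noise

e1, e2 = meas1 - ref, meas2 - ref
ref_pwr = np.mean(np.abs(ref) ** 2)

evm_single = np.sqrt(np.mean(np.abs(e1) ** 2) / ref_pwr)              # noise-limited
evm_cc = np.sqrt(max(np.mean(np.real(e1 * np.conj(e2))), 0) / ref_pwr)

print(f"single-channel EVM ~ {20*np.log10(evm_single):.1f} dB")   # about -30 dB
print(f"cross-correlated EVM ~ {20*np.log10(evm_cc):.1f} dB")     # about -40 dB (true ~1%)
```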

Conformance test peace of mind with smarter workflows

While EVM defines system performance, two other specifications are essential in conformance testing: adjacent channel power (ACP) and spectral emission mask (SEM). Until the VSA 2023 release, RF system designers had to bring several instruments and spectrum analyzer software applications, each with their own setups, to get all three measurements. “Now, we’ve brought ACP, EVM, and SEM under one roof,” says Sodhi.

A new measurement application in VSA 2023 has ACP and SEM presets for 5G base station conformance tests. But it is flexible, allowing measurement configuration for any signal type. It formalizes support for single and multi-carrier ACP, EVM, and SEM measurements from single or sequential acquisitions depending on the hardware IF bandwidth.

This capability leads into “contexts” – a simple setup for 5G NR conformance tests. One setup can support all test models, bandwidths, numerologies, and more.

Advanced triggering with event-based actions

Often, setting up a complex VSA measurement is just the beginning of capturing data. Signal conditions can change rapidly during normal system operation, throwing test results out of whack. A good example is a system with kinematics, such as a satellite in a 5G non-terrestrial network (NTN), where Doppler shift can cause enough frequency error to pull the VSA out of sync during capture.

To cope with this scenario, VSA 2023 adds event-based actions. Users can choose a metric – such as frequency error, EVM, or any data source in a summary table trace in the VSA – and set a condition to monitor. When the condition is met, one of two actions can occur: pause the measurement or run a macro.

In the 5G NTN example, when the Doppler shift drives the frequency error over the selected condition of 10 kHz, a macro runs recentering the VSA on the shifted frequency, continuing the capture seamlessly. A macro can be essentially any set of commands for the VSA, for example, automating a setup change, saving anomalous data to a file, adding markers to zoom in traces, and so on. Advanced triggers based on measurements become possible without extensive experiments to find the right combination of settings.
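Under the hood this is the familiar monitor-and-callback pattern. The sketch below uses hypothetical helper names rather than the VSA's actual macro or SCPI interface, just to show the shape of an event-based action for the NTN Doppler case.

```python
# Pattern behind event-based actions, using hypothetical helper names --
# this is not the 89600 VSA macro or SCPI API, just the control-loop shape.
import time

def monitor_and_act(read_metric, threshold_hz, recenter, poll_s=0.1):
    """Poll a metric; when it crosses the threshold, run the action."""
    while True:
        freq_error = read_metric("FreqError")      # e.g. from a summary-table trace
        if abs(freq_error) > threshold_hz:
            recenter(freq_error)                   # "macro": retune center frequency
        time.sleep(poll_s)

# Hypothetical wiring for the 5G NTN Doppler case (names are illustrative):
#   monitor_and_act(vsa.read_summary_metric, threshold_hz=10e3,
#                   recenter=lambda err: vsa.set_center_freq(vsa.center_freq() + err))
```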

More enhancements for wireless connectivity

Three more features round out the VSA 2023 release, including some designed for teams working in 6G research.

  • Wi-Fi 7 (802.11be) teams are dealing with even higher-order modulation in the form of 4096QAM. Similar to the situation for 5G NR and 6G, pulling ideal constellation points out of noise and impairments can be problematic. Users can specify reference symbols for more accurate EVM measurements. Also, the VSA now automates CRC checking by decoding the PHY Service Data Unit (PSDU).
  • UWB changes keep pace with the FiRa specification, with a Keysight engineer now helping drive specification updates. Pulse shaping is computed per the specification in the Baseband Pulse Mask trace, and the Frame Info trace results are enhanced. Synchronization improvements help capture results from multiple UWB frames.
  • FlexFrame now includes an adaptive equalizer, increasingly crucial as channel bandwidths grow and impairments make EVM measurements more difficult. FlexFrame continuously evolves the channel estimation with a least mean squares (LMS) algorithm in the time domain and a zero-forcing algorithm in the frequency domain. For 6G research, an XCorrelation equalization mode, equalizer filter track or hold, and LMS algorithm speed improvements offer more stable equalization.

Discover more about PathWave 89600 VSA 2023

“VSA has been the spearpoint of signal analysis technology, especially for EVM measurements,” says Sodhi. “Often, customers would start with the VSA, then migrate to spectrum analyzer applications for coverage of out-of-band measurements. With the VSA 2023 release, they can start with VSA and stick with VSA for measurements like ACP and SEM – a better workflow using the same measurement science.” Here are some resources for discovering more about higher-order QAM and smarter workflows in the PathWave 89600 VSA 2023 release.

 

Web page:

What’s New in 89600 VSA?

 

Videos:

89600 VSA Software PowerSuite Demo for 5G NR ACP and SEM Measurements

802.15.3d Signal Analysis using the 89600 VSA’s Adaptive Equalizer in FlexFrame

Event-Based User Actions in the 89601200C VSA Base Platform

5G Transmitter Validation using PathWave System Design and 89600 VSA Software

 

Press release:

Keysight Accelerates RF System Design and Digital Mission Engineering Workflows for 5G Non-Terrestrial Networks

Also Read:

Advanced EM simulations target conducted EMI and transients

Seeing 1/f noise more accurately

Unlocking PA design with predictive DPD


The Corellium Experience Moves to EDA
by Lauro Rizzatti on 10-25-2022 at 6:00 am


Bill Neifert invited me to join him on Zoom recently to talk about his move to Corellium, a company known within the DevSecOps (development, security, operations) market. Developers and security groups use its virtualization technology to build, test, and secure mobile and IoT apps, firmware, and hardware.

Not knowing much about DevSecOps, I agreed, though wondered how Neifert ended up in what seems like a market far removed from EDA.

Neifert and I have known each other for many years, beginning when we were both co-founders of EDA startups. I was at Emulation and Verification Engineering, known as EVE and now part of Synopsys. He was CTO of Carbon Design Systems, now part of Arm. We did hardware emulation. Carbon did fast, cycle-accurate, system-level models. What we didn’t realize then was that the design and verification community was moving up the abstraction level. EVE and Carbon today would offer complementary tools, and we might have signed a partnership or reseller agreement.

Neifert’s big news is Corellium’s move into our EDA ecosystem by forging a partnership with Arm to leverage its Arm hypervisor-based virtualization technology to enable hardware/software co-design for developers to verify and validate embedded and IoT applications.

Corellium is well-known in the security market and operates in the Arm virtualization space through its Arm hypervisor, executing Arm workloads directly on an Arm server. The typical application is modeling mobile phones by executing the latest versions of an OS for security and vulnerability research. This technology can be applied to other market segments, such as hardware/software co-design and general-purpose Arm workloads. The IoT software development market is also ripe for change, since the methodologies are basic by developer standards and not scalable.

I was intrigued and urged Neifert to tell me more, especially as IoT is emerging as the next frontier for electronic devices and software developers today often outnumber chip designers in a project group. Their needs become acutely apparent if they are forced to wait for first silicon to start software development, which could delay project schedules. The popular solutions are hardware emulators or FPGA prototypes, which can be costly shared resources that are difficult to debug. Virtual prototypes are popular but require engineers to assemble them and write models for any new components when the third-party IP vendor doesn’t provide them. Speed is an issue as well.

By contrast, Corellium’s virtualization technology runs orders of magnitude faster than traditional virtual models, a benefit software developers appreciate. In fact, a virtualized model of an IoT device runs much faster than the actual physical device.

The cloud gives a huge advantage to IoT device virtualization over older methodologies. Virtualized devices can be used independently of hardware availability. They can be created while chips are in development, with the functionality of the real hardware plus the debuggability and observability that come from virtual execution. They can scale up or down, be deployed around the world via the internet, and tie into cloud-based flows and resources to enable continuous integration (CI)/continuous deployment (CD) and DevSecOps.

It’s a compelling story. It is also obvious why Neifert joined Corellium. He agreed and wished Corellium had been around in Carbon’s early days, because its virtualization technology is so much better than anything the EDA community was building.

To learn more about Corellium and its partnership with Arm, visit avh.arm.com or iot.corellium.com.

Also Read:

New Cadence Joint Enterprise Data and AI Platform Dramatically Accelerates AI-Driven Chip Design Development

Clock Aging Issues at Sub-10nm Nodes

The Increasing Gap between Semiconductor Companies and their Customers


New Cadence Joint Enterprise Data and AI Platform Dramatically Accelerates AI-Driven Chip Design Development
by Kalar Rajendiran on 10-24-2022 at 10:00 am


Without data, there is no computing field to talk about, no technology world to be in awe of, and not much of a semiconductor industry to work in. There is no argument that data is the foundational piece for everything; it has been to date and will continue to be. While processing an application’s input data is essential to serve the intended purpose, a lot of collateral data is typically generated in the process of creating the desired output. Chip development projects are high on the list when it comes to the amount of collateral data generated. Analyzing collateral data could provide insights to enhance future products, yet it is not frequently done.

Things are changing. The availability of compute-related resources has grown tremendously over the years. Advancements in artificial intelligence (AI) and machine learning (ML)-based technologies have made intelligent analysis software possible. Over the last couple of years, Cadence has released many AI-driven software products to dramatically benefit the chip development efforts. And recently, Cadence announced its Joint Enterprise Data and AI (JedAI) Platform that unifies data sets across all Cadence computational software. The Cadence JedAI Platform is a big data analytics infrastructure that dramatically improves productivity and power, performance and area (PPA) by enabling AI-driven applications.

Three Major Aspects of Product Development

Before discussing the salient aspects of the JedAI platform, it’s worthwhile to summarize the AI-driven Cadence products that have already been in use for some time now.

Design

Last year, the company announced the Cadence Cerebrus™ Intelligent Chip Explorer, an AI-driven, automated RTL-to-GDS full-flow optimization tool. It takes PPA targets for a design or a block within a design. The user can provide a start and end point of the flow or tell the tool to do the full flow. It can run hundreds of different experiments very quickly and search a much larger space than is possible via manual means. Through ML techniques, Cerebrus helps increase EDA flow efficiencies and enables implementation tools to quickly converge on better placement and route and timing closure.

Optimization

Announced earlier this year was Optimality™ Intelligent System Explorer, Cadence’s system optimization platform. The platform delivers for system design what the Cerebrus platform delivers for chip design. The tool quickly and efficiently explores the design space to produce optimal electrical design performance. It helps optimize the system design by applying AI techniques to the results of multi-physics analyses.

Verification

Cadence recently announced the Verisium™ Artificial Intelligence (AI)-Driven Verification Platform. Verification and debug are major aspects of all chip development projects and rely on experience and creativity to execute. The Verisium platform is built on the JedAI Platform and is natively integrated with Cadence verification engines. It is a multi-run, multi-engine tool that applies AI models and ML techniques to perform root cause analysis for debug, optimize verification workloads and increase coverage.

Cadence JedAI Platform

Whether it is the design, verification or optimization efforts, a tremendous amount of data is generated. The data can be broadly categorized into design data, workflow data and workload data. For example, design RTL, netlist, physical layout shapes, timing analysis reports, etc., would fall into the design data category. The workflow data category would contain information about the specific tools and methodology used in the design process. And workload data refers to data about runtime, memory and storage usage and job inputs and inter-dependencies.

The JedAI Platform applies AI algorithms on the above types of data to optimize multiple runs of multiple engines across an entire SoC design and verification flow. It also analyzes historical workload data to predict resource requirements and schedule jobs for increased server farm utilization for both on-prem and on-cloud scenarios. The Platform allows engineering teams to visualize and uncover data trends and automatically generate strategies for improved design performance and engineering productivity.

Benefits of Cadence JedAI Platform

By themselves, the earlier mentioned AI-driven design, verification and optimization platforms enable enhanced productivity and PPA benefits compared to traditional approaches. Even higher levels of benefits can be derived if data from these different platforms can be cross-leveraged. The JedAI Platform makes that possible by offering a common infrastructure for inter-communications. Built on top of this common infrastructure, data connectors allow for bi-directional transfer from a wide variety of Cadence tools and data sources. Also provided are general-purpose open data connectors for designers to easily import third-party data as needed. Support of open industry-standard user interfaces such as Python, Jupyter Notebook and REST APIs enable designers to create custom analytics applications as needed.

One of Many Use Cases

Various RTL modules of an SoC connect to different parts of the SoC using input and output ports. Understanding the timing criticality between the various modules is key to achieving the PPA goals of the SoC. A static timing analysis report does not make it easy or quick to gain these insights. But with the module timing visualization and trend analysis app that is included with the JedAI Platform, customers can generate a module-based timing criticality report. The data visualization and analytics features of the JedAI Platform highlight the modules that communicate and the associated port timings.

As the JedAI Platform can store many revisions of the SoC design data, it is possible to see how the RTL port timing changes based on different revisions of the source RTL. Equipped with this insight, RTL designers can make effective changes for improving timing within and across module boundaries. The changes are then communicated to the SoC implementation platform using the Innovus™ Implementation System’s data connector.
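Since the platform exposes Python and Jupyter Notebook interfaces, a custom analysis of this kind might look roughly like the pandas sketch below. The column names and slack numbers are hypothetical, not JedAI output; it simply flags ports whose worst slack degraded between two RTL revisions.

```python
# Hypothetical sketch of a cross-revision port-timing comparison of the kind
# a JedAI analytics app could produce; column names and data are invented.
import pandas as pd

# Worst slack (ns) per module output port, extracted from STA runs of two RTL drops.
rev_a = pd.DataFrame({
    "module": ["dma", "dma", "l2c", "l2c"],
    "port":   ["req_o", "data_o", "hit_o", "evict_o"],
    "slack":  [0.12, 0.05, 0.30, 0.08],
})
rev_b = rev_a.copy()
rev_b["slack"] = [0.10, -0.02, 0.31, 0.01]   # second revision of the same ports

trend = rev_a.merge(rev_b, on=["module", "port"], suffixes=("_revA", "_revB"))
trend["delta"] = trend["slack_revB"] - trend["slack_revA"]

# Ports whose timing got worse across revisions: candidates for RTL rework
# before handing updated constraints to implementation.
print(trend[trend["delta"] < 0].sort_values("delta"))
```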

Summary

With the Cadence JedAI Platform, Cadence has unified its computational software innovations in data and AI across its Verisium verification platform, its Cerebrus implementation platform, and its Optimality system optimization platform. The revolutionary JedAI Platform enables customers to meet increasingly stringent PPA, productivity and time-to-market demands of their respective market segments.

The Cadence JedAI press announcement can be found here and more details can be accessed in the product section of the Cadence website.

Also Read:

Test Ordering for Agile. Innovation in Verification

Finally, A Serious Attack on Debug Productivity

Hazard Detection Using Petri Nets. Innovation in Verification


Intel Foundry Services Forms Alliance to Enable National Security, Government Applications
by Daniel Nenni on 10-24-2022 at 6:30 am


This will be the year of the semiconductor foundry ecosystem, absolutely. Right in between the Samsung SAFE Forum and the TSMC OIP Open Ecosystem Forum, Intel Foundry Services (IFS) just announced a United States Military, Aerospace, and Government (USMAG) Alliance.

Brilliant move, of course, due to the US Government now being actively involved in the semiconductor industry (CHIPS Act) and the current geopolitical landscape where the defense electronics market is literally exploding.

Early in my career I worked in the federal systems group of a computer company and was read into a Reagan era Star Wars project. It was mind blowing. Not only does this keep us safe, the commercial sector gets to benefit from this type of R&D and Hollywood gets “inspiration” for future entertainment. Seriously, I read a Tom Clancy novel that is now a movie that was strikingly similar to a project I participated in. Enough said, I don’t want a visit from the FBI.

As with automotive and other consumer markets, the semiconductor content of defense-related electronics is increasing exponentially. Drones, smart weapons and vehicles are just a few examples, and the lifespan of some of these products is shorter than that of a smartphone.

The issue at hand is: How do we secure the supply chain for defense-related electronics? As we have discovered in Ukraine, Russian weapons are filled with US-based chips, and that has to stop.

The announcement:

Intel Foundry Services (IFS) today launched a strategic addition to its design ecosystem Accelerator program. The new USMAG (United States Military, Aerospace and Government) Alliance brings together a trusted design ecosystem with U.S.-based manufacturing to enable assured chip design and production on advanced process technologies and meet the stringent design and production requirements of national security applications. A first in the industry, the program’s initial members include leading companies like Synopsys, Cadence, Siemens EDA, Intrinsix and Trusted Semiconductor Solutions.

“Semiconductors enable technologies critical to U.S. national security and economic and global competitiveness. Intel is committed to restoring end-to-end U.S. chipmaking leadership through major investments in both R&D and scale manufacturing here in the United States. As the only U.S.-based foundry with leading-edge process capabilities, IFS is uniquely positioned to lead this effort and galvanize the ecosystem to build a more resilient and secure supply chain for U.S. military, aerospace and government customers.”–Randhir Thakur, president of Intel Foundry Services

Let me remind you that IFS employs two key ecosystem executives, Suk Lee and Lluis Paris, whom I worked with over at TSMC. They built the mighty TSMC OIP and proved without a shadow of a doubt that ecosystem is everything. Why is this important to us personally? Because this will help keep our families safe.

Why It’s Important: National security and government applications focus on securing vital information systems and decision networks, requiring scalable chip design and production capabilities. Leading-edge semiconductors are the bedrock of these systems and networks. In addition to requiring the most advanced process technologies, MAG applications also impose unique functional requirements like radiation hardening by design, wide ambient temperature tolerance and others. Securing these chips requires end-to-end capabilities across the semiconductor design and manufacturing life cycle. A closely coordinated effort between advanced manufacturers and their electronic design automation (EDA), IP and design service alliance members is crucial to deliver the functional and operational security required by MAG applications.

How It Works: Through the USMAG Alliance, IFS will collaborate with members to enable their readiness to support MAG designs on leading-edge technology nodes. The alliance will ensure that EDA members’ tools are optimized to deliver secure design methodologies and flows and enabled to operate in secure design environments, while meeting the requirements of IFS’ process design kits (PDK). IFS will also work with IP-provider members to deliver design IP blocks that serve MAG specifications for quality and reliability. Finally, IFS will enable the members who provide design services to implement USMAG design projects using IFS reference flows and methodologies. The USMAG Alliance will provide an assured and scalable path for customers to deploy designs that fully achieve the unique requirements of MAG applications.

About the IFS Accelerator: In February 2022, IFS launched its Accelerator design ecosystem program to help foundry customers bring their silicon products from idea to implementation. Through deep collaboration with industry-leading companies, IFS Accelerator taps the best capabilities available in the industry to help advance customer innovation on Intel’s foundry platform offerings. The IFS Accelerator provides customers a comprehensive suite of tools, including validated EDA solutions, silicon-verified IP and design services that allow customers to focus on creating unique product ideas.

You can see the full press release HERE.

Also Read:

Intel Lands Top TSMC Customer

3D Device Technology Development

Intel Foundry Services Puts PDKs in the Cloud


GM’s Bad RTO Optics
by Roger C. Lanctot on 10-23-2022 at 6:00 pm


The automotive industry has been uniquely whipsawed by the COVID-19 pandemic. Factories, dealerships, and offices were shuttered in its earliest days undercutting both supply and demand.

The industry impacts spread outward from these initial shocks with major auto shows folding their tents and ripples of supply chain disruptions roiling normally reliable sourcing arrangements for vital semiconductors. What slowly dawned on industry participants was the reality that some changes might be longer lasting.

Sure, workers may have returned to the factories, but what would happen with white collar workers? Would supply chains heal? Would traditional auto shows return? Customers returned right away. Demand never slackened.

New answers to these questions are emerging daily. The first post-pandemic auto show outside of China was the Munich IAA event last fall. It was different – with suppliers intermingled with car makers on the show floor – and it was reasonably successful.

The L.A. Auto Show that followed later last fall was less successful and signaled that a wider recovery in typical consumer-centric auto shows was not yet in the offing. Now the Detroit auto show has come and gone with its own reverberations of disappointment.

The bigger question emerging from the lackluster consumer reaction to the Detroit Auto Show (officially the North American International Auto Show) is why car company executives expect consumers to turn out to an auto show if rank and file white collar workers won’t show up at the office. The headline in the Detroit Free Press told the tale this week: “GM Steps Back on Return-to-Work Policy after Backlash from Salaried Workers.”

For many, the home of CEO Mary Barra’s “dress appropriately” mantra was not having any of this “work appropriately” request.

As a post-pandemic frequent traveller, I feel a need to express my shock at the situation unfolding at GM. Plenty of workers across the country and throughout the world have had to return to work – in fact, they never left work! I see and interact with these workers every day in my travels.

We don’t think about these workers. We don’t notice them. We take them for granted. They are the workers in hospitality, retail, restaurants, health care, and transportation.

When the automotive industry was on its knees in the spring of 2020, the factory workers – the car makers – they came back. And they did so with barely a whimper!

The actual car makers helped ensure that the industry’s recovery was swift. Factory workers returned en masse to automobile plants that had been modified to accommodate pandemic hygiene. And dealerships, too, re-opened to customers who still had a hankering to kick tires and take test drives.

Strangely, the offices of car makers and their suppliers largely remained shuttered. It became clear this was the case when visitors discovered that in-person meetings were impossible with most headquarter facilities unoccupied.

The revolt of white collar workers at GM is truly revolting. How do they account for their sentiments as they ride in their Ubers and Lyfts or accept their bag of pretzels from the flight attendant or receive their key to their accommodation? How do they explain THEIR outrage to the Starbucks coffee maker, the mailman, the convenience store clerk?

If you were hired – pre-pandemic – to work in an office, you are obligated now, nearly three years after the onset of the pandemic, to return to that office. But, seriously, if you work in the automotive industry, which is uniquely dependent upon hundreds of thousands of workers building your products, you have a moral obligation to show up physically (with exceptions for those with medical conditions that may render them vulnerable). If for no other reason, you should show up in the office to demonstrate your solidarity with those folks at the plants. The plant workers took their places on the production line for you and me and the industry. What’s your contribution?

Also Read:

U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

Siemens EDA Discuss Permanent and Transient Faults

Super Cruise Saves OnStar, Industry