

A Fresh Look at HLS Value
by Bernard Murphy on 06-21-2022 at 6:00 am


I’ve written several articles on High-Level Synthesis (HLS): designing in C, C++ or SystemC, then synthesizing to RTL. There is unquestionable appeal to the concept. A higher level of abstraction enables a function to be described in fewer lines of code (LOC), which immediately offers higher productivity and implies fewer bugs, because the number of bugs in any kind of code scales pretty reliably with LOC. Simulation for architectural design and validation runs multiple orders of magnitude faster, allowing broader experimentation with options. It can also run much larger tests, like image recognition on streaming video, a tough goal for RTL simulations. Yet these methods have seemed largely restricted to specialized design objectives: signal processing functions, some simple ML inference engines, that sort of thing.
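
To make the LOC argument concrete, here is a minimal, generic sketch (my own illustration, not code from the webinar) of the kind of untimed C++ an HLS flow starts from: a streaming FIR filter in a handful of lines. Production HLS code would typically use fixed-point types and tool-specific pragmas, but even so, the equivalent hand-written RTL would need explicit state machines, handshakes and pipeline registers.

```cpp
// Untimed C++ FIR filter: the sort of algorithmic description an HLS tool
// would unroll, pipeline and schedule onto multipliers and registers.
#include <array>
#include <cstddef>

template <std::size_t NTaps>
class Fir {
public:
    explicit Fir(const std::array<float, NTaps>& coeffs) : coeffs_(coeffs) {}

    // Push one input sample, get one filtered output sample back.
    float step(float x) {
        // Shift the delay line by one sample.
        for (std::size_t i = NTaps - 1; i > 0; --i) delay_[i] = delay_[i - 1];
        delay_[0] = x;
        // Multiply-accumulate across all taps.
        float acc = 0.0f;
        for (std::size_t i = 0; i < NTaps; ++i) acc += coeffs_[i] * delay_[i];
        return acc;
    }

private:
    std::array<float, NTaps> coeffs_{};
    std::array<float, NTaps> delay_{};   // zero-initialized delay line
};
```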

I’m always willing to be re-educated, especially when I can hear from customers. Siemens EDA just hosted a webinar, mostly customer talks on the use of HLS with just a little marketing thrown in. It was pretty much a full day of presentations, centering on a few core applications, and it made me rethink my position. The algorithm classes the technology best serves haven’t changed so much. What has changed is that big market needs have shifted to overlap more with those algorithms. Check out which companies presented on these topics. Naturally, when these speakers talked about HLS, they meant Catapult from Siemens EDA.

Video Codecs

There’s been a massive worldwide increase in cloud video workload. According to Google, video now accounts for more than 80% of internet traffic, thanks to streaming and YouTube in particular. Aki Kuusela of Google said that this volume demands warehouse-scale encoding with fast throughput. From his perspective, the whole warehouse must be viewed as a system – storage, networking, codec, compute, etc. – to optimize for this level of traffic and throughput. Moreover, codecs must seamlessly support a variety of video formats, from the latest formats to popular and legacy standards. Think of YouTube: 500 hours of new content are uploaded every minute, and tens of thousands of live streams must be served simultaneously.

Off-the-shelf solutions can’t meet this need. For the same reason Google built their own ML training platforms (TPUs), they are building their own codecs, which must be optimized across a combination of traffic diversity, quality, throughput, and availability that only they can reproduce. Google started early with HLS to integrate with the YouTube stack. Nvidia is doing very similar work, also on video codecs. The world leader in GPUs, for gaming, for graphics, for AI, needs to have the fastest and highest quality video. Of course they are building their own codecs.

Object detection for the Mars sample return program

Another cool video example (though not a codec) is from NASA/JPL, the team that brought you Ingenuity, the Mars helicopter. Now they are designing a Harris corner detector, an image feature-detection algorithm, as part of development for the Mars sample return project. The original implementation was in RTL as a DSP-like function, but this proved difficult to optimize. The speaker described approaches using SystemC, implementing either a DSP process or a Kahn (essentially self-timed) process, using the flexibility HLS offers to experiment with these options.
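
For reference, the Harris response itself is a compact computation. The sketch below shows the standard formulation (R = det(M) - k·trace(M)², with M the windowed structure tensor of image gradients). It is a generic illustration of the algorithm JPL named, not their SystemC or RTL implementation, and it assumes the windowed gradient products are computed upstream.

```cpp
// Standard Harris corner response over precomputed windowed gradient sums.
#include <cstddef>
#include <vector>

struct GradientSums {   // per-pixel structure-tensor entries, summed over a window
    float sxx;          // sum of Ix*Ix
    float sxy;          // sum of Ix*Iy
    float syy;          // sum of Iy*Iy
};

// Harris response: R = det(M) - k * trace(M)^2, with k typically 0.04..0.06.
inline float harris_response(const GradientSums& g, float k = 0.04f) {
    const float det   = g.sxx * g.syy - g.sxy * g.sxy;
    const float trace = g.sxx + g.syy;
    return det - k * trace * trace;
}

// Mark pixels whose response exceeds a threshold as corner candidates.
std::vector<bool> detect_corners(const std::vector<GradientSums>& pixels,
                                 float threshold, float k = 0.04f) {
    std::vector<bool> is_corner(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i)
        is_corner[i] = harris_response(pixels[i], k) > threshold;
    return is_corner;
}
```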

OK, so video applications like these are still in the same algorithmic niche I was talking about earlier. But the business relevance of the video processing niche has exploded, carrying HLS along with it.

Wireless applications

NXP, as a leader in automotive electronics, is working on a complete baseband for ultra-wideband (UWB), the technology you will soon be using for ultra-secure keyless entry to your car (your current Bluetooth-enabled keyless entry is not so secure), and at some point maybe also for contactless payment, for the same reason. They found their traditional approach to designing the baseband, starting from Simulink, was too slow to converge. Much of the functionality here is signal processing; think filters and equalizers in multiple channels, for example. Such a design demands high levels of parallelism at high clock rates, which is difficult to architect in a timing-unaware platform. The application must also be very low power; think of UWB in a car key fob running off a coin cell battery. These designs must build on custom-crafted signal processing.

A new company, Viosoft, is building a complete RAN physical layer for 5G (the radio unit piece of the network), from rate matching/channel mapping and time/frequency synchronization to MIMO/beamforming, RF processing and more. This must handle multiple bandwidth and latency requirements and multiple transmission frequencies. Once more, lots of signal processing with a huge demand for flexibility. The application will be built on an FPGA but must still be power optimized because it will sit in a potentially remote location.

Wireless, lots of signal processing, and low power demand. Once again requiring custom design solutions, built through HLS.

Smart sensing and wireless power transfer

ST provided a fascinating three-part pitch. The first section was on infrared sensing for people detection in a room using a smart sensor, a technology useful for energy-saving controls. Sensing is on a grid within the room, allowing machine learning of movement patterns through a neural network, which is where they use HLS.

The next application was a Qi (wireless power transmission) demodulator, a modem-like (and therefore DSP-like) function extracting power rather than information from the signal. The third application was a contactless infrared sensor, something familiar to all of us now thanks to COVID. A prior implementation did the temperature calculations in an embedded processor. This work pushes the calculation into the smart sensor, first establishing a correction for ambient temperature and for the sensed object temperature, then using the Stefan-Boltzmann law (yay physics!) to compute the temperature of the object. Note these are simply formulae, not DSP or ML operations, but they do use floating-point math for precision, so the HLS approach was an easy choice.
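
For context, the physics step is compact. The sketch below applies the Stefan-Boltzmann law to recover an object temperature from the net radiated power seen by an IR sensor; it is a generic illustration only, not ST’s algorithm, and the emissivity, sensing area and the mapping from raw sensor counts to power are assumptions a real design would calibrate.

```cpp
// Stefan-Boltzmann inversion:
//   P_net = eps * sigma * A * (T_obj^4 - T_amb^4)
//   =>  T_obj = (P_net / (eps * sigma * A) + T_amb^4)^(1/4)
#include <cmath>

double object_temperature_kelvin(double p_net_watts,   // net radiated power seen by the sensor
                                 double t_amb_kelvin,  // ambient / sensor die temperature
                                 double emissivity,    // of the target object, 0..1 (assumed)
                                 double area_m2)       // effective sensing area (assumed)
{
    constexpr double sigma = 5.670374419e-8;           // Stefan-Boltzmann constant, W/(m^2 K^4)
    const double t_amb4 = std::pow(t_amb_kelvin, 4.0);
    return std::pow(p_net_watts / (emissivity * sigma * area_m2) + t_amb4, 0.25);
}
```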

What I like here is the applicability of HLS to these consumer-oriented applications, where cost and power will both be critical.

Wrap up

I skipped a couple of talks, one from Nvidia research on modeling interconnect in SystemC to get some feel for latencies as a function of layout. Another was from Siemens EDA on MatchLib, the open-source library originally developed by Nvidia in support of this modeling. All good stuff but not directly relevant to my theme here of the compelling demand for HLS in multiple applications.

Bottom line, best fit algorithms still tend to be signal processing centric, but big markets now see huge value in custom hardware development around those algorithms. You can watch the entire set of talks HERE.

Also read:

HLS in a Stanford Edge ML Accelerator Design

Standardization of Chiplet Models for Heterogeneous Integration

Using EM/IR Analysis for Efinix FPGAs



How to Cut Costs of Conversational AI by up to 90%
by Dave Bursky on 06-20-2022 at 10:00 am


The burgeoning use of conversational artificial intelligence (CAI) in consumer and business applications places a heavy computational burden on both the front-end and back-end systems that provide natural language processing (NLP). NLP systems rely on deep learning (a subset of machine learning) to automate speech recognition, perform the NLP functions, and then provide the text-to-speech output. Achronix and Myrtle.ai have partnered to address this, promising to cut the cost of NLP systems by up to 90% while reducing hardware requirements, as described in this whitepaper.

Myrtle.ai, a technology specialist in FPGA AI inferencing, implements performant recurrent neural network (RNN)-based models on FPGAs using its MAU inference acceleration engine. The MAU engine, integrated into the Achronix Speedster®7t AC7t1500 FPGA, leverages key aspects of the Speedster7t architecture to drastically accelerate real-time automatic speech recognition (ASR) neural networks. That translates into a 2500% increase in the number of real-time streams that can be processed compared to a server-class CPU.

The CAI pipeline is often defined by three key functional blocks:

  1. Speech to text (STT), also known as automatic speech recognition (ASR)
  2. Natural language processing (NLP)
  3. Text to speech (TTS) or speech synthesis

Such pipelines are found in the millions of virtual voice assistants such as Apple’s Siri or Amazon’s Alexa, voice search assistants on laptops such as Microsoft’s Cortana, automated call center (or contact center) agents, and many other applications. The deep learning algorithms that power these CAI services are either processed on the local electronic device or aggregated in the cloud for remote processing at scale. Large-scale deployments supporting millions of consumer interactions represent extremely large compute challenges, which hyperscale providers have addressed by developing specialized silicon devices for these services.

State-of-the-art ASR algorithms are implemented with end-to-end deep learning. Recurrent neural networks (RNNs), rather than convolutional neural networks (CNNs), are common in speech recognition: as noted in “CNN vs. RNN: How are they different?” by David Petersson at TechTarget, RNNs are better suited to processing temporal data, which aligns well with ASR applications. RNN-based models require high compute capability and high memory bandwidth to process the neural network model within the strict latency targets required for conversational systems. When real-time or automated responses are too slow, the system appears sluggish and unnatural. Often, low latency is achieved only at the expense of processing efficiency, which pushes up costs and can make the solution too large for practical deployment.
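
To see why RNNs fit temporal data, and why they are so bandwidth-hungry, consider a single vanilla RNN step: the hidden state carries context from earlier audio frames, and the weight matrices must be streamed through the compute units for every frame. This is an illustrative sketch only, not the Myrtle.ai model; production ASR networks typically use gated variants such as LSTMs and quantized arithmetic.

```cpp
// One vanilla RNN timestep: h_t = tanh(Wx * x_t + Wh * h_{t-1} + b)
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;   // row-major: Mat[i] is row i

Vec rnn_step(const Mat& wx, const Mat& wh, const Vec& b,
             const Vec& x_t, const Vec& h_prev)
{
    Vec h_t(b.size());
    for (std::size_t i = 0; i < b.size(); ++i) {
        float acc = b[i];
        for (std::size_t j = 0; j < x_t.size(); ++j)    acc += wx[i][j] * x_t[j];     // input term
        for (std::size_t j = 0; j < h_prev.size(); ++j) acc += wh[i][j] * h_prev[j];  // recurrent term
        h_t[i] = std::tanh(acc);   // new hidden state, fed to the next audio frame
    }
    return h_t;
}
```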

Competing FPGA architectures in the ML acceleration segment claim tera-operations-per-second (TOPS) rates for inferencing as high as 150 TOPS. Yet in real-world applications, especially latency-sensitive ones such as ASR, these FPGAs fall well short of their headline TOPS rates due to their inability to efficiently transfer data between the compute and external memory. The Achronix Speedster7t architecture strikes the right balance of compute engines, high-speed memory (eight GDDR6 interfaces totaling 4 Tbps) and high-throughput data transfers (a 20 Tbps network on chip), yielding a device that can deliver 64% of the headline TOPS rates for real-time, low-latency ASR workloads (see the figure).

At the heart of the Speedster7t architecture are the 2,560 machine learning processor (MLP) blocks. These blocks contain an optimized matrix/vector multiplication function capable of 32 multiplies and one accumulate in a single clock cycle, and they form the foundation of the compute engine architecture. Block RAM (BRAM) is co-located with each of the 2,560 MLP instances in the AC7t1500, which translates to lower latency and higher throughput. Myrtle.ai’s MAU low-latency, high-throughput ML inferencing engine has been integrated into the Achronix Speedster7t FPGA, leveraging 2,000 of the 2,560 MLPs. Because the MLP is a hard block, it can run at a much higher clock rate than if it were implemented in the FPGA fabric itself.
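
As a mental model of how a matrix-vector product maps onto such blocks, the sketch below breaks a dot product into 32-wide multiply-accumulate chunks, mirroring the 32-multiply, one-accumulate structure described above. It is illustrative C++ only, not Achronix RTL or the actual MLP micro-architecture.

```cpp
// Dot product decomposed into 32-lane MAC operations.
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kLanes = 32;   // multiplies per wide MAC operation

float dot_product_by_macs(const std::vector<float>& weights,
                          const std::vector<float>& activations)
{
    float acc = 0.0f;
    for (std::size_t base = 0; base < weights.size(); base += kLanes) {
        // One wide MAC: up to 32 multiplies folded into a partial sum...
        float partial = 0.0f;
        const std::size_t end = std::min(base + kLanes, weights.size());
        for (std::size_t i = base; i < end; ++i)
            partial += weights[i] * activations[i];
        acc += partial;              // ...and one accumulate into the running total
    }
    return acc;
}
```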

Most ASR solutions offered by large-scale cloud providers such as Google, Amazon, Microsoft Azure, and Oracle allow service providers to build products on top of their cloud APIs. However, the service providers face increasingly large bills as their operations scale out and those products achieve success in the market.

The publicly advertised costs of the larger ASR providers range from $0.01 to $0.025 per minute, and industry reports suggest that the average call center call is approximately five minutes. Consider a large enterprise data or call center services company fielding 50,000 calls per day at five minutes per call. At the rates stated above, the cost of the ASR processing would range from $1,500 to $6,000 per day, or $500,000 to $2,000,000 per year. The Achronix and Myrtle.ai solution can support 4,000 real-time streams (RTS) on one accelerator card, delivering the capacity to handle over one million calls per day.

There are many factors that would dictate the cost of a stand-alone ASR appliance. For this particular example, assume the Achronix ASR acceleration solution is delivered on an FPGA-based PCIe card integrated into an x86-based 2U server. Sold by a system integrator, this appliance might be $50,000, and the annual cost of running the server could double that. This leads to $100,000 for the first year for an on-premise ASR appliance. Comparing this on-premise solution with the cloud API services, the end user can enjoy savings of 5X to 20X in the first year.
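
The arithmetic behind that 5X to 20X figure is straightforward; here is a back-of-the-envelope check using the yearly figures quoted above (roughly $500K to $2M per year for cloud ASR at this call volume versus about $100K first-year cost for the on-premise appliance).

```cpp
// First-year cloud-vs-appliance cost ratio using the article's estimates.
#include <cstdio>

int main() {
    const double cloud_low_annual  =  500000.0;  // $/yr, low end of cloud ASR estimate
    const double cloud_high_annual = 2000000.0;  // $/yr, high end
    const double onprem_first_year =  100000.0;  // $50K appliance + ~$50K/yr to run it

    std::printf("First-year savings: %.0fx to %.0fx\n",
                cloud_low_annual / onprem_first_year,    // ~5x
                cloud_high_annual / onprem_first_year);  // ~20x
    return 0;
}
```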

Achronix and Myrtle.ai are teaming up to deliver an ASR platform consisting of a 200W, x16 PCIe Gen4-based accelerator card and the associated software, which together can sustain up to 4,000 RTS concurrently, processing up to 1 million five-minute transcriptions per 24-hour period. Comparing this PCIe accelerator card in a single x86 server to the cost of cloud ASR services, the first-year CAPEX and OPEX can be reduced by as much as 90%.

To download the full whitepaper, visit achronix.com.

Also read:

Benefits of a 2D Network On Chip for FPGAs

5G Requires Rethinking Deployment Strategies

Integrated 2D NoC vs a Soft Implemented 2D NoC



Casting Light on OpenLight’s Open Silicon Photonics Platform
by Kalar Rajendiran on 06-20-2022 at 6:00 am

The Growing Silicon Photonics Market

For many decades now, modern optical technology has been deployed in networking infrastructure, for long-haul and medium-haul links that support internet communications. The foundation of this technology is photonics, the science of generating, manipulating and detecting light to perform functions otherwise achieved with electronics. A fiber-optic module serves as a photoelectric converter, bi-directionally interfacing the optical side of a communications infrastructure to the electronic side.

Current Market Trends

Over the recent past, there has been an explosive growth of data (in zettabytes) due to the proliferation of mobile applications. In addition, hyperscale data centers, deep learning, 5G and video streaming applications call for higher performance at very low power consumption. With bandwidth, latency, power and reach being key elements relating to connectivity, the above trend has renewed interest in silicon photonics.

Silicon Photonics

Silicon photonics uses silicon as an optical medium, with the silicon patterned into micro-photonic components. Using current semiconductor fabrication techniques, hybrid devices with both optical and electronic components can be integrated onto a monolithic chip. This provides very high-speed data transfers between and within chips and continues the benefits of Moore’s Law. Products can enjoy speed improvements at reduced power consumption for data communications as well as for ultrasensitive sensing applications such as LiDAR and healthcare.

But a major challenge for silicon photonics is laser integration: the high cost associated with manufacturing, attaching, assembling, and aligning discrete lasers. This becomes an even bigger challenge as the number of laser channels and the overall bandwidth increase.

The Birth and Unveiling of OpenLight

Synopsys already offers an electronic photonic design automation solution that consists of OptoCompiler, OptSim, PrimeSim, Photonic Device Compiler and IC Validator design software products. It is not surprising that Synopsys announced a majority ownership in a new independent company that it jointly launched with Juniper Networks, back in April of this year. The April announcement simply stated that the as-yet unnamed company would deliver the industry’s first open-foundry silicon photonics platform with integrated lasers. The platform was to integrate silicon photonics assets that were spun out from Juniper Networks to the new company. These assets included more than 200 patents on photonic device design and process integration.

Earlier in June, the new company unveiled itself, revealing its brand identity and technology portfolio. OpenLight’s executive team brings decades of hands-on photonics design experience and is led by Dr. Thomas Mader, Chief Operating Officer, Dr. Daniel Sparacin, VP of Business Development and Strategy, and Dr. Volkan Kaman, VP of Engineering.

OpenLight’s Solution

The open platform includes integrated lasers, optical amplifiers, modulators, photodetectors, and other key photonic components to form a complete solution for low-power, high-performance photonic ICs. By processing indium phosphide (InP) materials directly onto the silicon photonics wafer, the platform reduces the cost and time of adding lasers, which in turn enables scalability and improved power efficiency. In addition, the lasers monolithically integrated on silicon wafers increase overall reliability and simplify packaging.

The first offering of the platform supports Tower Semiconductor’s PH18DA fabrication process and has passed the process qualification and reliability tests. As a demonstration vehicle, first samples of 400G and 800G reference designs with integrated lasers are expected to be available in summer 2022.

In addition, OpenLight offers select photonic integrated circuit (PIC) designs and design services to its customer base to accelerate time-to-market.

Value Proposition to its Customer Base

OpenLight’s platform will provide a new level of laser integration and scalability to accelerate the development of high-performance photonic integrated circuits (PICs). Customers will benefit from access to a complete photonics library, industry-standard EDA tools, and other key photonic components.

The target customer base spans a broad range covering applications such as datacom, telecom, LiDAR, healthcare, HPC, AI, and optical computing.

Integration Enabled Differentiation

While in calculus differentiation and integration are opposites, when it comes to products, integration enables differentiation of solutions. That is certainly the case with silicon photonics. OpenLight is boldly pitching its open-foundry model, its silicon photonic integration capability, and its channel and volume scalability with the tagline, “Open. Integrated. Scalable.”

For more details, visit OpenLight’s website.

Also read:

DesignDash: ML-Driven Big Data Analytics Technology for Smarter SoC Design

Coding Guidelines for Datapath Verification

Very Short Reach (VSR) Connectivity for Optical Modules



Obscuration-Induced Pitch Incompatibilities in High-NA EUV Lithography
by Fred Chen on 06-19-2022 at 10:00 am


The next generation of EUV lithography systems is based on a numerical aperture (NA) of 0.55, a 67% increase from the current value of 0.33, and targets printing a 16 nm pitch [1]. High-NA systems are already expected to face complications from four issues: (1) reduced depth-of-focus requires thinner resists, which are more susceptible to pinholes as well as stochastic defects and require new etch transfer and metrology techniques [1,2]; (2) increased sensitivity to blur from electrons [3]; (3) throughput considerations due to using half the size of the current 26 mm x 33 mm field [4]; and (4) the central obscuration of the pupil [5,6], leading to a variety of imaging effects [2,7].

The last issue, however, presents the most fundamental limitation when considering which pitches are expected to be imaged. For the smallest pitches (18 nm or less), the required illumination wreaks havoc on the diffraction patterns of (a) larger-pitch (>25 nm) lines and (b) even larger-pitch (up to 44 nm) staggered 2D arrays [8]. The dots spanning the range of illumination angles for 16 nm and 18 nm pitches fit inside the dipole leaf shapes in the plots below, with the red dots indicating illumination angles forbidden by the corresponding features.

As can be seen in the plots, over half of the possible illumination space is forbidden. This reduction in pupil fill to <20% is enough to impact the throughput [6,8]. For 16 nm pitch, the space is practically closed. Layouts may need to be separated out by illumination.
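
To make the incompatibility concrete, a standard grating-diffraction estimate (my own sketch, using the wavelength, NA and pitch values above; the obscuration size is left unspecified) shows why this happens. For a line/space pattern of pitch $p$ illuminated from normalized pupil position $\sigma_{\mathrm{ill}}$, the $m$-th diffraction order lands at

$$\sigma_m = \sigma_{\mathrm{ill}} + \frac{m\,\lambda}{p \cdot \mathrm{NA}}, \qquad m = 0, \pm 1, \ldots$$

in pupil-radius units. With $\lambda = 13.5\,\mathrm{nm}$ and $\mathrm{NA} = 0.55$, a 16 nm pitch separates the 0th and 1st orders by $13.5/(16 \times 0.55) \approx 1.5$, forcing dipole poles near the pupil edge at $|\sigma_{\mathrm{ill}}| \approx 0.77$. Illuminated from those same poles, the 1st orders of 25-44 nm pitches land within roughly $\pm 0.2$ of the pupil center, where the central obscuration sits (the diagonal orders of staggered 2D arrays behave similarly). Those are the forbidden (red) illumination angles in the plots.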

References

[1] https://www.imec-int.com/en/articles/high-na-euvl-next-major-step-lithography

[2] https://www.linkedin.com/pulse/cautions-using-high-na-euv-frederick-chen/

[3] https://www.linkedin.com/pulse/demonstration-dose-driven-photoelectron-spread-euv-resists-chen/; https://www.linkedin.com/pulse/adding-random-secondary-electron-generation-photon-shot-chen/; https://www.linkedin.com/pulse/electron-spread-function-euv-lithography-frederick-chen/

[4] A. H. Gabor et al., “Effect of high NA “half-field” printing on overlay error,” Proc. SPIE 11609, 1160907 (2021).

[5] B. Kneer et al., “EUV Lithography Optics for sub 9 nm Resolution,” Proc. SPIE 9422 (EUV VI), 94221G (2015).

[6] B. Bilski et al., “High-NA EUV imaging: challenges and outlook,” Proc. SPIE 11177 (EMLC 2019), 111770I (2019).

[7] https://www.linkedin.com/pulse/stochastic-sidelobe-risks-tradeoffs-high-na-euv-systems-chen/

[8] Pitches Forbidden by the Central Obscuration in High-NA EUV Lithography (video): https://www.youtube.com/watch?v=1HV2UYABh4E

This article originally appeared in LinkedIn Pulse: Obscuration-Induced Pitch Incompatibilities in High-NA EUV Lithography 

Also read:

The Electron Spread Function in EUV Lithography

Double Diffraction in EUV Masks: Seeing Through The Illusion of Symmetry

Demonstration of Dose-Driven Photoelectron Spread in EUV Resists

Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness



CHIPS for America DOA?
by Robert Maire on 06-19-2022 at 6:00 am

  • We think hopes for CHIPS for America are fading fast
  • Politics, Jan 6th, guns, inflation, and partisanship will likely block it
  • The alternative to building up US semis is knocking down China’s chips
  • The only political option may be more restrictions on China

Chips for America act seems drowned out by partisan screaming

We have been saying for some time now that the odds of passing a compromise version of CHIPS for America have been fading. That probability seems to be fading fast as we close in on both summer vacation and the fall elections. There has been a non-stop flow of big news items that have taken up everyone’s mindshare, especially in the government.

Between Ukraine, guns, January 6th hearings and inflation…not to mention the stock market….the news flow is overwhelming.

The partisan divide has grown wider than ever, with a total lack of cooperation that gets worse as the prospect of a change in control gets closer.

Cars are getting built, even the Ford F150 Lightning

One of the main things that got legislators’ attention was when the car industry ground to a halt due to lack of chips. That is no longer the case; America can get its beloved F150 and even the electric version (although at a huge premium over sticker).

So the issue that brought chips to the forefront of American minds has faded quickly. I can only imagine someone bringing up the semiconductor issue in the halls of Congress being laughed at and told the problem is over because you can now buy a car.

As inflation moves to center stage spending money looks bad

Spending money on a problem that seems to no longer exist is likely not popular. The semiconductor industry may be fading back into the woodwork from where it came. Spending $52B right now seems far-fetched. We had our moment in the sun and blew it.

Anti China sentiment still exists and may be worse

Meanwhile the Taiwan Strait is still a parade of US and Chinese Navy ships, and the provocations are worse than ever. It certainly doesn’t help when a Chinese official calls for the seizing of TSMC in the public media. That is something I am sure is on China’s mind, but which they would normally never be so bold as to suggest publicly.

Top Economist Urges China to Seize TSMC If US Ramps Up Sanctions

Obviously China could never seize TSMC, as it would be a hollow prize. Within a week, operations would cease due to lack of critical support from the entire tool infrastructure industry, much as we already saw happen in China with Fujian Jinhua, which stole trade secrets from Micron. So threats of seizure are themselves hollow.

This all amounts to an interesting standoff reminiscent of the myth of Tantalus.

Rather than pump up US chips, knock down China Chips

While CHIPS for America may be dead or dying, the concern about China remains. If you can’t keep the US ahead in the chip industry by throwing a paltry amount of money at it, then the next best alternative is to knock down the Chinese chip industry to accomplish a similar goal.

Sanctions and embargoes don’t cost money that would cause an unpopular partisan fight in Congress. Both Republicans and Democrats are concerned about China, and it will likely be much easier to compromise on something that doesn’t cost money, especially amid inflation in a critical election year…sanctions.

Semiconductor sanctions on China: a cheap compromise?

Slapping further sanctions on China is likely to be more palatable to legislators and voters. It doesn’t cost money and will hurt China where it hurts most…in semiconductors.

Russian auto manufacturers have had to go back to stone-age cars due to the lack of chips, so it’s very clear evidence that chip sanctions work very well for zero money.

The US can tighten restrictions on almost all types of semiconductor equipment and many chip exports to China. Not just leading-edge or military-specific parts but more mundane chips, as happened with Russia. Obviously not going anywhere near as far as the outright embargo on Russia, but at the least a tightening. It’s not like the Chinese can seize TSMC in response.

More political support for sanctions than spending

Given the success of the Russia sanctions, it seems the likely path, instead of spending, will be sanctions on China. This obviously has risks and victims. Clearly the US semiconductor equipment industry will not be happy to have its number one market for product constrained in any way. Companies that depend highly on China, such as Apple, which are already trapped in the middle, may get squeezed even more.

It’s unclear how China could respond. Would this push them closer to Russia? But China is pretty close to Russia anyway, and the US probably wants to be less reliant on China anyway. Biden’s statements on Taiwan seem to have happened without significant response. We may view sanctions as an effective weapon without as much risk as previously thought.

Our summary is that the death of CHIPS for America may lead to more sanctions rather than spending, which may have the opposite of the desired effect on the chip industry, but perhaps legislators don’t care anymore.

The Stocks

We did not see much positive impact from the CHIPS for America act anyway. $52B spread over five years and cut up into little pieces meant very little impact on individual companies, more in line with pork-barrel politics. So we think the lack of CHIPS for America has near zero impact on the industry, either short or long term.

The bigger impact is if we get sanctions instead of spending. There will certainly be more near-term impact if semiconductor and semiconductor equipment sales to China slow, probably most notably on semiconductor equipment companies.

In the long run it likely equalizes as semiconductor demand is likely a zero sum game which means that semiconductors not used in China will be used elsewhere.

There is obviously more risk to consumer goods manufacturing in China, as it may be difficult to differentiate chips helping China from chips going back to US consumers. Apple and PC manufacturers would have more risk.

Sanctions may also help the re-shoring of chip production to the US, as companies would not want to be exposed to having their supply chain routed through China.

Over the longer term, US consumer companies would likely lose out to local Chinese companies, as we saw in the smartphone and app markets, so maybe the Chinese market isn’t a loss of anything that wasn’t going to be lost anyway.

It certainly complicates a supply chain that has yet to recover, and it makes sourcing of rare earth elements and critical gases, already strained by the loss of Ukraine, even worse.

We would hope that if sanctions are deployed instead of CHIPS for America that they are done so slowly and carefully so as to minimize the shock to an already damaged technology supply system.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

Has KLA lost its way?

LRCX weak miss results and guide Supply chain worse than expected and longer to fix

Chip Enabler and Bottleneck ASML



Semiconductors Weakening in 2022
by Bill Jewell on 06-18-2022 at 6:00 am


The semiconductor market in 2022 is weakening. Driving factors include rising inflation, the Russian war on Ukraine, COVID-19-related shutdowns in China, and lingering supply chain issues. Four of the top 14 semiconductor companies (Intel, Qualcomm, Nvidia and Texas Instruments) are expecting lower revenues in 2Q 2022 versus 1Q 2022. All four cited COVID-19-related lockdowns in China as a factor. China locked down several major cities, including Shanghai and Beijing, in April and May due to rising COVID cases. The lockdowns were lifted on June 1, but since then temporary shutdowns have been reimposed to fight emerging cases. The shutdowns significantly impacted manufacturing in China.

Six non-memory companies expect revenue growth in 2Q 2022 from 1Q 2022 ranging from 3% to 7%. Three of these companies (Infineon Technologies, STMicroelectronics and NXP Semiconductors) have significant automotive business contributing to their growth. AMD’s reported 1Q 2022 revenue was up 22% from 4Q 2021, largely due to its acquisition of Xilinx, which was completed midway through the quarter. Its outlook for 2Q 2022 growth is 10%, also including Xilinx. Excluding the effect of the Xilinx acquisition, AMD’s revenue grew 10.4% in 1Q 2022 and is expected to grow about 3% in 2Q 2022. The weighted average revenue growth of the 10 largest non-memory companies in 1Q 2022 versus 4Q 2021 was 4%. The weighted average outlook for 2Q 2022 is a decline of 1% from 1Q 2022.

Memory companies have a brighter outlook than non-memory companies. Micron’s guidance for its fiscal quarter which ended in early June was an increase of 11.7% from the prior quarter. Samsung, SK Hynix and Kioxia all reported demand for both DRAM and flash memory remains solid.

The outlook for the global economy is diminishing due to the factors listed earlier. The June 2022 forecast from the World Bank is for only 2.9% growth in global GDP in 2022 following 5.7% growth in 2021. In January 2022, the World Bank projected 4.1% growth in 2022 global GDP. Among advanced economies, the U.S. and the Euro area are expected to show 2.5% GDP growth in 2022, less than half the 2021 rate. Among emerging and developing economies, China’s 2022 GDP growth is forecast at 4.3%, well below 2021’s 8.1%, due primarily to COVID related shutdowns. Russia’s GDP should decline 8.9% in 2022 due to its war on Ukraine and resulting boycotts. India’s economy remains strong, with 2022 GDP growth targeted at 7.5%, the highest among major economies. The outlook for 2023 is similar to 2022, with the World Bank calling for 3.0% global GDP growth.

In the U.S., the chance of a recession in the next 12 months is 30%, according to Bloomberg’s May 2022 survey of economists. The Federal Reserve this month projected inflation would be 5.2% in 2022 based on its personal consumption expenditures index. The Fed expects inflation to moderate to 2.6% by the end of 2023. Inflation fears led the Federal Reserve this week to raise its benchmark interest rate by 75 basis points, the largest increase in 28 years. The European Central Bank plans to raise interest rates by 25 basis points in July.

The outlook for key semiconductor market drivers is also weakening. Earlier this month IDC projected declines in 2022 shipments of both smartphones and PCs. Smartphones are forecast to decline 3.5% in 2022 after 6% growth in 2021, with IDC expecting smartphones to recover to 5% growth in 2023. PCs boomed in 2020 and 2021 with double-digit growth driven by work-at-home and learn-at-home trends due to the COVID-19 pandemic. IDC forecasts a decline of 8.2% for PCs in 2022. PCs should grow 1% in 2023, in line with pre-COVID trends.

The automotive industry is the only bright spot among major drivers. In May 2022, S&P Global Mobility (which merged with IHS Markit) expects light vehicle production to grow 4.1% in 2022 after 3.5% growth in 2021. Pent-up demand for vehicles would drive even higher growth, but production is limited by shutdowns in China, supply chain issues, and the war in Ukraine. Vehicle production is forecast to grow a healthy 9.4% in 2023.

With the weakening global economy and declines in shipment of key drivers, we at Semiconductor Intelligence have lowered our semiconductor market forecast for 2022 to 9% from 15% in February. The 2Q 2022 semiconductor market will likely decline by about 1% to 2% from 1Q 2022. The second half of 2022 should be weaker than typical trends. The only reason 2022 could see high single-digit growth is due to the strong quarter-to-quarter growth in 2021. The 1Q 2022 semiconductor market was up 23% from a year ago. Year-to-year growth should be in the low single-digits to flat by 4Q 2022. Other forecasts for the 2022 semiconductor market range from 11% from IC Insights to 16.3% from WSTS.

The weakness in the 2022 semiconductor market should continue into 2023. Our preliminary forecast for 2023 is 3% growth. Other 2023 forecasts are 3.6% from Gartner and 5.1% from WSTS.

Also Read:

Semiconductor CapEx Warning

Electronics, COVID-19, and Ukraine

Semiconductor Growth Moderating



Podcast EP88: A conversation with Maheen Hamid, one of Silicon Valley’s 100 Most Influential Women
by Daniel Nenni on 06-17-2022 at 10:00 am

Dan is joined by Maheen Hamid, Chief Operating Officer and Chief Financial Officer at Breker Verification Systems and a recipient of the Silicon Valley Business Journal’s 100 Most Influential Women award. Maheen discusses her journey to Silicon Valley and Breker, beginning with her upbringing in Bangladesh. Maheen married Adnan Hamid, Executive President and CTO at Breker, shortly before the company’s formation. They started the company together.

She offers many insightful comments about high technology, Silicon Valley and its impact on the world through the lens of her experiences beginning in Bangladesh.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Stop-For-Top IP Model to Replace One-Stop-Shop by 2025
by Eric Esteve on 06-17-2022 at 6:00 am


…and support the creation of a successful chiplet business

The One-Stop-Shop model allowed IP vendors of the 2000s to create a successful IP business, mostly driven by consumer applications such as smartphones or set-top boxes. The industry has dramatically changed and is now driven by data-centric applications (datacenter, AI, networking, HPC…) requiring best-in-class, high-performance IP developed on bleeding-edge technology nodes.

That’s why the Stop-For-Top IP model should replace the One-Stop-Shop model during the 2020 decade, allowing vendors to supply the right IP more efficiently to the demanding customers involved in data-centric applications.

The next step will be to develop and market chiplets created from the Stop-For-Top IP portfolio, to help chip makers overcome Moore’s Law limitations and accelerate time-to-market (TTM) for systems developed on technology nodes at 3nm and below. We think the IP vendors selecting a Stop-For-Top IP strategy will be best positioned to offer chiplets at the right time, when the semiconductor industry needs this innovation to overcome Moore’s Law limitations.

During the 2010 decade, the successful business model for interface IP was the One-Stop-Shop model. Offering the IP customer a single place to buy several functions was a good way to help them decide to buy rather than make, while minimizing administrative and legal tasks. It was faster to negotiate and sign an IP license contract with only one supplier than with many.

But the nature of modern IP has changed; it can no longer be seen as a commodity that is simply cheaper to buy than to make. For the star interface IP licensed for multiple millions of dollars, like PCIe 6 or CXL, DDR5 or HBM memory controllers, or PAM4 112G SerDes designed on the most advanced technology nodes, performance, reliability, and robustness are now the essential pillars of the buying decision.

We have shown that the interface IP market was extremely healthy from 2016 to 2021, growing at a 20% CAGR from $520 million in 2016 to $1,300 million in 2021. If we consider the 2021 to 2026 forecast for the interface IP category, there are clearly two groups of protocols.
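
As a quick sanity check on the quoted CAGR, growing from $520M in 2016 to $1,300M in 2021 works out to (1300/520)^(1/5) - 1, or about 20% per year. The snippet below is just that arithmetic, not IPnest’s model.

```cpp
// Compound annual growth rate check: $520M (2016) -> $1,300M (2021).
#include <cmath>
#include <cstdio>

int main() {
    const double start = 520.0, end = 1300.0, years = 5.0;
    const double cagr = std::pow(end / start, 1.0 / years) - 1.0;
    std::printf("CAGR = %.1f%%\n", cagr * 100.0);   // prints ~20.1%
    return 0;
}
```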

The first group includes PCIe and CXL, DDR memory controllers, Ethernet and SerDes, and chip-to-chip protocols. For these protocols, the largest part of IP revenue is generated by the most advanced functions targeting bleeding-edge technology nodes.

For the other protocols, the group of USB, MIPI, SATA or HDMI, both the weight and the growth rate are lower. It is no coincidence that the protocols in this second group are used in consumer types of applications like smartphones, PCs or TVs, or even automotive. Protocols from the first group are required in applications like datacenter, HPC, networking, wireless base stations, and storage, which we can summarize as enterprise. It sounds like the old battle, consumer vs. enterprise.

We have reworked the interface IP forecast for the next five years to extract the high-end part of the PCIe and CXL, DDR memory controller, Ethernet and SerDes, and chip-to-chip IP products, which target advanced technology nodes, 7nm and below. The result is summarized in Table 1.

Table 1: High-End Protocols Interface IP Forecast 2021-2026

It is interesting to compare these results with the total generated by all interface IP protocols over the same period:

All Protocols Interface IP Forecast 2021-2026 (graphic)

While in 2021 the high-end part of interface IP revenues is slightly less than 50% of the total, this share grows constantly to reach 72% in 2026. The reason is the five-year CAGR, which is much larger for the high-end group.

During the 2010 decade, two EDA vendors successfully deployed the One-Stop-Shop strategy, mostly targeting the interface protocol category, and created a successful IP business. Synopsys has a combined 55% market share (or $727 million) in the interface IP category in 2021 by supporting every protocol. On top of PCIe and CXL, DDR memory controllers, Ethernet and SerDes, Synopsys supports USB, MIPI, SATA, HDMI and DisplayPort. These added interconnect protocols are intensively used in consumer, industrial and automotive applications, but are almost never selected in the “star” applications of the 2020 decade, the data-centric ones (datacenter, hyperscale, networking, HPC, AI, etc.).

The main question is whether it will be possible to create a successful IP business during the 2020 decade by focusing only on the high-end data-centric interconnect protocols developed on advanced technology nodes, 7nm and below. If we consider the 2021 to 2026 forecast for high-end IP (Table 1), the segment that looked like a niche market in 2020 is expected to become a two-billion-dollar market in 2026. The question becomes: could a vendor devoting all its engineering resources to high-end data-centric interconnect protocols reach 25% market share in 2026 and create a successful $500 million business?

An IP vendor able to position on top IP only, by moving from the well-known One-Stop-Shop model (adopted by Synopsys and Cadence in the 2000s) to the Stop-For-Top model, will generate a better ROI. This IP vendor will differentiate itself from Synopsys and Cadence and extract higher IP revenue growth!

The goal is clear; the strategy will have to be defined and fine-tuned for each data-centric protocol, keeping in mind that the long-term process must be completed by a second step, the market deployment of application-specific chiplets with specifications based on the high-end data-centric IP portfolio. The Stop-For-Top IP strategy is now clearly defined.

The need for ever-increasing bandwidth has put pressure on the industry to move faster to bleeding-edge technology nodes and to release new versions of interconnect protocols (PCIe, Ethernet, memory controllers) faster. Innovations like PAM4 modulation and DSP-based SerDes, replacing older, 100% analog techniques, were implemented to break the 100 Gbps barrier. Innovative architectures have been defined, pushing adoption of new standards like CXL, which supports cache-coherent memory sharing between processors, co-processors and AI accelerators, and chip-to-chip protocols between the main SoC and chiplets, allowing designs to pass the technological area limit and offer a more powerful system in a single package to support ever-increasing compute and data-interconnect needs, much as the SoC led to the smartphone explosion in the 2000s.

To synthesize: the next technology revolution will require top interconnect, and IP vendors will have to offer best-in-class interface IP to create a successful IP business based on Stop-For-Top positioning. We think offering Stop-For-Top IP should be the first step of the strategy, the final goal being a chiplet portfolio built by integrating already-available interconnect IP into an integrated chip, commonly named a chiplet. To support this strategy, the IP vendor will have to rely on a pool of dedicated resources specialized in ASIC design services, building this engineering team either organically or inorganically, through acquisition of an ASIC design service vendor.

Chip makers developing SoCs for high-end applications such as HPC, datacenter, AI or networking are likely to be early adopters of chiplet architectures. Specific functions like AI accelerators and the Ethernet, PCIe or CXL standards should be the first interface candidates for chiplet designs. When these early adopters have demonstrated the validity of heterogeneous chiplet architectures, leveraging multiple different business models, and obviously the manufacturing feasibility for test and packaging, it will create an ecosystem that is critical to supporting this new technology. At that point, we can expect wider market adoption, not only for high-performance applications.

As was the case for design IP sourcing to build an SoC in the 2000s, the buy-or-make decision for chiplet sourcing to complete a system design will weigh core-competency protection against sourcing of non-differentiating functions. Design IP business growth since the 2000s has been sustained by a continuous increase in external sourcing. Both models will coexist (chiplets designed in-house or by a vendor), but history has shown that the buy decision eventually overtakes the make.

IPnest believes this trend will have two main effects on the interface IP business: one is strong growth of die-to-die (D2D) IP revenues in the short term (2021-2025), and the other is the creation of a heterogeneous chiplet market built from the Stop-For-Top IP portfolio. This market is expected to consist of complex protocol functions like PCIe, CXL or Ethernet. Even IP vendors delivering USB, HDMI, DisplayPort or MIPI interface IP integrated into SoC I/O may decide to deliver I/O chiplets instead.

The Stop-For-Top IP model is the first step of a successful strategy, followed by the creation of a chiplet portfolio by IP vendors to support the industry’s need for an open chiplet ecosystem. This ecosystem is needed by the semiconductor industry to overcome Moore’s Law limitations and reach the trillion-dollar mark during the 2020 decade.

By Eric Esteve (PhD.) Analyst, Owner IPnest

This white paper has been sponsored by Alphawave IP; nevertheless, the content reflects the author’s positioning on the IP market and the way it is expected to evolve during the 2020 decade. To read the complete white paper:

https://www.awaveip.com/en/news-views/stop-for-top-ip-model-to-replace-one-stop-shop-by-2025-and-support-the-creation-of-successful-chiplet-business/

Also read:

Die-to-Die IP enabling the path to the future of Chiplets Ecosystem

Design IP Sales Grew 19.4% in 2021, confirm 2016-2021 CAGR of 9.8%

Alphawave IP and the Evolution of the ASIC Business

Demand for High Speed Drives 200G Modulation Standards



Three Key Takeaways from the 2022 TSMC Technical Symposium!
by Daniel Nenni on 06-16-2022 at 12:10 pm


The TSMC Technical Symposium is today, so I wanted to give you a brief summary of what was presented. Tom Dillinger will do a more technical review as he has done in the past. I don’t want to steal his thunder, but here are what I think are the key takeaways. First, a brief history lesson.

The history of TSMC technology development, with 12 key milestones:

In 1987 TSMC was founded with the creation of the PurePlay business model.

In 1999 TSMC was the first foundry to offer 0.18 micron copper technology.

2001 brought the first foundry reference design flow. I participated in this with multiple EDA and IP vendors and I can tell you first hand that TSMC spent a huge amount of money creating the massive EDA and IP ecosystem we enjoy today.

In 2011 TSMC brought HKMG 28nm to the fabless ecosystem. Other foundries faltered at 28nm so this was a record breaking node for TSMC.

2012 brought CoWoS, the first heterogeneous 3DIC test vehicle.

In 2014 TSMC delivered the first fully functional FinFET networking processor which began the FinFET era that TSMC dominates today.

In 2015 TSMC qualified InFO, the advanced 3DIC packaging technology.

In 2018 TSMC delivered the most advanced logic technology (N7) available to all.

In 2020 TSMC led the industry with N5, EUV-based logic technology.

In 2021 TSMC launched N4P, N4X, and N6RF.

In 2022 TSMC will launch N3, which will be the most advanced process node available, covering a wide range of vertical markets. N3 will also break the record for tapeouts in a five-year period, in my opinion.

And last but not least, in 2022 TSMC announced the next generation process technology for the masses (N2).

Takeaway #1

TSMC will continue to invest in mature-node and specialty technologies with a 1.5x capacity expansion from 2021 to 2025, which includes fabs F14 P8 (Tainan), F16 P1B (Nanjing), F22 P2 (Kaohsiung) and F23 P1 (Kumamoto, Japan).

TSMC also announced an Integrated Specialty Technology Platform for NVM, HV, Sensor, PMIC, ULP/ULL, analog, and RF technology. Tom Dillinger will go into more detail here.

Takeaway #2

TSMC will continue scaling N3. N3 is on track for HVM in 2H 2022. N3E follows in 2H 2023 with improved performance and power and lower process complexity for both mobile and HPC applications.

N3E PPA versus N5 comes in at an 18% speed gain at the same power, or a 34% power reduction at the same speed, with a 1.6x logic density increase.

More importantly, TSMC announced FinFlex: Ultimate Design Flexibility for N3. TSMC just published a blog on FinFlex with more detail. Tom Dillinger will also have his say on this so stay tuned. Bottom line: you can change fin configurations to further optimize designs for area, speed, and power.

Takeaway #3

TSMC will use nanosheet transistors for N2. Not a huge surprise since Intel and Samsung have already made announcements but there is much more here than meets the eye. N2 PPA vs N3E is expected to be a 15% speed improvement at the same power or 25-30% power improvement at the same speed, and > 1.1x density. N2 is expected in 2025.

TSMC also discussed device architecture futures which included Nanosheet, CFET, 2D TMD, and CNT. We will be writing more about this later.

Bottom line: One thing we must all remember is that there is a distinct difference between a PurePlay and an IDM foundry. TSMC must produce the most cost-effective, wide-ranging process technologies with a fully supported ecosystem for hundreds of products. IDM foundries can pick and choose what is important and don’t have to worry about wafer margins. Semiconductor insiders know this but the media does not, so expect continued misinformation in the coming days, absolutely.

So many more things were presented. If you have questions post them in the comments section and I will get the answers for you, absolutely.

Also Read:

Inverse Lithography Technology – A Status Update from TSMC

TSMC N3 will be a Record Setting Node!

Intel and the EUV Shortage



Podcast EP87: How Axiomise Addresses the Verification Challenge
by Daniel Nenni on 06-16-2022 at 10:00 am

Dan is joined by GD Bansal, COO at Axiomise. Dan explores with GD the Axiomise business model of providing training and consulting services for formal verification. The benefits and challenges of using formal verification on complex designs are discussed, along with the benefits of the Axiomise vendor-neutral approach to deploying state-of-the-art tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.