
Enjoy the Super-Cycle it’ll Crash in 2023!
by Malcolm Penn on 09-23-2022 at 6:00 am


At our January 2021 industry update webinar, 20 months ago, we forecast “There’s no tight capacity relief before 2022 at the earliest” whilst simultaneously cautioning “Enjoy the super-cycle … it’ll crash in 2023!”  At the time, we were dismissed as being ‘ever optimistic’ for the first prognosis and ‘losing the plot’ for the second. 20 months down the road, both predictions have proved astutely prescient.

We made those predictions based on our tried and tested methodology of analyzing what we see as the four key driving factors influencing market growth, namely the economy, unit demand, capacity and ASPs, overseen by our 55 years of hands-on experience in the industry.

Just to recap, the economy determines what users can afford to buy; unit demand reflects what users actually buy; capacity determines how much demand can be met; and ASPs set the price units can be sold for.

Industry Update

Twelve months later, at our Jan 2022 update webinar, we forecast 10 percent growth was still possible for 2022, but with a downside of 4 percent and upside of just 14 percent.  Our concerns at that time were that the risks outweighed the opportunities; the chip market was seriously overheating; and the road ahead very stony.  We were especially worried about:

  • Supply/Demand Rebalance (Once The 2021 CapEx Increases Bite Home)
  • Slowdown In End Demand (Covid Surge Slowdown and/or Supply Chain Constraints)
  • Economic Slowdown (Covid Restrictions, Inflation Concerns, Unwinding Fiscal Stimuli)

Our over-riding caveat was “When the bubble bursts, it bursts, there are no soft landings”.  First, unit shipments plummet; then ASPs collapse, with the added word of caution “Don’t be surprised if the market goes negative … it more often does than not.”

Five months later, at our May update event, despite most firms still reporting stellar first half year sales, we were once again out on a limb, reducing our January 10 percent forecast to just 6 percent, due to concerns about the worsening economic outlook, driven by increased food, energy and gasoline costs wrought by Russia’s war with Ukraine, and its squeeze on consumer spending.  At that time, we reiterated our belief that the downturn would hit in Q3, triggered by the anticipated capacity increases loosening up supply constraints.

The downturn actually broke in June 2022, with a huge implosion in sales driven by a slowdown in unit shipments and an 18 percent fall in ASPs, from Q2’s $1.350 peak to $1.105, wiping out in one month 77 percent of the Super-Cycle’s gain. This pushed our January and May forecasts well into bear territory.
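
As a quick arithmetic check of that ASP fall (a minimal sketch using only the two figures quoted above):

```python
# Check the quoted ASP decline: Q2-2022 peak vs. June 2022.
asp_peak = 1.350   # Q2-2022 peak ASP, USD
asp_june = 1.105   # June 2022 ASP, USD

decline = (asp_peak - asp_june) / asp_peak
print(f"ASP decline: {decline:.1%}")  # -> 18.1%, matching the 18 percent cited
```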

Granted, downturns are always uneven and patchy at the start, partly driven by sectors’ commodity nature (or not) and partly because there is at least four months’ worth of WIP production in the wafer fab pipeline which cannot be stopped, but we firmly believe no sector will prove 100 percent immune. As we move towards 2023, the inflation outlook is much worse than we originally believed, in the high double-digit range, with the Fed now having abandoned its ‘soft landing’ goal. Interest rates everywhere except China are now poised to rise much faster in response, despite the obvious and clear risk of triggering an economic recession.

At the same time, Russia has now openly declared the weaponization of its oil, natural gas, and mineral supplies, especially in Europe, and China’s still-aggressive Covid zero-tolerance strategy continues to disrupt global supply chains everywhere. War sanctions on Russia are inevitably also disrupting the global economy, and heightened geopolitical tensions with China over Taiwan, together with the US technology sanctions, are raising the global tension thermometer still further.

Current Status

As we entered the third quarter, the chip industry was facing a simultaneous series of global economic shocks coincident with a slowdown in market demand. Every single warning light on the economy is now flashing red. Unit shipments are still running about 20 percent above their long-term average, and a sharp downward adjustment is inevitable as lead times start to shorten and excess inventory is purged.

CapEx spend, as a percentage of semiconductor sales, is at a 20-year record high, with the massive 74 percent 2H-2021 CapEx splurge about to hit home in 2H-2022, just as market demand turns down. The current ASP plunge has been driven by memory, but all other sectors are weakening and are expected to follow suit in 1H-2023.

We reached the top of the semiconductor Super-Cycle roller-coaster in Q2-2022 and the downward plunge started in June. Sectors at the front are already feeling the impact; for those at the rear, the plunge has yet to start. The 17th industry down cycle has now definitely begun.

We have revised our 2022 forecast down to 4 percent growth, but held our 2023 outlook at a 22 percent decline, back to around US$450 billion, and, all things being equal, we should be back to single-digit positive growth in 2024.

Downturn Opportunities

Downturns are a structural part of the industry cycle, quite natural and normal. They are also the time when longer-term market share gains are best made.

It is also a time when innovation comes to the fore. Firms need to emphasize new products to differentiate themselves from their competitors and invent their way out of the crisis.

Innovation tends to slow during an upturn as foundries and IDMs ration R&D access to scarce wafer fab capacity. The opposite happens in a downturn when fab capacity becomes abundant.

As a result, R&D and new IC design activity always accelerates during a downturn and, as with the past cycles, the downturn beneficiaries will be the IC design houses, EDA industry and leading-edge suppliers.

Malcolm Penn

20 Sep 2022

Preview of the webinar proceedings here: https://youtu.be/PX9TmRTzD18

Order the webinar slides and recording here: https://www.futurehorizons.com/page/133/

Also Read:

Semiconductor Decline in 2023

Does SMIC have 7nm and if so, what does it mean

Samtec is Fueling the AI Revolution


Flex Logix: Industry’s First AI Integrated Mini-ITX based System
by Kalar Rajendiran on 09-22-2022 at 10:00 am


As the market for edge processing is growing, the performance, power and cost requirements of these applications are getting increasingly demanding. These applications have to work on instant data and make decisions in real time at the user end. The applications span the consumer, commercial and industrial market segments. AI hardware accelerator solutions are being sought to meet the needs of these applications.

With its focus on the embedded vision market, Flex Logix introduced its InferX X1 accelerator chip in 2019. The InferX X1 is the industry’s most efficient AI edge inference accelerator that can bring AI to embedded vision applications across multiple industries. Flex Logix has been working on making AI system adoption and deployment easier in a number of markets and applications. Ease of incorporation into the customers’ applications accelerates the adoption of any useful technology and the InferX X1 is no exception.

At the AI Hardware Summit this week, Flex Logix launched Hawk X1, the industry’s first Mini-ITX AI x86 based system card. I had an opportunity to chat about that announcement with Barrie Mullins, VP of Product Management for Flex Logix.

Enabling Easier Edge and Embedded AI Deployment

The Hawk X1 is designed for customers looking to upgrade their current Mini-ITX systems with AI, or quickly develop new edge AI appliances and solutions. As the bulk of existing mini-ITX systems are x86 based, the Hawk X1 will be a drop-in upgrade to these systems. This makes it easier for customers to get to market faster at a lower development cost and reduced risk. From an operating system perspective, the Hawk X1 supports both Linux and Windows.

AI Workflow for Development and Deployment

The Hawk X1 system leverages Flex Logix’s InferX accelerator chip, the industry’s most efficient AI inference chip for edge systems. It provides Flex Logix’s customers a price/performance/power advantage over competitive edge inference solutions. Flex Logix’s Easy Vision platform includes pre-trained ready-to-use models for object detection such as hard hat, people counting, face mask and license plate recognition. Customers can save over six months of product development time, additional system costs and power compared to alternate solutions.

Depending on the application, a customer may want to utilize the pre-trained ready-to-use AI models, load them onto the Hawk X1 board and run their application. Alternatively, they may want to develop their own models using the Easy Vision Platform and the InferX Model Development Kit (MDK). The MDK optimizes, quantizes and compiles the models into the format that the Hawk X1 understands. The customer then uses the Run Time Kit (RTK) to configure and manage the run-time execution.
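
To make that two-path flow concrete, here is a rough sketch (every name below is an illustrative placeholder, not Flex Logix’s actual MDK/RTK API):

```python
# Hypothetical sketch of the two deployment paths described above.
# All names here are illustrative placeholders only.

def mdk_compile(model_path: str) -> str:
    """Placeholder for the MDK step: optimize, quantize and compile
    a trained model into a Hawk X1-loadable artifact."""
    return model_path + ".x1"

class Runtime:
    """Placeholder for the RTK step: configure and manage execution."""
    def __init__(self, artifact: str):
        self.artifact = artifact

    def run(self) -> None:
        print(f"running {self.artifact} on Hawk X1")

def deploy(model_source: str, pretrained: bool) -> None:
    # Path 1: load a ready-to-use Easy Vision model as-is.
    # Path 2: compile a custom model through the MDK first.
    artifact = model_source if pretrained else mdk_compile(model_source)
    Runtime(artifact).run()

deploy("people_counting.x1", pretrained=True)
deploy("my_custom_model.onnx", pretrained=False)
```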

Hawk X1 Hardware Specification

The interfaces are ready for direct connection to cameras. The Hawk X1 includes two InferX X1 accelerators and an AMD quad-core processor to deliver the highest-performance system. Flex Logix plans to launch another version of the card with one InferX X1 chip and an AMD dual-core processor for customers who want a lower-performance system. Customers choose their own memory, giving them more control over the performance, power consumption and cost of the system. The Hawk X1 comes with a thermal solution that sits on top of the card to dissipate heat.

Target Markets and Applications

  • Safety and Security
    • Mask, personal protection equipment (PPE) detection, building access, data anonymization and privacy
  • Manufacturing and Industrial Optical Inspection
    • Employee safety, logistics and packaging, and inspection of parts, processes and quality
  • Traffic and Parking Management
    • Traffic junction monitoring, vehicle detection and counting, public and private parking structures, toll booths
  • Retail
    • Logistics, safety, consumer monitoring, automated checkout, and stock management
  • Healthcare:
    • Medical image analytics, patient monitoring, mask detection, staff and facility access control and safety
  • Agriculture
    • Crop inspection, weed and pest detection, automated harvesting, yield and quality analysis, animal monitoring and health analysis
  • Robotics
    • First/last mile delivery, forklifts, tuggers, drones, and autonomous machines

Benchmarking Hawk X1

The Hawk X1 offers better performance than all NVIDIA boards. The main competition in this space is Xavier AGX-based systems. In the chart below, you can see how the Hawk X1 compares against the Xavier AGX on popular and standard object detection models.

Hawk X1 Availability

Flex Logix is taking orders for delivery starting January 2023. The Hawk X1 is priced at $1,299 for order quantities of 1K units and up.

Summary

Through the latest addition to its product portfolio, Flex Logix has made AI system adoption and deployment easier for a number of market applications. The Hawk X1 in the mini-ITX form factor is deployment-ready for Safety and Security, Manufacturing and Industrial, and Traffic and Parking Management applications.

You can read the Hawk X1 product announcement here.

For more details, visit Flex Logix website.

Also Read:

Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform

EasyVision: A turnkey vision solution with AI built-in

WEBINAR: 5G is moving to a new and Open Platform O-RAN or Open Radio Access Network


Semiconductor Decline in 2023
by Bill Jewell on 09-22-2022 at 8:00 am


The semiconductor market dropped 0.8% in 2Q 2022 versus 1Q 2022, according to WSTS. The 2Q 2022 decline followed a 0.5% quarter-to-quarter decline in 1Q 2022. The 2Q 2022 revenues of the top 15 semiconductor suppliers matched the overall market results, with a 1% decline from 1Q 2022. The results by company were mixed. Memory suppliers SK Hynix and Micron Technology led with 2Q 2022 revenue growth of 13.6% and 11.0%, respectively. AMD’s revenues were up 11.3%, primarily due to its acquisition of Xilinx. The weakest companies were Nvidia, with a 19% decline due to weakness in gaming, and Intel, with a 16.5% decline because of a weak PC market.

The outlook for 3Q 2022 is also mixed. The strongest growth is from the companies primarily supplying analog ICs and discretes. STMicroelectronics has the highest expectations, with its 3Q 2022 revenue guidance a 10.5% increase from 2Q 2022. ST cited strong overall demand, particularly in the automotive and industrial segments. Automotive and/or industrial segments were also cited for expected 3Q 2022 revenue growth by Infineon Technologies, NXP Semiconductors, and Analog Devices. Analog Devices’ revenue was over $3 billion for the first time in 2Q 2022 (primarily due to its acquisition of Maxim Integrated Products last year) and made it on to our list of top semiconductor companies.

Weakness in the PC and smartphone markets were mentioned as major factors in expected 3Q 2022 revenue drops by MediaTek and Texas Instruments. Nvidia cited continuing weakness in gaming for an expected 12% decline. The largest decline in 3Q 2022 revenue will be from the memory companies. Micron Technology guided for a 21% decline. Samsung did not provide specific revenue guidance, but Dr. Kye Hyun Kyung, head of Samsung’s semiconductor business, said the second half of 2022 “looks bad.”

The weakness in the smartphone and PC markets is reflected in recent forecasts from IDC. Smartphone unit shipments are projected to decline 7% in 2022 following 6% growth in 2021. Smartphones are expected to return to 5% growth in 2023. PCs, which experienced 13% growth in 2020 and 15% growth in 2021 as a result of the COVID-19 pandemic, are forecast to decline 13% in 2022 and 3% in 2023. Automotive remains healthy, with LMC Automotive projecting light vehicle unit production will increase 6.0% in 2022 and 4.9% in 2023.

The global economic outlook is another factor pointing toward a slowing of the semiconductor market. Recent forecasts for global GDP growth in 2022 are in the range of 2.7% to 3.2%. The percentage point decline (or deceleration) from 2021 growth ranges from 2.9 points to 3.3 points. Our Semiconductor Intelligence model predicts a 3-percentage point deceleration in GDP growth will result in a 16-point deceleration in semiconductor market growth. Our current forecast of 5% semiconductor growth in 2022 is a 21-point deceleration from 26% growth in 2021. Global GDP is expected to show continued growth deceleration in 2023 of 0.3 to 1.0 points. However, a global recession is still a possibility. Bloomberg surveys put the probability of a recession in the next 12 months at 48% for the U.S. and 80% for the Eurozone.
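
Expressed as a simple linear rule of thumb (my own sketch of the relationship described above, not the actual Semiconductor Intelligence model):

```python
# Rule of thumb from the paragraph above: a 3-point GDP growth
# deceleration maps to a 16-point semiconductor growth deceleration,
# i.e. roughly 16/3 points of chip deceleration per GDP point.

def chip_growth_estimate(prior_chip_growth: float, gdp_decel: float) -> float:
    sensitivity = 16.0 / 3.0
    return prior_chip_growth - sensitivity * gdp_decel

# 2021 chip growth was 26%; a 3-point GDP deceleration implies ~10%.
# The actual 5% forecast is a steeper 21-point deceleration, reflecting
# the other headwinds discussed above.
print(chip_growth_estimate(26.0, 3.0))  # 10.0
```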

Our Semiconductor Intelligence forecast of a 5.0% increase in the semiconductor market in 2022 is the lowest among recent publicly available forecasts. Other recent projections of 2022 semiconductor market growth range from 6% (Future Horizons) to 13.9% (WSTS).

The semiconductor market will likely show at least five consecutive quarter-to-quarter declines from 1Q 2022 through 1Q 2023. If the global economy does not weaken more than current expectations, the semiconductor market should have a modest recovery in the second half of 2023. However, the quarterly trends will drive the market negative for the year 2023. Our Semiconductor Intelligence forecast is a 6.0% decline in 2023. Future Horizons is the most pessimistic with its August projection of a 22% drop in 2023. Gartner projects a decline of 2.5%. IDC and WSTS expect growth in 2023, but at a slower rate than 2022: 6.0% for IDC and 4.6% for WSTS.

After 2023, the semiconductor market should stabilize toward typical trends. The COVID-19-related shutdowns and resulting supply chain disruptions will be mostly resolved. Traditional market drivers, smartphones and PCs, should be back to normal growth. Emerging applications such as automotive and IoT (internet of things) will become increasingly important as market drivers.

Also Read:

Automotive Semiconductor Shortage Over?

Electronics is Slowing

Semiconductors Weakening in 2022


Load-Managing Verification Hardware Acceleration in the Cloud
by Bernard Murphy on 09-22-2022 at 6:00 am


There’s a reason the verification hardware accelerator business is growing so impressively. Modern SoCs – now routinely multi-billion gate devices – must be verified/validated against massively demanding test plans, requiring high levels of test coverage. Use cases extend all the way up to firmware, OSes, even application software, across a dizzying array of power saving configurations. Testing for functionality, performance, peak and typical power, security and safety goals. All while also enabling early software stack development and debug. None of this would be possible without hardware accelerators, offering many orders of magnitude higher verification throughput than is possible with software simulators.

Maximizing Throughput and ROI

Hardware accelerators are not cheap, but there is no other way to get the job done; SoC design teams must include accelerators in their CapEx budgeting. But they want to exploit that investment as fully as possible, ensuring machines are kept fully occupied and fairly allocated across product development teams.

In the software world, this load management and balancing problem is well understood. There are plenty of tools for workload allocation, offering a range of sophisticated capabilities. But they all assume a uniform software view of the underlying hardware: options can range across a spectrum of capacities and speeds while still allowing, at least in principle, any job to be virtualized anywhere in the cloud/farm. Not so with hardware-accelerated jobs, which must provide their own virtualization options given the radically different nature of their architectures.

Another wrinkle is that there are different classes of hardware acceleration, centered either on emulation or FPGA prototyping. Emulation is better for hardware debug, at some cost in speed. FPGA prototyping is faster but not as good for hardware debug. (GPUs are sometimes suggested as another option for their parallelism, though I haven’t heard of GPUs playing a major role so far in this space.)

Verifiers like to use emulators or prototypers in in-circuit emulation (ICE) configurations, connecting the design under test to real hardware. This directly mimics the hardware environment in which the chip under design must ultimately function. It also requires physical connectors, and hence application-specific physical setups, further constraining virtualization except where the hardware offers multiple channels between connectors and the core emulator/prototyper.

Swings and Roundabouts

Expensive hardware and multiple suppliers suggest an opportunity for allocation software to maximize throughput and ROI against a range of hardware options. Altair aims to tap this need with their Altair® Hero™ product, an enterprise job scheduler built for multi-vendor emulation environments. As their bona fides for this claim, they mention that they already have field experience deploying with Cadence Palladium Z1 and Z2, Synopsys ZeBu Z4 and Synopsys HAPS. They expect to extend this range over time to also include Cadence Protium and Siemens EDA Veloce. They also hint at a deployment in which users can schedule jobs in a hardware accelerator farm, choosing between Palladium and ZeBu accelerators.

In a mixed-vendor environment, Altair clearly has an advantage in providing a neutral front-end for user-selected job scheduling. Advantages are less clear if users hope for the scheduler to dynamically load-balance between different vendor platforms. Compilation for FPGA-based platforms is generally not hands-free; a user must pick a path before they start. Unless perhaps the flow compiles for both platforms in parallel, allowing for a switch before running? Equally, scheduling ICE flows must commit to a platform up-front, given physical connectivity demands.
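
To make the constraint concrete, here is a toy scheduler sketch of my own (not Altair Hero’s implementation): ICE jobs and committed FPGA compiles are pinned to one platform, while fully virtualizable jobs go wherever capacity is free.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str        # e.g. "Palladium Z2" or "ZeBu Z4"
    capacity: int    # concurrently runnable jobs
    running: int = 0

@dataclass
class Job:
    name: str
    committed_to: str | None = None  # ICE or an FPGA compile pins the platform

def schedule(jobs: list[Job], platforms: list[Platform]) -> dict[str, str]:
    """Greedy placement that honors up-front platform commitments."""
    placement: dict[str, str] = {}
    for job in jobs:
        candidates = [p for p in platforms
                      if job.committed_to in (None, p.name)
                      and p.running < p.capacity]
        if candidates:
            # least-loaded first, to keep expensive machines evenly busy
            target = min(candidates, key=lambda p: p.running / p.capacity)
            target.running += 1
            placement[job.name] = target.name
    return placement
```

Even in this toy model, a pinned job that finds its platform full simply waits, which is exactly the throughput loss a smarter scheduler would try to design away.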

Another point to consider is that some vendors closely couple their emulation and prototyping platforms in order to support quick debug handoff between the two flows. Treating such platforms as independently allocatable would undermine this advantage.

In a single-vendor environment, I remain open-minded. The hardware guys are very good at their hardware and have apparently put work into supporting virtualization. But could a 3rd party with significant cloud experience add scheduling software on top, to better optimize throughput and ROI in balancing jobs? I don’t see why that shouldn’t be possible.

You can learn more from this Altair white paper.



The Truly Terrifying Truth about Tesla
by Roger C. Lanctot on 09-21-2022 at 2:00 pm


When I think about Tesla’s impact on the wider automotive industry, I conjure images of auto CEOs waking up in the middle of the night in cold sweats. It isn’t just that Tesla virtually or actually gets away with murder, it’s that the level of impunity is almost completely unchallenged by regulators and is actually celebrated by consumers – or at least Tesla owners.

Nowhere is this clearer than on the question of build quality. Tesla vehicles are so notorious for uneven body panel gaps, mismatched paint, and broken windshields that posters on Tesla forums provide checklists for Tesla buyers to thoroughly inspect their cars upon delivery.

This behavior would not be tolerated for a nanosecond by a buyer of a Chevy, Ford, Toyota, or BMW. Consumers would refuse to accept or take delivery of a new Honda if the vehicle had a single scratch. Not so with Tesla. Some Tesla buyers expect to take their brand new vehicles to a car detailer for a remedial once over after taking delivery.

It’s not just that the average Tesla buyer has likely waited months for their vehicle. Teslas are given a pass because the company has tapped into the vast, aching consumer dissatisfaction with existing vehicles and the entire business of buying, owning, insuring, maintaining, and even driving a typical car.

Musk, Tesla’s CEO, takes it even further, increasingly positioning Tesla ownership as both a privilege and a responsibility. Tesla is engaged in a great project of refining automated or semi-automated driving, and every Tesla owner is contributing to this effort – especially drivers of Teslas with the Full Self-Driving beta. Musk has told these folks – who have paid $15K for the privilege – that they have been selected or accepted into the beta program.

A very different picture emerges among legacy auto makers competing with Tesla. Prior to her ascension to the position of CEO at General Motors, Mary Barra captured the car marketing malaise in the U.S. and globally when she told a gathering of female executives that her goal was to end the era of sub-standard vehicles: “No more crappy cars,” she said.

Auto makers conduct studies to better understand what features, colors, and smells consumers prefer, how much they want to pay, what services they’d like to subscribe to, and how they feel about the brand. Legacy auto makers obsess over user interfaces, industry standards and safety protocols.

To top these marketing efforts off, car companies then spend billions of dollars on advertising trying to convince consumers that the vehicles they have built are precisely what the consumer is looking for to enhance their social status, help them have more fun, or make them more productive.

Tesla does none of this. Tesla’s customer engagement is nothing more than a fait accompli. You get what you’re gonna get and you’re going to like it because there is no alternative. Not only that, you’re going to wait for it – maybe months.

The message to the Tesla consumer and from the Tesla consumer is the same. No matter what shortcomings there may be in the Tesla product, anything is better than the “crappy cars” available from all other auto makers. All of them.

It is not just the cars, of course. It’s the entire experience. The dealerships. The insurance companies. The mechanics. The gas stations. Tesla offers a vast parallel universe where sales, service, recharging, and insurance are available under one roof. It’s a value proposition that legacy auto makers simply can’t match.

But the most galling manifestation of this unique customer relationship is the consumer acceptance of bad build quality – an enduring aspect of Tesla’s brand value proposition. It’s almost a feature. It’s almost laughable. It’s actually quite serious, but consumers routinely cut Tesla a bit of slack.

All of this was brought home to me during Green Hills Software CEO Dan O’Dowd’s advertising campaign, part of his Dawn Project to have Tesla sanctioned for making what he saw as false claims regarding the performance of its full-self-driving (FSD) beta software. O’Dowd may have exaggerated the weaknesses of Tesla’s software – with images of cardboard children being repeatedly run over – but observers could plainly see that not only was O’Dowd incapable of stirring any consumer outrage, he raised concerns that outraged Tesla fanboys might actually turn on him.

O’Dowd pointed out the obvious weaknesses of Tesla’s FSD beta, and consumers shrugged. Consumers have been so hungry for a transformative driving experience and a transcendent transportation experience – which many feel they have found in Tesla – that they will forgive just about anything.

Teslas have proven that they can last forever, maintain or improve their resale value after the initial sale (whether purchased new or used), and service (when needed) is delivered directly to the consumer. It has reached the point that any time a consumer has a problem with their Tesla, they assume it’s their fault.

This is the truly terrifying thing about Tesla. No other auto maker has this kind of relationship with their customers. No other auto maker can count on this level of pre-emptive forgiveness for any and all sins. And the cars are sufficiently expensive and represent such a commitment and social and emotional statement that consumers will defend the company like they would an ex-spouse being criticized by a relative. Tesla owners are deeply invested in the experience of owning a Tesla. It really is terrifying.

What might one day be terrifying to Tesla is when the day arrives, as it no doubt will, that Tesla consumers begin to regard the company as just another auto maker – just like all the rest. That day is sure to come, but until then Tesla will continue to ride the benefit of the doubt wave at the vanguard of global vehicle electrification. Still, if Tesla can’t get the paint right, what does that say about what might be wrong with its beta software?

Also Read:

GM Should BrightDrop-Kick Cruise

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

GM Buyouts: Let’s Get Small!

Automotive Semiconductor Shortage Over?


Die-to-Die Interconnects using Bunch of Wires (BoW)
by Daniel Payne on 09-21-2022 at 10:00 am


Chiplets are a popular and trending topic in the semiconductor trade press, and I read about SoC disaggregation at shows like ISSCC, Hot Chips, DAC and others. Once an SoC is disaggregated, the next challenge is deciding on the die-to-die interconnect approach. The Open Compute Project (OCP) started 10 years ago as a way to share designs of data center products between a diverse set of companies, including ARM, Meta, IBM, Intel, Google, Microsoft, Dell, NVIDIA, HP Enterprise, Lenovo and others. In July the OCP Foundation announced its approach to SoC disaggregation with an interface specification, dubbed Bunch of Wires (BoW).

I contacted Elad Alon, CEO and co-founder of Blue Cheetah Analog Design to learn about BoW and how it compared to the UCIe approach. Here’s a quick comparison:

  • BoW
    • Focused on die disaggregation
    • Open standard from the start
    • Allows design freedom and application-specific optimization
  • UCIe
    • Focused on package aggregation
    • Interoperability favored over design freedom, similar to PCIe
    • Specified by Intel, then other members added

In a package aggregation approach, there would’ve been separate chips to begin with, but now the chiplets are brought into the same package – almost PCB-style thinking. With die disaggregation, all of the chiplet functions would’ve been combined in a single SoC, but the die size was too large or too costly. There’s room for both the BoW and UCIe approaches as chiplet use expands.

Bunch of Wires

BoW is an open PHY specification for die-to-die (D2D) parallel interfaces that can be implemented in organic laminate or advanced packaging technologies. Here’s a diagram of what it looks like, along with some metrics:

BoW Features

With BoW the D2D interfaces can be optimized to the host chiplet products, using minimal required features while supporting interoperability. A slice for BoW contains 16 data wires, a source-synchronous differential clock, and two optional signals – FEC (error control), AUX (DBI, repair, control).

BoW signals

A stack is a group of slices extending towards the inside of the chiplet, and a link is one or more slices forming a logical interface from one chiplet to another.

Slice, Stack, Link

The BoW specification mandates only the wire order on the package, not a specific bump map, giving you some flexibility while retaining interoperability. All PHYs must support 0.75V for compatibility across a wide range of process technologies, although systems can use other voltages to optimize for performance, BER or reach.
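
As a minimal data-model sketch of the slice/stack/link hierarchy (my own illustration; it assumes the optional FEC and AUX signals are one wire each):

```python
from dataclasses import dataclass

# Illustrative model of the BoW hierarchy described above: a slice is
# 16 data wires plus a source-synchronous differential clock pair, with
# optional FEC and AUX signals; a link is one or more slices forming
# one chiplet-to-chiplet logical interface.

@dataclass
class Slice:
    data_wires: int = 16
    clock_wires: int = 2          # differential clock pair
    fec: bool = False             # optional error control (assumed 1 wire)
    aux: bool = False             # optional DBI/repair/control (assumed 1 wire)

    def wire_count(self) -> int:
        return self.data_wires + self.clock_wires + int(self.fec) + int(self.aux)

@dataclass
class Link:
    slices: list[Slice]

    def wire_count(self) -> int:
        return sum(s.wire_count() for s in self.slices)

# e.g. a 4-slice link with both optional signals enabled per slice:
link = Link([Slice(fec=True, aux=True) for _ in range(4)])
print(link.wire_count())  # 80
```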

BoW interoperability and flexibility

BoW Adoption

A version 1.0 specification was formally approved and released in July 2022, and an ecosystem has already formed as BoW is being designed into products using multiple process nodes: 65nm, 22nm, 16nm, 12nm, 6nm, 5nm.

Companies using or supporting BoW:

  • Blue Cheetah Analog Design
  • eTopus and QuickLogic – eFPGA chiplet template
  • Samsung – foundry
  • NXP – BoW PHY design
  • Keysight – test and measurement
  • Ventana Micro Systems
  • DreamBig – hyperscale SmartNIC/DPU chiplet
  • d-Matrix – AI compute platform
  • Netronome – SmartNIC

Both Blue Cheetah and d-Matrix have taped out Bunch of Wires test chips, so we can expect silicon results later this year. You can even get involved with the weekly meetings, or start by reading the 33-page specification. BoW is an open specification, so there’s no NDA to sign or legal paperwork.

There is an OCP Global Summit scheduled for October 18-20, and there’s an ODSA session on BoW.

Summary

Chiplet-based electronic systems are quickly emerging from semiconductor companies around the globe, so it’s an exciting time to see chiplet interconnect standards arrive to provide some standardization. The Open Compute Project has good momentum with version 1.0 of the BoW specification, and you can expect more news as companies announce products that use this interconnect later this year.

There’s even a “plugfest” for BoW PHY interoperability testing, and the interoperability community has several participants: Google, Cisco, Arm, Meta, JCET, d-Matrix, Blue Cheetah, Analog Port.

Related Blogs


Verification IP Hastens the Design of CXL 3.0
by Dave Bursky on 09-21-2022 at 6:00 am


Although version 2.0 of the Compute Express Link (CXL) standard is just making it into new designs, the next generation, version 3.0, has been approved and is now ready for designers to implement the new silicon and firmware needed to meet the new standard’s performance specifications. CXL, an open industry-standard interconnect, defines an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion and Accelerators. (For more about the CXL standard, check out the CXL Consortium website – www.computeexpresslink.org.)

The CXL Consortium is an open industry standard group that was created to develop technical specifications that enable companies to deliver breakthrough performance for emerging usage models. Also able to support an open ecosystem for data-center accelerators and other high-speed enhancements, the standard offers coherency and memory semantics using high-bandwidth, low-latency connectivity between host processor and devices such as accelerators, memory buffers, and smart I/O devices. The updated standard (version 3.0) provides a range of advanced features and benefits including doubling bandwidth with the same latency (see the table).
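
A back-of-envelope look at where that doubling comes from (raw per-direction rates only, ignoring FLIT and encoding overhead):

```python
# Raw per-direction link bandwidth: transfers/sec x lanes / 8 bits per byte.
def raw_bandwidth_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes / 8

# CXL 2.0 runs over the PCIe 5.0 PHY at 32 GT/s; CXL 3.0 moves to the
# PCIe 6.0 PHY at 64 GT/s, doubling raw bandwidth per lane.
print(raw_bandwidth_gb_per_s(32, 16))  # 64.0 GB/s for a x16 link
print(raw_bandwidth_gb_per_s(64, 16))  # 128.0 GB/s for a x16 link
```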

To help speed the design of the new chips needed to implement CXL 3.0, Avery Design Systems has developed verification IP (VIP) and virtual platform solutions supporting verification for the first wave of CXL 3.0 designs. As explained by Chris Browy, vice president of sales/marketing at Avery, “we can enable leading developers of server processors, managed DRAM and storage class memory (SCM) buffers, switch/retimer, and IP companies to rapidly meet the growing needs for the CXL datacenter ecosystem in 2022 and beyond. Our collaboration with key ecosystem companies allows Avery to deliver best-in-class, robust CXL 3.0 VIP solutions that streamline the design and verification process and foster the rapid adoption of the CXL standard by the industry. Our CXL virtual platform and VIP co-simulation enables complete CXL system-level bring-up of SoCs in a Linux environment.”

Avery provides a complete SystemVerilog/UVM verification solution including models, protocol checking, and compliance test suites for PCIe® 6.0 and CXL 3.0 for CXL hosts, Type 1-3 devices, switches, and retimers. The verification solution is based on an advanced UVM environment that incorporates constrained-random traffic generation; robust packet-, link-, and physical-layer controls and error injection; protocol checks and coverage; functional coverage; protocol analyzer-like features for debugging; and performance analysis metrics. Thanks to the advanced capabilities of the Avery VIP, claims Browy, engineers can work more efficiently, develop more complex tests, and work on more complex topologies, such as multi-path, multi-link solutions. The company’s compliance test suites offer effective core-through-chip-level tests, including those used in compliance workshops as well as extended tests developed by Avery to cover the specification features.

The VIP suite adds key CXL 3.0 updates to the 2.0 VIP offering. Some of the additions include:

  • Double the bandwidth using PCIe 6.0 PHY for 64 GT/s
  • Fabric capabilities
    • Multi-headed and fabric-attached devices
    • Enhanced fabric management
    • Composable disaggregated infrastructure
  • Improved capability for better scalability and resource utilization
    • Enhanced memory pooling
    • Multi-level switching
    • Direct memory / peer-to-peer accesses by devices
    • New symmetric memory capabilities

Additional capabilities and features included in the VIP CXL 3.0 release include:

  • Additional CXL switch agent with fabric manager support
  • Support for AMBA® CHI to CXL/PCIe via CXS
  • Dynamic configuration of VIP for legacy PCIe, CXL 3.0, 2.0 or CXL 1.1 including CXL device types 1-3
  • Realistic traffic arbitration among CXL.IO, CXL.Cache, CXL.Mem and CXL control packets
  • Unified user application data class for both pure PCIe and CXL traffic

In addition to the CXL 3.0 support mentioned above, Avery has recently announced extensions to its QEMU-CXL virtual platform specifically for the 3.0 version. The enhancements include the latest Linux kernel 5.19.8 supporting CXL, interoperability tests such as using ndctl for memory pooling provisioning, resets and Sx states, and Google stressapptest using randomized traffic from processor to HDM, creating realistic high-workload situations.

Co-simulating the SoC RTL with a QEMU open-source virtual machine emulator environment allows software engineers to natively develop and build custom firmware, drivers, and applications. They can then run them unaltered as part of a comprehensive system-level validation process using the actual SoC RTL hardware design. In a complementary manner, hardware engineers can evaluate how the SoC performs by executing UEFI and OS boot and custom driver initialization sequences. Additionally, designers can run real application workloads and utilize the CXL protocol-aware debugging features of the VIP to effectively investigate any hardware-related issues.

“Combined with our CXL compliant VIP, our QEMU CXL virtual platform and VIP co-simulation enables complete CXL system-level bring-up of SoCs in a Linux environment. With this approach customers can address new CXL 3.0 design and verification challenges even when no mainstream commercial platforms support the latest standards,” said Chris Browy, vice president sales/marketing at Avery.

Browy feels that Avery’s comprehensive VIP and virtual platforms enable system and SoC design teams to achieve dramatic functional verification productivity improvements.

www.avery-design.com

cbrowy@avery-design.com

Also Read:

Verifying Inter-Chiplet Communication

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions


Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC
by Daniel Nenni on 09-20-2022 at 10:00 am

Thermal, mechanical, electrical, and power analysis must be performed simultaneously to capture the interdependencies that are accentuated in multi-die 3D-IC designs

Over its 40+ year history, electronic design automation (EDA) has seen many companies rise, fall, and merge. In the beginning, in the 1980s, the industry was dominated by what came to be known as the big three — Daisy Systems, Mentor Graphics, and Valid Logic (the infamous “DMV”). The Big 3 has morphed over the years, eventually settling in for a lengthy run as Cadence, Synopsys, and Mentor. According to my friend Wally Rhines’ always informative presentations, the Big 3 have traditionally represented over 80% of total EDA revenue. However, EDA is now absolutely led by four major billion-dollar-plus players, plus a collection of much smaller niche suppliers. The “Big 3” is now the “Big 4,” consisting of Ansys, a $2 billion Fortune 500 company, Synopsys, Cadence Design Systems, and Siemens Digital Industries Software (née Mentor Graphics). Much smaller, niche players like Silvaco (the next largest) follow in the distance with a cool $50 million in revenues.

Ansys has a 50-year history in engineering simulation, but its involvement in EDA began with the acquisition of Apache in 2011. In the process, Ansys acquired RedHawk — a verification platform specifically for the power integrity and reliability of complex ICs — along with a suite of related products that included Totem, PathFinder and PowerArtist. This evolution continued with the acquisition of Helic, with its suite of on-chip electromagnetic verification tools, and subsequent acquisitions in domains such as photonics, to address key verification issues in IC development.

To give context to Ansys’ role in the current EDA ecosystem, the chip design flow consists of 40 or more individual steps executed by a wide range of software tools. No single company can hope to cover the entire flow. While every stage is important, there are certain extremely difficult, critical verification steps mandated by semiconductor manufacturers that all chips must pass. These signoff verifications are required before foundries will accept a design for manufacture.

Ansys multiphysics solutions for 3D-IC provide comprehensive analysis along three axes: across multiple physics, across multiple design stages, and across multiple design levels

Referred to as golden signoff, they include:

  • Design Rule Checking (DRC)
  • Layout vs. Schematic (LVS)
  • Timing Signoff (STA: static timing analysis)
  • Power Integrity Signoff (EM/IR: electromigration/voltage drop).

The reliability and confidence in these checks rests on industry reputations and experience built up over decades, including years of close collaboration with foundries, which makes golden signoff tools very difficult and risky to displace. Today, virtually every IC design relies on Calibre from Siemens and PrimeTime from Synopsys. Joining these two longstanding golden tools, Ansys RedHawk-SC™ (for digital design) and Ansys Totem (for analog/mixed signal (AMS) design) are golden tools for power integrity signoff, critical for today’s advanced semiconductor designs.

Foundry Certification

Beyond the signoff certification of RedHawk-SC and Totem for power integrity, other Ansys tools have also been qualified by the foundries for a range of verification steps including on-chip electromagnetic modeling (RaptorX, Exalto, VeloceRF), chip thermal analysis (RedHawk-SC Electrothermal), and package thermal analysis (Icepak).

Due to limited engineering resources and demanding schedules, foundries typically work with just a few select EDA vendors as they develop each new generation of silicon processes. Most of these collaborations now exist within the bubble of the Big 4, hinging on relationships built on a reputation for delivering specific technological capabilities, working relationships forged over many years, and the reliability of those tools established by working with a wide spectrum of customers over many technology generations.

Ansys Brings EDA Into the 3D Multiphysics Workflow

The evolution of semiconductor design is moving beyond scaling down to ever-smaller feature sizes and is now addressing the interdependent system challenges of 2.5D (side-by-side) and 3D (stacked) integrated circuits (3D-IC). These disaggregate traditional monolithic designs into a set of ‘chiplets’ that offer benefits in yield, scale, flexibility, reuse, and heterogeneous process technologies. But in order to access these advantages, 3D-IC designers must grapple with the significant increase in complexity that comes with multi-chip design. Many more physical effects must be analyzed and controlled than in traditional single-chip designs, and a broad suite of physical simulation and analysis tools is critical to manage the added complexity of the multiphysics involved.

Ansys has strategically positioned itself to take on these challenges as an industry leader by leveraging its lengthy, broad multiphysics simulation experience with updated Redhawk-SC and Totem capabilities to support advances in power integrity for 3D-IC. This includes brand new capabilities like RedHawk-SC Electrothermal that are targeted specifically at 3D-IC design challenges with thermal and high-speed integrity.

Over the past few years, Ansys has been recognized by TSMC for its critical role in the EDA design flow. In 2020, Ansys achieved early certification of its advanced semiconductor design solution for TSMC’s high-speed CoWoS (Chip-on-Wafer-on-Substrate) and InFO (Integrated Fan-Out) 2.5D and 3D packaging technologies. Continued successful collaboration with TSMC has delivered a hierarchical thermal analysis solution for 3D-IC design. In a more recent collaboration, Ansys RedHawk-SC and Totem achieved signoff certification for TSMC’s newest N3E and N4P process technologies. Similar collaborations for advanced processes, multi-die advanced packaging, and high-speed design have led to certifications from Samsung and GlobalFoundries. Ansys is even moving beyond foundry signoff and certification to define reference flows incorporating these tools, such as TSMC’s N6 radio frequency (RF) design reference flow.

TSMC has also recognized Ansys with multiple Partner of the Year Awards in the past 5 years, most recently in:

  • Joint Development of 4nm Design Infrastructure for delivering foundry-certified, state-of-the-art power integrity and reliability signoff verification tools for TSMC N4 process
  • Joint Development of 3DFabric™ Design Solution for providing foundry-certified thermal, power integrity, and reliability solutions for TSMC 3DFabric™, a comprehensive family of 3D silicon stacking and advanced packaging technologies

Achieving Greater Efficiency through Engineering Workflows

As more system companies embark on designing their own bespoke silicon and 3D-IC technology becomes more pervasive, more physics must be analyzed, and they must be analyzed concurrently, not in isolation. Multiphysics is not merely multiple physics. Building a system with several closely integrated chiplets is more complex, so more physical/electrical issues come into play. In response, Keysight, Synopsys and others have chosen to partner with Ansys, recognizing the value of its open and extensible multiphysics platforms. Keysight has integrated Ansys HFSS into their RF flow, while Synopsys has tightly integrated Ansys tools into their IC design flow.

Ansys is well-positioned to accelerate 3D-IC system design, offering essential expertise in different disciplines — in EDA and beyond — for an efficient workflow that spans a range of physics in virtually any field of engineering. For example, Ansys solutions support the complete thermal analysis of a 3D system, including the application of computational fluid dynamics to capture the influence of cooling fans, and mechanical stress/warpage analysis to ensure system reliability despite differential thermal expansion of the multiple chips. Ansys even provides technology to address manufacturing reliability, predicting when a chip will fail in the field. These products enable the understanding of silicon and systems engineering workflows from start to finish.

Ansys’ influence as a leader in physics spans decades. It extends beyond multiple physics to multiphysics-based solutions that simultaneously consider the interactions inherent in 3D-IC systems development — in thermal analysis, computational fluid dynamics for cooling, mechanical, electromagnetic analysis of high-speed signals, low-frequency power oscillations between components, safety verification, and more, all within the context of the leading EDA flows. And Ansys’ open and extensible analysis ecosystem connects to other EDA tools and the wider world of computer-aided design (CAD), manufacturing, and engineering.

Summary

There’s little doubt that 3D-IC innovation is accelerating. As systems companies expand further into 3D-IC, they will continue to look to, and trust Ansys solutions in support of their IC designs. To date, the vast majority of the world’s chip designers rely on Ansys products for accurate power integrity analysis. Ansys provides cyber-physical product expertise, with an acute understanding of silicon and system engineering workflows. With one foot in the semiconductor world, and another in the wider system engineering world, Ansys is uniquely positioned to provide broader multiphysics solutions for 2.5D/3D-IC that will continue to grow its footprint in EDA. The EDA Big 3 is now the Big 4, absolutely.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

What Quantum Means for Electronic Design Automation

The Lines Are Blurring Between System and Silicon. You’re Not Ready.


Finally, A Serious Attack on Debug Productivity
by Bernard Murphy on 09-20-2022 at 6:00 am

Verisium min

Verification technologies have progressed in almost all domains over the years. We’re now substantially more productive in creating tests for block, SoC and hybrid software/hardware verification. These tests provide better coverage through randomization and formal modeling. And verification engines are faster – substantially faster in hardware accelerators – and higher capacity. We’ve even added non-functional testing, for power, safety and security. But one area of the verification task – debug – has stubbornly resisted meaningful improvements beyond improved integration and ease of use.

This is not an incidental problem; debug now accounts for almost half of verification engineer hours on a typical design. Effective debug depends on expertise and creativity, and these tasks are not amenable to conventional algorithmic solutions. Machine learning (ML) seems an obvious answer: capture all that expertise and creativity in training. But you can’t just bolt ML onto a problem and declare victory. ML must be applied intelligently (!) to the debug cycle. There has been some application-specific work in this direction, but no general-purpose solutions of which I am aware. Cadence has made the first attack I have seen on that bigger goal, with their Verisium™ platform.

The big picture

Verisium is Cadence’s name for their new AI-driven verification platform. This subsumes the debug and vManager engines for those tracking product names, but what is most important is that this platform now becomes a multi-run, multi-engine center applying AI and big data methods to learning and debug. Start with the multi-run part. To learn you need historical data; yesterday the simulation was fine, today we have bugs – what changed? There could be clues in intelligent comparison of the two runs. Or in checked-in changes to the RTL, or in changes in the tests. Or in deltas in runs on other engines – formal for example. Maybe even in hints further afield, in synthesis perhaps.

Tapping into that information must start with a data lake repository for run data. Cadence has built a platform for this also, which they call JedAI, for Cadence Joint Enterprise Data and AI Platform. Simulation trace files, log files, even compiled designs go into JedAI. Designs and testbenches can stay where they are normally stored (Perforce or GitHub, for example). From these, Verisium can easily access design revs and check-in data.

Drilling down

Now for the intelligent part of applying ML to all this data in support of much faster debug. Verisium breaks the objective down into four sub-tasks. Bug triage is a time-consuming task for any verification team: grouping bugs with a likely common cause minimizes redundant debug effort. This task is a natural candidate for ML, based on experience from previous runs pointing to similar groupings. AutoTriage provides this analysis.

SemanticDiff identifies meaningful differences between RTL code checkins, providing another input to ML. WaveMiner performs multi-run bug root-cause analysis based on waveforms. This looks at passing and failing tests across a complete test suite to narrow down which signals and clock cycles are suspect in failures. Verisium Debug then provides a side-by-side comparison between passing and failing tests.
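
As a general illustration of that pass/fail comparison idea (a toy heuristic of my own, not Cadence’s WaveMiner algorithm), one could rank signals by how differently their observed values distribute across passing versus failing runs:

```python
def suspect_signals(runs: list[dict], passed: list[bool]) -> list[tuple[str, float]]:
    """Rank signals by divergence of observed values between passing and
    failing runs; each run maps signal name -> a hashable observed value."""
    scores = {}
    for sig in runs[0]:
        pass_vals = {run[sig] for run, ok in zip(runs, passed) if ok}
        fail_vals = {run[sig] for run, ok in zip(runs, passed) if not ok}
        union = pass_vals | fail_vals
        overlap = pass_vals & fail_vals
        # 1.0 when pass/fail value sets are disjoint: a strong suspect
        scores[sig] = 1 - len(overlap) / len(union) if union else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The real tools work on full waveforms across clock cycles; the point here is only the principle of contrasting passing and failing populations to localize suspects.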

Cadence is already engaging with customers on another component called PinDown, an extension which aims to predict bugs on check-in. This looks both at historical learning and at behavioral factors like check-in times to assess the likely risk in new code changes.

Putting it all together

First, a caveat. Any technique based on ML will return answers based on likelihood, not certainties. The verification team will still need to debug, but they can start closer to likely root causes and can get to resolution much faster. Which is a huge advance over the way we have to do debug today. As far as training is concerned, I am told that AutoTriage requires 2-3 regressions’ worth of data to start to become productive. PinDown bug prediction needs a significant history in the revision control system, but if that history exists, it can train in a few hours. Looks like training is not a big overhead.

There’s a lot more that I could talk about, but I’ll wrap with a few key points. This is the first release of Verisium, and Cadence will be announcing customer endorsements shortly. Further, JedAI is planned to extend to other domains in Cadence. They also plan APIs for customers and other tool vendors to access the same data, acknowledging that mixed vendor solutions will be a reality for a long time 😊

I’m not a fan of overblown product reviews, but I feel more than average enthusiasm is warranted here. If it delivers on half of what it promises, Verisium will be a ground breaker. You should check it out.



WEBINAR: O-RAN Fronthaul Transport Security using MACsec
by Daniel Nenni on 09-19-2022 at 10:00 am


5G provides a range of improvements over existing 4G LTE mobile networks in capacity, speed, latency and security. One of the main improvements is in the 5G RAN: it is based on a virtualized architecture where functions can be centralized close to the 5G core for economy, or distributed as close to the edge as possible for lower-latency performance.

SEE THE REPLAY HERE

The functional split options for the baseband station processing chain result in a separation between Radio Units (RUs) located at cell sites implementing lower-layer functions, and Distributed Units (DUs) implementing higher-layer functions.

This offers centralized processing and resource sharing between RUs, simple RU implementation requirements, easy function extensibility, and easy multi-vendor interoperability. The fronthaul is defined as the connectivity in the RAN infrastructure between the RU and the DU.

The O-RAN Alliance, established in February 2018, is an initiative to standardize the RAN with open interoperable interfaces between the radio and signal processing elements to facilitate innovation and reduce costs by enabling multi-vendors interoperable products and solutions while consistently meeting operators’ requirements.

The O-RAN Alliance specifies that the fronthaul support the Lower Layer Split 7-2x between the O-RAN Radio Unit (O-RU) and the O-RAN Distributed Unit (O-DU). The O-RAN fronthaul is divided into planes carried over different encapsulation protocols: Control Plane (C-Plane), User Plane (U-Plane), Synchronization Plane (S-Plane), and Management Plane (M-Plane). These planes carry very sensitive data and are constrained by strict performance requirements.


For its ability to mix different traffic types and its ubiquitous deployment, Ethernet is the preferred packet-based transport technology for the next-generation fronthaul. An insecure Ethernet transport network, however, can expose the fronthaul to different types of threats that can compromise the operation of the network.

For example, data can be eavesdropped due to the packets’ clear-text nature, and the lack of authentication can allow an attacker to impersonate a network component. This can result in manipulation of the data planes that can be used maliciously to cause a complete network denial-of-service, making a security solution for the O-RAN fronthaul critical.

In this webinar, we will take a look at MACsec as a compelling security solution for the O-RAN fronthaul. We will examine the very sensitive data that the fronthaul transports, its strict high-performance requirements, and the urgent need to secure it against several threats and attacks.

We will learn about the features MACsec provides to protect the fronthaul, together with its implementation challenges. Finally, we will see how the Comcores MACsec solution can be integrated and customized for Open RAN projects, accelerating development and reducing risk and cost.
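
MACsec’s default cipher suite is GCM-AES-128: authenticated encryption that delivers exactly the confidentiality and origin-authentication properties discussed above. Here is a minimal sketch of that primitive using Python’s cryptography package (illustrating AES-GCM itself, not a full MACsec frame implementation; the payload and header bytes are made-up stand-ins):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # plays the role of a MACsec SAK
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # MACsec builds its 96-bit IV from the SCI + packet number

payload = b"fronthaul U-Plane data"  # made-up stand-in for eCPRI IQ samples
header = b"MACs + SecTAG"            # authenticated but sent in the clear

ciphertext = aesgcm.encrypt(nonce, payload, header)  # encrypts and appends the tag (ICV)
assert aesgcm.decrypt(nonce, ciphertext, header) == payload  # raises on any tampering
```

Because the header is covered by the authentication tag, an attacker can neither read the payload nor impersonate a network component by forging frame headers, addressing the two threats described above.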

Also read:

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

CEO Interview: John Mortensen of Comcores

LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface