PCIe design workflow debuts simulation-driven virtual compliance
by Don Dingee on 07-16-2024 at 6:00 am

PCIe design complexity continues rising as the standard for intrasystem communication evolves. An urgent need for more system bandwidth drives PCIe interconnects to multi-lane, multi-link, multi-level signaling. Classical PCIe design workflows leave designers with most of the responsibility for getting the requisite interconnect details right. These classical workflows also defer compliance testing until physical realization – when it can be too late to fix unforeseen chip problems. Keysight is revamping the PCIe design workflow with smart workflow automation in System Designer for PCIe®, including IBIS-AMI model generation and simulation-driven virtual compliance, speeding design cycles and reducing risk in a shift-left approach. We discussed the technology with Hee-Soo Lee, Director of High-Speed Digital Segment at Keysight.

Replacing tedious setups and hard debugging with workflow automation

Traditional high-speed digital design requires PCIe designers to complete tedious manual design setups. In the example below, designers face a choice of whether to use a retimer (RT) and where to place it in the signal chain between the root complex (RC) and the endpoint (EP). Many scenarios benefit from an RT, but adding one introduces two schematics – one between the RC and RT, another between the RT and EP – each requiring individually wired signal connections from S-parameter blocks to signal drivers. Creating all the connections for complex multi-lane, multi-link designs can be very tedious.

Classical PCIe design workflow adding a retimer with two schematics

Parameter sweeps in conventional high-speed digital simulations span many points, resulting in large data sets, and gathering insights becomes time-consuming. If eye diagrams from the analysis are closed, debugging channel performance is the first step, but it becomes a problem as the connections around the S-parameter blocks must be unwired and manually terminated for further simulation. Measurement probes also require manual insertion. Any mishaps in wiring or unwiring signals and terminations inject errors into debugging, requiring more investigation. If manual tasks like these become automated, designers can spend more time optimizing designs.

Lee contrasts this classical approach with the smart PCIe design workflow using Keysight’s Advanced Design System (ADS) and System Designer for PCIe®. “One schematic implements a topology with the RootComplex and EndPoint placed as smart components,” he says. “Smart bus wire makes all the connections accurately, automated with a click.” Below is the System Designer for PCIe® representation of Topology 3 above. Automatic placement of PCIe probes simplifies multi-dimensional data capture using smart measurement technology. What once consisted of hours of work in layout is now a few minutes, and preparing for and running a PCIe simulation is a few seconds.

Smart components for the same PCIe retimer placed in System Designer for PCIe®

Control over parameters and choice of simulations in one interface

Each of these blocks is configurable in the System Designer for PCIe® user interface. Expanding the example above to look at the PCIe Mid-Channel Repeater block shows some options. Designers can choose whether the block is a redriver or retimer, select how many differential pairs are in the link, and select an IBIS file for the model. Smart wire makes the connections with the correct parameters automatically. Designers can also set up bit-level system behaviors.

PCIe mid-channel repeater with configuration parameters

Simulation options from one schematic with no manual conversions also give control over the PCIe design workflow. Most PCIe designers are already familiar with Seasim, the statistical simulator from the PCI-SIG, which guides PCIe compliance evaluation. Existing workflows required designers to jump from their preferred simulation tool to the Seasim environment. When choosing Seasim simulation mode in System Designer for PCIe®, ADS characterizes the channels using its S-parameter simulator and launches Seasim (user-installed separately) from ADS for analysis.

Seasim interface in System Designer for PCIe®

In addition to the S-parameter simulator and the Seasim interface, two other simulation methods are available for selection in System Designer for PCIe®:

  • A bit-by-bit simulator exercises sequences of bits specified in transmitters for durations specified in analysis settings, which speeds system analysis while keeping waveform information embedded in the results.
  • A statistical simulator uses ADS proprietary algorithms to analyze random and periodic jitter, duty-cycle distortion, and other effects to predict performance down to extremely low bit error rates (BER), as the sketch below illustrates.
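
To give a feel for the statistical approach, here is a minimal sketch of the generic Gaussian-tail BER estimate that statistical simulators build on – not ADS's proprietary algorithms, and with purely illustrative numbers:

```python
import math

# Generic statistical BER estimate: with Gaussian noise of std-dev sigma
# and a vertical eye opening A at the sampler, binary signaling gives
# BER = Q(A / (2 * sigma)). All numbers below are illustrative.
def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

eye_opening = 0.24   # volts, assumed eye opening at the sampling instant
sigma = 0.018        # volts, assumed RMS noise at the sampler

ber = q_function(eye_opening / (2 * sigma))
print(f"estimated BER: {ber:.2e}")   # ~1e-11, far below what bit-by-bit runs can resolve
```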

IBIS-AMI modeling and virtual compliance evaluation

IBIS-AMI behavioral modeling for SerDes transceivers captures analog characteristics and algorithmic functionality such as equalization, gain, and clock data recovery. Millions of bits through a link can be simulated using convolution and data flow approaches in a few minutes.
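
A minimal sketch of the convolution idea – not Keysight's implementation, and with an invented exponential pulse response standing in for a real characterized channel – shows how quickly bits can be pushed through a linear channel model:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Channel setup (all values illustrative): 16 samples per UI and a simple
# exponential pulse response standing in for an S-parameter-derived channel.
samples_per_ui = 16
n_bits = 100_000
t = np.arange(4 * samples_per_ui)
pulse = np.exp(-t / (0.5 * samples_per_ui))
pulse /= pulse.sum()

# Random NRZ symbol stream (+1/-1), upsampled to the simulation rate.
symbols = 2.0 * rng.integers(0, 2, n_bits) - 1.0
waveform = np.repeat(symbols, samples_per_ui)

# Convolving the stimulus with the pulse response yields the received
# waveform -- the core of bit-by-bit channel simulation.
received = np.convolve(waveform, pulse)[: waveform.size]

# Fold the waveform onto one UI and check the eye opening at mid-UI.
eye = received.reshape(n_bits, samples_per_ui)
center = eye[:, samples_per_ui // 2]
opening = center[symbols > 0].min() - center[symbols < 0].max()
print(f"vertical eye opening at center: {opening:.3f}")
```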

Usually, a designer must have intimate knowledge of digital signal processing details and C coding to create an AMI model. With the PCIe AMI Modeler in System Designer for PCIe®, AMI model generation has become a simple task in the wizard-driven AMI modeling workflow. This capability is essential for cutting-edge PCIe Gen6 designs with multi-level PAM4 signaling, where AMI models may not exist yet. Output files include .dll for Windows or .so for Linux.

PCIe AMI Modeler in System Designer for PCIe®

Finally, System Designer for PCIe® adds simulation-driven compliance. “Typically, PCIe compliance tests are done in the physical realm, using a detailed Method of Implementation in an oscilloscope application, testing performance against compliance metrics,” says Lee. “The problem is this is very late in the design cycle to discover for the first time if compliance tests pass or fail, and the risks are huge in complex designs.”

A guiding principle for Keysight EDA is using unified measurement science, proven in hardware verification, in simulation-driven tests. The software that powers Keysight test equipment also runs in the Keysight EDA design environment – same algorithms, same methodologies. System Designer for PCIe® carries that principle to PCIe compliance testing with a specialized PCI Compliance Probe that sets up the correct stimulus and makes the appropriate compliance measurements. Users can accurately and thoroughly gauge PCIe design compliance from early-stage designs before committing to hardware.

This smart PCIe design workflow slashes the design cycle while providing faster insight into performance and a much earlier look at compliance. Keysight EDA is offering a deeper dive into System Designer for PCIe® and its capabilities in a webinar – registration is now open.

WEBINAR: Simplify Design Verification and Compliance with Standards-Driven EDA Workflows

More details are also available online: W3651B System Designer for PCIe®


The Immensity of Software Development and the Challenges of Debugging (Part 1 of 4)
by Lauro Rizzatti on 07-15-2024 at 10:00 am

Part 1 of this 4-part series introduces the complexities of developing and bringing up the entire software stack on a System on Chip (SoC) or Multi-die system. It explores various approaches to deployment, highlighting their specific objectives and the unique challenges they address.

Introduction

As the saying goes, it’s tough to make predictions, especially about the future. Yet, among the fog of uncertainty, a rare prescient vision in the realm of technology stands out. In 2011, venture capital investor Marc Andreessen crafted an opinion piece for The Wall Street Journal titled “Why Software Is Eating The World.” Andreessen observed that the internet’s widespread adoption took roughly a decade to truly blossom, and predicted that software would follow a similar trajectory, revolutionizing the entire human experience within ten years.

His foresight proved remarkably accurate. In the decade following Andreessen’s article, software’s transformative power swept through established industries, leaving a lasting impact. From industrial and agricultural sectors to finance, medicine, entertainment, retail, healthcare, education, and even defense, software reshaped landscapes and disrupted traditional models. Those slow to adapt faced obsolescence. Indeed, software has been eating the world.

This rapid software expansion lies at the core of the challenges in developing and delivering fully validated software for modern SoC designs.

The Software Stack in a Modern SoC Design

In a modern System on Chip (SoC), the software is structured as a stack consisting of several layers, each serving specific purposes to ensure efficient operation and functionality:

1)  Bare Metal Software and Firmware:

  • Bare Metal Software: Specialized programs, loaded into memory upon startup, that run directly on the hardware without an underlying operating system (OS). This software interacts directly with the hardware components.
  • Firmware: Low-level software that initializes hardware components and provides an interface for higher-level software. It is critical for the initial boot process and hardware management.

2) Operating System (OS):

  • The OS is the core software layer that manages hardware resources and provides services to application software.

3) Middleware:

  • Middleware provides common services and capabilities to applications beyond those offered by the OS. It includes libraries and frameworks for communication, data management, and device management, as well as dedicated security components – such as secure boot, cryptographic libraries, and trusted execution environments (TEEs) – that protect against unauthorized access and tampering.

4) Drivers and HAL (Hardware Abstraction Layer):

  • Device Drivers: These are specific to each hardware component, enabling the OS and applications to interact with hardware peripherals like GPUs, USB controllers, and sensors.
  • HAL: Provides a uniform interface for hardware access, isolating the upper layers of the software stack from hardware-specific details. This abstraction allows for easier portability across different hardware platforms.

5) Application Layer:

  • This top layer consists of the end-user applications and services that perform the actual functions for which the SoC is designed. Applications might include user interfaces, data processing software, and custom applications tailored to specific tasks.

Figure 1 captures the structure of the most frequently used software stack in modern SoC design.

Figure 1: Example of Android Software Stack. Source: researchgate.net

The Software Development Landscape

The global software development community, estimated to comprise around 12 million professional developers, is responsible for an astounding amount of code production. Various sources suggest an annual output ranging between 100 and 120 billion lines of code. This vast quantity reflects the ever-growing demand for software in diverse industries and applications.

However, this impressive volume comes with a significant challenge: the inherent presence of errors in new code. Web sources report a current error rate for new software code before debugging ranging from 15 to 50 errors per 1,000 lines. At those rates, somewhere between roughly 1.5 and 6 billion errors must be identified and fixed each year before software reaches the market. (See Appendix.)
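
A quick back-of-envelope check, using only the figures quoted above (the ranges are the article's; the arithmetic is mine):

```python
# Annual defect estimate from the article's own inputs.
lines_per_year = (100e9, 120e9)     # annual lines of code produced
errors_per_kloc = (15, 50)          # pre-debug defects per 1,000 lines

low = lines_per_year[0] / 1000 * errors_per_kloc[0]
high = lines_per_year[1] / 1000 * errors_per_kloc[1]
print(f"defects per year: {low / 1e9:.1f}B to {high / 1e9:.1f}B")  # -> 1.5B to 6.0B
```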

It’s no surprise that design verification and validation consume a disproportionately large portion of the project schedule. Tracking and eliminating bugs is a monumental task, particularly when software debugging is added to the hardware debugging process. According to a 2022 analysis by IBS, the cost of software validation in semiconductor and system design is double that of hardware verification, even before accounting for the cost of developing end-user applications (see Figure 2).

Figure 2: Total cost of mainstream designs by process technology node, segmented by design stages. Source: IBS, July 2022

This disparity underscores the increasing complexity and critical importance of thorough software validation in modern SoC development.

SoC Software Verification and Validation Challenges

The multi-layered software stack driving today’s SoC designs cannot be effectively validated with a one-size-fits-all approach. This complex task demands a diverse set of tools and methods sharing a common foundation: executing vast numbers of verification cycles, even for debugging bare-metal software, the smallest software block.

Given the iterative nature of debugging, which involves running the same software tasks tens or hundreds of times, even basic tasks can quickly consume millions of cycles. The issue becomes more severe when booting operating systems and running large application-specific workloads, potentially requiring trillions of cycles.

1) Bare-metal Software and Drivers Verification

At the bottom of the stack, verifying bare-metal software and drivers requires the ability to track the execution and interaction of the software code with the underlying hardware. Access to processor registers is crucial for this task. Traditionally, this is achieved using a JTAG connection to the processor embedded in the design-under-test (DUT), available on a test board accommodating the SoC.

2) OS Booting

As the task moves up the stack, next comes booting the operating system. As with debugging drivers, it is essential to have visibility into the hardware. The demand now jumps to hundreds of billions of verification cycles.

3) Software Application Validation

At the top of the stack sits the debugging of application software workloads, which requires executing trillions of cycles.

These scenarios defeat traditional hardware-description-language (HDL) simulators, which fall short of meeting the demand. They run out of steam when processing designs or design blocks in the ballpark of one hundred million gates. A major processor firm reported that its leading-edge HDL simulator could only achieve clock rates of less than one hertz under such conditions, making HDL simulators impractical for real-world development cycles.
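
The arithmetic makes the point vividly. The sub-1 Hz simulator figure comes from the paragraph above; the emulator and FPGA-prototype rates are my assumed order-of-magnitude ballparks, not vendor numbers:

```python
# Rough wall-clock time for the cycle counts discussed in this section.
SECONDS_PER_YEAR = 3600 * 24 * 365

engines_hz = {
    "HDL simulator (~1 Hz on 100M-gate designs)": 1,
    "emulator (assumed ~1 MHz)": 1_000_000,
    "FPGA prototype (assumed ~10 MHz)": 10_000_000,
}
workloads = {"OS boot (~100B cycles)": 100e9, "application workload (~1T cycles)": 1e12}

for engine, hz in engines_hz.items():
    for load, cycles in workloads.items():
        years = cycles / hz / SECONDS_PER_YEAR
        label = f"{years:,.0f} years" if years >= 1 else f"{years * 365 * 24:.0f} hours"
        print(f"{load} on {engine}: {label}")
```

At one hertz, even an OS boot would take millennia of wall-clock time, which is why the discussion below turns to higher-throughput engines.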

The alternative is to adopt either hardware-assisted verification (HAV) platforms or virtual prototypes that operate at a higher level of abstraction than RTL.

Virtual prototypes can provide an early start before RTL reaches maturity. This adoption drove the shift-left verification methodology. See Part 3 of this series.

Once RTL is stable enough and the necessary hardware blocks or sub-systems for software development are available, HAV engines tackle the challenge by delivering the necessary throughput to effectively verify OS and software workloads.

Hardware-assisted Verification as the Foundation of SoC Verification and Validation

HAV platforms encompass both emulators and FPGA prototypes, each serving distinct purposes. Emulators are generally employed for software bring-up of existing software stacks or minor modifications of software for new SoC architectures, such as driver adaptations. In contrast, FPGA prototypes, due to their substantially higher performance—roughly 10 times faster than emulators—are advantageous for software development requiring higher fidelity hardware models at increased speeds. To remain cost-effective, FPGA prototypes often involve partial SoC prototyping, allowing for the replication of prototypes across entire teams.

Working in parallel, hardware designers and software developers can significantly accelerate the development process. Emulation allows hardware teams to verify that bare-metal software, firmware, and OS programming interact correctly with the underlying hardware. FPGA prototypes enable software teams to quickly validate application software workloads when hardware design visibility for debugging is not critical. Increasingly, customers are extending the portion of the design being prototyped to the full design as software applications require interaction with many parts of the design. The scalability of prototypes into realms previously reserved for emulators is now essential.

Hardware engineers can realize the benefits of running software on hardware emulation, too. When actual application software is run on hardware for the first time, it almost always exposes hardware bugs missed by the most thorough verification methodologies. Running software early exposes these bugs when they can be addressed easily and inexpensively.

This parallel workflow can lead to a more efficient and streamlined development process, reducing overall time-to-market and improving product quality.

Conclusion

Inadequately tested hardware designs inevitably lead to respins, which increase design costs, delay the progression from netlist to layout, and ultimately push back time-to-market targets, severely impacting revenue streams.

Even more dramatic consequences arise from late-stage testing of embedded software, which can result in missed market windows and significant financial losses.

Also Read:

LIVE WEBINAR Maximizing SoC Energy Efficiency: The Role of Realistic Workloads and Massively Parallel Power Analysis

Synopsys’ Strategic Advancement with PCIe 7.0: Early Access and Complete Solution for AI and Data Center Infrastructure

Synopsys-AMD Webinar: Advancing 3DIC Design Through Next-Generation Solutions


Codasip Makes it Easier and Safer to Design Custom RISC-V Processors #61DAC
by Mike Gianfagna on 07-15-2024 at 6:00 am

RISC-V continued to be a significant force at #61DAC. There were many events that focused on its application in a wide variety of markets. As anyone who has used an embedded processor knows, the trick is how to be competitive. Using the same core as everyone else and differentiating in software is a strategy that tends to run out of gas quickly. There is simply not enough capability to differentiate in software alone. And so, customizing the processor core becomes the next step. The open-source ISA offered by RISC-V makes it a popular choice for customization. Achieving this goal is easier said than done, however. There are many moving parts to manage, and many pitfalls to be avoided. Codasip has substantial expertise in this area and a newly announced, safer and more robust approach to the problem was on display at DAC.  Let’s examine how Codasip makes it easier and safer to design custom RISC-V processors.

Codasip Company Mission

Codasip is a processor solutions company which uniquely helps developers differentiate products. It was founded in 2014, and a year later offered the first commercial RISC-V core and co-founded RISC-V International. The company’s philosophy includes the belief that processor customization is something the end user wants to control. This is the most potent way to differentiate in the market.

Achieving that result requires a holistic approach. This is accomplished through the combination of the open RISC-V ISA, Codasip Studio processor design automation, and high-quality processor IP. Codasip’s custom compute enables its customers to take control of their destiny.

What’s New – A Conversation from the Show Floor

I had the opportunity to meet with two senior executives at the Codasip DAC booth – Brett Cline, Chief Commercial Officer and Zdeněk Přikryl, Chief Technology Officer. I’ve known Brett for a long time, dating back to his days at Forte Design Systems. These two gentlemen cover the complete spectrum of all things at Codasip, so we had a far-reaching and enjoyable discussion. Along the way, we may have uncovered a way to solve most of the world’s problems, but I’ll save that for another post. Let’s focus on how Codasip makes it easier and safer to design custom RISC-V processors.

We first discussed a new version of Codasip Studio called Studio Fusion, which has a capability called Custom Bounded Instructions, or CBI. Using CBI, customers can develop any type of customization needed for their intended market, but by staying within the guidelines of CBI they can be assured the changes will not cause processor exceptions. Essentially, you can’t “break” the processor if you follow CBI.

Anyone who has developed custom instructions knows this is not the case in general and great care must be taken not to introduce subtle, hard-to-find bugs. There is substantial re-verification required. All that goes away with CBI.

We also discussed how limiting this new approach could be. It turns out the answer is “not much”. Significant customization can be accomplished with much lower development time and risk. To drive home that point, Codasip was running a live demo in its booth using a customized processor that was implemented with Codasip Studio Fusion and CBI.

The application applied AI algorithms to analyze the sound of a running cooling fan, identifying anomalies that indicate potential problems and predicting the fan’s time to failure. If the application were cooling critical electronics or automotive systems, the benefits are clear. After the algorithm was implemented and verified, a custom processor was generated with 40 unique custom instructions to enhance its performance.

Speed and energy efficiency showed dramatic improvements, with power reduction in the neighborhood of 80 percent. That makes the application much easier to implement in a small, low-power form factor. I should also mention that doing a live demo of custom hardware at a trade show requires a lot of confidence – my experience is that failures always find a way to show up while folks are watching. This made the demo more impressive in my eyes.

It was also pointed out that Codasip generates all the infrastructure to use the new custom processor, including the compiler and debugger. You get everything required – no third-party tools or support needed. This means no code changes are required to use the custom processor; the compiler takes care of exploiting the new features. Here, we discussed another new feature that has been added: the compiler is now more micro-architecturally aware. It has deeper knowledge of what’s going on in the custom processor and so can perform more sophisticated and higher-impact optimization.

After my discussion with Brett and Zdeněk it became clear how much automation Codasip is delivering to the RISC-V customization process. You truly are limited only by your imagination.

To Learn More

You can learn more about Codasip Studio Fusion here. You can also learn more about Codasip on SemiWiki here. Check out the video of the live demo from the Codasip booth and see the improvements a custom processor can deliver here. And that’s how Codasip makes it easier and safer to design custom RISC-V processors at #61DAC.


Podcast EP235: Tinier than TinyML: pushing the flexible boundaries of AI – Pragmatic Semiconductor
by Daniel Nenni on 07-12-2024 at 10:00 am

Dan is joined by Dr. Richard Price, CTO and Dr. Konstantinos Iordanou, a senior ASIC designer at Pragmatic Semiconductor.

Richard has over 25 years’ experience in the development and commercialisation of a wide range of new technologies based on novel processes, materials and flexible electronics. Richard is also a non-executive director at the Henry Royce Institute – the UK’s National Institute for advanced materials research. Konstantinos Iordanou is working on pioneering projects that push the limits of flexible IC technology. He holds a Ph.D. in Computer Science from The University of Manchester and specialises in computer microarchitecture, digital design, hardware accelerators and heterogeneous systems.

Dan explores the unique and disruptive technology of Pragmatic Semiconductor, a UK-based leader in flexible integrated circuit technology and semiconductor manufacturing. The company uses thin-film semiconductors to create ultra-thin, flexible integrated circuits, known as FlexICs, that are significantly lower cost and faster to produce than silicon chips – we’re talking days, rather than months, to produce.

Richard and Konstantinos discuss their groundbreaking work on tiny classifiers, in which they created the world’s tiniest ML inference hardware on a flexible substrate. Uniquely, an evolutionary algorithm is used to automatically generate the classification hardware. The resulting chip is extremely small in area – fewer than 300 logic gates.
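
To illustrate the flavor of the approach, here is a toy sketch of evolving a tiny combinational circuit – my invented encoding (a fixed list of NAND gates) and a toy parity task, far simpler than the published tiny-classifier method:

```python
import random

random.seed(0)
N_IN, N_GATES, POP, GENS = 4, 12, 60, 300

def evaluate(genome):
    """Score a genome: how many 4-bit inputs it classifies by parity."""
    correct = 0
    for x in range(2 ** N_IN):
        signals = [(x >> i) & 1 for i in range(N_IN)]
        for a, b in genome:                    # each gate: NAND of two earlier signals
            signals.append(1 - (signals[a] & signals[b]))
        correct += signals[-1] == bin(x).count("1") % 2
    return correct

def random_genome():
    return [(random.randrange(N_IN + i), random.randrange(N_IN + i)) for i in range(N_GATES)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(N_IN + i), random.randrange(N_IN + i))
    return g

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=evaluate, reverse=True)       # keep the fittest circuits
    elite = pop[: POP // 4]
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]

best = max(pop, key=evaluate)
print(f"best accuracy: {evaluate(best)}/{2 ** N_IN} inputs")
```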

When implemented on a flexible substrate, such as a FlexIC, this classifier occupies up to 75 times less area, consumes up to 75 times less power and has six times better yield than the most hardware-efficient ML baseline.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Homepage


SEMICON West- Jubilant huge crowds- HBM & AI everywhere – CHIPS Act & IMEC
by Robert Maire on 07-12-2024 at 6:00 am

– We just finished the most happy SEMICON West in a long time
– IMEC stole the show- HBM has more impact than size dictates
– Has Samsung lost its memory mojo? Is SK the new leader?
– AI brings new tech issues with it – TSMC is still industry King

Report from SEMICON West

The crowds at Semicon West were both big and jubilant… more so than we have seen in a long time (and we have been attending a long time, over 3 decades). It was a complete turnaround from the 2-3 year downturn we have been in. Despite the fact that the majority of the memory industry has still not fully recovered and foundry logic is most healthy only at the bleeding edge, the euphoria generated by AI and HBM has swamped the entire industry. The icing on the AI/HBM cake is the buzz generated by the CHIPS Act and the fact that the average Joe on the street has probably heard about chips (or semiconductors) by now – if not from all the news about CHIPS Act money, then from the minute-by-minute stock market buzz about NVDA.

Most of our non tech friends have no clue what NVDA does and a small minority know they make chips that have something to do with AI.

But any publicity is good publicity.

The best part of Semicon was not at Semicon

Monday afternoon, prior to the Tuesday start of Semicon, IMEC, the European semiconductor R&D consortium, put on a series of presentations by a number of speakers talking about various technology issues and advancements in the industry.

This year’s discussion was especially good, covering CMOS 2.0 and the many changes the industry is currently undergoing at the same time.

AI & HBM implementations drive many different issues and technologies in the industry, more so than prior, singular technology transitions.

There has been a lot of discussion, but still not enough, about the power requirements of AI devices. The latest Nvidia device runs at 700 watts, which means at 0.7 volts it draws 1000 amps of electricity – easily enough to weld thick steel or start the largest diesel engine, let alone what the power demands will do to electric vehicles that require AI chips for autonomous driving. How do we package, test & supply power to these electrical beasts? What does it do to data centers and the grid?
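
The current figure is just Ohm's-law arithmetic on the numbers quoted above:

```python
# current = power / voltage, using the article's figures
power_w, core_v = 700.0, 0.7
print(f"{power_w / core_v:.0f} A")   # -> 1000 A
```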

AI and HBM are also about networking and moving data as fast as possible (as in Large Language Models). He who can move the most data – in, out and about – wins the race. This requires new connectivity, fast connectivity, parallel connectivity, etc.

Packaging will play an increasing role in both the power and connectivity that drive AI and HBM. In our view, packaging (the back end of the semiconductor industry) has grown to (almost) equal importance to the front end (wafer fabrication).

My last point about IMEC is that it painfully points out that the US is woefully behind in R&D consortia that are driven by cooperation for the greater good of the industry. In the US we no longer have a single, nationwide, R&D organization. We have a number of companies out for their own benefit in competition with one another. While speakers at Semicon talked about “together” we need to do it for real if we are serious about re-shoring and re-capturing prior US greatness in semiconductors.

Has Samsung lost its MoJo to SK Hynix?

Samsung has long been the undisputed leader in semiconductor memory, with all others far behind.

It is interesting to note that SK Hynix has clearly taken the lead in the small but obviously super-critical HBM segment. This should be both an embarrassment and a wake-up call for Samsung. It is further suggested that Micron may be number two in HBM after SK Hynix, which should make alarm bells go off inside Samsung.

While some may dismiss this as a non-issue since HBM is only about 5% of the industry (and growing quickly), it is by far the most profitable segment, with the highest margins, while pricing in the greater part of the memory industry has still not fully recovered.

We would expect both heads to roll inside Samsung as well as spending to ramp to fix this embarrassment.

Meanwhile, Samsung is not lighting the world on fire with its lackluster foundry offerings and hollow bravado. TSMC remains far and away the undisputed King of foundry, as evidenced by its recent financial numbers – obviously driven in large part by its enablement of Nvidia, for which it deserves to be richly rewarded.

If anything, this makes us feel like Samsung has also fallen further behind TSMC in foundry as well… so Samsung is 0 for 2 in semiconductors.

CHIPS Act excitement

We heard a very rousing keynote address from the undersecretary overseeing the CHIPS Act, which sounded an awful lot more like an advertisement than a list of actual accomplishments. The CHIPS Act is clearly a motivator and influencer, but as we have previously mentioned, writing checks is the easiest part; training people, getting fabs to work, and developing technology is a lot harder.

We do think that the CHIPS Act can be the catalyst or spark that re-ignites the US semiconductor industry.

China is still the 800 pound gorilla of WFE spend in the industry

China is still outspending virtually everyone else in the industry, and many times that of the US spend. If the big equipment companies lost the 40%-plus of their revenues that is China, they would be sucking major wind. So they better keep spending the tens of millions of dollars on K Street lobbyists to keep the shipments up to enable China, and move operations and jobs to Asia.

We do think that China as an overall percentage of spend will start to decrease but primarily because other countries will start to increase their spend as we slowly make our way out of the downturn.

No major product announcements at Semicon

Tokyo Electron did finally publicly release their Ion Beam sidewall etch Epion product, which has been in the market already along with AMAT’s competing Sculpta product. Both products have taken some POR (process of record) positions at customers. We understand that the TEL product may have some cost advantages.

Semicon West Shrinkage

Actual semiconductor tools have long ago left the show floor. Major equipment makers have also exited stage left and have even reduced their off floor, in nearby hotel, presence. This has reduced the actual show to a lot of small booths of bit players selling bits and pieces like O rings and rubber gloves. Foreign representation seems to be on the rise with Korea, Germany & now Malaysia pavilions.

SEMI did announce a new SEMICON “Heartland” to take place in Indiana, announced with a guest appearance by the governor of Indiana. A second Semicon West is scheduled in October in Arizona, likely to pay homage to that state’s rising importance in the semiconductor industry as home to the newest fabs in the US by Intel & TSMC.

The Stocks

… have been on fire for a long streak now, reaching PE ratios not seen in forever. Whether earnings can ever catch up to rising valuations remains to be seen. Everybody is asking: is it time to take profits on NVDA? We would expect occasional downdrafts of profit taking, but the overall mood and momentum is so positive it’s hard to imagine the positive tone changing much as business continues to improve.

There is a long list of supporting and secondary plays on both AI & HBM that are yet to be discovered by the general public, and probably a bunch of small caps that will see the trickle-down effects of a tide that is rising exceptionally high. This will likely further support the ongoing tidal wave of valuation…

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC- Past bottom of cycle- up from here- early positive signs-packaging upside

LRCX- Mediocre, flattish, long, U shaped bottom- No recovery in sight yet-2025?

ASML- Soft revenues & Orders – But…China 49% – Memory Improving


Who Are the Next Anchor Tenants at DAC? #61DAC
by Mike Gianfagna on 07-11-2024 at 10:00 am

#61DAC is evolving. The big get bigger and ultimately focus on other venues for customer outreach and branding. This is a normal evolution in any industry. For EDA, it was noticed by many that Cadence and Synopsys have downsized their booths at DAC. Everyone knows CDNLive and SNUG are very successful events for these companies, and so this change shouldn’t come as a surprise. There may be other examples of this trend as the industry matures. The interesting part to focus on is who will be the next wave of anchor tenants at DAC. There are clearly some new entrants to DAC that are gaining momentum fast. The how and why of this phenomenon is interesting. I had the opportunity to speak with an executive from one such company at DAC. The conversation was both enlightening and inspirational. Let’s examine who are the next anchor tenants at DAC.

Altair Company Profile

Before getting into the profile of Altair, an observation about the focus of #61DAC is relevant. The conference tagline is now The Chips to Systems Conference. This is not a marketing slogan; it’s a statement about where the electronics industry is going. That is, chips are becoming the critical enabler for a growing class of systems.

So, the question to ask as we look for the next anchor tenants at DAC is this – which companies have a broad enough footprint to enable systems with electronics? Altair is one such company and is one to watch as the next crop of DAC anchor tenants move in. You can learn about the breadth and focus of Altair on SemiWiki here. A short excerpt from the Altair website will explain a lot as well:

Changing Tomorrow, Together

When data science meets rocket science, incredible things happen. The innovation our world-changing technology enables may feel like magic to users, but it’s the time-tested result of the rigorous application of science, math, and Altair.

Our comprehensive, open-architecture simulation, artificial intelligence (AI), high-performance computing (HPC), and data analytics solutions empower organizations to build better, more efficient, more sustainable products and processes that will usher in the breakthroughs of tomorrow’s world. Welcome to the cutting edge of computational intelligence – no magic necessary.

In my opinion, this is the stuff a DAC anchor tenant should be made of.

Altair – the Backstory

Sarmad Khemmoro

I had the good fortune to spend some time with Sarmad Khemmoro at DAC. Many thanks to Dan Nenni for setting it up. Sarmad is currently the senior VP of Product & Strategy – Electronics Design & Simulation at Altair. He clearly sees the opportunity in the global electronics market in general and at DAC in particular. He has a storied career with senior technical and strategy leadership roles at companies such as Mentor, Innoveda and Viewlogic. He knows the technology behind chip design and the ecosystem that uses it.

He’s a natural fit to help Altair take a growing role in the world of electronic systems. He shared some valuable information during our meeting.

He began with a discussion of convergence. ECAD, MCAD, PLM and many other disciplines are now coming together, either through acquisition or partnership, to create the required technology stack to realize tomorrow’s world-changing products. For Altair, three focus areas are silicon debug; 3DIC multi-physics, from chip to PCB to systems; and job scheduling and license management. A broad footprint that is growing.

Altair has grown and will continue to grow through acquisitions. The company seems to have cracked the code for that process. It turns out many of the CEOs of acquired companies are still with Altair. That speaks volumes about the quality of the workplace and the commitment it has to its employees. We also talked about Altair’s customers – the list includes many household names.  Altair is also very strong in the automotive industry. This will be a strategic advantage as that market continues to consume more semiconductors. Sarmad is located in Detroit, so he’s up close and personal on this front.

We also discussed AI and digital twins. Altair has capabilities in both areas and Sarmad is quite familiar with the company’s strategy. The list of industries supported by Altair is quite extensive as shown in the figure below. This is a company with substantial reach.

Primary Industries Supported

Sarmad also discussed Altair’s unique and patented licensing model. The company basically re-wrote the rule book regarding tool licensing. The process is driven by something called Altair Units. Purchasing units gives users full access to all Altair software tools whenever they need them, and they can determine when, where, and how they want to use different tools without needing to worry if they’re eligible for access.

This approach removes a lot of the uncertainty and overhead associated with specific tool licensing. Altair has a long list of partners and through the Altair One™ Marketplace partner software can also be accessed with Altair Units, simplifying even more of the process.

Post-DAC Update – the Momentum Continues

Altair is clearly on the move. Its acquisition machine is in high gear with a recent announcement signaling its intention to acquire Metrics Design Automation, further expanding the company’s footprint in EDA. Metrics is a Canadian company that has developed a game-changing simulation as a service (SaaS) business model for semiconductor electronic functional simulation and design verification.

Combining the Metrics simulator with Altair’s silicon debug tools will result in a world-class, advanced simulation environment with superior simulation and debug capabilities. Note that Metrics is led by Joe Costello, who is something of a folk hero in EDA.

Tight relationships in the semiconductor ecosystem are a key attribute of any DAC anchor tenant. There was also a recent announcement that Altair has joined the Samsung Advanced Foundry Ecosystem, known as SAFE™. Altair and Samsung Electronics will combine Altair’s comprehensive EDA technology with Samsung Foundry’s manufacturing capabilities to establish a more innovative, more efficient semiconductor design and production process.

To Learn More

My conversation with Sarmad left an impact. The breadth of Altair’s tools is substantial, and the company has a vision to grow in key markets to further dominate the landscape. You can explore Altair’s capabilities for semiconductor design and EDA here. If you want to take the grand tour of all industries supported, you can do that here.

You can also see the full announcement about the Metrics acquisition here and the full Samsung SAFE announcement here.

So, the next time you wonder who are the next anchor tenants at DAC, think Altair.


AI Booming is Fueling Interface IP 17% YoY Growth
by Eric Esteve on 07-11-2024 at 6:00 am

The AI explosion has clearly been driving the semiconductor industry since 2020. AI processing, based on GPUs, needs to be as powerful as possible, but a system reaches its optimum only if it can rely on top interconnects. The various sub-system parts (memory, processor, co-processor, network) need to be connected with interface links offering ever more bandwidth and lower latency: DDR5 or HBM memory controllers, PCIe and CXL, 224G SerDes and so on. When you design a supercomputer, raw processing power is important, but optimizing memory access, latency, and network speed is what allows you to succeed. It’s the same with AI, and that’s why interconnect protocols are becoming key.

In 2023, the semiconductor market declined, but the interface IP segment grew by 17%. Our forecast shows stronger growth for the years 2024 to 2028, comparable to the 20% growth seen earlier in the 2020s. AI is driving the semiconductor industry, and interconnect protocol efficiency is fueling AI performance. A virtuous cycle!

The interface IP category has moved from an 18% share of all IP categories in 2017 to 28% in 2023. We think this trend will amplify during the decade, with interface IP growing to 38% of the total by 2028 (to the detriment of processor IP, which will pass from 47% in 2023 to 41% in 2028).

As usual, IPnest has made the five-year forecast (2024-2028) by protocol and computed the CAGR by protocol (picture below). As you can see in the picture, most of the growth is expected to come from three categories – PCIe, memory controller (DDR) and Ethernet & D2D – exhibiting five-year CAGRs of 19%, 23% and 22%, respectively.

This should not be surprising, as all these protocols are linked with data-centric applications! The Top 5 protocols weighed in at $1820 million in 2023; the value forecast for 2028 is $4390 million, a CAGR of 19%.
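
The quoted CAGR checks out against the article's own endpoints:

```python
# CAGR from the Top 5 protocol figures above ($M).
start, end, years = 1820.0, 4390.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # -> CAGR: 19.3%
```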

This forecast is based on the amazing growth of data-centric applications – AI, in short. Looking at TSMC’s revenue split by platform in 2023, HPC is clearly the driver. This started in 2020, and we expect the trend to continue through 2028, at least.

Conclusion

Synopsys has built a strong position in every protocol – and in every application – enjoying more than 55% market share, by making strategic acquisitions since the early 2000s and by offering integrated solutions, PHY and controller. We still don’t see any competitor in a position to challenge the leader. The next two are Cadence and Alphawave, each with market share around 12%, far from the leader.

In 2024, we think that a major strategy change will happen during the decade. IP vendors focused on high-end IP architecture will try to develop a multi-product strategy and market ASICs, ASSPs and chiplets derived from leading IP (PCIe, CXL, memory controller, SerDes…). Some have already started, like Credo, Rambus or Alphawave. Credo and Rambus already see significant revenue results on ASSPs, but we will have to wait until 2025, at best, to see measurable results on chiplets.

This is the 16th version of the survey, started in 2009 when the interface IP market was $250 million ($1980 million in 2023), and we can affirm that the five-year forecasts have stayed within a +/- 5% error margin!

IPnest predicts in 2024 that the interface IP category will be in the $4750 million range in 2028 (+/- $250 million), and this forecast is realistic.

If you’re interested in this “Interface IP Survey” released in July 2024, just contact me:

eric.esteve@ip-nest.com .

Eric Esteve from IPnest

Also Read:

Semi Market Decreased by 8% in 2023… When Design IP Sales Grew by 6%!

Interface IP in 2022: 22% YoY growth still data-centric driven

Design IP Sales Grew 20.2% in 2022 after 19.4% in 2021 and 16.7% in 2020!


Will Semiconductor earnings live up to the Investor hype?
by Claus Aasholm on 07-10-2024 at 10:00 am

This post gives the state of the semiconductor industry heading into earnings season, before the results are revealed, based on the information available.

The first Q2 swallows
A few companies with quarters not aligned to calendar quarters have reported. Nvidia was slightly ahead of expectations, and the stock price made it the world’s most valuable company for periods of June. All of it is driven by data centre and H100 AI sales.

Broadcom reported disappointing semiconductor revenue, only saved by AI Network and Accelerator sales to Meta and Google. Marvell painted a similar picture with everything down except the data centre business. This is not a good sign for the broader earnings season coming up. (Broadcom result)

Lastly, Micron showed 17% growth, mainly due to memory price increases, and only the storage business was growing in bits sold. Even the compute business was flat in bits sold, indicating that Micron is not getting much action from Nvidia. (Micron result)

The closure of Q1
The total revenue of Semiconductor companies was flat in Q1-24 compared to the prior quarter, but the overall growth compared to Q1-23 was quite strong. 29% growth signals the industry is well into the cyclical recovery period (Long-term growth is currently at 8%).

If Nvidia’s strong growth is excluded, the growth falls to under 10% or close to the long-term level.

The exclusion of Nvidia revenue makes Foundry revenue growth very similar to the growth of Semiconductor companies, highlighting that Nvidia revenue is mostly profit, and only 15% makes a mark on Foundry revenue.

The four growth curves represent the semiconductor time machine; while imperfect, they allow a peek into the future of the Semiconductor Companies.

With zero inventory movements, the time machine works like this:

The revenue of Tools, Materials, & Foundries is a chain of events predicting the revenue of semiconductor companies. While it can be used to predict the individual results of some of the largest Semiconductor companies, it works better as an overall indicator of the industry.

The negative growth of materials and the drop in foundry revenue do not suggest a strong recovery in the Q2 results, and the tools revenue is not a solid longer-term indication of revenue expansion.

Mean revenue results
With Nvidia’s strong performance clouding general industry insights, it is worth looking at a Box and Whiskers plot based on mean values.

This is a way of investigating industry growth without the outsized impact of the outliers. Here, it becomes obvious that not only is Nvidia driving the overall growth but also the Korean memory companies led by SK Hynix, which is currently winning HBM business at Nvidia.

The mean growth for semiconductor companies compared to Q1-23 is 0.2%, indicating that the AI pocket of growth is the only action in the Semiconductor Industry in Q1.

Median growth for tool companies is positively impacted by the good performance of Chinese tool companies.

Revenue growth by Manufacturing Model
We divide Semiconductor companies into three different categories:

1) Integrated Device Manufacturers: Traditional model with fabs.

2) Fabless Semiconductor Companies: Companies exclusively using foundries

3) Mixed Manufacturing Model: Analog and power fabs with high-end digital outsourced to foundries.

The relative growth for Fabless is strong, but the impact of Nvidia accounts for most of the development. Without Nvidia, the result is 4%. The IDMs are lifted by the increase in memory pricing rather than bit growth. The mixed model companies have seen significant declines over the last two quarters.

The Inventory Situation
The inventory position for different areas of the supply chain can reveal how much of a surprise the current revenue level represents. If revenue is unfolding in line with the quarterly manufacturing plan, you would expect to see a decrease in inventory as companies try to optimise it. The exception is if companies are running on low inventory, which is not the case in the current market environment (with notable exceptions for Nvidia and the company’s supply chain).

The chart shows the inventory days according to the supply chain position. As foundries and semiconductor companies have been depleting inventory compared to Q1-23, the materials companies were still struggling with the last pile-up collision.

The Q1-24 increase in inventory is driven by lower-than-expected demand from the end markets, which slams through the supply chain. This will likely continue into Q2-24 as neither foundries nor semiconductor companies invest in materials to support a potential Q2-24 revenue increase.

World Semiconductor Trade Statistics (WSTS)
WSTS just released their Semiconductor trade statistics for May, which showed another monthly increase. While this should be a good signal, there are issues with how WSTS accounts for semiconductor revenue.

WSTS only gets monthly reports from its members. The reporting is screened by a third-party accountant who shields the identity of the reporting company, so WSTS does not know who reports what, only what products were sold. As many important companies are not members, WSTS has to guess their revenue numbers by month. This problem is growing with the revenue of Nvidia, which is not a member of WSTS and now accounts for more than $8.6B/month, or more than 17% of total WSTS revenue. A year ago, it was $2.4B/month. In addition, neither Intel, AMD, nor Broadcom are members of WSTS.
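
The scale of the estimation problem falls out of those two figures (the division is mine; the inputs are quoted above):

```python
# Implied WSTS monthly total from the Nvidia figures above.
nvidia_monthly_b = 8.6    # $B/month
nvidia_share = 0.17       # "more than 17% of total WSTS revenue"
print(f"implied WSTS total: ~${nvidia_monthly_b / nvidia_share:.0f}B/month")  # -> ~$51B/month
```

In other words, roughly a sixth of the reported market is now a guess about a single non-member.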

This makes WSTS numbers very unpredictable and not very useful for making predictions anymore.

TSMC update
As TSMC reports monthly revenue, it is possible to see Q2 revenue already. While it is a TSMC Record, the quarter is slightly above Q4-22 and Q4-23.

TSMC’s strong quarter suggests an uptick in market activity. It is hard to judge if this is broad-based or still AI-centric. Apart from Nvidia, TSMC will manufacture AI GPUs for Intel and AMD this quarter. This could signal that AMD and Intel are expecting meaningful AI orders. Whether this materialises is another matter entirely. Also, TSMC is winning orders from Samsung’s foundry business, which is struggling to get good yields on leading-edge nodes.

Semiconductor Operating Profits
The operating profits for Semiconductor companies compared to Q1-23 look incredibly good, with over 300% growth, while the rest of the supply chain has meager results.

As the Q1-23 view is taken from the Semiconductor cycle minimum, it involves memory companies starting negative and ending positive, which does not tell the full story. Turning the dial back to Q1-22 gives a significantly different view, where all of the supply chain operating profit growth is under water.

It is also worth noting that Nvidia is now dominating the total operating profit of the Semiconductor companies, skewing the graphs dramatically.

While we still wait for most Semiconductor companies to publish the Q2-24 result, the division between Nvidia and the rest of the industry is clear. In Q1, Nvidia accounted for more than half the semiconductor operating profit. This is likely to be the case in Q2 also.

The Stock market perspective
While we do not try and predict share prices, we do not mind comparing business development with increases in share prices.

We understand that revenue is not the only important element in a company valuation, but revenue growth is incredibly important for semiconductor companies. Without revenue growth, it is difficult to make meaningful gains in free cash flow, which matters more in valuations.

We use the Philadelphia Semiconductor Index (SOXX) as a good proxy for the collective share price of semiconductors. As can be seen in the graph below, the current share gains are not justified by a similar gain in revenue growth.

From an operating profit perspective, the increase in share price looks more justified, though it should be noted that Nvidia is driving both.

Adding a comparison from Q1-22 gives a different view, where none of the supply chain sectors have returned to an operating profit at the level of Q1-22.

Conclusion
While there is a lot of semiconductor optimism before the current earnings season, there is not a lot of evidence of significant revenue growth or inventory depletion that indicates a general upturn. The optimism surrounding the WSTS numbers does not point to a general upturn either, as they are dominated by Nvidia’s hypergrowth and the increasing revenue of the memory companies due to price increases. Memory volume is not increasing.

TSMC will be reporting healthy numbers but not anything that goes through the roof. The good result will be dominated by supplies of AI products for Nvidia, Intel, AMD and Broadcom. It will be interesting to see if the Semiconductor companies can turn these products into revenue. We will have a special focus on Intel, as the company will need to show results soon.

If you are an investor or another stakeholder in the Semiconductor Industry, you can gain insights from our updates as the Semiconductor companies report Q2 results.

Also Read:

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth

Electronics Turns Positive


Production AI is Taking Off But Not Where You Think
by Bernard Murphy on 07-10-2024 at 6:00 am

AI for revolutionary business applications grabs all the headlines, but real near-term growth is already happening in consumer devices and in IoT. For good reason. These applications may be less eye-catching but are eminently practical: background noise cancellation in earbuds and hearing aids, keyword and command ID in voice control, face-ID in vision, predictive maintenance, and health and fitness sensing. None of these require superhuman intelligence or revolutions in the way we work and live, yet they deliver meaningful productivity/ease-of-use improvements. At the same time, they must be designed for milliwatt-level power and must be attractive to budget-conscious consumers and enterprises aiming to scale. Product makers in this space are already actively building and selling products for a wide range of applications and now have a common interest group (not yet a standards body) in the tinyML Foundation.

Requirements and Opportunity

Activity around tinyML is clear, but it’s worth stressing that the tinyML group isn’t (yet) setting hard boundaries on how a product qualifies to be in the group. However, per Elia Shenberger (Sr. Director Biz Dev, Sensors and Audio at CEVA), one common factor is power: less than a watt for the complete device, and milliwatts for the ML function. Another common factor is ML performance, up to hundreds of gigaops per second.

These guidelines constrain networks to small ML models running on battery-powered devices. Transformers/GenAI are not in scope (though see the end of this blog). Common uses will be sensor data analytics for remote deployments with infrequent maintenance, and always-on functions such as voice and anomalous sound detection or visual wake triggers. As examples of active growth, Raspberry Pi (with AI/ML) is already proving very popular in industrial applications, and ST sees tinyML as the biggest driver of the MCU market within the next 10 years.

According to ABI Research, 4 billion inference chips for tinyML devices are expected to ship annually by 2028, with a CAGR of 32%. ABI also anticipates that by 2030, 75% of inference-based shipments will run on dedicated tinyML hardware rather than general-purpose MCUs.

A major factor in making this happen will almost certainly be cost, both hardware and software. Today a common implementation depends on an MCU for control and feature extraction (signal processing), followed by an NPU or accelerator to run the ML model. This approach incurs a double royalty overhead and will certainly result in a larger chip area/cost. It will also promote greater complexity in managing software, AI models, and data traffic between these cores. In contrast, single-core solutions with out-of-the-box APIs, libraries, and ported models based on open model zoos are going to look increasingly appealing.

Ceva-NeuPro-Nano

Ceva is already established in the embedded inference space with their NeuPro-M family of products. Recently they extended this family with NeuPro-Nano to address tinyML profiles. They claim some impressive stats versus alternative solutions: 10X higher performance, 45% of the die area, 80% lower on-chip memory demand, and 3X lower energy consumption.

The architecture allows them to run control code, feature extraction, and the AI model all within the same core. That reduces the burden on the MCU, allowing a builder to go with a smaller MCU or even dispense with that core altogether, depending on the application. To understand why, consider two common tinyML applications: wake-word/command extraction from voice, and environmental noise cancellation. In the first, feature extraction consumes 36% of processing time, with the balance in the AI model. In the second, feature extraction consumes 68%. Clearly, moving both stages into a common core with dedicated signal processing plus an ML engine is going to outperform a platform splitting feature extraction and the AI model between two cores.
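A quick Amdahl’s-law sketch makes the point. If a bolt-on NPU accelerates only the AI-model portion of the workload while feature extraction stays on the slower MCU, the unaccelerated share caps the overall gain. The 36% and 68% feature-extraction shares come from the figures above; the 10X model speedup is an assumed, purely illustrative number:

```python
# Illustrative Amdahl's-law sketch: an NPU bolted onto an MCU speeds up
# only the AI-model portion; feature extraction remains on the MCU.
# The 10x model speedup is assumed for illustration.

def overall_speedup(feature_share, model_speedup=10.0):
    """Overall speedup when only the (1 - feature_share) model portion
    of the workload is accelerated by model_speedup."""
    model_share = 1.0 - feature_share
    return 1.0 / (feature_share + model_share / model_speedup)

for name, share in [("voice wake-word/command", 0.36),
                    ("environmental noise cancellation", 0.68)]:
    print(f"{name}: overall gain capped at {overall_speedup(share):.2f}x "
          f"(feature extraction = {share:.0%} of the workload)")
```

Under these assumptions the voice case tops out around 2.4X and the noise-cancellation case around 1.4X, far below the accelerator’s raw 10X, which is exactly why accelerating both stages in one core pays off.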

The NeuPro-Nano neural engine that runs the AI model is scalable, supporting multiple MAC configurations. ML performance is further boosted through sparsity acceleration and through activation acceleration for non-linear functions such as sigmoid.
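As a rough illustration of what sparsity acceleration buys, consider a dot product that skips the multiply-accumulate (MAC) wherever a weight is zero, so compute scales with the non-zero weight count rather than the layer size. This toy Python loop is conceptual only, not Ceva’s hardware implementation:

```python
# Conceptual weight-sparsity acceleration: MACs are skipped for zero
# weights, so work scales with the number of non-zero weights.

def sparse_dot(weights, activations):
    acc, macs = 0, 0
    for w, a in zip(weights, activations):
        if w == 0:
            continue            # zero weight: no MAC issued
        acc += w * a
        macs += 1
    return acc, macs

weights = [0, 3, 0, 0, -2, 0, 1, 0]      # 62.5% of weights are zero
activations = [5, 1, 7, 2, 4, 9, 3, 6]
result, macs = sparse_dot(weights, activations)
print(f"result={result}, MACs issued: {macs} of {len(weights)}")
```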

Proprietary weight compression technology dispenses with the need for intermediate decompression storage, decompressing weights on the fly as needed. This significantly reduces the need for on-chip SRAM – more cost reduction.
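The idea can be sketched as follows: weights stay compressed in memory and are expanded a small block at a time inside the compute loop, so a full-size decompressed weight buffer never exists in SRAM. The run-length encoding below is a hypothetical stand-in, since Ceva’s actual compression format is proprietary:

```python
# Sketch of on-the-fly weight decompression: only one small block of
# decompressed weights is live at any time. The compressed form is a
# toy run-length encoding of zeros: (zero_run, value) pairs.

def decompress_blocks(compressed, block=4):
    """Yield fixed-size blocks of weights from (zero_run, value) pairs."""
    buf = []
    for zeros, value in compressed:
        buf.extend([0] * zeros + [value])
        while len(buf) >= block:
            yield buf[:block]
            buf = buf[block:]
    if buf:
        yield buf

compressed = [(2, 3), (1, -2), (0, 1), (3, 5)]   # expands to 10 weights
activations = [5, 1, 7, 2, 4, 9, 3, 6, 8, 2]

acc, i = 0, 0
for blk in decompress_blocks(compressed):
    for w in blk:               # at most `block` weights live at once
        acc += w * activations[i]
        i += 1
print("accumulated:", acc)
```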

Power management is a key component in meeting tinyML objectives. Clever sparsity management minimizes calculations with zero weights, dynamic voltage and frequency scaling (tunable per application) can significantly reduce net power, and weight sparsity acceleration also reduces energy/bandwidth communication overhead.

Finally, the core is designed to work directly with standard inference frameworks – TensorFlow Lite for Microcontrollers and μTVM – and offers a tinyML Model Zoo covering voice, vision, and sensing use cases, based on open libraries and pre-trained and optimized for NeuPro-Nano.
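On the device itself that means the C++ TensorFlow Lite for Microcontrollers API, but the load/allocate/invoke flow mirrors desktop TensorFlow Lite, sketched here in Python for readability. The model file name and zeroed input are hypothetical placeholders, not an actual model-zoo artifact:

```python
# Minimal TensorFlow Lite inference flow (desktop Python API); the
# microcontroller version follows the same load/allocate/invoke steps
# in C++ with a static memory arena. "keyword_model.tflite" is a
# hypothetical placeholder model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="keyword_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one window of (placeholder) audio features and run inference.
features = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
print("class scores:", interpreter.get_tensor(out["index"]))
```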

Future proofing

Remember that point about tinyML being a collaboration rather than a standards committee? The initial aims are quite clear, but they continue to evolve, at least in discussion, as applications evolve. Maybe the power ceiling will be pushed up, maybe bit-widths should cover a wider range to support on-device training, maybe some level of GenAI should be supported.

Ceva is ready for that. NeuPro-Nano already supports 4-bit to 32-bit precisions as well as native transformer computation. As the tinyML goalposts move, NeuPro-Nano can move with them.

Ceva-NeuPro-Nano is already available. You can learn more HERE.



Facing challenges of implementing Post-Quantum Cryptography

Facing challenges of implementing Post-Quantum Cryptography
by Don Dingee on 07-09-2024 at 10:00 am


While researchers continue the march toward more powerful quantum computers, cybersecurity measures are already progressing on an aggressive timeline to head off potential threats. The urgency is partly in anticipation of “store-now-decrypt-later” attacks, where data that seems safe under current generations of encryption technology is harvested and kept until quantum computers grow powerful enough to decrypt it. Hardware lifecycles are also on the minds of many, since chips developed using classical pre-quantum algorithms will abruptly become obsolete. Secure-IC outlines the approach needed to confront the industrial challenges of implementing Post-Quantum Cryptography (PQC) in its new white paper.

Revisiting the algorithms and planning a transition

RSA became the de facto standard in encryption technology in the late 1970s. It combines short decryption times with unreasonably long crack times, thanks to long key lengths. Crack-time estimates in the hundreds of years were the best guess based on the computing power of the day – mainframes and minicomputers. For every measure, there is a countermeasure, and it took only two decades for Shor’s algorithm to emerge, theoretically rendering both RSA and elliptic curve cryptography vulnerable. In practice, Shor’s algorithm would need to run on a much more powerful computer to crack encryption in a reasonable time. Despite processing power advances along Moore’s Law, RSA cryptography has remained safely beyond cracking.
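A toy example with textbook-size numbers shows why RSA’s security rests entirely on the difficulty of factoring the public modulus – and therefore why an efficient quantum factoring algorithm is such a threat. The tiny primes below are for illustration only; real RSA moduli are 2048 bits or more:

```python
# Toy RSA: anyone who can factor n recovers the private key.

p, q = 61, 53                 # secret primes (tiny for illustration)
n = p * q                     # public modulus, 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent, 2753 (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)       # encrypt: c = m^e mod n
plain = pow(cipher, d, n)     # decrypt: m = c^d mod n
assert plain == msg

# An attacker who factors n = 61 * 53 rebuilds d the same way:
d_cracked = pow(e, -1, (61 - 1) * (53 - 1))
assert d_cracked == d
```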

Quantum computing changes the curve with an exponential increase in computational power as the number of qubits scales. Soon, quantum computers could offer enough operations per second to cut crack times dramatically for classical encryption methods. That should not be a surprise – classical encryption algorithms remain fixed while computing power grows yearly, which means new algorithms will be needed if encryption is to stay safe.

NIST has pursued PQC algorithms since 2016, announcing its first round of selections in July 2022. From those selections, the NSA issued its PQC recommendations in the Commercial National Security Algorithm Suite 2.0 (CNSA Suite 2.0) with timelines for modernizing six classes of systems and a target of having all systems PQC-enabled by 2033.

With the NSA’s initial software/firmware signing and cloud services goals looming in 2025, developers need to get moving with PQC technology and IP, forcing the discussion from theory to practice. Agencies in Europe – including France’s National Cybersecurity Agency (ANSSI) and Germany’s Federal Office for Information Security (BSI) – and Asia have issued similar timelines for approaching the PQC transition.
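One low-friction way to get moving is to experiment with open implementations of the selected algorithms. As an illustration – not a tool named in the white paper – the sketch below runs a key-encapsulation round trip with CRYSTALS-Kyber, the lattice-based KEM in NIST’s first selections, using the open-source liboqs-python bindings:

```python
# Kyber key encapsulation round trip via liboqs-python (assumes the
# liboqs library and its Python bindings are installed).
import oqs

with oqs.KeyEncapsulation("Kyber512") as receiver, \
     oqs.KeyEncapsulation("Kyber512") as sender:
    public_key = receiver.generate_keypair()
    # The sender encapsulates a shared secret against the public key...
    ciphertext, secret_sender = sender.encap_secret(public_key)
    # ...and the receiver recovers the same secret with its private key.
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_sender == secret_receiver
```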

Projecting PQC theory into practical implementations

Secure-IC devotes the balance of its white paper to practical implementation challenges. High on the list is performance, particularly embedded device performance, as many more devices connect to the internet and must encrypt and decrypt traffic for security. Also on the list is hybridization, where classical and PQC algorithms exist in systems simultaneously. Another point is the existence of new cryptographic primitives in PQC and the associated concerns with design, integration, licensing, and interoperability. Their last point is certifications, where industry and regional differences complicate the landscape and usually mean addressing multiple certification efforts to field a product in various applications and markets.
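Hybridization, for instance, often boils down to deriving one session key from both a classical shared secret (say, from ECDH) and a post-quantum shared secret (say, from a Kyber KEM), so an attacker must break both primitives. The HKDF-style combiner below is one common pattern, sketched with stand-in byte strings rather than real key material:

```python
# Hybrid key derivation: HKDF-extract over the concatenated classical
# and post-quantum secrets, then one HKDF-expand block for a 32-byte
# session key. One common pattern, not a mandated construction.
import hashlib
import hmac

def combine_secrets(classical_secret: bytes, pqc_secret: bytes,
                    info: bytes = b"hybrid-session-key") -> bytes:
    prk = hmac.new(b"\x00" * 32, classical_secret + pqc_secret,
                   hashlib.sha256).digest()            # HKDF-extract
    return hmac.new(prk, info + b"\x01",
                    hashlib.sha256).digest()           # HKDF-expand, T(1)

session_key = combine_secrets(b"ecdh-shared-secret-bytes",
                              b"kyber-shared-secret-bytes")
print(session_key.hex())
```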

In developing its PQC-ready technologies, Secure-IC created a hardware accelerator and a software library that together deliver a complete solution to these challenges. The hardware architecture manages impacts on power, performance, and area (PPA) for enabling embedded devices with PQC. The software provides configurable modules for both classical and post-quantum algorithms. Secure-IC’s solutions have achieved several certifications, including those for the automotive industry.

To download a copy of the white paper and see how Secure-IC solutions face the challenges and help developers safeguard digital assets, please visit the Secure-IC website:

Redefining Security – Confronting the Industrial Challenges of Implementing Post-Quantum Cryptography (PQC)