
PQShield Demystifies Post-Quantum Cryptography with Leadership Lounge
by Mike Gianfagna on 10-31-2024 at 6:00 am


Post-Quantum Cryptography, or PQC, provides a technical approach to protecting encrypted data and connections once quantum computers can cost-effectively defeat current approaches. Exactly when this will occur is open to much discussion, but the fact is this day is coming, some say in ten years. One of the imperatives is to deploy quantum-resistant algorithms before this happens. That process can also take a long time, so the time for action is now. The National Institute of Standards and Technology, or NIST, is an agency of the United States Department of Commerce that promotes American innovation and industrial competitiveness through measurement science, standards, and technology. NIST is deeply involved in developing PQC standards, and you can get a view of the whole process here.

PQShield is a company that delivers hardware, software, firmware and research IP to enable post-quantum cryptography. Recently, the company started a series of videos to help the industry understand the PQC world and what to do. Let’s look at how PQShield demystifies post-quantum cryptography with Leadership Lounge.

NIST Standards – the PQC Turning Point

There are several short and very informative videos on Leadership Lounge. A complete list and a link are coming. First, I’d like to focus on the second installment, entitled NIST standards – the PQC turning point. Each video features a dialogue between Ben Packman, Chief Strategy Officer, and Dr. Ali El Kaafarani, Founder & CEO at PQShield. These gentlemen have a natural and easy-to-watch style. You are essentially listening in on an information-rich dialogue.

In this video, Ben and Ali discuss the publication of the first NIST PQC standards. This is certainly cause for celebration. It represents a substantial achievement for the whole collaborative PQC community. The long road to get to this point is discussed as there were many delays and frustrations along the way. Despite this, Ali points out that the NIST team was always there to provide timely support and clarification. It was indeed a broad collaboration.

He also points out that in recent months, even NIST was anxiously awaiting the publication of the new standards along with the rest of the community. The work has been done, and now the government needs to publish the standards. Ben states that “even NIST wound up in the same boat as the rest of us.”

Even with the long wait, it is pointed out that everyone really appreciated the thorough process, and the communication, clarification, and input from everyone involved. The community will now move to implementation and testing of the new standards, a long and complex process that will continue to require communication and collaboration.

The Leadership Lounge Library

These videos cover a lot of ground, with more on the way. Here is a list of the current topics.

Video 1: Algorithms are just recipes. The release of the new NIST standards is a great achievement, but how do you apply them? At PQShield, this is a question that drives the company towards mature products, solving real-world problems. Ali and Ben come to realize that algorithms are really only recipes – the key is how you use them. 

Video 2: Summarized above.

Video 3: How to think crypto agile. Cryptography is about risk mitigation. It’s a question of how you value your business, and it might well be the last line of defense when it comes to protecting what’s important to you. Ali and Ben talk about the way we each think about our business, and how that impacts decisions we make.

Video 4: Celebrating the cryptography community. Ben and Ali reflect on the origins of PQShield as part of the wider cryptographic community and how great it is to be part of a brilliant but genuinely down-to-earth group of cryptographers.

Video 5: No can to kick down the road – it’s all about compliance. Release of the standards has definitely shifted the focus – it’s now time to talk about how we deploy post-quantum cryptography, and where to start in the supply chain.

Video 6: Post-quantum is an era. The term ‘post-quantum’ defines an era when public key cryptography needs to be replaced with new technology. Ali and Ben discuss some of the wider pieces of cryptography, many of which are not vulnerable to the quantum threat, but form essential components nevertheless, in the ‘post-quantum’ era.

Video 7: Is it time to stop talking about PQC?  Ben and Ali discuss moving into an era when the focus has shifted from the nature of the threat, to talking about compliance with the next generation of standards in public key cryptography.

Video 8: PQC in silicon. Ben and Ali talk about PQShield’s silicon implementation of PQC – the company hasn’t just designed PQC solutions, it’s built hardware IP onto a physical chip.

Video 9: Standardization – what’s next? Ben and Ali discuss NIST’s timeline, including FALCON, Round 4 KEMs, the necessary mix of lattice, code-based and hash algorithms, as well as the ongoing effort to select digital signatures.

To Learn More

You can learn more about PQShield and its unique focus and charter on SemiWiki here. And you can browse all the great videos on Leadership Lounge here. PQC is a challenge that will impact everything. Getting ahead of the game is the best strategy. PQShield is the best partner to do that. And that’s how PQShield demystifies post-quantum cryptography with Leadership Lounge.



Datacenter Chipmaker Achieves Power Reduction With proteanTecs AVS Pro
by Kalar Rajendiran on 10-30-2024 at 10:00 am


As semiconductor technology advances and nodes continue to shrink, designers are faced with increasing challenges related to device complexity, power consumption, and reliability. The delicate balance between high performance, low power usage, and long-term reliability is more critical than ever. This growing demand calls for innovative solutions that can dynamically adapt to real-time operating conditions, ensuring devices meet performance standards while minimizing unnecessary power consumption. In conventional chip design, operating voltages are typically set higher than the minimum required to account for variables like temperature changes, signal noise, and process aging. While this safety margin helps prevent performance issues, it often leads to inefficient power consumption due to a one-size-fits-all approach, resulting in over-provisioning.

Demo at the TSMC OIP Ecosystem Forum

At the recent TSMC OIP Ecosystem Forum, proteanTecs showcased their AVS Pro solution with a live demo that highlighted how their adaptive voltage scaling (AVS) technology can revolutionize power management in semiconductor chips. The solution can achieve up to 14% in power savings.

The demo showed how AVS Pro effectively minimizes power consumption by dynamically adjusting the chip’s operating voltage using embedded margin agents and dedicated algorithms. In this case, hundreds of agents were spread across the chip’s logic paths, to continuously monitor the timing margins—an indicator of how close a path is to experiencing a timing failure. This real-time data was fed into the AVS Pro application, which adjusted the voltage based on the current needs of the chip, ensuring that performance was maintained without excessive power usage. Initially, the chip’s supply voltage was set at 650 millivolts—higher than the minimal operating voltage, or VDD Min, of 580 millivolts. The extra voltage is applied as a safeguard against potential issues like aging, environmental noise, and workload variations. However, this guard band leads to over-provisioning, which wastes valuable power.

When AVS Pro was enabled, the system reduced the voltage based on real-time feedback from the agent measurements. This careful scaling resulted in significant power savings—up to 12.51% in the demo—without sacrificing performance or stability. AVS Pro continues to adjust the voltage until the timing margins reach a safe minimum. If a sudden workload spike or voltage drop threatens to push the timing margins below a critical threshold, the system instantly increases the voltage to maintain stability and avoid potential failures. Once conditions stabilize, AVS Pro resumes voltage reduction, ensuring the chip operates at its most efficient power level.
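To make that closed-loop behavior concrete, here is a minimal Python sketch of a margin-driven voltage controller in the spirit of what is described above. The thresholds, step size, and function names are illustrative assumptions, not proteanTecs’ actual AVS Pro hardware or firmware.

```python
VDD_MIN_MV = 580          # minimum operating voltage (VDD Min) from the demo
MARGIN_CRITICAL_PS = 20   # hypothetical critical timing-margin threshold (ps)
MARGIN_SAFE_PS = 40       # hypothetical margin above which voltage can be lowered
STEP_MV = 5               # hypothetical voltage adjustment step

def avs_step(vdd_mv: int, margins_ps: list[int]) -> int:
    """One control-loop iteration: read agent margins, return the next voltage."""
    worst = min(margins_ps)                       # the path closest to a timing failure
    if worst < MARGIN_CRITICAL_PS:
        return vdd_mv + STEP_MV                   # raise voltage immediately to stay safe
    if worst > MARGIN_SAFE_PS and vdd_mv > VDD_MIN_MV:
        return max(vdd_mv - STEP_MV, VDD_MIN_MV)  # scale down toward the voltage floor
    return vdd_mv                                 # hold: margins are at a safe minimum

# Example run: start at the guard-banded 650 mV and feed in shrinking margins
vdd = 650
for margins in ([80, 95, 70], [60, 72, 55], [45, 50, 41], [30, 35, 28]):
    vdd = avs_step(vdd, margins)
    print(vdd)   # voltage steps down, then holds once margins approach the threshold
```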

This kind of solution is essential for industries such as AI, high-performance computing (HPC), data centers, mobile telecom, and automotive electronics.

How AVS Pro Works: Real-Time Monitoring and Adaptation

At the core of AVS Pro is its ability to monitor millions of logic paths in a chip in real time, providing a highly granular picture of each path’s proximity to a timing failure. The system continuously analyzes these margins and dynamically adjusts voltage levels to prevent failures caused by environmental factors, process variation, latent defects, noise, application stress, and aging effects. In contrast to traditional methods, which apply broad voltage guard bands for worst-case scenarios, AVS Pro tailors its response to the chip’s real-time conditions. By doing so, it optimizes power usage while ensuring that performance remains reliable even under challenging conditions, such as high temperatures or heavy workloads. When conditions are favorable, AVS Pro safely lowers the voltage, reducing power consumption and extending the device’s lifespan, by pushing out device wearout.

The system also accounts for process variations, ensuring each chip is calibrated individually to operate at its optimal voltage. Moreover, it monitors aging effects that slow down transistors over time, continuously adjusting voltage to compensate for degradation, thus preventing performance degradation or premature failure.

A Holistic, Closed-Loop Solution

The power of AVS Pro lies in its closed-loop integration of hardware and firmware. This tightly coupled system continuously monitors, analyzes, and adjusts voltage levels in real time, ensuring the chip remains within its optimal operating parameters. The system not only responds to current conditions but also learns from historical data, enabling it to predict future trends and make proactive voltage adjustments.

Fast-Response Protection and Adaptation

Another key feature of AVS Pro is its fast-response safety net. In dynamic environments where conditions can change rapidly, it is crucial for the system to make quick adjustments to avoid timing failures. AVS Pro’s closed-loop architecture provides real-time feedback between the hardware and firmware, allowing the system to instantly react to voltage fluctuations or workload spikes. By detecting potential failures early and taking corrective action immediately, AVS Pro ensures that even minor performance fluctuations are addressed before they escalate into more serious problems. This type of capability is essential for applications that demand high reliability, such as cloud computing, AI/HPC, and critical infrastructure.

Summary

The combination of real-time monitoring, adaptive voltage scaling, and a closed-loop architecture makes AVS Pro an ideal solution for designers and manufacturers looking to optimize their products for the next generation of computing technologies, where performance, power efficiency, and reliability are paramount.

The proteanTecs AVS Pro solution pushes the boundaries of adaptive voltage scaling and power optimization, delivering tangible benefits across a wide range of applications, from data centers to consumer devices. By ensuring each chip operates at the most efficient voltage level, AVS Pro maximizes performance while minimizing power consumption, paving the way for the future of high-performance semiconductor design.

Play with a Power Reduction ROI Calculator (you will need to scroll down a bit on the page).

Learn more about chip power reduction and data center economics.

Access a whitepaper on Power Performance Optimizer.

Visit proteanTecs.com to learn more about their various technology offerings.

Also Read:

proteanTecs Introduces a Safety Monitoring Solution #61DAC

proteanTecs at the 2024 Design Automation Conference

WEBINAR: Navigating the Power Challenges of Datacenter Infrastructure



The Next LLM Architecture? Innovation in Verification
by Bernard Murphy on 10-30-2024 at 6:00 am


LLMs have amazing capabilities but inference run times grow rapidly with the size of the input (prompt) sequence, a significant weakness for some applications in engineering. State space models (SSMs) aim to correct this weakness. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Mamba: Linear-Time Sequence Modeling with Selective State Spaces. It was published on arXiv in 2023. The authors are from CMU and Princeton.

Judging by recent publications there is growing interest in a next step beyond transformer architectures, using an architecture building on state space models (SSMs). State space modeling is not a new idea; studies date back to the 1960s (Kalman filters) and are applied to time series analysis in many disciplines. In essence the method builds a model of internal state for a system based on equation-based constraints or statistical observations.

Research in SSMs for LLMs is quite recent, based on the idea that it should be possible to generate statistical models to mirror a more compact representation. Using such a model, inference can predict next items in a sequence faster than using brute-force attention recalculation on each next step. Research is already looking at applications in speech generation, DNA sequence analysis, computer vision, and of course LLM methods. While I haven’t yet found research on verification applications, it seems reasonable to assume that if ‘traditional’ LLMs can play a role in a verification problem, then SSM-based LLMs can play a more efficient role.

Paul’s view

The potential for LLMs to dramatically improve verification productivity is clear. How long that will take, and what kinds of tools will achieve it, is actively debated. All EDA vendors including Cadence have significant investments in LLM-based tools. This month we’re blogging about Mamba, a state space model (SSM) rather than an LLM. SSM research has been active for many years, but Mamba puts it in the spotlight as a serious contender to replace LLMs. While ours is not a blog for AI experts, if SSMs are to replace LLMs it would be a big deal for all of us, so we figured we should respect the moment and blog on Mamba!

As a simple teaser here, I like to compare LLMs and SSMs to control and datapath in chip design. Think of an LLM as a massive multi-billion node datapath. The inputs are every word in the prompt concatenated with every word that has been output so far. The output is the next word inferred. The width of the datapath explodes internally as very complex math is used to map numbers denoting each input word into scores for every possible output word, literally the entire English dictionary.

Alongside a datapath is control logic that gates and guides the datapath. In our world, control logic is highly sequential – state machines and control registers. Control logic up-levels datapath from a calculator into a reasoning thing that can take actions and make decisions.

In LLMs the control logic is not sequential. It’s a combinational “attention” weighting function that weights input words with other input words. In SSMs the control logic is a generic programmable (through training) state machine. Sure, it can do attention, but it can do many other things as well.

One key benefit of SSMs is that they don’t have limits on the size of the input prompt. LLMs have an n-squared size/runtime problem since the attention function must compare every input word with every other input word. Inference blows up if the context window is too big. SSMs have no hardwired requirement to compare every input word to every other input word. Conceptually they just remember something about the words input so far and use this memory to project weightings on the current input word.

The math and innovations behind SSMs go deep. If you want to zoom in, this blog is a great place to start. Either way, let’s all stay tuned – dramatic improvements in verification productivity may well come through SSMs rather than LLMs. Imagine what we could do if the RTL and testbench for a full-chip SoC and a full wavedump from its simulation could be passed as input to an SSM!

Raúl’s view

Inference in transformers has quadratic complexity arising from the self-attention mechanism: each token in the input sequence must compute its relevance (attention score) to every other token. This means that for an input sequence of length n, the attention mechanism requires O(n²) computations. This makes inference expensive, and in practice a state-of-the-art LLM like OpenAI’s GPT-4 reportedly manages sequences of up to 32,000 tokens, while Google’s Gemini can handle up to 8,192 tokens. State Space Models (SSMs) have been developed to address transformers’ computational inefficiency on long sequences, but they have not performed as well as attention on important domains such as language.
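A toy numpy sketch makes the quadratic cost visible: every query is scored against every key, so the score matrix (and the work to fill it) grows as n × n with the sequence length. This is purely illustrative and not how production transformers are implemented.

```python
import numpy as np

n, d = 8, 16                      # sequence length, embedding dimension
Q = np.random.randn(n, d)         # query vectors, one per token
K = np.random.randn(n, d)         # key vectors, one per token
V = np.random.randn(n, d)         # value vectors, one per token

scores = Q @ K.T / np.sqrt(d)     # n x n score matrix -> O(n^2) work and memory
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V                 # each output token mixes all n value vectors

print(scores.shape)               # (8, 8): doubling n quadruples this matrix
```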

The paper we review this month introduces Mamba, an architecture which incorporates a structured SSM to perform context-dependent reasoning while scaling linearly in sequence length, matching or outperforming transformers in many cases. Here is how it works.

A Structured SSM maps an input sequence x_t to an output y_t through a state h_t as follows (discretized): h_t = A·h_{t-1} + B·x_t, y_t = C·h_t, where A, B, and C are matrices (to electrical engineers this is reminiscent of a Moore finite state machine). Such recurrent models are efficient because they have a finite state, implying constant-time inference and linear-time training. However, their effectiveness is limited by how well the state has compressed the context. This shortcoming is addressed by selection, which means making B and C also functions of the input and thus time varying. (*)
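The recurrence is easy to sketch in code. The following Python/numpy toy follows the equations above, with a crude input-dependent B and C standing in for selection; the shapes, the tanh gating, and the parameterization are illustrative assumptions and bear no resemblance to Mamba’s actual hardware-aware implementation.

```python
import numpy as np

# Didactic sketch of h_t = A h_{t-1} + B(x_t) x_t, y_t = C(x_t) h_t.
rng = np.random.default_rng(0)
d_state, d_in, seq_len = 4, 1, 10

A = 0.9 * np.eye(d_state)                   # fixed state transition
W_B = rng.standard_normal((d_state, d_in))  # selection: B depends on the current input
W_C = rng.standard_normal((d_in, d_state))  # selection: C depends on the current input

h = np.zeros((d_state, 1))
x = rng.standard_normal((seq_len, d_in, 1))
ys = []
for t in range(seq_len):                    # constant work per step -> linear in seq_len
    B_t = W_B * np.tanh(x[t]).T             # input-dependent gating (illustrative)
    C_t = W_C * np.tanh(x[t])
    h = A @ h + B_t @ x[t]                  # the finite state carries all prior context
    ys.append((C_t @ h).item())

print(ys[:3])                               # first three outputs of the sequence
```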

Mamba is an architecture that integrates a selective SSM with a Multi-Layer Perceptron (MLP) block. It achieves state-of-the-art results, often matching or surpassing Transformer models, in some cases using 3-4x fewer parameters (which is nice but not game changing). Additionally, it can handle longer context, up to sequences of one million in length (this may allow processing of very long strings, useful in EDA where design data is large). It certainly makes the point that Transformers are not the end of the road.

The paper, cited over 1000 times, spans 36 pages with 116 references and requires AI expertise to read. It covers various aspects of SSMs like architectures, dimensions, use of complex vs. real numbers, discretization, RNN gating mechanisms, and selection effects. Mamba is evaluated on synthetic tasks such as Selective Copying (filter out irrelevant tokens) and Induction Heads (retrieving an answer based on context, e.g., predict Potter after Harry), and on Language, DNA, and Audio modeling. Mamba is compared to other SSM architectures such as Hyena, SaShiMi, H3 and Transformer models such as Transformer++. The number of parameters is in the range of hundreds of thousands to one billion. The authors finish by suggesting that “Mamba is a strong candidate to be a general sequence model backbone”.

(*) The paper uses an overbar to indicate discretized A and B matrices, which I could not translate successfully from my Mac to the SemiWiki site. I used an underbar instead.



Defect-Pattern Leveraged Inherent Fingerprinting of Advanced IC Package with TRI
by Navid Asadizanjani on 10-29-2024 at 10:00 am


In the quest to secure the authenticity and ownership of advanced integrated circuit (IC) packages, a novel approach has been introduced in this paper that capitalizes on the inherent physical discrepancies within these components. This method, distinct from traditional strategies like physical unclonable functions (PUFs) and cryptographic techniques, harnesses the unique defect patterns naturally occurring during the manufacturing process. Counterfeiting involves unlawfully replicating authentic items for unauthorized advantage or financial gain, affecting diverse sectors such as automotive components, electronics, and consumer goods. Counterfeit ICs, if sold on the open market, present a substantial risk due to their deviations in functionality, material composition, and overall specifications.

These illegitimate micro-electronic products, which might be mislabeled, reused, or cloned, fall into two primary categories: those with functional differences (like incorrect labeling or false specifications) and those that mimic the original function yet differ in technical aspects such as circuit timing or stress tolerance. Incorporating such counterfeit ICs into electronic devices can lead to significant adverse effects, undermining the devices’ quality, reliability, and performance. The stakes are especially high in military contexts, where the reliability and security of electronic systems are of paramount importance. According to a report from the International Chamber of Commerce, counterfeit trading globally is valued at up to 1 trillion USD each year, with the electronics sector making up a substantial share of this market.

The rise in counterfeit ICs has been linked to practices like outsourcing production to untrusted entities, or to the absence of a proper life cycle management or traceability framework. Detection of counterfeits is perceived to be a more viable approach than prevention. Prevention requires extensive collaboration across borders, industries, and legal frameworks. Given the global nature of supply chains and the sophistication of counterfeit operations, prevention efforts can be difficult to implement and enforce consistently. Detection, however, offers flexibility, cost-effectiveness, and the ability to adapt to the changing tactics of counterfeiters. Despite extensive research into methods for detecting counterfeit ICs over the past decade, differentiating between new and used ICs, as well as spotting illegally produced or altered ICs, continues to be a significant challenge.

The introduction of sophisticated multi-die packaging technologies further complicates the issue of counterfeiting. These technologies, which combine multiple chiplets into a single package, increase the likelihood of counterfeit components being introduced into the system. The complexity of these systems, where chiplets from various sources are integrated into one package, makes verifying the authenticity of each component more challenging, raising the potential for counterfeit chiplets to affect the system’s overall functionality and security.

This new landscape of IC packaging necessitates a new direction for enabling reliable provenance. Provenance allows for the authentication of components at any stage of the supply chain. Buyers can verify whether an IC matches its documented history, ensuring its authenticity, which reduces the risk of counterfeit ICs being accepted and used in critical systems. Provenance requires a method of identification at the die level, package level, or board level. Historically, this is achieved by embedding some form of hardware identifier into the IC. These identifiers can be as simple as placing physical markers on the IC package or the die, storing manufacturing data in a non-volatile memory inside the chip, or inserting additional circuitry to serve as an electrical watermark, which makes it possible to trace the batch or wafer number of origin of a particular IC.

Hardware fingerprinting has emerged as a potent method for achieving provenance in the fight against counterfeit electronics. For example, physical unclonable functions (PUFs) leverage the unique manufacturing process variation of hardware components to provide a means of identifying and authenticating genuine devices throughout their lifecycle. However, despite being in the spotlight for more than two decades, PUF-based fingerprinting is yet to be widely adopted in industry. This can be attributed to a number of reasons, including sensitivity to environmental conditions, risk of physical degradation over time, scalability and integration challenges within manufacturing processes, challenges in enrollment and response provisioning, high resource demands for error correction, susceptibility to advanced security attacks, reliability concerns across the device’s lifespan, and issues with standardization and interoperability.
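To illustrate the basic enroll-and-verify pattern that any hardware fingerprint (PUF-based or inherent) must support, here is a minimal Python sketch. The bit width, noise model, and matching threshold are illustrative assumptions, and this generic Hamming-distance check is not the TRI-based method the paper proposes.

```python
import random

FP_BITS = 256
MATCH_THRESHOLD = 0.10   # tolerate up to 10% noisy bits (hypothetical value)

def measure_fingerprint(device_seed: int, noise_bits: int = 0) -> int:
    """Simulate reading a device's inherent fingerprint, with optional measurement noise."""
    rng = random.Random(device_seed)      # the seed stands in for per-device process variation
    fp = rng.getrandbits(FP_BITS)
    for _ in range(noise_bits):           # flip a few bits to model environmental variation
        fp ^= 1 << rng.randrange(FP_BITS)
    return fp

def verify(enrolled: int, measured: int) -> bool:
    """Accept if the fractional Hamming distance is below the threshold."""
    distance = bin(enrolled ^ measured).count("1") / FP_BITS
    return distance < MATCH_THRESHOLD

enrolled = measure_fingerprint(device_seed=42)                    # stored at enrollment
print(verify(enrolled, measure_fingerprint(42, noise_bits=5)))    # True: same device, some noise
print(verify(enrolled, measure_fingerprint(7, noise_bits=5)))     # False: different device
```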

In this work, the authors review the existing challenges and limitations of traditional embedded fingerprinting and watermarking approaches and propose the notion of inherent hardware identifiers using Thermo-reflectance Imaging (TRI) as a new frontier of opportunity for effective security assurance of the advanced IC packaging supply chain, as categorized in Figure 2.

The key contributions of this work are summarized as follows:
1. Review existing embedded fingerprinting and watermarking.
2. Highlight the limitations and challenges of the existing approaches when applied in the context of the security of chiplets and multi-chiplet systems or SiP.
3. Introduce the concept of inherent identifiers for fingerprinting and watermarking.
4. Demonstrate TRI to harness inherent uniqueness to create fingerprints and watermarks.

Navid Asadizanjani
Associate Professor, Alan Hastings Faculty Fellow, Director, Security and Assurance lab (virtual walk), Associate director, FSIMEST, Department of Electrical and Computer Engineering, Department of Material Science and Engineering, University of Florida

Nitin Vershney
Research Engineer, Florida Institute for Cybersecurity Research

Also Read:

Electron Beam Probing: The New Sheriff in Town for Security Analyzing of Sub- 7nm ICs with Backside PDN

Navigating Frontier Technology Trends in 2024

PQShield Builds the First-Ever Post-Quantum Cryptography Chip



Podcast EP256: How NoC Tiling Capability is Changing the Game for AI Development with Andy Nightingale
by Daniel Nenni on 10-29-2024 at 8:00 am

Dan is joined by Andy Nightingale, VP of product management and marketing at Arteris. Andy has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.

Dan explores with Andy the significance of the recently announced tiling capabilities and extended mesh topology support for the Arteris network-on-chip (NoC) IP products. Andy provides an extensive overview of the benefits and impact of this new capability across a very broad range of markets and products.

He explains that the huge increase in AI development has put tremendous pressure on chip development across system scalability, performance, power and design productivity. Andy explains how Arteris NoC IP with the new tiling capabilities will have a substantial impact in all these areas across many applications and markets, in both the data center and the edge.

How specific market challenges are met with Arteris NoC IP is explained in detail in this informative discussion.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



How to Update Your FPGA Devices with Questa
by Mike Gianfagna on 10-29-2024 at 6:00 am


It’s a fact of life that technology marches on. Older process nodes get replaced by newer ones. As a result, ASSPs and FPGAs are obsoleted, leaving behind large system design investments that need to be redone. Since many of these obsolete designs are performing well in the target application, this re-do task can be particularly vexing. Thanks to advanced technology offered by Siemens Digital Industries Software, there is now another way around this problem. Using the Questa™ Equivalent FPGA retargeting flow, all the work on obsolete designs no longer needs to go to waste. Siemens recently published a white paper that takes you through the entire process. A link is coming, but first let’s look at the big picture and how to update your FPGA devices with Questa.

An Overview of the Problem and the Flow

The goal of the flow presented in the white paper is to extend the design life of obsolete FPGAs by migrating those designs to newer technologies. This way, the design work can be reused with the added benefit of taking advantage of the latest safety, security, and power saving features of newer FPGAs. The fundamental (and correct) premise here is that retargeting a working design to a newer technology takes far less time and effort than re-designing the application from scratch. Siemens’ Questa Equivalent FPGA retargeting solution is at the center of this approach.

Additional motivation includes process simplification and cost reduction due to end-of-life supply limitations and minimization of counterfeit risks that may be present in older supply chains. The proposed solution takes the final netlist from the original design and generates a functionally equivalent final netlist in a modern target FPGA technology. Months of engineering time can be eliminated because designers do not have to recreate the RTL for reimplementation on a modern FPGA device.

A high-level view of this process is shown in the graphic at the top of this post. Digging a bit deeper, the figure below provides more details of the retargeting methodology presented in the white paper.

Retargeting methodology

Details of Use Cases

 Not all design situations are the same. Recognizing that, the Siemens white paper presents three use cases. You will get all the information needed to build your migration strategy in the white paper – again, a link is coming. For now, let’s briefly examine the three scenarios that are discussed.

Use Case 1: Equivalence with RTL: In addition to proving the obsolete netlist against the new netlist, Questa Equivalent FPGA can be used to prove the functional equivalence of the RTL to the obsolete netlist, if it has not been manually modified to meet the requirements of the original specification. A complete description of how to examine the design to identify any needed additional inputs and how to set up this flow are all covered.

Use Case 2: RTL Retargeting: If the RTL for the obsolete device netlist is available, and you decide to use it for synthesis and retargeting and want to verify the old device netlist against the new netlist (synthesized from the RTL), this is the flow to use.

A high-level summary of this flow includes:

  • Synthesize the RTL for the new device
  • Create (if necessary) and apply formal models for the new device netlist
  • Prove functional equivalence for RTL versus the new device netlist using Questa Equivalent FPGA
  • Create (if necessary) and apply formal models for the obsolete device netlist
  • Prove functional equivalence of the obsolete device netlist versus the modern device netlist using Questa Equivalent FPGA

Again, all the details about how to examine the design and identify any needed information and how to set up the overall flow are covered in the white paper.

Use Case 3: RTL-RTL Retargeting: RTL-RTL retargeting can be used if the RTL of the obsolete device netlist has IP that is no longer available for the new device, and the obsolete IP can be replaced with up-to-date IP with similar functionality (or functionally equivalent logic).

A high-level summary of this flow includes:

  • Replace the obsolete IP with similar updated IP or equivalent logic
  • Create (if necessary) and apply formal models for the obsolete device IP
  • Create (if necessary) and apply formal models for the new device IP
  • Prove functional equivalence for RTL with obsolete IP versus RTL with updated IP (or equivalent logic) using Questa Equivalent FPGA
  • Synthesize the RTL for the new device
  • Create (if necessary) and apply formal models for the new device netlist
  • Prove functional equivalence for RTL with updated IP or equivalent logic versus the new device netlist using Questa Equivalent FPGA
  • Create (if necessary) and apply formal models for the obsolete device netlist
  • Prove functional equivalence for RTL with obsolete IP versus the obsolete device netlist using Questa Equivalent FPGA

As before, all the details of how to examine the design to identify missing data, how to create that data, and how to set up the flow are all covered in the white paper.
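At its core, each of these use cases asks the same question: do two implementations produce identical outputs for every input? The toy Python sketch below checks that by exhaustive enumeration on a three-input function. It is purely illustrative; the formal engines in Questa Equivalent FPGA answer the same question at full-netlist scale without enumerating inputs.

```python
from itertools import product

# Toy combinational equivalence check by exhaustive enumeration. The two "netlists"
# below are hypothetical stand-ins for an original and a retargeted implementation.

def old_netlist(a: bool, b: bool, c: bool) -> bool:
    """Original implementation: (a AND b) OR c."""
    return (a and b) or c

def new_netlist(a: bool, b: bool, c: bool) -> bool:
    """Retargeted implementation, restructured but intended to be equivalent."""
    return not ((not (a and b)) and (not c))   # De Morgan form of the same function

def equivalent(f, g, n_inputs: int) -> bool:
    """Compare outputs over every possible input combination."""
    return all(f(*bits) == g(*bits) for bits in product([False, True], repeat=n_inputs))

print(equivalent(old_netlist, new_netlist, 3))   # True: the two netlists match
```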

To Learn More

In most cases, reuse makes more sense than redesign. If you are faced with this decision, I highly recommend you find out how Siemens Digital Industries Software can help. You can download the Questa Equivalent FPGA Retargeting Flow white paper here. And that’s how to update your FPGA devices with Questa.

Also Read:

The RISC-V and Open-Source Functional Verification Challenge

Prioritize Short Isolation for Faster SoC Verification

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs



Overcoming obstacles with mixed-signal and analog design integration
by Chris Morrison on 10-28-2024 at 10:00 am


Mixed-signal and analog design are key aspects of modern electronics. Every chip incorporates some form of analog IP, as even digital logic is dependent on analog signals for critical functions. Many digital design engineers are known to be uncomfortable with the prospect of integrating analog components. However, the current shortage of analog design engineers means that more digital designers are having to take on this daunting task. Here, we address the main integration issues and look at how recent developments in analog IP technology from Agile Analog are helping to make analog design far less complex, costly and time-consuming.

Traditional mixed-signal and analog design integration issues

Integrating analog and digital functions can result in a complicated design. Ensuring that a chip design meets all requirements can be challenging. Digital design engineers have often relied on reusable digital IP blocks, but the opportunity for design reuse has been limited with analog and mixed-signal designs as these usually involve bespoke solutions for each project. Mixed-signal circuits require close attention to physical layout, as well as correct component placement for compactness and peak performance. It’s also crucial to manage voltage levels, signal levels and signal processing between analog and digital components to enable seamless functionality.

Specialized techniques are needed for controlling noise and interference in mixed-signal systems, because of the sensitive nature of analog circuits and potentially noisy digital elements. Balancing power consumption and temperature regulation in mixed-signal systems adds an extra degree of difficulty, as digital and analog components may have different power requirements. Simulation demands a high level of precision to account for the continuous range of possible values. Testing traditional mixed-signal systems can lead to further challenges as this can involve expensive equipment, as well as time-intensive verification processes that may be alien to digital engineers.

Embracing advances in analog IP

Overcoming the obstacles with traditional mixed-signal and analog design integration can be tricky. Fortunately, following new advances in the analog IP sector, there is now an alternative fresh approach. At Agile Analog, we can automatically generate analog IP that meets the customer’s exact specifications, for any process and foundry, from legacy nodes right up to advanced nodes. Parameters such as accuracy, power consumption, die area, sensitivity and speed can be optimized to suit the precise requirements of the application.

The Agile Analog team is fully focused on expanding our analog IP product portfolio and helping chip design engineers by simplifying mixed-signal and analog design integration. Agile Analog is changing the landscape of analog IP and transforming the way analog circuits are developed, disrupting analog design methodologies that have remained the same for decades and removing the hassle, delay and expense associated with conventional custom IP. We can also regenerate analog IP using a foundry PDK, so it is straightforward to make modifications. For example, it is not necessary to manually port all analog circuits when moving to a smaller process node, as the Agile Analog IP can simply be regenerated.

Our growing range of customizable analog IP solutions cover data conversion, power management, IC monitoring, security and always-on IP, with a vast array of applications including HPC (High Performance Computing), IoT, AI and security. All Agile Analog IP comes with a comprehensive set of IP deliverables – including test specifications, documentation, simulation outputs and verification models. This digitally wrapped IP can be seamlessly integrated into any SoC, substantially cutting the complexity, constraints, risks and costs of analog design. Speeding up the time-to-design will help to accelerate the time-to-market for new semiconductor devices and encourage further innovation across the global semiconductor industry.

Learn more at www.agileanalog.com

Chris Morrison has over 15 years’ experience of delivering innovative analog, digital, power management and audio solutions for international electronics companies, and developing strong relationships with key partners across the semiconductor industry. Currently he is the Director of Product Marketing at Agile Analog, the customizable analog IP company. Previously he held engineering positions, including 10 years at Dialog Semiconductor, now acquired by Renesas. Chris has an engineering degree in computer and electronic systems from the University of Strathclyde and a master’s degree in system level integration from the University of Edinburgh.

Chris Morrison, Director of Product Marketing, Agile Analog

Also Read:

CEO Interview: Barry Paterson of Agile Analog

International Women’s Day with Christelle Faucon VP Sales Agile Analog

2024 Outlook with Chris Morrison of Agile Analog



Emerging Growth Opportunity for Women in AI
by Bernard Murphy on 10-28-2024 at 6:00 am


I was invited to the Fem.AI conference in Menlo Park, the first sponsored by the Cadence Giving Foundation, with a goal to promote increased participation of women in the tech sector, especially in AI. Not just for equity, but also to grow the number of people entering the tech/AI workforce. There are countless surveys showing that demand for such talent is fast outstripping supply. Equally interesting, men and women seem to bring complementary skills to tech, especially to AI. More women won’t just add more horsepower; they can also add a new competitive edge.

What follows is based on talks I heard mostly in the first half of the day. Schedule conflicts prevented me from attending the second half.

Role models

This event was so stuffed with content and high-profile speakers that I struggled at first to decide my takeaways. Listing all speakers and what they had to say would make for a very long and boring blog so I’m taking a different path based on an analogy the MC (Deborah Norville, anchor of Inside Edition) used to kick off the day. She talked about a pipeline for women entering tech, imagining roles in tech, starting in academia, progressing to first jobs and beyond. Reminding us that the reason we don’t see more women in tech is that this pipeline is very leaky.

Many of us find first inspiration for a career path in a role model, especially a high-profile role model, someone in whom we can imagine ourselves. In engineering this is just as true for boys as for girls, but girls don’t see as many identifiable role models as boys do. More are now appearing but are still not sufficiently championed as role models.

We also need to correct girls’ own perception that engineering and software careers are not for them. If they don’t see fun and inspiration in these areas driven by like-minded activity among their peers, they won’t look for role models with those characteristics. Girls Who Code and similar organizations are making an important dent in this problem. Fem.AI and others are aiming (among other things) to raise the visibility of role models who can inspire girls looking for that initial spark. The speakers at this event were a good indication of the caliber of inspiring examples we should promote more actively.

Fixing the leaky pipeline

Start with school programs. A big barrier to those interested in STEM is the math hurdle. I’m told a belief among girls that “math is hard” starts at age 7. Females don’t lack genetic wiring for math, and they are certainly not alone in this challenge. My father (an English teacher) hated math, but he liked trigonometry because he understood how it can be used to solve real world problems like figuring out the height of a building. Relevance to real world problems is an important clue.

At the college level, as many women as men enter as computer science majors, but 45% of them change majors or leave school before graduation. The consensus here was they don’t feel they belong. One solution already in place at multiple colleges is mentorship and allyship programs, where an undergrad can turn to a grad student for guidance or support through a rough patch in self-confidence. (This is arguably just as valuable and accessible for male undergrads. These programs are fostered by encouraging graduates to develop their own leadership skills through mentoring, making them just as feasible and accessible for men.)

A second solution is blended majors, combining CS and AI with say economics, or brain and cognitive sciences, recognizing that women are often drawn to areas where they can see impact. The Dean of Engineering from MIT said she saw female engagement in such programs rising to 50% to 60%.

Working at the interfaces between disciplines was a recurring theme throughout the Summit. Accuracy, fairness and accountability are huge concerns in AI deployments today, and these can’t be addressed solely within CS/AI. One research program I heard about involved a collaboration between a law school and AI researchers. Another (private discussion) was with a senior at Notre Dame leading an all-female team building a satellite – a very impressive multi-disciplinary project with an easily relatable goal.

The irrepressible Dr Joy Buolamwini talked about her seminal work into inaccuracies in face recognition software, leading to major (though not yet complete) regulatory actions in the US, Europe and other countries. It is quite shocking to me that we seem to accept levels of inaccuracy in AI that in any other STEM context would earn an automatic fail. While understandable for high school and research programs, we should demand more for any public deployment affecting safety, finances, policing, legal decisions, everywhere AI claims it can make our lives easier.

The opportunity for women in AI

The theme I have mentioned above several times, that women lean into areas which have a clear impact on real world needs, led to a very interesting VC panel discussion focused on waves in AI venture investment. We’ve seen the AI boom: large language models, spectacular growth in AI hardware and incredible claims. At the same time most would agree we’re headed into a “trough of disillusionment”. The initial thrill is wearing off.

I get it. I work with a lot of tech teams, including several startups. Invariably run by guys, fascinated by their tech, and sure that the world will immediately figure out how to apply their innovation to solve an unlimited number of real-world needs. That’s the way it works with us guys: technology first, figure out a real-world application later.

VCs see the next big wave in AI being problem-centric: start with a domain-specific need – in health care, agriculture, education, wherever – then build a technology solution around that need. Adapt as required to find the best fit between initial concept and experience with prototypes.  This sounds like a perfect fit for the way many women like to work. Suggesting a wave where women can lead, maybe even help men find real applications for their cool tech!

Very interesting series of talks. I look forward to learning more, especially digging deeper into those real-world problems. You can learn more about Fem.AI HERE.



LRCX- Coulda been worse but wasn’t so relief rally- Flattish is better than down
by Robert Maire on 10-27-2024 at 8:00 am

  • Lam put in good quarter with flattish guide- still a slow recovery
  • This is better than worst case fears of order drop like ASML
  • China spend is slowing but tech spending increase offsets
  • Relief rally as the market was braced for bad news and got OK news
Lam has OK, slightly better than in line quarter with OK guide….

It coulda been way worse but wasn’t.

It wasn’t a blow out quarter but at least it wasn’t a disaster that many had been bracing for after the ASML news last week.

Lam came in at $4.17B in revenues and $0.86 in EPS versus expected $4.05B and $0.80 so a “normal” slight beat.

Guidance is for $4.3B ±$300M and EPS of $0.87 ±$0.10, flattish with street expectations of $4.24B and $0.84.

The fact that Lam is looking at flattish to slightly up is way better than the greater than 50% cut in orders ASML reported and the street was fearful of.

China moderating as expected but tech spending is improving.

China moderated from 39% of business to 37% of business with expectations of falling to about 30% in the December quarter. All this down from highs in the 40’s.

There were a ton of China questions on the conference call as analysts probed over their fears of a coming China collapse but there were no clear answers.

We did hear that tech spending, mainly in DRAM (read that as HBM), was improving while NAND is still languishing.

Still a very slow recovery from a very long and deep downturn

We have been covering the semiconductor industry for decades through many cycles and this is perhaps one of the slowest/longest recoveries we have ever seen the industry experience.

Memory obviously had built way too much excess capacity that we are still experiencing the after effects of.

Usually in a downcycle, all spending takes a holiday while capacity gets burned off. In this down cycle, China never stopped spending.

This downcycle was a whiplash cycle started by the COVID crisis which is largely responsible for the severity and the overbuild on the bounce back.

2025 WFE to be up from 2024….but how much?

Lam spoke about 2025 being better than 2024 but would not quantify by how much. Is it 1% better or 50% better….it’s anybody’s guess.

Our guess would be slightly up on the order of 10-15% with China slowing as we exit 2024 and technology spending picking up in foundry & DRAM.

The main issue we see is that with Samsung foundry and Intel not spending much, it’s hard for the rest of the industry to offset that weakness.

The Stocks-Expect a short lived “relief rally”/bounce

The market was expecting news that was likely bad coming out of Lam after the ASML debacle last week.

Lam’s news was more or less in line and OK and wasn’t anywhere near the disaster that it could have been, so we saw the aftermarket bounce in the stock and will likely see a bounce in the equipment stocks overall as investors breathe a sigh of relief.

We would caution investors that it’s clearly not off to the races, as Lam’s lukewarm guidance underscores.

We see a quick bounce then settling in to a slightly lower overall valuation of the semi equipment market as the ASML dark cloud will still hang over the market until proven otherwise.

We would again point out that ASML is the canary in the coal mine of equipment orders as litho tools are always ordered well before other tools due to the long lead time.

It’s also important to point out that the deposition and etch tools that Lam and Applied make are useless without litho tools to print the pattern that is then etched, so there is most certainly a relationship between the number of litho tools and dep and etch tools.

Bottom line is that the industry isn’t going to buy a lot of dep and etch tools while not buying litho tools….it just doesn’t work that way; that divergence does not exist (at least not for very long).

So be aware…..

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

ASML surprise not a surprise (to us)- Bifurcation- Stocks reset- China headfake

SPIE Monterey- ASML, INTC – High NA Readiness- Bigger Masks/Smaller Features

Samsung Adds to Bad Semiconductor News



Podcast EP255: The Growing Proliferation of Semiconductors and AI in Cars with Amol Borkar
by Daniel Nenni on 10-25-2024 at 10:00 am

Dan is joined by Amol Borkar, Product Marketing Director at Cadence. Since joining in 2018 as a senior product manager, he has led the development of many successful hardware and software products, including Tensilica’s latest Vision 331 and Vision 341 DSPs and 4DR accelerator targeted for various vision, automotive and AI edge applications.

Within Tensilica, he has been responsible for product management, marketing, partnerships and ecosystem growth for the Vision, ConnX and MathX families of DSPs. Previously, he was at Intel’s RealSense group, where he held various positions in engineering, product management and marketing and was responsible for the success of a number of RealSense’s 3D cameras.

Before joining Intel, Borkar developed computer vision-based advanced driver assistance algorithms for self-driving vehicles as part of his Ph.D. thesis.

In this informative discussion, Amol explains his passion for technology and working with customers to achieve the required impact. Dan explores AI proliferation in automotive applications with Amol.

Many of the architectural trends in AI are clearly explained, with examples of use models, challenges and benefits. Examples include the merging of vision and radar processing, the benefits and challenges of sensor fusion and domain-based vs. central compute architectures.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.