
Functional Safety Analysis of Electronic Systems

by Daniel Payne on 03-04-2026 at 10:00 am


Safety engineers, hardware designers and reliability specialists in safety-critical industries such as automotive, aerospace, medical devices and industrial automation use FMEDA (Failure Modes, Effects and Diagnostic Analysis). In the automotive sector, ISO 26262 compliance for ADAS, braking systems and ECUs requires FMEDA. In case of a failure, the system has to respond in a way that keeps people, property and the surrounding environment safe. Siemens and Modelwise wrote a white paper on this topic as it applies to electronic systems, so I’ll share my findings.

Safety analysis reports can be written in natural language, but then someone has to interpret them, which can introduce errors. Leaving safety analysis until after a design is done can lead to design rework and stretch out the schedule. The approach from Modelwise is to use Paitron, which formalizes and automates the functional safety process, providing an accurate and efficient way to compute FMEAs and FMEDAs. Integrating Modelwise Paitron with Siemens HyperLynx AMS creates a methodology for the design, verification and functional safety analysis of electronic systems.

For AMS schematic capture and simulation, Siemens offers Xpedition Designer and HyperLynx AMS, respectively. Designers can simulate using component models in SPICE, VHDL-AMS or custom formats.

HyperLynx AMS in the Xpedition Designer environment

HyperLynx AMS integrates PCB design capture with circuit simulation, then Paitron enables automatic functional safety analysis.

The Part Lister in Xpedition Designer generates bills of materials (BOMs) by extracting component property information directly from the schematic database. This enables Paitron to automatically map the components to the categories of the failure rate and failure mode databases, an otherwise manual task for engineers.

Consider an example functional safety analysis using a voltage monitor circuit where an output goes low whenever the input voltage falls outside a reference range.

Schematic
Simulation Results

The voltage divider R3/R4/R5 determines the voltage range, so consider a failure mode where resistor R4 has an increase in resistance. This increase makes the output window wider than expected, resulting in a safety violation.
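To make the failure mode concrete, here is a minimal numeric sketch of a divider-based window monitor. The tap arrangement, component values and the 1.24 V reference are all assumptions for illustration; the white paper does not give the actual circuit values.

```python
def window(r3, r4, r5, vref=1.24):
    """Return (v_low, v_high): the input window of a divider-based monitor.

    The monitor divides Vin through the R3/R4/R5 chain and compares two taps
    against a fixed reference vref (hypothetical topology for illustration).
    """
    total = r3 + r4 + r5
    t_upper = (r4 + r5) / total   # tap used for the low (undervoltage) limit
    t_lower = r5 / total          # tap used for the high (overvoltage) limit
    v_low = vref / t_upper        # Vin below this trips undervoltage
    v_high = vref / t_lower       # Vin above this trips overvoltage
    return v_low, v_high

nominal = window(10e3, 27e3, 4.7e3)
drifted = window(10e3, 54e3, 4.7e3)   # failure mode: R4 resistance doubles
print(nominal, drifted)
```

With R4 drifted high, both limits move outward, so the window widens exactly as the failure mode described above: some undervoltage and overvoltage inputs are now wrongly reported as valid.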

Paitron can be used to speed up FMEA/FMEDA for this design through automation. Accuracy and standardization are achieved through template-based formalization.

In Paitron for this example, the input and output variables Vin and Vout are defined and a domain is assigned to each one.

Variable | Type   | Partition    | Value
---------|--------|--------------|-------------
Vin      | Input  | [0.1, 6.7)   | Undervoltage
         |        | [6.8, 13.4]  | Valid
         |        | [13.5, 30)   | Overvoltage
Vout     | Output | (-inf, 0.5]  | Tripped
         |        | (0.5, inf)   | Valid

Defined model variables and domains
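The partition table above translates directly into classification functions. A small Python sketch, with the interval endpoints taken verbatim from the table (the narrow guard bands between partitions are left unclassified):

```python
def classify_vin(v):
    """Map an input voltage to its partition value per the table above."""
    if 0.1 <= v < 6.7:
        return "Undervoltage"
    if 6.8 <= v <= 13.4:
        return "Valid"
    if 13.5 <= v < 30:
        return "Overvoltage"
    return None  # outside the modeled domain, including the guard bands

def classify_vout(v):
    """Map the output voltage to its partition value per the table above."""
    return "Tripped" if v <= 0.5 else "Valid"
```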

System effects are next formalized using constraint expressions, as shown below:

Effect                | Description
----------------------|---------------------------------------------------------------
Monitor stuck valid   | Vout is always HIGH
Monitor stuck tripped | Vout is always LOW
Missing overvoltage   | Vout is HIGH for some overvolted input and LOW for others
Missing undervoltage  | Vout is HIGH for some undervolted input and LOW for others
Tripped valid         | Vout is LOW for some valid input voltages and HIGH for others
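One way to read these effects is as predicates over the output states observed across each input partition. The sketch below is an illustrative interpretation in Python, not Paitron's actual constraint language:

```python
def classify_effect(obs):
    """Classify a fault's behavior against the effect table.

    obs maps each input partition ("Undervoltage", "Valid", "Overvoltage")
    to the set of Vout states ("HIGH"/"LOW") observed over that partition.
    Returns the first matching effect name, or None if behavior is nominal.
    """
    all_states = set().union(*obs.values())
    if all_states == {"HIGH"}:
        return "Monitor stuck valid"
    if all_states == {"LOW"}:
        return "Monitor stuck tripped"
    if obs["Overvoltage"] == {"HIGH", "LOW"}:
        return "Missing overvoltage"
    if obs["Undervoltage"] == {"HIGH", "LOW"}:
        return "Missing undervoltage"
    if obs["Valid"] == {"HIGH", "LOW"}:
        return "Tripped valid"
    return None
```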

In Paitron, the effect for “Monitor stuck valid” is defined through a dialog.

The failure rate and failure mode distributions for each PCB component are used to compute the quantitative safety metrics. Paitron can use several sources for failure modes and rates.

Source name   | Description                                                                                      | Failure rate | Failure mode
--------------|--------------------------------------------------------------------------------------------------|--------------|-------------
SN 29500      | Siemens standard providing failure rates for electrical and electronic components                | Yes          | No
IEC 61709     | Reference conditions for failure rates and stress models for conversion                          | No           | Yes
MIL-HDBK-217F | Military handbook that predicts reliability of military electronic equipment and systems         | Yes          | No
MIL-HDBK-338B | Military handbook that provides reliability data and analysis for electronic devices and systems | No           | Yes
Birolini      | Textbook by A. Birolini (Reliability Engineering: Theory and Practice, 8th Edition, Springer Berlin Heidelberg) | Yes | Yes

Failure mode and rate sources available in Paitron

Failure rate and failure mode categories are assigned based on component types, shown next:

Part(s) Reference type Failure rate category Failure mode category
C1, C2 Capacitor Ceramic | LDC (COG, NPO) Ceramic | NPO-COG
R1, R2, R3, R4, R5, R6, R7 Resistor Metal film Metal film
XU1A, XU1B Integrated Circuit | Comparator Bipolar, BIFET | No. of transistors: ≤ 30 Portable and non-stationary use, ground vehicle installation
YA1 Integrated Circuit | Digital | Gate | AND BICMOS | Logic | No. of gates 1-100, No. of transistors 5-500 Portable and non-stationary use, ground vehicle installation

Once the analysis is set up, the automated FMEA/FMEDA is run: Paitron finds the resulting effects for each failure mode and creates a detailed safety report. Here’s the analysis summary per IEC 61508. Each component gets a detailed analysis with determined effects for each failure mode, plus the distribution of the failure rates, which depends on classification as dangerous/safe and on diagnostic coverage.
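To give a flavor of the quantitative metrics involved, here is a generic FMEDA roll-up in Python that computes the safe failure fraction (SFF) and overall diagnostic coverage from per-mode failure rates. The input numbers are illustrative; Paitron's actual report follows the full IEC 61508 definitions:

```python
def fmeda_metrics(modes):
    """Roll up per-failure-mode data into IEC 61508-style metrics.

    modes: list of (rate_fit, is_dangerous, diagnostic_coverage) tuples,
    where rate_fit is the mode's failure rate and diagnostic_coverage is
    the fraction of that mode's dangerous failures detected by diagnostics.
    Returns (safe_failure_fraction, overall_dc, lambda_dangerous_undetected).
    """
    lam_s = sum(r for r, dangerous, dc in modes if not dangerous)
    lam_dd = sum(r * dc for r, dangerous, dc in modes if dangerous)
    lam_du = sum(r * (1.0 - dc) for r, dangerous, dc in modes if dangerous)
    total = lam_s + lam_dd + lam_du
    sff = (lam_s + lam_dd) / total                 # SFF = (λs + λdd) / λtotal
    lam_d = lam_dd + lam_du
    overall_dc = lam_dd / lam_d if lam_d else 1.0  # DC = λdd / (λdd + λdu)
    return sff, overall_dc, lam_du
```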

Summary

FMEDA can be done manually or with automation, and the integration between Siemens and Modelwise makes for a fast functional safety analysis. This automation helps cut error rates and speed up the design cycle. Paitron and HyperLynx AMS are a proven tool combination for use on safety-critical systems.

Read the 16-page white paper online after a brief registration.

Related Blogs


RVA23 Ends Speculation’s Monopoly in RISC-V CPUs

by Jonah McLeod on 03-04-2026 at 8:00 am


RVA23 marks a turning point in how mainstream CPUs are expected to scale performance. By making the RISC-V Vector Extension (RVV) mandatory, it elevates structured, explicit parallelism to the same architectural status as scalar execution. Vectors are no longer optional accelerators bolted onto speculation-heavy cores. They are baseline capabilities that software can rely on.

RVA23 doesn’t force scalar execution to become deterministic. It simply makes determinism viable because the scalar side is no longer responsible for throughput. The vector unit handles the parallel work explicitly, and the scalar core becomes a coordinator that can be simple, predictable, and low‑power without sacrificing performance.

To understand why this shift matters, it helps to recall how thoroughly speculative execution came to dominate high-performance CPU design. It delivered speed, but at increasing cost—in power, complexity, verification burden, and security exposure. RVA23 does not reject speculation. Instead, it restores balance. It acknowledges that predictable, vector-driven parallelism is now a credible, mainstream path for performance growth.

Mandatory vector support fundamentally changes the software performance contract. Compilers, libraries, and applications can now assume RVV exists on every compliant core. Optimization strategy shifts away from “let the CPU guess” toward explicit, structured parallelism. Toolchains must reliably emit vector code. Math and DSP libraries can reduce or eliminate scalar fallbacks. Application developers gain a predictable model for scaling loops and data-parallel workloads.
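The "explicit, structured parallelism" contract is visible in RVV's strip-mining idiom, where software asks for a vector length each pass and the hardware grants one (the vsetvl instruction). The Python sketch below mimics only that control flow; real code would use RVV instructions or compiler autovectorization, and the lane count of 8 is an arbitrary stand-in for a core's vector length:

```python
def vec_add(a, b, vlmax=8):
    """Strip-mined elementwise add, mimicking RVV's vsetvl idiom:
    each iteration requests up to vlmax lanes and receives a granted vl,
    so the same loop runs unchanged on any vector-length hardware."""
    n = len(a)
    out = [0] * n
    i = 0
    while i < n:
        vl = min(vlmax, n - i)  # stand-in for vl = vsetvl(remaining elements)
        # one "vector instruction" covering vl lanes at once:
        out[i:i + vl] = [x + y for x, y in zip(a[i:i + vl], b[i:i + vl])]
        i += vl
    return out
```

The point of the idiom is that the loop never hard-codes the vector width, which is exactly what lets compiled RVV binaries scale across implementations with different lane counts.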

The cultural shift is significant: parallelism becomes something software expresses directly, not something hardware attempts to infer. For hardware designers, the shift is different but equally profound. Vector units are now mandatory, yet the specification preserves microarchitectural freedom.

Implementers can choose lane width, pipeline depth, issue policy, and memory design. What changes is the performance center of gravity. Designers are no longer forced to rely exclusively on deeper speculation—larger branch predictors, wider reorder buffers, and increasingly complex recovery mechanisms—to remain competitive.

Instead, area and power can shift toward vector throughput and memory bandwidth. Simpler in-order cores with strong vector engines become viable for workloads that once demanded complex speculative machinery.

How Speculation Came to Dominate

Speculative execution did not appear overnight. It emerged gradually from techniques that loosened strict sequential execution. In 1967, Robert Tomasulo’s work on the IBM System/360 Model 91 introduced dynamic scheduling and register renaming, allowing instructions to execute out of order without violating program semantics. Around the same time, James Thornton’s scoreboard in the CDC 6600 kept pipelines active in the presence of hazards. These mechanisms did not speculate—but they removed structural barriers that once forced processors to stall. Once out-of-order execution became viable, speculation became irresistible.

In the late 1970s and early 1980s, James E. Smith formalized branch prediction, grounding speculation in probability. Memory ceased to be something processors simply waited on; it became something to anticipate. Data was fetched before it was confirmed to be needed. Caches evolved from locality optimizers into buffers that absorbed the turbulence of speculative execution.

Academia reinforced this direction. Instruction-level parallelism research at Stanford and Berkeley treated speculation as the path forward. John Hennessy framed speculation as a way to increase performance without abandoning sequential programming. David Patterson articulated the “memory wall,” encouraging deeper caching and hierarchical storage.

Industry followed. Intel’s Pentium Pro (P6) crystallized speculative out-of-order execution with deep cache hierarchies into the mainstream CPU template. IBM POWER and AMD Zen reinforced the same model: sustain ever larger volumes of in-flight speculative work by expanding buffering, bandwidth, and memory-level parallelism. Each generation scaled speculation rather than questioning it.

The Growing Costs

Over time, the costs became clearer. In his ISSCC 2014 plenary, Mark Horowitz argued that energy—not transistor density or raw logic speed—had become the primary constraint in computing. Arithmetic consumes only a few picojoules. Cache accesses cost an order of magnitude more. DRAM accesses cost two to three orders more. Data movement, not computation, dominates energy consumption.

Voltage scaling stalled and frequency scaling hit thermal limits. Simply adding cores no longer restored historical performance curves. Meanwhile, last-level caches and register files grew so large that they began consuming energy comparable to, and often exceeding, the cores they served. These hierarchies became the scaffolding required to sustain large volumes of in-flight, uncertain work: speculation optimizes for the appearance of forward progress, and the memory system exists to sustain that appearance and to clean up when predictions fail.

At the DRAM level, Onur Mutlu showed how modern processors stress memory systems through interference, row conflicts, and unpredictable access patterns—many driven not by committed computation, but by speculation that would ultimately be discarded.

Seen in this light, modern CPU memory hierarchies did not evolve independently. They co-evolved with speculative, out-of-order execution, becoming the physical scaffolding required to sustain it. At its core, speculative execution optimizes for illusion—the illusion that a single sequential thread is progressing faster by guessing ahead.

Deterministic execution, by contrast, optimizes for what is known. It treats latency as schedulable rather than something to hide behind ever-increasing bandwidth. Where speculative architectures grow in complexity to compensate for uncertainty, deterministic architectures grow in predictability and sustained throughput.

The Path Not Taken

Speculation was not inevitable. Seymour Cray’s vector machines demonstrated that speculation was never the only path forward. They rejected it entirely, relying instead on predictable memory stride patterns, explicit vector lengths, and deterministic scheduling. Parallelism was exposed directly to the hardware, not inferred through guesswork, and latency was something to plan around rather than hide.

Their memory systems were engineered for stable, high‑throughput access rather than the guess‑and‑recover behavior that later speculative architectures required. In this sense, Cray’s approach aligns more closely with RVV’s structured, length‑agnostic model than with the speculative superscalar lineage that came to dominate general‑purpose CPUs.

Speculation won historically because it preserved sequential programming models and minimized software disruption. But that success created path dependence. Memory hierarchies were optimized for speculative throughput even as power consumption, verification complexity, and architectural opacity escalated.

RVA23 and the Nature of Modern Compute

AI, machine learning, and signal processing workloads are structured and inherently data-parallel. Their access patterns are often knowable rather than probabilistic. These are precisely the domains where explicit parallelism outperforms speculative guessing. By making RVV mandatory, RVA23 guarantees hardware support for such workloads. Structured parallelism moves from optional extension to architectural baseline. This does not eliminate speculation. It eliminates exclusivity.

Deterministic, time-based scheduling architectures, such as those explored at Simplex Micro, can now assume vector capability as a foundation. Rather than compensating for speculative inefficiency, they coordinate compute and memory explicitly. Performance scales through utilization and predictability rather than speculation depth. For vector and matrix workloads, this is less a revolution than a return to a lineage that speculation once displaced.

Structured Parallelism as First-Class Architecture

The significance of RVA23 goes beyond instruction encoding. Compiler infrastructures can assume vector support. Operating systems can schedule with vector resources in mind. Hardware implementations can optimize for vector efficiency without worrying whether the ecosystem will ignore it. For three decades, speculation received consistent architectural investment. Structured parallelism did not.

RVA23 changes that. It does not mandate abandoning speculation. It mandates architectural parity. Designers may deploy both where appropriate, but structured parallelism is no longer a second-class citizen. The false binary—scale through speculation or accept inferior performance—no longer applies.

Less to Speculate On

With RVA23, there is less uncertainty about vector capability, less doubt that deterministic approaches can achieve first-class performance, and less need to rely exclusively on speculation to scale. Today’s workloads are parallel by design, not by heroic compiler extraction from sequential code. For these workloads, speculation’s costs increasingly outweigh its benefits.

RVA23 does not end the era of speculation. It ends its monopoly. And that shift—more than any single technical feature—may be its most important contribution to processor architecture.

Also Read:

Reimagining Compute in the Age of Dispersed Intelligence

Two Open RISC-V Projects Chart Divergent Paths to High Performance

The Foundry Model Is Morphing — Again

The AI PC: A New Category Poised to Reignite the PC Market


Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems

by Kalar Rajendiran on 03-03-2026 at 10:00 am

UCIe bump planning in 3DIC Compiler Platform

The first article in this series examined how feasibility exploration enables architects to evaluate multi-die system configurations while minimizing early design risk. Once architectural decisions are validated, designers must translate conceptual connectivity requirements into physical interconnect infrastructure. Bump and TSV planning represents one of the most critical steps in bridging architectural intent with physical implementation.

Multi-die designs rely heavily on dense, high-performance interconnect structures to enable communication between chiplets, interposers, and package substrates. As multi-die systems continue to scale in complexity and signal density, traditional manual interconnect planning approaches are no longer sufficient. This article explores the methodologies and automation strategies that enable scalable bump and TSV planning. The final article in this series will discuss automated routing solutions that implement high-speed signal connectivity across these interconnect structures.

Why Bump and TSV Planning Is Critical

Bumps and TSVs form the electrical and mechanical backbone of multi-die systems. Microbumps and hybrid bonding pads facilitate horizontal connectivity between adjacent dies, while TSVs enable vertical communication between stacked dies and packaging layers. These interconnect elements allow multi-die architectures to achieve extremely high bandwidth and low latency communication.

As advanced packaging technologies reduce interconnect pitch, modern multi-die systems may include millions of individual connections. Because these interconnect structures influence routing feasibility, signal integrity, and power delivery efficiency, their planning must occur early in the design process. Poor interconnect planning can create routing congestion, degrade performance, and increase implementation complexity.

The Limitations of Manual Planning

Historically, designers relied on spreadsheet-based or graphical planning methods to define bump layouts. While these approaches were adequate for single-die flip-chip designs with limited connectivity, they cannot scale to modern multi-die systems. Manual planning increases the likelihood of connectivity mismatches between dies and introduces challenges when design changes occur during development.

Even minor modifications to bump placement can affect routing feasibility, interposer layout, and die placement. Coordinating such changes across multiple design teams becomes increasingly difficult as system complexity grows, making automation essential for maintaining design consistency and productivity.

Automated Bump Planning Methodologies

Modern multi-die design tools provide automated bump planning capabilities that allow designers to generate, place, and assign interconnect structures efficiently. By integrating bump planning into early design exploration and prototyping stages, designers can optimize multi-die floorplans and improve routing quality. Automated workflows support multiple input formats and allow designers to reuse or adapt existing bump maps across different designs and technologies.

Region-Based Bump Planning

To manage the scale and complexity of bump placement, designers often organize bumps into functional regions associated with specific interfaces or subsystems. Region-based planning allows designers to define placement patterns using parameters such as pitch, spacing, and bump geometry. These regions dynamically adjust as design constraints evolve, ensuring consistent placement and rule compliance throughout the design process.
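As a rough illustration of region-based generation, the sketch below builds bump centers for one region from pitch and row/column parameters, with an optional staggered (offset-row) pattern. The function name, parameters, and the 55-unit pitch in the test are hypothetical, not taken from any specific tool:

```python
def bump_region(origin, cols, rows, pitch, stagger=False):
    """Generate microbump center coordinates for one functional region.

    origin: (x, y) of the region's lower-left bump; pitch: center-to-center
    spacing; stagger: offset every other row by half a pitch, a common
    pattern for increasing effective bump density.
    """
    x0, y0 = origin
    bumps = []
    for r in range(rows):
        offset = pitch / 2 if (stagger and r % 2) else 0.0
        for c in range(cols):
            bumps.append((x0 + c * pitch + offset, y0 + r * pitch))
    return bumps
```

Because the whole region derives from a few parameters, a pitch or geometry change regenerates every bump consistently, which is the property that makes region-based planning robust to the design-constraint churn described above.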

Signal Assignment and Hierarchical Integration

After bump placement is established, designers must assign signals and power connections to individual interconnect structures. Automated assignment algorithms can optimize routability and wirelength across the entire multi-die system, rather than focusing solely on individual dies. Hierarchical planning capabilities further improve productivity by allowing bump structures to propagate between IP blocks and top-level die implementations, enabling efficient reuse and consistency.

Automatic Bump Mirroring and Alignment

Precise alignment between contacting dies is essential for ensuring electrical continuity and manufacturability. Automated mirroring techniques replicate bump layouts across adjacent dies while accounting for orientation and placement changes. This approach significantly reduces alignment errors and ensures that interconnect structures remain synchronized throughout design iterations.
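The core of bump mirroring is a coordinate transform: when two dies are bonded face to face, one die's bump map must be reflected so that counterpart bumps coincide. A minimal sketch, assuming a rectangular die mirrored about its vertical axis (real tools also handle rotations and per-signal checks):

```python
def mirror_bump_map(bumps, die_width):
    """Mirror a named bump map about the vertical axis for the facing die,
    so each bump lands on its counterpart after face-to-face bonding.

    bumps: dict mapping signal name -> (x, y) coordinate on the source die.
    """
    return {name: (die_width - x, y) for name, (x, y) in bumps.items()}
```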

Engineering Change Management (ECO) and Design Rule Checking

Bump planning is inherently iterative and requires robust engineering change management capabilities. Integrated visualization and reporting tools enable designers to track modifications and maintain synchronization across multiple dies and packaging layers. Multi-die-aware design rule checking ensures connectivity correctness, alignment accuracy, and compliance with logical and physical constraints, reducing the risk of costly late-stage errors.

Bump ECO Graphical Visualization and Spreadsheet-like Table

TSV Planning Considerations

TSVs introduce additional design constraints due to their size, mechanical stress impact, and area overhead. Designers must carefully position TSVs to minimize their effect on device placement and timing performance. TSV planning is often performed concurrently with bump planning to align vertical and horizontal connectivity structures and reduce routing complexity.

Synopsys 3DIC Compiler Platform for Building the Interconnect Foundation

Bump and TSV planning for a multi-die design is a critical but tedious, time-consuming, iterative, and error-prone step with multiple challenges. Creating optimal, DRC-correct bump maps and die-to-die connectivity plans for millions of bumps across multiple dies, involving multiple designers, is complex and critical to meeting PPA targets and design schedules. Synopsys’ unified 3DIC exploration-to-signoff platform meets those challenges with powerful bump planning, visualization, and analysis capabilities, increasing design engineer productivity and drastically cutting design time.

UCIe signal color-coded bump plan in 2D and 3D, as created in 3DIC Compiler platform

Learn more by accessing the whitepaper from here.

Establishing the Routing Framework

Effective bump and TSV planning creates the physical connectivity infrastructure required for high-speed signal routing. With interconnect structures defined and aligned across dies, designers can focus on implementing dense, high-performance routing for advanced chiplet interfaces. The final article in this series will explore how automated routing technologies enable scalable implementation of high-speed multi-die interconnects.

Also Read:

How Customized Foundation IP Is Redefining Power Efficiency and Semiconductor ROI

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering

Hardware is the Center of the Universe (Again)


CHERI: Hardware-Enforced Capability Architecture for Systematic Memory Safety

by Daniel Nenni on 03-03-2026 at 8:00 am

CHERI Technology Overview 2026

The rapid escalation of cyberattacks over the past two decades has exposed a fundamental weakness at the core of modern computing systems: the lack of memory safety. Industry data consistently shows that the majority of critical software vulnerabilities stem from memory corruption issues such as buffer overflows, use-after-free errors, and out-of-bounds memory accesses. These weaknesses persist despite decades of improved development practices, static analysis tools, and defensive programming techniques. The economic impact is staggering, with global cybercrime costs estimated in the trillions of dollars annually. Addressing this systemic issue requires more than incremental software fixes—it demands architectural change.

One of the most promising solutions to this challenge is CHERI (Capability Hardware Enhanced RISC Instructions), an open hardware technology developed over more than 15 years by the University of Cambridge and SRI International. CHERI introduces hardware-enforced memory safety through a fundamentally different approach to pointer management. Rather than relying on traditional raw pointers, CHERI replaces them with capabilities: bounded, unforgeable references that include metadata defining permissible memory regions and access rights. These capabilities are enforced directly by the processor, ensuring that out-of-bounds memory access becomes architecturally impossible rather than statistically unlikely.
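A toy software model can convey the idea of a capability as a bounded, permission-carrying reference. This is only a conceptual sketch: real CHERI capabilities are compressed 128-bit values whose integrity tags and bounds checks live in hardware, not in software as modeled here.

```python
class Capability:
    """Toy model of a CHERI capability: a reference plus bounds and permissions.

    Every access is checked against the bounds and permission set; in real
    CHERI hardware, the violation below would raise a processor exception.
    """
    def __init__(self, mem, base, length, perms=("load", "store")):
        self.mem, self.base, self.length = mem, base, length
        self.perms = set(perms)

    def load(self, offset):
        if "load" not in self.perms or not (0 <= offset < self.length):
            raise MemoryError("capability violation")  # hardware trap analogue
        return self.mem[self.base + offset]

    def store(self, offset, value):
        if "store" not in self.perms or not (0 <= offset < self.length):
            raise MemoryError("capability violation")
        self.mem[self.base + offset] = value
```

The key contrast with a raw pointer is that the out-of-bounds `load(4)` on a 4-element capability fails deterministically every time, rather than silently reading an adjacent allocation as a buffer overflow would.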

Traditional mitigation strategies such as stack canaries, address space layout randomization (ASLR), and control-flow integrity provide only probabilistic protection. Skilled attackers frequently bypass these defenses, especially when vulnerabilities are discovered before patches are applied. Moreover, rewriting entire software ecosystems in memory-safe languages such as Rust or managed runtime environments is economically and practically infeasible. Trillions of lines of legacy C and C++ code underpin operating systems, embedded firmware, industrial controllers, and cloud infrastructure. Replacing this code wholesale would require decades of coordinated global effort and introduce unacceptable operational risks.

CHERI addresses this dilemma by enabling hardware-based memory safety while preserving compatibility with existing codebases. Software can often be recompiled using CHERI-enabled compilers, requiring minimal modification to application logic. This low migration barrier is critical for industry adoption. In hybrid execution modes, CHERI and non-CHERI code can coexist, allowing incremental deployment strategies. Developers can prioritize protection of the most security-critical components first, progressively expanding coverage over time.

At the architectural level, CHERI provides fine-grained, deterministic enforcement of memory bounds and permissions. Capabilities cannot be forged or arbitrarily modified by software because their integrity is protected by hardware tags. This ensures systematic, rather than statistical, coverage against memory corruption. Additionally, CHERI supports scalable compartmentalization. Capabilities are associated with specific execution contexts, enabling the principle of least privilege at a granular level. Functions can only access memory explicitly granted to them, preventing lateral movement within an application after an exploit. This significantly reduces the risk of privilege escalation and data exfiltration.

Performance and area overhead are intentionally minimized. Reported implementations demonstrate processor area increases on the order of approximately 4–5%, with similar power consumption and modest performance impact. In some cases, performance can improve after optimization because software-based isolation mechanisms, such as hypervisor context switching or complex trusted execution environments, can be simplified or removed. By shifting security enforcement into hardware, CHERI reduces runtime overhead associated with defensive checks scattered throughout software stacks.

The technology has matured beyond academic prototypes. With substantial government and industry investment, CHERI-enabled processor designs are emerging in commercial and research platforms. Operating systems such as FreeBSD, Linux variants, and embedded real-time operating systems have been ported to CHERI architectures. This ecosystem development is coordinated in part by the CHERI Alliance, which promotes standardization, toolchain support, and collaborative adoption across hardware and software vendors.

Perhaps most importantly, CHERI represents a preventive rather than reactive cybersecurity strategy. Instead of patching vulnerabilities after discovery, it removes entire classes of memory safety flaws at the architectural level. As software complexity continues to grow, especially in cloud computing, mobile devices, automotive systems, and critical infrastructure, the attack surface expands correspondingly. Hardware-enforced memory safety offers a scalable defense mechanism aligned with this growth.

CHERI Blossoms 2026 – https://cheri-alliance.org/events/cheri-blossoms-conference-2026/

Bottom line: CHERI redefines how systems approach memory safety by embedding protection directly into processor architecture. Through capability-based addressing, fine-grained compartmentalization, and compatibility with legacy code, it offers a pragmatic yet transformative path toward reducing one of the most persistent sources of software vulnerability. As cybersecurity threats intensify, architectural solutions like CHERI may become foundational to the next generation of secure computing platforms.

Also Read:

PQShield on Preparing for Q-Day

2026 Outlook with Richard Hegberg of Caspia Technologies

A Six-Minute Journey to Secure Chip Design with Caspia


WEBINAR: Two-Part Series on RF Power Amplifier Design

by Don Dingee on 03-03-2026 at 6:00 am

VNA inspired simulated load pull setup for RF power amplifier design

At lower frequencies with simpler modulation, RF power amplifier (PA) designers could safely concentrate on a few primary metrics – like gain and bandwidth – and rely on relaxed margins to ensure proper operation in a range of conditions. Today’s advanced RF PA design is a different story. mmWave and sub-THz frequencies introduce complex effects. Extremely wide bandwidths stress performance. Higher-order digital modulation tightens margins. Designers often face the challenge of improving one metric while degrading others, suggesting that evaluating metrics simultaneously under authentic conditions is essential.

Keysight launched a two-part masterclass on RF PA design, featuring online webinars highlighting the greater role of complex layout and accurate modeling in Keysight Advanced Design System (ADS), coupled with Keysight RF Circuit Simulation Professional and its programmatically-callable simulator. The first session introduces the concept of power waves and a vector network analysis (VNA)-inspired load-pull analysis via simulation, using a- and b-waves for high-fidelity active characterization. The second session will delve into improving PA efficiency, extending load-pull analysis for multi-performance optimization, and using simulation data to improve artificial neural network (ANN) modeling for more accurate representations.

Evolving from measured to simulated load-pull analysis

At its core, load-pull analysis evaluates PA performance under varying load and source conditions. It traces its roots to scalar measurements using simple input signals and power meters, essentially characterizing a few operating points at a designer’s discretion. Complexity rapidly undermines the value of a scalar approach, with too many interactions and too few data points to provoke and observe anomalous behavior.

An improved approach is vector load-pull analysis, which leverages sophisticated VNAs to enable control, measurement, and derivation of more critical PA parameters in fewer passes. Schematically, vector load-pull analysis looks like this:

While capturing parameters unavailable in scalar testbenches, some VNA measurements remain tedious to set up, lengthy to execute, and can be difficult to repeat. Physical VNA techniques, limited to visibility at the input and output, also don’t capture what’s happening inside the PA at the transistor level under various conditions.

Active load-pull analysis via simulation is a state-of-the-art technique that draws inspiration from the VNA setup, borrowing measurement algorithms pioneered in hardware by researchers for simulations that run orders of magnitude faster in software. A basic setup in Keysight ADS for VNA-inspired load-pull analysis looks like this:

More advantages to a simulated load-pull approach

There are several more advantages to simulated load-pull analysis in a highly accurate environment, such as ADS with RF Circuit Simulation Professional. Incorporating a- and b-waves instantly adds fidelity. Simulation can easily handle many more degrees of freedom, generating complex sweeps and contours. Using Python, designers can set up spiral, circular, and rectangular shapes for impedance sweeps. Simulation can also perform 2D interpolation, reducing the need for dense sweeps.
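
The sweep shapes mentioned above can be sketched in plain Python with NumPy. Note these function names, and the example center and radius, are purely illustrative – they are not part of the ADS API, just a sketch of generating reflection-coefficient targets and converting them to load impedances:

```python
import numpy as np

def circular_sweep(center, radius, n=37):
    """Reflection-coefficient points on a circle in the Gamma plane."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return center + radius * np.exp(1j * theta)

def spiral_sweep(center, r_max, turns=3, n=120):
    """Archimedean spiral outward from a target point, for dense local coverage."""
    t = np.linspace(0, 1, n)
    return center + r_max * t * np.exp(2j * np.pi * turns * t)

def rectangular_sweep(re_range, im_range, n_re=11, n_im=11):
    """Rectangular grid of Gamma points over real/imaginary ranges."""
    re, im = np.meshgrid(np.linspace(*re_range, n_re), np.linspace(*im_range, n_im))
    return (re + 1j * im).ravel()

def gamma_to_z(gamma, z0=50.0):
    """Convert reflection coefficients to load impedances (reference z0 ohms)."""
    return z0 * (1 + gamma) / (1 - gamma)

# Example: a circle of load targets around an assumed region of interest.
points = circular_sweep(center=0.3 + 0.2j, radius=0.15)
impedances = gamma_to_z(points)
```

Each generated point would then be handed to the simulator as a load condition; the shapes only differ in how they tile the Smith chart.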

Other types of analysis can augment load pull. One insightful analysis, which can run separately or together with load pull, is gain compression. Sweeps of input power at back-off values (set by the user) reveal important insights into PA characterization. Designers can also jump back and forth between their preferred data displays – for instance, switching between rectangular and Smith charts, or drilling down to any parameter. Optimization routines in Python that call simulation sequences can quickly explore multivariate contours.
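
A minimal sketch of such an optimization routine, with a stand-in `simulate_pa` function in place of an actual programmatic call into the simulator – the toy gain/efficiency model, the assumed optimum, and the cost weights are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_pa(gamma_load):
    """Stand-in for a simulator call; returns (PAE %, output power dBm).
    A real workflow would invoke the circuit simulator here instead."""
    gamma_opt = 0.35 + 0.25j                  # assumed optimum load, for illustration
    d = abs(gamma_load - gamma_opt)
    pae = 65.0 * np.exp(-8.0 * d**2)          # toy efficiency surface
    pout = 30.0 - 15.0 * d                    # toy output-power roll-off
    return pae, pout

def objective(x):
    """Weighted multi-performance cost trading PAE against output power."""
    pae, pout = simulate_pa(x[0] + 1j * x[1])
    return -(0.7 * pae + 0.3 * pout)

# Derivative-free search over (Re, Im) of the load reflection coefficient.
res = minimize(objective, x0=[0.1, 0.1], method="Nelder-Mead")
best_gamma = res.x[0] + 1j * res.x[1]
```

Swapping the weights, or adding constraints such as a minimum output power, lets the same loop explore the multivariate contours the paragraph describes.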

A design in ADS can grow into a digital twin of an RF PA, enabling designers to study discrepancies and refine models. Load-pull simulation data serves as training data for artificial neural network (ANN) device models, producing highly accurate representations of PA behavior in minutes rather than the weeks of effort normally required, and with more modeling detail.
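
As a hedged illustration of the ANN modeling idea, the sketch below fits a small neural network to synthetic "load-pull" data. The gain surface is invented, and a production flow would train on simulator output with a dedicated modeling stack rather than scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: gain (dB) as a function of load reflection coefficient.
gamma = rng.uniform(-0.6, 0.6, size=(500, 2))                  # (Re, Im) of Gamma_load
gain = 12.0 - 20.0 * np.sum((gamma - [0.2, 0.1])**2, axis=1)   # toy gain surface

# Small fully connected network as the behavioral device model.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(gamma, gain)

# Query the fitted model at an arbitrary load point.
pred = model.predict([[0.2, 0.1]])
```

Once trained, the model evaluates in microseconds, which is what makes ANN device models attractive as fast surrogates for repeated simulation.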

A look ahead to using intrinsic techniques for efficiency

Those last two ideas – deeper analysis for optimization and ANN modeling enhancement, including a look at system-level metrics like EVM – are the primary focus of the second session of this RF PA design series. It will build on the power wave load-pull technique, with more detail on digital twin setup and ideas such as harmonic-matching networks. By registering (link below), you’ll gain access to the first session, “Why Use the Power Wave Load-Pull Technique,” on demand, which will prepare you for the second session, “Why Use Intrinsic Techniques for High-Efficiency PA Design,” coming in April.

One question that came up during the first session is whether viewers can access the demo workspace presenters show. Yes, the ADS workspace is available online in the Keysight EDA Knowledge Center, so users with ADS and RF Circuit Simulation Professional (in licensed or free-trial versions) can follow along at work. There are also downloadable supplementary documents with design guides, application notes on PA design, and more.

Registration for this Keysight webinar series is now open:

RF Power Amplifier Design MasterClass Webinar Series

Also Read:

On the high-speed digital design frontier with Keysight’s Hee-Soo Lee

2026 Outlook with Nilesh Kamdar of Keysight EDA

From Silos to Systems, From Data to Insight: Keysight’s Upcoming Webinar on EDA Data Transformation


Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain

Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain
by Admin on 03-02-2026 at 10:00 am


by Jagadish Nayak

RISC-V adoption continues to accelerate across commercial and government microelectronics programs. Whether open-source or commercially licensed, most RISC-V processor cores are integrated as third-party IP (3PIP), potentially introducing supply chain security challenges that demand structured, design-level assurance.

As systems become more heterogeneous and interconnected, design supply chain security is no longer a documentation exercise, but an engineering challenge. A single weakness in processor IP can cascade into systemic risk. That reality makes scalable, repeatable 3PIP assurance essential, especially for RISC-V cores deployed in mission-critical environments.

From Third-Party IP Risk to Repeatable Assurance

Traditional IP integration workflows often rely on vendor claims, checklist-based reviews, and limited test evidence. While helpful, these approaches rarely provide design-level assurance across all relevant weakness classes. To address this gap, a Common Weakness Enumeration (CWE)-based methodology enables structured, measurable, and portable security validation.

A structured CWE-based methodology replaces ad hoc reviews with measurable validation. Relevant weaknesses are scoped from the MITRE database, translated into security requirements, verified through executable properties and tests, and captured as traceable assurance artifacts.

The outcome is not simply test coverage, but documented security assurance tied directly to recognized weakness definitions.

Scaling Assurance for the RISC-V Ecosystem

RISC-V’s consistent ISA foundation enables reusable security requirement templates, parameterized verification properties, and portable C-based test workloads. Once developed, these artifacts can be applied across multiple RISC-V cores with minimal modification, significantly reducing non-recurring engineering (NRE) effort.

The benefits of templatizing CWE-derived security requirements for RISC-V processors include:

  • Teams can avoid starting from scratch for each integration.
  • Scope inclusion decisions can be reused.
  • Verification properties can be parameterized for specific RTL implementations.
  • C-based tests can be compiled using standard RISC-V toolchains and reused across multiple cores with minimal modification.
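
A loose sketch of the templating idea in Python – the requirement wording, the CWE mapping, and every signal or register-group name below are hypothetical, not Cycuity's actual artifact format. The point is only that one shared template binds to implementation-specific names per core:

```python
from string import Template

# Illustrative CWE-derived requirement template; real artifacts are tool-specific.
REQ_TEMPLATE = Template(
    "Registers in ${reg_group} must not retain secret data when "
    "${debug_signal} asserts."
)

# Hypothetical per-core parameter bindings for two RISC-V implementations.
CORE_PARAMS = {
    "core_a": {"reg_group": "csr_file_a", "debug_signal": "dbg_mode_a"},
    "core_b": {"reg_group": "csr_file_b", "debug_signal": "halt_req_b"},
}

def instantiate(core):
    """Bind one core's implementation-specific names into the shared template."""
    return REQ_TEMPLATE.substitute(CORE_PARAMS[core])

req_a = instantiate("core_a")
```

The same pattern extends to parameterizing verification properties and test harnesses, which is where the NRE reduction described above comes from.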

This portability is particularly powerful for programs integrating multiple RISC-V implementations across product lines or lifecycle revisions.

Demonstrated Use Case: SiFive X280 3PIP Assurance

In a collaboration between Cycuity (an Arteris brand), SiFive, and BAE Systems, the methodology was applied to a commercial RISC-V core integrated into a larger SoC. Of 60 CWEs identified as potentially in scope, 16 have been analyzed using templated security requirements and reusable verification infrastructure spanning information-flow rules, static analysis, portable C-tests, and assertion-based verification.

What the Results Reveal

Of the 16 CWEs analyzed:

  • 12 were confirmed passing under the defined requirements.
  • 3 were flagged as failing by rule definition.
  • 1 was determined to be out of scope after deeper analysis.

Importantly, failing a CWE does not inherently indicate a vulnerability—it highlights divergence from formal CWE definitions and prompts system-level evaluation of mitigation strategy.

For example, evaluation of debug-mode transitions revealed that sensitive registers are not automatically cleared when entering debug mode. While architecturally intentional, this behavior required software mitigation planning at the system level. Similarly, analysis of register reset conditions identified registers not explicitly initialized at reset. Although deemed non-critical in context, the structured analysis ensured no assumptions were left unvalidated.

These findings highlight an essential point: assurance is not simply about finding flaws, but about eliminating uncertainty. Engineers and managers alike gain clearer visibility into implementation behavior, design intent, and mitigation boundaries.

Reducing NRE Through Reusable Assurance Templates

One of the most valuable outcomes of the RISC-V initiative is measurable reduction in assurance effort. Once security requirement templates, property macros, and portable test harnesses are defined, subsequent RISC-V cores can be evaluated with significantly reduced engineering investment. The methodology enables acceleration without sacrificing rigor.

Verification teams can focus effort where differentiation truly exists, such as implementation-specific signals, privilege modes, memory maps, and integration boundaries, rather than recreating foundational security requirements. For government and defense programs under Trusted and Assured Microelectronics (T&AM) objectives, this repeatability directly supports both technical assurance and program schedule constraints.

Strengthening the Design Supply Chain for RISC-V

As microelectronics ecosystems diversify, design supply chains now span open-source repositories, commercial IP vendors, integrators, tool providers, and system-level developers. Supply chain security cannot be enforced solely at procurement but must be embedded within the design verification lifecycle.

CWE-based assurance provides a shared technical language across stakeholders, for instance:

  • IP providers can align their documentation and artifacts to standardized weakness definitions.
  • Integrators can demand traceable evidence.
  • System architects can quantify residual risk and implement deliberate mitigations.

This transparency strengthens collaboration without exposing proprietary RTL or design details unnecessarily.

Looking Ahead: Expanding Beyond RISC-V

While this work focuses on RISC-V, the methodology generalizes to any third-party IP, from processors to accelerators and peripherals. Assurance will never be zero cost, but structured, reusable frameworks transform it from reactive compliance into a scalable engineering discipline. For organizations building on RISC-V and beyond, this shift is foundational to safeguarding modern design supply chains.

As RISC-V deployment continues to grow in high-assurance and mission-critical systems, design teams must move beyond trust-by-assumption. Comprehensive, CWE-based 3PIP verification enables measurable confidence, reduces integration uncertainty, and strengthens the entire microelectronics ecosystem from IP provider to end system.

CONTACT ARTERIS

Jagadish Nayak is a Distinguished Engineer in Security at Arteris (formerly Cycuity). He provides technical expertise and guidance on hardware security verification and the Radix family of tools for security verification. He has an extensive background in hardware design, verification and security analysis, with over 30 years of semiconductor industry experience.

Also Read:

Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs

Arteris Simplifies Design Reuse with Magillem Packaging


Apple’s iPhone 17 Series 5G mmWave Antenna Module Revealed to be Powered by Soitec FD-SOI Substrates

Apple’s iPhone 17 Series 5G mmWave Antenna Module Revealed to be Powered by Soitec FD-SOI Substrates
by Daniel Nenni on 03-02-2026 at 8:00 am

Qualcomm’s QTM565 mmWave Antenna Module

Recent independent teardown and technical analyses have confirmed that the 5G mmWave antenna module powering Apple’s latest iPhone 17 lineup relies on advanced Soitec-based Fully Depleted Silicon-On-Insulator (FD-SOI) substrate technology. The discovery highlights a significant architectural shift in high-frequency RF integration for flagship smartphones.

Industry intelligence firms Yole Group and TechInsights recently conducted detailed teardowns of the Qualcomm QTM565, the mmWave integrated antenna-in-package (AiP) module used in the iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max. According to TechInsights’ study, “Qualcomm HG11-34443-2 (QTM565) FR2 Tx/Rx Front End Die RFIC Process Analysis” by Sharath Poikayil Satheesh, and corroborated by Yole Group’s component analysis, the Qualcomm QTM565 module utilizes GlobalFoundries’ 22FDX RF process. This process is fundamentally built upon advanced FD-SOI substrates supplied by Soitec.

The same RF die has been embedded in the AiP mmWave solution within the iPhone 17 series, highlighting the growing use of FD-SOI substrates in mmWave RFIC design for premium smartphones. This marks one of the most visible commercial validations of FD-SOI for high-volume consumer 5G mmWave applications.

Daniel Nenni, founder of SemiWiki, commented: “The findings highlight a major industry shift toward highly integrated 5G mmWave Systems-on-Chip (SoCs), where phased array element spacing and area constraints are highly critical.” As antenna arrays move to higher frequencies in the FR2 mmWave spectrum, the physical spacing between elements becomes increasingly constrained, demanding exceptional integration density and signal integrity at the silicon level.

TechInsights’ executive summary notes that FD-SOI devices are uniquely engineered to operate effectively into the mmWave band, making monolithic and highly integrated phased-array transmit and receive SoCs feasible for top-tier consumer electronics. Unlike traditional bulk CMOS approaches, FD-SOI leverages a thin buried oxide layer that reduces parasitic capacitance, enhances electrostatic control, and improves RF isolation—critical attributes for mmWave performance.

The FD-SOI Advantage in the iPhone 17

By serving as the foundational substrate for the Qualcomm QTM565 module, FD-SOI technology enables fully integrated 5G mmWave SoCs. For manufacturers like Apple and Qualcomm, this platform delivers several strategic advantages:

Miniaturization and Footprint Optimization

FD-SOI provides significant logic scaling benefits while maintaining strong RF characteristics. Designers can integrate baseband functions, beamforming control logic, power management, and RF front-end components onto a single die. This high level of monolithic integration reduces the Bill of Materials (BOM), minimizes PCB footprint, and lowers interconnect losses between discrete components. In space-constrained smartphones, these savings directly translate into slimmer form factors or additional room for battery capacity and thermal management.

Best-in-Class Power Efficiency

Operating at low voltages, FD-SOI enables dynamic body biasing and precise threshold control, optimizing performance per watt. The inherent low-noise analog devices and excellent device matching support stable beamforming and signal integrity without excessive power draw. For end users, this means sustained 5G mmWave throughput without disproportionately draining battery life—a critical factor as carriers continue expanding high-band spectrum deployments.

Unmatched RF Performance

As demonstrated by silicon mmWave prototypes cited in the TechInsights analysis, FD-SOI provides the device-level precision necessary for high-frequency operation across both sub-6 GHz and FR2 mmWave bands. Improved isolation and reduced variability enhance linearity and gain control in phased-array architectures, directly impacting range, data rate stability, and thermal performance.

Strategic Implications for the Semiconductor Ecosystem

The integration of FD-SOI technology into Apple’s flagship iPhone 17 series underscores the substrate’s expanding role in next-generation RF system design. It also reflects a broader industry trend: convergence of digital logic and high-frequency RF on a single optimized platform.

As 5G evolves and early 6G research accelerates, the demand for compact, power-efficient, and highly integrated mmWave solutions will intensify. FD-SOI’s combination of RF excellence, power efficiency, and scalability positions it as a compelling enabler for future mobile connectivity platforms.

Bottom line: With independent validation from Yole Group and TechInsights, and commercial deployment in one of the world’s highest-volume premium smartphones, Soitec’s FD-SOI substrate technology has secured a visible and strategic foothold in the mmWave era, driving miniaturization, extending battery life, and redefining what is possible in integrated RF design.

CONTACT SOITEC

Also Read:

Podcast EP331: Soitec’s Broad Impact on Quantum Computing and More with Dr. Christophe Maleville

Podcast EP321: An Overview of Soitec’s Worldwide Leadership in Engineered Substrates with Steve Babureck

FD-SOI: A Cyber-Resilient Substrate for Secure Automotive Electronics

 


Another Quantum Topic: Quantum Communication

Another Quantum Topic: Quantum Communication
by Bernard Murphy on 03-02-2026 at 6:00 am

Quantum teleportation

In my recent series on quantum computing (QC), I intentionally overlooked a couple of adjacent topics: quantum communication and quantum sensing. These face some of the same challenges as QC; however, I noticed a recent report on a test quantum network implemented by Cisco and Qunnect, which led me to find more from Cisco on their work in quantum networking.

Early post since I will be at DVCon next week moderating a panel, among other activities.

What is the point of quantum communication?

Quantum communication is based on entanglement: two physically separated qubits whose states are nevertheless coupled, so that if one somehow changes state the other “instantaneously” also changes state. This holds even if the qubits are separated by thousands of kilometers, and it led Einstein to call this behavior “spooky action at a distance”.

This idea prompted some to think entanglement implied faster-than-light communication. Sadly no – physical laws are not violated by this technique. Still, the concept has led quantum experts (and Star Trek enthusiasts) to label methods using this technique as “quantum teleportation”, which I’ll call QT.

The point of QT is security in the communication channel. If a third party attempts to monitor a qubit state at either end, both qubit states immediately collapse and the information is lost. This immediately signals an attempted hack while also destroying the information before it can be revealed – valuable for cryptographic key distribution.

These methods are considered “quantum-safe” unlike “quantum-resistant” methods for protecting encrypted data, which are known to defend (classically) against Shor’s algorithm but not against as-yet unknown advances on Shor. External hacks against entanglement-based QT must (as far as I can tell) hack physics, a very tall order.
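
Entanglement-based schemes differ in their details, but a toy prepare-and-measure (BB84-style) simulation illustrates the same principle described above: an intercept-resend eavesdropper measurably raises the error rate on the sifted key. Everything here is a pedagogical model, not a protocol implementation:

```python
import random

random.seed(1)

def transmit(bit, basis_a, basis_b, eavesdrop):
    """One photon, BB84-style. An intercept-resend eavesdropper who guesses
    the wrong basis randomizes the state she forwards to the receiver."""
    state_bit, state_basis = bit, basis_a
    if eavesdrop:
        basis_e = random.randint(0, 1)
        if basis_e != state_basis:
            state_bit = random.randint(0, 1)   # wrong-basis measurement randomizes
        state_basis = basis_e                  # photon is re-sent in Eve's basis
    if basis_b != state_basis:
        return random.randint(0, 1)            # receiver's wrong basis randomizes too
    return state_bit

def sifted_error_rate(n, eavesdrop):
    """Error rate over bits where sender and receiver picked the same basis."""
    errors = matched = 0
    for _ in range(n):
        bit, ba, bb = random.randint(0, 1), random.randint(0, 1), random.randint(0, 1)
        if ba == bb:
            matched += 1
            errors += transmit(bit, ba, bb, eavesdrop) != bit
    return errors / matched

clean = sifted_error_rate(4000, eavesdrop=False)   # no tap: zero sifted errors
tapped = sifted_error_rate(4000, eavesdrop=True)   # tap: roughly 25% sifted errors
```

The jump from zero errors to roughly a quarter of the sifted bits is what makes the tap detectable before any key material is exposed.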

Cisco work in QT

I got my information from this Cisco article and this blog. Cisco have already developed a research prototype quantum network entanglement chip able to generate a million entangled photons per second, running at room temperature and working with existing photonic infrastructure.

Cisco’s prototype run with Qunnect suggests that this entanglement chip can be used to connect a cryogenic-based system (a superconducting QC for example) on one end to fiber, and on the other end of the fiber back to another QC. Details on how this works are sparse, I’m afraid, but they do claim they were able to connect reliably over more than 17km, a very practical distance for the banks and other financial institutions that cluster around the New York area where this trial was run.

Cisco have a higher aim: quantum networking that could scale up QC capacity without needing to wait for qubit counts to scale up in individual QCs. A quantum-networked topology of QCs could in theory provide almost as much capability (?) as a single large QC. If this works out in practice it could be huge for QC.

Sanity checks

Quantum key distribution (QKD) seems to be the most real part of this story today. China claims a 2000km QKD backbone between Beijing and Shanghai supporting banks. This has been in operation for quite a while.

The idea of connecting multiple QC nodes through a quantum internet still looks experimental. The University of Chicago is active in this area, also see the earlier Cisco reference on their quantum labs.

Interesting – a new possible path towards a large scale quantum computer and truly secure networking.

Also Read:

PQShield on Preparing for Q-Day

Where is Quantum Error Correction Headed Next?

Quantum Computers: Are We There Yet?


Advancing Automotive Memory: Development of an 8nm 128Mb Embedded STT-MRAM with Sub-ppm Reliability

Advancing Automotive Memory: Development of an 8nm 128Mb Embedded STT-MRAM with Sub-ppm Reliability
by Daniel Nenni on 03-01-2026 at 6:00 pm

World First 8nm 128Mb Embedded STT MRAM for Automotive

IEDM 2025 Papers MRAM RRAM

The rapid evolution of automotive technology has intensified the demand for highly reliable, high-performance semiconductor memory solutions. Modern vehicles increasingly rely on ADAS driving features and complex infotainment platforms, all of which require memory that can operate flawlessly under extreme environmental conditions. Among emerging memory technologies, embedded magnetic random access memory (eMRAM) stands out as a compelling candidate due to its non-volatility, high endurance, and fast read/write capabilities. The development of an 8nm 128Mb embedded STT-MRAM specifically tailored for automotive applications represents a significant technological milestone in this field.

One of the primary challenges in automotive memory design is ensuring reliable operation across a wide temperature range, typically from –40°C to 150°C. Unlike consumer electronics, automotive systems must maintain data integrity and functional stability even under prolonged exposure to high temperatures. This stringent requirement places considerable pressure on memory architectures, particularly when scaling down to advanced process nodes such as 8nm. Shrinking the technology node increases memory density and performance but also introduces heightened risks of failure mechanisms, including short defects, read margin degradation, and retention loss.

A major breakthrough in the 8nm 128Mb eMRAM development lies in the aggressive scaling of the memory cell to 0.017 μm². While this scaling enables higher density and improved integration with advanced logic nodes, it also intensifies process complexity. Higher bitcell density increases the probability of short failures due to redeposition and patterning challenges during fabrication. To address this, improvements in integration processes significantly reduced in-line defect counts, resulting in a substantial decrease in median short fail bit counts. Achieving sub-parts-per-million (sub-ppm) levels of short failure demonstrates that high-density scaling can coexist with automotive-grade reliability when supported by meticulous process optimization.

Another critical concern in scaled MRAM technology is maintaining sufficient read margin. As the back-end-of-line (BEOL) thermal budget increases in advanced nodes, thermal migration can degrade the magnetic tunnel junction (MTJ) properties, particularly the tunneling magnetoresistance (TMR), defined as (R_AP − R_P)/R_P. A lower TMR reduces the resistance gap between the parallel (P) and anti-parallel (AP) states, narrowing the sensing window and increasing the risk of read errors. By optimizing the MTJ stack, especially through fine-tuning the free layer composition, the design achieved improved thermal tolerance. In fact, enhanced crystallization of the MgO barrier after thermal treatment led to an increase in TMR, thereby widening the read margin. Combined with patterning improvements that drastically suppressed inter-cell leakage, these advancements enabled ppm-level read failure rates even at elevated temperatures.

Write performance and data retention present another delicate trade-off. Automotive specifications demand both low write error rates (WER) and robust long-term retention, often exceeding 20 years at high temperatures. However, optimizing for easier write switching can compromise thermal stability, and vice versa. To balance this trade-off, pinned layer optimization was employed to tailor asymmetry between P and AP switching characteristics. By carefully adjusting the magnetic stack, engineers identified an optimal asymmetry point that minimized overall bit error rates while preserving retention strength. Furthermore, reducing the temperature dependence of switching current improved write reliability at low temperatures, where higher currents are typically required.
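
The write/retention trade-off can be illustrated with a standard Néel-Arrhenius estimate. The attempt time and the stability-factor values below are illustrative textbook-style numbers, not figures from the paper:

```python
import math

def retention_time_years(delta, tau0_ns=1.0):
    """Neel-Arrhenius estimate: mean time before a thermally activated bit flip.
    delta = E_b / (k_B * T) is the thermal stability factor; tau0 is the
    attempt time (assumed ~1 ns here, a common textbook value)."""
    tau_s = tau0_ns * 1e-9 * math.exp(delta)
    return tau_s / (365.25 * 24 * 3600)

# Lowering delta (which makes write switching easier) collapses retention margin:
strong = retention_time_years(60.0)   # comfortably beyond a 20-year spec
weak = retention_time_years(40.0)     # falls short of 20 years
```

Because retention depends exponentially on delta, even modest reductions in the energy barrier made to ease write switching erode the retention margin rapidly, which is why the pinned-layer asymmetry tuning described above is needed to satisfy both specs at once.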

In addition to pinned layer refinement, enhancements in spin-transfer torque (STT) efficiency further reduced switching current requirements without sacrificing thermal stability. Improved MTJ engineering broadened the switching current window, lowering the voltage necessary to meet WER specifications while significantly improving distribution tail behavior. These refinements resulted in sub-ppm levels of both write error rate and retention bit error rate, effectively eliminating yield loss related to these failure mechanisms.

Finally, comprehensive chip-level validation confirmed full functionality of write and read operations across the entire automotive temperature range. Shmoo plot analyses demonstrated robust voltage and timing margins, with read speeds as fast as 8ns under worst-case conditions. This performance underscores not only reliability but also competitiveness in high-speed embedded applications.

Bottom line: The successful realization of an 8nm 128Mb embedded STT-MRAM for automotive use demonstrates that aggressive scaling and stringent reliability requirements can be achieved simultaneously. Through innovations in integration processing, MTJ stack engineering, and magnetic layer optimization, this technology meets sub-ppm failure targets while delivering high performance across extreme temperatures. Such advancements position eMRAM as a leading memory solution for next-generation automotive electronics, paving the way for safer, smarter, and more connected vehicles.

Also Read:

Memory Matters: Signals from the 2025 NVM Survey

Akeana Partners with Axiomise for Formal Verification of Its Super-Scalar RISC-V Cores

SiFive’s AI’s Next Chapter: RISC-V and Custom Silicon

Ceva IP: Powering the Era of Physical AI