Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems
by Kalar Rajendiran on 03-03-2026 at 10:00 am

UCIe bump planning in 3DIC Compiler Platform

The first article in this series examined how feasibility exploration enables architects to evaluate multi-die system configurations while minimizing early design risk. Once architectural decisions are validated, designers must translate conceptual connectivity requirements into physical interconnect infrastructure. Bump and TSV planning represents one of the most critical steps in bridging architectural intent with physical implementation.

Multi-die designs rely heavily on dense, high-performance interconnect structures to enable communication between chiplets, interposers, and package substrates. As multi-die systems continue to scale in complexity and signal density, traditional manual interconnect planning approaches are no longer sufficient. This article explores the methodologies and automation strategies that enable scalable bump and TSV planning. The final article in this series will discuss automated routing solutions that implement high-speed signal connectivity across these interconnect structures.

Why Bump and TSV Planning Is Critical

Bumps and TSVs form the electrical and mechanical backbone of multi-die systems. Microbumps and hybrid bonding pads facilitate horizontal connectivity between adjacent dies, while TSVs enable vertical communication between stacked dies and packaging layers. These interconnect elements allow multi-die architectures to achieve extremely high bandwidth and low latency communication.

As advanced packaging technologies reduce interconnect pitch, modern multi-die systems may include millions of individual connections. Because these interconnect structures influence routing feasibility, signal integrity, and power delivery efficiency, their planning must occur early in the design process. Poor interconnect planning can create routing congestion, degrade performance, and increase implementation complexity.

The Limitations of Manual Planning

Historically, designers relied on spreadsheet-based or graphical planning methods to define bump layouts. While these approaches were adequate for single-die flip-chip designs with limited connectivity, they cannot scale to modern multi-die systems. Manual planning increases the likelihood of connectivity mismatches between dies and introduces challenges when design changes occur during development.

Even minor modifications to bump placement can affect routing feasibility, interposer layout, and die placement. Coordinating such changes across multiple design teams becomes increasingly difficult as system complexity grows, making automation essential for maintaining design consistency and productivity.

Automated Bump Planning Methodologies

Modern multi-die design tools provide automated bump planning capabilities that allow designers to generate, place, and assign interconnect structures efficiently. By integrating bump planning into early design exploration and prototyping stages, designers can optimize multi-die floorplans and improve routing quality. Automated workflows support multiple input formats and allow designers to reuse or adapt existing bump maps across different designs and technologies.

Region-Based Bump Planning

To manage the scale and complexity of bump placement, designers often organize bumps into functional regions associated with specific interfaces or subsystems. Region-based planning allows designers to define placement patterns using parameters such as pitch, spacing, and bump geometry. These regions dynamically adjust as design constraints evolve, ensuring consistent placement and rule compliance throughout the design process.

Signal Assignment and Hierarchical Integration

After bump placement is established, designers must assign signals and power connections to individual interconnect structures. Automated assignment algorithms can optimize routability and wirelength across the entire multi-die system, rather than focusing solely on individual dies. Hierarchical planning capabilities further improve productivity by allowing bump structures to propagate between IP blocks and top-level die implementations, enabling efficient reuse and consistency.

Automatic Bump Mirroring and Alignment

Precise alignment between contacting dies is essential for ensuring electrical continuity and manufacturability. Automated mirroring techniques replicate bump layouts across adjacent dies while accounting for orientation and placement changes. This approach significantly reduces alignment errors and ensures that interconnect structures remain synchronized throughout design iterations.

Engineering Change Order (ECO) Management and Design Rule Checking

Bump planning is inherently iterative and requires robust engineering change management capabilities. Integrated visualization and reporting tools enable designers to track modifications and maintain synchronization across multiple dies and packaging layers. Multi-die-aware design rule checking ensures connectivity correctness, alignment accuracy, and compliance with logical and physical constraints, reducing the risk of costly late-stage errors.

Bump ECO Graphical Visualization and Spreadsheet-like Table

TSV Planning Considerations

TSVs introduce additional design constraints due to their size, mechanical stress impact, and area overhead. Designers must carefully position TSVs to minimize their effect on device placement and timing performance. TSV planning is often performed concurrently with bump planning to align vertical and horizontal connectivity structures and reduce routing complexity.

Synopsys 3DIC Compiler Platform for Building the Interconnect Foundation

Bump and TSV planning for a multi-die design is a critical but tedious, time-consuming, iterative, and error-prone step that faces multiple challenges. Creating optimal, DRC-correct bump maps and die-to-die connectivity plans for millions of bumps across multiple dies, with multiple designers involved, is complex and critical to meeting PPA targets and design schedules. Synopsys’ unified 3DIC exploration-to-signoff platform meets these challenges with powerful bump planning, visualization, and analysis capabilities, increasing design engineer productivity and drastically cutting design time.

UCIe signal color-coded bump plan in 2D and 3D, as created in 3DIC Compiler platform

Learn more by accessing the whitepaper here.

Establishing the Routing Framework

Effective bump and TSV planning creates the physical connectivity infrastructure required for high-speed signal routing. With interconnect structures defined and aligned across dies, designers can focus on implementing dense, high-performance routing for advanced chiplet interfaces. The final article in this series will explore how automated routing technologies enable scalable implementation of high-speed multi-die interconnects.

Also Read:

How Customized Foundation IP Is Redefining Power Efficiency and Semiconductor ROI

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering

Hardware is the Center of the Universe (Again)


CHERI: Hardware-Enforced Capability Architecture for Systematic Memory Safety
by Daniel Nenni on 03-03-2026 at 8:00 am

CHERI Technology Overview 2026

The rapid escalation of cyberattacks over the past two decades has exposed a fundamental weakness at the core of modern computing systems: the lack of memory safety. Industry data consistently shows that the majority of critical software vulnerabilities stem from memory corruption issues such as buffer overflows, use-after-free errors, and out-of-bounds memory accesses. These weaknesses persist despite decades of improved development practices, static analysis tools, and defensive programming techniques. The economic impact is staggering, with global cybercrime costs estimated in the trillions of dollars annually. Addressing this systemic issue requires more than incremental software fixes—it demands architectural change.

One of the most promising solutions to this challenge is CHERI (Capability Hardware Enhanced RISC Instructions), an open hardware technology developed over more than 15 years by the University of Cambridge and SRI International. CHERI introduces hardware-enforced memory safety through a fundamentally different approach to pointer management. Rather than relying on traditional raw pointers, CHERI replaces them with capabilities: bounded, unforgeable references that include metadata defining permissible memory regions and access rights. These capabilities are enforced directly by the processor, ensuring that out-of-bounds memory access becomes architecturally impossible rather than statistically unlikely.

Traditional mitigation strategies such as stack canaries, address space layout randomization (ASLR), and control-flow integrity provide only probabilistic protection. Skilled attackers frequently bypass these defenses, especially when vulnerabilities are discovered before patches are applied. Moreover, rewriting entire software ecosystems in memory-safe languages such as Rust or managed runtime environments is economically and practically infeasible. Trillions of lines of legacy C and C++ code underpin operating systems, embedded firmware, industrial controllers, and cloud infrastructure. Replacing this code wholesale would require decades of coordinated global effort and introduce unacceptable operational risks.

CHERI addresses this dilemma by enabling hardware-based memory safety while preserving compatibility with existing codebases. Software can often be recompiled using CHERI-enabled compilers, requiring minimal modification to application logic. This low migration barrier is critical for industry adoption. In hybrid execution modes, CHERI and non-CHERI code can coexist, allowing incremental deployment strategies. Developers can prioritize protection of the most security-critical components first, progressively expanding coverage over time.

At the architectural level, CHERI provides fine-grained, deterministic enforcement of memory bounds and permissions. Capabilities cannot be forged or arbitrarily modified by software because their integrity is protected by hardware tags. This ensures systematic, rather than statistical, coverage against memory corruption. Additionally, CHERI supports scalable compartmentalization. Capabilities are associated with specific execution contexts, enabling the principle of least privilege at a granular level. Functions can only access memory explicitly granted to them, preventing lateral movement within an application after an exploit. This significantly reduces the risk of privilege escalation and data exfiltration.

Performance and area overhead are intentionally minimized. Reported implementations demonstrate processor area increases on the order of 4–5%, with similar power consumption and modest performance impact. In some cases, performance can improve after optimization because software-based isolation mechanisms, such as hypervisor context switching or complex trusted execution environments, can be simplified or removed. By shifting security enforcement into hardware, CHERI reduces runtime overhead associated with defensive checks scattered throughout software stacks.

The technology has matured beyond academic prototypes. With substantial government and industry investment, CHERI-enabled processor designs are emerging in commercial and research platforms. Operating systems such as FreeBSD, Linux variants, and embedded real-time operating systems have been ported to CHERI architectures. This ecosystem development is coordinated in part by the CHERI Alliance, which promotes standardization, toolchain support, and collaborative adoption across hardware and software vendors.

Perhaps most importantly, CHERI represents a preventive rather than reactive cybersecurity strategy. Instead of patching vulnerabilities after discovery, it removes entire classes of memory safety flaws at the architectural level. As software complexity continues to grow, especially in cloud computing, mobile devices, automotive systems, and critical infrastructure, the attack surface expands correspondingly. Hardware-enforced memory safety offers a scalable defense mechanism aligned with this growth.

CHERI Blossoms 2026 – https://cheri-alliance.org/events/cheri-blossoms-conference-2026/

Bottom line: CHERI redefines how systems approach memory safety by embedding protection directly into processor architecture. Through capability-based addressing, fine-grained compartmentalization, and compatibility with legacy code, it offers a pragmatic yet transformative path toward reducing one of the most persistent sources of software vulnerability. As cybersecurity threats intensify, architectural solutions like CHERI may become foundational to the next generation of secure computing platforms.

Also Read:

PQShield on Preparing for Q-Day

2026 Outlook with Richard Hegberg of Caspia Technologies

A Six-Minute Journey to Secure Chip Design with Caspia


WEBINAR: Two-Part Series on RF Power Amplifier Design
by Don Dingee on 03-03-2026 at 6:00 am

VNA inspired simulated load pull setup for RF power amplifier design

At lower frequencies with simpler modulation, RF power amplifier (PA) designers could safely concentrate on a few primary metrics – like gain and bandwidth – and rely on relaxed margins to ensure proper operation in a range of conditions. Today’s advanced RF PA design is a different story. mmWave and sub-THz frequencies introduce complex effects. Extremely wide bandwidths stress performance. Higher-order digital modulation tightens margins. Designers often face the challenge of improving one metric while degrading others, making it essential to evaluate metrics simultaneously under authentic conditions.

Keysight launched a two-part masterclass on RF PA design, featuring online webinars highlighting the greater role of complex layout and accurate modeling in Keysight Advanced Design System (ADS), coupled with Keysight RF Circuit Simulation Professional and its programmatically callable simulator. The first session introduces the concept of power waves and a vector network analysis (VNA)-inspired load-pull analysis via simulation, using a- and b-waves for high-fidelity active characterization. The second session will delve into improving PA efficiency, extending load-pull analysis for multi-performance optimization, and using simulation data to improve artificial neural network (ANN) modeling for more accurate representations.

Evolving from measured to simulated load-pull analysis

At its core, load-pull analysis evaluates PA performance under varying load and source conditions. It traces its roots to scalar measurements using simple input signals and power meters, essentially characterizing a few operating points at a designer’s discretion. Complexity rapidly undermines the value of a scalar approach, with too many interactions and too few data points to provoke and observe anomalous behavior.

An improved approach is vector load-pull analysis, which leverages sophisticated VNAs to enable control, measurement, and derivation of more critical PA parameters in fewer passes. Schematically, vector load-pull analysis looks like this:

While vector techniques capture parameters unavailable in scalar testbenches, some VNA measurements remain tedious to set up, lengthy to execute, and difficult to repeat. Physical VNA techniques, limited to visibility at the input and output, also don’t capture what’s happening inside the PA at the transistor level under various conditions.

Active load-pull analysis via simulation is a state-of-the-art technique that draws inspiration from the VNA setup, borrowing measurement algorithms pioneered in hardware by researchers for simulations that run orders of magnitude faster in software. A basic setup in Keysight ADS for VNA-inspired load-pull analysis looks like this:

More advantages to a simulated load-pull approach

There are several more advantages to simulated load-pull analysis in a highly accurate environment, such as ADS with RF Circuit Simulation Professional. Incorporating a- and b-waves instantly adds fidelity. Simulation can easily handle many more degrees of freedom, generating complex sweeps and contours. Using Python, designers can set up spiral, circular, and rectangular shapes for impedance sweeps. Simulation can also perform 2D interpolation, reducing the need for dense sweeps.

Other types of analysis can augment load pull. One insightful analysis, which can run separately or together with load pull, is gain compression. Sweeps of input power at back-off values (set by the user) reveal important insights into PA characterization. Designers can also jump back and forth between their preferred data displays – for instance, switching between rectangular and Smith charts, or drilling down to any parameter. Optimization routines in Python that call simulation sequences can quickly explore multivariate contours.

A design in ADS can grow into a digital twin of an RF PA, enabling designers to study discrepancies and refine models. Load-pull simulation data serves as training data for artificial neural network (ANN) device models, producing highly accurate representations of PA behavior in minutes rather than the weeks of effort normally required, with more modeling detail.

A look ahead to using intrinsic techniques for efficiency

Those last two ideas, deeper analysis for optimization and ANN modeling enhancement (including a look at system-level metrics like EVM), are the primary focus of the second session of this RF PA design series. It will build on the power wave load-pull technique, with more detail on digital twin setup and ideas such as harmonic-matching networks. By registering (link below), you’ll gain access to the first session on Why Use the Power Wave Load-Pull Technique on demand, which will prepare you for the second session on Why Use Intrinsic Techniques for High-Efficiency PA Design coming in April.

One question that came up during the first session is whether viewers can access the demo workspace presenters show. Yes, the ADS workspace is available online in the Keysight EDA Knowledge Center, so users with ADS and RF Circuit Simulation Professional (in licensed or free-trial versions) can follow along at work. There are also downloadable supplementary documents with design guides, application notes on PA design, and more.

Registration for this Keysight webinar series is now open:

RF Power Amplifier Design MasterClass Webinar Series

Also Read:

On the high-speed digital design frontier with Keysight’s Hee-Soo Lee

2026 Outlook with Nilesh Kamdar of Keysight EDA

From Silos to Systems, From Data to Insight: Keysight’s Upcoming Webinar on EDA Data Transformation


Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain
by Admin on 03-02-2026 at 10:00 am

RISC-V 3PIP CWE Workflow

by Jagadish Nayak

RISC-V adoption continues to accelerate across commercial and government microelectronics programs. Whether open-source or commercially licensed, most RISC-V processor cores are integrated as third-party IP (3PIP), potentially introducing supply chain security challenges that demand structured, design-level assurance.

As systems become more heterogeneous and interconnected, design supply chain security is no longer a documentation exercise, but an engineering challenge. A single weakness in processor IP can cascade into systemic risk. That reality makes scalable, repeatable 3PIP assurance essential, especially for RISC-V cores deployed in mission-critical environments.

From Third-Party IP Risk to Repeatable Assurance

Traditional IP integration workflows often rely on vendor claims, checklist-based reviews, and limited test evidence. While helpful, these approaches rarely provide design-level assurance across all relevant weakness classes. To address this gap, a Common Weakness Enumeration (CWE)-based methodology enables structured, measurable, and portable security validation.

A structured CWE-based methodology replaces ad hoc reviews with measurable validation. Relevant weaknesses are scoped from the MITRE database, translated into security requirements, verified through executable properties and tests, and captured as traceable assurance artifacts.

The outcome is not simply test coverage, but documented security assurance tied directly to recognized weakness definitions.

Scaling Assurance for the RISC-V Ecosystem

RISC-V’s consistent ISA foundation enables reusable security requirement templates, parameterized verification properties, and portable C-based test workloads. Once developed, these artifacts can be applied across multiple RISC-V cores with minimal modification, significantly reducing non-recurring engineering (NRE) effort.

The benefits of templatizing CWE-derived security requirements for RISC-V processors include:

  • Teams can avoid starting from scratch for each integration.
  • Scope inclusion decisions can be repurposed.
  • Verification properties can be parameterized for specific RTL implementations.
  • C-based tests can be compiled using standard RISC-V toolchains and reused across multiple cores with minimal modification.

This portability is particularly powerful for programs integrating multiple RISC-V implementations across product lines or lifecycle revisions.

Demonstrated Use Case: SiFive X280 3PIP Assurance

In a collaboration between Cycuity (an Arteris brand), SiFive, and BAE Systems, the methodology was applied to a commercial RISC-V core integrated into a larger SoC. Of 60 CWEs identified as potentially in scope, 16 have been analyzed using templated security requirements and reusable verification infrastructure spanning information-flow rules, static analysis, portable C-tests, and assertion-based verification.

What the Results Reveal

Of the 16 CWEs analyzed:

  • 12 were confirmed passing under the defined requirements.
  • 3 were flagged as failing by rule definition.
  • 1 was determined to be out of scope after deeper analysis.

Importantly, failing a CWE does not inherently indicate a vulnerability—it highlights divergence from formal CWE definitions and prompts system-level evaluation of mitigation strategy.

For example, evaluation of debug-mode transitions revealed that sensitive registers are not automatically cleared when entering debug mode. While architecturally intentional, this behavior required software mitigation planning at the system level. Similarly, analysis of register reset conditions identified registers not explicitly initialized at reset. Although deemed non-critical in context, the structured analysis ensured no assumptions were left unvalidated.

These findings highlight an essential point: assurance is not simply about finding flaws; it is about eliminating uncertainty. Engineers and managers alike gain clearer visibility into implementation behavior, design intent, and mitigation boundaries.

Reducing NRE Through Reusable Assurance Templates

One of the most valuable outcomes of the RISC-V initiative is measurable reduction in assurance effort. Once security requirement templates, property macros, and portable test harnesses are defined, subsequent RISC-V cores can be evaluated with significantly reduced engineering investment. The methodology enables acceleration without sacrificing rigor.

Verification teams can focus effort where differentiation truly exists, such as implementation-specific signals, privilege modes, memory maps, and integration boundaries, rather than recreating foundational security requirements. For government and defense programs under Trusted and Assured Microelectronics (T&AM) objectives, this repeatability directly supports both technical assurance and program schedule constraints.

Strengthening the Design Supply Chain for RISC-V

As microelectronics ecosystems diversify, design supply chains now span open-source repositories, commercial IP vendors, integrators, tool providers, and system-level developers. Supply chain security cannot be enforced solely at procurement but must be embedded within the design verification lifecycle.

CWE-based assurance provides a shared technical language across stakeholders. For instance:

  • IP providers can align their documentation and artifacts to standardized weakness definitions.
  • Integrators can demand traceable evidence.
  • System architects can quantify residual risk and implement deliberate mitigations.

This transparency strengthens collaboration without exposing proprietary RTL or design details unnecessarily.

Looking Ahead: Expanding Beyond RISC-V

While this work focuses on RISC-V, the methodology generalizes to any third-party IP, from processors to accelerators and peripherals. Assurance will never be zero cost, but structured, reusable frameworks transform it from reactive compliance into a scalable engineering discipline. For organizations building on RISC-V and beyond, this shift is foundational to safeguarding modern design supply chains.

As RISC-V deployment continues to grow in high-assurance and mission-critical systems, design teams must move beyond trust-by-assumption. Comprehensive, CWE-based 3PIP verification enables measurable confidence, reduces integration uncertainty, and strengthens the entire microelectronics ecosystem from IP provider to end system.

CONTACT ARTERIS

Jagadish Nayak is a Distinguished Engineer in Security at Arteris (formerly Cycuity). He provides technical expertise and guidance on hardware security verification and the Radix family of security verification tools. He has an extensive background in hardware design, verification, and security analysis, with over 30 years of semiconductor industry experience.

Also Read:

Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs

Arteris Simplifies Design Reuse with Magillem Packaging


Apple’s iPhone 17 Series 5G mmWave Antenna Module Revealed to be Powered by Soitec FD-SOI Substrates
by Daniel Nenni on 03-02-2026 at 8:00 am

Qualcomm’s QTM565 mmWave Antenna Module

Recent independent teardown and technical analyses have confirmed that the 5G mmWave antenna module powering Apple’s latest iPhone 17 lineup relies on advanced Fully Depleted Silicon-On-Insulator (FD-SOI) substrate technology supplied by Soitec. The discovery highlights a significant architectural shift in high-frequency RF integration for flagship smartphones.

Industry intelligence firms Yole Group and TechInsights recently conducted detailed teardowns of the Qualcomm QTM565, the mmWave integrated antenna-in-package (AiP) module used in the iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max. According to TechInsights’ study, Qualcomm HG11-34443-2 (QTM565) FR2 Tx/Rx Front End Die RFIC Process Analysis by Sharath Poikayil Satheesh, and corroborated by Yole Group’s component analysis, the Qualcomm QTM565 module utilizes GlobalFoundries’ 22FDX RF process. This process is fundamentally built upon advanced FD-SOI substrates supplied by Soitec.

The same RF die has been embedded in the AiP mmWave solution within the iPhone 17 series, highlighting the growing use of FD-SOI substrates in mmWave RFIC design for premium smartphones. This marks one of the most visible commercial validations of FD-SOI for high-volume consumer 5G mmWave applications.

Daniel Nenni, founder of SemiWiki, commented: “The findings highlight a major industry shift toward highly integrated 5G mmWave Systems-on-Chip (SoCs), where phased array element spacing and area constraints are highly critical.” As antenna arrays move to higher frequencies in the FR2 mmWave spectrum, the physical spacing between elements becomes increasingly constrained, demanding exceptional integration density and signal integrity at the silicon level.

TechInsights’ executive summary notes that FD-SOI devices are uniquely engineered to operate effectively into the mmWave band, making monolithic and highly integrated phased-array transmit and receive SoCs feasible for top-tier consumer electronics. Unlike traditional bulk CMOS approaches, FD-SOI leverages a thin buried oxide layer that reduces parasitic capacitance, enhances electrostatic control, and improves RF isolation—critical attributes for mmWave performance.

The FD-SOI Advantage in the iPhone 17

By serving as the foundational substrate for the Qualcomm QTM565 module, FD-SOI technology enables fully integrated 5G mmWave SoCs. For manufacturers like Apple and Qualcomm, this platform delivers several strategic advantages:

Miniaturization and Footprint Optimization

FD-SOI provides significant logic scaling benefits while maintaining strong RF characteristics. Designers can integrate baseband functions, beamforming control logic, power management, and RF front-end components onto a single die. This high level of monolithic integration reduces the Bill of Materials (BOM), minimizes PCB footprint, and lowers interconnect losses between discrete components. In space-constrained smartphones, these savings directly translate into slimmer form factors or additional room for battery capacity and thermal management.

Best-in-Class Power Efficiency

Operating at low voltages, FD-SOI enables dynamic body biasing and precise threshold control, optimizing performance per watt. The inherent low-noise analog devices and excellent device matching support stable beamforming and signal integrity without excessive power draw. For end users, this means sustained 5G mmWave throughput without disproportionately draining battery life—a critical factor as carriers continue expanding high-band spectrum deployments.

Unmatched RF Performance

As demonstrated by silicon mmWave prototypes cited in the TechInsights analysis, FD-SOI provides the device-level precision necessary for high-frequency operation across both sub-6 GHz and FR2 mmWave bands. Improved isolation and reduced variability enhance linearity and gain control in phased-array architectures, directly impacting range, data rate stability, and thermal performance.

Strategic Implications for the Semiconductor Ecosystem

The integration of FD-SOI technology into Apple’s flagship iPhone 17 series underscores the substrate’s expanding role in next-generation RF system design. It also reflects a broader industry trend: convergence of digital logic and high-frequency RF on a single optimized platform.

As 5G evolves and early 6G research accelerates, the demand for compact, power-efficient, and highly integrated mmWave solutions will intensify. FD-SOI’s combination of RF excellence, power efficiency, and scalability positions it as a compelling enabler for future mobile connectivity platforms.

Bottom line: With independent validation from Yole Group and TechInsights, and commercial deployment in one of the world’s highest-volume premium smartphones, Soitec’s FD-SOI substrate technology has secured a visible and strategic foothold in the mmWave era, driving miniaturization, extending battery life, and redefining what is possible in integrated RF design.

CONTACT SOITEC

Also Read:

Podcast EP331: Soitec’s Broad Impact on Quantum Computing and More with Dr. Christophe Maleville

Podcast EP321: An Overview of Soitec’s Worldwide Leadership in Engineered Substrates with Steve Babureck

FD-SOI: A Cyber-Resilient Substrate for Secure Automotive Electronics

 


Another Quantum Topic: Quantum Communication
by Bernard Murphy on 03-02-2026 at 6:00 am

Quantum teleportation

In my recent series on quantum computing (QC), I intentionally overlooked a couple of adjacent topics: quantum communication and quantum sensing. These face some of the same challenges as QC. However, I noticed a recent report on a test quantum network implemented by Cisco and Qunnect, which led me to find out more about Cisco’s work in quantum networking.

Early post since I will be at DVCon next week moderating a panel, among other activities.

What is the point of quantum communication?

Quantum communication is based on entanglement: two physically separated qubits whose states are nevertheless coupled, so that if one somehow changes state, the other “instantaneously” changes state as well. This holds even if the qubits are separated by thousands of kilometers, and it led Einstein to call this behavior “spooky action at a distance”.

This idea prompted some to think entanglement implied faster-than-light communication. Sadly no – physical laws are not violated by this technique. Still, the concept has led quantum experts (and Star Trek enthusiasts) to label methods using this technique as “quantum teleportation”, which I’ll call QT.

The point of QT is security in the communication channel. If a third party attempts to monitor a qubit state at either end, both qubit states immediately collapse and the information is lost. This immediately signals an attempted hack while also destroying the information before it can be revealed, which is valuable for cryptographic key distribution.

These methods are considered “quantum-safe” unlike “quantum-resistant” methods for protecting encrypted data, which are known to defend (classically) against Shor’s algorithm but not against as-yet unknown advances on Shor. External hacks against entanglement-based QT must (as far as I can tell) hack physics, a very tall order.

Cisco work in QT

I got my information from this Cisco article and this blog. Cisco have already developed a research prototype quantum network entanglement chip able to generate a million entangled photons per second, running at room temperature and working with existing photonic infrastructure.

Cisco’s prototype run with Qunnect suggests that this entanglement chip can be used to connect a cryogenic-based system (a superconducting QC, for example) on one end to fiber, and on the other end of the fiber back to another QC. Details on how this works are sparse, I’m afraid, but they do claim they were able to connect reliably over more than 17km, a very practical distance for the banks and other financial institutions that cluster around the New York area where this trial was run.

Cisco have a higher aim: quantum networking that could scale up QC capacity without needing to wait for qubit counts to scale up in individual QCs. A quantum network connecting a topology of QCs could in theory provide almost as much capability as a single large QC. If this works out in practice, it could be huge for QC.

Sanity checks

Quantum key distribution (QKD) seems to be the most real part of this story today. China claims a 2000km QKD backbone between Beijing and Shanghai supporting banks. This has been in operation for quite a while.

The idea of connecting multiple QC nodes through a quantum internet still looks experimental. The University of Chicago is active in this area; also see the earlier Cisco reference on their quantum labs.

Interesting – a new possible path towards a large-scale quantum computer and truly secure networking.

Also Read:

PQShield on Preparing for Q-Day

Where is Quantum Error Correction Headed Next?

Quantum Computers: Are We There Yet?


Advancing Automotive Memory: Development of an 8nm 128Mb Embedded STT-MRAM with Sub-ppm Reliability
by Daniel Nenni on 03-01-2026 at 6:00 pm

World First 8nm 128Mb Embedded STT MRAM for Automotive

The rapid evolution of automotive technology has intensified the demand for highly reliable, high-performance semiconductor memory solutions. Modern vehicles increasingly rely on ADAS driving features and complex infotainment platforms, all of which require memory that can operate flawlessly under extreme environmental conditions. Among emerging memory technologies, embedded magnetic random access memory (eMRAM) stands out as a compelling candidate due to its non-volatility, high endurance, and fast read/write capabilities. The development of an 8nm 128Mb embedded STT-MRAM specifically tailored for automotive applications represents a significant technological milestone in this field.

One of the primary challenges in automotive memory design is ensuring reliable operation across a wide temperature range, typically from –40°C to 150°C. Unlike consumer electronics, automotive systems must maintain data integrity and functional stability even under prolonged exposure to high temperatures. This stringent requirement places considerable pressure on memory architectures, particularly when scaling down to advanced process nodes such as 8nm. Shrinking the technology node increases memory density and performance but also introduces heightened risks of failure mechanisms, including short defects, read margin degradation, and retention loss.

A major breakthrough in the 8nm 128Mb eMRAM development lies in the aggressive scaling of the memory cell to 0.017 μm². While this scaling enables higher density and improved integration with advanced logic nodes, it also intensifies process complexity. Higher bitcell density increases the probability of short failures due to redeposition and patterning challenges during fabrication. To address this, improvements in integration processes significantly reduced in-line defect counts, resulting in a substantial decrease in median short fail bit counts. Achieving sub-parts-per-million (sub-ppm) levels of short failure demonstrates that high-density scaling can coexist with automotive-grade reliability when supported by meticulous process optimization.

Another critical concern in scaled MRAM technology is maintaining sufficient read margin. As the back-end-of-line (BEOL) thermal budget increases in advanced nodes, thermal migration can degrade the magnetic tunnel junction (MTJ) properties, particularly the tunneling magnetoresistance (TMR). A lower TMR reduces the resistance gap between parallel (P) and anti-parallel (AP) states, narrowing the sensing window and increasing the risk of read errors. By optimizing the MTJ stack, especially through fine-tuning the free layer composition, the design achieved improved thermal tolerance. In fact, enhanced crystallization of the MgO barrier after thermal treatment led to an increase in TMR, thereby widening the read margin. Combined with patterning improvements that drastically suppressed inter-cell leakage, these advancements enabled ppm-level read failure rates even at elevated temperatures.

Write performance and data retention present another delicate trade-off. Automotive specifications demand both low write error rates (WER) and robust long-term retention, often exceeding 20 years at high temperatures. However, optimizing for easier write switching can compromise thermal stability, and vice versa. To balance this trade-off, pinned layer optimization was employed to tailor asymmetry between P and AP switching characteristics. By carefully adjusting the magnetic stack, engineers identified an optimal asymmetry point that minimized overall bit error rates while preserving retention strength. Furthermore, reducing the temperature dependence of switching current improved write reliability at low temperatures, where higher currents are typically required.

In addition to pinned layer refinement, enhancements in spin-transfer torque (STT) efficiency further reduced switching current requirements without sacrificing thermal stability. Improved MTJ engineering broadened the switching current window, lowering the voltage necessary to meet WER specifications while significantly improving distribution tail behavior. These refinements resulted in sub-ppm levels of both write error rate and retention bit error rate, effectively eliminating yield loss related to these failure mechanisms.

Finally, comprehensive chip-level validation confirmed full functionality of write and read operations across the entire automotive temperature range. Shmoo plot analyses demonstrated robust voltage and timing margins, with read speeds as fast as 8ns under worst-case conditions. This performance underscores not only reliability but also competitiveness in high-speed embedded applications.

Bottom line: The successful realization of an 8nm 128Mb embedded STT-MRAM for automotive use demonstrates that aggressive scaling and stringent reliability requirements can be achieved simultaneously. Through innovations in integration processing, MTJ stack engineering, and magnetic layer optimization, this technology meets sub-ppm failure targets while delivering high performance across extreme temperatures. Such advancements position eMRAM as a leading memory solution for next-generation automotive electronics, paving the way for safer, smarter, and more connected vehicles.

Also Read:

Memory Matters: Signals from the 2025 NVM Survey

Akeana Partners with Axiomise for Formal Verification of Its Super-Scalar RISC-V Cores

SiFive’s AI’s Next Chapter: RISC-V and Custom Silicon

Ceva IP: Powering the Era of Physical AI


Podcast EP333: A Look at the Broad, Worldwide Impact SEMI Has on the Semiconductor Industry with Ajit Manocha
by Daniel Nenni on 02-27-2026 at 10:00 am

Daniel is joined by Ajit Manocha, president and CEO of SEMI, the global industry association serving the semiconductor and electronics manufacturing and design supply chain. Throughout his career, Manocha has been a champion of industry collaboration as a critical means of advancing technology for societal and economic prosperity. He began his career at AT&T Bell Laboratories as a research scientist and was granted more than a dozen patents related to semiconductor manufacturing processes that served as the foundation for modern microelectronics manufacturing.

Ajit discusses his AT&T Bell Laboratories roots and the focus on “connect, collaborate, innovate” that the organization instilled. He explains that he found this same focus at SEMI, which drew him closer to the organization to become its president and CEO. Dan explores the substantial impact SEMI has on the semiconductor industry. Ajit describes broad coalitions between SEMI members, governments and academia to address key issues such as talent pool growth, energy reduction and reduction of harmful compounds such as PFAS. The collaboration with the United Nations and the EU is also described.

Dan explores future efforts of SEMI with Ajit that include AI data protection and cybersecurity.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Memory Matters: Signals from the 2025 NVM Survey
by Daniel Nenni on 02-27-2026 at 6:00 am

when do you expect to choose

Non-volatile memory choices are becoming more complex as SoC designs push into advanced nodes and face new requirements driven by AI, new sensor technologies, and stringent quality standards.

The second annual 2025 NVM Survey, completed in December, captures a market that still hangs on established technologies but is increasingly testing alternatives in response to these new design and production constraints.

More than 80% of respondents say they use or evaluate embedded non-volatile memory technologies. 20% are looking for an NVM now, and just under 30% expect to choose NVM IP within the coming year. Taken together, this points to a market that is both experienced and active. A meaningful proportion of near-term decisions are yet to be made, leaving a lot to play for among the competing technologies.

Fig. When do respondents plan to select an NVM

Embedded flash continues to dominate in terms of technology recognition, with awareness exceeding 80% of respondents, reflecting its long-standing role as the default choice. That said, the survey shows a broadening of familiarity beyond flash. FRAM, MRAM, and ReRAM are each recognized by more than a quarter of respondents, indicating that alternative NVM technologies are now part of mainstream awareness.

Vendor recognition follows a similar pattern. A small group of suppliers stand out in terms of familiarity, led by SST (embedded flash), Infineon (SONOS), and Weebit Nano (ReRAM), in that order.

When respondents were asked to weigh the importance of embedded NVM selection criteria, the results emphasize practicality. Reliability, endurance, and data retention all score at the top of the range, each with weighted averages well above 3.0 out of 4.0, confirming that they remain foundational requirements for embedded NVM selection. Process scalability follows closely, also scoring above the 3.0 mark, reflecting the growing difficulty of extending traditional embedded NVM into advanced geometries that embedded flash cannot scale to. Power efficiency scored over 3.0 too. Integration risk and long-term predictability sit only marginally behind, indicating that manufacturing readiness and lifecycle stability are now considered nearly as important as raw technical performance. This shows that the market is maturing; people understand that the raw technical capability of new NVMs is there, but the risk and cost of integration are becoming real concerns, especially for advanced nodes where flash integration is not an option.

The risk and pain-point data reinforce this view. Scalability limitations and power-performance trade-offs rank highest, both scoring with weighted averages above 3.0 out of 4.0, indicating that they are seen as critical constraints in current NVM deployments, especially in advanced process nodes. Reliability concerns and cost uncertainty follow closely behind, also clustering in the upper end of the scale, suggesting that long-term predictability and economic risk remain unresolved issues for many designs. Taken together, these pressures help explain why awareness of alternative NVM technologies is increasing, even where adoption remains cautious.

Fig. Pain points by importance

Design pressure seems to be increasing faster than legacy memory can adapt. What has changed since last year’s survey is not a collapse in confidence in embedded flash, but a clear acceleration in the pressures acting upon it. More teams are now evaluating alternatives not out of curiosity, but because scaling, power, and long-term predictability are becoming binding constraints on future designs.

Overall, the 2025 survey does not point to an abrupt abandonment of embedded flash, but it does suggest that the transition away from traditional memory technologies is entering a more decisive phase and is likely to accelerate as design starts move to nodes where new NVMs are required for technical reasons. Awareness of alternative NVMs is rising, evaluation is broadening, and a significant share of teams expect to make concrete IP choices within the next year.

Fig. Planned design starts by node

External forecasts point the same way: Yole Group’s outlook suggests embedded emerging NVMs could reach $3.3B by 2030, driven by adoption of technologies such as MRAM, PCM and ReRAM in next-generation MCUs and SoCs.

Compared with last year’s results, the direction of travel is clearer: the question is no longer whether embedded flash can be extended further, but how long it can continue to meet the combined demands of scaling, power efficiency, reliability, and cost predictability.

Bottom line: For many SoC teams, NVM selection is shifting from a background assumption to an urgent architectural decision that will shape product viability in the next generation of designs.

Also Read:

Weebit Nano Reports on 2025 Targets

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

SiFive’s AI’s Next Chapter: RISC-V and Custom Silicon

Ceva IP: Powering the Era of Physical AI


AI Drives Strong Semiconductor Market in 2025-2026
by Bill Jewell on 02-26-2026 at 1:00 pm

2026 Market Forecast

The global semiconductor market reached $792 billion in 2025, according to WSTS, up 25.6% from 2024 and the strongest growth since 26.2% in the COVID recovery year of 2021. The increase was driven by AI, with Nvidia revenues up 65%. The major memory companies (Samsung, SK Hynix, Micron Technology, Kioxia and Sandisk) all cited AI as the primary driver of their collective 29% revenue growth.

Fourth quarter 2025 results were mixed. The memory companies reported revenue growth in the range of 21% to 34% versus 3Q 2025. Nvidia was up 20%. Ten companies had 4Q 2025 revenue up in the range of 0.2% to 11%. Four companies (Texas Instruments, Infineon, Sony Imaging and Onsemi) reported revenue declines.

Guidance for 1Q 2026 revenue change from 4Q 2025 is mixed. The three memory companies providing guidance are expecting substantial revenue increases in 1Q 2026 with Micron at 37%, Sandisk at 52%, and Kioxia at 64%. Nvidia projects AI will drive 14% revenue growth. Four companies project revenue gains ranging from 2% to 11% based on a recovering industrial market and continuing AI strength. AMD, NXP Semiconductors, STMicroelectronics and Onsemi project revenue declines primarily due to seasonality.

The huge memory demand in AI is causing shortages of memory for other applications. Intel expects an 11% decline in revenue in 1Q 2026 versus 4Q 2025 due to shortages of memory for PCs. Qualcomm and MediaTek both cite memory shortages for smartphones as the reason for projected revenue declines.

In December, IDC cited the memory shortage as potentially leading to declining shipments of smartphones and PCs in 2026.

Thus, if strong AI growth continues, semiconductor companies dependent on the smartphone and PC markets could see revenue declines in 2026.

A year ago, no one predicted demand for AI would drive 25.6% growth in the semiconductor market in 2025. We at Semiconductor Intelligence give a virtual award for the most accurate semiconductor market forecast for the year. The criteria are publicly available forecasts released between October of the previous year and release of the WSTS January data in early March. The winner for 2025 is IDC which predicted 15% growth. Several other prognosticators were in the 12% to 14% range.

Looking ahead to 2026, recent forecasts are in two groups. In the lower group, the Cowan LRA model (based on historical revenue trends) has 9.5%. Future Horizons projected 12%. The higher group includes RCD Advisors at 23%, WSTS at 26.3%, and Semiconductor Intelligence at 30%.

We at Semiconductor Intelligence believe the robust expansion of AI will continue through at least the first half of 2026. The high quarter-to-quarter semiconductor market growth of 16% in 3Q 2025 and 14% in 4Q 2025 followed by an expected strong 1Q 2026 practically guarantees 2026 growth over 20%. Even if memory shortages impact the smartphone and PC markets, the booming AI market and the relative stability of the industrial and automotive markets will continue to drive semiconductor growth in 2026.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

AI Bubble?

Semiconductors Up Over 20% in 2025

U.S. Electronics Production Growing