
The Rise, Fall, and Rebirth of In-Circuit Emulation: Real-World Case Studies (Part 2 of 2)

by Lauro Rizzatti on 10-20-2025 at 6:00 am


Recently, I had the opportunity to speak with Synopsys’ distinguished experts in speed adapters and in-circuit emulation (ICE). Many who know my professional background see me as an advocate for virtual, transactor-based emulation, so I was genuinely surprised to discover the impressive results achieved by today’s speed adapters, which are critical to validating systems in their actual use environments.

In this article, I share what I learned. While confidentiality prevents me from naming customers, all the examples come from leading semiconductor companies and major hyperscalers across the industry using ZeBu® emulation or HAPS® prototyping together with Synopsys speed adapters. As you read through the article, you can refer to the following diagram of the Synopsys Speed Adapter Solution:

Figure 1: Deployment diagram of Synopsys’ Speed Adaptor Solution and System Validation Server (SVS)

Case Study #1: The Value of Combining Fidelity and Flexibility in System Validation

The Challenge

A major fabless semiconductor company adopted virtual platforms as well as Hardware-Assisted Verification (HAV) platforms to accelerate early software development and design verification.

The company’s operations were organized around three distinct business units, each responsible for designing its own silicon independently. Each unit selected a different major EDA vendor for its virtual host solution platform. At first glance, such a multi-vendor setup might seem fragmented, but because virtual platforms are generally built on similar architectural blueprints, the approach still resulted in a verification environment that was consistent and standardized across all three BUs.

Alongside these virtual setups, the engineering teams also adopted In-Circuit Emulation (ICE). Here again, they diversified their tools, sourcing speed adapters and emulation from two of the three major EDA vendors. This allowed them to carry out system-level testing, interfacing the emulated environments with real hardware components to validate behavior under realistic conditions.

During a critical design milestone, a senior VP overseeing design verification mandated a cross-platform validation initiative: swap designs and tools across BUs, validate that silicon from each BU worked on all vendors’ platforms, and uncover hidden inconsistencies before tape-out.

The mandate required running each BU’s design on all three virtual host platforms and on both ICE setups to ensure environment independence.

That’s when the surprise hit! One design passed flawlessly on all three virtual host platforms. It passed on one of the ICE platforms, but it failed on the other ICE platform, halting system boot entirely. The immediate suspicion fell on the speed adapter. The design team escalated the issue to the EDA vendor’s ICE experts for root-cause analysis.

The Solution

The EDA vendor’s ICE team dug deep into the logs and waveform traces and found the real culprit. It wasn’t the adapter. It was a bug in the DUT’s RTL.

This RTL flaw had escaped all three virtual platforms because they lacked low-level system behavior modeling. It escaped one of the ICE setups due to its lower-fidelity implementation and surfaced only on the higher-fidelity ICE platform, which accurately mirrored real server behavior.

In real-world server systems, three critical hardware/software layers interact simultaneously. From the bottom up, the layers are:

  1. Motherboard chipsets, including PCIe switches, bridge chips, and other supporting silicon
  2. BIOS, handling low-level system initialization and configuration
  3. Operating System (OS), such as Linux or Windows, running on top

Virtual host platforms typically simulate only the OS layer using a virtual machine approach (often QEMU-based). The BIOS is minimally represented, and chipset behavior is completely abstracted away.

On the high-fidelity ICE platform, however, a real Intel server board was connected through the speed adapter. During boot, this Intel chipset correctly issued a Vendor Defined Message (VDM) over PCIe, a standard behavior in many production Intel servers, but not modeled at all in virtual platforms. Upon receiving this VDM, the DUT incorrectly dropped the packet instead of updating the PCIe flow control. This caused a deadlock during system boot. There was no software workaround, the only solution was to fix the RTL before tape-out.
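The failure mode can be illustrated with a minimal sketch of PCIe credit-based flow control. This is illustrative Python only, not Synopsys code or real PCIe logic; the one-credit-per-TLP model and the packet names are simplifying assumptions.

```python
# Minimal sketch of why dropping a TLP without returning its flow-control
# credit deadlocks the link. Illustrative only: real PCIe tracks separate
# header/data credit pools per traffic class.

def run_boot(credits, packets, drop_vdm_without_credit):
    """Return how many packets are delivered before the sender stalls."""
    delivered = 0
    for pkt in packets:
        if credits == 0:
            break  # sender has no credits left: link stalls, boot hangs
        credits -= 1  # sender consumes one credit per TLP
        if pkt == "VDM" and drop_vdm_without_credit:
            continue  # buggy DUT: packet dropped, credit never returned
        delivered += 1
        credits += 1  # correct path: TLP processed, credit returned

    return delivered

boot_traffic = ["CfgWr", "VDM", "MemRd"] * 4  # chipset emits VDMs during boot

# Correct RTL: every packet flows, boot completes.
assert run_boot(2, boot_traffic, drop_vdm_without_credit=False) == 12
# Buggy RTL: each dropped VDM leaks a credit; the link starves part-way.
assert run_boot(2, boot_traffic, drop_vdm_without_credit=True) == 3
```

Each dropped VDM permanently shrinks the credit pool, so even a design that handles all other traffic correctly eventually starves, which is why no software workaround was possible.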

Results

If undetected, the chip would have failed in every server deployment, resulting in a dead-on-arrival product. Detecting the bug pre-silicon saved the company a multi-million-dollar re-spin and months of schedule delay. The incident demonstrated why virtual environments are critical for finding bugs early, while high-fidelity in-circuit setups are necessary for final fidelity and confidence in the design.

Case Study #2: ICE Delivers Superior Throughput

The Challenge

When designing peripheral interface products, engineering teams often rely on virtual solutions for early verification. While virtual environments can model a protocol controller, they cannot accurately represent the physical (PHY) layer.

In these virtual models, the PHY is removed and replaced by a simplified “fake” model that allows software to perform basic register programming but does not support link training, equalization, or true electrical signaling. As a result, link training may appear to succeed because the model “assumes” compliance. Subtle issues such as timing mismatches, equalization failures, and signal integrity problems remain hidden until late post-silicon testing. Testing real-world interoperability, especially with diverse third-party hardware and drivers, is not possible.
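The “assumed compliance” problem can be sketched in a few lines. The class and parameter names below are hypothetical, and the electrical window is collapsed into two made-up parameters; the point is only that a stubbed PHY cannot fail training the way real silicon can.

```python
# Illustrative sketch (hypothetical names, made-up numbers) of why a stubbed
# PHY model hides link-training failures: the stub assumes compliance, while
# real training depends on electrical parameters never exercised virtually.

class FakePhy:
    """Virtual-platform stand-in: registers work, training is assumed."""
    def train(self, tx_preset, rx_eq_db):
        return True  # no equalization, no timing: always "succeeds"

class RealPhy:
    """Real silicon: training succeeds only inside the electrical window."""
    def __init__(self, valid_presets, min_eq_db):
        self.valid_presets = valid_presets
        self.min_eq_db = min_eq_db

    def train(self, tx_preset, rx_eq_db):
        return tx_preset in self.valid_presets and rx_eq_db >= self.min_eq_db

settings = dict(tx_preset="P4", rx_eq_db=3.0)  # what the firmware programs
assert FakePhy().train(**settings)             # passes in the virtual model...
real = RealPhy(valid_presets={"P5", "P7"}, min_eq_db=6.0)
assert not real.train(**settings)              # ...but fails on real hardware
```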

A leading hyperscaler faced significant challenges because of this drawback. Early in their design cycles, they faced months-long delays just to program and train PHYs, pushing crucial bug discovery into expensive post-silicon stages.

The Solution

To overcome these challenges, they adopted Synopsys Speed Adapters to bring PHYs into the emulation environment.

With this approach, PHYs are physically connected to the emulator through speed adapter boards. These boards support full programming, training, and link initialization, just as they would on silicon.

This integration effectively turns the emulation environment into a true In-Circuit Emulation (ICE) platform, combining the speed and visibility of pre-silicon emulation with the physical accuracy and interoperability of real-world hardware.

Examples of Impact

PCIe Gen5 Interface

  • In a virtual setup, a Gen5 device’s link training sequences seemed successful.
  • When tested via a speed adapter and a PHY, the customer uncovered critical timing mismatches and equalization failures that would have escaped virtual verification.
  • Catching these issues pre-silicon avoided a potential costly silicon re-spin and ensured full compliance with Gen5 specs.

UFS Storage Interface

  • A UFS host controller passed functional tests in a virtual model.
  • When connected to a real UFS PHY through a speed adapter, engineers discovered clock misalignments, burst mode instabilities, and data corruption under stress conditions—problems rooted in real signaling, invisible in virtual models.
  • Early detection improved system reliability and ensured compliance with JEDEC standards.

Driver Interoperability Testing

  • In root complex mode, different GPUs (NVIDIA, AMD, Intel) each use different drivers and optimizations.
  • Virtual environments cannot test these real drivers because they require a physical interface.
  • Speed adapters allowed full driver stacks to run against real devices, exposing errata and interoperability bugs that virtual models could never catch.

Results

What previously took four months to program the PHY, plus up to six months to train it post-silicon, was accomplished pre-silicon in a couple of weeks. This was possible because speed adapters ran real workloads, enabling rapid design iterations and faster bring-up cycles. Another benefit was improved debug and reuse: the same PHY configuration trained pre-silicon could be directly reused post-silicon, accelerating bring-up.

Case Study #3: Ethernet Product Validation

Challenge

When developing advanced Ethernet products—such as ultra-Ethernet, smart NICs, or intelligent switches—engineers face a recurring challenge: how to bring real software traffic into the Ethernet validation environment.

Virtual environments offer partial solutions. Virtual tester generators (VTG) provide low-level packet traffic (Layer 2, Layer 3) but do not exercise the application software stack. Virtual Hosts (VHS) allow software interaction but lack flow-control capabilities. Without flow control, packets are dropped, an unacceptable limitation for validation environments where fidelity and determinism are critical.

As a result, traditional virtual environments are either incomplete (VTG) or not fully-reliable from a traffic control perspective (VHS). This gap left design teams without a way to fully validate Ethernet products across all protocol layers—especially the higher layers (L4–L7) that depend on real drivers, operating systems, and diagnostic software.

Solution

Ethernet speed adapters provide the missing link by bridging virtual test environments with real software execution over Ethernet.

Unlike VHS, speed adapters guarantee zero packet drops, delivering deterministic performance even under high traffic. Virtual testers (e.g., from Ixia or Spirent) remain useful for low-level (Layer 2/3) functional validation, while speed adapters enable execution of real drivers and Linux-based diagnostic tools that testers cannot emulate. Together, virtual testers and speed adapters form a complete solution spanning all Ethernet layers.
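The value of flow control here can be shown with a toy receive-path model. This is an illustrative sketch, not vendor code; buffer sizes and drain rates are made-up numbers.

```python
# Toy model (made-up numbers) contrasting a host interface without flow
# control (frames dropped when the buffer fills) with one that back-pressures
# the sender, as the Ethernet speed adapters are described to do.

from collections import deque

def deliver(frames, buf_size, drain_per_tick, flow_control):
    """Run a burst through a bounded receive buffer; return frames dropped."""
    buf, dropped, pending = deque(), 0, list(frames)
    while pending or buf:
        # sender side: push frames toward the receive buffer
        while pending:
            if len(buf) < buf_size:
                buf.append(pending.pop(0))
            elif flow_control:
                break  # back-pressure: sender pauses, frame is NOT lost
            else:
                pending.pop(0)
                dropped += 1  # no flow control: overflow frame is dropped
        # receiver side: drain a limited number of frames per tick
        for _ in range(min(drain_per_tick, len(buf))):
            buf.popleft()
    return dropped

burst = list(range(100))  # a burst arriving faster than the receiver drains
assert deliver(burst, buf_size=8, drain_per_tick=2, flow_control=True) == 0
assert deliver(burst, buf_size=8, drain_per_tick=2, flow_control=False) == 92
```

With back-pressure the run is slower but lossless and deterministic; without it, every frame beyond the buffer depth is silently lost, which is exactly the non-determinism that disqualifies VHS for signoff-grade validation.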

For startups or budget-constrained customers, speed adapters complement more expensive virtual tester licenses by providing equivalent packet generation and analysis at lower cost. Free and open-source test generators can also be layered on top of a speed adapter to replicate tester functions at much lower cost.

Results

In practice, this hybrid approach has enabled customers to validate real software stacks against hardware under development without packet loss, to catch design bugs that only appear in higher protocol layers (issues that purely virtual test environments cannot expose), and to scale affordably by combining limited VTG licenses with speed adapters to achieve full test coverage.

Case Study #4: Real-World Sensor Interoperability with MIPI Speed Adapters

The Challenge

The Display and Test Framework (DTF) team of a major fabless enterprise faced a recurring and costly problem. They needed to validate their chip design against real MIPI-based image sensors and cameras. However, in a virtual emulation environment, this was impossible because virtual models can mimic protocol behavior but cannot replicate real sensor electrical signaling or timing. Vendor-specific cameras and sensors each have unique initialization sequences, timing quirks, and signal integrity characteristics that no generic virtual model can capture. When first silicon returned from the fab, it frequently failed to interface with the intended cameras and sensors, leading to long bring-up efforts or even full silicon re-spins.

This limitation created a significant time-to-market bottleneck. By the time hardware compatibility issues could be found, the design had already gone through costly fabrication, delaying product launches.

The Solution

To eliminate this bottleneck, the company adopted MIPI speed adapters to enable ICE with real sensor hardware. Using this approach, the chip design running inside the emulator could be directly connected to real, vendor-specific MIPI cameras and image sensors. Engineers could exercise full initialization, configuration, and data streaming paths just as they would on physical silicon. The setup supported easy swapping of different sensors and camera models, enabling rapid interoperability testing across vendors.

This capability gave the DTF team the real-world coverage they needed in pre-silicon, without waiting for chips to return from the fab.

Results

The design was successfully tested with the exact vendor-specific camera and sensor models planned for production. By catching integration issues pre-silicon, the enterprise avoided costly design re-spins caused by post-silicon bring-up failures. Removing the post-silicon camera/sensor debug cycle accelerated overall product schedules. Finally, the team could sign off knowing the design was already proven with real-world peripherals.

Case Study #5: How Synopsys’ System Validation Server (SVS) Caught Critical Bugs Missed by other solutions

The Challenge

Pre-silicon validation using ICE has historically faced a critical obstacle. Standard off-the-shelf host servers are not designed to tolerate the slow or intermittent response times of an emulator. When the emulator clock stalls or slows, the host often times out, aborting the test run.

This customer’s silicon validation team encountered this limitation firsthand. While they used a commercial emulation host server for ICE, that system did not enforce strict real-world timing and protocol checks. This risked letting flawed designs pass pre-silicon signoff, only to fail later in production.

The Solution

To overcome these limitations, the customer’s validation team adopted Synopsys’ System Validation Server (SVS) as the host system for ICE validation. SVS is a specialized, pre-validated host machine designed specifically to work with speed adapters and emulators, offering two major advantages over generic hosts or legacy commercial host server setups. First, SVS ships with a custom BIOS engineered to tolerate the slow response times of emulators, eliminating timeouts that can otherwise terminate validation runs prematurely. Second, SVS faithfully mimics the system the DUT will eventually plug into, including enforcing strict specification compliance, especially for complex subsystems like PCIe. See Figure 1 at the top of the article.

The validation team tested their design on both 3rd party emulation hardware and Synopsys’ SVS. Using the 3rd party emulation, the system booted successfully, but on SVS, the boot failed completely. Initially, the engineers suspected a hardware fault in SVS. As they put it: “Your SVS is broken while the other guys work fine.”

However, after a detailed debug session, it emerged that their DUT contained configuration errors in PCIe configuration space registers. The 3rd-party emulation solution and host server had masked these errors because they used an outdated BIOS that failed to enforce PCIe register constraints. By contrast, SVS strictly enforced PCIe specifications and correctly rejected the illegal register values. The bug could not be fixed in firmware; no software patch could correct it.

Results

SVS exposed an RTL-level configuration bug that virtual flows and another emulation solution missed. Its modified BIOS eliminated timeout instability, enabling stable, long-duration tests.

SVS ensured that only spec-compliant designs advanced to tape-out, eliminating false positives from legacy flows.

Had the design been taped out based on the 3rd-party emulation “pass,” the silicon would have been DOA, requiring a full, costly respin.

Conclusion

Back in 2015, I wrote “The Melting of the ICE Age” for Electronic Design, where I predicted the demise of in-circuit emulation (ICE). Its numerous drawbacks (see Part 1 of this series) seemed to doom it to history, replaced by transaction-based emulation and, later, hybrid approaches that drove the shift-left verification methodology. In hindsight, I must admit I underestimated the ingenuity and resourcefulness of the engineering community.

Today, the third generation of speed adapters has propelled ICE again into the limelight of system-level validation. Bugs once detectable only in post-silicon labs can now be identified pre-silicon. This capability not only reduces the risk of re-spins but also accelerates time-to-tapeout and saves enormous expense. Far from melting away, ICE has re-emerged as a cornerstone of system-level verification.

The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)

Also Read:

Statically Verifying RTL Connectivity with Synopsys

Why Choose PCIe 5.0 for Power, Performance and Bandwidth at the Edge?

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation


CMOS 2.0 is Advancing Semiconductor Scaling

by Daniel Nenni on 10-19-2025 at 10:00 am


In the rapidly evolving landscape of semiconductor technology, imec’s recent breakthroughs in wafer-to-wafer hybrid bonding and backside connectivity are paving the way for CMOS 2.0, a paradigm shift in chip design. Introduced in 2024, CMOS 2.0 addresses the limitations of traditional CMOS scaling by partitioning a system-on-chip (SoC) into specialized functional tiers. Each tier is optimized for specific needs—such as high-performance logic, dense memory, or power efficiency—through system-technology co-optimization (STCO). This approach moves beyond general-purpose platforms, enabling heterogeneous stacking within the SoC itself, similar to but more integrated than current 3D stacking of SRAM on processors.

Central to CMOS 2.0 is the use of advanced 3D interconnects and backside power delivery networks (BSPDNs). These technologies allow for dense connections on both sides of the wafer, suspending active device layers between independent interconnect stacks. At the 2025 VLSI Symposium, imec demonstrated key milestones: wafer-to-wafer hybrid bonding at 250nm pitch and through-dielectric vias (TDVs) at 120nm pitch on the backside. These innovations provide the granularity needed for logic-on-logic or memory-on-logic stacking, overcoming bottlenecks in compute scaling for diverse applications like AI and mobile devices.

Wafer-to-wafer hybrid bonding stands out for its ability to achieve sub-micrometer pitches, offering high bandwidth and low-energy signal transmission. The process involves aligning and bonding two processed wafers at room temperature, followed by annealing for permanent Cu-to-Cu and dielectric bonds. Imec has refined this flow, achieving reliable 400nm pitch connections by 2023 using SiCN dielectrics for better strength and scalability. Pushing further, simulations revealed non-uniform bonding waves causing wafer deformation, impacting overlay accuracy. By applying pre-bond litho corrections, imec reached 300nm pitch with <25nm overlay error for 95% of dies. At VLSI 2025, they showcased 250nm pitch feasibility on a hexagonal pad grid, with high electrical yield in daisy chains, though full-wafer yield requires next-gen bonding tools.

Complementing frontside bonding, backside connectivity enables front-to-back links via nano-through-silicon vias (nTSVs) or direct contacting. For CMOS 2.0’s multi-tier stacks, this allows seamless integration of metals on both sides, with BSPDNs handling power from the backside to reduce IR drops and decongest frontside BEOL for signals. Imec’s VLSI 2025 demo featured barrier-less Mo-filled TDVs with 20nm bottom diameter at 120nm pitch, fabricated via a via-first approach in shallow-trench isolation. Extreme wafer thinning maintains low aspect ratios, while higher-order lithography corrections ensure 15nm overlay margins between TDVs and 55nm backside metals. This balances fine-pitch connections on both wafer sides, crucial for stacking multiple heterogeneous layers like logic, memory, and ESD protection.

BSPDNs further enhance CMOS 2.0 by relocating power distribution to the backside, allowing wider, less resistant interconnects. Imec’s 2019 pioneering work has evolved, with major foundries adopting it for advanced nodes. DTCO studies show PPAC gains in always-on designs, but VLSI 2025 extended this to switched-domain architectures—relevant for power-managed mobile SoCs. In a 2nm mobile processor design, BSPDN reduced IR drop by 122mV compared to frontside PDNs, enabling fewer power switches in a checkerboard pattern. This yielded 22% area savings, boosting performance and efficiency.
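The intuition behind the IR-drop gain can be shown with a back-of-envelope sheet-resistance calculation. All numbers below are invented for illustration and are not imec data: wider, thicker backside rails span fewer resistive "squares" than congested frontside BEOL rails carrying the same current.

```python
# Back-of-envelope IR-drop comparison (all values invented for illustration,
# not imec data): a rail's resistance is sheet resistance times its length-
# to-width ratio, so wide backside rails drop far less voltage.

def ir_drop_mv(current_a, rail_len_um, rail_width_um, sheet_ohm_per_sq):
    """IR drop in millivolts for a rail of R = Rs * (L / W)."""
    squares = rail_len_um / rail_width_um
    return current_a * sheet_ohm_per_sq * squares * 1000.0

front_mv = ir_drop_mv(0.1, 100.0, 0.5, 0.02)  # narrow, thin frontside rail
back_mv = ir_drop_mv(0.1, 100.0, 5.0, 0.01)   # wide, thick backside rail
assert back_mv < front_mv  # backside delivery shows a much smaller drop
```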

These advancements, supported by the NanoIC pilot line and EU funding, bring CMOS 2.0 from concept to viability. By enabling heterogeneity within SoCs, they offer scalable solutions for the semiconductor ecosystem, from fabless designers to system integrators. As pitches scale below 200nm, collaboration with tool suppliers will be key to overcoming overlay challenges. Ultimately, high-density front and backside connectivity heralds a new era of compute innovation, meeting demands for performance, power, and density in an increasingly diverse application space.

Read the source article here.

Also Read:

Exploring TSMC’s OIP Ecosystem Benefits

Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology


Podcast EP311: An Overview of how Keysom Optimizes Embedded Applications with Dr. Luca TESTA

by Daniel Nenni on 10-17-2025 at 10:00 am

Daniel is joined by Luca TESTA, the COO and co-founder of Keysom. After studying microelectronics in Italy, Luca obtained his PhD in France while working with STMicroelectronics on analog/RF circuit design.

Dan explores the charter and focus of Keysom with Luca. Luca describes how Keysom is providing an automated and reliable way to create optimized, efficient 32-bit processors for embedded applications such as IoT and edge AI. He explains the challenges of using standard processors for applications that demand small area and low power. In these cases, on average 40% of the instructions in a standard processor are not used. He describes Keysom’s CoreXplorer tool that provides an easy and efficient way to develop a customized processor that fits the specific needs of an application.

He describes real examples where 30% – 70% area reduction is achieved along with a 25% – 40% reduction in power. The approach uses a RISC-V architecture and ensures compatibility with the RISC-V ecosystem to create an optimized workflow. Luca goes on to describe additional benefits of Keysom’s approach and the company’s plans to expand sales and support in the US.

Contact Keysom

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Dr. Bernie Malouin Founder of JetCool and VP of Flex Liquid Cooling

by Daniel Nenni on 10-17-2025 at 8:00 am


Bernie Malouin is a technical professional with demonstrated experience from concept studies through system deployment. He has a strong track record working in dynamic environments, from highly complex, multi-million dollar development programs to deeply technical research projects. He founded JetCool Technologies after 8 years at MIT Lincoln Laboratory. There, he served as the Chief Engineer leading the technical development of a $100M+ airborne payload program for the US Government.

Tell us about your company?

JetCool, now part of Flex, designs and manufactures advanced liquid cooling for the world’s most demanding AI and HPC workloads. We spun out of MIT in 2019 with a mission to reinvent cooling at the chip level. In 2024, we joined Flex, a global leader in design, manufacturing, and supply chain, which has enabled us to scale faster and reach customers worldwide.

What sets us apart is performance and practicality. Our SmartPlate™ direct-to-chip technology outperforms leading liquid cooling solutions by more than 20%. It targets hotspots on the silicon, reducing thermal resistance and unlocking higher compute densities. Combined with Flex’s vertically integrated approach, where compute, power, and cooling are designed and validated together, we deliver rack-ready systems that lower power consumption, cut water use by up to 90%, and deploy seamlessly at global scale.

What problems are you solving?

AI has pushed compute demand, and heat, to unprecedented levels. Traditional air cooling simply can’t keep pace with today’s CPUs and GPUs, leaving data centers constrained by power, thermal limits, and sustainability pressures. JetCool, as a Flex company, solves this by delivering vertically integrated rack-level solutions that combine power distribution, liquid cooling, and system design from a single vendor. This integration reduces complexity, shortens deployment timelines, and ensures every component is validated to work together seamlessly.

Beyond technology, Flex provides global warranty for its products and service and support. That means customers can scale AI infrastructure anywhere in the world with confidence, knowing their racks are supported end-to-end from design and manufacturing to deployment and ongoing operations.

What application areas are your strongest?

We excel in helping customers maximize compute in power-constrained environments. Our single-phase, direct-to-chip technology cools high-power AI and HPC processors more effectively than conventional cold plates, delivering up to 20% better performance while lowering total IT power use. We understand that every data center is at a different stage of adoption. That’s why our portfolio spans from more efficient self-contained air-cooled solutions to fully liquid-cooled racks. This gives customers a clear migration path—start with air efficiency gains, then scale to hybrid or full liquid cooling as density and power demands grow.

Our strong partnerships with leading colocation providers, including Equinix, Telehouse, and Sabey Data Centers, showcase this flexibility in action. Together, we’ve deployed solutions that allow customers to adopt liquid cooling on their terms, whether through incremental pilots or complete rack-level rollouts. With JetCool and Flex, they gain a partner who can meet them where they are today and help them scale for tomorrow.

What keeps your customers up at night?

Our customers don’t want to invest in expensive infrastructure this year only to see it become obsolete the next. With AI chips advancing at breakneck speed, they worry about stranded capacity—data centers built for yesterday’s processors that can’t support tomorrow’s. At the same time, power and sustainability pressures are mounting. Many regions are already at their grid limits, and operators are being asked to do more compute with less energy and water. This is where JetCool and Flex step in. Our insight allows us to design vertically integrated power and cooling solutions that help customers future-proof. We’re acting as a resource and an ally, ensuring our customers are prepared for the next wave of AI hardware and can scale confidently without constant reinvestment.

What does the competitive landscape look like and how do you differentiate?

Liquid cooling is no longer experimental; it’s becoming the standard for AI infrastructure. That said, not all solutions are equal. JetCool differentiates through precision. Our microconvective technology cools chips at their hottest points, reducing thermal resistance and improving performance per watt. As part of Flex, we combine this innovation with large-scale manufacturing, systems integration, and global service. With Flex, customers get a fully validated, warrantied solution from a single partner.

What new features/technology are you working on?

We’re pushing toward the 1MW rack. That means not just higher-capacity cold plates, but rack-level solutions that integrate cooling distribution, power management, and monitoring. We’re also advancing smart sensing and telemetry, enabling operators to see and control cooling performance in real time. And at the silicon level, we’re collaborating with chipmakers to co-design next-generation cooling interfaces that reduce thermal bottlenecks from the start.

How do customers normally engage with your company?

We meet customers wherever they are in their liquid cooling journey. Some start with pilot deployments in a single row; others adopt our rack-ready systems through OEM partners like Dell. Because Flex can integrate, validate, and ship fully configured solutions, we simplify what has traditionally been a complex, multi-vendor process. Customers gain confidence knowing they’re supported end-to-end—from design through deployment and ongoing service.

Also Read:

CEO Interview with Gary Spittle of Sonical

CEO Interview with David Zhi LuoZhang of Bronco AI

CEO Interview with Jiadi Zhu of CDimension 


Webinar – The Path to Smaller, Denser, and Faster with CPX, Samtec’s Co-Packaged Copper and Optics

by Mike Gianfagna on 10-17-2025 at 6:00 am


For markets such as data center, high-performance computing, networking and AI accelerators the battle cry is often “copper is dead”. The tremendous demands for performance and power efficiency often lead to this conclusion. As is the case with many technology topics, things are not always the way they seem. It turns out a lot of the “copper is dead” sentiment has to do with the view that it’s a choice of either copper or optics. In such a situation, optical interconnect will win.

But what if copper and optics could be integrated and managed together on one platform?  It turns out there are many short-reach applications where copper is superior. The ability to achieve this co-technology integration at advanced 224G speeds was the topic of a recent webinar from Samtec. If you struggle with the negative ramifications of “copper is dead”, you’ll want to see the webinar replay. More details are coming, but first let’s examine the path to smaller, denser, and faster with CPX, Samtec’s co-packaged copper and optics solution.

You can view the webinar here.

Who’s Speaking

Matt Burns

The quality of a webinar, especially a live one, is heavily influenced by the quality of the speaker. In the case of this event, everyone is in good hands. Matt Burns will be presenting. I’ve known Matt for quite a while. Samtec was an excellent partner of eSilicon back in the day, and I’ve attended many discussions and events with Matt. He has an easy-going presentation style, but under it all is a substantial understanding of what it takes to build high-performance communication channels and why it matters for any successful system design.

A quick summary of Matt’s background is in order. He develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25 years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. He currently serves as Secretary at PICMG. If it’s a close-to-impossible system design issue, Matt has likely seen it and helped to flatten it.

Some Topics to be Covered

Using all the tools and technologies available for any complex design project is usually the best approach. Matt discusses this in some detail, describing the situations where passive copper interconnect delivers the best result. Short reach is certainly one aspect that influences this decision, but there are other considerations as well.

For longer reach channels, active optical channels can be an excellent choice. The reasons to drive one way vs. another are not as simple as you may think, and Matt helps with examples for various strategies.

The key point in all this is: what if you could deploy both copper and optical interconnect in a unified way? A mix-and-match scenario, if you will. It turns out Samtec has been managing this kind of platform-level integration for about 15 years.

Getting into some specifics, the transition to 224 Gbps PAM4 signaling can strain copper interconnects due to reduced signal-to-noise ratios and tighter insertion loss budgets. This usually limits reach to under 1 meter. Using co-packaged copper (CPC), this limit can be extended to 1.5 meters, enabling dense intra-rack GPU clusters while lowering system cost. But copper’s limitations over longer distances hinder inter-rack scaling.
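As a rough sanity check, the reach numbers above can be framed as a simple insertion-loss budget calculation. The sketch below is a back-of-the-envelope illustration; the dB figures are assumptions invented for this example, not Samtec or standards-body channel specifications:

```python
# Illustrative reach estimate for a passive copper channel at 224G PAM4.
# All dB figures are assumed for illustration, not vendor specifications.

def max_reach_m(budget_db: float, cable_loss_db_per_m: float,
                fixed_loss_db: float) -> float:
    """Reach at which cable loss plus fixed (package/connector/PCB) loss
    consumes the end-to-end insertion-loss budget."""
    usable_db = budget_db - fixed_loss_db
    return max(usable_db, 0.0) / cable_loss_db_per_m

BUDGET_DB = 30.0        # assumed end-to-end budget at Nyquist (~56 GHz)
CABLE_DB_PER_M = 16.0   # assumed twinax cable loss per meter

# Conventional path: signal crosses a long PCB breakout before the cable.
conventional_m = max_reach_m(BUDGET_DB, CABLE_DB_PER_M, fixed_loss_db=17.0)

# Co-packaged copper: the cable launches at the package substrate,
# so far less of the budget is burned before the cable starts.
cpc_m = max_reach_m(BUDGET_DB, CABLE_DB_PER_M, fixed_loss_db=6.0)
```

With these assumed numbers the conventional path tops out near 0.8 m while the co-packaged launch reaches 1.5 m, which is the shape of the trade-off behind CPC.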

Co-packaged optics (CPO) helps by integrating the optical engine within the switch silicon, enabling high-bandwidth, scalable links across racks. CPO overcomes copper’s physical constraints, reducing power and cooling costs, and unlocking scalable, efficient AI supercomputing fabrics that interconnect thousands of GPUs across data centers.

But what if you could have it both ways? Matt describes Samtec’s new strategy for advanced channel speeds that combines CPC and CPO to create a new category called CPX. This capability is delivered by Samtec’s Si-Fly® HD. A photo of the platform is shown at the top of this post.

Matt describes how this technology delivers the highest density 224 Gbps PAM4 solution in today’s market. He provides details about how the electrically pluggable co-packaged copper and optics solutions (CPX) are achievable on a 95 mm x 95 mm or smaller substrate using Samtec’s SFCM connector. The SFCM mounts directly to the package substrate and is pluggable with Samtec’s SFCC cable assembly or an optical cable assembly of your choosing.

Synopsys Example

Samtec has already worked with several high-profile system OEMs and IP providers to deploy this technology. Matt also talks about some of those achievements.

To Learn More

If you are faced with tough decisions regarding channel interconnect choices, you may have more options than you think. Matt Burns will take you through a new set of options enabled by Samtec’s new Si-Fly HD. The webinar’s full title is CPX: Leveraging CPC/CPO for the Latest Scale-Up and Scale-Out AI System Topologies. You can view a replay of this important webinar here.

Also Read:

Samtec Practical Cable Management for High-Data-Rate Systems

How Channel Operating Margin (COM) Came to be and Why It Endures

Visualizing System Design with Samtec’s Picture Search


Webinar – IP Design Considerations for Real-Time Edge AI Systems

by Mike Gianfagna on 10-16-2025 at 10:00 am


It is well-known that semiconductor growth is driven by AI. That simple statement breaks down into many complex use cases, each with its own requirements and challenges. A webinar will be presented by Synopsys on October 23 that focuses on the specific requirements for one of the most popular use cases – AI at the edge. The speaker is very knowledgeable on the topic and will treat the audience to a comprehensive view of the many requirements to be considered for successful deployment of AI at the edge. I highly recommend you register for this important event. A link is coming but first let’s look a little closer at this webinar on IP design considerations for real-time edge AI systems.

The Presenter

Hezi Saar

The value of any webinar is heavily influenced by the presenter. In this case, it’s Hezi Saar, executive director of product line management for mobile, automotive, and consumer IP for the Synopsys Solutions Group. Hezi brings more than 20 years of experience in the semiconductor and embedded systems industries. He has been with Synopsys for almost 17 years. He has also been the Chair of the Board of Directors for the MIPI Alliance for over nine years. Before Synopsys, Hezi was involved in product marketing, product management and design at Actel, ISD/Winbond and RAD Data Communications.

Hezi is quite good at explaining complex topics in an easy-to-understand way. The 25-minute webinar will be followed by a Q&A session with questions from the audience. I’m sure Hezi will do a great job with those questions as well.

Some Webinar Topics

Hezi presents a broad overview of IP architecture and integration methodologies that support real-time AI workloads at the edge. Here are some other topics he discusses:

A useful historical perspective of how AI has driven semiconductor growth is presented. The trends associated with AI models are discussed – the focus is to provide more capacity with fewer resources. How the quality of small models for edge AI has increased is also reviewed. The motivations for moving from the cloud to the edge are another interesting topic. Power efficiency is critical here. Hezi presents data showing that power consumption can be up to 200X more efficient on the device (edge) vs. the cloud.

There are many drivers and many benefits associated with moving from the cloud to the edge. He points out that this is what’s driving the next innovation cycle as summarized by the diagram below.

Hezi then presents a very useful and informative overview of a broad range of smart and connected devices at the edge. He discusses the unique requirements for cost, performance, area and power for these cases.

Market data is also presented, showing edge AI device shipments for smartphones dominating, with smart speakers showing growth as well. There is a lot of very useful discussion around new opportunities and how to address consumer demands for cost-effective products. Considerations for model choices are discussed, along with an overview of how AI companion chips can help.

Hezi also explores the impact of multi-die approaches. This technology will help in some cases more than others.

To Learn More

I have touched on only a subset of the topics Hezi covers in this very informative webinar. If AI at the edge is in your plans, this webinar will provide substantial and valuable information. I highly recommend investing the time to attend. It will take less than an hour.

The webinar will be held Thursday, October 23, 2025, from 10:00 AM – 11:00 AM Pacific Daylight Time. You can reserve your spot at the event here. And that’s a summary of a webinar on IP design considerations for real-time edge AI systems.

Also Read:

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation

AI Everywhere in the Chip Lifecycle: Synopsys at AI Infra Summit 2025

Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions


WEBINAR: Design and Stability Analysis of GaN Power Amplifiers using Advanced Simulation Tools

by Daniel Nenni on 10-16-2025 at 6:00 am


Why should high frequency circuit designers consider stability early in the design process? Isn’t there enough to worry about just making the circuit function at the fundamental frequency?

Figure 1: The Winslow Probe in ADS allows you to derive many figures of merit in postprocess using equations, based on a single simulation.

Microwave engineers used to solve stability problems in the lab, perhaps adding bypassing or loss in a strategic location to stabilize their circuits. Stability was viewed as too complicated to model or predict, and the problems were usually easy enough to solve in the lab anyway. But things are changing. Across the entire wireless communications industry, standards are moving higher in frequency and systems are getting more complex. Instability arises from a combination of gain and feedback. In today’s circuits, gain is higher due to increasing device fTs, and feedback is more prevalent because features are more compact and resonate more easily with signals that have smaller wavelengths. At the same time, advanced packaging technologies make the internals of the circuit less accessible than in the past, meaning things are harder to fix after the fact in the lab, even with the most apt technicians.

REGISTER HERE

To make circuits which meet the needs of modern communications systems, designers need to master stability by truly understanding the root causes of problems in the circuit before building a design. The problem: stability is a very complex topic. Most high frequency design engineers use only the classic K-factor to assess circuits, but this technique is based on assumptions which may not be valid for modern circuits. Besides, K-factor only applies to a two-port network at the external I/Os, while the circuit inside could be very complex and hinder visibility from the outside. There are many alternative techniques in the literature, but they are sometimes difficult to apply correctly, and furthermore it’s not clear which one is best for any given application.
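For reference, the classic K-factor test is straightforward to compute from a two-port’s S-parameters using Rollett’s standard formula; the S-parameter values in this sketch are invented for illustration:

```python
# Rollett K-factor from complex two-port S-parameters (linear units).
# K > 1 together with |Delta| < 1 at every frequency indicates
# unconditional stability -- but only as seen from the external ports.

def rollett_k(s11: complex, s12: complex, s21: complex, s22: complex):
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) \
        / (2 * abs(s12 * s21))
    return k, abs(delta)

# Invented S-parameters for a single gain stage at one frequency point:
k, mag_delta = rollett_k(0.6 - 0.3j, 0.05 + 0.02j, 3.0 + 1.0j, 0.5 - 0.2j)
```

Even when K and |Δ| pass at the I/Os, an internal loop can still oscillate, which is exactly the blind spot the webinar addresses.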

This webinar will help designers understand how instabilities fundamentally arise in their circuit and illustrate how to troubleshoot and resolve these issues up front in the design process before manufacturing.

REGISTER HERE

This requires not only an understanding of theory and classic techniques, but also a practical knowledge of how to apply these techniques efficiently using modern design tools. The webinar starts by reviewing the theory, discussing concepts like loop gain, return difference, and driving point impedance, and then expands to build a framework for applying these techniques to modern circuit design. The key is to use a new probe, called the WS-Probe, which has recently become available in Keysight’s Advanced Design System (ADS), to derive the necessary stability measures quickly and efficiently. The probe allows application of multiple stability analysis techniques to the circuit post-simulation for both small and large signal analysis in a non-invasive manner. The goal will be to arrive at a simple, straightforward, rigorous and easy-to-apply process to determine whether or not your design is stable, and if not, how to go about fixing it. After attending this webinar, you’ll look at stability in an entirely different way and the circuits you design will reflect that.

Figure 2: EM Circuit Excitation in ADS and RF Pro allows you to visualize and fix feedback paths that cause instability.

Figure 3: The new Winslow stability margin, only available in Keysight RF Circuit Professional (Nexus), allows you to quantitatively discern the level of stability for your circuit.

REGISTER HERE

Also Read:

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering

Video EP5: A Discussion of Photonic Design Challenges and Solutions with Keysight

Podcast EP283: The evolution of Analog High Frequency Design and the Impact of AI with Matthew Ozalas of Keysight


Visualizing hidden parasitic effects in advanced IC design 

by Admin on 10-15-2025 at 10:00 am


By Omar Elabd

As semiconductor designs move below 7 nm, parasitic effects—resistance, capacitance and inductance—become major threats to IC performance and reliability, often hiding where netlist reviews cannot reach. Design teams need advanced visualization tools like heat maps, layer-based analysis and direct layout correlation to spot and fix these invisible problems fast. Using a structured, multi-level workflow and an integrated EDA tool environment helps teams cut debug time and boost first-pass silicon success.

Seeing what the netlist cannot reveal

“Why does the simulation say this will work, but the chip fails every time in silicon?” If you’ve ever asked this question after weeks of painstaking debug, you are not alone. For IC designers working at the bleeding edge—below 7 nm, with billions of transistors and stacked 3D structures—the hardest problems are often the ones you can’t see. Invisible enemies lurk in every layout: resistance, capacitance and inductance effects that don’t manifest in your schematic, but can dominate performance and threaten reliability.

Modern parasitic problems are no longer minor corrections—they are decisive factors. Once you are working at the 5 nm node, hidden parasitics can account for more than half of total signal delay, compared to just one-tenth at older process technologies. Industry analysis reveals the critical impact: parasitic-related failures in advanced silicon designs can cost development teams weeks of debugging effort per incident.

This is the reality driving innovative approaches to visualization and analysis that help engineers see what a netlist alone cannot. By putting “eyes” on parasitics, designers can finally move from educated guesswork to precise solutions.

The unseen challenge: How parasitics threaten design success

Parasitic effects are electrical interactions that arise from the physical realities of semiconductor fabrication—unintentional resistance, capacitance and inductance generated by routing, stacking and proximity of metal structures. These are not intrinsic parts of the schematic, but unwelcome guests within every layout.

Their impact grows exponentially with process scaling and architectural complexity. For example:

  • Differential pairs and high-speed signaling: PCIe, DDR and SerDes interfaces depend on perfectly matched signal paths. Even a small mismatch in parasitic capacitance—just 5%—can trigger significant bit errors or total link failures. Identifying and fixing these subtle imbalances can require weeks of repeated layout analysis.
  • RF and high-frequency circuits: Circuits operating above 20 GHz are exceedingly sensitive to parasitics, such as inductive resonances in ground connections that can degrade transceiver fidelity by 30% or more.
  • Complex 3D architectures: FinFET and GAAFET structures introduce multi-layer parasitic effects that aren’t visible in traditional 2D reviews, making timing closure a moving target.
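The differential-pair sensitivity called out above reduces to a simple matching check on extracted leg capacitances. A minimal sketch, with hypothetical net names and femtofarad values:

```python
# Flag differential pairs whose extracted leg capacitances mismatch by
# more than a sensitivity threshold. Nets and fF values are hypothetical.

def mismatch_pct(c_pos_ff: float, c_neg_ff: float) -> float:
    """Percent mismatch relative to the average leg capacitance."""
    avg = (c_pos_ff + c_neg_ff) / 2
    return abs(c_pos_ff - c_neg_ff) / avg * 100

pairs = {"pcie_tx0": (48.2, 51.1), "pcie_tx1": (50.0, 50.4)}
LIMIT_PCT = 5.0   # the ~5% sensitivity cited above

flagged = {net: round(mismatch_pct(*caps), 2)
           for net, caps in pairs.items()
           if mismatch_pct(*caps) > LIMIT_PCT}
```

The hard part in practice is not this arithmetic but locating which physical segments cause the imbalance, which is where layout-correlated visualization earns its keep.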

As stakes grow, traditional netlist-only methods are insufficient. The question becomes: How can engineers visualize, analyze and target the real root causes of performance failure within billions of intricate layout elements?

Elevating parasitic analysis: Making the invisible visible

To meet these needs, modern design workflows are evolving towards more comprehensive visualization and analysis techniques. Instead of reviewing static reports or text-based netlists, engineers use graphical representations that illuminate problem areas, quantify their impact and correlate electrical anomalies directly to their physical structures. A basic net visualization is illustrated in figure 1.

Figure 1. Net-level visualization example. A circuit layout visualization showing two interconnect traces. The image demonstrates net-level visualization, isolating and highlighting a specific net for detailed inspection while other elements remain visible in context.

Key capabilities in next-generation visualization environments include:

  • Heat maps: Intuitive color gradients instantly reveal high-resistance or high-capacitance hotspots.
  • Layer-based views: Designers can track parasitic paths across stacked metal and via layers, uncovering issues that escape 2D inspection.
  • Component-level highlighting: Engineers can pinpoint exactly which polygon or segment contributes to a problematic parasitic value, enabling precise fixes rather than broad re-designs.
  • Filtering and sorting: The ability to focus on specific nets, layers or parasitic types streamlines the hunt for critical contributors.
  • Direct layout correlation: Electrical values align with real physical structures, reducing the disconnect between simulation and manufacturing reality.

The value of visualization is not just in finding problems, but in reducing time and uncertainty. Broad overview capabilities combined with precise local analysis enable teams to move quickly from symptom to root cause—and from root cause to targeted solution.

Multi-level debugging: A hierarchical approach to effective analysis

Advanced parasitic workflows employ a structured, multi-level strategy, guiding users from global assessment to pinpoint measurements:

  1. Global net analysis: Identify potentially problematic nets and regions across the entire chip.
  2. Layer interaction review: Track how each net interacts with the multi-layer stack, uncovering inter-layer coupling or bottlenecks. Figure 2 shows an example layer-based view.
  3. Component-level inspection: Isolate individual parasitic values and their exact physical locations.

Figure 2. Layer-based analysis example. Circuit layout diagram displaying NET A across several interconnected layers. A legend identifies layer types and shows how each contributes to overall net performance.

This approach speeds up troubleshooting by allowing systematic narrowing—from thousands of possible causes to a handful of tangible fixes.

Categorizing and targeting parasitic impacts

A powerful parasitic analysis solution organizes extraction results into structured categories (figure 3) that reflect real physical properties:

  • Resistance and inductance sorted by layer, because these effects depend heavily on geometry and routing.
  • Coupling capacitance and mutual inductance categorized by net, aligning with how signals cross-talk and interact in densely packed circuits.

This enables designers to sort, filter and prioritize the highest-risk contributors. For example, sorting for maximum capacitance between nets on a SerDes channel highlights potential sources of bit errors faster than a manual scan through pages of raw numbers.
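That sort-and-filter step can be pictured as a few lines of list processing; the net names and capacitance values below are hypothetical extraction results:

```python
# Rank extracted coupling capacitances so the riskiest aggressors surface
# first instead of being buried in pages of raw numbers. Values are
# hypothetical, in femtofarads.

couplings = [
    ("serdes_txp", "clk_div2", 12.4),   # (victim, aggressor, fF)
    ("serdes_txp", "serdes_txn", 3.1),
    ("serdes_rxp", "vdd_mesh", 0.8),
    ("serdes_txn", "clk_div2", 11.9),
]

worst = sorted(couplings, key=lambda c: c[2], reverse=True)[:2]
# Here both top offenders share one aggressor, pointing at a single fix.
```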

Mapping and modifying physical structures for performance gains

Visualization environments bridge the divide between electrical anomalies and their specific physical causes. By highlighting polygon segments that drive high resistance, as shown in figures 3 and 4, engineers can immediately spot bottlenecks, narrow traces or problematic vias.

Figure 3. Resistance layout highlighting. Interconnect trace with a segment highlighted in orange and labeled “R=0.78” to demonstrate resistance layout highlighting.

Figure 4. Point-to-point resistance layout highlighting. Two interconnect traces; one marked with driver and receiver points. The path between them is highlighted to indicate point-to-point resistance for targeted inspection.

Capabilities like interactive resistance calculators help designers measure and report values between chosen pins in the layout viewer, making iterative optimization faster and more precise. Reports generated from these measurements ensure traceability and streamline workflow.
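At its core, a point-to-point resistance report of the kind figure 4 illustrates is a sum over the series segments along the routed path. A minimal sketch with hypothetical layer names and ohm values:

```python
# Point-to-point resistance between a driver and a receiver as the sum of
# per-segment resistances reported by extraction. Layer names and ohm
# values are hypothetical.

def path_resistance(segments) -> float:
    """segments: (layer, resistance_ohms) tuples along one routed path."""
    return sum(r for _layer, r in segments)

path = [("M2", 0.78), ("VIA2", 1.20), ("M3", 0.42),
        ("VIA3", 1.20), ("M4", 0.15)]
r_total = path_resistance(path)   # ohms, driver pin to receiver pin
```

In this toy path the vias dominate the total, which is the kind of insight a highlighted layout view makes obvious at a glance.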

For capacitance, visualizing exact segments responsible for coupling (figure 5) or intrinsic (figure 6) effects means improvements can be focused—such as modifying only the problem edge instead of rerouting the entire net.

Figure 5. Coupling capacitance visualization. Diagram showing two interconnects, NET A and NET B, with the coupling capacitance between them quantified and highlighted.

Figure 6. Intrinsic capacitance visualization. NET A with segments along its path highlighted to indicate intrinsic capacitance.

Color-driven detection: Heat maps reveal hotspots

Heat maps are a powerful tool for visually identifying and prioritizing parasitic issues. Color-coded gradients transform raw data into intuitive displays: red for problematic intensity, green or blue for optimal performance (figure 7).

Figure 7. Intrinsic capacitance heatmap visualization. Segments colored from red to yellow to blue, showing intrinsic capacitance levels along NET A and NET B.

Designers can set thresholds to reflect design-specific sensitivity, ensuring that attention is focused where the stakes are highest—such as moderate coupling capacitance in high-speed I/O blocks.

Reporting and connectivity: Detailed insights for optimization

Efficiently optimizing net performance depends on access to structured information, not just raw extraction data. Report-driven approaches present the hierarchy, connectivity and physical attributes of every net, along with all associated devices, ports and layer usage (figure 8).

Figure 8. Detailed net information and analysis results. Block diagram showing structured net information, with branches for Devices, Layers and Ports.

Switching between hierarchical and flat views helps teams analyze both the design logic and implementation details, all linked directly with the layout.

Parasitic-aware simulation: Unifying analysis and validation

Bringing schematic probing and full parasitic analysis into a single environment removes traditional workflow barriers (figure 9). Designers can launch simulations that account for every extracted parasitic effect—directly from the schematic, with results correlated to both physical layout and electrical performance.

Figure 9. Unified environment extraction and simulation workflow. Diagram showing flow from schematic to probed schematic with added parasitic elements, advancing to simulation waveform.

This integrated approach promotes faster project cycles, deeper collaboration between circuit and layout teams, and higher confidence in first-pass silicon results.

Siemens Calibre: Solving the challenge with industry-leading visualization

The visualization and analysis methodologies described above are brought to life in Siemens Calibre extraction and debug solutions. Leading-edge design teams worldwide rely on Calibre xRC, Calibre xACT and Calibre xACT 3D for performance-accurate parasitic extraction, backed by Calibre Interactive and Calibre RVE for advanced, user-friendly visualization.

Calibre’s multi-level approach delivers measurable gains: users report up to 50% reductions in parasitic debugging time, 35% improvements in first-pass silicon success and a 25% increase in critical path timing performance. Structured workflows start with broad overview and drill down to component-level fixes, leveraging heat maps, filtering, and direct layout correlation, all in a unified environment.

What sets Calibre apart is its deeply integrated environment: from schematic probing and parasitic extraction to simulation, every step is streamlined. Engineers can highlight specific nets, layers or physical structures, instantly visualize hotspots and generate automated reports for compliance or ongoing optimization. This combination of visualization, quantification and interaction empowers rapid debug and smarter design decisions.

For organizations designing high-speed interfaces, RF circuits or complex multi-layer SoCs, Calibre offers the flexibility to build custom workflows that emphasize capacitance mitigation, resistance targeting or specialized 3D analysis as needed. Its modular framework adapts to each project’s unique challenges, supporting continuous innovation as process nodes scale down and design architectures evolve.

Conclusion: Seeing the future of IC performance

In the relentless drive to smaller nodes and greater circuit complexity, invisible parasitic effects have become major roadblocks to performance, reliability and time-to-market. By adopting visualization-rich analysis and interactive debugging, IC design teams gain the clarity needed to see, understand and solve what traditional netlists alone never reveal.

The future of high-performance IC design depends on making the invisible visible—transforming data into actionable insight, and insight into proven results. With tools like Siemens Calibre leading the way, design teams can meet the challenge and realize the full potential of modern semiconductor innovation.

Ready to dive deeper into strategies for tackling hidden parasitic effects in advanced IC design?

Download the full technical paper, “Beyond the netlist: Visualizing the invisible enemies of IC performance,” to learn more.

About the author:

Omar Elabd is a Product Engineer at Siemens EDA, supporting Calibre extraction products. Based in Cairo, Egypt, Omar is an honors graduate of The American University in Cairo with a major in Electronics and Communication Engineering along with a minor in Business Administration. Omar specializes in developing customer-specific flows for Calibre extraction tools and focuses on expanding the edge of current technology to meet evolving industry demands. He can be reached at omar.elabd@siemens.com.

Also Read:

Protect against ESD by ensuring latch-up guard rings

Something New in Analog Test Automation

Tessent MemoryBIST Expands to Include NVRAM


Statically Verifying RTL Connectivity with Synopsys

by Bernard Murphy on 10-15-2025 at 6:00 am


Many years ago, not long after we first launched SpyGlass, I was looking around for new areas where we could apply static verification methods and was fortunate to meet Ralph Marlett, a guy (now friend) with extensive experience in DFT. Ralph joined us and went on to build the very capable SpyGlass DFT app. So capable that SpyGlass DFT is now integrated inside Synopsys’ TestMAX Advisor to check that your RTL is DFT-clean prior to test insertion. As a full block or system design stabilizes, you can use TestMAX Advisor technology to insert DFT structures and verify this modified RTL continues to meet test requirements following insertion. These post-insertion checks are also enabled through the SpyGlass technology. Static analysis proves to be incredibly important in shifting DFT verification left to avoid late-stage schedule surprises.

Modern DFT demands

Test today is much more complex than just scan-based test. We must still support scan but also boundary scan, memory built-in self-test (MBIST) and logic built-in self-test (LBIST). The BIST options are important for in-system testing and, especially now, for on-the-fly self-test in mission-critical applications like cars. In addition to meeting these needs, feeding test data and control in, and reading results back, across the design’s many scan chains must be handled through test compression and decompression logic.

In large SoCs test infrastructure is commonly built hierarchically, where scan chains, compression, MBIST, LBIST and other test logic roll up to test interfaces around IPs and subsystems, which in turn roll up to SoC-level interfaces. Together all this hierarchical test logic, plus connectivity with the functional elements that are being tested, becomes a very complex logic overlay on top of the mission mode system.

Figure: SoC Level Connectivity Verification Challenges

OK, a lot of work but you build it once and you’re ready, apart from some later stage fine-tuning? Not necessarily. Updates to the mission mode design, whether pre-implementation or late stage ECOs, commonly require updates to the DFT logic and connectivity.

DFT logic must be verified and regressed like any other logic. We already know that verifying mission mode functionality is very resource- and time-consuming. Using the same dynamic methods to verify DFT logic in addition to mission mode logic would explode schedules and resource demands. Further, dynamic testing can never prove that a path between two points does not exist (which some connectivity checks need to demonstrate), and formal methods are not effective at these circuit sizes. Some level of dynamic verification is still useful as a double-check but not as the main method.

Fortunately, DFT logic elements such as MBIST and JTAG controllers are pre-verified, so the great majority of verification can be reduced to a limited set of connectivity checks. These can be grouped into value propagation, path existence or non-existence, and conditional connectivity checks. Examples commonly include that for some specified test mode certain design nodes must be in a specified state or should not be tied off to a specified state, or that a sensitizable (or sensitized) path should exist or should not exist between certain nodes. These concepts can be extended further to test for conditional connectivity, for example that a pin should be tied to a certain state OR should be driven by a certain kind of element.

In short, most of the DFT functionality and its interaction with mission mode functionality can be verified through static connectivity checks. Once appropriate checks are defined, verification regressions should run in times comparable to those for other static checks.

The TestMAX approach

Connectivity checks for TestMAX are steered through Tcl commands as you might expect. When I first saw sample constraints files in the webinar demos they seemed rather complex, but the webinar hosts, Kiran Vittal (Executive Director, Product Management and Markets at Synopsys, and a fellow Atrenta alum) and Ayush Goyal (Sr. Staff R&D Engineer at Synopsys), pointed out that the way the constraints are constructed can easily be made design independent. No doubt such a constraints file would take some effort to set up the first time but then could be reused across many designs.

Here I’m sure I’m going to display how long it has been since I did anything of this nature, but I found the approach intriguing, somewhat similar in principle to SQL-like operations on a database. You build lists of objects for value tests or pairs of objects for path tests, and you define what is required or illegal in such cases. List building is based on name-matching (allowing wild cards). If designs stick to appropriate naming conventions (e.g. a PLL name contains PLL) then the constraints should work.

Conditional connectivity checks simply continue this theme. A first check might require a “1” on a certain class of nodes. Whatever cases fail this check are gathered in a second list and checked against another test, for example whether surviving nodes are driven by some node in a different list you have defined. And so on.
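TestMAX expresses these checks in Tcl; purely to illustrate the list-then-filter pattern described above, here is a language-neutral Python sketch with hypothetical node names, values and drivers:

```python
# List-then-filter connectivity checking, sketched in Python. Node names,
# test-mode values and drivers are hypothetical; the real flow uses Tcl.
import fnmatch

nodes = {                          # node -> (value in test mode, driver)
    "u_pll0/bypass_en": (1, "tap_ctrl/pll_byp"),
    "u_pll1/bypass_en": (0, "tap_ctrl/pll_byp"),
    "u_pll2/bypass_en": (0, "tie0_cell/Y"),
}

# Check 1: every node matching the wildcard pattern must sit at 1
# in test mode.
targets = [n for n in nodes if fnmatch.fnmatch(n, "*pll*/bypass_en")]
fails = [n for n in targets if nodes[n][0] != 1]

# Check 2 (conditional): a failing node is still acceptable if it is
# driven by a tie-off cell rather than by functional logic.
violations = [n for n in fails if not nodes[n][1].startswith("tie0")]
```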

Easy enough to understand. On performance, Ayush added that he knows of a design with over a billion flat instances and 80 million or more flip-flops in which they were able to run connectivity checks between 100 billion pairs of nodes in less than a day. Dynamic verification would have no chance of competing with that performance or with completeness of test.

Good stuff. You can register to watch the webinar HERE.

Also Read:

Why Choose PCIe 5.0 for Power, Performance and Bandwidth at the Edge?

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation

AI Everywhere in the Chip Lifecycle: Synopsys at AI Infra Summit 2025


Assertion IP (AIP) for Improved Design Verification

by Daniel Payne on 10-14-2025 at 10:00 am


Over the years, design reuse methodology created a market for Semiconductor IP (SIP); now, with formal techniques, there’s a need for Assertion IP (AIP), where each AIP is a reusable and configurable verification component used in hardware design to detect protocol and functional violations in a Design Under Test (DUT). LUBIS EDA is focused on formal services and tools, so I received an update on their approach to developing these AIPs and detecting corner-case bugs in high-risk IPs.

Before I jump into the details of the approach that LUBIS EDA uses, let’s first review how simulation-based verification differs from formal verification. With simulation, an engineer writes stimulus to cover all the known states of a design, hoping that the coverage is high enough. With formal verification, the formal tool figures out all the possible paths from inputs to outputs within a design.

Simulation-based verification

Formal-based verification
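A toy example makes the contrast concrete: simulation samples the input space and can miss a single-input bug, while an exhaustive (formal-style) check cannot. This is a deliberately tiny illustration of the coverage gap, not how a formal tool actually works:

```python
# An 8-bit "design" with a hypothetical bug at exactly one input value.
import random

def dut(x: int) -> int:
    return 0 if x == 231 else x + 1   # wrong only at x == 231

def spec(x: int) -> int:
    return x + 1

random.seed(0)
stimulus = random.sample(range(256), 40)    # a random "simulation" run
sim_found = any(dut(x) != spec(x) for x in stimulus)

# Exhaustively checking the whole input space always exposes the bug.
formal_found = any(dut(x) != spec(x) for x in range(256))
```

With only 40 of 256 inputs sampled, the simulation run has well under a 1-in-6 chance of hitting the bad input; the exhaustive sweep finds it every time.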

One approach used by LUBIS EDA is their in-house property generator, which works at the Electronic System Level (ESL) rather than Register Transfer Level (RTL). This enables them to deliver verification services that are faster, higher quality and more efficient. The property generator enables you to go from an abstract model to your AIP in a matter of minutes, which is a huge leap in verification productivity. Here’s what that flow looks like: first the abstract model is parsed and analyzed by the property generator, then the formal verification IP is created as SystemVerilog Assertions (SVA). These assertions check your design intent and provide full coverage of the functional behavior.

Property generator flow

Your abstract model at the ESL is written in C++ or SystemC and can be simulated to verify its behavior; the Property Generator reads in that code and generates the AIP for you. The assertions are then bound to your RTL design through the refinement step, which is supported by Large Language Models (LLMs) for a faster result. The assertions are human-readable and correct-by-construction, so you don’t need to have a dedicated assertion review session. Run your favorite formal tool in this flow and then look for any failures.

One example of applying this AIP approach is cryptographic hash functions like SHA-512. The following shows the C++ model on the left and, on the right, the generated property that covers a portion of the model.

Summary

How does this approach make formal verification more efficient? Verification engineers can apply formal techniques by manually writing assertions, but manually writing formal assertions takes time, is error-prone and requires expertise, so automating this step saves you engineering time and effort.

The generated Assertion IP (AIP) covers every possible scenario and stimulus to guarantee a bug-free design. This approach is also quite useful to help you verify blocks of logic or even complex IP cores.

Is your project under enormous time pressure and would you like to leverage the benefits of such an efficient approach? Then you should consider LUBIS EDA’s consulting services to achieve first-class SoC design quality. If you want to carry out your project yourself, there are also courses on formal verification that could help you work faster. LUBIS EDA’s website also has many useful blog articles on using formal verification techniques.

To take the next step, just contact the team at LUBIS EDA.

Related Blogs