

Webinar Preview – Addressing Functional ECOs for Mixed-Signal ASICs
by Mike Gianfagna on 09-11-2025 at 10:00 am


An engineering change order, or ECO, in the context of ASIC design is a way to modify or patch a design after layout without re-implementing the design from its starting point. There are many reasons to use an ECO strategy. Some examples include correcting errors found in post-synthesis verification, optimizing performance based on detailed parasitic effects, and responding efficiently to design enhancements. For mixed-signal ASIC design, the challenges become even greater due to the subtle interactions between the analog and digital circuits.
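To make the idea concrete, here is a minimal sketch of what an ECO patch conceptually does, applied to a toy netlist. The netlist representation, cell names, and patch format are invented for this illustration and do not represent any particular ECO tool.

```python
# Toy illustration of a functional ECO: patch a mapped netlist in place
# instead of re-implementing from RTL. Netlist format and cell names are
# invented for this example.
netlist = {
    "u1": {"type": "AND2", "inputs": ["a", "b"], "output": "n1"},
    "u2": {"type": "BUF",  "inputs": ["n1"],     "output": "y"},
}

def apply_eco(netlist, patch):
    """Insert new (or spare) cells and rewire existing pins per the patch."""
    netlist.update(patch["new_cells"])
    for cell, new_inputs in patch["rewires"].items():
        netlist[cell]["inputs"] = new_inputs
    return netlist

# Late functional fix: y should become (a & b) | c, so add an OR gate
# and re-point the output buffer at it.
patch = {
    "new_cells": {"u3": {"type": "OR2", "inputs": ["n1", "c"], "output": "n2"}},
    "rewires": {"u2": ["n2"]},
}
print(apply_eco(netlist, patch))
```

The point of the sketch is simply that an ECO touches a handful of cells and nets rather than re-running the full implementation flow.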

I recently had a chance to preview the content of a webinar presented by Easylogic in collaboration with SemiWiki that digs into these issues in significant detail. In under 30 minutes, I was treated to an excellent overview of the challenges that exist and how to address them with a well-defined strategy and flow. I learned a lot, and I believe you will, too. Watch the replay HERE. More on that in a moment, but first let me give you a few observations about addressing functional ECOs for mixed-signal ASICs.

The Webinar Presenter

The webinar is presented by Richard Chang. Richard brings over 30 years of experience in the VLSI industry to the webinar. He has held a variety of design and management positions and offers comprehensive expertise across the entire ASIC/SoC design cycle—from specification, design, simulation, and FPGA prototyping to tape-out and silicon bring-up.

Richard has a relaxed presentation style that is easy to follow. The topic of ECO strategies for mixed-signal ASICs can become quite complex, but Richard is able to keep the discussion at the right level to convey key points without getting bogged down in details. The entire webinar runs under 30 minutes. You will take away some valuable information from Richard’s presentation. Let’s look at a few of the topics he will cover.

Key Topics

Richard will begin with an overview of mixed-signal ASIC applications – the broad scope of use for this technology and some of the unique challenges designers face. The diagram below summarizes some of the challenges that he will explore.

The Complexity Behind Mixed Signal Design

Richard also reviews the various events that drive the need for ECOs in mixed-signal design. There are three broad categories here:

  • What drives manual ECOs at the netlist level
  • What types of changes result in frequent ECO requests
  • The impact of ECOs on DFT and layout

Richard will provide a lot of detail about all three of these areas with examples. This sets the stage for the portion of the webinar that explains how to manage ECO challenges with Easylogic’s fully automated and patented ECO solution.

Richard will provide excellent detail here to explain how Easylogic fits into the flow and what the resulting benefits are. His discussion will focus on four major themes:

  • RTL-based automatic netlist tracing
  • Achieving substantially smaller patches
  • Shortening the ECO turnaround time
  • Seamless design flow integration

He will provide plenty of detail on all these topics, including real design results. Richard will conclude his discussion by pointing out that there are hundreds of tapeout successes using Easylogic’s tools.

To Learn More

If ECOs are becoming a larger part of your design flow, you will want to take advantage of the advice provided in this webinar. You can learn more about Easylogic on SemiWiki here, or on the company’s website here. And you can watch the replay here. And that’s how you can learn about addressing functional ECOs for mixed-signal ASICs.

Also Read:

WEBINAR: Functional ECO Solution for Mixed-Signal ASIC Design

Easylogic at the 2025 Design Automation Conference #62DAC

Easy-Logic and Functional ECOs at #61DAC



The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)
by Lauro Rizzatti on 09-11-2025 at 6:00 am


Introduction: The Historical Roots of Hardware-Assisted Verification

The relentless pace of semiconductor innovation continues to follow an unstoppable trend: the exponential growth of transistor density within a given silicon area. This abundance of available semiconductor fabric has fueled the creativity of design teams, enabling ever more advanced systems-on-chip (SoCs). Yet the very scale that empowers new possibilities also imposes severe challenges on design verification.

As chips grew larger and more complex, the test environment required to validate them expanded proportionally in both scope and sophistication. These test workloads demanded longer execution times to achieve meaningful coverage. The combination of ballooning design sizes and heavier test workloads pushed traditional Hardware Description Language (HDL) simulation environments beyond their limits. In many cases, simulators were forced to swap designs in and out of host memory, creating bottlenecks that drastically slowed execution and reduced verification throughput.

Back in the early 1980s, a few pioneering startups sought alternatives beyond simulation engines. Zycad was among the first to experiment with dedicated hardware-based verification engines. While innovative, these early tools were limited in flexibility and had short lifespans. The breakthrough came shortly thereafter with the rise of reconfigurable platforms built around field-programmable gate arrays (FPGAs). By the mid-1980s, two trailblazing companies, Ikos Systems and Quickturn Design Systems, began developing the first generation of hardware-assisted verification (HAV) tools, including hardware emulators and FPGA-based prototypes. Though the early emulation platforms were large, heavy, unreliable, and expensive to acquire and operate, they introduced a new paradigm in design verification by enabling orders-of-magnitude speedup compared to simulation alone.

Early Deployment Mode: In-Circuit Emulation

The early adoption of hardware acceleration in the design verification process marked a pivotal shift in how semiconductor designs were tested and validated. The initial deployment mode for HAV was In-Circuit Emulation (ICE). This approach offered two key breakthroughs.

First, the hardware-based verification engine enabled execution of the design-under-test (DUT) at speeds several orders of magnitude faster than traditional HDL simulation environments. Whereas HDL simulators typically operated (and still operate) at effective frequencies in the tens or hundreds of hertz, hardware-assisted platforms could run at megahertz-level speeds, enabling verification of much larger and more complex designs within practical timeframes.
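To put that gap in perspective, a quick back-of-the-envelope calculation (the cycle count and clock rates below are illustrative assumptions, not figures from the article) shows why simulation alone became impractical for long workloads.

```python
# Illustrative back-of-the-envelope comparison; the cycle count and clock
# rates are assumed, not taken from the article.
cycles = 1_000_000_000      # e.g., one billion DUT cycles for a boot scenario
sim_hz = 100                # HDL simulator: on the order of 100 effective Hz
emu_hz = 1_000_000          # emulator: on the order of 1 MHz

print(f"Simulation: ~{cycles / sim_hz / 86_400:.0f} days")    # ~116 days
print(f"Emulation:  ~{cycles / emu_hz / 60:.0f} minutes")     # ~17 minutes
```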

Second, ICE made it possible to drive the DUT with real-world traffic, rather than relying solely on artificial stimuli such as software-based test benches or manually crafted test vectors. By connecting the emulator directly into the socket of the actual target system, it could validate the behavior of the design under realistic operating conditions. This not only improved the thoroughness of functional verification—vastly more testing could be executed in the same timeframe—but also its accuracy, because the fidelity of a real-world testbed is not achievable with a virtual testbench. Thoroughness and accuracy reduced the risk of taping out a faulty design and helped avoid a massive impact on the financial bottom line.

The Devil in the Clock Domains: Need for Speed Adapters

From the outset, a fundamental challenge in ICE deployment became apparent: the inherent clock-speed mismatch between the target system and the DUT hosted on the HAV platform. Target systems—such as processor boards, I/O peripherals, or custom development environments—operate at full production speeds, typically ranging from hundreds of megahertz to several gigahertz. In contrast, the emulated DUT runs at much lower frequencies, often only a few megahertz, constrained by the intrinsic limitations of hardware emulation platforms.

This vast timing disparity, often spanning three or more orders of magnitude, makes cycle-accurate interaction impossible, resulting in potential data loss, synchronization issues, and non-functional behavior. To address this, speed adapters were introduced as an intermediary layer. Conceptually implemented using FIFO buffers, these hardware components were inserted between the emulator’s I/O and the target system to decouple the asynchronous, high-speed nature of the real world from the deterministic, slower execution of the DUT.
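The sketch below illustrates the decoupling idea in its simplest form: a bounded FIFO with back-pressure between a fast producer (the target system) and a slow consumer (the emulated DUT). It is a conceptual toy model, not an actual speed adapter implementation.

```python
from collections import deque

class SpeedAdapter:
    """Toy model of a speed adapter: a bounded FIFO that decouples a fast
    target system from a slow emulated DUT. Conceptual sketch only."""
    def __init__(self, depth=4):
        self.fifo = deque()
        self.depth = depth

    def push(self, pkt):
        # Called at target-system speed; signals back-pressure when full.
        if len(self.fifo) >= self.depth:
            return False            # target side must hold off / retry
        self.fifo.append(pkt)
        return True

    def pop(self):
        # Called at the much slower emulation clock rate.
        return self.fifo.popleft() if self.fifo else None

adapter = SpeedAdapter(depth=4)
held_off = [p for p in range(10) if not adapter.push(p)]   # fast-side burst
drained = [adapter.pop() for _ in range(4)]                # slow-side drain
print(drained, "back-pressured:", held_off)
```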

Early Implementations of Speed Adapters (1985-1995)

ICE promised dramatic improvements in verification productivity by virtue of high performance and accurate test workloads. However, speed adapters introduced challenges of their own in flexibility, scalability, reusability, single-user operation, debugging efficiency, remote access, and reliability.

Lack of Flexibility

Speed adapters are protocol-specific hardware implementations, each designed to support a particular interface standard, such as PCIe, USB, or Ethernet. This made them inflexible and non-generic. Implementing a new adapter for a different protocol often required custom engineering work, firmware development, and precise synchronization logic. Even small variations in protocol versions or signal timing could lead to incompatibilities. As a result, the setup process became complex, error-prone, and time-consuming.

Limited Reusability

Each speed adapter was essentially a one-off solution, tailored for a specific interface and usage scenario. Reusing an adapter across projects, even those with similar hardware, often proved impractical due to slight architectural or timing differences. Furthermore, since these adapters were fixed-function hardware, they did not allow for corner-case testing, protocol tweaking, or exploratory “what-if” analysis. This rigidity hampered their usefulness in iterative or exploratory verification workflows.

Frustrating and Error-prone Design Debug

One of the most serious drawbacks of ICE mode was the difficulty of DUT debugging. When the target system drove the design under test, the behavior of the design under verification (DUV) became non-deterministic. A bug might appear in one run and vanish in the next, making root cause analysis extremely difficult. This lack of repeatability stemmed from the asynchronous, event-driven nature of the interaction between the real system and the slower emulator. Without deterministic control over inputs and timing, capturing and tracing a failure became a frustrating and prolonged process.

Cumbersome Remote Access

In increasingly distributed engineering environments, remote accessibility became a key requirement. ICE mode, however, suffered from a fundamental limitation: it required physical access to plug the target system into, or unplug it from, the emulator. Without someone on-site, remote teams were effectively blocked from initiating or modifying test sessions, creating a bottleneck for globally distributed development teams and undermining continuous integration workflows.

Reliability Risks and Maintenance Overhead

Like any hardware, speed adapters had a finite mean-time-between-failures (MTBF). A malfunctioning adapter might introduce intermittent or misleading behavior, leading verification engineers to chase phantom bugs in the DUT when the issue sat in the speed adapter hardware. This could significantly delay debug cycles and erode confidence in the verification platform. As a preventative measure, regular maintenance and validation of the adapters was required, adding operational overhead and further complicating the test setup.

The Virtual Takeover: The Shift from ICE to Virtual Verification (1995-2015)

Within just a few years, the limitations of the first wave of speed adapters became apparent. The rigidity and simplicity of their designs limited their scope to managing data traffic between two clock domains. The inability to handle flow control, packet integrity, and full system synchronization created significant bottlenecks that could not scale with the growing complexity of SoCs. As a result, the EDA industry shifted its focus toward a more sustainable approach: virtualization.

ICE gradually lost its appeal and ceased to be the default method for system verification. It was pushed to the final stages of validation, used mainly for full system testing with real-world traffic just before tape-out.

Meanwhile, a promising approach emerged in the form of transaction-based emulation, often referred to as virtual ICE or transaction-level modeling (TLM). Instead of driving the DUT with physical target systems via physical speed adapters, this method used software-based virtual interfaces of physical speed adapters, akin to digital twins, to drive and monitor the DUV through high-level transactions.

This shift from bit-signal-accurate physical emulation to protocol-accurate virtual emulation marked a critical turning point in the evolution of HAV platforms. It enabled a new class of verification use modes that were more flexible, broader in application, and better aligned with modern SoC design methodologies.

The advantages of this shift were numerous and transformative:

  • Higher Performance: By replacing cycle-accurate pin-level I/O with abstracted transactions, the emulator could operate at significantly higher effective speeds, enabling faster verification cycles.
  • Greater Debug Visibility: With all communication occurring in the digital domain, in sync with the emulator, the test environment became deterministic, making it easier to log, trace, and debug data flows without intrusive probes or external logic analyzers.
  • Simplified Setup: Virtual platforms eliminated the need for complex cabling, custom sockets, or fragile speed adapters, making emulation setups more robust and scalable.
  • Remote Accessibility: Engineers could now run emulations from anywhere via remote access to the host workstation, making collaboration easier across geographies.
  • Software Co-Verification: Perhaps most importantly, transaction-based emulation enabled tight integration with embedded software environments. Developers could boot operating systems, run production firmware, and validate complete software stacks alongside hardware, long before first silicon became available.

Collectively, these breakthroughs elevated HAV platforms from a specialized tool to an indispensable cornerstone of the verification toolbox. By the late 2000s, no serious SoC development could achieve predictable schedules and quality without leveraging virtualized verification methodologies.

Third Generation of Speed Adapters (2015-Present)

Over the past decade, the EDA industry, recognizing the unique testing capabilities of ICE, revisited the technology and committed significant effort to overcoming the shortcomings of earlier generations of speed adapters.

The focus shifted toward mitigating critical bottlenecks that had long constrained flexibility, scalability, debugging accuracy, and system fidelity. This third generation of speed adapters introduced greater configurability, smarter buffering techniques, and a range of advanced mechanisms designed to bridge the gap between functional verification and true system-level validation. These enhancements paved the way for broader adoption of ICE in the verification of increasingly complex SoCs.

Major Enhancements of Third-Generation Speed Adapters

Enhanced Design Debug

Modern speed adapters have significantly enhanced debug capabilities in ICE environments. Protocol analyzers and monitors are now instantiated on both the high-speed and low-speed sides of the adapter, enabling bidirectional traffic observation. By correlating activity across these interfaces, the system provides a protocol-aware, non-intrusive, high-level debug environment that was previously unattainable in ICE. See Figure 1.

Figure 1: State-of-the-art Speed Adaptor Solution (Source: Synopsys)

This approach mirrors what has long been available in virtual platforms. In virtual environments, whether handling Ethernet packets or PCIe transactions, transactors include C-based monitoring code that inspects the traffic. The results are displayed in a protocol-aware viewer, allowing engineers to analyze packet-level activity directly, similar to what an Ethernet packet sniffer would provide, without resorting to low-level waveform dumps. The result is improved debug efficiency and faster root-cause analysis in hardware-assisted verification.

Verification via Real PHY and Interoperability

All modern peripheral interfaces, whether PCI Express, USB, Ethernet, CXL, or others, are built on two fundamental blocks: a controller and a physical layer (PHY).

  • The controller is a purely digital block that implements the logic of the communication protocol: encoding, packet handling, flow control, and error detection.
  • The PHY is a mixed-signal design, bridging the digital and analog worlds. It manages voltage levels, current drive, impedance, timing margins, and the electrical signaling that allows data to move across pins, connectors, and transmission lines.

In transaction-level flows, the PHY cannot be accurately represented. To work around this limitation, the PHY is substituted with a simplified or “fake” model that allows basic register programming but omits the critical analog behaviors that dominate real-world operation. Such models cannot uncover issues that manifest only at the physical interface.
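A simple way to see the blind spot is to compare a stub PHY, which always reports a good link, with a model in which training can fail on marginal electrical conditions. The classes and threshold below are assumptions made purely for illustration.

```python
import random

class StubPHY:
    """'Fake' PHY used in transaction-level flows: register programming
    works, but link training always succeeds because no electrical
    behavior is modeled. Illustrative sketch only."""
    def __init__(self):
        self.regs = {}

    def write_reg(self, addr, value):
        self.regs[addr] = value

    def link_train(self):
        return True                      # unconditionally "links up"

class RealPHYStandIn:
    """Stand-in for a real PHY where training can fail on marginal
    electrical conditions; the threshold is an assumed example value."""
    def link_train(self, eye_margin_ps):
        return eye_margin_ps > 12 and random.random() > 0.05

print(StubPHY().link_train())            # always True
phy = RealPHYStandIn()
print(phy.link_train(eye_margin_ps=10))  # insufficient margin: fails
print(phy.link_train(eye_margin_ps=20))  # healthy margin: usually passes
```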

To overcome these blind spots, modern speed adapters integrate real PHYs into the hardware-assisted verification flow. By introducing genuine physical interfaces into the loop, design teams can validate their DUT against real-world conditions rather than abstracted models. This capability brings several tangible benefits:

  • Accurate link training and initialization – protocols like PCIe, CXL, and USB require precise electrical handshakes to establish communication. These can only be exercised with a real PHY.
  • Timing and signal integrity validation – engineers can detect marginal failures, jitter, or power-related issues invisible to fake models.
  • Protocol compliance at the electrical layer – ensures interoperability with other vendors’ devices and adherence to industry standards.
  • Higher silicon confidence – by testing against true physical interfaces, teams dramatically reduce the risk of surprises in silicon bring-up.

Today’s design teams are confronted with two very different, yet equally critical, challenges. On one hand, they must continue to validate and support legacy interfaces that remain essential in many deployed systems. In these cases, speed adapters can integrate an FPGA that implements a legacy PHY, enabling seamless connection to the external environment and ensuring backward compatibility with established standards.

On the other hand, teams are also working with advanced, next-generation protocols that are still being defined and refined. For these emerging standards, speed adapters incorporate real PHY IP test chips, providing direct access to the real-world analog behaviors that virtual models cannot capture.

Companies like Synopsys, with expertise in both PHY IP and hardware-assisted verification, are at the forefront of this shift. Their solutions allow design teams to test interoperability earlier, accelerate development cycles, and bring new products to market with far greater confidence.

System Validation Server

For protocols such as PCIe, relying on a standard host server as a test environment is not feasible. The limitation stems from the server’s BIOS, which enforces strict timeout settings. These timeouts are easily exceeded by the relatively long response times of an emulation system. Once triggered, the timeouts cause the validation process to stall or hang, preventing meaningful progress.

To achieve a complete ICE solution, this challenge must be addressed head-on. The answer lies in a purpose-built System Validation Server (see Fig. 2) equipped with a modified BIOS. By removing or adjusting the restrictive timeout parameters, the server can operate seamlessly with the slower responses of the emulated design.

Figure 2: System Validation Server for In-Circuit Emulation (Source: Synopsys)

This out-of-the-box solution offers immediate, practical benefits. Instead of spending months in back-and-forth iterations with an IT department to obtain and configure a host machine with a customized BIOS, validation teams can deploy a ready-made server that works from day one. The result is a dramatic reduction in setup overhead, faster time-to-validation, and a more reliable environment for exercising advanced protocols like PCIe under real-world conditions.

Ultra-High-Bandwidth Communication Channel

One of the most transformative advances in modern speed adapters is the introduction of ultra-high-bandwidth communication channels between the emulator and the adapter. Today’s leading-edge solutions can sustain throughput levels of up to 100 Gbps, enabling them to increase the overall validation throughput.

By approaching the communication rates of the real silicon, speed adapters allow engineers to stress-test SoCs under realistic workloads, validate protocol compliance at line rate, and observe system behavior under continuous, heavy traffic conditions.

Moreover, this capability ensures that networking-intensive applications, such as those found in data centers, 5G infrastructure, and high-performance computing systems, can be tested in environments that closely mimic deployment scenarios. The result is a dramatic increase in confidence that designs will not only function but also perform optimally once in the field.

Multi-user Deployment

A single speed adapter can be logically partitioned to support concurrent multi-user operation. The adapter’s port resources can be allocated in their entirety to a single user, or subdivided into up to three independent partitions, each assigned to a different user.

For instance, a 12-port Ethernet speed adapter may be configured as a monolithic resource (all 12 ports mapped to one user) or segmented into three logical groups of four ports each, enabling three users to access discrete subsets of the adapter in parallel.

The same partitioning capability applies to PCIe interfaces: the adapter can expose up to three independent PCIe links, each operating in isolation and assigned to a separate user. These features (resource partitioning, port multiplexing, and independent link allocation) are natively supported in the architecture of the speed adapter.
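As a rough illustration of this partitioning policy (the function and user names are hypothetical, not part of any adapter API), the 12-port Ethernet example can be expressed as follows.

```python
def partition_ports(ports, users):
    """Split an adapter's ports into up to three equal partitions, one per
    user. A hypothetical illustration of the allocation policy, not an API."""
    if not 1 <= len(users) <= 3 or len(ports) % len(users):
        raise ValueError("supports 1-3 users with equally sized port groups")
    size = len(ports) // len(users)
    return {user: ports[i * size:(i + 1) * size] for i, user in enumerate(users)}

eth_ports = [f"eth{i}" for i in range(12)]            # 12-port Ethernet adapter
print(partition_ports(eth_ports, ["team_a"]))          # monolithic: all 12 ports
print(partition_ports(eth_ports, ["a", "b", "c"]))     # three groups of 4 ports
```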

Advanced Buffering and Flow-Control Techniques

Refined buffering and flow-control schemes eliminated packet drops and ensured deterministic behavior across fast and slow clock domains. They also allowed speed adapters to scale with large verification workloads.

Conclusion

Today’s speed adapters combine flexible multi-user deployment, stable and exceptionally fast high-speed links, deterministic flow control, improved DUT debuggability, and proven resilience under continuous datacenter workloads. With these advances, third-generation speed adapters have moved ICE from a niche verification mode into a mainstream, indispensable tool for system validation.

Today’s emulation platforms configured with this third generation of speed adapters finally bridge the gap between functional verification—which validates DUT functionality and I/O protocols—and system-level validation, which ensures that designs interact correctly with the physical world. This holistic approach closes a long-standing blind spot in hardware-assisted verification, enabling faster debug cycles, higher-quality silicon, and first-pass success in increasingly complex SoCs.

Also Read:

eBook on Mastering AI Chip Complexity: Pathways to First-Pass Silicon Success

448G: Ready or not, here it comes!

Synopsys Webinar – Enabling Multi-Die Design with Intel



Tessent MemoryBIST Expands to Include NVRAM
by Mike Gianfagna on 09-10-2025 at 10:00 am


The concept of built-in self-test for electronics has been around for a while. An article in Electronic Design from 1996 declared that “built-in self-test (BIST) is nothing new.” The memory subsystem is a particularly large and complex part of any semiconductor design, and it’s one that can be particularly vexing to test. Design teams have therefore applied the concepts of BIST to memories for quite a while. Memory BIST has many advantages, including better reliability, reduced manufacturing costs, and enhanced system performance. The complexity of implementing an effective memory BIST strategy has become quite challenging, however. The huge size and performance demands of memory systems contribute to this, as does the additional complexity of 3D IC design.

Siemens Digital Industries Software has been developing the Tessent™ MemoryBIST software and IP to deliver a complete solution for at-speed test, diagnosis, repair, debug and characterization of silicon memories of all types. The ability to address this broad class of problem with one comprehensive solution provides significant benefits. This platform has recently been extended to handle the unique requirements of embedded non-volatile RAM, or NVRAM. Since embedded NVRAM use is on the rise, this is a significant addition. Let’s examine the footprint of what Siemens delivers with a special focus on how Tessent MemoryBIST expands to include NVRAM.

Tessent MemoryBIST Overview

One of the reasons this platform is the industry-leading solution for memory built-in self-test is its hierarchical architecture, which enables the addition of built-in self-test and self-repair capabilities at both the individual core level and the top level. Both software and enabling IP are included.

The system delivers on-chip generated test vectors to memories at application clock frequencies. Tessent MemoryBIST controllers are configurable to support a variety of memory types, as well as a range of memory timing interfaces and port configurations. These are accessed and controlled through an IEEE 1687-2014 (IJTAG) network. This highly configurable network can access all Tessent IP and support any third-party IJTAG-compliant instruments. There is a lifecycle management aspect to this as well, since controllers can be accessed throughout the life of the device, including manufacturing test, silicon debug and in-system test. The figure below illustrates the hierarchical Tessent MemoryBIST infrastructure.

Hierarchical Tessent MemoryBIST infrastructure
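Conceptually, the hierarchy can be pictured as per-core controllers plus a top-level controller, each reachable through the access network. The Python sketch below is only a mental model of that structure; the class and method names are invented and are not the Tessent API.

```python
class MemoryBistController:
    """Conceptual stand-in for a memory BIST controller reachable through an
    IJTAG-style access network; not the actual Tessent API."""
    def __init__(self, name, memories):
        self.name, self.memories = name, memories

    def run_test(self, algorithm="march"):
        # Real controllers generate at-speed vectors on chip; here we just
        # report which memories a given controller would exercise.
        return {mem: f"{algorithm}: pass" for mem in self.memories}

# Hierarchy: per-core controllers plus a top-level controller.
controllers = {
    "cpu_core0": MemoryBistController("cpu_core0", ["l1i", "l1d"]),
    "cpu_core1": MemoryBistController("cpu_core1", ["l1i", "l1d"]),
    "top":       MemoryBistController("top", ["l2", "rom"]),
}
print({name: ctrl.run_test() for name, ctrl in controllers.items()})
```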

There are many benefits provided by a system like this. Here are some:

  • Flexible and automated BIST IP integration, access network integration and pattern validation shorten time-to-market
  • Resource sharing and flow integration with Tessent LogicBIST and Tessent TestKompress reduce overall DFT cost and increase defect coverage
  • The option to use field programmable algorithm specifications allows complete control of test quality and test time trade-offs
  • User controllable area and test time trade-off options enable product-specific test cost optimization
  • On-chip global eFuse management and optional non-volatile memory test capability reduce overall manufacturing costs
  • Desktop-based test debug and characterization speed time-to-market
  • Customizable pass/fail criteria provided by the ECC option enhances yield and reliability
  • Works with ECC detection/correction capabilities to safeguard against aging defects

A Look at NVRAM Support

Thanks in part to the ubiquitous use of AI workloads in just about every design, the need for embedded NVRAM is growing significantly. While Flash has been the go-to approach for years, this technology doesn’t scale to advanced process nodes, and so several new NVRAM technologies are entering the market.

Etienne Racine

NVRAMs have very different test criteria and requirements, and so the need for BIST automation here is quite significant. That’s why the recent expansion of Tessent MemoryBIST to include NVRAM is so important. I had the opportunity to speak with Etienne Racine, the Product Manager for Tessent MemoryBIST at Siemens Digital Industries Software. Etienne has been working on silicon test solutions for almost 18 years, at Siemens and at Mentor before that. Prior to Siemens and Mentor, he spent over a decade working on advanced test solutions, so Etienne brought a wealth of knowledge to the discussion.

Etienne explained that until recently there has been no well-developed BIST support for NVRAMs. This is why the additional capabilities of the Tessent suite are significant. We discussed access protocols, and Etienne explained that the commands to operate embedded NVRAMs are quite different from those for embedded SRAMs. This means that the way BIST interfaces to the NVRAM must change. We also discussed the need to support trimming and trimming sequences, a process similar to calibration.

He went on to discuss the deployment of MRAM, one of the newer embedded NVRAM technologies designed to replace Flash. For MRAM, a trimming, or calibration, step must be performed before the device can be used reliably. The Tessent MemoryBIST NVM option provides these capabilities in an automated way. He also discussed the ability to define the waveforms needed to test new NVRAM technologies. These waveforms can be described in the Tessent platform, paving the way for more efficient BIST strategies.
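The ordering constraint Etienne described, calibrate first and only then test, can be captured in a tiny sketch. The device model and method names below are hypothetical and do not reflect actual Tessent MemoryBIST commands.

```python
class HypotheticalMram:
    """Hypothetical MRAM model; method names are assumptions for the sketch,
    not Tessent MemoryBIST commands."""
    def __init__(self):
        self.trimmed = False

    def apply_trim_sequence(self):
        self.trimmed = True        # calibrate reference/sense levels first

    def run_bist(self, waveform):
        if not self.trimmed:
            raise RuntimeError("trim/calibration must precede memory test")
        return {"waveform": waveform, "result": "pass"}

dev = HypotheticalMram()
dev.apply_trim_sequence()          # trimming comes first, as with MRAM bring-up
print(dev.run_bist(waveform="nvm_write_read"))
```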

Looking ahead, Tessent MemoryBIST also provides support for testing memories that are external to the device containing the BIST IP. Support is provided for both stand-alone memory packages at the board level and 2.5/3D packages consisting of one or more memory dies stacked on top of a separate logic die. This strategy is summarized in the diagram below.

Testing 3D stacked memory die

To Learn More

My conversation with Etienne touched on many topics. There are a lot more capabilities delivered by the Tessent MemoryBIST software and IP. You can find many well-packaged solutions to your MBIST requirements here. The good news is that a comprehensive overview of the platform is available. You can access your copy of Tessent MemoryBIST, memory self-test, repair and debug here. And that’s how Tessent MemoryBIST expands to include NVRAM.

Also Read:

Smart Verification for Complex UCIe Multi-Die Architectures

Orchestrating IC verification: Harmonize complexity for faster time-to-market

Perforce and Siemens at #62DAC



The Importance of Productizing AI. Everyday Examples
by Bernard Murphy on 09-10-2025 at 6:00 am

[Image: image generation fail]

Keeping up with the furious pace of AI innovation probably doesn’t allow a lot of time for deep analysis across many use cases. However, I can’t help feeling we’re sacrificing quality, and ultimately end-user acceptance of AI, by prioritizing new capabilities over rigorous productization. I am certain that product companies do rigorous testing and quality control, but modeling the human element in AI interaction, through prompts or voice commands for example, still seems to be more art than science, an area where we don’t yet understand the safe boundaries between how we communicate with AI and how AI responds. To illustrate, I’m sharing a couple of my own experiences, one in image generation and one in voice control for music in my car.

Image generation

I regularly use AI to generate images for my blogs and more recently for my website because I can generate exactly what I want and fine-tune as needed through refined prompts. At least that’s my expectation as an average non-expert user. Building a trial website image revealed a gap between expectation and practice, as should be clear in the image above.

I didn’t set out to create an image of a guy with three hands. My prompt was something like “show an excited storyteller standing on a pile of ideas”. A little abstract, but nothing odd about the concept. The first image the generator built was OK but not quite what I wanted, so I added a couple of modest prompt tweaks. The image I’m showing came from the second tweak. The generator hallucinated and I have no idea why, nor did I know how to correct this monster.

I switched to a different starting prompt and then to a different image generator (I tried GPT-4o and DALL-E 3) but was still shooting in the dark. Without any understanding of safe boundaries there was no guarantee I wouldn’t run into a different hallucination. The obvious concern here is how this affected my productivity. My goal was to quickly generate a decent image conveying my concept, then move on with the rest of my website-building task. Instead I spent the best part of a day experimenting blindly with the image generation tool.

Voice control for Apple Music in my new car

I write frequently about automotive tech so it seemed only right that on a recent car purchase I should go for a model with all the latest automation options, allowing me to comment from experience, not just from theory. I’m not going to name the car model, but this is a premium brand, known for excellent quality.

There are many AI features in the car, including advanced driver assistance, which I haven’t yet started to explore (maybe in later blogs). Here I just want to talk about controlling music selection through CarPlay using voice control, an important safety feature to minimize driver distraction. The alternative control surface is an inviting center console touch screen, which is not where I should be looking when I’m driving; I know because I drifted partly out of lane a couple of times driving back from the dealer. Not going to make that mistake again.

Now I know I should only be using voice controls when driving. But when my voice commands aren’t working it is natural to look at the screen to try to figure out the problem. That was happening to me a lot when driving back from the dealer. I eventually learned what I was doing wrong, which pointed to more opportunities for improved productization.

First, a few of the tracks stored on my phone are corrupt – no idea why. When Apple Music hit a corrupt track it stopped playing. I initially guessed something about the app was broken when running through CarPlay. Second, and more importantly, I didn’t know what voice commands I could reasonably use with Siri. An illusion encouraged by chat and voice control engines is that they can understand anything. In fact they do a very good job of mimicking understanding within a bounded range, but they don’t always make it apparent when they are crossing from correct interpretation of our intent to guesses, to simply saying they don’t understand, or to ignoring the request.

As an infrequent Siri user, the tricks I had to learn were first to use voice control exclusively when driving, and second to ask Siri what commands she can understand. I also learned that Apple Music might get stuck on a (presumably corrupt) track but wouldn’t tell me why it had stopped. Now I know that if I give a command and don’t hear anything, I have to tell it to skip a track.

On the plus side as a Siri novice, I learned that Siri knows a lot about music genres, so I don’t have to limit a request to a specific song or album. I can ask for baroque music or rock and it will do something intelligent with that request. Pretty cool.

Takeaways

What does any of this have to do with the larger world of AI applications? The systems used in these two examples are built on AI platforms. The image generation (diffusion) part is different, but interpreting the prompt is regular LLM technology. Voice-based commands also build on LLMs, except that the input is audio rather than text. So my experiences in these simple examples could be indicative of what we might encounter in many AI applications.

You might not consider what I found to be major issues, though for me creating confusion while driving is a potentially big problem. My major takeaway here was that I had to learn more about Siri’s capabilities before I could use CarPlay effectively while still being a safe driver. Not quite the trouble-free experience I had expected.

When generating images I didn’t appreciate the productivity hit required by trial-and-error exploration through prompts. I will now be a lot more cautious in using image generation and I will be more cynical about products which claim to simplify generating exactly the image I want.

What could product builders like my auto manufacturer, smartphone builders, and image generation products do to help more? My suggestions would be more guard bands to separate accurate interpretation from guesses, more methods to validate/double-check interpretation, and more active feedback when something goes wrong. Improved productization on top of already impressive AI capabilities could limit negative experiences (and accidents) and help AI transition from neat concepts to full production acceptance.

One more important area where product builders might contribute is to help us refine our own understanding of what AI can and cannot do. Unfortunately too many users still see AI as magical, thinking it can do whatever they want, maybe even better than they can imagine. We need to have drilled into us that AI is just a technology like any other technology, very capable within its own limitations, and that we must invest in understanding how to operate it correctly. Then we will avoid disappointment, or worse dismissing everything AI as hype, when instead the real problem is in our over-inflated expectations.

Also Read:

Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside

CEO Interview with Carlos Pardo of KD

Arm Reveals Zena Automotive Compute Subsystem



Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet
by Jonah McLeod on 09-09-2025 at 10:00 am


Can cash and IBM collaboration put Japan into premier-league chipmaking? Rapidus is betting billions it can.

When Japan announced the creation of Rapidus in 2022, the news was met with a mix of enthusiasm and skepticism. The company would enter the market at a time of escalating demand for semiconductor fabrication capacity to power the build-out of AI/ML data centers worldwide, and amid international political pressure for each region to secure locally sourced production. Here was a government-backed consortium with a bold mandate: return Japan to the forefront of advanced logic manufacturing, producing 2 nm chips by 2027 and positioning itself for nodes beyond 2 nm with IBM’s support (Atsuyoshi Koike, CEO, Rapidus Corp., August 26 at the Hot Chips 2025 conference). For an industry dominated by TSMC, Samsung, and Intel, this looked like a geopolitical counterweight.

The model of commercial–government co-investment that catalyzed Taiwan’s semiconductor ascent was exemplified on February 21, 1987, with the founding of Taiwan Semiconductor Manufacturing Company (TSMC) in Hsinchu, Taiwan. In a 2005 speech at MIT Sloan, TSMC founder Dr. Morris Chang described the capital structure that launched the company: “Philips put up about 27 percent, the [Taiwan] government put up 48 percent, and I started a three- to four-month campaign to round up the other 25 percent.” This strategic alignment of public and private capital laid the foundation for what would become the world’s most advanced semiconductor foundry.

TSMC’s emergence coincided with the inflection point of the IBM PC era, as global PC shipments surged from roughly 20 million units in 1988 to over 100 million by 1998—a 5× volume increase. The company’s pure-play foundry model was uniquely positioned to capitalize on this demand, enabling fabless innovators to scale rapidly without the burden of fabrication infrastructure. TSMC didn’t just ride the wave—it became the foundation on which the PC boom was built.

Fast forward to 2025, and Rapidus has achieved several milestones—pilot 2 nm fab activity in Hokkaido (Rapidus, 2025), successful nanosheet transistor demonstrations with IBM (Koike, Hot Chips 2025), and ecosystem partnerships with Siemens, imec, and Tenstorrent (Rapidus, 2025 press release). But behind the scenes, an unusual element of their strategy is drawing attention in the industry: the company is reportedly offering substantial upfront payments to IP vendors to secure early support (SemiWiki, Jul 8, 2025). This approach diverges sharply from the traditional royalty-driven model used by TSMC, and it raises fundamental questions about sustainability, competitiveness, and whether money can substitute for ecosystem momentum.

TSMC Juggernaut

In the conventional foundry model, IP vendors—companies providing essential building blocks like memory controllers, I/O subsystems, or physical libraries—port their IP to a new process node based on expected customer demand. They absorb some upfront engineering cost in exchange for future royalties once customers tape out silicon at volume.

This model thrives at TSMC, where the scale is undeniable. Moving from N3 to N2, for example, was straightforward for many vendors because the ecosystem is already in place. The network effect here is powerful: customers prefer the node with the richest IP catalog, and IP vendors support the node with the largest customer base. It becomes a self-reinforcing cycle—what we might call the TSMC Snowball.

Morris Chang, TSMC’s founder, has emphasized that Taiwan’s long-term ascendancy in semiconductor manufacturing came not just from financial capital, but from structural advantages. As he noted in a recent MIT talk, success required a steady supply of well-trained technicians, low turnover among employees, and the benefits of co-location: “Learning is local. The experience curve works only when you have a common location.” TSMC thrived because it could build an ecosystem in one place, accumulating knowledge and lowering costs over time. This underscores the challenge Rapidus faces in Japan, where it must build not only fabs but also a cohesive ecosystem that can replicate such learning-by-doing effects.

Silicon by Subsidy

Rapidus, by contrast, does not yet have that volume. Analysts estimate the company may only reach 25,000 wafers per month by 2026—a fraction of what TSMC and Samsung move through advanced nodes. From an IP vendor’s perspective, supporting Rapidus’s 2 nm node is a high-risk, low-return proposition under a pure royalty model. To bridge the gap, Rapidus is reportedly providing upfront incentives to IP vendors to ensure critical libraries will be available for early customers. This “pay to play” strategy provides immediate engagement, but it is an expensive way to buy credibility.

The question is whether this model can scale. If Rapidus spends heavily on upfront inducements while wafer output remains modest, the economics could become unsustainable—particularly given estimates that the company will need 5 trillion yen (~US $34–35 billion) to reach full mass production. IBM, for its part, is contributing R&D expertise, intellectual property, and engineering support. Roughly 150 Rapidus engineers have trained at IBM’s Albany NanoTech Complex, and IBM has provided nanosheet transistor designs and know-how. But IBM is not funding Rapidus directly.

That leaves the burden squarely on Japanese government subsidies (over 1.7 trillion yen committed so far) and corporate investors like Toyota, NTT, Sony, SoftBank, Kioxia, and NEC. With each new expense—construction, equipment, packaging, and now upfront IP agreements—the funding gap looms larger. In July 2025, Rapidus reached a key milestone: obtaining electrical characteristics in 2 nm GAA transistors at its IIM-1 fab, according to Koike in his Hot Chips presentation. This underscores that IBM’s knowledge transfer is yielding tangible technical results, even as financial sustainability remains an open question.

Equipment ≠ Ecosystem

What Rapidus seems to recognize is that lithography scanners, deposition tools, and etchers—though expensive—are not the hardest part of launching an advanced node. The real bottleneck is the design ecosystem. If customers can’t access proven IP libraries, trusted design flows, and reliable EDA tool support, they won’t risk a tapeout at Rapidus. Upfront deals are a way to short-circuit that chicken-and-egg problem.

But it is also an admission that Rapidus lacks the organic pull that makes TSMC’s ecosystem self-sustaining. TSMC doesn’t have to offer inducements; vendors flock to TSMC because their customers demand it. Here, Koike declared, Rapidus hopes to differentiate on speed. Its All-Single Wafer Processing concept promises turnaround times as short as 15–50 days, compared to roughly 120 days for conventional batch processing. The company claims this “world’s shortest TAT” will give fabless customers faster iterations, a potential incentive to choose Rapidus despite ecosystem challenges.

The Foundry That Blinked

There is precedent here. GlobalFoundries, during its early 20 nm and 14 nm efforts, offered financial incentives to lure IP providers. The model was workable in the short term but failed to produce a virtuous cycle. Without sufficient customer pull, GlobalFoundries pivoted away from bleeding-edge logic to focus on specialty and trailing-edge nodes. Could Rapidus meet the same fate? Its government backing is stronger, and its partnership with IBM gives it world-class technical foundations. But the structural challenge—convincing IP vendors and fabless customers to bet on a low-volume node—remains.

It is important to distinguish between Rapidus’s ecosystem partners and its IP vendors. Companies like Tenstorrent are fabless design houses and potential customers, developing RISC-V CPU IP and AI accelerators. They may work with Rapidus to manufacture chips, but they are not IP vendors in the sense of providing standard cell libraries, memory compilers, or interface IP. The upfront incentives are aimed at traditional IP providers such as Arm, Synopsys, or Cadence—whose libraries are essential for enabling customer designs on a new node.

Rapidus’s strategy of relying on upfront arrangements introduces a set of structural risks that could undermine its long-term viability. While government subsidies may cushion the early years, the question looms: can the model sustain itself once Rapidus must operate without external funding? The reliance on pre-paid commitments also risks entrenching vendor dependency, potentially locking Rapidus into a narrow ecosystem of IP providers and limiting flexibility for future customers.

Perception is another challenge. If customers begin to view IP support as contingent on inducements rather than intrinsic ecosystem strength, confidence in the node’s reliability—especially for mission-critical tapeouts—may erode. And with every yen spent securing ecosystem buy-in, margin pressure intensifies in a market already defined by razor-thin profitability.

By contrast, TSMC has a more resilient model. It monetizes IP support indirectly through high-volume production that guarantees royalties for vendors, and directly through premium wafer pricing at advanced nodes. This approach reinforces ecosystem strength without compromising long-term margins or customer trust. Engineers, EDA developers, and foundry insiders understand that semiconductor competition isn’t just about transistors per square millimeter. It’s about design enablement, ecosystem health, and business model viability.

For TSMC, the story is straightforward: the ecosystem follows volume. For Rapidus, the story is more precarious: the ecosystem must be bought before volume can materialize. In his Hot Chips presentation, Koike asserted that beyond subsidies and IBM know-how, Rapidus is betting on Design–Manufacturing Co-Optimization (DMCO), integrating AI, advanced sensors, and partnerships with companies like Keysight to improve yield and PDK precision. Combined with its “Rapid and Unified Manufacturing Service” (RUMS), Rapidus is pitching a foundry model built on speed and co-innovation, not just wafer starts.

The Payment Illusion

Rapidus deserves credit for attempting something few nations have dared in decades: re-entering the leading edge of semiconductor manufacturing. Its partnership with IBM has already yielded technical milestones, and its government and corporate backing give it a level of resilience GlobalFoundries never enjoyed. But money alone will not guarantee success. Upfront incentives may secure early IP availability, but they do not guarantee customer adoption or sustainable economics. In the end, Rapidus must find a way to transform these early arrangements into long-term ecosystem momentum.

Morris Chang’s reflections at MIT underscore the gap. Taiwan’s rise was powered by talent pipelines, co-location, and the experience curve—factors that compounded over decades and lowered costs through learning by doing. Rapidus, by contrast, is trying to buy time with subsidies and upfront deals. If it succeeds, it will validate Japan’s gamble. If it fails, it will reinforce why the TSMC juggernaut keeps rolling—and why competing at the bleeding edge requires not just technology and subsidies, but an ecosystem that grows organically from customer demand.

Also Read:

Revolutionizing Processor Design: Intel’s Software Defined Super Cores

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability



Exploring Cycuity’s Radix-ST: Revolutionizing Semiconductor Security
by Admin on 09-09-2025 at 6:00 am



Cycuity’s Radix-ST represents a groundbreaking advancement in semiconductor security, addressing the growing complexity and vulnerability of modern chip designs. Introduced on August 27, 2025, by Cycuity, Inc., Radix-ST leverages static analysis techniques to identify and resolve security weaknesses early in the chip design cycle. As cyber threats targeting semiconductor devices escalate, this tool offers a proactive, efficient approach to safeguarding the backbone of today’s electronic systems, from IoT devices to data centers.

At its core, Radix-ST operates without the need for simulation or emulation, distinguishing it from traditional dynamic methods like Cycuity’s Radix-S and Radix-M. By analyzing Register Transfer Level (RTL) source code as soon as design components are developed, it delivers early detection of potential vulnerabilities. This efficiency is critical in an industry where late-stage security issues can lead to costly redesigns or, worse, exploited weaknesses in deployed systems. Radix-ST goes beyond basic linting tools by performing deep security analysis, pinpointing issue locations in the source code and mapping them to the MITRE-maintained Common Weakness Enumeration (CWE) database. This integration with the broader Radix workflow ensures actionable insights, enhancing design security from the outset.
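As a rough illustration of what such a mapping might look like (the record layout, file name, and CWE choice below are assumptions for the example, not Radix-ST output), a static-analysis finding can be thought of as a source location paired with a CWE identifier.

```python
from dataclasses import dataclass

@dataclass
class SecurityFinding:
    """Illustrative shape of a static-analysis finding mapped to a CWE entry;
    fields and example values are assumptions, not Radix-ST output."""
    rtl_file: str
    line: int
    description: str
    cwe_id: str                    # MITRE Common Weakness Enumeration ID

finding = SecurityFinding(
    rtl_file="aes_key_ctrl.sv",
    line=142,
    description="Debug path may expose key register contents",
    cwe_id="CWE-1244",             # example hardware CWE: unsafe debug access
)
print(f"{finding.rtl_file}:{finding.line} -> {finding.cwe_id}")
```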

The tool’s benefits are evident in its ability to minimize user input while maximizing impact. It automates RTL inspection with proprietary detection engines, reducing false positives and providing integrated reporting within the Radix GUI. Features like cross-view navigation across source, schematic, and cone views empower engineers to explore and address weaknesses comprehensively. Feedback from users, such as Mark Labbato, Senior Engineer at Booz Allen Hamilton, highlights its practicality in enabling security analysis before full simulation environments are ready, thus optimizing verification cycles. Mitch Mlinar, Cycuity’s VP of Engineering, emphasizes that this early intervention cuts costs and boosts productivity, a claim supported by its seamless integration with existing Radix tools.

Radix-ST’s relevance is underscored by the evolving threat landscape. Semiconductors, powering everything from automotive systems to defense applications, are increasingly targets for remote cyberattacks. Traditional verification methods often miss subtle vulnerabilities, especially in complex system-on-chip (SoC) designs. While static analysis has limitations, detecting only specific issue types, its speed and early application complement dynamic techniques, offering a balanced security assurance strategy. This approach aligns with industry shifts toward “secure by design” principles, where proactive risk mitigation is paramount.

The tool’s rollout comes at a time when hardware security is no longer optional but a necessity, as noted by Cycuity CEO Andreas Kuehlmann. With features like quantifiable security coverage metrics, Radix-ST provides a data-driven way to assess verification completeness, helping teams identify areas needing further testing. Its adaptability across commercial and defense sectors, supported by a $99 million IDIQ contract for supply chain security, underscores its broad applicability. Use cases range from securing roots of trust in hardware to ensuring compliance with evolving cybersecurity standards like ISO 21434.

Critically, while Radix-ST promises significant advantages, its effectiveness depends on adoption and integration into diverse design workflows. The semiconductor industry’s reliance on global supply chains and multiple partners could challenge uniform implementation. Moreover, the tool’s focus on static analysis might not fully address runtime vulnerabilities, suggesting a need for continued innovation. Nonetheless, Radix-ST positions Cycuity as a pioneer, offering a scalable solution to a pressing challenge.

Bottom line: Cycuity’s Radix-ST is a transformative tool in semiconductor security, enabling early vulnerability detection with minimal overhead. As of August 28, 2025, it stands as a vital asset for engineers navigating the complexities of modern chip design, reinforcing the industry’s shift toward robust, proactive security measures.

About Cycuity

Cycuity is a pioneer in hardware security delivering security assurance for semiconductor devices, a rapidly increasing target for remote cyberattacks. Cycuity’s innovative Radix software products and services specify, integrate and verify security across the hardware development lifecycle to ensure robust protection for the chips powering today’s sophisticated electronic systems. Radix uncovers security weaknesses across all levels, from block and subsystem to full system-on-chip (SoC) and firmware, enabling our customers to identify and resolve risks prior to manufacturing. Serving both commercial and defense industries, Cycuity provides the broadest security assurance across the design supply chain. For more information, please visit https://cycuity.com.

Also Read:

Video EP9: How Cycuity Enables Comprehensive Security Coverage with John Elliott

Security Coverage: Assuring Comprehensive Security in Hardware Design

Podcast EP287: Advancing Hardware Security Verification and Assurance with Andreas Kuehlmann

Leveraging Common Weakness Enumeration (CWEs) for Enhanced RISC-V CPU Security



Smart Verification for Complex UCIe Multi-Die Architectures
by Admin on 09-08-2025 at 10:00 am


By Ujjwal Negi – Siemens EDA

Multi-die architectures are redefining the limits of chip performance and scalability through the integration of multiple dies into a single package to deliver unprecedented computing power, flexibility, and efficiency. At the heart of this transformation is the Universal Chiplet Interconnect Express (UCIe) protocol, which enables high-bandwidth, low-latency communication between dies. But with innovation comes complexity: inter-die interactions, evolving protocol features, and the need to verify behavior across multiple abstraction layers create significant verification challenges.

This article examines the core verification hurdles in UCIe-based multi-die systems and explains how Questa™ One Avery™ Verification IP (Avery VIP) provides a protocol-aware, automated, and layered verification framework. It supports block-level to full system-level validation—combining automation, advanced debug tools, and AI-driven coverage analysis to accelerate verification closure.

Verification Hurdles in Multi-Die Systems

In traditional verification, individual design blocks are validated in isolation. Multi-die architectures require a shift toward verifying the entire system, ensuring that inter-die communication, synchronization, and interoperability between protocols, such as PCIe, CXL, CHI, and UCIe, function seamlessly. The challenge grows as the UCIe specification evolves—UCIe 2.0 and later versions introduce features such as the management transport protocol, lane repair, autonomous link training, and advanced sideband handling—all of which require verification environments to remain adaptable and up to date.

The test environment must also offer flexibility to simulate both standard operating conditions (such as steady-state data transfers and control signaling) and edge cases (like misaligned lanes or failed link training). Fine-grained controllability is essential, allowing errors to be injected and traffic patterns manipulated to test robustness—especially since faults can propagate across dies and protocol layers.

In addition, system-level performance metrics like cross-die latency, throughput, and bandwidth must be monitored in real time to ensure the design meets its performance targets under diverse workloads. This performance validation needs to be continuous and consistent across various traffic patterns and system states.
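
To make this kind of measurement concrete, here is a minimal sketch of computing average cross-die latency and throughput from timestamped transaction records. The record format, units, and thresholds are hypothetical; the instrumentation built into the VIP is of course richer than this.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    """A die-to-die transaction with hypothetical timestamps (ns) and payload size (bytes)."""
    sent_ns: float
    received_ns: float
    payload_bytes: int

def latency_and_throughput(txns):
    """Average cross-die latency (ns) and throughput (GB/s) over a window of transactions."""
    if not txns:
        return 0.0, 0.0
    latencies = [t.received_ns - t.sent_ns for t in txns]
    avg_latency_ns = sum(latencies) / len(latencies)
    window_ns = max(t.received_ns for t in txns) - min(t.sent_ns for t in txns)
    total_bytes = sum(t.payload_bytes for t in txns)
    throughput_gbps = total_bytes / window_ns if window_ns > 0 else 0.0  # bytes per ns == GB/s
    return avg_latency_ns, throughput_gbps

# Example: three transactions observed during one simulation phase.
window = [Txn(0, 12, 64), Txn(5, 20, 64), Txn(9, 25, 128)]
lat, bw = latency_and_throughput(window)
print(f"avg latency {lat:.1f} ns, throughput {bw:.2f} GB/s")
```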

Configuration space management presents another challenge, as multi-die systems require synchronized register updates across dies, including real-time error reporting and runtime reconfiguration. Finally, verification must be able to scale from simulation, where deep debug visibility is possible, to emulation and hardware prototyping, for speed and real-world validation.

Accelerating Multi-Die Verification with Questa One Avery UCIe VIP

The Questa One Avery UCIe VIP is built to handle the full spectrum of multi-die verification needs through a layered, configurable framework.

Figure 1. Layered UCIe verification framework.

It supports multiple bus functional models (BFMs) across diverse DUT types. At the block level, Avery VIP can verify standalone components such as the Logical PHY (LogPHY), D2D adapter, and protocol layers. At the die or system level, it operates in two modes: a full-stack mode for end-to-end testing of the D2D adapter, LogPHY, mainband, and sideband (with or without the raw die-to-die interface, RDI), and a No-LogPHY/RDI2RDI mode for direct, higher-level protocol testing without physical link dependencies.

Figure 2. Avery VIP use models.

To accelerate customer testbench deployment, the Questa One VIP Configurator can generate a UVM-compliant testbench in just a few clicks, enabling verification to begin almost immediately. It allows visual DUT–VIP connectivity mapping with automatic signal binding, intuitive GUI-based BFM configuration, and selection of pre-built sequences aligned with the DUT’s use model. This eliminates extensive manual setup and provides a ready-to-run environment from the outset.

Figure 3: GUI-based Avery VIP Configurator.

Avery VIP also provides layer-specific and feature-specific callbacks, enabling on-the-fly traffic manipulation and precise error injection, whether that means corrupting FLITs, altering parameter exchanges, or testing abnormal traffic patterns. Real-time monitoring and dynamic scoreboarding help track system reactions to these injected conditions.
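
To make the callback idea concrete, here is a minimal, generic sketch of traffic-mutating hooks in Python. The class, hook names, and FLIT fields are invented for illustration only and are not the Avery VIP interface.

```python
import random

class FlitMonitor:
    """Hypothetical illustration of callback hooks that can mutate traffic in flight."""

    def __init__(self):
        self._callbacks = []

    def register_callback(self, fn):
        self._callbacks.append(fn)

    def send(self, flit):
        # Give every registered callback a chance to inspect or corrupt the FLIT
        # before it is driven onto the (simulated) link.
        for cb in self._callbacks:
            flit = cb(flit)
        return flit

def corrupt_crc(flit):
    """Error-injection callback: flip one CRC bit in roughly 10% of FLITs."""
    if random.random() < 0.10:
        flit = dict(flit, crc=flit["crc"] ^ 0x1, injected_error=True)
    return flit

mon = FlitMonitor()
mon.register_callback(corrupt_crc)
driven = mon.send({"seq": 42, "payload": 0xDEADBEEF, "crc": 0x5A})
print(driven)
```

A scoreboard watching the receive side would then be expected to flag exactly the FLITs marked as corrupted, which is the kind of system reaction the paragraph above describes.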

For compliance and coverage, Avery VIP includes a comprehensive Compliance Test Suites (CTS) framework. Questa™ One Avery CTS is organized by DUT type, protocol mode, and direct specification references. The protocol test suite contains more than 500 checks for UCIe core layers and approximately 3,000 checks for PCIe, CXL, and AXI interactions. Runtime configurability allows protocol features, FLIT formats, or topology to be adjusted on the fly without rebuilding the environment.

The Avery CTS smart API layer further simplifies test creation, offering high-level functions to monitor link training, access configuration or memory spaces, control link state transitions, generate placeholder protocol traffic, and inject packets at precise protocol states.

Debugging and performance tracking are built in. Avery VIP features protocol-aware transaction recording, correlating transactions across layers and highlighting errors with associated signal activity. FSM recording logs internal state transitions such as LTSSM, FDI, and RDI events, complete with timestamps and annotations. Layer-specific debug trackers focus on LogPHY, the D2D adapter, FDI/RDI drivers, parity checks, and configuration space accesses. The performance logger provides real-time throughput, latency, and bandwidth measurements, and can focus on specific simulation phases for targeted analysis.

Figure 4: Protocol-aware transaction association.

Finally, an AI-driven verification co-pilot is integrated through Questa One Verification IQ, acting as an intelligent assistant throughout the verification cycle. The Questa One Verification IQ Coverage Analyzer leverages machine learning to automatically identify and prioritize coverage gaps, rank tests based on their contribution to overall coverage, and detect recurring failure signatures for faster root-cause analysis. The system visualizes coverage data through intuitive heatmaps and bin distribution charts, enabling teams to make data-driven decisions, focus effort on high-impact areas, and continuously refine their verification strategy for maximum efficiency.

Figure 5: Questa One VIQ Coverage Analyzer.
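
The test-ranking idea can be illustrated with a simple greedy heuristic: repeatedly pick whichever test closes the most still-open coverage bins. The sketch below is a generic illustration of that principle, not the machine-learning approach inside Verification IQ; the test names and bins are hypothetical.

```python
def rank_tests_by_coverage(test_coverage):
    """Greedy ranking: repeatedly pick the test that adds the most not-yet-covered bins.
    `test_coverage` maps test name -> set of coverage bins it hits."""
    covered, ranking = set(), []
    remaining = dict(test_coverage)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = len(remaining[best] - covered)
        if gain == 0:
            break  # the remaining tests add nothing new
        ranking.append((best, gain))
        covered |= remaining.pop(best)
    return ranking, covered

tests = {
    "link_training_basic": {"ltssm.reset", "ltssm.active", "flit.fmt_a"},
    "lane_repair":         {"ltssm.repair", "ltssm.active"},
    "sideband_mgmt":       {"sb.msg", "flit.fmt_a", "flit.fmt_b"},
}
ranking, covered = rank_tests_by_coverage(tests)
print(ranking)   # tests ordered by incremental coverage contribution
```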

Driving the Future of Multi-Die with UCIe Verification

As an open standard, UCIe is rapidly becoming the cornerstone of heterogeneous integration, allowing chipmakers to combine chiplets from different vendors into a single package with seamless interoperability. By standardizing the interconnect layer, UCIe unlocks a new wave of innovation in data centers, AI accelerators, and high-performance computing, paving the way for designs that are more scalable, energy-efficient, and cost-optimized than traditional monolithic approaches.

The Questa One Avery UCIe VIP is built to handle the demands of this rapidly evolving landscape. It scales effortlessly from block-level to full system-level verification, streamlines environment setup, allows precise error injection, and brings in AI-driven coverage analysis to speed up closure. With its mix of compliance suites, flexible runtime configuration, and powerful debug tools, it gives verification teams the confidence to hit performance, compliance, and reliability goals, helping the industry move toward a future where multi-die systems aren’t the exception, but the standard.

For a deeper dive into the full verification architecture, download the complete whitepaper: Accelerating UCIe Multi-Die Verification with a Scalable, Smart Framework.

Ujjwal Negi is a senior member of the technical staff at Siemens EDA based in Noida, specializing in storage and interconnect technologies. With over two years of hands-on experience, she has contributed extensively to NVMe, NVMe over Fabrics, and UCIe Verification IP solutions. She graduated with a Bachelor of Technology degree from Bharati Vidyapeeth College of Engineering in 2023.

Also Read:

Orchestrating IC verification: Harmonize complexity for faster time-to-market

Perforce and Siemens at #62DAC

Breaking out of the ivory tower: 3D IC thermal analysis for all



PDF Solutions Adds Security and Scalability to Manufacturing and Test

PDF Solutions Adds Security and Scalability to Manufacturing and Test
by Mike Gianfagna on 09-08-2025 at 6:00 am

PDF Solutions Adds Security and Scalability to Manufacturing and Test

Everyone knows design complexity is exploding. What used to be difficult is now bordering on impossible. While design and verification challenges occupy a lot of the conversation, the problem is much bigger than this. The new design and manufacturing challenges of 3D innovations and the need to coordinate a much more complex global supply chain are examples of the breadth of what lies ahead. The opportunity of ubiquitous use of AI creates even more challenges. I’m not referring to designing AI chips and systems, but rather how to use AI to increase the chances of design success for those systems.

Data is the new oil, and the foundation of AI. That statement is quite relevant here. What is needed to tame these challenges is a way to collect data upstream in that complex global supply chain and find ways to extract insights that will be used to drive decisions and actions further down the supply chain. Doing that across a complex supply chain requires a highly scalable and secure platform. It turns out PDF Solutions has been quietly building what’s needed to make all this happen. Let’s examine what they’ve been doing and what comes next as PDF Solutions adds security and scalability to manufacturing and test solutions.

Building the Foundation

There are two key products from PDF Solutions that build an infrastructure for the future. A bit of background will set the stage.

First is the Exensio® Analytics Platform, which automatically collects, analyzes and detects statistical changes in manufacturing test, assembly, and packaging operations that negatively affect product yield, quality, or reliability, wherever those operations occur.

The suite of modules in this product enables fabless companies, IDMs, foundries, and OSATs to securely collect and manage test data, assure high outgoing quality, deploy machine learning to the edge, establish controls around the test process, and improve efficiency across the manufacturing supply chain.

The machine learning capabilities will be key to what comes next. More on that in a moment.  The product has quite a large footprint across manufacturing and test operations worldwide.

Of course, a massive data analytics platform like this needs many sources of data across the worldwide supply chain to power it. That part of the platform is enabled by DEX, PDF’s data exchange network. DEX is an infrastructure deployed globally at OSAT locations that connects to potentially every test cell or piece of test and assembly equipment across the worldwide supply chain. It automatically delivers test rules, models, and recipes, and brings all of that test data back to Exensio in real time. There, the data is normalized using PDF Solutions’ semantic model, which ensures it is always complete, consistent, and ready for analysis. DEX is, if you will, the plumbing that carries test rules, models, and data for Exensio.
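
To make the normalization step concrete, here is a minimal sketch of mapping vendor-specific test records onto a common schema and flagging incomplete data. The field names and mapping are hypothetical and far simpler than the semantic model DEX and Exensio actually use.

```python
# Canonical schema that downstream analysis expects; names are invented for illustration.
CANONICAL_FIELDS = ("lot_id", "wafer_id", "die_x", "die_y", "test_name", "value", "units")

# Hypothetical mapping for one tester vendor's raw field names.
VENDOR_A_MAP = {"LOT": "lot_id", "WAFER": "wafer_id", "X": "die_x", "Y": "die_y",
                "TNAME": "test_name", "RESULT": "value", "UNIT": "units"}

def normalize(record, field_map):
    """Rename vendor-specific fields to canonical names and flag anything missing."""
    out = {field_map.get(k, k): v for k, v in record.items()}
    missing = [f for f in CANONICAL_FIELDS if f not in out]
    out["complete"] = not missing
    out["missing_fields"] = missing
    return out

raw = {"LOT": "L123", "WAFER": "W07", "X": 12, "Y": 3, "TNAME": "idd_standby", "RESULT": 1.8}
print(normalize(raw, VENDOR_A_MAP))  # units missing -> flagged incomplete
```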

I began my career in test data analytics. Back then, the number and type of testers that were part of that process was extremely small compared to what exists today. In spite of that, I can tell you that interfacing to each source of test data, acquiring the data and ensuring it was accurate and timely was a huge challenge. The problem has gotten much, much bigger. DEX is a combination of hardware and software developed by PDF Solutions that interfaces to all data sources, validates the information and transports it to the required location for timely analysis. This is a very complex process.

The system is optimized to handle the unique challenges of semiconductor data acquisition, enabling PDF Solutions to manage petabytes of information in the cloud. The diagram below illustrates how DEX feeds critical data to Exensio.

How DEX feeds critical data to Exensio

What’s Next

Systems the size and scope of Exensio and DEX have already created a substantial impact on the semiconductor industry. The graphic at the top of this post illustrates some of the areas that have benefited from these systems. The next chapter of this story will focus on a concept called Data Feed Forward or DFF. PDF sees DFF providing an enterprise solution to collect, transform and distribute data across a customer’s supply chain to enable advanced test methodologies utilizing feed-forward data.

Using PDF’s global infrastructure, test data can be captured upstream and transported to the edge to be used to inform optimized strategies for downstream testing. The concept of feed forward data enablement isn’t new. It has been used successfully to optimize physical implementation by using early data to inform estimates for late-stage implementation.  In the case of manufacturing and test, a globally available, secure and scalable data infrastructure can be used to optimize test in the chiplet and advanced packaging era.

We are seeing the dawn of AI-driven test from PDF Solutions based on this technology. The goal of AI-driven test is to save time, reduce cost and improve quality. The diagram below summarizes the current three focus areas for this capability.

Digging Deeper

Dr. Ming Zhang

I had the opportunity recently to speak with Ming Zhang, vice president of Fabless Solutions at PDF. I’ve worked with Ming in the past and I’ve always found him to be insightful with a strong view of the big picture and what it all means. So, getting his explanation of AI for test was particularly valuable. Here’s what I learned about the three AI-driven test solutions summarized above.

Ming explained that Predictive Test has an optimization focus. As we all know, test insertions are expensive in terms of manufacturing time and tester cost. Most test strategies apply the same test program to all chips. But what if that could be optimized? Using feed-forward data, the parts of a design that are robust and the parts that are potentially weak can both be identified. Using this kind of unit-specific data, a test program can be developed that is optimized for each specific part. Ming explained that, in practice, this means testing of design content demonstrated to be robust can be minimized or even skipped.

On the other hand, parts of the design that showed potential weakness could be tested more rigorously. In the end, the actual test time might be the same as before, but the tests applied would be more effective at finding good and bad die, so quality would improve for the same, or lower test cost.

We then discussed Predictive Binning. Ming described this approach as a straight cost-saving measure. He explained that, based on early data, chips that are likely to fail a test step can be removed, or binned out, before that step. This avoids spending tester time on parts that are expected to fail, so overall test cost is reduced.

We then discussed Predictive Burn-in. This is another cost-saving measure. In this case, the devices that have exhibited robust behavior can be identified as not requiring burn-in. Since this process requires hundreds to thousands of hours using expensive measurement and ambient control equipment, significant cost savings can be realized by avoiding burn-in where it’s not necessary.
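
To make the three predictive ideas concrete, the sketch below shows how threshold decisions might be driven from per-unit risk scores derived from upstream, feed-forward data. The scores, thresholds, and decision logic are invented for illustration and are not PDF Solutions’ algorithms.

```python
def plan_downstream_flow(unit, fail_risk, burn_in_risk,
                         skip_threshold=0.02, bin_out_threshold=0.90, burn_in_threshold=0.10):
    """Decide what to do with one unit from predicted risk scores (0..1).
    All thresholds here are hypothetical."""
    plan = {"unit": unit}
    if fail_risk >= bin_out_threshold:
        plan["action"] = "bin_out_before_test"    # Predictive Binning: skip the costly insertion
    elif fail_risk <= skip_threshold:
        plan["action"] = "reduced_test_program"   # Predictive Test: trim tests on robust content
    else:
        plan["action"] = "extended_test_program"  # Predictive Test: test suspect content harder
    plan["burn_in"] = burn_in_risk > burn_in_threshold  # Predictive Burn-in: only where warranted
    return plan

for unit, fail_risk, burn_in_risk in [("U001", 0.01, 0.03), ("U002", 0.45, 0.20), ("U003", 0.95, 0.80)]:
    print(plan_downstream_flow(unit, fail_risk, burn_in_risk))
```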

Ming pointed out that all of these technologies apply advanced AI algorithms to the massive worldwide manufacturing database managed by PDF Solutions. The initial AI models are developed by PDF. Ming went on to explain that some customers also want to build proprietary models to drive test decisions. PDF supports this as well, creating an environment where those customers can develop their own AI-for-test models and algorithms to enhance their competitiveness.

I was quite energized after my discussion. Ming painted a rather upbeat and exciting view of what’s ahead. By the way, if you happen to be in Taiwan on September 12, Ming will be presenting this work at the SEMICON event there.

The Last Word

Dr. John Kibarian

PDF Solutions’ president, CEO and co-founder Dr. John Kibarian made a comment recently that steps back a bit and takes a bigger picture view of where this can all go. He said:

“Collaboration from the system company, all the way back to the equipment vendor, is important today in order to get out new products. Then once those products are launched, the collaboration for the ongoing maintenance of that production flow is requiring a lot more effort. The collaboration required is going to go up. It will be doable at scale only because the industry is going to require it to be doable.”

 The vision painted by John is quite exciting as PDF Solutions adds security and scalability to manufacturing and test.

Also Read:

PDF Solutions and the Value of Fearless Creativity

Podcast EP259: A View of the History and Future of Semiconductor Manufacturing From PDF Solution’s John Kibarian


Revolutionizing Processor Design: Intel’s Software Defined Super Cores

Revolutionizing Processor Design: Intel’s Software Defined Super Cores
by Admin on 09-07-2025 at 2:00 pm

Intel European CPU Patent Application

In the ever-evolving landscape of computing, Intel’s patent application for “Software Defined Super Cores” (EP 4 579 444 A1) represents a groundbreaking approach to enhancing processor performance without relying solely on hardware scaling. Filed in November 2024 with priority from a U.S. application in December 2023, this innovation addresses the inefficiencies of traditional high-performance cores, which often sacrifice energy efficiency for speed through frequency turbo boosts. By virtually fusing multiple cores into a “super core,” Intel proposes a hybrid software-hardware solution that aggregates instructions-per-cycle (IPC) capabilities, enabling energy-efficient, high-performance computing. This essay explores the concept, mechanisms, benefits, and implications of Software Defined Super Cores (SDC), highlighting how they could transform modern processors.

The background of this patent underscores persistent challenges in processor design. High-IPC cores, while powerful, depend heavily on process technology node scaling, which is becoming increasingly difficult and costly. Larger cores also reduce overall core count, limiting multithreaded performance. Hybrid architectures, like those blending performance and efficiency cores, attempt to balance single-threaded (ST) and multithreaded (MT) needs but require designing and validating multiple core types with fixed ratios. Intel’s SDC circumvents these issues by creating virtual super cores from neighboring physical cores—typically of the same class, such as efficiency or performance cores—that execute portions of a single-threaded program in parallel while maintaining original program order at retirement. This gives the operating system (OS) and applications the illusion of a single, larger core, decoupling performance gains from physical hardware expansions.

At its core, SDC operates through a synergistic software and hardware framework. The software component—potentially integrated into just-in-time (JIT) compilers, static compilers, or even legacy binaries—splits a single-threaded program into instruction segments, typically around 200 instructions each. Flow control instructions, such as conditional jumps checking a “wormhole address” (a reserved memory space for inter-core communication), steer execution: one core processes odd segments, the other even ones. Synchronization operations ensure in-order retirement, with “sync loads” and “sync stores” enforcing global order. Live-in loads and live-out stores handle register dependencies, transferring necessary data via special memory locations without excessive overhead (estimated at under 5%). For non-linear code, like branches or loops, indirect branches or wormhole loop instructions dynamically re-steer cores, using predicted targets or stored program counters to maintain parallelism.
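The splitting and in-order retirement described above can be illustrated with a toy simulation. The Python sketch below mirrors the patent’s description at a conceptual level only; it says nothing about the real compiler transformations, wormhole mechanism, or hardware involved.

```python
def split_into_segments(instructions, segment_len=200):
    """Partition a single-threaded instruction stream into fixed-size segments."""
    return [instructions[i:i + segment_len]
            for i in range(0, len(instructions), segment_len)]

def steer_to_cores(segments):
    """Alternate segments between two cores: core 0 gets even segments, core 1 gets odd ones."""
    return {0: segments[0::2], 1: segments[1::2]}

def retire_in_order(segments, per_core):
    """Retirement follows original program order, regardless of which core ran a segment."""
    retired = []
    for idx in range(len(segments)):
        core = idx % 2
        retired.extend(per_core[core][idx // 2])  # pull the segment back in program order
    return retired

program = [f"insn_{i}" for i in range(1000)]      # stand-in for a 1,000-instruction stream
segments = split_into_segments(program, 200)      # ~200-instruction segments, as in the patent text
per_core = steer_to_cores(segments)
assert retire_in_order(segments, per_core) == program  # the illusion of a single, larger core
```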

Hardware support is minimal yet crucial, primarily enhancing the memory execution unit (MEU) with SDC interfaces. These interfaces manage load-store ordering, inter-core forwarding, and snoops, using a shared “wormhole” address space for fast data transfers. Cores may share caches or operate independently, but the system guarantees memory ordering and architectural integrity. The OS plays a pivotal role, provisioning cores based on hardware-guided scheduling (HGS) recommendations, migrating threads to SDC mode when beneficial (e.g., for high-IPC phases), and reverting if conditions change, such as increased branch mispredictions or system load demanding more independent cores.
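
The provisioning decision can likewise be sketched as a simple policy. The inputs and thresholds below are invented for illustration, since the patent text leaves the exact heuristics to hardware-guided scheduling hints and the OS.

```python
def choose_execution_mode(ipc, branch_mispredict_rate, runnable_threads, free_cores,
                          ipc_threshold=1.8, mispredict_threshold=0.05):
    """Toy policy for when an OS/HGS layer might fuse two neighboring cores into a super core.
    Thresholds and inputs are hypothetical."""
    if runnable_threads > free_cores:
        return "independent_cores"   # system load demands more independent cores
    if branch_mispredict_rate > mispredict_threshold:
        return "independent_cores"   # poor speculation limits the benefit of fusion
    if ipc >= ipc_threshold:
        return "super_core"          # high-IPC phase: fuse neighbors for single-thread speed
    return "independent_cores"

print(choose_execution_mode(ipc=2.1, branch_mispredict_rate=0.01, runnable_threads=3, free_cores=6))
```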

The benefits of SDC are multifaceted. Energy efficiency improves by allowing longer turbo bursts or operation at lower voltages, as aggregated IPC reduces the need for frequency scaling. Flexibility is a key advantage: platforms can dynamically adjust between high-ST performance (via super cores) and high-MT throughput (via individual cores), adapting to workloads without fixed hardware ratios. Unlike prior multi-threading decompositions, which incurred 25-40% instruction overheads from replication, SDC minimizes redundancy, focusing on explicit dependencies. This could democratize high-performance computing, reducing reliance on advanced process nodes and enabling scalable designs in data centers, mobile devices, and AI accelerators.

However, challenges remain. Implementation requires precise software splitting to minimize communication overhead, and hardware additions, though small, must be validated for reliability. Compatibility with diverse instruction set architectures (ISAs) via binary translation is mentioned, but real-world deployment may face OS integration hurdles.

In conclusion, Intel’s Software Defined Super Cores patent heralds a paradigm shift toward software-centric processor evolution. By blending virtual fusion with efficient inter-core communication, SDC promises to bridge the gap between performance demands and hardware limitations, fostering more adaptable, efficient computing systems. As technology nodes plateau, innovations like this could define the next era of processors, empowering applications from AI to everyday computing with unprecedented dynamism.

You can see the full patent application here.

Also Read:

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability

Intel Unveils Clearwater Forest: Power-Efficient Xeon for the Next Generation of Data Centers

Intel’s IPU E2200: Redefining Data Center Infrastructure

Revolutionizing Chip Packaging: The Impact of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB)


TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future
by Admin on 09-07-2025 at 8:00 am

TSMC Sustainability Report 2024 2025

TSMC, the world’s most trusted semiconductor foundry, released its 2024 Sustainability Report, underscoring its commitment to embedding environmental, social, and governance principles into its operations. Founded in 1987 and headquartered in Hsinchu Science Park, TSMC employs 84,512 people globally and operates facilities across Taiwan, China, the U.S., Japan, and Europe. The report, spanning 278 pages, highlights TSMC’s role as an “innovation pioneer, responsible purchaser, practitioner of green power, admired employer, and power to change society.” Amid rising global risks like extreme weather, as noted in the World Economic Forum’s Global Risks Report, TSMC emphasizes multilateral cooperation to advance sustainability, aligning with UN Sustainable Development Goals (SDGs).

In letters from ESG Steering Committee Chairperson C.C. Wei and ESG Committee Chairperson Lora Ho, TSMC reaffirms sustainability as core to its resilience and competitiveness. Wei stresses that ESG is embedded in every decision, driving the company toward carbon neutrality and net zero emissions by 2050. TSMC’s energy-efficient chips saved an estimated 104.2 billion kWh globally in 2024, equivalent to 44 million tons of avoided carbon emissions. By 2030, each kWh used in production is projected to save 6.39 kWh worldwide. Ho highlights collaborations across five ESG directions: green manufacturing, responsible supply chains, inclusive workplaces, talent development, and care for the underprivileged.
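
As a quick back-of-envelope check, and assuming the 44-million-ton figure is derived directly from the 104.2 billion kWh of avoided electricity, the implied emission factor is

```latex
\frac{44 \times 10^{6}\ \text{t CO}_2\text{e}}{104.2 \times 10^{9}\ \text{kWh}}
  \approx 0.42\ \text{kg CO}_2\text{e per kWh}
```

which is roughly consistent with commonly cited global average grid carbon intensities.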

Environmentally, as a “practitioner of green power,” TSMC focuses on climate and energy (pages 108-123), water stewardship (pages 124-134), circular resources (pages 135-146), and air pollution control (pages 147-153). It deployed 1,177 energy-saving measures, achieving 810 GWh in annual savings and 13% renewable energy usage, targeting 60% by 2030 and RE100 by 2040. Scope 1-3 emissions reductions follow SBTi standards, with 2025 as the baseline for absolute cuts by 2035. A new carbon reduction subsidy for Taiwanese tier-1 suppliers and the GREEN Agreement for 90% of raw material emitters aim to slash Scope 3 emissions. Water-positive goals by 2040 include a 2.7% reduction in unit consumption and 100% reclaimed water systems. Circular efforts recycled 97% of waste globally, transforming 9,400 metric tons into resources, while volatile organic compounds and fluorinated GHGs saw 99% and 96% reductions, respectively.

Socially, TSMC positions itself as an “admired employer” (pages 155-202), fostering an inclusive workplace with a Global Inclusive Workplace Statement and campaigns on action, equity, and allyship. It conducted a global Workplace Human Rights Climate Survey and expanded human rights due diligence to suppliers, incorporating metrics into long-term goals. Women comprise 40% of employees, with targets for over 20% in management. Talent development averaged 90 learning hours per employee, with programs like the Senior Manager Learning and Development achieving 90-point satisfaction. Occupational safety maintained an incident rate below 0.2 per 1,000 employees, enhanced by 24/7 ambulances and diverse protective gear. As a force for societal change (pages 204-232), TSMC’s foundations benefited 1,391,674 people through 171 initiatives, investing NT$2.441 billion. Social impact assessments using IMP and IRIS+ frameworks supported STEM education, elderly care, and SDG 17 partnerships.

Governance-wise (pages 234-251), TSMC reported NT$2.95 trillion in revenue and NT$1.17 trillion in net income, with 69% from advanced 7nm-and-below processes. R&D spending hit US$6.361 billion, up 3.1-fold in a decade. The ESG Performance Summary (pages 263-271) details metrics like 100% supplier audits and top rankings in DJSI and MSCI ESG.

Bottom line: The report showcases TSMC’s 2024 achievements: 11,878 customer innovations, 96% customer satisfaction, and NT$2.45 trillion in Taiwanese economic output, creating 358,000 jobs. Despite challenges like geopolitical tensions, TSMC’s net zero roadmap and inclusive strategies position it as a sustainability leader, driving shared value for stakeholders and a resilient future.

Also Read:

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

TSMC Describes Technology Innovation Beyond A14

TSMC Brings Packaging Center Stage with Silicon