
Self-Aligned Spacer Patterning for Minimum Pitch Metal in DRAM
by Fred Chen on 11-16-2025 at 10:00 am


The patterning of features outside a DRAM cell array can be just as challenging as the patterning within the array itself [1]. The array contains features that are densely packed but regularly arranged. Outside the array, by contrast, the minimum pitch features, such as the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits, meander in appearance, and their pitch varies over a range. Stitched double patterning has been the method of choice but will not be sufficient once the minimum pitch drops below 40 nm [2,3]. Moreover, below 40 nm pitch, EUV stochastic defectivity becomes an issue [4,5].

Self-aligned spacer patterning may be applied to the minimum pitch metal features in the DRAM periphery [6]. The sense amplifier and sub-wordline driver patterns are typically characterized by islands with minimum pitch lines meandering around them (Figure 1).

Figure 1. Examples of DRAM lowest metal layer patterns outside the array.

The wrapping of the lines around the islands suggests a similarity to a spacer wrapping around a core mandrel feature. Guided by the outline of the metal lines that directly wrap around the islands, the appropriate mandrel features are shown in Figure 2.

Figure 2. Core and spacer feature for the corresponding patterns in Figure 1. The spacer is deposited over the core features and then etched back, leaving only the portions on the sidewall of the core features.

The core is then removed, leaving the spacer (Figure 3).

Figure 3. Core removal (from Figure 2), leaving the spacer.

The spacer acts as a mandrel for a second spacer, followed by filling of the remaining gaps (Figure 4). When necessary, cuts are used.

Figure 4. Completion of the patterning with second spacer and gap-fill, plus any necessary line cuts. The second spacer is deposited over the first spacer and etched back, leaving only the portions on the sidewall of the first spacer. Then the gap-fill material is deposited and etched back or otherwise planarized. The cuts are etch masks for blocking metal trench etching or direct breaks etched into the metal lines.

The patterning flow is similar to the self-aligned quadruple patterning (SAQP) used for pitch quartering [7]. Thus, for minimum metal pitches down to ~37 nm, this approach can be expected to allow DUV lithography to be used without the heavy processing burden of multiple litho-etch (LE) steps.
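To make the pitch arithmetic concrete, here is a minimal sketch (in Python) of the pitch-division math. The ~76 nm single-exposure immersion DUV pitch limit and the division factors are illustrative assumptions, not figures from the article.

# Rough pitch arithmetic for self-aligned pitch division (illustrative only).
DUV_MIN_LITHO_PITCH_NM = 76  # assumed single-exposure immersion DUV limit

def mandrel_pitch(final_pitch_nm: float, division: int) -> float:
    """Mandrel (core) pitch needed for a given final pitch; division is 2 for SADP, 4 for SAQP."""
    return final_pitch_nm * division

for target_nm in (40, 37):
    required = mandrel_pitch(target_nm, division=4)  # double-spacer, SAQP-like flow
    printable = required >= DUV_MIN_LITHO_PITCH_NM
    print(f"{target_nm} nm final pitch -> {required} nm mandrel pitch, DUV-printable: {printable}")

Under these assumptions, a 37 nm final pitch needs only a ~148 nm mandrel pitch, comfortably within single-exposure DUV capability.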

References

[1] F. Chen, Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM.

[2] F. Chen, Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery.

[3] S-M. Kim et al., “Issues and Challenges of Double Patterning Lithography in DRAM,” Proc. SPIE 6520, 65200H (2007).

[4] Y. Li, Q. Wu, Y. Zhao, “A Simulation Study for Typical Design Rule Patterns and Stochastic Printing Failures in a 5 nm Logic Process with EUV Lithography,” CSTIC 2020.

[5] F. Chen, Predicting Stochastic EUV Defect Density with Electron Noise and Resist Blur Models.

[6] F. Chen, Triple Spacer Patterning for DRAM Periphery Metal.

[7] H. Yaegashi et al., “Overview: Continuous evolution on double-patterning process,” Proc. SPIE 8325, 83250B (2012); DOI: 10.1117/12.915695.

Also Read:

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

An Insight into Building Quantum Computers

I Have Seen the Future with ChipAgents Autonomous Root Cause Analysis


CEO Interview with Dr. Peng Zou of PowerLattice
by Daniel Nenni on 11-16-2025 at 8:00 am

Dr. Peng Zou, President & CEO, Co-Founder

Dr. Zou is one of the industry's leading experts in power delivery for high-performance processors. Before founding PowerLattice, he held technical leadership roles at Qualcomm/NUVIA, Huawei, and Intel, where he led multidisciplinary teams advancing integrated voltage regulator technologies across magnetic materials, circuit design, and system architecture. Recognizing that the "power wall" had become a major limiting factor for AI performance, Dr. Zou set out to drive a step-change in power delivery and founded PowerLattice. He holds 15 U.S. patents, with additional patents pending.

Tell us about your company.
PowerLattice is reimagining how power is delivered in the world’s most demanding compute systems. We’ve developed the first power delivery chiplet that brings power directly into the processor package—improving performance, efficiency, and reliability. The result is a fundamental shift in how high-performance chips get powered, paving the way for the next generation of AI and advanced computing. We have silicon in hand and are now sampling to customers, so we decided it was time to emerge from stealth.

What problems are you solving?
AI accelerators and GPUs are pushing past 2 kW per chip, straining data centers that already consume as much energy as mid-size cities. Conventional power delivery forces very high electrical current to travel long, resistive paths before reaching the processor, wasting energy and limiting performance. The inefficiency and heat losses from moving power across a motherboard are now a hard limit—the "AI power wall."

PowerLattice eliminates that barrier by moving power delivery directly into the processor package, right next to the compute die. We have also developed circuit innovations and technologies that deliver ultra-fast response times for precise voltage regulation, a capability that is crucial for processor performance. This approach reduces total compute power needs by more than 50%, effectively doubling performance.
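As a rough illustration of why the length of the current path matters, the sketch below compares I^2*R delivery loss for a 2 kW-class accelerator fed at a low core voltage through a long board-level path versus a much shorter in-package path. Every number is an illustrative assumption, not PowerLattice data.

# Back-of-the-envelope I^2*R delivery-loss comparison (all values assumed).
CHIP_POWER_W = 2000.0    # ~2 kW-class accelerator (assumed)
CORE_VOLTAGE_V = 0.8     # assumed core rail voltage

def delivery_loss_w(path_resistance_ohm: float) -> float:
    """Resistive loss when delivering CHIP_POWER_W at CORE_VOLTAGE_V through one path."""
    current_a = CHIP_POWER_W / CORE_VOLTAGE_V  # 2500 A at 0.8 V
    return current_a ** 2 * path_resistance_ohm

for label, resistance_mohm in (("long board-level path", 0.10), ("short in-package path", 0.01)):
    print(f"{label}: {delivery_loss_w(resistance_mohm * 1e-3):.0f} W lost")

With these assumed resistances, the board-level path burns roughly 625 W in delivery alone, while the in-package path burns about a tenth of that, which is the intuition behind moving regulation next to the compute die.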

What application areas are your strongest?
All segments from hyperscale data centers to AI chipmakers and edge compute stand to benefit from our technology. But our initial focus is on AI since it’s the AI chipmakers who are hitting the power wall the most. Their chips are pushing the limits of power density and efficiency, and our solution directly tackles those constraints—delivering higher performance per watt and unlocking scalability for the next wave of AI systems.

What keeps your customers up at night?
For most of our customers, the biggest challenge isn’t compute, it’s power. They’re reaching the physical limits of how much energy they can deliver to a chip and are having to design around that. Reliability is another major concern; as AI models scale, even micro-instabilities in power delivery can ripple across entire systems.

To address this, we’ve built a voltage-stabilizing layer directly into our chiplet design. When GPUs are pushed to their limits, voltage fluctuations can shorten their lifespan and compromise reliability. Our technology keeps voltage steady at the source, extending the usable life of GPUs and ensuring consistent performance under extreme workloads.

What does the competitive landscape look like and how do you differentiate?
Our biggest competitors are those providing legacy solutions – and this is exactly why we see such a big opportunity to disrupt the market. Traditional power delivery solutions were never designed for this era of compute. They use large, discrete voltage regulation modules (VRMs) that sit on the motherboard and regulate power externally. The result is wasted energy and voltage fluctuations.

Our approach brings voltage regulation directly onto the wafer, integrating inductors and passives at the silicon level. It’s a fundamentally different approach. By bringing power directly into the processor package, we can reduce compute power needs by more than 50%.

Our chiplet-based approach integrates easily into existing SoC designs and is also very configurable, so we’re seeing a lot of strong interest from customers.

What new features or technology are you working on?
Right now we’re focusing on design wins with major customers and scaling through key manufacturing milestones. We’ve proven the silicon — now it’s about ramping and driving adoption. Ultimately, our goal is to make power delivery as programmable and scalable as compute itself.

How do customers normally engage with your company?
We work closely with semiconductor vendors, hyperscalers, and system integrators.  Our model is highly collaborative because the integration of power and compute is no longer optional.

Also Read:

CEO Interview with Roy Barnes of TPC

CEO Interview with Mr. Shoichi Teshiba of Macnica ATD

CEO Interview with Sanjive Agarwala of EuQlid Inc.


Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires
by Daniel Nenni on 11-14-2025 at 10:00 am

Daniel is joined by Pedro Pires, a product and technology leader with a strong background in IP and data management within the EDA industry. Currently a product manager at Keysight Technologies, he drives the roadmap for its AI-driven data management solutions. Pedro's career spans roles in software engineering and data science at Cadence, ClioSoft, and Keysight.

In this broad view of the impact of data management across the industry, Dan explores several trends with Pedro. Current data management challenges are discussed, along with an assessment of how Keysight Design Data Management (DDM) (SOS) addresses these challenges. Requirements for security, data organization, and performance are all touched on. The relative benefits of a tool like DDM (SOS) compared to open source implementations are also covered.

Pedro presents many details of real-world customer usage of DDM (SOS). He also assesses what impact tools such as this will have on future projects, including the expanding use of AI.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Silicon Catalyst on the Road to $1 Trillion Industry
by Daniel Nenni on 11-14-2025 at 6:00 am


There were quite a few announcements at the Silicon Catalyst event at the Computer History Museum last week. The event itself was eventful, with semiconductor legends in the audience and on the stage. First let's talk about the announcements Silicon Catalyst made, then we will talk about the event itself.

In addition to expanding in Japan and Australia, Silicon Catalyst added more companies to its industry-leading semiconductor accelerator:

Silicon Catalyst Announces Seven Newly Admitted Companies to Semiconductor Industry Accelerator.

Silicon Catalyst, the world’s only incubator focused exclusively on accelerating semiconductor solutions, continues to welcome innovative startups into its prestigious portfolio. The announcement, made on November 8, 2025, underscores the incubator’s mission to nurture next-generation chip design and fabrication technologies.

The newly admitted companies span diverse applications, from AI-optimized processors to advanced sensor systems and energy-efficient power management solutions. Each startup gains access to Silicon Catalyst’s extensive ecosystem, including in-kind tools from industry leaders like Arm, TSMC, and Synopsys, as well as mentorship from seasoned semiconductor executives.

“These companies represent the cutting edge of hardware innovation,” said Pete Rodriguez, CEO of Silicon Catalyst. “Their technologies address critical challenges in AI, IoT, automotive, and beyond.” The selected startups underwent a rigorous screening process, evaluating technical merit, market potential, and team strength.

Since its inception in 2015, Silicon Catalyst Portfolio Companies have raised over $1B in VC funding and in excess of $200M in grants. This latest group brings fresh intellectual property and novel architectures poised to shape the future of computing.

Bottom line: With semiconductor demand surging, Silicon Catalyst continues to play a pivotal role in bridging innovation and commercialization.

One of the topics discussed during the panel session was when the semiconductor industry will hit one trillion dollars, hence the name of the event. One of the reasons the semiconductor industry will hit one trillion dollars is start-up companies like the ones in the Silicon Catalyst portfolio, many of which I have done CEO interviews and podcasts with. I also remember way back when Arm and TSMC were start-ups 30+ years ago. They have both taken collaboration to a new level with massive ecosystems surrounding them.

Yes, the semiconductor industry will hit $1 trillion. This discussion has been ongoing for several years, but lately the date has been pulled in. My guess was 2030, but the panel now says it could be sooner due to the AI surge and the coming of quantum computing.

The event panel itself was filled with semiconductor luminaries, Dr. Ravi Subramanian for example. I worked for Ravi at Berkeley Design Automation as an advisor and spent time with him in Taiwan. He would routinely give master classes on how to develop customer and partner relationships based on trust, respect, and technology. I also worked with Solido Design and Fractal Technologies, both of which were acquired by Ravi's team at Siemens EDA. So yes, Ravi is a good example of the many industry luminaries who collaborate with Silicon Catalyst.

The videos from the event are available here. Take a look at the panel discussion and you will see Ravi in action.

Silicon Catalyst is also very active in the semiconductor ecosystem; they are at just about every conference I attend here in Silicon Valley. The next one is the Quantum-to-Business Conference, December 9-11, at the Santa Clara Convention Center. If you are interested, there is a Silicon Catalyst discount code, SC-20-SV, for 20% off admission. I hope to see you there.

Bottom line: The best talent attracts the best talent and the Silicon Catalyst ecosystem is full of the best talent, absolutely.

Also Read:

CEO Interview with Adam Khan of Diamond Quanta

CEO Interview with Andrew Skafel of Edgewater Wireless

Cutting Through the Fog: Hype versus Reality in Emerging Technologies


Hierarchically defining bump and pin regions overcomes 3D IC complexity
by Admin on 11-13-2025 at 8:00 am


By Todd Burkholder and Per Viklund, Siemens EDA

The landscape of advanced IC packaging is rapidly evolving, driven by the imperative to support innovation on increasingly complex and high-capacity products. The broad industry trend toward heterogeneous integration of diverse die and chiplets into advanced semiconductor package systems has led to an explosion in device complexity and pin counts.

Thus, the adoption of chiplets is accelerating at an unprecedented pace. Chiplets offer a modular solution, providing smaller, convenient building blocks that communicate via standardized interfaces, thereby enabling more flexible and cost-effective system integration.

The complexity of packages themselves is experiencing explosive growth. Package pin counts have surged from approximately 100,000 or fewer pins just a few years ago to upwards of 50 million pins in contemporary designs. Projections indicate a potential tenfold increase in these numbers within the next few years, creating a profound impact across every facet of the semiconductor ecosystem.

The sheer scale of this complexity far exceeds the capacity of any single human designer to manage effectively. A solution capable of abstracting this complexity into manageable portions is indispensable. This is precisely where hierarchical device planning becomes paramount. It represents a methodology and a suite of technologies designed to decompose overwhelming complexity into digestible, manageable segments.

A significant challenge lies in optimizing smaller functional areas within a package and subsequently reusing these optimized blocks in derivative designs. Hierarchical device planning directly addresses this by integrating established hierarchical design methodology techniques—long characteristic of chip design—into the realm of advanced IC packaging. This approach is crucial for managing the intricate interface connectivity inherent in package devices composed of numerous smaller building blocks.

However, before fully embracing a hierarchical design implementation strategy, it is crucial to acknowledge the unique challenges of IC packaging. A key challenge is that, at the top level, hierarchical floorplans require a unique set of signals for each instance of a placed building block.

Out with the old, in with the new

Designing viable bump patterns for chiplets and interposers involves managing a multitude of signals, interface I/Os, and power and ground connections. While managing perhaps 100,000 pins, as was common some years ago, was challenging but generally feasible, albeit prone to errors, the current reality of millions or even 50 million pins renders such manual approaches absolutely unworkable. Consequently, traditional assembly and planning methodologies, which model large-capacity pin devices like high-performance computing die as single, flat entities, are no longer sufficient. These flat approaches demand extraordinary designer skill to manage the connectivity and topological relationships of all functional blocks.

For packaging designs of lower complexity, and even sometimes for reasonably complex current designs, the traditional tool of choice has often been spreadsheets, particularly Microsoft Excel. While spreadsheets may suffice for small designs, they become woefully inadequate when dealing with multiple chiplets, their intricate interfaces, and the presence of interposers or silicon bridges. Furthermore, in many advanced packaging scenarios, some components are co-designed concurrently with the package itself, meaning they are in a constant state of flux. The sheer volume of data and the imperative to maintain synchronization across all these dynamic elements make these manual methods entirely unviable.

The consequences of errors in package assembly can be catastrophic. Historically, there have been instances where such errors have led to astronomical financial repercussions, even resulting in the demise of entire companies. The costs associated with a failed package, especially a large, complex one, are immense. The long-term consequence, assuming a company survives such a setback, is an invaluable—albeit painful—lesson learned, driving a commitment to never repeat the mistake. This underscores the critical need for robust, error-preventing methodologies and tools from the outset.

Modern advanced packaging demands solutions capable of managing the entire package assembly as a unified entity. This includes robust capabilities for tracking connectivity throughout the complete package assembly and providing comprehensive, full-package assembly verification in three dimensions. Given that all these advanced packages inherently involve some form of 3D integration, validating their structural and electrical integrity is paramount. It is crucial to remember that a package now comprises multiple designs stacked together—chiplets, interposers, silicon bridges, and other elements. Relying on traditional, disconnected methods for such complex assemblies introduces an unacceptably high risk of failure. This necessitates a transition to a more synchronized and integrated design methodology.

A new paradigm for managing IC complexity

This is precisely where hierarchical device planning introduces a new paradigm. The core innovation lies in the ability to hierarchically define parameterized regions of component pins. Instead of grappling with the minutiae of every single pin and its connectivity, designers can now work with these abstracted, hierarchically defined regions. This allows them to plan, design, analyze, and optimize the overall package layout at a higher level of abstraction, deferring the detailed pin-level considerations until they are genuinely necessary.

A significant advantage of this approach is the automatic synthesis of all pins according to the parameters set within these defined regions. Package designers are intimately familiar with the frequent design changes that occur throughout the development flow. Traditionally, implementing each change individually was a time-consuming and error-prone process. With hierarchical device planning, designers can simply modify the relevant parameters of a region, and the system automatically updates the circuit. This capability can save days, or even weeks, of design effort, representing a critical leap in efficiency and responsiveness to design iterations.
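A minimal sketch of what a parameterized pin region might look like is shown below. The class name, fields, and values are hypothetical illustrations of the concept, not the Innovator 3D IC data model.

from dataclasses import dataclass

@dataclass
class PinRegion:
    # Hypothetical parameterized bump/pin region (illustrative, not a Siemens API).
    name: str
    origin_um: tuple   # (x, y) placement of the region in micrometers
    rows: int
    cols: int
    pitch_um: float    # bump pitch inside the region
    net_prefix: str    # e.g. "VDD" or "UCIE_TX"

    def synthesize_pins(self):
        """Generate (net, x, y) for every bump from the region parameters."""
        x0, y0 = self.origin_um
        return [(f"{self.net_prefix}[{r * self.cols + c}]",
                 x0 + c * self.pitch_um,
                 y0 + r * self.pitch_um)
                for r in range(self.rows) for c in range(self.cols)]

# Changing a single parameter (say, pitch) regenerates every bump instead of
# requiring thousands of manual edits.
region = PinRegion("die0_ucie_tx", origin_um=(100.0, 250.0), rows=4, cols=16,
                   pitch_um=45.0, net_prefix="UCIE_TX")
print(len(region.synthesize_pins()), "bumps synthesized")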

Figure 1. Connectivity in a hierarchical IC package floorplan, showing that bumps within the sub-devices are represented at the top level.

Enabling 3D IC solutions

The trajectory of IC packaging development mandates the adoption of appropriate methodologies and tools that directly address the designer’s evolving challenges. Foremost among these is the need to shield designers from being overstretched by complexity, a common outcome when tools fail to provide adequate support. Designers require assistance to operate at a practical abstraction level—one that renders the design manageable. Presenting a designer with 50 million pins without context offers no actionable insight into optimizing the design. Instead, tools must facilitate a higher-level view that guides optimal design decisions.

Furthermore, these solutions must provide access to multi-domain analysis very early in the design cycle. This includes critical analyses such as signal integrity (SI), power integrity (PI), thermal analysis, and thermal stress analysis. Performing these analyses proactively, long before the package layout is finalized, is essential for driving early design decisions and ensuring the correct path is taken when choices arise. Discovering major issues post-layout is extraordinarily costly, often necessitating a complete package redesign—a luxury rarely afforded by tight development schedules. Early analysis is therefore indispensable.

Siemens’ Innovator 3D IC portfolio solution exemplifies this integrated approach, supporting designers from initial planning and optimization through detailed analysis and package layout.

Figure 2. Innovator 3D IC solution suite cockpit.

A critical component of this solution is robust work-in-progress data management. The sheer volume of data involved in a modern package design demands meticulous tracking to ensure the correct versions of all files are utilized. Forgetting to import an updated Verilog file, for instance, can lead to the fabrication of an incorrect package. Automated tracking and error detection mechanisms are vital to mitigate the numerous potential points of failure. By integrating these capabilities within a unified, AI-infused user experience, solutions like the Innovator 3D IC solution suite are intuitive and efficient for designers to adopt and utilize.

Package designers must leverage every available tool to address the significant device complexity and the explosion in pin counts characteristic of today's IC packaging designs. In support of this, a concerted effort is underway to develop new solutions, standards, and methodologies. For instance, new interface standards, such as UCIe (Universal Chiplet Interconnect Express), Bunch of Wires (BOW), and Advanced Interface Specification (AIS), are emerging to standardize communication between chiplets. Concurrently, advanced design methodologies and tools are being developed to assist design teams and facilitate seamless interaction with foundries, substrate fabricators, and OSAT providers.

It is crucial for all professionals involved in package design to recognize that effective solutions are available. While many designers may perceive their specific challenges as unique, in most cases the underlying problems are shared across the industry. Fortunately, this leads to a common set of solutions. By actively seeking out and adopting these advanced tools and methodologies, designers can more effectively tackle the complexities of 3D ICs and heterogeneous integration, ensuring the successful realization of next-generation electronic systems.

Contact Siemens EDA

Also Read:

A Compelling Differentiator in OEM Product Design

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Visualizing hidden parasitic effects in advanced IC design 


CDC Verification for Safety-Critical Designs – What You Need to Know
by Mike Gianfagna on 11-13-2025 at 6:00 am


Verification is always a top priority for any chip project. Re-spins result in lost time-to-market and significant cost overruns. Chip bugs that make it to the field present another level of lost revenue, lost brand confidence and potential costly litigation. If the design is part of the avionics or control for an aircraft, the stakes go up, way up. There are substantial rules and guidelines to be adhered to for this class of design. And some of those rules have evolved over decades, making interpretation and adherence challenging.

A recent white paper from Siemens Digital Industries Software examines this class of design for a particularly vexing design bug – clock domain crossing (CDC) issues and resultant metastability. The white paper does a great job explaining the subtleties of CDC bugs and how to address those issues against the rigors of safety-critical rules for airborne systems. If your chip is destined for airborne use, this white paper is must-read. A link is coming, but first I’d like to provide an overview of CDC verification for safety-critical designs – what you need to know.

CDC Challenges

The white paper does a great job explaining how CDC bugs can cause problems with a chip design, and in particular intermittent problems. In safety-critical applications, an intermittent problem can be difficult to find, and bugs that make it to silicon can result in catastrophic consequences.

To summarize the issue, we need to examine the impact of metastability on a design. This term refers to what happens in digital circuits when clock and data inputs of a flip-flop change at approximately the same time. When this occurs, the flip-flop output can oscillate and settle to a random value. This metastability will lead to incorrect design functionality such as data loss or data corruption on CDC paths. The more asynchronous clock domains there are in a design, the worse the problem can become. And in today’s highly integrated and concurrent designs, the number of independent clock domains in a typical device is growing.
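The standard synchronizer mean-time-between-failures (MTBF) estimate makes this concrete: failure rates fall exponentially with the settling time allowed before the next stage samples the signal, which is why CDC failures are intermittent and rarely show up in simulation. The sketch below uses assumed typical device constants, not figures from the white paper.

import math

def synchronizer_mtbf_s(t_res_s, tau_s, t_w_s, f_clk_hz, f_data_hz):
    """MTBF (seconds) of a synchronizing flop: exp(t_res/tau) / (T_w * f_clk * f_data)."""
    return math.exp(t_res_s / tau_s) / (t_w_s * f_clk_hz * f_data_hz)

TAU_S = 20e-12   # assumed metastability resolution time constant
T_W_S = 50e-12   # assumed metastability capture window
F_CLK, F_DATA = 500e6, 50e6  # assumed receiving clock and data toggle rates

for label, t_res in (("200 ps of settling slack", 0.2e-9),
                     ("one extra clock cycle (2.2 ns)", 2.2e-9)):
    years = synchronizer_mtbf_s(t_res, TAU_S, T_W_S, F_CLK, F_DATA) / 3.15e7
    print(f"{label}: MTBF ~ {years:.1e} years")

Under these assumptions, a flop given only 200 ps of settling slack fails dozens of times per second, while allowing a full extra clock cycle (as a second synchronizer stage does) pushes the MTBF to astronomically large values. That exponential sensitivity is the rationale behind the structural checks described below.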

The white paper presents many examples that illustrate the types of problems to look for and how to correct them. It points out that this is a serious problem in safety-critical designs in that it frequently causes chips to exhibit intermittent failures. These failures generally go undetected during simulation (which tests a chip’s logic functions) and static timing (which tests for timing – within a single clock domain).

The paper goes on to explain that a typical verification methodology simply does not consider potential bugs from clock-domain crossing paths. Thus, if CDC paths are not explicitly verified, CDC bugs are typically identified in the actual hardware device in the field, a very bad outcome for safety-critical designs.

Design Assurance – the Good and the Bad

Another important part of this story is the set of guidelines that must be adhered to when sourcing safety-critical airborne devices. The white paper describes document RTCA/DO-254 “Design Assurance Guidance for Airborne Electronic Hardware” in detail. This specification is used by the Federal Aviation Administration (FAA), European Union Aviation Safety Agency (EASA), and other aviation authorities to ensure that complex electronic hardware used in avionics works reliably as specified, avoiding faulty operation and potential air disasters.

This goal is clearly important. One of the challenges of implementing a methodology to achieve the goal is the size and scope of the DO-254 spec. The FAA began enforcing it in 2005. The document is modeled after earlier specifications for certifying software, which were originally published over 45 years ago.

So, there is a lot of information in this document, both old and new. All in-flight hardware (FPGA or ASIC designs) must now comply with DO-254, and correct interpretation of the requirements and implementation in a production design flow presents challenges.

Digging deeper, the white paper explains that DO-254 projects assign a design assurance level (DAL) of A through E. The level corresponds to the criticality of a resulting failure. A failure in a level A design would result in catastrophic conditions (such as the plane crashing), while a failure in a level E design might mean that some passengers could be subject to minor inconvenience. Level A (catastrophic) and level B (hazardous/severe/major) projects must not only follow DO-254 processes but must also address additional safety concerns.

How to Automate CDC Verification

The white paper then presents a detailed overview of how to build a methodology that will conform to DO-254 requirements and deliver reliable, safe chips. It is explained that a comprehensive CDC verification solution must do four distinct things:

  1. Perform a structural analysis
  2. Verify transfer protocols
  3. Globally check for reconvergence
  4. Implement netlist glitch analysis

Details of these tasks are presented, as well as some of the unique capabilities of the Siemens Questa CDC solution. The white paper explains that many companies have recognized the benefits of Questa CDC and have adopted it as an added design assurance strategy as part of their verification arsenal. Specific details are presented for several real commercial implementations using Questa CDC. These examples cover many diverse projects:

  • U.S.-based storage/networking company
  • Large global computer company
  • Large Japanese consumer products company
  • U.S.-based wireless communications provider
  • Maker of military space systems
  • Large aerospace technology company
  • Defense and aerospace systems supplier

The white paper goes on to explain that one of the key aspects of the DO-254 process is to determine that the tools used to create and verify designs are working properly. The process to ensure this is called “tool assessment.”

There are many dimensions to this process, and substantial details about how to achieve a successful tool assessment are presented. The diagram below provides an overall flow of the process.

Design and verification tool assessment and qualification flow diagram

A tool vendor cannot assess or qualify its own tools, and the FAA does not provide blanket approval for use of any tools in DO-254 projects.

This white paper does provide valuable details and suggestions for getting through the assessment process for Questa CDC as easily as possible.

To Learn More

If you’re involved in the development of safety-critical electronics this white paper provides substantial value regarding how to minimize CDC risks and how to build a compliant design flow.

The information presented is detailed, clear and actionable. And there is an Appendix with many additional and useful references. You can get your copy of Automating Clock-Domain Crossing Verification for DO-254 (and Other Safety-Critical) Designs here. 

You can also learn more about Questa CDC here. And that’s CDC verification for safety-critical designs – what you need to know.

Also Read:

A Compelling Differentiator in OEM Product Design

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Visualizing hidden parasitic effects in advanced IC design 


Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots
by Daniel Nenni on 11-12-2025 at 10:00 am


In the rapidly evolving landscape of connected devices, where artificial intelligence meets the physical world, Ceva  has unveiled a groundbreaking solution: the Ceva-Waves Wi-Fi 7 1×1 client IP. Announced on October 21, 2025, this IP core is designed to supercharge AI-enabled IoT devices and pioneering physical AI systems, enabling them to sense, interpret, and act with unprecedented responsiveness. As IoT ecosystems expand, projected to encompass over 30 billion devices by 2030, reliable, low-latency connectivity becomes paramount. Ceva’s innovation addresses this need head-on, leveraging the IEEE 802.11be standard to deliver ultra-high performance in compact, power-constrained form factors.

At its core, the Ceva-Waves Wi-Fi 7 1×1 client IP is a turnkey solution tailored for client-side applications, such as wearables, smart home gadgets, security cameras, and industrial sensors. Unlike bulkier access point implementations, this 1×1 configuration (one spatial stream for transmit and receive) optimizes for cost-sensitive, battery-powered designs, making it ideal for mass-market adoption. Key Wi-Fi 7 features baked into the IP include Multi-Link Operation, which allows simultaneous data transmission across multiple frequency bands (2.4 GHz, 5 GHz, and 6 GHz) for seamless aggregation and reduced interference; 4096-QAM modulation for 20% higher throughput than Wi-Fi 6; and enhanced puncturing to dodge congested channels dynamically. These capabilities slash latency to sub-millisecond levels, boost peak speeds beyond 5 Gbps, and enhance reliability in dense environments, crucial for real-time applications like augmented reality glasses or autonomous drones.
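The "20% higher throughput" figure follows directly from the modulation math: 4096-QAM carries 12 bits per symbol versus 10 bits for Wi-Fi 6's 1024-QAM, all other factors held equal. A one-line sketch:

import math
# Bits per symbol scale with log2 of the QAM order; 12/10 - 1 = 20% gain.
gain = math.log2(4096) / math.log2(1024) - 1
print(f"throughput gain from modulation alone: {gain:.0%}")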

What sets this IP apart is its synergy with Ceva’s broader Smart Edge portfolio, particularly the NeuPro family of NPUs. When paired with Wi-Fi 7 connectivity, these NPUs empower devices to process sensor data, run inference models, and make decisions locally at the edge. This on-device intelligence minimizes cloud dependency, fortifying data privacy by keeping sensitive information, like health metrics from a fitness tracker, off remote servers. It also extends battery life by up to 30% through efficient power management and reduces operational costs by curbing data transmission volumes. In essence, Ceva’s solution transforms passive IoT nodes into proactive physical AI agents that perceive their surroundings (via cameras or microphones), reason through AI algorithms, and act autonomously, whether adjusting a smart thermostat based on occupancy or alerting factory workers to hazards.

Tal Shalev, Vice President and General Manager of Ceva’s Wireless IoT Business Unit, emphasized the strategic timing: “Wi-Fi 7’s breakthroughs in speed, resilience, and latency are driving rapid adoption. Our turnkey solution helps customers cut complexity and time-to-market delivering smarter, more responsive IoT experiences powered by edge intelligence.” Already licensed by multiple leading semiconductor firms, the IP has seen swift uptake, underscoring its market readiness. Industry analysts echo this enthusiasm; Andrew Zignani, Senior Research Director at ABI Research, notes, “Wi-Fi 7 is set to transform IoT by enabling the low-latency, high-throughput connectivity required for real-time edge intelligence and Physical AI. Solutions like Ceva’s are critical to bringing these capabilities into cost-sensitive, battery-powered devices.”

The implications ripple across sectors. In consumer wearables, imagine earbuds that not only stream audio but also perform real-time voice-to-text translation without lag. Smart homes could orchestrate ecosystems where lights, locks, and appliances collaborate via mesh networks, anticipating user needs through predictive AI. Industrial IoT benefits from resilient links in harsh environments, enabling predictive maintenance that prevents downtime. For emerging physical AI—think robotic companions or self-navigating vacuums—Wi-Fi 7 provides the deterministic backbone for multi-device orchestration, fostering collaborative intelligence akin to a “swarm” of sensors.

Bottom Line: Ceva's move positions it as a linchpin in the Wi-Fi 7 rollout, with over 60 licensees already harnessing the Ceva-Waves family for diverse applications. As edge computing surges, this IP doesn't just connect devices; it imbues them with agency, paving the way for a future where AI seamlessly bridges digital and physical realms. By democratizing advanced connectivity, Ceva accelerates innovation, ensuring that smarter, more intuitive experiences are accessible to all.

Contact CEVA

Also Read:

A Remote Touchscreen-like Control Experience for TVs and More

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier


Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU
by Daniel Nenni on 11-12-2025 at 8:00 am


In the fast-paced world of AI development, bridging the gap from trained models to production-ready applications can feel like an eternity. Enter Semidynamics’ newly launched Inferencing Tools, a game-changing software suite designed to slash deployment times on the company’s Cervell RISC-V Neural Processing Unit. Announced on October 22, 2025, these tools promise to transform prototypes into robust products in hours, not weeks, by leveraging seamless ONNX Runtime integration and a library of production-grade samples.

Semidynamics, a European leader in RISC-V IP cores, has built its reputation on high-performance, open-source hardware tailored for machine learning. The Cervell NPU exemplifies this ethos: an all-in-one RISC-V architecture fusing CPU, vector, and tensor processing for zero-latency AI workloads. Configurable from 8 to 256 TOPS at INT4 precision and up to 2GHz clock speeds, Cervell scales effortlessly for edge devices, datacenters, and everything in between. Its fully programmable design eliminates vendor lock-in, supporting large language models, deep learning, and high-performance computing with standard RISC-V AI extensions. Whether powering on-device assistants or cloud-scale vision pipelines, Cervell’s efficiency stems from its unified instruction stream, enabling infinite customization without fragmented toolchains.

At the heart of the Inferencing Tools is a high-level library layered atop Semidynamics’ ONNX Runtime Execution Provider for Cervell. Developers no longer wrestle with model conversions or low-level kernel tweaks. Instead, they point to an ONNX file, sourced from repositories like Hugging Face or the ONNX Model Zoo, select a configuration, and launch inference directly on Cervell hardware. Clean APIs handle session setup, tensor management, and orchestration, stripping away boilerplate code and minimizing integration risks. This abstraction sits comfortably above the Aliado SDK, Semidynamics’ kernel-level library for peak performance tuning, offering two lanes: rapid prototyping via the Tools or fine-grained optimization via Aliado.

ONNX Runtime integration is the secret sauce. As an open-standard format, ONNX ensures compatibility across ecosystems, and Semidynamics’ Execution Provider plugs it into Cervell’s vector and tensor units via the Aliado Kernel Library. The result? Plug-and-play execution for thousands of pre-trained models, with validated performance across diverse topologies. No more custom wrappers or compatibility headaches—developers focus on application logic, not plumbing.
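For readers unfamiliar with ONNX Runtime's execution provider mechanism, the sketch below shows the general shape of such a flow. The provider string "CervellExecutionProvider", the model file name, and the input shape are assumptions for illustration, not Semidynamics' documented API.

import numpy as np
import onnxruntime as ort

# Ask ONNX Runtime to dispatch to a vendor execution provider if present,
# falling back to CPU. "CervellExecutionProvider" is a hypothetical name.
available = ort.get_available_providers()
preferred = [p for p in ("CervellExecutionProvider", "CPUExecutionProvider") if p in available]

session = ort.InferenceSession("resnet50.onnx", providers=preferred)  # any ONNX model file

input_name = session.get_inputs()[0].name
dummy_image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
outputs = session.run(None, {input_name: dummy_image})
print(outputs[0].shape)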

To supercharge adoption, Semidynamics includes production-grade samples that serve as blueprints for real-world apps. For LLMs, expect ready-to-run chatbots using Llama or Qwen models, complete with session handling and response generation. Vision enthusiasts get YOLO-based object detection pipelines for real-time analysis, while image classifiers draw from ResNet, MobileNet, and AlexNet for tasks like medical imaging or autonomous navigation. These aren’t toy demos; they’re hardened for scale, with built-in error handling and optimization hooks.

The benefits ripple outward. “Developers want results,” notes Pedro Almada, Semidynamics’ lead software developer. “With the Inferencing Tools, you’re running on Cervell, prototype in hours, then harden for production.” Teams report shorter cycles, predictable latency, and maintainable codebases, ideal for embedding AI in agents, assistants, or edge pipelines. Complementing this is the Aliado Quantization Recommender, a sensitivity-aware tool that scans ONNX models for optimal bit-widths (INT4 to INT2), balancing accuracy and bandwidth without exhaustive trials.

Bottom line: In an era where AI deployment lags innovation, Semidynamics’ Inferencing Tools democratize Cervell’s power. By fusing open hardware with streamlined software, they accelerate the journey from lab to launch, empowering developers to ship smarter, faster products. As RISC-V gains traction in AI, expect this suite to redefine edge inferencing—open, scalable, and unapologetically efficient.

Also Read:

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?

Semidynamics adds NoC partner and ONNX for RISC-V AI applications


Adding Expertise to GenAI: An Insightful Study on Fine-Tuning
by Bernard Murphy on 11-12-2025 at 6:00 am


I wrote earlier about how deep expertise, say for high-quality RTL design or verification, must be extracted from in-house know-how and datasets. In general, such methods start with one of many possible pre-trained models (GPT, Llama, Gemini, etc.). To this, consultants or in-house teams add fine-tuning training, initially through supervised fine-tuning (SFT), refined through reinforcement learning with human feedback (RLHF), and subsequently enhanced and maintained through iterative refinement. ChatGPT claims this is the dominant flow (I am inclined to think it is fairly accurate in its own domain). Supervision is through labeling (question/answer pairs). In most cases relying on human labeling alone is too expensive, so we must learn how to automate this step.

 A nice example of SFT from Microsoft

This Microsoft paper studies two different methods to fine-tune a pre-trained model (GPT-4), adding expertise on recent sporting events. The emphasis in this paper is on the SFT step rather than on the steps that follow. Before you stop reading because this isn't directly relevant to your interests, note that I can find no industry-authored papers on fine-tuning for EDA. I know from a comment at a recent conference that Microsoft hardware groups are labeling design data, so I suspect topics like this may be a safe proxy for publishing research in areas relevant to internal proprietary work.

Given the topic tested in the study, the authors chose to fine-tune with data sources (here Wikipedia articles) added after the training cutoff for the pre-trained model, in this case September 2021. They looked at two approaches to fine-tuning on this corpus, one token-based and one fact-based.

The token-based method for label generation is very simple and mirrors the standard practice for generation per the paper. Here they seed with a manually generated label per the article overview section and prompt to generate a bounded set of labels from the article. The second method (which they call fact-based) is similar except that it prompts the model to break down complex sentences if needed into multiple atomic sentences. The authors also allowed for some filtering in this case to remove facts irrelevant to the purpose of the study. Here also the model was asked to generate multiple unique labels.
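A minimal sketch of how the two labeling approaches might be templated is shown below. The prompt wording and the call_llm placeholder are hypothetical and are not taken from the Microsoft paper.

# Hypothetical templates for the two label-generation strategies (illustration only).
TOKEN_BASED_PROMPT = (
    "From the article section below, write up to {n} question/answer pairs "
    "covering its content.\n\n{section}"
)
FACT_BASED_PROMPT = (
    "Break the article section below into atomic single-fact sentences, drop "
    "facts irrelevant to {topic}, then write one question/answer pair per "
    "remaining fact.\n\n{section}"
)

def call_llm(prompt: str) -> list[dict]:
    """Placeholder: send the prompt to the labeling model, return [{'question': ..., 'answer': ...}]."""
    raise NotImplementedError("wire this to the model API of your choice")

def generate_labels(section: str, topic: str, fact_based: bool = True, n: int = 10) -> list[dict]:
    prompt = (FACT_BASED_PROMPT.format(topic=topic, section=section) if fact_based
              else TOKEN_BASED_PROMPT.format(n=n, section=section))
    return call_llm(prompt)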

The paper describes training trials, run in each case on the full set of generated labels, also subsets to gauge sensitivity to training sample size. Answers are validated using the same model running a test prompt (like a test for a student) allowing only pass/fail responses.

The authors compare accuracy of results across a variety of categories against results from the untuned pre-trained model, their range of scaled fine-tuned options, and against RAG over the same sections used in fine-tuning but based on Azure OpenAI hybrid search. They conclude that while token-based training does increase accuracy over the untrained model, it is not as uniform in coverage as fact-based training.

Overall they find that SFT significantly improves performance over the base pre-trained model within the domain of the added training. In this study RAG outperforms both methods but they get close to RAG performance with SFT.

I don’t find these conclusions entirely surprising. Breaking down complex sentences into individual labels feels like it should increase coverage versus learning from more complex sentences. And neither method should be quite as good as vector-based search (more global similarity measures) which could catch inferences that might span multiple statements.

Caveats and takeaway

Fine-tuning is clearly still a very dynamic field, judging by recommended papers from Deep Research in Gemini and ChatGPT, complemented by my own traditional research (Google Scholar for example, where I found this paper). There is discussion of synthetic labeling, though concerns that this method can lead to significant errors without detailed human review.

One paper discusses how adding a relatively small set (1000) of carefully considered human-generated labels can be much more effective for performance than large quantities of unlabeled or perhaps synthetically labeled training data.

There is also concern that under some circumstances fine-tuning could break capabilities in the pre-trained model (this is known as catastrophic forgetting).

My takeaway is that it is possible to enhance a pre-trained model with training data and modest training prompts and get significantly better response accuracy than the pre-trained model alone could provide. However, expert review is important to build confidence in the enhanced model, and it is clear that 100% model accuracy is still an aspirational goal.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI RTL Generation versus AI RTL Verification


EDA Has a Value Capture Problem — An Outsider’s View
by Admin on 11-11-2025 at 10:00 am


By Liyue Yan (lyan1@bu.edu)

Fact 1: In the Computer History Museum, how many artifacts are about Electronic Design Automation (EDA)? Zero.

Fact 2: The average starting base salary for a software engineer at Netflix is $219K, and that number is $125K for Cadence; the starting base salary for a hardware engineer at Cadence is $119K (source: levels.fyi)

Fact 3: EDA industry revenue has been 2% of semiconductor industry revenue for over 25 years, and only recently climbed to 3%.

Part 1: Overview 
Starting Point

I started my inquiry into EDA's value capture issue as a research project, puzzled by the fact that this critical technology has historically captured only 2% of the revenue generated by the semiconductor industry. As an outsider, it was not obvious to me whether this proportion is reasonable or not. While the question of how exactly EDA is undervalued or overvalued (and under-charging or over-charging) warrants a whole other discussion, it is not surprising that EDA folks want a larger share. When I was strolling around conferences pitching this 2% number to engage people in my research, the responses converged:

When asked, employees from the bigger vendors, particularly senior engineers, expressed strong opinions that the technologies are under-appreciated. They believe that the whole industry should get paid more and they should get paid more. “10%!” some said. Well, that might be too greedy. Some of the engineers questioned what exactly the salespeople were doing that the industry did not get higher revenue. In general, employees from bigger vendors can't discuss details but asked me to share my findings when they are complete. Because they want to know. And everybody wants to know, but nobody talks.

Smaller vendors (salespeople, engineers, founders, sometimes that one person playing all these roles), uniformly pointed the finger at the big players: “It is them! Those companies squeeze us by keeping their prices down. It is so unfair. Not only do we suffer, but the whole industry is also underpriced! You should ask them what they are doing!” The expression, the tone, and the structure of their blaming account are shockingly similar, as if they were rehearsed from a secret union whose sole purpose was to recount their suffering to each other. While small EDA companies have no problems sharing with me, they believe they are not so relevant as the industry-level revenue and profit are driven by the big three.

There is an advantage as an outsider. My ignorance provides me courage to throw any random question to any random person who is willing to catch it. When I threw my question at an executive panel, Joe Costello, former CEO of Cadence, responded along the lines of “Profit margin? I don’t think about it. I think about the value we provide through our products. I always believe that if we bring value to the customer, then we will get value.” Oh no.

Wally Rhines, former CEO of Mentor, gave a milder response consistent with what he wrote in his book, which is that he believes that EDA has a healthy long-term relationship with its customers: “I’m convinced that salespeople for the EDA industry work with their semiconductor customers to provide them with the software they need each year, even in times of semiconductor recessions, so that the average spending can stay within the budget” (Predicting Semiconductor Business Trends After Moore’s Law, Rhines, 2019). After a short conversation with Wally, I am convinced that he is a strong market believer who thinks we are close to a good equilibrium (and if not, that the market can always sort itself out of any temporary deviation). Except that, first, there can be multiple equilibria, and we don’t know whether we are in a good one that sustains EDA’s innovation, and second, even if the invisible hand can work its magic and bring us to a better point, who knows how long it would take. In the long run, the market will correct itself, but “in the long run we are all dead” (Keynes, 1923).

So, it seems that EDA people are thinking a lot about value creation but little about value capture, expecting customers to automatically appreciate their products by paying a fair amount. It turns out I wasn’t imagining this. Charles Shi, senior analyst at Needham, pointed out at DAC 2024 that many EDA folks believe that if they create value, then they will automatically get value, and unfortunately, that is not true. So, I set the goal of understanding why EDA gets a constant 2% share of semiconductor revenue and how this number was reached. We know in theory this outcome is due to a combination of macro-level market structure and micro-level firm practices, but I would like to know what those practices are and how they contribute to value capture. More specifically, I want to know how firms are selling to maximize their gain. The next question, then, is whom to talk to and what to ask, which is essentially the entire process. The list includes salespeople, engineers, CAD teams, and buyers.

Jumping to the end—a preview of findings

After talking to more than a dozen decision makers, formally or informally, answers quickly emerged. Even if the current number of participants may not be enough to guarantee an academic paper and the project is ongoing, I thought it would be interesting to share some key findings. Most of them should not come as a surprise. Think of it as a mirror (perhaps a slightly distorted one) – from time to time we do need to look into one, though the observations there should not be terribly unexpected.

 

A quick preview of some key findings:

  1. Professional negotiation teams are often used by the buyers, but not by the vendors.
  2. Given the long contract terms, vendors cannot shop customers easily. That means, at contract renewal times, customers have the leverage of alternatives, whereas vendors do not.
  3. Negotiations are almost always done by the end of fiscal year or fiscal quarter, even for private vendors. (Buyers insist this is always the case, while some vendors deny it.)
  4. Heavy quota pressure for sales potentially contributes to heavy discounts.
  5. Customers are only willing to pay a small fraction to small vendors and startups for products that are similar to or even better than those of big vendors.
  6. Bundling practice erodes monopoly rents, giving away value from the better products.
  7. A small number of large customers often account for the majority of a vendor’s revenue.

The above points are mostly around contracting and negotiating, with some around structures and others around practice, and none are in favor of EDA vendors. Then there are some more positive factors:

  1. Historically sticky users.
  2. Low competition pressure from startups.
  3. Customers’ product search is only triggered by pain, never by price/cost.

The seemingly good news for EDA, of system companies as new customers, may not be so good after all. We tend to think that they have bigger budgets and are more generous, therefore increasing EDA's profit margin. But they also have shorter contracts in general and employ younger engineers who are more comfortable with switching tools. This increases their bargaining power. Additionally, less experienced users require more technical support, increasing costs for the vendors.

In all, industry structure, business model, and contracting practice combine as driving factors for the value capture, which I unpack in the next part.

Part 2 What I Have Learned

What business is Electronic Design Automation (EDA) in?

One can categorize a business through various lenses. To outsiders, one can explain that EDA is in the software industry. To someone who is interested in technologies, one can say EDA is in the semiconductor industry. If I were to explain it to business or economics researchers, I would say that EDA supplies process tools for chip design.

Why does it matter what business EDA is in? First, research studies suggest that stakeholders evaluate products and businesses through the lens of category, in plain words, putting them in a box first so they can be more easily understood and compared to similar offerings and players. If a business cannot be understood, i.e., put in a box, then it risks being devalued. And in this case, if investors and analysts do not know the proper reference group for EDA, then they would not cover it (or provide buy recommendations), and a mere “no coverage” from analysts can negatively affect stock market evaluation. At least that’s what existing studies say.

EDA is really one of a kind, and a tiny market. So, what reference group can an analyst use? What other stocks is the same analyst covering? CAD? The customers and downstream structure can vary a lot. Semiconductors? Yes, they are often covered by the same analyst, but the semiconductor industry is really not a good reference, except to the extent that their revenues are co-influenced by the downstream applications. Who wants to cover it? Nobody, unless you have an EE-related background. So what box do stock analysts put EDA into? The Black Box.

Willingness-to-pay is the second reason one should ask what business EDA is in. Value is in the eye of the beholder, and we should really ask how hardware designers feel about EDA.

The value of a business is often categorized into gain creators and pain relievers. EDA is certainly not the former, which is often driven by individual consumption chasing temporary joys. People say businesses that leverage the seven deadly sins are the most profitable. Amen! Pride, envy, and lust – social media and cosmetics. Gluttony – fast food, alcohol. Sloth – delivery, gaming, sports watching, etc. Rest assured that EDA satisfies none of these criteria. EDA is not consumable. EDA is not perishable. And EDA is not going to make the user addicted.

So, EDA is more like a pain reliever. Well, if we are not careful, some engineers may even assert that it is in the “pain producing” business: “Ugly!” “Frustrating!” “Outdated.” “Stupid.” You can hear the pain. But when I pointed out that perhaps the alternative of no tools is more painful, there wasn’t much of an argument. One issue is that we don’t know the counterfactual well enough to develop a better appreciation. You never know what you have until it’s gone.

All of the above suggests that it is naturally difficult for EDA tools to command higher prices, even before the challenges in competition, business model, and business practices, which we will dive into next.

But perhaps the discussion should first start with some positive notes. There are a few factors that work in favor of the big EDA vendors, which account for 90% of the market:

Sticky users. IC designers are constantly under time pressure. Changing tools means adjustment and adjustment means loss of time. No one wants to change a routine unless it is absolutely necessary, which means EDA vendors can get by as long as they are not doing a terrible job.

Little competition from startups and small players. There were many startups that outdid incumbents and won market share in the past. That time has gone. A combination of lack of investors, increased complexity of problems, and fully exercised oligopoly power has led to a decline of EDA startups. Those few small or new players have been taking the scraps: providing solutions to peripheral problems, taking only a fraction of the price that a big vendor would get, or earning nothing at all if the big vendor decides to provide the competing tool for free in a bundle with other products.

Customers’ product search is only triggered by pain. Customers do not initiate a search for alternative tools just because the existing ones are expensive. That means there isn’t much price-competition pressure once you’ve got the account. But it also means the pressure is on the initial pricing. Once you lock the customers in, as long as the pain is manageable, the accounts stay.

To the best of my understanding, EDA vendors are leveraging the above factors fairly well, especially factor No. 2, squeezing the small players (which is not without costs). However, there are even more factors that negatively affect the industry’s value capture, among which I rank incentive design and pricing as the top culprits.

Quota

Here goes a story:

“It was 2010, Sep 3rd, just one month away from Sep 30th, the fiscal year end of Company X. The corporate management team sets a goal to book $300 million for next year, and that translates into many sales quotas for its foot soldiers. John works in sales, and his job is to sell all types of tools, to either new or existing customers. John has a yearly quota of $2 mil. He has managed to book $1.4 mil so far, and he has a chance of booking a few new customers for $850k. But he missed the quota last quarter, and if he does not deliver this time, he will be let go. John has a mortgage with a $4k monthly payment and two kids, one in 3rd grade and the other just entering school. John’s wife has not been particularly happy about his recent working hours and heavy travel.

Ten days later, on Sep 13th, John closed a deal, bringing his remaining balance down to $300k from $600k. John expected to close the next new customer, Company Elppa, at $450 – $550k for 20 licenses. The negotiation went on for a week, and the customer stood firm at $350k, claiming that was their budget. By Sep 21st, John was stressed and asked his manager if it was possible to lower the price to 350. The manager nodded yes to 400, as he was trying to meet his own quota. The eventual deal was $400k, with promised additional support to the new customer. John was relieved. The customer was happy. The management was ok with this season’s performance. The licensing price for Elppa effectively dropped from the target $550k to $400k, a drop of about 27%. Hopefully this gap can be closed in the future.

Two years later, John moved to Company Y. His accounts were left with Ron. Ron couldn’t find a way to increase the price by more than 5% with Elppa since Elppa was also trying to buy more licenses. Ron ended up charging a 5% price increase for 10 more licenses. The gap was never closed.”

This is absolutely a made-up story. Except that there was a time when an employee had to be let go if he had not met the quota for two consecutive quarters, and that customers do always want to negotiate at the end of a quarter to leverage this quota pressure, and that it is difficult to increase the price for an established account. And the mortgage probably is four thousand a month. Rumor has it some tools are given away for free in the bundle to attract clients, and the units behind those tools have become unsustainable as a result. Rumor also has it there was once a 98% discount from the listing price at a desperate time (compared to the usual 60-80% discount rate). While the discontinuous incentive design, the quota system used by all vendors, can increase sales on some occasions, it can also point effort in the wrong direction, especially when the quota is always a total dollar amount.
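
To make the story’s arithmetic concrete, here is a minimal sketch in Python that uses only the hypothetical numbers from the made-up story above (target and closed prices, license counts); it shows how the initial concession, followed by a small renewal increase spread over more licenses, keeps eroding the per-license price:

  # Hypothetical numbers from the made-up story above.
  target_deal = 550_000      # target price for 20 licenses
  closed_deal = 400_000      # price actually closed under quota pressure
  licenses_y1 = 20

  discount = 1 - closed_deal / target_deal
  print(f"Initial concession: {discount:.0%}")              # ~27%

  # Two years later: a 5% price increase, but for 10 more licenses.
  renewal_deal = closed_deal * 1.05
  licenses_y3 = licenses_y1 + 10

  per_license_target = target_deal / licenses_y1            # $27,500
  per_license_y1 = closed_deal / licenses_y1                 # $20,000
  per_license_y3 = renewal_deal / licenses_y3                # $14,000
  print(f"Per-license price: target ${per_license_target:,.0f}, "
        f"year 1 ${per_license_y1:,.0f}, renewal ${per_license_y3:,.0f}")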

The story depicts some issues that are essential to EDA’s value capture problem, given the current business model.

Negotiation

Most customers make multi-year orders. This type of contract brings comfort to both sides. Customers can focus on their work and develop routines, and vendors have the confidence that they will not starve in the next season. But this also means customers’ budgets are mostly locked into their existing contracts. In addition to limited customer turnover, EDA vendors can also expect few new clients. This puts vendors in a weak position in contract negotiation. With potential new customers locked into their existing deals, vendors have few alternatives equivalent to any existing customer at the time of negotiation. In contrast, customers can always switch vendors, despite the difficulty of executing a switch. This results in imbalanced negotiation power. The imbalance is particularly pronounced when: (1) the customers have already been using competing tools; (2) the customers are young and adaptable; (3) the customers are big.

Customers negotiate with a few EDA vendors; vendors negotiate with hundreds of customers. In these repeated negotiations, big customers largely use professional negotiators who do nothing but negotiate contracts, whereas vendors field their street-smart, people-oriented engineers-turned-salespeople. No matter how capable these salespersons are, it is hard to argue there is no skill difference, not to mention the quota pressure. And yet, these are the only moments when any value created by EDA is cemented into revenue.

Bundling

Bundling is often considered a highly effective tool for price discrimination, able to maximize value capture by hitting each customer’s willingness-to-pay (WTP) for a set of products. It works because each customer has different levels of WTP for any specific product, but the variance is largely cancelled out when a whole bundle of products is offered together. The price for the bundle is usually fixed.
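
As a minimal numerical sketch of that textbook mechanism (the willingness-to-pay figures below are invented for illustration, not EDA data): two customers with mirrored valuations for two tools, where a single fixed bundle price captures more revenue than any pair of per-product prices.

  # Invented willingness-to-pay (WTP) figures, in $k, for illustration only.
  wtp = {
      "customer_1": {"tool_A": 100, "tool_B": 40},
      "customer_2": {"tool_A": 40,  "tool_B": 100},
  }

  def revenue_separate(price_a, price_b):
      """Revenue if each tool is priced and sold on its own."""
      total = 0
      for c in wtp.values():
          total += price_a if c["tool_A"] >= price_a else 0
          total += price_b if c["tool_B"] >= price_b else 0
      return total

  def revenue_bundle(price_bundle):
      """Revenue if both tools are sold only as a fixed-price bundle."""
      return sum(price_bundle for c in wtp.values()
                 if c["tool_A"] + c["tool_B"] >= price_bundle)

  # Best uniform per-product prices: sell each tool once at 100, or twice at 40.
  print(revenue_separate(100, 100))  # 200
  print(revenue_separate(40, 40))    # 160
  # The bundle price can sit at each customer's total WTP (140) and capture it all.
  print(revenue_bundle(140))         # 280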

But bundling in EDA is nothing like the instrument used in typical pricing strategy, despite sharing the same name. In fact, there is no point in using the word “bundle” at all; a more accurate description would be “there is no fixed price, you can buy anything you want, we hope you try our other products, we will give you a discount when you buy more, and we just charge them all together.”

So how exactly is this practice harming EDA businesses?

One case is when the client wants to purchase a competitor’s product to A, possibly a superior one. The sales team may say: if you buy our B and C, we can give you A for free!

This sort of “bundle” is essentially bad pricing competition that squeezes out the small players, and big vendors themselves have to bear the consequences of low pricing for a long time due to locked-in contracts, as we discussed earlier.

The bigger vendors also have not developed mutual forbearance toward each other to avoid competing on the same product features. Business school classrooms like to use the Coca-Cola vs. PepsiCo example, through which students develop the understanding that both companies are so profitable largely because they do not compete on price, but on perceived differentiation through branding. It is possible to have different strengths instead of all striving for a complete flow.

The “bundle” practice also obscures the value of each tool. Instead of using A1 to compete with A2, a company can use A1, B1, C1 together to compete with any combination of competing tools. When you offer A1 for a low price, you are effectively eroding the profit from B1 or C1, even if perhaps one of them has monopoly power. As for how much value is given away, only these vendors can tell with their data.

Industry structure

One determining factor here is the industry structure of EDA and its downstream. The big three EDA firms alone account for 90% of market share. That market concentration should come with high market power. Well, if you look at the customers of any big vendor, the semiconductor companies, perhaps two or three can make up 70% of its revenue. Not so much power for the vendors after all.

Business model?

Many think the business model is the problem. I am not so sure. The common argument is that even though EDA provides critical tools that enable all downstream chip design and applications, the current business model does not allow it to capture a fixed share of the final value created. Others say that unless EDA can find a way, like Silicon IP, to charge a fee per production unit, the business won’t be sustainable.

Let’s take a look at the underlying logic of these arguments. This is equivalent to saying the university should charge students a fixed percentage of their future incomes; otherwise it is not fair, since university degrees enable their careers. Or that power tool manufacturers should charge customers based on the value of the house they build. Even better, the coffee you bought this morning woke you up and helped you close a $2 million deal, so pay the coffee shop 1%. It does not make sense.

But this is how people are thinking, and the pricing logic for EDA vendors follows: we guess the customers’ revenue and use that to price discriminate; we believe we should charge customers with simple designs or low-value applications lower fees, and raise the price as design complexity and downstream revenue increase. Whether consciously or not, many vendors are on the same page with this pricing logic.

There are two parts to this logic. One line of reasoning is that once a tool is provided, it can be used for 100 hours or 1,000 hours, and of course the vendor should charge more for the heavier usage. This part seems somewhat reasonable, because the vendor is essentially providing more tool time to the heavy users, even though little or no additional cost is incurred by the vendor. A solution is cloud delivery and usage monitoring, which could be implemented over time.
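
As a rough illustration of what metered, usage-based billing could look like, here is a minimal sketch; the base fee, included hours, and overage rate are placeholders, not any vendor’s actual scheme:

  # Placeholder rates; not any vendor's actual pricing.
  BASE_FEE = 50_000        # annual floor per license seat, in $
  INCLUDED_HOURS = 500     # tool hours covered by the base fee
  OVERAGE_RATE = 40        # $ per metered hour beyond the included allowance

  def annual_bill(metered_hours):
      """Base subscription plus a per-hour charge for usage above the allowance."""
      overage = max(0, metered_hours - INCLUDED_HOURS)
      return BASE_FEE + overage * OVERAGE_RATE

  # A light user (100 h) pays the floor; a heavy user (10,000 h) pays for the difference.
  print(annual_bill(100))     # 50000
  print(annual_bill(10_000))  # 430000  (50000 + 9500 * 40)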

The other argument is that some customers use the tool to produce 1 million chips whereas others only produce 10 thousand. Shouldn’t one charge the former more per license, given that the tools enable them to achieve higher revenue? I do not believe so. The vendor should charge a customer at most the added value, in this case how much cost the tool saves the customer compared to the alternatives, which also sets the customer’s maximum willingness-to-pay, and at least its own production costs (in this case, its own costs for maintenance). As for where exactly the price lands in this range, it depends on competition and negotiation, which were discussed above in the industry structure and negotiation sections.
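
A minimal sketch of that pricing band, with made-up figures: the ceiling is the customer’s savings relative to its best alternative, the floor is the vendor’s own maintenance cost, and where the price lands within the band is left to competition and negotiation:

  # All figures are hypothetical, for illustration of the bounds only.
  cost_of_best_alternative = 3_000_000   # customer's yearly cost with the next-best option
  cost_with_this_tool = 2_200_000        # customer's yearly cost if it licenses this tool
  vendor_maintenance_cost = 150_000      # vendor's own cost to maintain and support the account

  price_ceiling = cost_of_best_alternative - cost_with_this_tool   # added value, max WTP: 800k
  price_floor = vendor_maintenance_cost                            # vendor should not go below this

  assert price_floor <= price_ceiling, "No deal: the tool adds less value than it costs to provide"
  print(f"Feasible price range: ${price_floor:,} to ${price_ceiling:,}")
  # Where the price lands inside this range is settled by competition and negotiation.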

So, what are the possible remedies that can improve EDA’s value capture? 

Part 3 Remedies

Three remedies are proposed:

  1. Better incentive design
  2. Smaller accounts (but more of them)
  3. Stay in different lanes

The first proposed remedy focuses on the incentive issues in negotiation, the second on negotiation power, and the third on the willingness-to-pay and negotiation power resulting from market structure.

Better incentive design

The only opportunity EDA has to capture its value is the moment a deal is made. Who makes the deal and how it is made determine two years of revenue from that contract. I am not an expert in contracting but it does not take an expert to see that the total-dollar-amount based quota distorts incentives.

Here is what big vendors could do without changing much of their existing business model:

Hire one or two experts in sales incentive design. Ideally, they have a PhD in Economics or Finance. They can do data work and some simple formal modeling. They have experience with, or at least an appreciation for, experimental and behavioral economics. They could be fresh out of graduate school, or currently working at Amazon on pricing models that never leverage much of their real training. Currently, the big vendors employ hundreds of engineers with PhDs, but only hire a few staff with a BS or MS for performance analytics and pricing. No, let an Econ PhD do the work, and they will be worth more than four bachelor’s hires.

It is best to hire them directly instead of sourcing from consulting firms, as the sales performance data needs to be reviewed constantly, and incentive schemes may need to be adjusted. However, it would be reasonable to first evaluate whether there is a real need through economic consulting groups.

I imagine EDA firms could use the same type of people for pricing.

In any case, any incentive scheme should not base the quota on the total dollar amount alone. The realized price per license needs to be incorporated into the incentive formula as well.
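
As one illustration of how both signals could enter a commission formula, here is a minimal sketch; the base rate, cap, and reference price are placeholders rather than recommended values:

  # Placeholder weights; the point is only that realized price enters the formula.
  def commission(booked_dollars, realized_price_per_license,
                 reference_price_per_license, base_rate=0.02):
      """Commission on booked dollars, scaled by how close the realized
      per-license price is to a reference (e.g., target or historical) price."""
      price_quality = min(realized_price_per_license / reference_price_per_license, 1.2)
      return booked_dollars * base_rate * price_quality

  # Same $400k booking, but deep discounting earns less than holding price.
  print(commission(400_000, 20_000, 27_500))   # 400000 * 0.02 * 0.727 ~= 5,818
  print(commission(400_000, 27_500, 27_500))   # 400000 * 0.02 * 1.0   =  8,000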

Smaller accounts

This suggestion is not about getting new and small contracts. It is about changing the situation where two or three customers make up 70% of an EDA vendor’s revenue, leaving it little negotiation power in each contract. Obviously, EDA cannot change the semiconductor industry structure, but it can change the concentration of its contracts. That is, breaking bigger contracts down into smaller ones so that each one is not as critical. Essentially, you are treating one customer as twenty customers. This is also aligned with EDA’s price discrimination strategy based on complexity and total value created. Different projects of a customer can have different levels of complexity and produce at different scales.

What’s the benefit for customers, when this can increase their contracting and negotiation costs, not to mention the vacancy time of each license (though we can expect that pricing can eventually be based on actual run time)? Clean separation of budgets allocated to different products, and accountability at the project or business-unit level.

Stay in different lanes

Stay in different lanes, or at least have separate strengths. Save the effort spent working on one’s shortcomings and divert those resources into innovation around one’s unique strengths. This also extends to vendors’ recent diversification into other areas, such as IoT and automotive. With new realms open for innovation, this could be the time to reset the mode of competition.

The above remedies are brief by design. They are not comprehensive solutions, but practical ideas meant to prompt a rethink of how EDA approaches value capture. EDA’s non-stop innovation is vital, but sustaining the field and keeping it attractive to talent requires taking value capture just as seriously.

Professor Liyue Yan is a researcher of strategy and entrepreneurship at Boston University’s Questrom School of Business. Her work examines strategic decision-making and entrepreneurial entry, with ongoing projects focusing on the Electronic Design Automation (EDA) industry.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

AI RTL Generation versus AI RTL Verification

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day