
Revolution EDA: A New EDA Mindset for a New Era

by Admin on 11-17-2025 at 6:00 am


Murat Eskiyerli, PhD, is the founder of Revolution EDA  

Modern software development environments have evolved dramatically. A developer can download Visual Studio Code, install a few plugins, and be productive within minutes. The cost? Perhaps a few hundred dollars per month for cloud development resources. Compare that to custom integrated circuit design, where $50,000 per engineer per year is the minimum entry point—and that’s before considering the learning curve, vendor lock-in, and integration headaches that have persisted since the early 1990s.  

The scripting languages that underpin major EDA tools reflect this stagnation: SKILL is a LISP variant from the 1960s; Tcl dates to 1988. More significantly, expertise in these languages is increasingly rare, creating another barrier to entry and innovation.  

This is why Revolution EDA exists, with the tagline “A new EDA mindset for a new era.”  

The Open-Source Core Philosophy  

Revolution EDA takes inspiration from the VS Code model: an open-source core platform that can be extended through a vibrant plugin ecosystem. The core is written in Python—the lingua franca of AI and scientific computing—making it immediately accessible to a new generation of designers and enabling seamless integration with modern machine learning workflows.  

The common design platform is completely free and open source. Plugin developers are free to define their own licensing models, just as they do in the VS Code ecosystem. Some can be free; others can serve as gateways to commercial services or foundry-specific offerings. Using tools like Nuitka, closed-source plugins can be distributed as binaries, protecting proprietary IP while maintaining the open-core model.

No Databases, Just Plain-Text JSON

Most EDA tools rely on binary databases—opaque blobs that can corrupt, that clash with modern version control systems like Git, and that require constant export-import cycles before AI tools can understand them. Revolution EDA stores all design data as JSON: configuration, cellviews, everything.

This isn’t just a technical choice; it’s strategic. JSON is ubiquitous, and LLMs are trained on massive amounts of JSON data. As generative AI becomes essential to IC design workflows, Revolution EDA designs are natively AI-readable and AI-writable. No translation layer, no data format friction. Your designs can be inspected, modified, and version-controlled using standard text editors and Git workflows.  
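As a quick illustration of what "natively AI-readable" means in practice, here is a round trip through Python's standard json module. The cellview record below is hypothetical—the actual Revolution EDA schema may differ—but the point stands: the text that Git diffs line by line is exactly the text an LLM or a standard editor can read directly.

```python
import json

# Hypothetical cellview record -- the actual Revolution EDA schema may differ.
cellview = {
    "library": "analogLib",
    "cell": "diffpair",
    "view": "schematic",
    "instances": [
        {"name": "M1", "master": "nmos", "params": {"w": "2u", "l": "180n"}},
        {"name": "M2", "master": "nmos", "params": {"w": "2u", "l": "180n"}},
    ],
}

# Round-trip through plain text: no binary export-import cycle needed.
text = json.dumps(cellview, indent=2, sort_keys=True)
restored = json.loads(text)
assert restored == cellview
```

No translation layer is involved at any point: the on-disk format and the human-readable format are the same thing.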

Core Capabilities  

Revolution EDA provides a complete front-end design environment with hierarchical schematic and layout editors. Key features include:  

Schematic capabilities: Advanced symbol creation with instance parameters that can be Python functions for dynamic calculation. Symbols can be auto-generated from schematics and Verilog-A modules. Configuration views are editable with the Config Editor, enabling flexible netlisting without modifying designs.

Layout editor: Full hierarchical layout with rectangles, polygons, paths, pins, labels, vias, and Python-based parametric cells. Layer management, rulers, and GDS import/export are built in.  

Python integration: Labels can reference Python functions for sophisticated instance callbacks. Parametric layout cells are also written in Python without the overhead of proprietary solutions.  

Library management: Familiar browser interface for creating, organizing, and managing libraries, cells, and views (schematic, symbol, layout, config, spice, veriloga).  
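To make the Python angle concrete, here is a small sketch of what parameter callbacks and parametric cells look like in spirit. The function names and signatures are illustrative assumptions, not the actual Revolution EDA API:

```python
# Illustrative sketch only -- names and signatures are assumptions,
# not the actual Revolution EDA API.

# 1. An instance parameter as a Python function: derive a resistor
#    width from a current budget (an electromigration-style limit).
def resistor_width(current_ma: float, j_max_ma_per_um: float = 0.5) -> str:
    width_um = current_ma / j_max_ma_per_um
    return f"{width_um:.2f}u"

# 2. A parametric layout cell: emit gate-finger rectangles as
#    (layer, x0, y0, x1, y1) tuples from a handful of parameters.
def gate_fingers(nf: int, gate_l: float, gate_w: float, pitch: float):
    return [("poly", i * pitch, 0.0, i * pitch + gate_l, gate_w)
            for i in range(nf)]

assert resistor_width(1.0) == "2.00u"
assert len(gate_fingers(nf=4, gate_l=0.18, gate_w=2.0, pitch=0.5)) == 4
```

Because both the callback and the pcell are ordinary Python, they can be unit-tested, versioned in Git, and reused without a proprietary scripting runtime.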

Two existing plugins extend the core functionality. Revedasim provides point-and-click simulation using Xyce, with plans to support additional analog and mixed-signal simulators. Revedaplot delivers visualization of simulation data—it can plot a half-gigabyte data file in under three seconds.  

The Path Forward: PDKs and Foundry Partnerships  

Today, front-end PDKs are available for IHP and GlobalFoundries OpenPDKs. The next release will integrate DRC/LVS capabilities through KLayout and Netgen. Users are already requesting Calibre integration, which would complete a foundry-acceptable design flow.  

Here’s what’s crucial to understand: Revolution EDA being open-source does not require PDKs to be open-source. Like plugins, PDKs can be offered as binaries or encrypted files, giving foundries complete IP protection. We’re actively seeking partnerships with foundries to develop and validate commercial PDKs.  

This is where the opportunity lies. For foundries, supporting Revolution EDA means:  

Enabling a new generation of designers and startups who are currently priced out of custom IC design  

Gaining a platform that’s natively compatible with AI-driven design flows  

Participating in an ecosystem rather than maintaining yet another proprietary tool integration  

Offering PDKs as commercial products within the plugin model  

For design engineers and nascent startups working on analog/mixed-signal designs, it means breaking free from annual EDA costs that run to six figures or more, while still having a path to foundry-quality designs.

Production Reality  

Revolution EDA is in active development. The core platform is stable enough for early adopters to explore and experiment. The plugin ecosystem is nascent—think of this as the VS Code of 2015, not 2024. What we’re offering isn’t a drop-in replacement for established flows yet, but rather an invitation to help shape what custom IC design tools should become.  

The question isn’t whether traditional EDA vendors will continue to serve large design houses—they will. The question is whether the next generation of IC innovation will come from teams that can afford $50K+ per seat, or whether we’ll enable orders of magnitude more designers to participate in custom silicon design.  

Try It Yourself  

Revolution EDA runs on Windows and Linux. If you’re already using Python, installation is simple: pip install revolution-eda. Binaries are available for download, and the complete source code is on GitHub.  

We’re looking for early adopters, plugin developers, and most importantly, foundry partners willing to develop commercial PDKs. If you’re curious about what modern IC design tools could be, or if you’re interested in enabling the next wave of custom silicon innovation, visit https://reveda.eu/contact.  

The EDA industry has operated on the same fundamental model for three decades. Revolution EDA is asking a simple question: what if we started fresh, with modern languages, open architectures, and AI-native formats?  

The revolution won’t happen overnight. But it has to start somewhere.  

Also Read:

Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires

WEBINAR: Revolutionizing Electrical Verification in IC Design

Hierarchically defining bump and pin regions overcomes 3D IC complexity


Self-Aligned Spacer Patterning for Minimum Pitch Metal in DRAM

by Fred Chen on 11-16-2025 at 10:00 am


The patterning of features outside a DRAM cell array can be just as challenging as those within the array itself [1]. The array contains features which are densely packed, but regularly arranged. Outside the array, on the other hand, the minimum pitch features, such as the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits, are meandering in appearance, and their pitch varies over a range. Stitched double patterning has been the method of choice but will not be sufficient when the minimum pitch drops below 40 nm [2,3]. Moreover, below 40 nm pitch, EUV stochastic defectivity becomes an issue [4,5].

Self-aligned spacer patterning may be applied to the minimum pitch metal features in the DRAM periphery [6]. The sense amplifier and sub-wordline driver patterns are typically characterized by islands with minimum pitch lines meandering around them (Figure 1).

Figure 1. Examples of DRAM lowest metal layer patterns outside the array.

The wrapping of the lines around the islands does suggest similarity to the spacer wrapping around the core mandrel feature. Guided by the outline of the metal line directly wrapping around the islands, the appropriate mandrel features are shown in Figure 2.

Figure 2. Core and spacer feature for the corresponding patterns in Figure 1. The spacer is deposited over the core features and then etched back, leaving only the portions on the sidewall of the core features.

The core is then removed, leaving the spacer (Figure 3).

Figure 3. Core removal (from Figure 2), leaving the spacer.

The spacer acts as a mandrel for a second spacer, followed by filling of the remaining gaps (Figure 4). When necessary, cuts are used.

Figure 4. Completion of the patterning with second spacer and gap-fill, plus any necessary line cuts. The second spacer is deposited over the first spacer and etched back, leaving only the portions on the sidewall of the first spacer. Then the gap-fill material is deposited and etched back or otherwise planarized. The cuts are etch masks for blocking metal trench etching or direct breaks etched into the metal lines.

The patterning flow is similar to the self-aligned quadruple patterning (SAQP) used for pitch quartering [7]. Thus, for minimum metal pitches down to ~37 nm, this approach can be expected to allow DUV lithography to be used without the heavy processing burden of multiple litho-etch (LE) steps.
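The pitch arithmetic is easy to check. The ~80 nm figure used below for the single-exposure DUV immersion pitch limit is a rough ballpark assumption, not a number from this article:

```python
# Pitch quartering: each spacer step halves the pitch, so two spacer
# steps divide the initial mandrel pitch by four.
target_pitch_nm = 37
mandrel_pitch_nm = target_pitch_nm * 4
assert mandrel_pitch_nm == 148

# A 148 nm mandrel pitch sits comfortably above the ~80 nm (assumed)
# single-exposure limit of DUV immersion lithography, so the mandrel
# can be printed in one pass, without extra litho-etch steps.
assert mandrel_pitch_nm > 80
```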

References

[1] F. Chen, Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM.

[2] F. Chen, Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery.

[3] S-M. Kim et al., “Issues and Challenges of Double Patterning Lithography in DRAM,” Proc. SPIE 6520, 65200H (2007).

[4] Y. Li, Q. Wu, Y. Zhao, “A Simulation Study for Typical Design Rule Patterns and Stochastic Printing Failures in a 5 nm Logic Process with EUV Lithography,” CSTIC 2020.

[5] F. Chen, Predicting Stochastic EUV Defect Density with Electron Noise and Resist Blur Models.

[6] F. Chen, Triple Spacer Patterning for DRAM Periphery Metal.

[7] H. Yaegashi et al., “Overview: Continuous evolution on double-patterning process,” Proc. SPIE 8325, 83250B (2012); DOI: 10.1117/12.915695.

Also Read:

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

An Insight into Building Quantum Computers

I Have Seen the Future with ChipAgents Autonomous Root Cause Analysis


CEO Interview with Dr. Peng Zou of PowerLattice

by Daniel Nenni on 11-16-2025 at 8:00 am

Dr. Peng Zou, President & CEO, Co-Founder

Dr. Zou is one of the industry’s leading experts in power delivery for high-performance processors. Before founding PowerLattice, he held technical leadership roles at Qualcomm/NUVIA, Huawei and Intel, where he led multidisciplinary teams advancing integrated voltage regulator technologies across magnetic materials, circuit design and system architecture. Recognizing that the “power wall” had become a major limiting factor for AI performance, Dr. Zou set out to drive a step-change in power delivery and founded PowerLattice. He holds 15 U.S. patents with additional patents pending.

Tell us about your company.
PowerLattice is reimagining how power is delivered in the world’s most demanding compute systems. We’ve developed the first power delivery chiplet that brings power directly into the processor package—improving performance, efficiency, and reliability. The result is a fundamental shift in how high-performance chips get powered, paving the way for the next generation of AI and advanced computing. We have silicon in hand and are now sampling to customers, so we decided it was time to emerge from stealth.

What problems are you solving?
AI accelerators and GPUs are pushing past 2 kW per chip, straining data centers that already consume as much energy as mid-size cities. Conventional power delivery forces very high electrical current to travel long, resistive paths before reaching the processor, wasting energy and limiting performance. The inefficiency and heat losses from moving power across a motherboard are now a hard limit—the “AI power wall.”

PowerLattice eliminates that barrier by moving power delivery directly into the processor package, right next to the compute die. We have also developed circuit innovations and technologies that deliver ultra-fast response times for precise voltage regulation, a capability that is crucial for processor performance. This approach reduces total compute power needs by more than 50%, effectively doubling performance.

What application areas are your strongest?
All segments from hyperscale data centers to AI chipmakers and edge compute stand to benefit from our technology. But our initial focus is on AI since it’s the AI chipmakers who are hitting the power wall the most. Their chips are pushing the limits of power density and efficiency, and our solution directly tackles those constraints—delivering higher performance per watt and unlocking scalability for the next wave of AI systems.

What keeps your customers up at night?
For most of our customers, the biggest challenge isn’t compute, it’s power. They’re reaching the physical limits of how much energy they can deliver to a chip and are having to design around that. Reliability is another major concern; as AI models scale, even micro-instabilities in power delivery can ripple across entire systems.

To address this, we’ve built a voltage-stabilizing layer directly into our chiplet design. When GPUs are pushed to their limits, voltage fluctuations can shorten their lifespan and compromise reliability. Our technology keeps voltage steady at the source, extending the usable life of GPUs and ensuring consistent performance under extreme workloads.

What does the competitive landscape look like and how do you differentiate?
Our biggest competitors are those providing legacy solutions – and this is exactly why we see such a big opportunity to disrupt the market. Traditional power delivery solutions were never designed for this era of compute. They use large, discrete voltage regulation modules (VRMs) that sit on the motherboard and regulate power externally. The result is wasted energy and also voltage fluctuations.

Our approach brings voltage regulation directly onto the wafer, integrating inductors and passives at the silicon level. It’s a fundamentally different approach. By bringing power directly into the processor package, we can reduce compute power needs by more than 50%.

Our chiplet-based approach integrates easily into existing SoC designs and is also very configurable, so we’re seeing a lot of strong interest from customers.

What new features or technology are you working on?
Right now we’re focusing on design wins with major customers and scaling through key manufacturing milestones. We’ve proven the silicon — now it’s about ramping and driving adoption. Ultimately, our goal is to make power delivery as programmable and scalable as compute itself.

How do customers normally engage with your company?
We work closely with semiconductor vendors, hyperscalers, and system integrators.  Our model is highly collaborative because the integration of power and compute is no longer optional.

Also Read:

CEO Interview with Roy Barnes of TPC

CEO Interview with Mr. Shoichi Teshiba of Macnica ATD

CEO Interview with Sanjive Agarwala of EuQlid Inc.


Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires

by Daniel Nenni on 11-14-2025 at 10:00 am

Daniel is joined by Pedro Pires, a product and technology leader with a strong background in IP and data management within the EDA industry. Currently a product manager at Keysight Technologies, he drives the roadmap for AI-driven data management solutions. Pedro’s career spans roles in software engineering and data science at Cadence, ClioSoft, and Keysight.

In this broad view of the impact of data management across the industry, Dan explores several trends with Pedro. Current data management challenges are discussed, along with an assessment of how Keysight Design Data Management (DDM) (SOS) addresses these challenges. Requirements for security, data organization and performance are all touched on. The relative benefits of a tool like DDM (SOS) compared to open-source implementations are also covered.

Pedro presents many details of real-world customer usage of DDM (SOS). He also assesses what impact tools such as this will have on future projects, including the expanding use of AI.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Silicon Catalyst on the Road to $1 Trillion Industry

by Daniel Nenni on 11-14-2025 at 6:00 am


There were quite a few announcements at the Silicon Catalyst event at the Computer History Museum last week. The event itself was eventful, with semiconductor legends in the audience and on the stage. First let’s talk about the announcements Silicon Catalyst made, then we will talk about the event itself.

In addition to expanding in Japan and Australia, Silicon Catalyst added more companies to their industry-leading semiconductor accelerator:

Silicon Catalyst Announces Seven Newly Admitted Companies to Semiconductor Industry Accelerator.

Silicon Catalyst, the world’s only incubator focused exclusively on accelerating semiconductor solutions, continues to welcome innovative startups into its prestigious portfolio. The announcement, made on November 8, 2025, underscores the incubator’s mission to nurture next-generation chip design and fabrication technologies.

The newly admitted companies span diverse applications, from AI-optimized processors to advanced sensor systems and energy-efficient power management solutions. Each startup gains access to Silicon Catalyst’s extensive ecosystem, including in-kind tools from industry leaders like Arm, TSMC, and Synopsys, as well as mentorship from seasoned semiconductor executives.

“These companies represent the cutting edge of hardware innovation,” said Pete Rodriguez, CEO of Silicon Catalyst. “Their technologies address critical challenges in AI, IoT, automotive, and beyond.” The selected startups underwent a rigorous screening process, evaluating technical merit, market potential, and team strength.

Since its inception in 2015, Silicon Catalyst Portfolio Companies have raised over $1B in VC funding and in excess of $200M in grants. This latest group brings fresh intellectual property and novel architectures poised to shape the future of computing.

Bottom line: With semiconductor demand surging, Silicon Catalyst continues to play a pivotal role in bridging innovation and commercialization.

One of the topics discussed during the panel session at the event was when the semiconductor industry will hit one trillion dollars, hence the name of the event. One of the reasons the semiconductor industry will hit one trillion dollars is start-up companies like the ones in the SCI portfolio, many of which I have done CEO interviews and podcasts with. I also remember way back when Arm and TSMC were start-ups 30+ years ago. They have both taken collaboration to a new level with massive ecosystems surrounding them.

Yes, the semiconductor industry will hit $1 trillion. This discussion has been ongoing for several years, but lately the date has been pulled in. My guess was 2030, but the panel now says it could be sooner due to the AI surge and the coming of quantum computing.

The event panel itself was filled with semiconductor luminaries, Dr. Ravi Subramanian for example. I worked for Ravi at Berkeley Design Automation as an advisor and spent time with him in Taiwan. He would routinely give master classes on how to develop customer and partner relationships based on trust, respect, and technology. I also worked with Solido Design and Fractal Technologies, both were acquired by Ravi’s team at Siemens EDA. So yes, Ravi is a good example of the many industry luminaries that collaborate with Silicon Catalyst.

The videos from the event are available here. Take a look at the panel discussion and you will see Ravi in action.

Silicon Catalyst is also very active in the semiconductor ecosystem; they are at just about every conference I attend here in Silicon Valley. The next one is the Quantum-to-Business Conference, December 9-11 at the Santa Clara Convention Center. If you are interested, there is a Silicon Catalyst discount code: SC-20-SV for 20% off admission. I hope to see you there.

Bottom line: The best talent attracts the best talent and the Silicon Catalyst ecosystem is full of the best talent, absolutely.

Also Read:

CEO Interview with Adam Khan of Diamond Quanta

CEO Interview with Andrew Skafel of Edgewater Wireless

Cutting Through the Fog: Hype versus Reality in Emerging Technologies


Hierarchically defining bump and pin regions overcomes 3D IC complexity

by Admin on 11-13-2025 at 8:00 am


By Todd Burkholder and Per Viklund, Siemens EDA

The landscape of advanced IC packaging is rapidly evolving, driven by the imperative to support innovation on increasingly complex and high-capacity products. The broad industry trend toward heterogeneous integration of diverse die and chiplets into advanced semiconductor package systems has led to an explosion in device complexity and pin counts.

Thus, the adoption of chiplets is accelerating at an unprecedented pace. Chiplets offer a modular solution, providing smaller, convenient building blocks that communicate via standardized interfaces, thereby enabling more flexible and cost-effective system integration.

The complexity of packages themselves is experiencing explosive growth. Package pin counts have surged from approximately 100,000 or fewer pins just a few years ago to upwards of 50 million pins in contemporary designs. Projections indicate a potential tenfold increase in these numbers within the next few years, creating a profound impact across every facet of the semiconductor ecosystem.

The sheer scale of this complexity far exceeds the capacity of any single human designer to manage effectively. A solution capable of abstracting this complexity into manageable portions is indispensable. This is precisely where hierarchical device planning becomes paramount. It represents a methodology and a suite of technologies designed to decompose overwhelming complexity into digestible, manageable segments.

A significant challenge lies in optimizing smaller functional areas within a package and subsequently reusing these optimized blocks in derivative designs. Hierarchical device planning directly addresses this by integrating established hierarchical design methodology techniques—long characteristic of chip design—into the realm of advanced IC packaging. This approach is crucial for managing the intricate interface connectivity inherent in package devices composed of numerous smaller building blocks.

However, before fully embracing a hierarchical design implementation strategy, it is crucial to acknowledge the unique challenges of IC packaging. A key challenge is that, at the top level, hierarchical floorplans require a unique set of signals for each instance of a placed building block.

Out with the old, in with the new

Designing viable bump patterns for chiplets and interposers involves managing a multitude of signals, interface I/Os, and power and ground connections. While managing perhaps 100,000 pins, as was common some years ago, was challenging but generally feasible, albeit prone to errors, the current reality of millions or even 50 million pins renders such manual approaches absolutely unworkable. Consequently, traditional assembly and planning methodologies, which model large-capacity pin devices like high-performance computing die as single, flat entities, are no longer sufficient. These flat approaches demand extraordinary designer skill to manage the connectivity and topological relationships of all functional blocks.

For packaging designs of lower complexity, and even sometimes for reasonably complex current designs, the traditional tool of choice has often been spreadsheets, particularly Microsoft Excel. While spreadsheets may suffice for small designs, they become woefully inadequate when dealing with multiple chiplets, their intricate interfaces, and the presence of interposers or silicon bridges. Furthermore, in many advanced packaging scenarios, some components are co-designed concurrently with the package itself, meaning they are in a constant state of flux. The sheer volume of data and the imperative to maintain synchronization across all these dynamic elements make these manual methods entirely unviable.

The consequences of errors in package assembly can be catastrophic. Historically, there have been instances where such errors have led to astronomical financial repercussions, even resulting in the demise of entire companies. The costs associated with a failed package, especially a large, complex one, are immense. The long-term consequence, assuming a company survives such a setback, is an invaluable—albeit painful—lesson learned, driving a commitment to never repeat the mistake. This underscores the critical need for robust, error-preventing methodologies and tools from the outset.

Modern advanced packaging demands solutions capable of managing the entire package assembly as a unified entity. This includes robust capabilities for tracking connectivity throughout the complete package assembly and providing comprehensive, full-package assembly verification in three dimensions. Given that all these advanced packages inherently involve some form of 3D integration, validating their structural and electrical integrity is paramount. It is crucial to remember that a package now comprises multiple designs stacked together—chiplets, interposers, silicon bridges, and other elements. Relying on traditional, disconnected methods for such complex assemblies introduces an unacceptably high risk of failure. This necessitates a transition to a more synchronized and integrated design methodology.

A new paradigm for managing IC complexity

This is precisely where hierarchical device planning introduces a new paradigm. The core innovation lies in the ability to hierarchically define parameterized regions of component pins. Instead of grappling with the minutiae of every single pin and its connectivity, designers can now work with these abstracted, hierarchically defined regions. This allows them to plan, design, analyze, and optimize the overall package layout at a higher level of abstraction, deferring the detailed pin-level considerations until they are genuinely necessary.

A significant advantage of this approach is the automatic synthesis of all pins according to the parameters set within these defined regions. Package designers are intimately familiar with the frequent design changes that occur throughout the development flow. Traditionally, implementing each change individually was a time-consuming and error-prone process. With hierarchical device planning, designers can simply modify the relevant parameters of a region, and the system automatically updates the circuit. This capability can save days, or even weeks, of design effort, representing a critical leap in efficiency and responsiveness to design iterations.
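The parameter-driven synthesis idea can be shown with a toy model. The region dictionary and function below are an illustration of the concept only, not the Innovator 3D IC data model:

```python
# Toy model of region-based pin synthesis: a region is described by a
# few parameters, and the individual bumps are generated from them.
def synthesize_bumps(region):
    bumps = []
    for r in range(region["rows"]):
        for c in range(region["cols"]):
            bumps.append({
                "name": f'{region["name"]}_{r}_{c}',
                "x": region["x0"] + c * region["pitch"],
                "y": region["y0"] + r * region["pitch"],
            })
    return bumps

# Hypothetical interface region: 130 um bump pitch, coordinates in mm.
ddr_region = {"name": "DDR_IF", "x0": 0.0, "y0": 0.0,
              "rows": 3, "cols": 4, "pitch": 0.13}
bumps = synthesize_bumps(ddr_region)
assert len(bumps) == 12

# A design change means editing one parameter, not thousands of pins:
ddr_region["cols"] = 8
assert len(synthesize_bumps(ddr_region)) == 24
```

Scaled up to real designs, this is the efficiency win the text describes: the designer edits a handful of region parameters instead of touching millions of individual bumps.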

Figure 1. Connectivity in a hierarchical IC package floorplan, showing that bumps within the sub-devices are represented at the top level.

Enabling 3D IC solutions

The trajectory of IC packaging development mandates the adoption of appropriate methodologies and tools that directly address the designer’s evolving challenges. Foremost among these is the need to shield designers from being overstretched by complexity, a common outcome when tools fail to provide adequate support. Designers require assistance to operate at a practical abstraction level—one that renders the design manageable. Presenting a designer with 50 million pins without context offers no actionable insight into optimizing the design. Instead, tools must facilitate a higher-level view that guides optimal design decisions.

Furthermore, these solutions must provide access to multi-domain analysis very early in the design cycle. This includes critical analyses such as signal integrity (SI), power integrity (PI), thermal analysis, and thermal stress analysis. Performing these analyses proactively, long before the package layout is finalized, is essential for driving early design decisions and ensuring the correct path is taken when choices arise. Discovering major issues post-layout is extraordinarily costly, often necessitating a complete package redesign—a luxury rarely afforded by tight development schedules. Early analysis is therefore indispensable.

Siemens’ Innovator 3D IC portfolio solution exemplifies this integrated approach, supporting designers from initial planning and optimization through detailed analysis and package layout.

Figure 2. Innovator 3D IC solution suite cockpit.

A critical component of this solution is robust work-in-progress data management. The sheer volume of data involved in a modern package design demands meticulous tracking to ensure the correct versions of all files are utilized. Forgetting to import an updated Verilog file, for instance, can lead to the fabrication of an incorrect package. Automated tracking and error detection mechanisms are vital to mitigate the numerous potential points of failure. By integrating these capabilities within a unified, AI-infused user experience, solutions like the Innovator 3D IC solution suite are intuitive and efficient for designers to adopt and utilize.

Package designers must leverage every available tool to address the significant device complexity and the explosion in pin counts characteristic of today’s IC packaging designs. In support of this, a concerted effort is underway to develop new solutions, standards, and methodologies. For instance, new interface standards, such as Universal Chiplet Interconnect Express (UCIe), Bunch of Wires (BOW), and Advanced Interface Specification (AIS), are emerging to standardize communication between chiplets. Concurrently, advanced design methodologies and tools are being developed to assist design teams and facilitate seamless interaction with foundries, substrate fabricators, and OSAT providers.

It is crucial for all professionals involved in package design to recognize that effective solutions are available. While many designers may perceive their specific challenges as unique, in most cases the underlying problems are shared across the industry. Fortunately, this leads to a common set of solutions. By actively seeking out and adopting these advanced tools and methodologies, designers can more effectively tackle the complexities of 3D ICs and heterogeneous integration, ensuring the successful realization of next-generation electronic systems.

Contact Siemens EDA

Also Read:

A Compelling Differentiator in OEM Product Design

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Visualizing hidden parasitic effects in advanced IC design 


CDC Verification for Safety-Critical Designs – What You Need to Know

by Mike Gianfagna on 11-13-2025 at 6:00 am

CDC Verification for Safety-Critical Designs – What You Need to Know

Verification is always a top priority for any chip project. Re-spins result in lost time-to-market and significant cost overruns. Chip bugs that make it to the field present another level of lost revenue, lost brand confidence and potential costly litigation. If the design is part of the avionics or control for an aircraft, the stakes go up, way up. There are substantial rules and guidelines to be adhered to for this class of design. And some of those rules have evolved over decades, making interpretation and adherence challenging.

A recent white paper from Siemens Digital Industries Software examines this class of design for a particularly vexing design bug – clock domain crossing (CDC) issues and the resultant metastability. The white paper does a great job explaining the subtleties of CDC bugs and how to address those issues against the rigors of safety-critical rules for airborne systems. If your chip is destined for airborne use, this white paper is a must-read. A link is coming, but first I’d like to provide an overview of CDC verification for safety-critical designs – what you need to know.

CDC Challenges

The white paper does a great job explaining how CDC bugs can cause problems with a chip design, and in particular intermittent problems. In safety-critical applications, an intermittent problem can be difficult to find, and bugs that make it to silicon can result in catastrophic consequences.

To summarize the issue, we need to examine the impact of metastability on a design. This term refers to what happens in digital circuits when clock and data inputs of a flip-flop change at approximately the same time. When this occurs, the flip-flop output can oscillate and settle to a random value. This metastability will lead to incorrect design functionality such as data loss or data corruption on CDC paths. The more asynchronous clock domains there are in a design, the worse the problem can become. And in today’s highly integrated and concurrent designs, the number of independent clock domains in a typical device is growing.
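The quantitative side of this argument is the standard synchronizer MTBF estimate: the failure rate falls exponentially with the resolution time granted before the next capturing clock edge, which is why adding synchronizer stages helps so dramatically. A minimal sketch, using illustrative device constants (the function and numbers below are not from the white paper):

```python
import math

def synchronizer_mtbf(t_resolve, tau, t_window, f_clk, f_data):
    """Mean time between metastability failures, in seconds.

    t_resolve: time left for the flop output to resolve (s)
    tau:       flop regeneration time constant (s)
    t_window:  metastability capture window (s)
    f_clk:     receiving-domain clock frequency (Hz)
    f_data:    rate of data transitions on the crossing (Hz)
    """
    return math.exp(t_resolve / tau) / (t_window * f_clk * f_data)

# Illustrative 1 GHz receive clock: a single flop leaves ~0.8 ns to resolve;
# a second synchronizer flop grants a full extra cycle (~1.8 ns total).
one_stage = synchronizer_mtbf(0.8e-9, 20e-12, 50e-12, 1e9, 100e6)
two_stage = synchronizer_mtbf(1.8e-9, 20e-12, 50e-12, 1e9, 100e6)
print(two_stage / one_stage)  # improvement factor of exp(1e-9 / 20e-12) = e^50
```

The exponential dependence on resolution time is why each added synchronizer stage multiplies MTBF by an astronomical factor, turning an intermittent field failure into a practically unobservable event.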

The white paper presents many examples that illustrate the types of problems to look for and how to correct them. It points out that this is a serious problem in safety-critical designs because it frequently causes chips to exhibit intermittent failures. These failures generally go undetected by both simulation (which tests a chip’s logic functions) and static timing analysis (which checks timing only within a single clock domain).

The paper goes on to explain that a typical verification methodology simply does not consider potential bugs from clock-domain crossing paths. Thus, if CDC paths are not explicitly verified, CDC bugs are typically identified in the actual hardware device in the field, a very bad outcome for safety-critical designs.

Design Assurance – the Good and the Bad

Another important part of this story is the set of guidelines that must be adhered to when sourcing safety-critical airborne devices. The white paper describes document RTCA/DO-254 “Design Assurance Guidance for Airborne Electronic Hardware” in detail. This specification is used by the Federal Aviation Administration (FAA), European Union Aviation Safety Agency (EASA), and other aviation authorities to ensure that complex electronic hardware used in avionics works reliably as specified, avoiding faulty operation and potential air disasters.

This goal is clearly important. One of the challenges of implementing a methodology to achieve the goal is the size and scope of the DO-254 spec. The FAA began enforcing it in 2005. The document is modeled after earlier specifications for certifying software, which were originally published over 45 years ago.

So, there is a lot of information in this document, both old and new. All in-flight hardware (FPGA or ASIC designs) must now comply with DO-254, and correct interpretation of the requirements and implementation in a production design flow presents challenges.

Digging deeper, the white paper explains that DO-254 projects assign a design assurance level (DAL) of A through E. The level corresponds to the criticality of a resulting failure. A failure in a level A design would result in catastrophic conditions (such as the plane crashing), while a failure in a level E design might mean that some passengers could be subject to minor inconvenience. Level A (catastrophic) and level B (hazardous/severe/major) projects must not only follow DO-254 processes but must also address additional safety concerns.

How to Automate CDC Verification

The white paper then presents a detailed overview of how to build a methodology that will conform to DO-254 requirements and deliver reliable, safe chips. It explains that a comprehensive CDC verification solution must do four distinct things:

  1. Perform a structural analysis
  2. Verify transfer protocols
  3. Globally check for reconvergence
  4. Implement netlist glitch analysis
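To make the first of these steps concrete, a structural analysis essentially walks the netlist looking for registers whose data arrives from a different clock domain without passing through a synchronizer. A toy sketch of the idea, using an invented netlist model (real tools such as Questa CDC operate on full RTL, not this simplified dictionary):

```python
# Toy netlist: each register has a clock domain and a list of fanin registers.
# A crossing is treated as safe here only if the receiving register is part
# of a synchronizer, modeled by the sync=True tag.
regs = {
    "tx_data":  {"clk": "clkA", "fanin": [],          "sync": False},
    "rx_meta":  {"clk": "clkB", "fanin": ["tx_data"], "sync": True},
    "rx_data":  {"clk": "clkB", "fanin": ["rx_meta"], "sync": True},
    "bad_capt": {"clk": "clkB", "fanin": ["tx_data"], "sync": False},
}

def unsynchronized_crossings(regs):
    """Return (driver, receiver) pairs crossing domains without a synchronizer."""
    violations = []
    for name, reg in regs.items():
        for src in reg["fanin"]:
            if regs[src]["clk"] != reg["clk"] and not reg["sync"]:
                violations.append((src, name))
    return violations

print(unsynchronized_crossings(regs))  # [('tx_data', 'bad_capt')]
```

The remaining steps (protocol checks, reconvergence analysis, glitch analysis) go well beyond this structural pass, which is precisely why the paper argues for a dedicated tool rather than ad-hoc scripting.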

Details of these tasks are presented, as well as some of the unique capabilities of the Siemens Questa CDC solution. The white paper explains that many companies have recognized the benefits of Questa CDC and have adopted it as an added design assurance strategy as part of their verification arsenal. Specific details are presented for several real commercial implementations using Questa CDC. These examples cover many diverse projects:

  • U.S.-based storage/networking company
  • Large global computer company
  • Large Japanese consumer products company
  • U.S.-based wireless communications provider
  • Maker of military space systems
  • Large aerospace technology company
  • Defense and aerospace systems supplier

The white paper goes on to explain that one of the key aspects of the DO-254 process is to determine that the tools used to create and verify designs are working properly. The process to ensure this is called “tool assessment.”

There are many dimensions to this process, and substantial details about how to achieve a successful tool assessment are presented. The diagram below provides an overall flow of the process.

Design and verification tool assessment and qualification flow diagram

A tool vendor cannot assess or qualify their own tools, and the FAA does not provide blanket approval for use of any tools in DO-254 projects.

This white paper does provide valuable details and suggestions for getting through the assessment process for Questa CDC as easily as possible.

To Learn More

If you’re involved in the development of safety-critical electronics, this white paper provides substantial value regarding how to minimize CDC risks and how to build a compliant design flow.

The information presented is detailed, clear and actionable. And there is an Appendix with many additional and useful references. You can get your copy of Automating Clock-Domain Crossing Verification for DO-254 (and Other Safety-Critical) Designs here. 

You can also learn more about Questa CDC here. And that’s CDC verification for safety-critical designs – what you need to know.

Also Read:

A Compelling Differentiator in OEM Product Design

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Visualizing hidden parasitic effects in advanced IC design 


Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots

Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots
by Daniel Nenni on 11-12-2025 at 10:00 am

Ceva WiFi 7 1x1 Client IP

In the rapidly evolving landscape of connected devices, where artificial intelligence meets the physical world, Ceva has unveiled a groundbreaking solution: the Ceva-Waves Wi-Fi 7 1×1 client IP. Announced on October 21, 2025, this IP core is designed to supercharge AI-enabled IoT devices and pioneering physical AI systems, enabling them to sense, interpret, and act with unprecedented responsiveness. As IoT ecosystems expand, projected to encompass over 30 billion devices by 2030, reliable, low-latency connectivity becomes paramount. Ceva’s innovation addresses this need head-on, leveraging the IEEE 802.11be standard to deliver ultra-high performance in compact, power-constrained form factors.

At its core, the Ceva-Waves Wi-Fi 7 1×1 client IP is a turnkey solution tailored for client-side applications, such as wearables, smart home gadgets, security cameras, and industrial sensors. Unlike bulkier access point implementations, this 1×1 configuration (one spatial stream for transmit and receive) optimizes for cost-sensitive, battery-powered designs, making it ideal for mass-market adoption. Key Wi-Fi 7 features baked into the IP include Multi-Link Operation, which allows simultaneous data transmission across multiple frequency bands (2.4 GHz, 5 GHz, and 6 GHz) for seamless aggregation and reduced interference; 4096-QAM modulation for 20% higher throughput than Wi-Fi 6; and enhanced puncturing to dodge congested channels dynamically. These capabilities slash latency to sub-millisecond levels, boost peak speeds beyond 5 Gbps, and enhance reliability in dense environments, crucial for real-time applications like augmented reality glasses or autonomous drones.
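The 20% throughput figure follows directly from the modulation orders: each 4096-QAM symbol carries log2(4096) = 12 bits, where Wi-Fi 6’s 1024-QAM carries 10. A quick arithmetic check:

```python
import math

bits_wifi7 = math.log2(4096)  # 12 bits per symbol (4096-QAM)
bits_wifi6 = math.log2(1024)  # 10 bits per symbol (1024-QAM)
gain = bits_wifi7 / bits_wifi6 - 1
print(f"per-symbol throughput gain: {gain:.0%}")  # 20%
```

This is the per-symbol gain under identical channel conditions; realized speeds also depend on bandwidth, coding rate, and Multi-Link Operation.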

What sets this IP apart is its synergy with Ceva’s broader Smart Edge portfolio, particularly the NeuPro family of NPUs. When paired with Wi-Fi 7 connectivity, these NPUs empower devices to process sensor data, run inference models, and make decisions locally at the edge. This on-device intelligence minimizes cloud dependency, fortifying data privacy by keeping sensitive information, like health metrics from a fitness tracker, off remote servers. It also extends battery life by up to 30% through efficient power management and reduces operational costs by curbing data transmission volumes. In essence, Ceva’s solution transforms passive IoT nodes into proactive physical AI agents that perceive their surroundings (via cameras or microphones), reason through AI algorithms, and act autonomously, whether adjusting a smart thermostat based on occupancy or alerting factory workers to hazards.

Tal Shalev, Vice President and General Manager of Ceva’s Wireless IoT Business Unit, emphasized the strategic timing: “Wi-Fi 7’s breakthroughs in speed, resilience, and latency are driving rapid adoption. Our turnkey solution helps customers cut complexity and time-to-market, delivering smarter, more responsive IoT experiences powered by edge intelligence.” Already licensed by multiple leading semiconductor firms, the IP has seen swift uptake, underscoring its market readiness. Industry analysts echo this enthusiasm; Andrew Zignani, Senior Research Director at ABI Research, notes, “Wi-Fi 7 is set to transform IoT by enabling the low-latency, high-throughput connectivity required for real-time edge intelligence and Physical AI. Solutions like Ceva’s are critical to bringing these capabilities into cost-sensitive, battery-powered devices.”

The implications ripple across sectors. In consumer wearables, imagine earbuds that not only stream audio but also perform real-time voice-to-text translation without lag. Smart homes could orchestrate ecosystems where lights, locks, and appliances collaborate via mesh networks, anticipating user needs through predictive AI. Industrial IoT benefits from resilient links in harsh environments, enabling predictive maintenance that prevents downtime. For emerging physical AI—think robotic companions or self-navigating vacuums—Wi-Fi 7 provides the deterministic backbone for multi-device orchestration, fostering collaborative intelligence akin to a “swarm” of sensors.

Bottom Line: Ceva’s move positions it as a linchpin in the Wi-Fi 7 rollout, with over 60 licensees already harnessing the Ceva-Waves family for diverse applications. As edge computing surges, this IP doesn’t just connect devices; it imbues them with agency, paving the way for a future where AI seamlessly bridges digital and physical realms. By democratizing advanced connectivity, Ceva accelerates innovation, ensuring that smarter, more intuitive experiences are accessible to all.

Contact CEVA

Also Read:

A Remote Touchscreen-like Control Experience for TVs and More

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier


Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU

Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU
by Daniel Nenni on 11-12-2025 at 8:00 am

SemiDynamics Cervell NPU

In the fast-paced world of AI development, bridging the gap from trained models to production-ready applications can feel like an eternity. Enter Semidynamics’ newly launched Inferencing Tools, a game-changing software suite designed to slash deployment times on the company’s Cervell RISC-V Neural Processing Unit. Announced on October 22, 2025, these tools promise to transform prototypes into robust products in hours, not weeks, by leveraging seamless ONNX Runtime integration and a library of production-grade samples.

Semidynamics, a European leader in RISC-V IP cores, has built its reputation on high-performance, open-source hardware tailored for machine learning. The Cervell NPU exemplifies this ethos: an all-in-one RISC-V architecture fusing CPU, vector, and tensor processing for zero-latency AI workloads. Configurable from 8 to 256 TOPS at INT4 precision and up to 2GHz clock speeds, Cervell scales effortlessly for edge devices, datacenters, and everything in between. Its fully programmable design eliminates vendor lock-in, supporting large language models, deep learning, and high-performance computing with standard RISC-V AI extensions. Whether powering on-device assistants or cloud-scale vision pipelines, Cervell’s efficiency stems from its unified instruction stream, enabling infinite customization without fragmented toolchains.

At the heart of the Inferencing Tools is a high-level library layered atop Semidynamics’ ONNX Runtime Execution Provider for Cervell. Developers no longer wrestle with model conversions or low-level kernel tweaks. Instead, they point to an ONNX file, sourced from repositories like Hugging Face or the ONNX Model Zoo, select a configuration, and launch inference directly on Cervell hardware. Clean APIs handle session setup, tensor management, and orchestration, stripping away boilerplate code and minimizing integration risks. This abstraction sits comfortably above the Aliado SDK, Semidynamics’ kernel-level library for peak performance tuning, offering two lanes: rapid prototyping via the Tools or fine-grained optimization via Aliado.

ONNX Runtime integration is the secret sauce. As an open-standard format, ONNX ensures compatibility across ecosystems, and Semidynamics’ Execution Provider plugs it into Cervell’s vector and tensor units via the Aliado Kernel Library. The result? Plug-and-play execution for thousands of pre-trained models, with validated performance across diverse topologies. No more custom wrappers or compatibility headaches—developers focus on application logic, not plumbing.

To supercharge adoption, Semidynamics includes production-grade samples that serve as blueprints for real-world apps. For LLMs, expect ready-to-run chatbots using Llama or Qwen models, complete with session handling and response generation. Vision enthusiasts get YOLO-based object detection pipelines for real-time analysis, while image classifiers draw from ResNet, MobileNet, and AlexNet for tasks like medical imaging or autonomous navigation. These aren’t toy demos; they’re hardened for scale, with built-in error handling and optimization hooks.

The benefits ripple outward. “Developers want results,” notes Pedro Almada, Semidynamics’ lead software developer. “With the Inferencing Tools, you’re running on Cervell, prototype in hours, then harden for production.” Teams report shorter cycles, predictable latency, and maintainable codebases, ideal for embedding AI in agents, assistants, or edge pipelines. Complementing this is the Aliado Quantization Recommender, a sensitivity-aware tool that scans ONNX models for optimal bit-widths (INT4 to INT2), balancing accuracy and bandwidth without exhaustive trials.
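The principle behind such a sensitivity-aware recommender can be sketched simply: simulate each candidate bit width, measure the error it introduces, and return the narrowest width that stays within tolerance. A toy pure-Python sketch (the weights, widths, and tolerance below are invented for illustration; the actual Aliado tool analyzes ONNX models per layer):

```python
def quantize_error(values, bits):
    """RMS error from symmetric uniform quantization at the given bit width."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 levels each side for INT4
    scale = max(abs(v) for v in values) / qmax
    err2 = 0.0
    for v in values:
        q = round(v / scale)              # quantize to an integer code
        err2 += (q * scale - v) ** 2      # dequantize and compare
    return (err2 / len(values)) ** 0.5

def recommend_bits(values, widths=(2, 3, 4, 8), tol=0.05):
    """Narrowest candidate bit width whose RMS error stays within tol."""
    for bits in sorted(widths):
        if quantize_error(values, bits) <= tol:
            return bits
    return max(widths)

weights = [0.8, -0.31, 0.05, 0.44, -0.92, 0.13]
print(recommend_bits(weights))  # 4
```

A production recommender would weight this error by each layer’s sensitivity to perturbation rather than treating all tensors equally, which is the trade-off the Aliado Quantization Recommender is described as automating.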

Bottom line: In an era where AI deployment lags innovation, Semidynamics’ Inferencing Tools democratize Cervell’s power. By fusing open hardware with streamlined software, they accelerate the journey from lab to launch, empowering developers to ship smarter, faster products. As RISC-V gains traction in AI, expect this suite to redefine edge inferencing—open, scalable, and unapologetically efficient.

Also Read:

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?

Semidynamics adds NoC partner and ONNX for RISC-V AI applications


Adding Expertise to GenAI: An Insightful Study on Fine-Tuning

Adding Expertise to GenAI: An Insightful Study on Fine-Tuning
by Bernard Murphy on 11-12-2025 at 6:00 am

AI Model Tuner

I wrote earlier about how deep expertise, say for high-quality RTL design or verification, must be extracted from in-house know-how and datasets. In general, such methods start with one of many possible pre-trained models (GPT, Llama, Gemini, etc.). To this, consultants or in-house teams add fine-tuning, initially through supervised fine-tuning (SFT), refined through reinforcement learning with human feedback (RLHF), and subsequently enhanced/maintained through iterative refinement. ChatGPT claims this is the dominant flow (I incline to thinking them pretty accurate in their own domain). Supervision is through labeling (question/answer pairs). In most cases relying on human labeling alone is too expensive, so we must learn how to automate this step.

 A nice example of SFT from Microsoft

This Microsoft paper studies two different methods to fine-tune a pre-trained model (GPT-4), adding expertise on recent sporting events. The emphasis in this paper is on the SFT step rather than following steps. Before you stop reading because this isn’t directly relevant to your interests, I can find no industry-authored papers on fine-tuning for EDA. I know from a comment at a recent conference that Microsoft hardware groups are labeling design data, so I suspect topics like this may be a safe proxy for publishing research in areas relevant to internal proprietary work.

Given the topic tested in the study, the authors chose to fine-tune with data sources (here Wikipedia articles) added after the training cutoff for the pre-trained model, in this case September 2021. They looked at two approaches to fine-tuning on this corpus, one token-based and one fact-based.

The token-based method for label generation is straightforward and, per the paper, mirrors standard practice. Here the authors seed with a manually generated label from the article overview section and prompt the model to generate a bounded set of labels from the article. The second method (which they call fact-based) is similar, except that it prompts the model to break complex sentences down into multiple atomic sentences where needed. The authors also allowed some filtering in this case to remove facts irrelevant to the purpose of the study. Here also the model was asked to generate multiple unique labels.

The paper describes training trials, run in each case on the full set of generated labels as well as on subsets to gauge sensitivity to training sample size. Answers are validated using the same model running a test prompt (like a test for a student), allowing only pass/fail responses.

The authors compare accuracy of results across a variety of categories against results from the untuned pre-trained model, their range of scaled fine-tuned options, and against RAG over the same sections used in fine-tuning but based on Azure OpenAI hybrid search. They conclude that while token-based training does increase accuracy over the untrained model, it is not as uniform in coverage as fact-based training.

Overall they find that SFT significantly improves performance over the base pre-trained model within the domain of the added training. In this study RAG outperforms both methods but they get close to RAG performance with SFT.

I don’t find these conclusions entirely surprising. Breaking down complex sentences into individual labels feels like it should increase coverage versus learning from more complex sentences. And neither method should be quite as good as vector-based search (more global similarity measures) which could catch inferences that might span multiple statements.

Caveats and takeaway

Fine-tuning is clearly still a very dynamic field, judging by recommended papers from Deep Research in Gemini and ChatGPT, complemented by my own traditional research (Google Scholar, for example, where I found this paper). There is discussion of synthetic labeling, though there are concerns that this method can lead to significant errors without detailed human review.

One paper discusses how adding a relatively small set (1000) of carefully considered human-generated labels can be much more effective for performance than large quantities of unlabeled or perhaps synthetically labeled training data.

There is also concern that under some circumstances fine-tuning could break capabilities in the pre-trained model (this is known as catastrophic forgetting).

My takeaway is that it is possible to enhance a pre-trained model with curated training data and modest training prompts and get significantly better response accuracy than the pre-trained model alone could provide. However, expert review is important to build confidence in the enhanced model, and it is clear that 100% model accuracy is still an aspirational goal.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI RTL Generation versus AI RTL Verification