Free and Open Chip Design Tools: Opportunities, Challenges, and Outlook
by Admin on 08-24-2025 at 10:00 am


Designing semiconductor chips has traditionally been costly and controlled by a few major Electronic Design Automation (EDA) vendors: Cadence, Synopsys, and Siemens EDA, which dominate with proprietary tools protected by NDAs and restrictive licenses. Fabrication also requires expensive, often export-controlled equipment. This oligopoly raises barriers for small companies, researchers, and students.

A growing movement is shifting towards free and open-source EDA tools and open Process Design Kits (PDKs) that lower costs, broaden access, and foster innovation. Open EDA can be applied partially for simulation or across the entire design flow from concept to fab-ready layout. While most advanced nodes still require proprietary PDKs, open PDKs for older nodes (e.g., SkyWater 130 nm, IHP 130 nm, GlobalFoundries 180 nm, ICSprout 55 nm) enable ASIC production without NDAs.

Illustrated uses span education, research, and industry:
  • Commercial adoption: Google, Nvidia, and NXP use tools like Verilator (fast open-source simulation) and DREAMPlace (GPU-accelerated placement) in production workflows. SPHERICAL applies open tools to design radiation-hardened chips for satellites; Swissbit integrates open, provably correct cryptographic modules.

  • Educational outreach: Initiatives like Tiny Tapeout and One Student One Chip let thousands of students produce working ASICs for as little as $300, using open PDKs.

  • Emerging products: Automotive ECUs, high-speed serial links, and gaming consoles are being prototyped with open flows.

Advantages and opportunities include:
  • Cost reduction: Proprietary EDA licenses can cost $10k+ per workstation per month. Open tools slash entry costs, enabling low-volume and niche ASIC designs to be profitable.

  • Innovation: Open frameworks allow modification, integration of AI-based design aids, and rapid iteration, often impossible in closed systems.

  • Security: Open designs and tools can be audited, mathematically verified, and built to standards like Caliptra, reducing risks of hidden hardware Trojans or backdoors. Transparent, community-verified cryptographic modules enhance trust.

  • Education and skills: Students can install and explore open EDA freely, enlarging the future talent pool in semiconductor design.

Challenges remain significant:
  • Performance gap: Open tools may lag behind commercial software in supporting advanced nodes, analog/mixed-signal, or mm-wave designs.

  • PDK access: Leading-edge fabs (e.g., TSMC N3) will likely never open their PDKs, limiting open EDA’s reach to legacy and mid-range processes.

  • Commercial risk: For high-value ASICs, companies may prefer proven proprietary flows to minimize costly re-spins.

  • Coordination vs. fragmentation: Efforts like OpenROAD (US), iEDA (China), and Coriolis (France) may duplicate work or compete rather than pool resources.

The paper outlines strategic options:
  • For investors: Experiment internally with open tools, join cost-sharing consortia (CHIPS Alliance, Linux Foundation), and explore hybrid flows mixing open and proprietary components.

  • For governments: Fund open PDK development, secure fabrication options, and support international cooperation—even across geopolitical lines—to build technological sovereignty.

Security-driven innovation is a standout theme. Openness enables formal verification of hardware components, provably secure random number generators, and side-channel-resistant designs. These can be deployed in “trusted” or even open fabs, forming verified value chains from chip design to system integration.

The future of open EDA could follow several paths:
  1. Remain primarily educational, producing small ASICs for learning.

  2. Emerge as a viable commercial competitor to the “Big Three,” especially for mid-range nodes.

  3. Blend into hybrid flows, where open tools augment proprietary ones.

  4. Drive creation of transparent, standardized fabs with openly shared process data.

Bottom line: While barriers exist, the trend appears, per UCSD’s Andrew Kahng, “unstoppable and irreversible.” As more companies and governments seek cost-effective innovation, security assurance, and independence from proprietary lock-in, free and open EDA is poised to expand its role in global semiconductor development.

The original paper from HEP Alliance is here.

Also Read:

WEBINAR: Functional ECO Solution for Mixed-Signal ASIC Design

Taming Concurrency: A New Era of Debugging Multithreaded Code

Perforce Webinar: Can You Trust GenAI for Your Next Chip Design?


Chiplets: providing commercially valuable patent protection for modular products
by Robbie Berryman on 08-24-2025 at 6:00 am


Many products are assembled from components manufactured and distributed separately, and it is important to consider how such products are manufactured when seeking to provide commercially valuable patent protection. This article provides an example in the field of computer chip manufacture.

Chiplets

A system-on-a-chip (SoC) is a type of integrated circuit product that acts as an entire computer in a single package, providing low-power, high-performance data processing. SoCs are widely used, and provide the brains of smartphones, leading-edge laptops, IoT devices, and much more.

A SoC includes essential functionality such as a central processing unit (CPU), memory, input/output circuitry, and so on. A traditional SoC provides these functions within a single monolithic piece of semiconductor material (for example, silicon) manufactured and distributed as a single integral device.

An emerging technology is the manufacture of a chiplet-based SoC by assembling a number of separately manufactured microchips known as “chiplets” together in a package. Each chiplet is a building block having some functionality, and the collection of chiplets together provides the functionality of the SoC.

A chiplet-based SoC can have various advantages over monolithic SoCs, including increased production yield due to testing of individual chiplets, increased design flexibility as chiplets can be made using different manufacturing processes, and allowing simplified SoC design by assembling off-the-shelf chiplets.

Monolithic system-on-a-chip (SoC) and chiplet-based SoC

The law

In the UK, direct infringement under s.60(1) UK Patents Act requires that an infringing product includes every feature claimed in a patent claim.

This is fairly straightforward when considering infringement by an integral device such as a monolithic SoC: anyone making, importing, or selling a monolithic SoC including the claimed invention infringes the patent. For a monolithic SoC it doesn’t matter which parts of the SoC perform the different parts of the invention: the SoC is manufactured in one go and therefore an infringing SoC includes all of the claimed features from the point of manufacture.

However, due to the introduction of chiplets, a SoC is also an example of a product which may be assembled from separately manufactured parts. When patenting inventions in SoCs, such as developments at an architectural or micro-architectural level or techniques which might be implemented using a SoC, it may be natural to claim features of the SoC as a whole based on an assumption that a monolithic SoC would be used. However, this might lead to difficulties enforcing the patent.

In particular, a patent claim for an invention implemented in a chiplet-based SoC might include features provided by different chiplets, meaning that no chiplet alone provides all of the features of the invention. Therefore, manufacturers (or importers or sellers) of individual chiplets might not directly infringe the patent. The patent might only be directly infringed when the chiplets are finally assembled into a SoC, and this can diminish the commercial value of the patent as large parts of the supply chain are unprotected.

Indirect infringement under s.60(2) UK Patents Act might provide a get-out in some cases where a chiplet could be considered to indirectly infringe a patent for a SoC even if the chiplet does not include all the claimed features. However, it is often much harder to prove that indirect infringement has occurred, especially in cases of cross-border sale between a manufacturer in one territory and a downstream party in another territory.

Practical advice

It is important to draft claims so that they are directly infringed by products which are manufactured and distributed together. In the field of computer chip manufacture, this means claims should attempt to cover individual chiplets rather than full SoCs.

Often the core concept of an invention is actually provided by features of a particular sub-component, such as a particular chiplet. A careful selection of claim features can limit the claims to that particular component, so manufacture and sale of the component alone directly infringes the claim. If other elements are important to provide context for the inventive concept, it may be sufficient to refer to those elements indirectly in the claims so that they are not required for infringement.

It is worth noting that, as demonstrated by the introduction of chiplets changing the way SoCs are manufactured, what might be considered an integral product is liable to change as technology develops. We therefore recommend seeking professional advice from a patent attorney familiar with the technical field of your invention.

D Young & Co is a leading top-tier European intellectual property firm, dedicated to protecting and enforcing our clients’ IP rights. For over 130 years we’ve been applying our world-class expertise to take ideas, products and services further.

Also Read:

Alphawave Semi and the AI Era: A Technology Leadership Overview

Enabling the Ecosystem for True Heterogeneous 3D IC Designs

Altair SimLab: Tackling 3D IC Multiphysics Challenges for Scalable ECAD Modeling


IMEC’s Advanced Node Yield Model Now Addresses EUV Stochastics
by Fred Chen on 08-23-2025 at 8:00 am


It lays the foundation for the Stochastics Resolution Gap

Chris Mack, the CTO of Fractilia, recently wrote of the “Stochastics Resolution Gap,” which is effectively limiting the manufacturability of EUV despite its ability to reach resolution limits approaching 10 nm in the lab [1,2]. As researchers have inevitably found, the shrinking dimensions of features targeted by EUV lithography have led to increasing stochastic variability operating at the molecular level [1,3,4]. This, in turn, leads to variations of feature width, feature position, edge roughness, and worst of all, yield-killing defects.

An SPIE paper by IMEC last year gave an updated yield model which strove to take into account the stochastic behavior of EUV lithography [5]. The model made use of defect density from calibrated wafer data and was said to be benchmarked against industry [5,6]. The model is essentially:

Wafer yield = Systematic yield x Random yield,

where systematic yield is an estimated value (98%) and random yield is given by the Poisson model exp(-A*D0), with A being the die area (here taken to be 1 cm2) and D0 being the defect density. Since some layers require more than one mask, the layer’s D0 is the sum of the defect densities per mask use (equivalently, the per-mask yield factors multiply). Since EUV stochastic effects get worse with smaller pitch, the corresponding D0 per use of an EUV mask increases as pitch shrinks. In fact, there is a cliff that starts just below 40 nm pitch (Figure 1).

Figure 1. Defect density per EUV mask use from calibrated wafer data, owing to stochastic behavior [5].
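
Written out, the model above (a systematic term multiplied by a Poisson random-yield factor for every mask used, with the per-mask defect densities adding in the exponent) can be restated as:

    \text{Wafer yield} \;=\; Y_{\text{sys}} \times \prod_{i \in \text{masks}} e^{-A\,D_{0,i}}
    \;=\; Y_{\text{sys}} \times e^{-A \sum_{i} D_{0,i}},
    \qquad Y_{\text{sys}} \approx 0.98,\quad A = 1\ \text{cm}^2 .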

Advanced nodes also use immersion ArF lithography (193i). In IMEC’s model, the defect density per 193i mask use is held fixed at 0.005/cm2 [5,6]. Table 1 gives the assumed D0 values per mask use for the 7nm and 5nm nodes.

Table 1. Defect density (per cm2) per mask use for 7nm and 5nm process nodes [5].

Note that there are several versions of 5nm nodes. N5M applies both 193i and EUV for metal patterning (one mask each), while N5 EUV applies EUV only (single exposure patterning) and N5C EUV adds an EUV cut mask for metal patterning. Likewise, for 7nm, N7C EUV is N7 EUV with an EUV cut mask added for metal patterning. N5M has better D0 per mask due to a more relaxed M1 pitch (equal to gate pitch), while N5 EUV and N5C EUV use a tighter M1 pitch, effectively a shrink compared to N5 193i and N5M.

Table 2. Number of 193i and EUV masks used per 7nm and 5nm layer, assumed in [5].

Assuming a 1 cm2 die area, the yields are calculated for each of the five layers (M1, V1, M2, V2, M3) according to the number of masks used; the layer yields are then multiplied together, and finally the systematic yield is multiplied by the result to give the total yield. For 7nm, we see in Figure 2 that although the use of fewer masks with EUV increases the yield, the difference from the all-193i case is small. At 7nm, 193i LELE (self-aligned) patterning is largely sufficient and actually cheaper than single-exposure EUV [7,8].

Figure 2. Estimated yields for 7nm (all 193i/all EUV single exposure/all EUV, including cuts).

On the other hand, for 5nm, more layers are at tighter pitches, increasing the stochastic defect density from EUV. Thus, increasing EUV use actually increases the overall defect density, lowering yield (Figure 3).

Figure 3. Estimated yields for 5nm (all 193i/mixed 193i/EUV/all EUV single exposure/all EUV, including cuts).

The reason for the drastic change in trend is the much higher EUV defect density (0.057/cm2) at the tighter metal pitch. In fact, compared to the 193i defect density (0.005/cm2), it is 11 times higher, meaning its impact on yield would be the same as using eleven 193i masks in multipatterning!
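
As a quick check of these numbers, the sketch below is illustrative only: it uses just the two per-mask defect densities quoted above and the 1 cm2 die area, whereas the full model sums D0 over every mask in every layer and then multiplies by the 98% systematic yield.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Values quoted in the article: 1 cm^2 die, 0.005/cm^2 per 193i mask,
        // 0.057/cm^2 per EUV mask at the tighter 5nm-class metal pitch.
        const double area_cm2 = 1.0;
        const double d0_193i  = 0.005;
        const double d0_euv   = 0.057;

        // Poisson random-yield factor contributed by a single mask: exp(-A * D0).
        const double y_193i = std::exp(-area_cm2 * d0_193i);  // ~0.995
        const double y_euv  = std::exp(-area_cm2 * d0_euv);   // ~0.945

        // Yield impact of one tight-pitch EUV mask expressed in 193i-mask equivalents.
        const double equiv_193i_masks = d0_euv / d0_193i;     // ~11.4

        std::printf("per-mask random yield: 193i %.3f, EUV %.3f\n", y_193i, y_euv);
        std::printf("one tight-pitch EUV mask ~ %.1f 193i masks\n", equiv_193i_masks);
        return 0;
    }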

Thus, IMEC’s updated yield model is the basis for the Stochastics Resolution Gap, as it can be projected that yield impact from stochastics outweighs intrinsic resolution in process choices for advanced nodes. Further tuning of the model would be beneficial, such as accounting for performance impact from roughness and edge placement error as well as CD variations. This spotlights the need for improved, high-volume metrology techniques for detecting these issues from EUV stochastics, definitely something that Fractilia would be happy to deal with.

References

 

[1] C. Mack, Stochastics: Yield-Killing Gap No One Wants to Talk About.

[2] J. Y. Choi et al., Proc. SPIE 13424, 134240A (2025).

[3] H. Fukuda, J. Appl. Phys. 137, 204902 (2025), and references therein; https://doi.org/10.1063/5.0254984.

[4] F. Chen, Facing the Quantum Nature of EUV Lithography.

[5] Y-P. Tsai et al., Proc. SPIE 12954, 1295404 (2024).

[6] Y-P. Tsai et al., Proc. SPIE 12052, 1205203 (2022).

[7] E. Vidal-Russell, J. Micro/Nanopatterning, Materials, and Metrology 23, 041504 (2024).

[8] L-Å Ragnarsson et al., EDTM 2022.

This article first appeared on Substack: IMEC’s Advanced Node Yield Model Now Addresses EUV Stochastics

Also Read:

Edge Roughness Differences Among EUV Resists

Facing the Quantum Nature of EUV Lithography

High-NA Hard Sell: EUV Multi-patterning Practices Revealed, Depth of Focus Not Mentioned


Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move
by Daniel Nenni on 08-22-2025 at 10:00 am

Dan is joined by Ben Packman, Chief Strategy Officer of PQShield. Ben leads global expansion through sales and partner growth across multiple vertical markets, alongside taking a lead role in briefing both government and the supply chain on the quantum threat. He has 30 years of experience in technology, health, media, and telecom, as well as advising multiple startups in the UK tech space.

Dan and Ben discuss the substantial worldwide effort underway to implement post-quantum cryptography (PQC) standards. Market dynamics and the interdependencies of a global supply chain are reviewed. Ben describes the various timelines being followed by chip, software, and system companies and explains the sometimes complex interdependencies of each segment as everyone moves toward implementation of standards around the world for PQC compliance.

Ben discusses the risks and costs involved and provides some predictions on how the entire effort will come together and in what time frame.

Contact PQShield

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Silvaco: Navigating Growth and Transitions in Semiconductor Design
by Admin on 08-22-2025 at 8:00 am


Silvaco Group, Inc., a veteran player in the EDA and TCAD space, continues to evolve amid the booming semiconductor industry. Founded in 1984 and headquartered in Santa Clara, California, Silvaco specializes in software for semiconductor process and device simulation, analog custom design, and semiconductor intellectual property. As of September 2025, the company has demonstrated resilience through record financials, strategic acquisitions, leadership shakeups, and product innovations, positioning itself as a key enabler for advanced technologies like AI, photonics, and power devices.

Financially, Silvaco closed 2024 on a high note, reporting record gross bookings of $65.8 million and revenue of $59.7 million for the full year. This momentum carried into 2025, with first-quarter bookings reaching $13.7 million and revenue at $14.1 million, alongside securing nine new customers. The second quarter saw bookings of $12.91 million and revenue of $12.05 million, with 10 new logos in sectors like photonics, automotive, military, foundry, and consumer electronics. Looking ahead, Silvaco projects third-quarter 2025 bookings between $14.0 million and $18.2 million, a 42% to 84% jump from the prior year’s third quarter. These figures reflect sustained demand for its digital twin modeling platform, which aids in yield improvement and process optimization. However, challenges emerged in late 2024, including a $2 million revenue midpoint reduction due to order freezes and pushouts, though the company retains a buy rating from analysts.

Leadership transitions marked a pivotal shift in 2025. On August 21, Silvaco announced a CEO change, with Dr. Walden C. Rhines stepping in as the new chief executive. Rhines, a semiconductor industry veteran, brings expertise from his prior roles at Mentor Graphics and Texas Instruments. This followed the departure of Babak Taheri. Complementing this, Silvaco appointed Chris Zegarelli as Chief Financial Officer effective September 15, 2025. Zegarelli, with over 20 years in semiconductor finance, previously held positions at Maxim Integrated and Analog Devices. These moves come after the March 2025 exit of the former CFO, signaling a focus on stabilizing operations amid growth.

Acquisitions have bolstered Silvaco’s portfolio. In March 2025, it expanded its offerings by acquiring Cadence’s process design kit business, enhancing its TCAD capabilities for advanced nodes. More recently, the company completed the purchase of Mixel Group, Inc., a leader in low-power, high-performance mixed-signal connectivity IP, strengthening its SIP lineup for automotive and IoT applications. On the product front, Silvaco extended its Victory TCAD and digital twin modeling platform to planar CMOS, FinFET, and advanced CMOS technologies in April 2025, enabling next-generation semiconductor development. Its tools, including FTCO™ for fab optimization and Power Device Analysis, continue to drive efficiency.

Partnerships and initiatives underscore Silvaco’s commitment to talent and ecosystem growth. In June 2024, it collaborated with Purdue, Stanford, and Arizona State universities to enhance semiconductor workforce development. Additionally, Silvaco’s EDA tools were included in India’s Design Linked Incentive (DLI) Scheme, inviting startups and MSMEs to access them via the ChipIN Centre. Upcoming events include SEMICON India (September 2-4, 2025) and IP-SoC China (September 11, 2025), where Silvaco will showcase its innovations.

Looking forward, Silvaco’s trajectory appears promising despite industry headwinds like talent shortages and rising design costs. With 46 new customers in 2024 and continued expansion, the company is well-positioned to capitalize on the AI chip market’s growth. As it integrates recent acquisitions and leverages new leadership, Silvaco aims to deliver value in a sector projected to hit $383 billion by 2032. Investors will watch closely as it navigates these dynamics toward sustained profitability.

About Silvaco Group, Inc.
Silvaco is a provider of TCAD, EDA software, and SIP solutions that enable semiconductor design and digital twin modeling through AI software and innovation. Silvaco’s solutions are used for semiconductor and photonics processes, devices, and systems development across display, power devices, automotive, memory, high performance compute, foundries, photonics, internet of things, and 5G/6G mobile markets for complex SoC design. Silvaco is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Egypt, Brazil, China, Japan, Korea, Taiwan, Singapore and Vietnam. Learn more at silvaco.com.

Also Read:

Analysis and Exploration of Parasitic Effects

Silvaco at the 2025 Design Automation Conference #62DAC

TCAD for 3D Silicon Simulation


Taming Concurrency: A New Era of Debugging Multithreaded Code
by Admin on 08-21-2025 at 10:00 am


As modern computing systems evolve toward greater parallelism, multithreaded and distributed architectures have become the norm. While this shift promises increased performance and scalability, it also introduces a fundamental challenge: debugging concurrent code. The elusive nature of race conditions, deadlocks, and synchronization bugs has plagued developers for decades. The complexity of modern software systems—spanning millions of lines of code, multiple threads, and distributed nodes—demands a radical transformation in debugging methodology. This is where technologies like time travel debugging, thread fuzzing, and multi-process correlation step in.

Multithreaded applications are notoriously difficult to reason about because their behavior is often non-deterministic. A program might pass every test during development, only to fail sporadically in production due to a subtle timing bug. Race conditions arise when the outcome of code depends on the unpredictable ordering of thread execution. Deadlocks occur when threads wait on each other indefinitely. Such defects are not only hard to reproduce, but even harder to isolate and correct using conventional debugging tools.
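
To make that concrete, here is a minimal C++ sketch (not taken from the paper; the thread and iteration counts are arbitrary) of the bug class being described: several threads perform an unsynchronized read-modify-write on a shared counter, so updates are silently lost and the final value differs from run to run. The volatile qualifier is only there so the compiler keeps every load and store; it does nothing to make the code thread-safe.

    #include <iostream>
    #include <thread>
    #include <vector>

    // Deliberately unsynchronized shared state: a textbook data race.
    volatile long counter = 0;

    void worker(int iterations) {
        for (int i = 0; i < iterations; ++i) {
            // Non-atomic read-modify-write: another thread can interleave
            // between the load and the store, losing an update.
            counter = counter + 1;
        }
    }

    int main() {
        const int iterations = 1000000;
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t)
            threads.emplace_back(worker, iterations);
        for (auto& th : threads)
            th.join();
        long observed = counter;
        // Typically prints less than 4000000, and a different value each run.
        std::cout << "expected " << 4L * iterations
                  << ", observed " << observed << std::endl;
        return 0;
    }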

Historically, debugging concurrency issues involved a laborious process: reproduce the failure, guess at its cause, add logging, recompile, and try again. This loop could span weeks, especially when the problem occurred in customer environments where direct access to the system was limited. Engineers would often spend more time trying to reproduce bugs than actually fixing them.

Undo, a company specializing in advanced debugging technologies, proposes a modern solution: time travel debugging (TTD). TTD allows developers to capture a complete execution trace of their application, enabling them to step forward and backward through code execution like a video replay. Every line executed, every memory mutation, and every variable value is preserved in a recording file. With this, engineers can inspect the application at any point in time, using standard debugging tools and commands. A single recording becomes a 100% reproducible snapshot of the program’s behavior, regardless of the environment or timing.

The power of TTD is amplified when paired with thread fuzzing. This technique intentionally perturbs the scheduling of threads during testing to make hidden concurrency bugs more likely to appear. Unlike random bug hunting, thread fuzzing is systematic. It can simulate scenarios such as thread starvation, lock contention, and data race conditions—revealing defects that may occur only once in a million executions. Undo’s feedback-directed thread fuzzing, introduced in version 8.0, takes this further by identifying shared memory locations accessed by multiple threads and targeting them for more frequent thread switching. This significantly increases the likelihood of exposing race conditions.
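
The underlying idea can be sketched generically. The example below illustrates schedule perturbation in principle, not Undo’s feedback-directed implementation; maybe_preempt is a hypothetical helper. Calling it right at the racy accesses randomly yields or sleeps the thread, so interleavings the default scheduler almost never produces are exercised constantly, and a lost-update bug like the one above surfaces within a handful of runs rather than once in a million.

    #include <chrono>
    #include <iostream>
    #include <random>
    #include <thread>

    // Hypothetical helper: randomly pre-empt the current thread to widen the
    // window in which another thread can interleave with a shared access.
    void maybe_preempt(double probability = 0.5) {
        thread_local std::mt19937 rng{std::random_device{}()};
        std::bernoulli_distribution preempt(probability);
        if (preempt(rng))
            std::this_thread::sleep_for(std::chrono::microseconds(rng() % 50));
    }

    volatile long shared_value = 0;  // unsynchronized on purpose

    void racy_writer(int iterations) {
        for (int i = 0; i < iterations; ++i) {
            maybe_preempt();                 // perturb right before the read
            long observed = shared_value;
            maybe_preempt();                 // and between read and write
            shared_value = observed + 1;     // lost updates now appear quickly
        }
    }

    int main() {
        std::thread a(racy_writer, 1000), b(racy_writer, 1000);
        a.join();
        b.join();
        long observed = shared_value;
        std::cout << "expected 2000, observed " << observed << std::endl;
        return 0;
    }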

Another essential feature is multi-process correlation, which enables simultaneous debugging of multiple cooperating processes. Whether processes communicate via shared memory or sockets, Undo captures every inter-process read and write. By analyzing this “shared memory log,” developers can track exactly which process modified a variable and when. With commands like ublame and ugo, they can jump directly to the code responsible for a data inconsistency—an otherwise daunting task in distributed systems.

These technologies represent a paradigm shift. Debugging is no longer about hoping to reproduce a failure, but about deterministically analyzing it after it’s happened—once and for all. Major technology companies such as SAP, AMD, Siemens EDA, Palo Alto Networks, and Juniper Networks have adopted Undo to accelerate their debugging cycles and improve software reliability.

In an age where concurrency is a feature, not an option, debugging tools must evolve to match the complexity they confront. Undo’s time travel debugging, thread fuzzing, and multi-process correlation offer a robust, scalable solution. They don’t just make debugging faster—they make the previously impossible, possible. And in doing so, they free developers to focus less on chasing ghosts and more on building the future.

Read the full technical paper here.

Also Read:

Video EP7: The impact of Undo’s Time Travel Debugging with Greg Law

CEO Interview with Dr Greg Law of Undo


Perforce Webinar: Can You Trust GenAI for Your Next Chip Design?
by Mike Gianfagna on 08-21-2025 at 6:00 am


GenAI is certainly changing the world. Every day there are new innovations in the use of highly trained models to do things that seemed impossible just a short while ago. As GenAI models take on more tasks that used to be the work of humans, there is always a nagging concern about accuracy and bias. Was the data used to train the model correct?  Do we know where it all came from, and can the sources be trusted? In chip design, these concerns are certainly there as well. Errors introduced this way can be hard to find and extremely expensive to fix. If you’re not worried about this, you should be.

Perforce recently held a webinar that deals with this class of problem in a predictable and straight-forward way. You can watch a replay of the webinar here. Below is a preview of some of the topics that are addressed in the Perforce Webinar to answer the question – can you trust GenAI for your next chip design?

The Webinar Presenter


Vishal Moondhra is the VP of Solutions Engineering at Perforce IPLM. He brings over 20 years of experience in digital design and verification, along with the right skill set and perspective for the webinar. He is a recognized expert in his field and often speaks on current challenges in semiconductor design.

His experience in engineering and senior management includes innovative startups like LGT and Montalvo and large multi-national corporations such as Intel and Sun. In 2008, Vishal co-founded Missing Link Tools, which built the industry’s first comprehensive design verification management solution, bringing together all aspects of verification management into a single platform. Missing Link was acquired by Methodics in 2012 and Methodics was acquired by Perforce in 2020.

Topics Covered

Vishal begins by framing the problem. What are the unique challenges and risks associated with using GenAI in the semiconductor industry? And how do these risks compare to software development? There are four areas he sets up for further investigation. They are:

  • Liability: Sensitivity around ownership and/or licensing of data used for training
  • Lack of Traceability: Challenges in tracing exactly how a model was trained
  • High Stakes: Extremely high cost of errors & impact on production timelines
  • Data Quality Concerns: Mixing external IP with internal, fully vetted datasets

A lot of the discussion that follows focuses on design flows and provenance. We all know what a design flow is, but to be clear, here is a definition of provenance:

The history of the ownership of an object, especially when documented or authenticated.

So, a key point in trusting GenAI for chip design is trusting the GenAI models. And trusting those models means knowing how the models were trained, and what data was used for that training. More broadly, it means having a full picture of the provenance of all the IP and training data that goes into your design. The problem space is actually larger than this. The figure below provides a more comprehensive view of all the information that must be tracked and managed to achieve the required level of trust for all aspects of the design.

What Needs to be Managed

Vishal discusses some of the elements required to achieve trust in using AI to train internal models. These items include:

  • Clear and auditable data provenance for all training datasets
  • Complete traceability of all IPs and IP versions
  • Secure and compliant use of both internal and external IP
  • Enhanced trust in AI adoption when using highly sensitive IP

Vishal then details the processes required to set up an AI training/machine learning pipeline. He also provides details about how Perforce IPLM can be used to implement a comprehensive system to track and validate all aspects of a design to achieve a level of trust and confidence that will provide the margin of victory for any complex design project.

To Learn More

Chip design has become far more complex. It’s not just the design complexity and that all-important “right first time” mandate. It now also includes massive IP and supporting data from a worldwide supply chain. Increasing reliance on GenAI models to accelerate the whole process brings in the additional requirements to examine how those models are trained and what data was used in the process.

All of this is important to understand and manage. This webinar from Perforce provides significant details about how to set up the needed processes and how to use the information effectively. I highly recommend you watch it.  You can access the webinar replay here. Check out the Perforce webinar where you will find out how you can trust GenAI for your next chip design.

Also Read:

Perforce at DAC, Unifying Software and Silicon Across the Ecosystem

Building Trust in Generative AI

Perforce at the 2025 Design Automation Conference #62DAC


Weebit Nano Moves into the Mainstream with Customer Adoption
by Mike Gianfagna on 08-20-2025 at 10:00 am


Disruptive technology typically follows a path of research, development, early deployment and finally commercial adoption. Each of these phases is difficult and demanding in different ways. No matter how you measure it, getting to the finish line is a significant milestone for any company. Weebit Nano is disrupting the way embedded non-volatile memory is implemented with its resistive RAM, or ReRAM, technology. The company recently announced some new developments that indicate that the all-important move to commercial adoption is happening. Let’s look at the details as Weebit Nano moves into the mainstream with customer adoption.

Why This is Significant

Embedded non-volatile memory (NVM) finds application in many high-growth markets including automotive, consumer, medical/wearable, industrial and the ubiquitous application of AI inferencing on the edge to name just a few. Flash has been the workhorse here for many years. But flash is hitting limits such as power consumption, speed, endurance and cost. It is also not scalable below 28nm.

Newer technologies such as ReRAM offer a way around these issues to keep the innovation pipeline going. Weebit Nano was incorporated in 2015 with a vision of creating a leap forward in storage and computing capabilities to drive the proliferation of intelligent devices. Its development and validation in the market of ReRAM has been an important part of that journey. This is worth watching as ReRAM delivers a combination of high performance, low power, and low cost that is not achievable by other NVMs.

You can learn more about Weebit Nano’s journey on SemiWiki here.

What Was Announced

As a public company traded on the Australian Securities Exchange (ASX), Weebit Nano recently issued a Q4 FY25 Quarterly Activities Report. A key announcement was that the technology transfer work to onsemi was progressing well, with tapeout of first demo chips embedded with Weebit ReRAM expected this year. For background, onsemi, a NASDAQ-100 company, is a tier-1 IDM, designing and manufacturing its own products as well as supporting a few select product companies.

The announcement went on to report that after tapeout and qualification, Weebit ReRAM will be available in onsemi’s Treo™ Platform. This provides  a cost-effective, low-power NVM suitable for use in high temperature applications such as automotive and industrial. Treo is an analog/mixed signal platform available from onsemi. You can learn more about the Treo Platform here.

Weebit Nano also announced that it is on track to complete qualification at DB HiTek this calendar year as well. DB HiTek, formerly Dongbu HiTek, is a semiconductor contract manufacturing and design company headquartered in South Korea. DB HiTek is one of the major contract chip manufacturers, alongside TSMC, Samsung Electronics, GlobalFoundries, and UMC. It is also the second-largest foundry in South Korea, behind Samsung Electronics. You can learn more about DB HiTek here.

It was also reported that DB HiTek is demonstrating Weebit’s ReRAM, embedded in a test chip manufactured at DB HiTek, at industry events such as the PCIM conference in Germany, Europe’s largest power semiconductor exhibition. The edge AI demonstration, running on a DB HiTek 1Mb ReRAM module in silicon, showed a gesture recognition application and effectively showcased for potential customers the advantages of integrating ReRAM on-chip.

It was also reported that technical evaluations and commercial negotiations are progressing with more than a dozen foundries, IDMs, and product companies. Weebit remains well-positioned to meet its target of securing multiple licensing agreements before the end of the calendar year.

Related to this, another key announcement is that Weebit Nano signed a design license agreement with its first product customer. The customer, a U.S.-based company, plans to incorporate Weebit’s technology into select security-related applications. This is an important milestone in Weebit Nano’s transition to commercial adoption, with more licensing deals expected soon.

To Learn More


The announcement went on to provide impressive financial details for the company. There are also comments from Coby Hanoch, CEO of Weebit Nano, that are definitely worth reading. Almost every design requires some type of embedded non-volatile memory. In the face of advancing process nodes, Weebit Nano’s ReRAM is one to watch. You can read the entire announcement here. And that’s how Weebit Nano moves into the mainstream with customer adoption.

Also Read:

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Podcast EP267: The Broad Impact Weebit Nano’s ReRAM is having with Coby Hanoch

Weebit Nano is at the Epicenter of the ReRAM Revolution


Everspin CEO Sanjeev Agrawal on Why MRAM Is the Future of Memory
by Admin on 08-20-2025 at 8:00 am


Everspin’s recent fireside chat, moderated by Robert Blum of Lithium Partners, offered a crisp look at how the company is carving out a durable niche in non-volatile memory. CEO Sanjeev Agrawal’s core message was simple: MRAM’s mix of speed, persistence, and robustness lets it masquerade as multiple memory classes (data logging, configuration, and even data-center memory) while solving headaches that plague legacy technologies.

At the heart of Everspin’s pitch is performance under pressure. MRAM reads and writes in nanoseconds, versus microsecond-class writes for NOR flash. In factory automation, that difference is existential: robots constantly report state back to PLCs, and a sudden power loss with flash can scrap in-process work. With MRAM, the system snapshots in nanoseconds, so when power returns, machines resume exactly where they left off. The same attributes (instant writes, deterministic behavior, and automotive-grade temperature tolerance from −40°C to 125°C) translate into reliability for EV battery management systems, medical devices, and casino gaming machines (which must log every action multiple times).

Everspin organizes its portfolio into three families. “Persist” targets harsh environments and mission-critical data capture. “UNESYS” combines data and code for systems that need high-speed configuration—think FPGAs that today rely on NOR flash and suffer painfully slow updates. Agrawal’s thought experiment is vivid: a large configuration image that takes minutes on NOR could be near-instant on MRAM, reboots and rollbacks included. Finally, “Agilus” looks forward to AI and edge-inference use cases, where MRAM’s non-volatility, SRAM-like speed, and low leakage enable execute-in-place without the refresh penalties and backup complexities of SRAM.

The business model is intentionally diversified: product sales; licensing and royalties (including programs where primes license Everspin IP, qualify processes, and pay per-unit royalties); and U.S. government programs that need radiation-tolerant, mission-critical memory. On the product side, Everspin ships both toggle (field-switched) MRAM and perpendicular spin-transfer torque (STT-MRAM), the latter already designed into data-center and configuration solutions. The customer base spans 2,000+ companies, from Siemens and Schneider to IBM and Juniper, served largely through global distributors (about 90% of sales), with Everspin engaging directly early in the design-in cycle.

Strategically, the company sees a major opening as NOR flash stalls around the 40 nm node and tops out at modest densities. Everspin is shipping MRAM that “looks like NOR” at 64–128 Mb today, with a roadmap stepping from 256 Mb up to 2 Gb, aiming to land first high-density parts in the 2026 timeframe. Management sizes this replacement and adjacent opportunity at roughly $4.3B by 2029, and claims few credible MRAM rivals at these densities. Competitively, a South Korea–based supplier fabbed by Samsung offers a single native 16 Mb die (binned down to 4 Mb) with a longer-dated jump to 1 Gb, while Avalanche focuses on aerospace/defense with plans for 1 Gb STT-MRAM; Everspin’s pitch is breadth of densities, standard packages (BGA, TSOP), and an active roadmap.

Beyond industrial and config memory, low-earth-orbit satellites are a fast-emerging tailwind. With tens of thousands of spacecraft projected, radiation-tolerant, reliable memory is paramount; Everspin cites work with Blue Origin and Astro Digital and emphasizes MRAM’s inherent radiation resilience when paired with hardened front-end logic from primes. Supply-chain-wise, the company spreads manufacturing across regions: wafers sourced from TSMC (Washington State and Taiwan Fab 6) are finished in Chandler, Arizona; packaging and final test occur in Taiwan. For STT-MRAM, GlobalFoundries handles front end (Germany) and MRAM back end (Singapore), with final assembly/test again in Taiwan, diversifying tariff and logistics exposure.

Financially, Everspin reported another profitable quarter, slightly ahead of guidance and consensus, citing early recovery from customers’ post-pandemic inventory overhang and growing design-win momentum for its XSPI family (with revenue contribution expected in 2025). The balance sheet remains clean—debt-free with roughly $45M in cash—and the go-to-market team has been bolstered with a dedicated VP of Business Development and a new VP of Sales recruited from Intel’s Altera business.

Agrawal’s long-view is unabashed: MRAM is “the future of memory.” Whether replacing NOR in configuration, displacing SRAM at the edge for inference, or anchoring radiation-tolerant systems in space, Everspin’s thesis is that one versatile, non-volatile, fast, and power-efficient technology can simplify architectures while cutting energy use—an advantage that grows as AI workloads proliferate. If execution matches the roadmap, the company stands to be the domestic standard-bearer for MRAM across a widening set of markets.

See the full transcript here.

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution


A Principled AI Path to Spec-Driven Verification
by Bernard Murphy on 08-20-2025 at 6:00 am


I have seen a flood of verification announcements around directly reading product specs through LLM methods, and from there directly generating test plans and test suite content to drive verification. Conceptually automating this step makes a lot of sense. Carefully interpreting such specs even today is a largely manual task, requiring painstaking attention to detail in which even experienced and diligent readers can miss or misinterpret critical requirements. As always in any exciting concept, the devil is in the details. I talked with Dave Kelf (CEO at Breker) and Adnan Hamid (President and CTO at Breker) to better understand those details and the Breker approach.


The devil in the specs

We like to idealize specs as one or more finalized documents representing the ultimate description of what is required from the design. Of course that’s not reality, maybe never was but certainly not today where specs change rapidly, even during design, adapting to dynamically evolving market demands and implementation constraints.

A more realistic view, according to Adnan, is a base spec supplemented by a progression of engineering change notices (ECNs), each reflecting incremental updates to the prior set of ECNs on top of the base spec. Compound that with multiple product variants covering different feature extensions. As a writer I know that even if you can merge a set of changes on top of a base doc, the merge may introduce new problems in readability, even in critical details. Yet in reviews and updates, reviewers naturally consider each edit in a local context rather than evaluating it against the whole document and its spec variants.

Worse yet, in this fast-paced spec update cycle, clarity of explanation often takes a back seat to urgent partner demands, especially when multiple writers contribute to the spec. In the base doc, and especially in ECNs, developers and reviewers lean heavily on the assumption that a “reasonable” reader will not need in-depth explanations for basic details. “Basic” in this context is a very subjective judgement, often assuming more in-depth domain expertise as ECNs progress. In the domain we are considering this is specialized know-how. How likely is it that a trained LLM will have captured enough of that know-how? Adnan likes to point to the RISC-V specs as an example illustrating all these challenges, with a rich superstructure of extensions and ECNs which he finds can in places imply multiple possible interpretations that may not have been intended by the spec writers.

Further on the context point, expert DV engineers read the same specs but they also talk to design engineers, architects, apps engineers, marketing, to gather additional input which might affect expected use cases and other constraints. And they tap their own specialized expertise developed over years of building/verifying similar designs. None of this is likely to be captured in the spec or in the LLM model.

Automated spec analysis can be enormously valuable in consolidating ECNs with the base spec to generate a detailed and actionable test plan, but that must then be reviewed by a DV expert and corrected through additional prompts. The process isn’t hands-free but nevertheless could be a big advance in DV productivity.

The Breker strategy

Interestingly, Breker test synthesis is built on AI methods created long before we had heard of LLMs or even neural nets. Expert systems and rule-based planning software were popular technologies found in early AI platforms from IBM and others and became the foundation of Breker test synthesis. Using these methods Breker software has already been very effective and in production for many years, auto-generating very sophisticated tests from (human-interpreted) high-level system descriptions. Importantly, while the underlying test synthesis methods are AI-based, they are much more deterministic than modern LLMs and not prone to hallucination.

Now for the principled part of the strategy. It is widely agreed that all effective applications of AI in EDA must build on a principled base to meet the high accuracy and reliability demands of electronic design. For example production EDA software today builds on principled simulation, principled place and route, principled multiphysics analysis. The same must surely apply to functional test plan definition and test generation for which quality absolutely cannot be compromised. Given this philosophy, Breker are building spec interpretation on top of their test synthesis platform along two branches.

The first branch is their own development around natural language processing (NLP), again not LLM-based. NLP also pre-dates LLMs, using statistical methods rather than the attention-based LLM machinery. Dave and Adnan did not share details, but I feel I can speculate a little without getting too far off track. NLP-based systems are even now popular in domain specific areas. I would guess that a constrained domain, a specialist vocabulary, and a presumed high level of expertise in the source content can greatly simplify learning versus requirements for general purpose LLMs. These characteristics could be an excellent fit in interpreting design behavior specs.

Dave tells me they already have such a system in development which will be able to read a spec and convert it into a set of Breker graphs. From that they can directly run test plan generation and synthesis, with support for debug, coverage, portability to emulation, all the capabilities that the principled base already provides. They can also build on the library of test support functions already in production, for coherency checking, RISC-V compliance and SoC readiness, Arm SocReady, power management and security root of trust checks.  A rich set of capabilities which don’t need to be re-discovered in detail in a general purpose LLM model.

The second branch is partnership with some of the emerging agentic ventures. In general, what I have seen among such ventures is either strong agentic expertise but early-stage verification expertise, or strong verification expertise but early-stage agentic expertise. A partnership between a strong agentic venture and a strong verification company seems like a very good idea. Dave and Adnan believe that what they already have in their Breker toolset, combined with what they are learning in their internal development, can provide a bridge between these platforms, combining full-blown LLM-based spec interpretation and the advantages of principled test synthesis.

My takeaway

This is a direction worth following. All the excitement and startup focus in verification is naturally on LLMs but today accuracy levels are not yet where they need to be to ensure production quality test plans and test generation without expert DV guidance. We need to see increased accuracy and we need to gain experience in the level of expert effort required to bring auto-generated tests to signoff quality.

Breker is investing in LLM-based spec-driven verification, but like other established verification providers must meanwhile continue to support production quality tools and advance the capabilities of those tools. Their two-pronged approach to spec interpretation aims to have the best of both worlds – an effective solution for the near term through NLP interpretation and an LLM solution for the longer term.

You can learn more about the principled Breker test synthesis products HERE.