
Verific Design Automation at the 2025 Design Automation Conference

by Lauro Rizzatti on 06-08-2025 at 8:00 am


Rick Carlson, Verific Design Automation’s Vice President of Sales, is an EDA trends spotter. I was reminded of his prescience when he recently called to catch up and talk about Verific’s role as provider of front-end platforms powering an emerging EDA market.

Verific, he said, is joining forces with a group of well-funded startups using AI technology to eliminate error-prone repetitive tasks for efficient and more productive chip design. “We’re in a new space where no one is sure of the outcome or the impact that AI is going to have on chip design. We know there are going to be some significant improvements in productivity. It’s going to be an amazing foundation.”

I was intrigued and wanted to learn more. Rick set up a call for us to talk with Ann Wu, CEO of startup Silimate, an engaging and articulate spokesperson for this new market. Silimate, one of the first companies to market, is developing a co-pilot (chat-based GenAI) for chip and IP designers to help them find and fix functional and PPA issues. Impressively, it is the first EDA startup to get funding from Y Combinator, a tech startup accelerator. Silimate is also a Verific customer.

Ann was formerly a hardware designer at Apple, a departure from the traditional EDA developer profile. Like Ann, the founders of many of the new breed of EDA startups were formerly designers at Arm, NVIDIA, SpaceX, Stanford and Synopsys.

While doing a startup was always part of her game plan, Ann’s motivation for becoming an entrepreneur came from frustrations with the chip design flow and the availability of new technology to solve some of these pressing issues.

AI, Ann acknowledged, may provide a solution to some of the problems she encountered, which is the reason behind the excitement and appetite for AI in EDA applications. “Traditional EDA solutions solve isolated problems through heuristic algorithms. There’s a high volume of gray area in between these well-defined boxes of inputs and outputs that had previously been unsolvable. Now with AI, there is finally a way to sift through and glean patterns, insights and actions from these gray areas.”

We turn to the benefits of EDA using AI technology. “Having been in the industry as long as I have,” says Rick, “I know the challenges are daunting, especially when you consider that our customers want to avoid as much risk as possible. They want to improve the speed to get chips out, but they are all about de-risking everything.”

I ask Ann if adding AI is only a productivity gain. “Productivity as a keyword is not compelling,” she says. It’s an indirect measure of the true ROI, she notes, adding that what engineering directors and managers ultimately look for is reducing the time to tape out while achieving the target feature set.

“What we are doing has been time-tested,” answered Rick when asked why these startups are going to Verific. “We recently had a random phone call from a researcher at IBM. He already knew that IBM was using Verific in chip design. He said, ‘I know that we need to deal with language, and Verific is the gold standard.’

“We’re lucky we’ve just been around long enough. Nobody else in their right mind would want to do what we’ve done because it’s painstaking. I wouldn’t say boring, but it’s not as much fun as what Ann is doing, that’s for sure.”

As we move on to talk about funding and opportunities, Rick jumps in. “When people look at an industry, they want to know the leaders and immediately jump to the discussion of revenue and maturity. EDA is a mature industry and a three- or four-horse race. I think there are more horses at the starting line today that have the potential to make a dramatic impact.

“We’ve got an incredible amount of funds we can throw at this, assuming that we can achieve what we want to achieve. This is not something that just came along. This is a seismic shift in the commitment to use all the talent, tools, technology and money to make this happen.

“To me, it’s not a three-horse race—maybe it’s a 10-horse race. We really won’t know until we look back in another six months or a year from now at what that translates to. I am betting on it because the people doing this for the most part are not professional CAD developers. They looked at the problem and think they can make a dent.”

DAC Registration is Open

Notes:

Verific will exhibit at the 62nd Design Automation Conference (DAC) in Booth #1316 at the Moscone Center in San Francisco from June 23–25.

Silimate’s Akash Levy, Founder and CTO, will participate in a panel titled “AI-Enabled EDA for Chip Design” at 10:30 am PT on Tuesday, June 24, during DAC.

Also Read:

Breker Verification Systems at the 2025 Design Automation Conference

The SemiWiki 62nd DAC Preview


Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs

by Daniel Nenni on 06-06-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Chouki Aktouf, CEO and Founder of Defacto Technologies. Dan explores the challenges of building complex SoCs with Chouki, who describes the difficulty of managing complexity at the front end of the process while staying within PPA requirements and still delivering a quality design as quickly and cost-effectively as possible.

Chouki describes how Defacto’s SoC Compiler addresses the challenges discussed along with other important items such as design reuse. He provides details about how Defacto is helping customers of all sizes to optimize the front end of the design process quickly and efficiently so the resulting chip meets all requirements.

Contact Defacto

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

by Daniel Nenni on 06-06-2025 at 6:00 am

Dan is joined by Graeme Hickey, vice president of engineering at PQShield. Graeme has over 25 years of experience in the semiconductor industry creating cryptographic IP and security subsystems for secure products. Formerly of NXP Semiconductors, he was senior manager of the company’s Secure Hardware Subsystems group, responsible for developing security and cryptographic solutions for an expansive range of business lines.

Dan explores the changes ahead to address post-quantum security with Graeme, who explains what they mean for chip designers over the next five to ten years. Graeme stresses that time is of the essence: chip designers should start implementing current standards now to be ready for the requirements arriving in 2030.

Graeme describes the ways PQShield is helping chip designers prepare for the post-quantum era now. One example he cites is PQPlatform-TrustSys, a complete PQC-focused security system that provides architects with the tools needed for the quantum age and beyond. He also discusses the impact of the PQShield NIST-ready test chip and what chip designers should expect across the supply chain as we enter the post-quantum era.

Contact PQShield

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ChipAgent AI at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-05-2025 at 10:00 am


The semiconductor world is gathering at DAC 62, and ChipAgents AI is coming ready to show why agentic AI is the missing piece in modern RTL design and verification. Whether you’re drowning in terabytes of waveform data, grinding toward 100% functional coverage, or hunting for ways to accelerate time-to-market, our sessions and live demos will give you a first-hand look at how autonomous AI agents can transform your flow.

ChipAgents AI @ DAC 62: Where Agentic AI Meets Next-Gen Verification

June 23–25, 2025 • Moscone West, San Francisco

ChipAgents Sessions

  • Mon 6/23, 10:30 a.m., Exhibitor Forum (Level 1): “Taming the Waveform Tsunami: Agentic AI for Smarter Debugging.” See Waveform Agents trace failure propagation across modules and time in seconds, with no manual spelunking required. Real case studies show days-long debug cycles cut to minutes.

  • Tue 6/24, 1:45 p.m., Exhibitor Forum (Level 1): “CoverAgent: How Agentic AI Is Redefining Functional Coverage Closure.” Watch CoverAgent analyze coverage reports, infer unreachable bins, and auto-generate targeted stimuli, driving up to 80% faster closure in complex SoCs.

  • Wed 6/25, 11:15 a.m., DAC Pavilion (Level 2): “Beyond Automation: How Agentic AI Is Reinventing Chip Design & Verification.” CEO Prof. William Wang reveals how multi-agent workflows tackle constraint solving, automated debug, proactive design optimization, and more.

Tip: All three talks are designed for live Q&A—bring your toughest verification pain points.

Live Demo & 1-on-1s

Exhibition Booth #1308, Level 1, 10 a.m.–6 p.m. daily

  • Waveform Agents: Natural-language root-cause analysis on multi-TB VCD/FST dumps
  • CoverAgent: Autonomous coverage gap hunting & stimulus generation
  • ChipAgents CLI & VS Code Extension: Plug-in AI agents for Verilog, SystemVerilog, UVM

Come with your own specs, traces or coverage reports and we’ll run them live.

Why Agentic AI Now?

  • Scale: LLM-powered agents reason across RTL, waveforms, testbenches, logs, and documentation simultaneously.
  • Speed: Hypothesis-driven search slashes debug and closure cycles by orders of magnitude.
  • Explainability: Results are surfaced as step-by-step causal chains, so engineers stay in control.
  • Complementary: Works alongside existing simulators, formal tools, and waveform viewers—no rip-and-replace.

Meet the Team

  • William Wang – Founder & CEO, UCSB AI faculty
  • Zackary Glazewski – Forward-Deployed Engineering Lead
  • Mehir Arora – AI Research Engineer, Functional Coverage Specialist

They’ll be joined by the engineering crew behind our SoC-scale deployments and early-access customers.

Book a Private Briefing or Join Our Private Party

Slots fill fast during DAC week. To reserve a 30-minute roadmap briefing—or to request an invitation to our private rooftop dinner for semiconductor executives and leading engineers—visit chipagents.ai or stop by Booth #1308.

See You in San Francisco! DAC Registration is Open

If your verification team is buried under data, waveforms, coverage debt, or deadline pressure, ChipAgents AI has something you’ll want to witness live. Mark your calendar for June 23–25, swing by Booth #1308, and discover how agentic AI is turning RTL understanding from an art into a science.

About us

We are reinventing semiconductor design and verification through advanced AI agent techniques. ChipAgents AI is pioneering an AI-native approach to Electronic Design Automation (EDA), transforming how chips are designed and verified. Our flagship product, ChipAgents, aims to boost RTL design and verification productivity by 10x, driving innovation across industries with smarter, more efficient chip design.

Also Read:

AlphaDesign AI Experts Wade into Design and Verification

CEO Interview with Dr. William Wang of Alpha Design AI


proteanTecs at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-05-2025 at 8:00 am


Discover how proteanTecs is transforming health and performance monitoring across the semiconductor lifecycle to meet the growing demands of AI and Next-Gen SoCs.

Stop by DAC booth #1616 to experience our latest technologies in action, including interactive live demos, and explore our full suite of solutions, designed to boost reliability, optimize power, and enhance product quality for next-gen AI and data-driven applications.

Don’t miss our daily in-booth theater sessions, featuring expert talks from industry leaders in ASIC design, IP, EDA, and cloud infrastructure, including Arm, Andes, Samsung, Advantest, Alchip, Siemens, PDF Solutions, Teradyne, Cadence, GUC, and more! Plus, hear insights from proteanTecs’ own experts.

Interested in a deeper dive? We’re now booking private meeting room sessions tailored to your company’s needs. Learn how our cutting-edge, machine learning-powered in-system monitoring delivers unprecedented visibility into device behavior — from design to field.

During the show, we will be presenting multiple solutions, including:
  1. Power and Performance
  2. Reliability, Availability, Serviceability
  3. Functional Safety & Diagnostics
  4. Chip Production
  5. System Production
  6. Advanced Packaging

Meet us at Booth #1616

See the full booth agenda, HERE.

Book a meeting with proteanTecs at DAC 2025

proteanTecs is the leading provider of deep data analytics for advanced electronics monitoring. Trusted by global leaders in the datacenter, automotive, communications and mobile markets, the company provides system health and performance monitoring, from production to the field.  By applying machine learning to novel data created by on-chip monitors, the company’s deep data analytics solutions deliver unparalleled visibility and actionable insights—leading to new levels of quality and reliability. Founded in 2017 and backed by world-leading investors, the company is headquartered in Israel and has offices in the United States, India and Taiwan.

DAC registration is open.

Also Read:

Cut Defects, Not Yield: Outlier Detection with ML Precision

2025 Outlook with Uzi Baruch of proteanTecs

Datacenter Chipmaker Achieves Power Reduction With proteanTecs AVS Pro

Also Read:

Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing

Cut Defects, Not Yield: Outlier Detection with ML Precision

2025 Outlook with Uzi Baruch of proteanTecs


Arm Reveals Zena Automotive Compute Subsystem

by Bernard Murphy on 06-05-2025 at 6:00 am


Last year Arm announced their support for standards-based virtual prototyping in automotive, along with a portfolio of new AE (automotive enhanced) cores. They also suggested that in 2025 they would follow the direction of Arm’s other lines of business by offering integrated compute subsystems (CSS). Now they have delivered: the Zena CSS for automotive applications rounds out their automotive strategy.

The Motivation

What is the point of a CSS and why is it important for automotive? In part the motivation is the same as for CSS applications in infrastructure. The customers for these subsystems see them as necessary but not core to their brand. Complete and pre-validated subsystem IPs like Zena are an obvious win, reducing effort and time to deployment without compromising opportunities for differentiation. Automotive OEMs, Tier1s, even leading automotive semi suppliers in some instances, aren’t going to differentiate in compute subsystems. Their brand builds around AI features, sensing, IVI, control, and communication (V2X and in the car). Zena provides a jump start in designing their systems.

Arm is a good company to watch in this area because electronic/AI content is now a huge part of how automotive brands are defined, and Arm completely dominates in processor usage among automakers and automotive chip suppliers. As a result, Arm sees further ahead than most when it comes to trends in automotive electronics. For example, we’re already familiar with the concept of a software defined vehicle (SDV), supporting over-the-air (OTA) updates for maintenance and feature enhancements, orchestrating sensing and control between multiple functions across the car, and emerging potential in V2X communication. Dipti Vachani (Senior VP and GM for Automotive at Arm) says that looking forward she sees the next step being a trend toward AI-defined vehicles. This concept is worth unpacking further.

A cynic might assume “AI-defined vehicles” is just buzzword inflation, but there’s more to it than that. First, AI has become central to innovation in the modern car: how automakers differentiate and defend their brands, even how they monetize what they provide. Dipti suggests a range of emerging possibilities: in ADAS, adjusting to driver behavior and environment in real time to better support safety; in IVI, providing more personalized voice-enabled control, an important step beyond the limited voice options available today; and in vehicle control, optimizing energy consumption and vehicle dynamics based on load and road conditions. I have written separately about advances like birds-eye-view with depth for finer control in autonomy when cornering, for driver and occupant monitoring systems, and for more intelligent battery management.

OK, so lots of AI capabilities in the car, but what does this have to do with Arm, especially if OEMs and Tier1s are differentiating in AI, etc? We already know that to manage the cost side of all this innovation OEMs have moved to zonal architectures, a small number of hardware components around the car rather than components everywhere. Differentiating AI models can be updated OTA as needed, important because AI innovation is fast and furious – what is competitive this year may look dated next year. Models must operate reliably and be updated safely and securely, with regular in-flight checking and corrective action for hardware misbehavior and robust protection against hacking in-flight or during updates. All critical requirements in a car, but this management is beyond the bounds of AI.

Compute subsystems and SDV in the age of AI-defined vehicles

From what I see, safety and security are out of scope today for AI. Research in AI safety is nascent at best. AI for car-quality security is a bit more advanced, primarily for attack detection, and not yet at production level. More obviously, orchestration of functions across the car, the communication through which that orchestration must operate, actuation for mechanical functions, display functions and many other non-AI functions are all beyond the scope of AI. Such functions, still the great majority of administrative compute in a car, must continue to be handled through software running on a backbone of zonal processors, each managed by one or more standard CPU subsystems (here Zena) front-ending the AI engines. In this context, given the cloud-based virtual software development Arm highlighted last year, with Zena modeled natively in that flow, Arm’s role becomes more obvious.

Zena’s role in zonal processors

Further, there are likely to be many more AI models to support in any given car than there are zonal processors. Running multiple AI models on an NPU is already possible since multi-core NPUs are now common. But which models should run when must be governed by orchestration under an OS running on a CPU subsystem. This orchestration also handles feeding data into the NPU, taking results back out to the larger system, swapping models in and out, and managing updates from the cloud. Together of course with comprehensive safety and security control for the complete automotive electronic system.

Safety in advanced automotive electronics has already evolved to ASIL-B or ASIL-D levels, implemented through ASIL-D-certified safety islands which regularly monitor other functions in the processor through function isolation and self-test, rebooting if necessary before bringing a function back online. Or perhaps shutting down a broken subsystem and triggering a driver/dealer warning to be addressed in a service call. Security is even more rigorous: secure boot, state-of-the-art encryption, secure enclaves, authentication for downloads, and so on.

In short, complete automotive systems depend on CPU subsystem front-ends to the NPU back-end which run the AI models. A standard to ensure interoperability is essential to making this complex environment work well, as is a trusted virtual platform/digital twin to support software development in advance of a car being ready for testing. This is why Arm kicked off the SOAFEE standard four years ago. Dipti says that Zena is the physical manifestation of SOAFEE and claims that between software virtual prototyping and time and effort saved by having a fully characterized compute subsystem in Zena, automotive systems builders can save up to a whole model year in time and 20% in engineering effort over building their own compute subsystem.

For developers, virtual prototyping platforms are already available from major EDA suppliers. Zena is currently in deployment with early adopters and is expected to become more generally available later in 2025.

Takeaway

I see Zena and the larger strategy continuing a theme that has been quite successful for Arm in their Neoverse/infrastructure directions: pre-verified/validated compute subsystems as IP, backed by cloud-native development based on open standards. The ecosystem will continue to grow around these standards; competitors are free to enter but will be expected to comply with the same standards, while Arm must continue to execute to stay ahead. Nothing wrong with that for automotive OEMs and Tier1s, though clearly Arm has a strong head start.

You can read more HERE.

Also Read:

S2C: Empowering Smarter Futures with Arm-Based Solutions

SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments

The RISC-V and Open-Source Functional Verification Challenge


High-NA Hard Sell: EUV Multi-patterning Practices Revealed, Depth of Focus Not Mentioned

by Fred Chen on 06-04-2025 at 10:00 am


In High-NA EUV lithography systems, the numerical aperture (NA) is expanded from 0.33 to 0.55. This change has been marketed as allowing the multi-patterning used on 0.33 NA EUV systems to be avoided. Only very recently have specific examples been provided [1]. In fact, it can be shown that double patterning has been implemented for EUV in cases where DUV double patterning could have sufficed.

What a Higher NA offers

The increase in NA allows more diffraction orders or a wider range of spatial frequencies to be used for imaging. Having more diffraction orders for the same image allows brighter, narrower peaks, as shown in the example of Figure 1.

The sharper peak means the normalized image log slope (NILS) is better, so the stochastic effect of shot noise in the photon absorption won’t be as severe. Consequently, a directly printed image would be more likely to be degraded for 0.33 NA compared to 0.55 NA.
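The aerial-image argument can be made concrete with the standard definition of NILS (a textbook lithography relation, not a formula from this article): for a feature of width $w$ and aerial image intensity $I(x)$,

```latex
\mathrm{NILS} \;=\; w \left. \frac{d\,\ln I(x)}{dx} \right|_{x = \text{feature edge}}
```

A brighter, narrower peak raises the log-slope at the feature edge, so the same photon shot noise translates into less edge placement error.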

Current EUV Uses Multipatterning

To keep the shot noise low enough to allow a single 0.33 NA exposure, the dose would have to be increased to a point where throughput or resist loss becomes a detracting issue, e.g., > 100 mJ/cm². On the other hand, if the 0.33 NA pattern were split into two separately exposed portions (Figure 2), each one would have a denser range of spatial frequencies due to wider separations between features, which improves the NILS.
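The dose trade-off follows from Poisson counting statistics (standard reasoning, not a result derived in the article): the number of photons $N$ absorbed in a small area fluctuates as

```latex
\frac{\sigma_N}{\langle N \rangle} \;=\; \frac{1}{\sqrt{\langle N \rangle}}, \qquad \langle N \rangle \propto \mathrm{dose} \times \mathrm{area}
```

so halving the relative noise requires quadrupling the dose, which is why single-exposure doses get pushed past 100 mJ/cm² before throughput and resist loss become limiting.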

Figure 2. Random 36 nm via pattern (taken from [1]) split into two portions for 0.33 NA EUV double patterning; each color represents one of two masks. DUV double patterning can follow the same split for this case.

Interestingly, in this case, the minimum 100 nm distance means DUV can also be used with double patterning for the same pattern. This is consistent with an earlier finding that DUV and EUV double patterning may be overlapped due to the impact of stochastic effects [2].

Furthermore, if the pattern of Figure 2 were scaled down by the NA ratio (0.33/0.55 = 0.6), so that the via size becomes 36 nm × 0.6 = 21.6 nm, the same situation applies to the High-NA case as well, since the spatial frequency range (normalized to 0.55 NA) is reduced to what it previously was for 0.33 NA. This means we should expect double patterning for High-NA EUV, triple patterning for Low-NA EUV, and quadruple patterning for DUV (Figure 3).
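The scaling arithmetic above can be checked in a few lines (a sketch only; the 36 nm via size and 100 nm minimum distance are the values quoted earlier in the article):

```python
# NA-ratio scaling argument: shrinking the pattern by low_na / high_na keeps
# the spatial-frequency range, normalized to the higher NA, unchanged.
low_na, high_na = 0.33, 0.55

via_nm = 36.0        # via size in the Figure 2 pattern (0.33 NA double patterning)
min_dist_nm = 100.0  # minimum via-to-via distance quoted for that pattern

scale = low_na / high_na             # 0.6
scaled_via = via_nm * scale          # 21.6 nm, as stated in the text
scaled_dist = min_dist_nm * scale    # 60 nm after scaling

print(f"scale factor:        {scale:.2f}")
print(f"scaled via size:     {scaled_via:.1f} nm")
print(f"scaled min distance: {scaled_dist:.1f} nm")
```

Consistent with the Figure 3 scenarios, the shrunken separations are what push DUV toward quadruple patterning while High-NA EUV itself needs double patterning.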

Figure 3. Different multipatterning scenarios for the 0.6x scaled pattern of Figure 2.

On the other hand, it can be noted that via patterns can conform to a diagonal grid [3], which would enable DUV/low-NA double patterning or High-NA EUV single patterning for location selection if the vias are fully self-aligned (Figure 4).

Figure 4. Applying via diagonal grid location selection to the pattern of Figure 3 simplifies the multipatterning (double patterning for DUV/Low-NA EUV, single patterning for High-NA EUV).

High-NA Depth of Focus Challenged by Resist Thickness

A fundamental consequence of having a wider range of spatial frequencies in a larger numerical aperture is that there is a wider range of optical paths used in forming the image. Each path corresponds to an angle with the optical axis. At the wafer, the wider range leads to the higher spatial frequencies getting more out of phase with the lower ones, causing the image to lose contrast from defocus. This is visualized in Figure 5.
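The contrast loss can be quantified with the usual defocus optical-path-difference relation (a textbook approximation, not taken from the article): a plane-wave component traveling at angle $\theta$ to the optical axis accumulates, for defocus $z$,

```latex
\Delta\mathrm{OPD}(z, \theta) \;=\; z\,(1 - \cos\theta), \qquad \sin\theta \le \mathrm{NA}
```

relative to the on-axis component, i.e. a phase error of $(2\pi/\lambda)\,\Delta\mathrm{OPD}$. Raising the NA from 0.33 to 0.55 admits larger $\theta$, and at $\lambda = 13.5$ nm even a few nanometers of OPD spread is a large phase error, so contrast falls off faster with defocus.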

Figure 5. 30 nm pitch with line breaks presents a wide range of diffraction orders with High-NA, leading to a relatively limited depth of focus.

As Figure 5 shows, this is particularly bad for line breaks, where the tip-to-tip distance needs to be controlled. Likewise, it would apply to the corresponding line cut pattern. The depth of focus reduction applies generally to patterns with wide spacings between features such as the random via pattern of Figure 2. Figure 6 shows that even 15 nm defocus is enough to significantly affect a 40 nm pitch line pattern, due to four diffraction orders being included by a 0.55 numerical aperture as opposed to two diffraction orders for a 0.33 numerical aperture.

Figure 6. A 40 nm pitch line pattern is significantly affected even with 15 nm defocus, due to more diffraction orders being included with 0.55 NA.

To preserve image uniformity as much as possible through the resist, the resist thickness needs to be at most as thick as the depth of focus. A depth of focus < 30 nm for High-NA means resist thickness has to be < 30 nm, and the resist may further suffer 50% thickness loss [4]. Such a thin retained resist layer would also absorb very little EUV, leading to even greater absorbed-photon shot noise and greater sensitivity to electrons from the underlayer [5] as well as the EUV plasma [6].

Thus, though obviously not mentioned in the marketing, it is reasonable to expect that High-NA EUV exposure cannot provide enough depth of focus for a reasonable resist thickness, and any future Hyper-NA (at least 0.75 [7]) would be even worse.


References

[1] C. Zahlten et al., Proc. SPIE 13424, 134240Z (2025).

[2] F. Chen, Can LELE Multipatterning Help Against EUV Stochastics?.

[3] F. Chen, Routing and Patterning Simplification with a Diagonal Via Grid.

[4] F. Chen, Resist Loss Prohibits Elevated Doses; J. Severi et al., “Chemically amplified resist CDSEM metrology exploration for high NA EUV lithography,” J. Micro/Nanopatterning, Materials, and Metrology 21, 021207 (2022).

[5] H. Im et al., Proc. SPIE 13428, 1342815 (2025).

[6] Y-H. Huang, C-J. Lin, and Y-C. King, Discover Nano 18:22 (2023).

[7] G. Bottiglieri et al., Proc. SPIE 13424, 1342404 (2025).

This article first appeared in Exposing EUV: High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned

Also Read:

Impact of Varying Electron Blur and Yield on Stochastic Fluctuations in EUV Resist

Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery

A Perfect Storm for EUV Lithography


Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025

by Bernard Murphy on 06-04-2025 at 6:00 am


Anirudh (Cadence President and CEO) had two fireside chats during CadenceLIVE 2025, the first with Jensen Huang (Founder and CEO of NVIDIA) to kick off the show, and later in the day with Lip-Bu Tan (CEO of Intel). Of course, Jensen and Lip-Bu also turn up at other big vendor shows, but I was reminded that there is something special about Cadence and Anirudh’s relationship with both CEOs. Cadence and Jensen/NVIDIA are mutual customers and partners in providing services. Cadence’s latest hardware accelerator (the Millennium M2000 Supercomputer) is built on NVIDIA Blackwell, and they collaborate on datacenter design and drug design technologies. On the other hand, as Cadence’s former CEO, Lip-Bu rescued it from a slump, much as he now aims to do with Intel, before handing over the reins to his protégé (Anirudh). Lip-Bu hugged Anirudh when he came on stage, which says a lot about the close relationship between these two leaders. Cadence is tight with two of the world’s top semis, notwithstanding Intel’s current trials. Good for Cadence and for Anirudh!

Fireside chat with Jensen

Too much material here to summarize in a short section, instead I’ll select a few highlights. First, though officially announced in Anirudh’s following keynote, Cadence’s new AI accelerator, the Millennium M2000 Supercomputer, was revealed in this talk. This accelerator is based on the NVIDIA Blackwell platform and Jensen sprang a surprise by ordering 10 NVL72 Millennium M2000s on the spot. He’s not just happy that Cadence built their machine on his product, he also wants to build the Cadence product into the NVIDIA datacenter in support of their joint digital twin initiatives.

Now imagine how far digital twins can permeate into every aspect of industrial design. The concept is already big in logic chip design through platforms like Cadence’s Palladium and Protium systems. In the EDA world we don’t call these platforms digital twins but Jensen does – NVIDIA has been using Palladium for years to design GPUs. Now platforms like Millennium can extend design support out to non-electronic domains through AI on top of principled simulation: datacenters, wind tunnels and turbines, factory design and automation, drug discovery and design, the possibilities are endless.

Which leads to a question: if all design is going to move in this direction, what infrastructure will support all this AI and compute? Datacenters certainly, but we’re talking about datacenters at hyperscale. AI factories: Jensen says they are now building Gigawatt factories. The capital required for such a factory is $60B, on par with Boeing’s revenues for a year. Few enterprises can make investments at that scale; most will be using digital twins as a service to design their products, drugs, and manufacturing systems. Which makes talk of sovereign AI less hyperbolic than it might have appeared. There are moves in this direction already in the US, in Japan and in the UK.

You could argue that this is just fear of missing out (FOMO) but it’s now global FOMO. The success or failure of enterprises, even country GDPs can swing on being ready or being late to the party. AI has indeed made these interesting times to live in.

Fireside Chat with Lip-Bu

I’m far from the first and far from the most qualified to offer an opinion on Lip-Bu taking the CEO position at Intel, but this is my article, so here’s my opinion. That Intel is struggling is not news, and I’m sure a comfortable choice for many would have been a long-time semi or foundry exec, but I’m also guessing that the board decided it was time to be bold. Lip-Bu has served on the Intel board, he turned around Cadence from a not dissimilar slump, and he has an enviable reputation running his own venture fund (40+ years, over 500 startups, and 145 IPOs). He is also well connected to a lot of influential people in tech, whether measured by board memberships or connections to capital. Not a bad start.

I confess I have a bias to seeing Intel regain its design mojo (my domain). I’m not qualified to speak to the foundry side – I’ll leave commentary there to others. What I heard in the fireside chat is consistent with what I have heard of Lip-Bu’s management style at Cadence. A big focus on product and delighting the customer by going the extra mile. He is already trimming layers of management so that he hears directly from R&D and sales. He also intends to instill a culture of humility both towards customers and towards each other (quite a change for Intel?).

He intends to continue his VC work, not as a sideline but as a very active channel to spot big waves as they approach, and to stay in touch with the startup culture he wants to bring back to the company: an ability to move fast in promising directions with a minimum of approval layers and oversight.

We all see a trend to purpose-built silicon, especially around AI. Lip-Bu believes that Intel must adapt to this need, not only to delight but also to build trust. They must embrace opportunities for custom development with big customers. Even with my limited knowledge of foundry opportunities, I know that Intel is well established in advanced packaging, an area where it has potential to differentiate.

For EDA/SDA suppliers he suggests there is plenty of opportunity to help. In tooling he wants to see Intel supporting more compatibility with customer preferences, meaning support across the board wherever needed. And I’m sure he will be looking for out-of-the-box thinking from partners, opportunities not just to polish existing solutions but to truly explore multi-way opportunities – customer, Intel, foundry, EDA/IP supplier.

Opportunities to be bold everywhere, not just for Intel but also for their partners.

Also Read:

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000

Optimizing an IR for Hardware Design. Innovation in Verification

LLMs Raise Game in Assertion Gen. Innovation in Verification


TCAD for 3D Silicon Simulation

by Daniel Payne on 06-03-2025 at 10:00 am


Semiconductor fabs aim to have high yields and provide processes that attract design firms and win new design starts, but how does a fab deliver its process nodes in a timely manner without running lots of expensive silicon through the line? This is where simulation and TCAD tools come into play, and to learn more about this field I attended a Silvaco webinar. Mao Li from Silvaco presented the webinar and had over 50 slides to cover in under 45 minutes, so it was fast-paced.

Silvaco offers TCAD tools for accurate 3D simulation and device optimization, applicable to processes spanning CMOS, memory, RF and power devices. Logic CMOS technology in the past 20 years has gone from 90nm to the Angstrom era, using planar, FinFET, GAA and 3D structures, like CFET. Each generation of fab technology has presented unique technical challenges that required new modeling capabilities for simulation in TCAD tools.

Mr. Li talked about stress and HKMG (High-K Metal Gate) process challenges that required both 2D and 3D simulation approaches. FinFET technology required new transistor-level process simulation for structure, doping and stress effects; this is where Victory Process is used.

Following process simulation comes device simulation, where the transistor characteristics are predicted for NFET and PFET devices using Victory Device.

Going beyond individual transistors, they can simulate standard cell layouts in their 3D structure, followed by parasitic extraction with Victory RCx to enable the most accurate SPICE circuit simulations. Silvaco showed their flow from TCAD to SPICE, enabling Design Technology Co-Optimization (DTCO).

Memory technology was presented, starting with the history of DRAM evolution, and the pursuit of ever-smaller cell sizes. 3D modeling of saddle fin shapes is supported for DRAM cell arrays.

3D NAND process integration was explained using two engines: Victory Cell mode, which uses an explicit mesh, and Victory Process mode, which uses a level-set method. Stress simulation results for 3D NAND were presented, along with the cell device electrical characteristics.

High-frequency and high-performance applications like wireless communications, radar, satellite and space communications use RF-SOI process technology, and this is modeled and simulated with the Victory tools. High-voltage power devices in LDMOS technology were accurately modeled using 2D or 3D techniques.

The big picture from Silvaco is that their tools are used by both simulation and fab engineers to enable Fab Technology Co-Optimization (FTCO), from automating Design of Experiments using modeling of process, device and circuits, all the way to building a Digital Twin for fab engineers.

For process simulation, each step is modeled: etch/deposit, implantation, diffusion and activation, and stress. Device simulation includes both basic and advanced models. Parasitic extraction builds a 3D structure, then applies a field solver for the most accurate RC values. The Victory Process tool is continually improved and now includes two new diffusion models for better accuracy, especially for 3D FinFET devices. These models are extensively calibrated across doping species, implantation dose ranges, temperatures and annealing times.

Development continues for advanced structure generation, along with speed ups in runtime performance. Support of orientation of the silicon lattice has been added, plus new quantization models, and advanced mobility effects.

Instead of developing a process through trial-and-error fab runs, this AI-driven FTCO approach saves engineering time, effort and cost. A case study on FinFET device performance used machine learning from the Victory DoE and Victory Analytics tools, allowing users to find the optimal input values that satisfy multiple output targets. Monte Carlo simulation was used for both margin analysis and Cp/Cpk characterization.
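Cp and Cpk are standard process-capability indices: Cp measures how wide the specification window is relative to the process spread, while Cpk also penalizes a process that is off-center. The sketch below shows how these would be computed from Monte Carlo samples; the threshold-voltage scenario and all numbers are illustrative, not from the webinar.

```python
import random
import statistics

def cp_cpk(samples, lsl, usl):
    """Process-capability indices from Monte Carlo samples.

    Cp  = (USL - LSL) / (6 * sigma)            -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma) -- also penalizes off-center mean
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical Monte Carlo run: threshold voltage with mean 350 mV, sigma 10 mV
random.seed(0)
vth = [random.gauss(0.35, 0.01) for _ in range(10_000)]

# Spec limits of 300-400 mV give Cp near 100 mV / 60 mV, i.e. about 1.67
cp, cpk = cp_cpk(vth, lsl=0.30, usl=0.40)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

A Cpk above roughly 1.33 is a common fab acceptance threshold; the margin-analysis use mentioned in the webinar amounts to checking how much the spec window can shrink before Cpk falls below that bar.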

Summary

Silvaco has a long history in TCAD tools, and over time their products have been updated to support fab processes across CMOS, memory, RF and power devices. Using TCAD for 3D silicon simulation is a proven approach to save time to market. FTCO is really happening.

View the webinar recording online for more details.

Related Blogs


Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

by Admin on 06-03-2025 at 6:00 am

By Marcelo Cueto, R&D Engineer, Weebit Nano Ltd

Resistive RAM (ReRAM or RRAM) is the strongest candidate for next-generation non-volatile memory (NVM), combining fast switching speeds with low power consumption. New techniques for managing a memory phenomenon called ‘relaxation’ are making ReRAM more predictable — and easier to specify for real-world applications.

What is the relaxation problem in memory? Short-term conductance drift – known as ‘relaxation’ – presents a challenge for memory stability, especially in neuromorphic computing and multi-bit storage.

At the 2025 International Memory Workshop (IMW), a team from CEA-Leti, CEA-List and Weebit presented a poster session, “Relaxation-Aware Programming in RRAM: Evaluating and Optimizing Write Termination.” The team reported that Write Termination (WT), a widely used energy-saving technique, can make these relaxation effects worse.

So what can be done? Our team proposed a solution: a modest programming voltage overdrive that curbs drift without sacrificing the efficiency advantages of the WT technique.

Energy Savings Versus Stability

Write Termination improves programming efficiency by halting the SET (write) operation once the target current is reached, instead of using a fixed-duration pulse. This reduces both energy use and access times, supporting better endurance across ReRAM arrays.

It’s desirable, but problematic in action.

Tests on a 128kb ReRAM macro showed that unmodified WT increases conductance drift by about 50% compared to constant-duration programming.

In these tests, temperature amplified the effect: at 125°C, the memory window narrowed by 76% under WT, compared to a fixed SET pulse. Even at room temperature, degradation reached 31%.

Such drift risks destabilizing systems that depend on tight resistance margins, including neuromorphic processors and multi-level cell (MLC) storage schemes, where minor shifts can translate into computation errors or data loss.

The experiments used a testchip fabricated on 130nm CMOS, integrating the ReRAM array with a RISC-V subsystem for fine-grained programming control and data capture.

Conductance relaxation was tracked from microseconds to over 10,000 seconds post-programming. A high-speed embedded SRAM buffered short-term readouts, allowing detailed monitoring from 1µs to 1 second, while longer-term behavior was captured with staggered reads.

This statistically robust setup enabled precise analysis of both early and late-stage relaxation dynamics.

To measure stability, the researchers used a metric called the three-sigma memory window (MW₃σ). It looks at how tightly the memory cells hold their high and low resistance states, while ignoring extreme outliers.

When this window gets narrower, the difference between a “0” and a “1” becomes harder to detect — making it easier for errors to creep in during reads.

By focusing on MW₃σ, the team wasn’t just looking at averages — they were measuring how reliably the memory performs under real-world conditions, where even small variations can cause problems.
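One plausible way to compute such a metric (the exact definition in the poster may differ) is the gap between the −3σ tail of the high-resistance state distribution and the +3σ tail of the low-resistance state distribution: if that gap stays positive, even cells three standard deviations from the mean remain separable. The sketch below uses made-up resistance values purely for illustration.

```python
import statistics

def mw_3sigma(lrs_reads, hrs_reads):
    """Three-sigma memory window (one plausible definition): the gap
    between the -3-sigma tail of the high-resistance state and the
    +3-sigma tail of the low-resistance state. A positive value means
    '0' and '1' stay distinguishable even for outlier-ish cells, while
    a shrinking value signals read errors creeping in."""
    mu_l, sd_l = statistics.mean(lrs_reads), statistics.stdev(lrs_reads)
    mu_h, sd_h = statistics.mean(hrs_reads), statistics.stdev(hrs_reads)
    return (mu_h - 3 * sd_h) - (mu_l + 3 * sd_l)

# Illustrative resistance reads in kilo-ohms (made-up numbers)
lrs = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]        # low-resistance ("1") cells
hrs = [100.0, 95.0, 105.0, 98.0, 102.0, 99.0]    # high-resistance ("0") cells
print(f"MW_3sigma = {mw_3sigma(lrs, hrs):.1f} kOhm")
```

Relaxation drift moves the low-resistance population back toward higher resistance, which widens its spread and eats into this window from below: exactly the 31% to 76% narrowing reported in the tests.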

Addressing Relaxation with Voltage Overdrive

Voltage overdrive is the practice of applying a slightly higher voltage than the minimum required to trigger a specific operation in a memory cell — in this case, the SET operation in ReRAM.

Write Termination cuts the SET pulse short as soon as the target current is reached. That saves energy, but it also means some memory cells are just barely SET. They’re fragile — sitting near the edge of their intended resistance range. That’s where relaxation drift kicks in: over time, conductance slips back toward its original state.

So, the team asked a logical question:

“What if we give the cell just a bit more voltage — enough to push it more firmly into its new state, but not so much that we burn energy or damage endurance?”

Instead of discarding WT, the team increased the SET voltage by 0.2 Arbitrary Units (AU) above the minimum requirement.

Key results:

  • Relaxation dropped to levels comparable to constant-duration programming
  • Memory windows remained stable at both room and elevated temperatures
  • WT’s energy efficiency was mostly preserved, with only a ~20% increase in energy compared to unmodified WT

Modeling predicted that without overdrive, 50% of the array would show significant drift within a day. With overdrive, the same drift level would take more than 10 years, a timescale sufficient for most embedded and computing applications.
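The control flow described above — pulse, sense, terminate at the target current, with the programming voltage lifted above the bare minimum — can be sketched as follows. This is a toy model, not Weebit's actual programming firmware; the `ToyCell` class, all parameter names, and all units are invented for illustration.

```python
def set_with_write_termination(cell, v_min, i_target, overdrive=0.2,
                               t_step=1e-8, t_max=1e-6):
    """Sketch of a SET operation with write termination plus voltage
    overdrive. Instead of a fixed-duration pulse, programming stops as
    soon as the cell's read current reaches i_target; the overdrive
    (in the same arbitrary units as v_min) pushes cells more firmly
    into the low-resistance state so they are less prone to relax."""
    v_set = v_min + overdrive
    t = 0.0
    while t < t_max:
        cell.apply(v_set, t_step)      # one short programming step
        t += t_step
        if cell.current >= i_target:   # write termination condition
            return True, t             # programmed; t is the time actually spent
    return False, t                    # target never reached within the budget

class ToyCell:
    """Toy cell model: current grows with accumulated voltage * time."""
    def __init__(self):
        self.current = 0.0
    def apply(self, voltage, duration):
        self.current += voltage * duration * 1e2  # arbitrary gain factor

ok, t = set_with_write_termination(ToyCell(), v_min=1.5, i_target=1e-4)
print(ok, t)
```

The trade-off reported by the team falls out of this loop directly: raising `overdrive` shortens the time to reach `i_target` but spends more energy per step, which is why a modest 0.2 AU bump preserved most of WT's efficiency while stabilizing the freshly SET cells.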

Balancing Energy and Stability

The modest voltage increases restored conductance stability without negating WT’s energy and speed benefits. Although the overdrive added some energy overhead, overall consumption remained lower than that of fixed-duration programming.

This adjustment offers a practical balance between robustness and efficiency, critical for commercial deployment.

As ReRAM, a prime candidate for neuromorphic and multi-bit storage applications, moves toward wider adoption, conductance drift will become a defining challenge.

The results presented at IMW 2025 show that simple device-level optimizations like voltage overdrive can deliver major gains without requiring disruptive architectural changes.

Check out more details of the research here.

Also Read:

Weebit Nano is at the Epicenter of the ReRAM Revolution

Emerging Memories Overview

Weebit Nano at the 2024 Design Automation Conference