Agentic EDA Panel Review Suggests Promise and Near-Term Guidance
by Bernard Murphy on 02-24-2026 at 6:00 am

NetApp recently hosted a webinar on agentic AI as the future for EDA and its implications for infrastructure. A good list of panelists: Mahesh Turaga (VP, Cadence Cloud) opened with an intro presentation on infrastructure and agentic AI at Cadence, then our own Dan Nenni (Mr. SemiWiki) moderated a discussion with Khaled Heloue (Fellow, CAD/Methodology/AI, AMD), Rob Knoth (Sr Group Director, Strategy and New Ventures, Cadence) and Janhavi Giri (Industry Lead, NetApp, formerly at Intel). An excellent panel, with views of course on the agentic future but also grounded guidance on getting started with and progressing in adopting AI and agentic methods. Since this is a long webinar, I won't dwell on the vision; instead, here are my takeaways on near-term observations.

Infrastructure implications

You’d have to be living under the proverbial rock not to be aware that hyperscalers and hopeful hyperscalers are investing huge amounts – hundreds of billions of dollars – in building mega-datacenters. What you might not know (but shouldn’t be surprising) is that 60% of that investment is going into technology development – our industry. What an opportunity for the systems and semiconductor ecosystem!

Mahesh highlighted some of the infrastructure challenges in these datacenters. Racks that once ran at 10-15kW are now climbing to 100-120kW, and by 2027 we may see 1-2MW per rack. Direct-to-chip liquid cooling is already unavoidable, and at higher power levels we will have to switch to immersion cooling. Further, AI-centric dataflow within and between racks now demands latencies measured in microseconds where previously milliseconds were acceptable.

These are fast changes in infrastructure with big implications, especially for design and operations support (e.g. scheduling the replacement of old hardware with new). GPU design cycles run 12-18 months, yet hardware must be amortized over 5-6 years, so refresh updates must be carefully planned. Cadence, with its Reality Digital Twin, works very closely with NVIDIA to help enterprises design and maintain their datacenters against these and other (thermal, cooling, etc.) objectives.

NetApp also plays an important role in infrastructure through its management of storage and cloud operations. A large enterprise will have design data scattered around the world: US, Europe, India, Asia. It will also want to take full advantage of flexibility in compute/AI options: on-prem, cloud and hybrid configurations. Especially in agentic systems, learning from patterns in distributed data can introduce a lot of complexity and unacceptable performance overhead.

Managing complexity and performance effectively will depend in part on agentic architecture, and in part on sufficient agent-aware support from storage and cloud infrastructure. NetApp provides this through an end-to-end data pipeline to find needed data across hybrid multi-clouds, keep it current by updating as sources change, provide data governance and security throughout the data lifecycle, and support data transformation as needed by AI apps. All of this is MCP-capable (the Model Context Protocol, a standard for agent communication) and integrated with container orchestration platforms such as Kubernetes.
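The "keep it current by updating as sources change" idea can be sketched very simply: compare each source's last-modified time against the last index run and flag what needs re-ingestion. The site names, paths and dates below are made up for illustration; a real pipeline would of course use the storage platform's own change-tracking APIs rather than this toy comparison.

```python
from datetime import datetime

# Hypothetical design-data sources scattered across sites (names invented).
sources = {
    "us-onprem:/proj/cpu": datetime(2026, 2, 20),
    "eu-cloud:/proj/gpu": datetime(2026, 2, 23),
    "in-onprem:/proj/io": datetime(2026, 2, 10),
}
last_indexed = datetime(2026, 2, 15)

# Any source modified since the last index run is stale and needs re-ingestion.
stale = [path for path, modified in sources.items() if modified > last_indexed]
print(stale)
```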

Panel discussion takeaways

Dan kicked off with a great question: what are the most time-consuming and repetitive tasks that agentic AI could help automate? I like this because it goes to the heart of silly mass media fear mongering while shining a light on the real benefit to engineers (no, engineers won’t be replaced by AI, yes, they will get more time to focus on high-value tasks).

The daily routine of today's engineer is consumed by low-productivity friction tasks: learning how to run unfamiliar flows (especially for junior engineers), scripting through extension languages like Tcl, SKILL and Python, figuring out what to do next when something crashes or a PPA goal isn't being met, and assembling progress reports for the next design review. Necessary but pedestrian work, consuming significant time that could be better spent creatively moving a design forward. Task-centric agents together with RAG-based lookup can minimize this friction and help junior engineers spin up more quickly. Agentic methods can take this further by automating the pedestrian tasks.
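The RAG-based lookup idea for "how do I...?" questions can be sketched in a few lines. Everything here is hypothetical: the flow notes are invented, and the keyword-overlap scoring is a toy stand-in for the vector-embedding retrieval and LLM summarization a real deployment would use.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance metric: count query words appearing in the doc text."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.lower())

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Return the titles of the k docs most relevant to the query."""
    ranked = sorted(docs, key=lambda title: score(query, docs[title]), reverse=True)
    return ranked[:k]

# Hypothetical internal flow notes a junior engineer might search.
flow_docs = {
    "Running timing signoff": "Invoke the signoff flow with run_sta after synthesis completes",
    "Fixing hold violations": "When hold violations remain, rerun optimization with tighter margins",
    "Generating review reports": "Use report_summary to assemble progress reports for reviews",
}

print(retrieve("how do I run timing signoff", flow_docs))
```

The point is the workflow shape, not the scoring: the engineer asks in natural language, retrieval narrows the documentation, and an agent (not shown) turns the hits into an answer.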

We’re still in the early stages of that journey, though some applications are further advanced than others. Driving EDA tools through natural language could be a big advance (I am sure the next generation of engineers, looking back at how we direct tools today, will be stunned by the primitive scripting methods we now use). Agentic methods for repetitive analyses would be another obvious win: run analyses for a set of cases together with sweeps across certain parameters, boil down the results, and return the top 3 options worth investigating more closely. Methods like these are easy to trust because you are effectively using the agent as a skilled intern capable of learning under your guidance. You can still monitor and check its work, but you don’t have to do the grunt work yourself.
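The sweep-and-summarize pattern described above is easy to picture in code. This is a minimal sketch under obvious assumptions: run_analysis is a placeholder for a real tool invocation, and the scoring formula is invented purely so the example runs.

```python
from itertools import product

def run_analysis(clock_ns: float, utilization: float) -> dict:
    """Stand-in for an EDA tool run; the score formula is a toy PPA proxy."""
    # Invented cost model: prefer a tighter clock and utilization near 0.7.
    score = (1.0 / clock_ns) * (1.0 - abs(utilization - 0.7))
    return {"clock_ns": clock_ns, "utilization": utilization, "score": score}

def sweep_and_rank(clocks, utils, top_n=3):
    """Run the analysis across all parameter combinations, return the best few."""
    results = [run_analysis(c, u) for c, u in product(clocks, utils)]
    return sorted(results, key=lambda r: r["score"], reverse=True)[:top_n]

best = sweep_and_rank([1.0, 1.2, 1.5], [0.6, 0.7, 0.8])
for r in best:
    print(r)
```

An agent wrapping a loop like this adds value by choosing which sweeps to run next and explaining why the top candidates won, rather than handing the engineer nine raw reports.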

Some of this is already happening, in verification, in floorplanning, in implementation optimization and multiphysics analysis. Still essentially productivity optimization around specific analyses but you could imagine going further. Khaled suggested aiming for fully automating design implementation on simple tiles.

What about risk management? There was consensus that new objectives must be phased in carefully with human supervision. I believe investment here will need major focus on methods to build trust – confidence scoring and “show your work” reports for example – also metrics to monitor how effectively apps are improving productivity and/or QoR. We don’t want to replace pedestrian work in regular flows with pedestrian work in wrestling agentic systems.
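The confidence scoring and "show your work" ideas could combine into a simple gating pattern: every proposed action is logged with its rationale, and only high-confidence actions are applied without a human in the loop. The threshold, actions and confidence values below are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class GatedAgent:
    """Toy wrapper: gate agent actions on confidence, keep an audit trail."""
    threshold: float = 0.8
    audit_log: list = field(default_factory=list)

    def propose(self, action: str, confidence: float, rationale: str) -> str:
        entry = {"action": action, "confidence": confidence, "rationale": rationale}
        if confidence >= self.threshold:
            entry["status"] = "auto-applied"          # trusted, proceed
        else:
            entry["status"] = "escalated to engineer"  # human supervision
        self.audit_log.append(entry)                   # "show your work" record
        return entry["status"]

agent = GatedAgent()
print(agent.propose("resize buffer on critical path", 0.93, "fixed 12 similar paths before"))
print(agent.propose("restructure clock tree", 0.55, "novel topology, little prior data"))
```

The audit log doubles as the productivity metric mentioned above: over time, the ratio of auto-applied to escalated actions measures how much trust the system has earned.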

Another good question probed how best to train agentic systems. For me this touches on multiple topics: the architecture of an agentic system in semiconductor/systems design, and who should own which pieces, weighing the need to protect company secrets against who has the most expertise to train certain agents. Does the nature of these systems promote a trend toward an ecosystem of agents and agentic suppliers? For now the discussion suggests fairly localized agents augmenting existing solutions, complemented by RAGs as support/lookup mechanisms to answer “how do I…” questions. Not a bad starting point, though maybe we can do a little better by embedding some of that RAG data inside agents.

Lots of food for thought. You can watch the webinar HERE.
