In AI it is easy to be distracted by hype and miss the real advances in technology and adoption that are making a difference today. Accellera hosted a panel at DAC on just this topic, moderated by Dan Nenni (Mr. SemiWiki). Panelists were: Chuck Alpert, Cadence’s AI Fellow driving cross-functional Agentic AI solutions throughout Cadence; Dr. Erik Berg, Senior Principal Engineer at Microsoft, leading generative AI strategy for end-to-end silicon development; Dr. Monica Farkash, AMD Fellow, creator of ML/AI-based solutions to reshape HW development flows; Harry Foster, Chief Scientist for Verification at Siemens Digital Industries Software; Badri Gopalan, R&D Scientist at Synopsys, architect and developer for coverage closure and GenAI-related technology; and Syed Suhaib, leading CPU Formal Verification at Nvidia.
Where are we really at with AI in EDA?
In 2023 everyone in EDA wanted to climb on the AI hype train. There was some substance behind the stories, but in my view the promise outran reality. Two years later, in this panel I heard more grounded views: not a reset, but practical positions on what is already in production, what is imminent, and what is further out, along with practical advice for teams eager to take advantage of AI but unsure where to start.
I like Chuck’s view, which models AI evolution in EDA on the SAE levels for automotive autonomy, progressing through a series of capability levels. Level 1 capabilities are already in production use, such as PPA optimization in implementation or regression optimization in verification. Level 2 should be coming soon, providing chat/search help for tools and flows. Level 3 introduces generation for code, assertions, SDCs, and testbenches. Level 4 will support workflows, and Level 5 may provide full autonomy, someday. Just as in automotive autonomy, the higher levels become increasingly aspirational, but they remain worthy goals to drive advances.
According to Erik, executives at Microsoft see accelerating adoption in software engineering and want to know why the hardware folks aren’t there yet. Part of the problem is the tiny size of the hardware training corpus, roughly 1% of the software corpus, along with a significantly more complex development flow. Execs get that, but they want hardware teams to come up with creative workarounds rather than keep falling further behind. An especially interesting insight is that teams at Microsoft are building more data awareness, learning how to curate and label data to drive AI-based optimizations.
Monica offered another interesting insight. She has been working in AI for a long time and is very familiar with the advances that many of us now see as revolutionary. The big change for her is that, after a long period of general disinterest from the design community, suddenly all design teams want these capabilities yesterday. This sudden demand can’t be explained by hype; hype generates curiosity, while urgency comes from results seen in other teams. I know this is already happening in implementation optimization and in regression suite optimization. Results aren’t always compelling, but they are compelling often enough to command attention.
Harry Foster added an important point. We’ve had forms of AI in point tools for some time now, and they have made a difference, but the big gains are going to come from flow/agentic optimizations (Erik suggested between 30% and 50%).
Badri echoed this point and added that progress won’t just be about technical advances; it will also be about building trust. He sees agents as a form of collaboration, which should be modeled on our own collaboration. While today we are allergic to the idea of any kind of collaboration through AI, he thinks we need to find ways to make some level of collaboration more feasible, perhaps by sharing weights or RAG data. It is unclear what methods might be acceptable and when, but more would be possible if we could find a path.
Syed offered some very practical applications of AI, for example auto-fixing (or at least suggesting fixes for) naming compliance violations. At first glance this application might seem trivial: what’s important about a filename or signal name? A lot, if tools, or AI itself, use those names to guide generation or verification. Equivalence checking, for example, uses names to figure out correspondence points in a design. At Nvidia, among other applications, they use AI to clean up naming sloppiness, saving engineers significant cleanup effort and boosting productivity through improved compliance. AI is also used to bootstrap testbench generation, certainly in the formal group.
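To make the naming-compliance idea concrete, here is a minimal sketch of a lint that flags non-compliant signal names and suggests fixes. The snake_case rule and the example names are assumptions for illustration; a real flow would encode project-specific conventions and might hand the harder cases to an LLM.

```python
import re

# Toy convention, assumed for illustration: signal names must be snake_case.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def suggest_fix(name: str) -> str:
    """Return a snake_case suggestion for a non-compliant signal name."""
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)   # split CamelCase words
    s = re.sub(r"[^0-9a-zA-Z]+", "_", s)               # normalize separators
    return s.strip("_").lower()

def lint(names):
    """Yield (name, suggestion) pairs for names that break the convention."""
    for n in names:
        if not SNAKE_CASE.match(n):
            yield n, suggest_fix(n)

if __name__ == "__main__":
    for bad, fix in lint(["clkMain", "reset_n", "DataValid", "fifo-empty"]):
        print(f"{bad} -> {fix}")
```

Even a rule-based pass like this illustrates why the payoff is real: consistent names are what downstream tools key on when matching correspondence points.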
Audience Q&A
There were some excellent questions from the audience; I’ll pick just a couple to highlight here. The first was essentially “how do you benchmark/decide on AI and agentic systems?” The consensus answer was to first figure out in detail what problem you want to solve and how you would solve it without AI. Then perhaps you can use an off-the-shelf chatbot augmented with some well-organized in-house RAG content. Maybe you can add some fine-tuning to get close to what you want. Maybe you can use a much simpler model. Or if you have the resources and budget, you can go all the way to a customized LLM, as some companies represented on this panel have done.
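The RAG option above can be sketched in a few lines: retrieve the most relevant in-house snippets for a question, then prepend them to the chatbot prompt. The word-overlap scoring and the document strings here are illustrative assumptions; production systems use embedding-based retrieval over a curated corpus.

```python
from collections import Counter

def tokenize(text):
    """Crude tokenizer: lowercase words, trailing punctuation stripped."""
    return [w.lower().strip(".,?") for w in text.split()]

def score(query, doc):
    """Word-overlap score, standing in for real embedding similarity."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def retrieve(query, docs, k=2):
    """Return the top-k documents ranked by overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved in-house context before the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the exercise is the panel’s advice in miniature: start from the problem, bolt retrieval onto an existing chatbot, and only escalate to fine-tuning or a custom LLM if this falls short.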
Design houses have always built their own differentiated flows around vendor tools, often a mix of tools from different vendors. They build scripting and add in-house tools for all kinds of applications: creating or extracting memory and register maps, defining package pin and IO muxing maps, and so on. In-house AI, particularly agentic AI, could over time supersede that scripting and even drive new approaches to agents for product-team-specific tasks. EDA vendors’ agents will likely also play a part in this evolution around their own flows. For interoperability in such mixed flows, one proposal was increased use of standards like MCP.
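As a flavor of what MCP-style interoperability might look like, here is a hypothetical descriptor exposing an in-house register-map extractor as a tool an agent could discover and call. The name/description/inputSchema shape follows MCP’s tool-listing convention; the tool itself and its parameters are invented for illustration.

```python
# Hypothetical MCP-style tool descriptor for an in-house utility.
# The tool name and parameters are assumptions, not a real product API.
extract_regmap_tool = {
    "name": "extract_register_map",
    "description": "Extract the register map from an RTL module",
    "inputSchema": {                      # JSON Schema, as MCP tools use
        "type": "object",
        "properties": {
            "rtl_path": {"type": "string",
                         "description": "Path to the RTL source"},
            "format": {"type": "string", "enum": ["ipxact", "csv"]},
        },
        "required": ["rtl_path"],
    },
}
```

Publishing in-house capabilities behind a standard schema like this is what would let vendor agents and homegrown agents cooperate in one flow.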
Another very good question came from the leader of a formal verification team who is ramping up a few engineers on SVA while also aiming to ramp them up on machine learning. His question, a challenge I am sure is widely shared, was how to train his team in AI methods. Erik said “ask ChatGPT” and we all laughed, but then he added (I’ll quote roughly here):
“I’m 100% serious. I’ve had people complain, where’s the help menu? I said, just ask it your question. And if you’re having trouble with your prompts, give it your prompt and say, this is the output that I want. What am I doing wrong? It will be very frank with you. Use the tool to learn.”
Now that is a refreshing perspective. A technology that isn’t just useful for individual contributors, but also for their managers!
I’m not always a fan of panels. I often find that they offer few new insights, but this panel was different. Good questions and thought-provoking responses. More of these please Accellera. Benchmarking AI and agentic systems sounds like one topic that would draw a crowd!
Also Read:
Accellera at DVCon 2025 Updates and Behavioral Coverage
Accellera 2024 End of Year Update