Opinions on Generative AI at CadenceLIVE
by Bernard Murphy on 05-18-2023 at 6:00 am

According to some AI dreamers, we're almost there: we'll no longer need hardware or software design experts, just someone to input basic requirements from which fully realized system technologies will drop out the other end. Expert opinions in the industry are enthusiastic but less hyperbolic. Bob O'Donnell, president, founder and chief analyst at TECHnalysis Research, moderated a panel on this topic at CadenceLIVE with panelists Rob Christy (Technical Director and Distinguished Engineer, Implementation – Central Engineering Systems at Arm), Prabal Dutta (Associate Professor, Electrical Engineering and Computer Sciences, at University of California, Berkeley), Dr. Paul Cunningham (Senior Vice President and General Manager of the System & Verification Group at Cadence), Chris Rowen (VP of Engineering, Collaboration AI at Cisco) and Igor Markov (Research Scientist at Meta), people who know more than most of us about chip design and AI. All panelists offered valuable insights. I have summarized the discussion here.


Will generative AI change chip design?

The consensus was yes and no. AI can automate much of the human-in-the-loop interaction on top of the necessary building-block technologies: place-and-route, logic simulation, circuit simulation, and so on. This allows us to explore a broader (perhaps much broader) range of options than would be possible through manual exploration.
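
To make the idea concrete, here is a minimal sketch of that kind of automated exploration: a loop that samples flow configurations and keeps the best result. The knob names, value ranges, and scoring function are all invented for illustration; a real loop would launch place-and-route and parse the resulting reports.

```python
import random

# Hypothetical flow knobs to sweep; the names and ranges are invented for
# illustration and do not correspond to any real tool's options.
SEARCH_SPACE = {
    "target_clock_ns": [0.8, 1.0, 1.2],
    "placement_effort": ["low", "medium", "high"],
    "max_fanout": [16, 32, 64],
}

def run_flow(config):
    """Stand-in for launching place-and-route and scoring the result.

    A real loop would run the tools and parse timing/power/area reports;
    here we return a random score so the sketch is self-contained.
    """
    return random.random()

def explore(trials=20):
    """Sample configurations and keep the best (lowest) score."""
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {knob: random.choice(values) for knob, values in SEARCH_SPACE.items()}
        score = run_flow(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = explore()
print(f"best configuration: {best_cfg} (score {best_score:.3f})")
```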

AI is fundamentally probabilistic: ideal where probabilistic answers are appropriate (generally improving on a baseline) but not where high precision is mandatory (e.g., synthesizing gates). Further, generative models today are very good in a limited set of fields but not necessarily elsewhere; they are very inefficient in math applications, for example. It is also important to remember that they don't really learn skills, they learn to mimic. There is no underlying understanding of electrical engineering, physics, or math. In practical use, some limitations might be offset with strong verification.
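
One way to read "offset with strong verification" is a generate-and-check loop: the probabilistic model proposes candidates and a deterministic checker gates what gets through. A minimal sketch, with both the generator and the checker as hypothetical stand-ins:

```python
def generate_candidate(spec, attempt):
    """Stand-in for a probabilistic generator (e.g. a model proposing RTL)."""
    return f"candidate_{attempt}_for_{spec}"

def verify(candidate):
    """Stand-in for a deterministic checker (lint, simulation, formal proof).

    A real checker would return pass/fail plus diagnostics; here we accept
    the third attempt so the sketch is self-contained.
    """
    return candidate.startswith("candidate_2")

def generate_and_verify(spec, max_attempts=5):
    """Only output that survives the deterministic check leaves the loop."""
    for attempt in range(max_attempts):
        candidate = generate_candidate(spec, attempt)
        if verify(candidate):
            return candidate
    raise RuntimeError(f"no candidate for {spec!r} passed verification")

print(generate_and_verify("fifo_depth_16"))
```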

That said, what they can do in language applications is remarkable. In other massive domain-specific datasets, such as in networking, large models could learn structure and infer many interesting things that have nothing to do with language. You could imagine superlinear learning in some domains if learning could run against worldwide corpora, assuming we can master thorny IP and privacy issues.

Can generative methods boost skill development?

In semiconductor and systems design, we face a serious talent shortage. Panelists believe AI will help younger, less experienced engineers ramp up more quickly to expert-level performance. Experts will get better too, gaining more time to study and apply new techniques from the constantly expanding frontiers of microarchitecture and implementation research. It is worth remembering, though, that learning-based methods help with the knowledge "every experienced designer knows" but will always trail the expert curve.

Will such tools allow us to create different types of chips? In the near term, AI will help make better chips rather than new types of chips. Generative models are good with sequences of steps; if you are going through the same design process many times, AI can optimize/automate those sequences better than we can. Further out, generative methods may help us build new kinds of AI chips, which could be interesting because we are realizing that more and more problems can be recast as AI problems.

Another interesting area is multi-die design, which is new territory even for design experts. Today we think of chiplets as pre-determined Lego pieces with fixed interfaces. Generative AI may suggest new ways to unlock better optimizations, providing different answers than even the experts might quickly find.

Pitfalls

What are the potential pitfalls of applying generative AI to chip and/or system design? We ourselves represent one problem. If the AI is doing a good job, do we start to trust it more than we should? Similar questions are already a concern for autonomous driving and autonomous weaponized drones. Trust is a delicate balance. We can trust but verify, but what if verification also becomes learning-based to deal with complexity? When verification AI is proving the correctness of AI-generated design, where do we cross the line between justified and unjustified trust?

ChatGPT is a cautionary example. The great fascination, and the great fallacy, of ChatGPT is that you can ask it anything. We're amazed by its specific smartness and by the fact that it covers so many different areas. It feels as if the artificial general intelligence problem has been solved.

But almost all real-world applications will be much narrower, judged on different criteria than an ability to amaze or entertain. In business, engineering and other real-world applications we will expect high quality of results. There’s no doubt that such applications will progressively improve, but if hype gets too far ahead of reality, expectations will be dashed, and trust in further advances will stall.

More pragmatically, can we integrate established point skills into generative systems? Again, yes and no. Some augmented models are very productive, handling arithmetic and formula manipulation; WolframAlpha, already integrated with ChatGPT, is one example. WolframAlpha provides symbolic and numerical reasoning, complementing the AI. Think of the AI as the human-machine interface and the WolframAlpha augmentation as the deep understanding behind that interface.
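
The augmentation pattern is straightforward to sketch: route queries that need exact answers to a symbolic engine and let the language model handle everything else. In the sketch below, sympy stands in for WolframAlpha, the model call is a placeholder, and the routing heuristic is invented for illustration:

```python
import sympy

def symbolic_engine(expr_text):
    """Exact symbolic evaluation, standing in for WolframAlpha."""
    return sympy.simplify(sympy.sympify(expr_text))

def language_model(prompt):
    """Placeholder for a language-model call; returns canned text here."""
    return f"[model answer to: {prompt}]"

def answer(query):
    # Invented routing heuristic: anything that parses as math goes to the
    # exact engine; everything else goes to the language model.
    try:
        return str(symbolic_engine(query))
    except (sympy.SympifyError, SyntaxError, TypeError):
        return language_model(query)

print(answer("(x**2 - 1)/(x - 1)"))              # exact result: x + 1
print(answer("why do we pipeline a datapath?"))  # routed to the model stand-in
```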

Is it possible to bypass augmentation and load skills directly into the AI as modules, the way Neo learned Kung Fu in The Matrix? How local is the representation of such skills in language models? Unfortunately, learned skills are represented by weights distributed globally across the model. To this extent, loading a trained module as an extension to an existing trained platform isn't possible.

There is a somewhat related question around the value of worldwide training versus in-house-only training. The theory is that if ChatGPT can do such a good job by training on a global dataset, then design tools should be able to do the same. This theory stumbles in two ways. First, the design data needed for training is highly proprietary, never to be shared under any circumstances. Global training also seems unnecessary; EDA companies can provide a decent starting point based on design examples routinely used to refine non-AI tools. Customers building on that base, training using their own data, report meaningful improvement for their purposes.

Second, it is unclear that shared learning across many dissimilar design domains would even be beneficial. Each company wants to optimize for its own special advantages, not through a multi-purpose soup of “best practices”.
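
The "vendor baseline plus in-house refinement" pattern described above might look something like the following PyTorch sketch, where the baseline model and the proprietary training data are placeholders:

```python
import torch
from torch import nn

# Placeholder for a vendor-supplied baseline; a real flow would load a
# checkpoint shipped with the tool rather than a freshly built network.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# Placeholder proprietary data: it never leaves the company, so the
# refinement below runs entirely in-house.
features = torch.randn(256, 32)
targets = torch.randn(256, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Short local fine-tuning loop starting from the vendor baseline.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```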

Hope for reuse in AI and looking forward

Given earlier answers, are we stuck with unique models for each narrow domain? It’s not clear that one architecture can do everything, but open interfaces will encourage an ecosystem of capabilities, maybe like a protocol stack. Apps will diverge, but there can still be a lot of shared infrastructure. Also, if we think of applications which require a sequence of trained models, some of those models may be less proprietary than others.
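
One way to picture that shared infrastructure is a common interface that lets independently trained stages compose, as in this hypothetical sketch (the stage names are invented):

```python
from typing import Protocol

class Stage(Protocol):
    """Shared interface: any stage mapping text to text can slot in."""
    def run(self, payload: str) -> str: ...

class SpecToMicroarch:
    """Placeholder for a trained spec-to-microarchitecture model."""
    def run(self, payload: str) -> str:
        return f"microarch({payload})"

class MicroarchToFloorplan:
    """Placeholder for a trained microarchitecture-to-floorplan model."""
    def run(self, payload: str) -> str:
        return f"floorplan({payload})"

def pipeline(stages: list[Stage], payload: str) -> str:
    """Chain stages through the shared interface, like layers in a stack."""
    for stage in stages:
        payload = stage.run(payload)
    return payload

print(pipeline([SpecToMicroarch(), MicroarchToFloorplan()], "spec"))
```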

Looking forward, generative AI is a fast-moving train. New ideas are appearing monthly, even daily, so what is not possible today may become possible, or be solved in a different way, relatively soon. There are still big issues of privacy in any area that depends on training across wide datasets. Proving that learned behavior in such cases will not violate patents or trade secrets seems like a very hard problem, probably best avoided by limiting such training to non-sensitive capabilities.

Despite all the caveats, this is an area in which to be fearless. Generative AI will be transformative. We must train ourselves to better leverage AI in our daily lives and, in turn, apply what we learn to become more ambitious in our use of design technologies.

A great panel: hopeful, with good insights into both limitations and practical applications.

Also Read:

Takeaways from CadenceLIVE 2023

Anirudh Keynote at Cadence Live

Petri Nets Validating DRAM Protocols. Innovation in Verification
