Artificial intelligence (AI) is everywhere. The rise of the machines is upon us, in case you haven’t noticed. Machine learning (ML) and its associated inference abilities promise to revolutionize everything from driving your car to making breakfast. We hear a lot about the macro, end-product impact of this technology, but there are many more back-stories to the revolution. Of particular interest to SemiWiki readers is what all this means for chip design, chip verification and EDA.
I got a chance recently to chat with Paul Cunningham at Cadence about this topic. For those of you who don’t know Paul, he is a Corporate Vice President and General Manager at Cadence. He’s been there for almost nine years, overseeing everything from front-end to back-end to system verification products. With a diverse background like this, we had a lot of ground to cover during our conversation.
We started at 30,000 feet. How does EDA impact AI/ML design, and how does AI/ML technology impact EDA? Paul discussed how Cadence approaches these requirements. It turns out there are three distinct areas of focus at Cadence, and all of them are important.
Regarding the impact AI/ML has on EDA tools, there are actually two parts to consider. EDA tools face a lot of intractable problems that they manage with heuristics; estimating congestion or parasitics for a large digital design early in the place-and-route flow are examples. In these cases, AI/ML can deliver better estimates and, as a result, a better chip layout. These improvements are invisible to the user; the tool simply delivers better results. Cadence calls this “ML inside”.
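To make the “ML inside” idea concrete, here is a minimal, purely illustrative sketch of the pattern: a model trained on past routing results predicts congestion from early placement features, so the tool can act on the estimate before detailed routing is ever run. The features, data and linear model below are my own assumptions for illustration, not Cadence’s actual technology.

```python
# Hypothetical "ML inside" sketch: learn to predict routing congestion
# from early placement features, replacing a hand-tuned heuristic.
# Feature names, data and model choice are illustrative assumptions only.
import numpy as np

# Each row: [cell density, pin density, net count in region] (made-up training data)
X = np.array([
    [0.50, 0.30, 120],
    [0.70, 0.55, 300],
    [0.85, 0.70, 450],
    [0.60, 0.40, 200],
    [0.90, 0.80, 500],
], dtype=float)
# Congestion observed after detailed routing (routing demand / capacity)
y = np.array([0.45, 0.70, 0.88, 0.55, 0.95])

# Fit a simple linear model with least squares (a stand-in for a real ML model)
A = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_congestion(density, pin_density, nets):
    """Estimate congestion for a region before routing is actually run."""
    return float(np.dot([density, pin_density, nets, 1.0], coeffs))

# A placer could consult this estimate to spread cells in predicted hot spots
estimate = predict_congestion(0.80, 0.65, 400)
print(round(estimate, 2))
```

The key point is that the user never sees any of this; the model sits inside the tool, and the only visible effect is a better layout.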
The other impact AI/ML has on EDA tools has to do with the design flow. As everyone knows, chip design is an iterative process, with many parts of the design team collaborating to get the best result possible. There are many, many trial runs in the interest of the best layout, most complete verification, lowest power and so on, and this process can extend over several months. In this context, AI/ML can be used to analyze the vast amounts of data each iteration produces, with the goal of learning as much as possible from each iteration or set of iterations. By working smarter rather than harder, this approach can reduce design time. It is also quite new, in that it seeks to productize designer intuition to make a design flow more efficient. Cadence calls this “ML outside”.
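One way to picture “ML outside” is a recommender that mines the results of previous flow runs and proposes starting settings for a new design, instead of having the team rediscover them by trial and error. The sketch below uses a trivial nearest-neighbor lookup; the design features and parameter names are hypothetical, chosen only to illustrate the learning-across-runs idea.

```python
# Hypothetical "ML outside" sketch: reuse knowledge from previous flow runs
# to recommend tool settings for a new design. All feature and parameter
# names here are illustrative assumptions, not an actual Cadence interface.

past_runs = [
    # (gate count in millions, target GHz) -> settings that worked best
    {"gates": 1.0, "freq": 0.8, "settings": {"effort": "medium", "utilization": 0.75}},
    {"gates": 5.0, "freq": 2.0, "settings": {"effort": "high",   "utilization": 0.65}},
    {"gates": 0.2, "freq": 0.5, "settings": {"effort": "low",    "utilization": 0.85}},
]

def recommend(gates, freq):
    """Nearest-neighbor lookup: reuse settings from the most similar past run."""
    def distance(run):
        return ((run["gates"] - gates) ** 2 + (run["freq"] - freq) ** 2) ** 0.5
    return min(past_runs, key=distance)["settings"]

# A new 4M-gate, 1.8 GHz design starts from the closest prior result
print(recommend(4.0, 1.8))
```

A production version would of course learn from far richer run data, but even this toy shows how a flow can carry memory from one project to the next.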
Paul went on to highlight the significance of ML outside. EDA tools have always had a huge number of input parameters, but none of them captures the history and learning from prior tool usage on the problem at hand. Said another way, the tool has no memory of its prior use. ML outside can change all that, creating a fundamentally new type of tool flow.
The third area of focus moves from tool-centric to ecosystem-centric. That is, how can you help the chip and system design ecosystem add AI/ML to their products? Paul explained that the term ecosystem is quite broad in this context and also quite important to the Cadence strategy. Foundries and certain IP suppliers play an important part, of course. But design challenges have grown beyond hardware, and Cadence also needs to look at how its verification products interface with software systems like Android, Windows and Linux to deliver a holistic debug capability.
We also discussed the wide variety of markets that all need assistance adding AI/ML to their products. Mobile, automotive, data center and mil/aero are just a few of many examples. What demands does each of these markets present? Does each need fundamentally new and different tools, or is it more about the flow? It turns out all chips need basically the same tools to get to tapeout, but the stress points the tools experience and the way the tools need to be tested against other parts of the ecosystem are quite different. If you consider the demands of a very small, ultra-low-power chip vs. the demands of a massive data center processing chip, you’ll get the idea. The long life of an automotive chip vs. the relatively short life of a cell phone chip also sheds light on the diversity of the problem.
So, supporting a broad range of markets is more about optimizing and testing tools and flows than it is about developing different tools for different markets. Fundamental to this strategy is the development of robust tools that support multiple use models, of course. Paul provided a memorable analogy here that is worth repeating: “a Land Rover and a Ferrari are both cars, they’re just optimized and tested to be good at different things.”
Our final topic touched on what future AI/ML chips will look like. Paul felt strongly that a collection of custom, optimized processors will always deliver superior performance for AI/ML algorithms compared to an off-the-shelf product. So, the future of compute in this context is heterogeneous. Having spent a good part of my career as an ASIC supplier, I couldn’t agree more. This view of the future suggests vibrant growth for both EDA and semiconductors as the number of special-purpose AI/ML processors explodes. I’ll leave you with that optimistic thought.
If you’d like to learn more about the AI/ML solutions Cadence offers, visit the AI / Machine Learning page on the Cadence website.