DAC was full of great panels, research papers and chip design stories this year, as in years past. Being a virtual show, there were of course some differences. I’ve heard attendance was way up, allowing a lot more folks to experience the technical program, which is plausible for a virtual event. I’m sure we’ll see more statistics on that in the coming days.
AI and EDA is a favorite topic of mine, so when I saw a panel on the subject I was definitely going to attend. The panel focused on the EDA applications that could benefit from AI assistance and on how to assemble the data needed to train those applications. The second part is particularly important, and particularly problematic. More on that in a moment. Below is a summary of the panel focus. These are compelling topics.
This panel will discuss and debate two main questions that must be solved before machine learning can be successfully applied to computer-aided design of electronic systems. First, at what levels of abstraction is machine learning applicable? Potential applications include device optimization, circuit synthesis and optimization, logic synthesis, ESL, verification, and validation. The impact of machine learning at these levels of abstraction is up for debate.
Second, how do we manage the huge amounts of data required to apply machine learning? Does data need to be labeled; if so, who will provide labels? Can models be used to augment labeling? What intellectual property rights must be negotiated to obtain training data? Who will own the results of machine learning methods driven by outside data?
The panel was co-hosted by Jiang Hu from Texas A&M and Xiaoqing Xu from Arm. The panelists were:
- Elias Fallon – Cadence Design Systems
- Paul Franzon – North Carolina State Univ.
- Raviv Gal – IBM Research
- Sachin Sapatnekar – Univ. of Minnesota
Cadence has been quite active in the area of AI for EDA, so I’ll cover the comments from Elias Fallon. First, a comment about the format of this particular panel. Most presentations and panels at DAC were pre-recorded, so on event day attendees were able to watch well-rehearsed presentations that flowed quite smoothly. Due to some last-minute changes in the agenda, this panel was done live. It seems the panelists found out about this on event day. I have to say, a live event does have a spontaneity that provides an extra level of interest. All the panelists and the moderators did a great job keeping the session moving. Elias kicked off the panel. Here are some details of his comments.
Elias has 20 years of experience in EDA, mostly in analog IC design. He was originally at Neolinear, which was acquired by Cadence in 2004. Elias began with a broad observation about the convergence of drivers for computational software. Machine learning applied to EDA problems, so that EDA can in turn be applied to machine learning chip design, is an example of this convergence. Cadence published a white paper on the topic of computational software that I covered on SemiWiki here. The diagram at the top of this post, provided by Elias, is an example of how all this fits together.
Elias pointed out that AI is a new tool in the toolbox that can be used to advance the capabilities of EDA. Given the difficulty and scale of the problems EDA tries to solve, a new tool like AI can have a significant impact. To illustrate the data challenges associated with AI for EDA, Elias used an example problem – physical grouping of transistors for optimal performance in an analog IC layout. Elias explained that this problem has been the subject of a lot of research. He went on to point out that, in spite of it sounding simple, the way transistors are grouped and interdigitated in an analog layout is highly context-dependent. Things like technology, application, yield learning and prior trial-and-error efforts to get it “just right” all make this problem difficult to automate. So, can machine learning be applied to this problem to utilize good past results to achieve better future results?
The next point was shared by others on the panel as well. There just isn’t much public data available on chip design. Commercial design details are sensitive from a competitive point of view. Even data assembled by the research sector is difficult to share due to the foundry process details it contains. All told, if one can assemble the details of five or six past analog layout designs, that’s about as much as anyone can get. The trick is decomposing the machine learning problem into pieces that can be trained with less data.
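The decomposition idea can be made concrete with a toy sketch. This is not Cadence’s method; every feature, name and number below is invented for illustration. Rather than learning “produce a whole layout” from five or six designs, one can reframe the task as many small pairwise decisions (“do these two transistors belong in the same group?”), so each past layout yields dozens of training samples instead of one.

```python
# Hypothetical illustration of decomposing a data-starved learning problem:
# a handful of labeled layouts become many pairwise training samples.
from itertools import combinations

def pairwise_samples(layout):
    """Expand one labeled layout into many (feature, label) pairs.
    `layout` is a list of (width, group_id) tuples, a toy stand-in for
    real transistor attributes and grouping labels.
    Feature: |width difference|; label: 1 if the pair shares a group."""
    return [(abs(w1 - w2), int(g1 == g2))
            for (w1, g1), (w2, g2) in combinations(layout, 2)]

def fit_threshold(samples):
    """Learn a single cutoff: pairs closer in width than the cutoff are
    predicted to belong together. Brute-force search over candidates."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in samples:
        acc = sum((f <= t) == bool(y) for f, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# A handful of toy "past designs" -- roughly the data scale the panel
# suggested is realistic for analog layout.
layouts = [
    [(1.0, 0), (1.1, 0), (4.0, 1), (4.2, 1)],
    [(2.0, 0), (2.2, 0), (6.0, 1), (6.1, 1)],
    [(0.5, 0), (0.6, 0), (3.0, 1)],
]
samples = [s for layout in layouts for s in pairwise_samples(layout)]
cutoff = fit_threshold(samples)
print(len(samples))  # three small layouts already yield 15 samples
```

The point is only the reframing: three tiny layouts produce fifteen samples, where the end-to-end formulation would produce three.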
Another interesting aspect of this lack of standardized data is that EDA algorithms will likely need to be able to perform autonomous training in the field, leveraging all the unique conditions that the tool encounters. Yet another challenge for EDA to solve.
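One way to read “autonomous training in the field” is incremental (online) learning: a tool ships with a baseline model and keeps updating it on each new example it encounters after deployment, without a central training set. A minimal sketch of that update loop, with the model, features and data all invented for illustration:

```python
# Hypothetical sketch of in-field incremental learning: a tiny online
# perceptron updates its weights one example at a time, so a deployed
# tool could keep learning from the designs it sees.
class OnlineModel:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else 0

    def update(self, x, y):
        """Standard perceptron rule: adjust weights only on a mistake."""
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# A stream of (features, label) pairs standing in for outcomes the tool
# observes in the field (values invented for illustration).
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0)] * 20
model = OnlineModel(n_features=2)
for x, y in stream:
    model.update(x, y)  # no batch retraining, no stored dataset

print(model.predict([1.0, 0.0]), model.predict([0.0, 1.0]))  # prints: 1 0
```

Real EDA tools would need far more than this, such as drift detection and guardrails against learning from bad outcomes, but the example-at-a-time update is the essential mechanism.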
If you have a pass for DAC, I highly recommend you listen to the entire panel. I believe the DAC presentations will continue to be available for a while. You can find this panel on AI, EDA and the data needed here.