Traditionally, EDA has been a brute-force methodology: we buy more software licenses and more CPUs and keep running endless jobs to keep up with increasing design and process complexity. Take SPICE simulation, for example. When I meet chip designers (which I do quite frequently), I ask them how many simulations they run for a given task. The answer is always, “It depends on how many licenses and CPUs I have access to during a given time period.” The correct answer, of course, is “As many simulations as I need to get the best power, performance, area, and yield for my design.” FinFETs have exacerbated this problem, as a select few EDA experts predicted, which brings us to our new trendy buzzword: Machine Learning (ML).
For the record, machine learning (adaptive algorithms) gives computers the ability to learn without being explicitly programmed. Arthur Samuel coined the term in 1959 while at IBM; you can find a lengthy definition on Wikipedia.
For a detailed description of ML as it applies to EDA please see these four videos from a panel at #54DAC.
First we have Eric Hall of Data Science (formerly Broadcom):
So, what is machine learning? Well, briefly it’s giving a set of features — X’s, we want to predict the outcome, Y. I’ll mostly be talking about supervised learning, and at times relating it to characterization prediction…
Next we have Ting Ku from Nvidia:
So, machine learning seems to be the biggest buzz out there right now. So, what’s the magic? The first thing we need to learn is the classification of machine learning. I have a really, really simplified way of looking at things…
Next we have Sorin Dobre of Qualcomm:
My presentation is seven minutes. We do not have time to present all the techniques for machine learning. So, it’s more focused on the applicability in the EDA space, more of a kind of forecast, to use these techniques to address the design problems we are seeing at 10 nanometers, 7 nanometers and beyond…
And finally we have Jeff Dyck of Solido:
One of the things I wanted to talk about is, a lot of people are buzzing about machine learning right now. We’re excited about it. We’re trying to apply it. There are lots of in house projects that are looking to apply methods that already exist to solve big problems…
Based on my travels around the world attending conferences and meeting with customers I can tell you that ML is in fact real and will be the next big thing in EDA, absolutely.
The first ML EDA products that I have seen in production are in the Solido ML Characterization Suite (below), which uses machine learning to accelerate statistical characterization of standard cells, memory, and I/O. The suite is in use by top semiconductor companies and foundries around the world, so the results are real, not marketeering.
Reduces library characterization time by 30 to 70%
Solido ML Characterization Suite Predictor uses machine learning to accelerate characterization of standard cells, memory, and I/O. Predictor accurately generates Liberty models at new conditions from existing Liberty data at different PVT conditions, Vt families, supplies, channel lengths, model revisions, and more. It significantly reduces up-front characterization time as well as turnaround to generate library models, and works with NLDM, CCS, CCSN, waveforms, ECSM, AOCV, LVF, and more.
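To make the idea concrete, here is a minimal sketch of predicting a timing value at an uncharacterized PVT corner from already-characterized corners. The data points, the response-surface model form, and the numbers are all illustrative assumptions of mine, not Solido's actual method, which is proprietary and far more sophisticated:

```python
# Hypothetical sketch: estimate a cell delay at a new (V, T) corner
# from existing characterized corners, using a low-order surface fit.
import numpy as np

# Characterized corners: (voltage V, temperature C) -> cell delay (ps)
# These sample values are made up for illustration.
corners = np.array([
    [0.72, -40], [0.72, 125],
    [0.80, -40], [0.80, 125],
    [0.88, -40], [0.88, 125],
])
delays = np.array([48.1, 52.3, 38.6, 41.9, 32.4, 35.0])

def features(vt):
    """Feature matrix [1, V, T, V*T] -- a simple response-surface model."""
    v, t = vt[:, 0], vt[:, 1]
    return np.column_stack([np.ones_like(v), v, t, v * t])

# Least-squares fit of the surface to the characterized data
coef, *_ = np.linalg.lstsq(features(corners), delays, rcond=None)

# Predict the delay at a new, uncharacterized corner (0.76 V, 85 C)
new_corner = np.array([[0.76, 85.0]])
pred = features(new_corner) @ coef
print(f"predicted delay: {pred[0]:.1f} ps")
```

The payoff is the same in spirit as Predictor's: corners that would each need a full characterization run can instead be estimated from data you already have, and only verified where it matters.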
Generates Monte Carlo accurate statistical timing models >1000x faster than Monte Carlo
Solido ML Characterization Suite Statistical Characterizer uses machine learning technology to accelerate statistical characterization of standard cells, memory, and I/O. It delivers true 3-sigma LVF/AOCV/POCV values with Monte Carlo and SPICE accuracy, and handles non-Gaussian distributions. It adaptively selects simulations to meet accuracy requirements, minimizing runtime for all cells, corners, arcs, and slew-load combinations.
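The "adaptively selects simulations" idea can be sketched as a stopping rule: run simulations in batches and stop as soon as the statistical estimate is accurate enough, instead of burning a fixed budget. The `run_spice` stand-in, the batch size, and the standard-error criterion below are my own simplifying assumptions, not Solido's actual algorithm:

```python
# Illustrative adaptive sampling: draw "simulation" samples in batches
# until the standard error of the mean delay meets an accuracy target.
import math
import random

random.seed(0)

def run_spice(n):
    """Stand-in for n SPICE Monte Carlo runs; returns delay samples (ps)."""
    return [random.gauss(40.0, 2.0) for _ in range(n)]

target_se = 0.05   # desired standard error of the mean delay, in ps
batch = 50
samples = []

while True:
    samples += run_spice(batch)
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    se = math.sqrt(var / n)
    if se <= target_se or n >= 20000:   # accuracy met, or budget cap hit
        break

print(f"{n} simulations, mean delay {mean:.2f} ps, SE {se:.3f} ps")
```

Note how the simulation count falls out of the accuracy requirement rather than the license count, which is exactly the inversion of the brute-force mindset described at the top of this article. A real statistical characterizer targets far-tail (3-sigma) metrics and non-Gaussian distributions, which plain Monte Carlo reaches only at enormous sample counts; that gap is where the ML acceleration earns its >1000x claim.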
For fast, accurate statistical characterization, Statistical Characterizer and Predictor can be used in combination: use Statistical Characterizer on anchor corners to quickly add Monte Carlo accurate LVF, then use Predictor to create and expand the remaining corners, including statistical characterization data, generating savings of more than 50% without compromising accuracy.