Most EDA tools started out running on mainframe computers, then minicomputers, followed by workstations and finally desktop PCs running Linux. If your SoC design team is working on a big chip with over a billion transistors, then your company will likely use a compute farm to distribute the more demanding IC jobs across lots of cores to get your work done in a reasonable amount of time. A clearly emerging trend is running EDA tools in the cloud on an as-needed basis, because the cloud scales so easily, and you don't have to buy all of that hardware and hire an IT group to support you.
I’ve been watching this cloud trend for several years now, and each quarter I see more EDA companies partnering with the major cloud vendors to help IC design teams get their work done smarter and faster than ever before. Mentor, for example, has cloud-enabled several EDA tools:
- Calibre, June 2019
- Embedded Systems to Azure IoT Hub, April 2017
- Eldo, AFS – Analog Fast SPICE, October 2019
In this blog I’m focused on that last bullet point, where Mentor recently announced that circuit design engineers can now simulate their SPICE netlists in the Azure cloud, scaling to 10,000 cores. The biggest application of this scaling is library characterization flows, where it dramatically shortens the wait time.
I spoke with Sathish Balasubramanian from Mentor last month to better understand why design teams need something like SPICE simulators in the cloud. He talked about engineering teams using their own compute resources with maybe 200-300 cores, typically running library characterization for a week. Sathish then noted that the same library characterization workload could be run in the Azure cloud on up to 10,000 cores, reducing the compute time to about an hour. OK, that sounds compelling to me.
Since library characterization and other AMS circuit simulation verification jobs are only run at certain times during a project, it starts to make sense to use a cloud-based vendor like Microsoft with their Azure offering, loaded with either Eldo or AFS circuit simulators.
Mentor has addressed the list of concerns that come up with running EDA tools in the cloud:
- Managing EDA tool licenses
- Data transfer
I then asked Sathish a set of questions:
Q: Why choose Azure?
A: It’s all based on customer demand; Mentor also has a relationship with Amazon Web Services. Microsoft is a close partner with Mentor.
Q: What is the learning curve like?
A: It’s quick, like a couple of hours to set up the Azure environment and get started. Customers first set up their Azure account, then start deploying the characterization workload. We have a configuration already in place for using Mentor library characterization tools, based on our Solido technology.
Q: Can I mix another vendor’s characterization tools with Mentor circuit simulators in the cloud?
A: At this time it’s an all-Mentor EDA tool flow in Azure.
Q: How efficient is it using Azure for circuit simulation jobs?
A: We can use up to 10,000 cores with 91% linear scaling, and it took some effort to reach that milestone.
Q: Who are the first customers of this cloud offering?
A: They are top 10 semiconductor companies and foundries; stay tuned for customer quotes.
Q: How do you manage all of those licenses?
A: The EDA tool licenses use Mentor’s FlexLM system, and then Microsoft has their pricing based on how many total CPU cycles you use.
Q: How do I find out about pricing?
A: Just contact your local Mentor Account Manager.
Q: Does Mentor use the cloud in developing EDA tools and running regression testing?
A: Yes, we are users of Azure internally too.
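To put that 91% scaling figure in context, here is a minimal back-of-the-envelope sketch. The function name and the 40,000 core-hour workload are hypothetical illustrations (not Mentor's numbers); only the 0.91 efficiency and the 250-vs-10,000 core comparison come from the article, and real runtimes will depend on the actual workload.

```python
def parallel_runtime(single_core_hours, cores, efficiency=0.91):
    """Estimate wall-clock hours for a job under near-linear scaling.

    efficiency is the fraction of ideal linear speedup achieved;
    0.91 matches the scaling Mentor reports on Azure at 10,000 cores.
    """
    return single_core_hours / (cores * efficiency)

# Hypothetical workload: 40,000 core-hours of SPICE simulation.
farm_hours = parallel_runtime(40_000, 250)      # ~176 hours, about a week
cloud_hours = parallel_runtime(40_000, 10_000)  # ~4.4 hours
print(round(farm_hours, 1), round(cloud_hours, 1))
```

Scaling stays linear-but-not-perfect because job scheduling and data distribution add overhead as the core count grows, which is why reaching 91% at 10,000 cores took effort.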
One classic way to approach a large, compute-intensive challenge like SPICE circuit simulation is to divide and conquer, and Mentor’s use of Microsoft Azure to scale up to 10,000 cores for the Eldo and AFS tools sure looks like a smarter way to go than building up an internal compute farm.
EDA tools started out on mainframe computers, an early progenitor of cloud computing, and now with vendors like Microsoft we’ve returned to centralized computing again because it makes sense for peak EDA tool run requirements.