The title above refers to a webinar hosted by Altair on April 28th. Chip design in the cloud is not a new idea, so what is the big deal with this title? Sometimes titles don't reveal the full story. Annapurna Labs happens to be an Amazon company: it began as an independent semiconductor company and was acquired by Amazon in 2015. So why not say "Chip Design in the Cloud – Amazon and Altair" or "Chip Design in the Cloud – AWS and Altair"? The key phrases are "food for thought," "eagle eyes," and "optimized scaling." After reading this blog you will know why.
The webinar was delivered by Andrea Casotto, Chief Scientist at Altair; Zohar Levy, HPC Project Manager at Altair; and David Pellerin, Head of Worldwide Business Development for Infotech/Semiconductor at Amazon Web Services.
Right off the bat, Andrea surprised the audience by stating that many companies are repatriating workloads from the cloud back to on-premises infrastructure. He presented cost-overrun statistics to back up that claim, then quickly explained the reasons behind those overruns and introduced a solution: Rapid Scaling.
Rapid Scaling is Altair's patented approach to implementing cloud elasticity. It is a feature of their Accelerator software and was developed while Altair was working with Annapurna Labs. The feature keeps cloud service costs as close as possible to actual demand by never requesting more hardware than is needed to complete the workloads. It accomplishes this by:
- Grouping jobs with similar characteristics into workload buckets and calculating the speed at which each bucket can be scheduled
- Monitoring EDA license dependencies and deferring hardware requests until the required licenses become available
- Enforcing customer-specified cost limits by pausing workload launches and hardware requests when the running cost tally approaches preset thresholds
- Applying workload scheduling policies that switch between on-demand and spot instances to optimize cost, and
- Stopping compute farm growth at the optimal point, when its estimate shows that all jobs still remaining in the queue will be dispatched to existing hardware within a customer-specified time window. Refer to Figure 1: in this example, compute farm growth stops even though 100 jobs remain in the queue (the vertical red line cutting through the graphs), because Accelerator estimates that all queued jobs can be dispatched to existing hardware within 10 minutes. The 10-minute window is a customer-configurable parameter.
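The stop-growth decision in the last bullet boils down to a drain-time estimate per workload bucket. Here is a minimal sketch of that idea; all names and the data structure are illustrative assumptions, not Altair Accelerator's actual API:

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    queued_jobs: int      # jobs still waiting in this workload bucket
    dispatch_rate: float  # jobs/minute the existing farm can dispatch

def should_stop_growth(buckets, window_minutes=10.0):
    """Stop requesting new hardware when every bucket's queue can
    drain to existing machines within the configured time window."""
    for b in buckets:
        if b.dispatch_rate <= 0:
            return False  # this bucket cannot drain at all; keep growing
        if b.queued_jobs / b.dispatch_rate > window_minutes:
            return False  # this bucket still needs more capacity
    return True

# Like the Figure 1 example: 100 queued jobs, but the current farm
# dispatches ~12 jobs/minute, so the queue drains in under 10 minutes.
print(should_stop_growth([Bucket(queued_jobs=100, dispatch_rate=12.0)]))
```

Under these assumptions, growth stops with 100 jobs still queued, because the drain-time estimate (about 8.3 minutes) falls inside the 10-minute window.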
Andrea continued by discussing the different operating systems, processor architectures and instance types currently supported by Rapid Scaling, and then passed the baton to Zohar.
Zohar demonstrated Annapurna Labs' live production environment for semiconductor design, both with and without the Rapid Scaling feature enabled. Refer to Figure 2 for the Altair Accelerator architecture and operating environment. You will have to watch the live demo to see the benefits presented visually on hourly, daily, weekly, or monthly time scales; suffice it to say the demo clearly demonstrated cloud elasticity.
David followed Zohar with a talk summarizing Amazon’s experience in designing chips in the cloud.
He discussed how and why Amazon got into designing custom silicon, how these initiatives help its AWS customers, and the resulting expansion in the number and types of instances offered. Graviton/Graviton2, Inferentia, Trainium, and the Nitro System were cited as examples of custom silicon built at Annapurna Labs that power many of AWS's purpose-built instances. He shared case-study snapshots of customers such as MediaTek, Qualcomm, and Arm who have benefited from running EDA on the AWS Cloud to design their chips and IP.
David also highlighted how Arm-based instances are fast becoming a strong high-performance alternative to traditional x86-based instances for EDA in the cloud. He spotlighted the recently announced Arm-based X2gd instance as particularly well suited for EDA workloads thanks to its large memory capacity.
David also touched on Amazon's own EDA journey to the AWS Cloud, as the company migrated (refer to Figure 3) from Annapurna Labs' on-prem EDA flow to running everything on the AWS Cloud except emulators.
David closed his talk with a thought on how customers with on-prem EDA flows could explore hybrid EDA orchestration. He pointed out that a tool such as Altair's Accelerator knows when to tap into the cloud for certain instance types, for spot instances, or for EDA licenses in order to optimize cost.
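The hybrid-orchestration idea can be captured as a simple placement policy: hold jobs without licenses, prefer already-paid-for on-prem capacity, then fall back to cheap spot or reliable on-demand cloud instances. The following sketch is purely illustrative; the function name, job fields, and decision order are my assumptions, not Accelerator's behavior:

```python
def choose_placement(job, licenses_free, onprem_slots_free, spot_ok=True):
    """Toy placement policy for a hybrid on-prem/cloud EDA farm.
    All names here are illustrative, not Altair Accelerator's API."""
    if not licenses_free:
        return "hold"            # no EDA license -> don't burn compute
    if onprem_slots_free > 0:
        return "on-prem"         # use sunk-cost on-prem hardware first
    if spot_ok and job.get("interruptible", False):
        return "cloud-spot"      # cheap spot capacity for restartable jobs
    return "cloud-on-demand"     # reliable capacity for everything else

# A restartable job with no free on-prem slots lands on spot capacity.
print(choose_placement({"interruptible": True},
                       licenses_free=True, onprem_slots_free=0))
```

The license check comes first because, as the webinar emphasized, requesting hardware before licenses are available is a common source of cloud cost overruns.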
The webinar closed with a Q&A segment during which some excellent questions were fielded.
Now You Know
The Annapurna Labs team has a penchant for scaling obstacles. The word Annapurna refers to a mountain range in the Himalayas with a number of tall peaks. The Annapurna Labs logo showcases that. The etymology of the word Annapurna tells us that it stands for abundant food. True to its name, Annapurna Labs has certainly provided some food for thought with respect to efficiently scaling the peaks, valleys and plateaus of semiconductor design workloads utilizing AWS cloud services.
As its etymological roots suggest, the name Altair stands for eagle. Altair keeps an eagle eye on dependencies, resources, and costs through its scheduling software, equipped with patented Rapid Scaling technology. The result is very cost-effective scaling for Annapurna Labs: one case study showed roughly 50% cost savings compared with running without the Rapid Scaling feature.
Altair's Accelerator, with its patented Rapid Scaling feature, is a cost-conscious job scheduler proven to meet the compute demands of semiconductor and EDA workloads in the cloud. It is capable of launching and managing millions of jobs daily.
Anyone designing semiconductor chips can benefit from Altair's solution when designing in the cloud; it is currently supported on the AWS Cloud. I recommend listening to the entire webinar and then discussing with Altair how to leverage their solution.