EDA: Cloud Ramifications

DanielPilling

New member
Dear SemiWiki Community,

Software as a service has displaced traditional on-premises software in many domains. EDA seems to have lagged this trend, but more recently the discussion around EDA in the cloud has accelerated. There are a few arguments to consider:

1) EDA licenses tend to be underutilized because customers have to provision for peak usage rather than average usage. This suggests that customers should be able to generate savings by paying 'by the hour', though whether they actually do comes down to pricing discipline in this consolidated industry (see the sketch after this list)
2) Cloud EDA would allow for effectively unlimited, scalable hardware supply. For example, verification could be hugely expedited, up to the limits of the software's parallelization. Does that mean that emulation hardware will become superfluous?
3) Chip startups will require much less capital to get started, and as such cloud EDA should enable the smaller tail of companies to really participate in chip design
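A minimal back-of-the-envelope sketch of point (1) in Python, with every price and utilization figure invented purely for illustration (these are not real EDA prices):

```python
# Hypothetical comparison: annual licenses provisioned for peak demand
# vs. pay-by-the-hour licenses billed on actual usage.
# All numbers are illustrative assumptions, not real EDA pricing.

ANNUAL_SEAT_COST = 30_000     # assumed cost of one annual license
HOURLY_SEAT_COST = 25         # assumed on-demand cost per seat-hour
WORK_HOURS_PER_YEAR = 2_000

peak_seats = 50               # seats needed during crunch time
avg_utilization = 0.4         # average fraction of peak actually in use

annual_model = peak_seats * ANNUAL_SEAT_COST
hourly_model = peak_seats * avg_utilization * WORK_HOURS_PER_YEAR * HOURLY_SEAT_COST

print(f"Provision for peak (annual): ${annual_model:,}")
print(f"Pay by the hour (average):   ${hourly_model:,.0f}")
# Savings exist only if the vendor's hourly rate is low enough --
# the 'pricing discipline' caveat in point (1).
```

With these made-up numbers the hourly model wins, but doubling the assumed hourly rate erases the saving entirely, which is exactly the pricing-discipline question.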

Any views / discussions would be hugely appreciated!
 
I don't know enough about hardware emulation to comment on (2), but I have comments to make on (1) and (3).

I like your point about peak usage. I've had the experience of working in startups where, when all on-site and remote engineers were working, there was an "odd man out" who would have to find something else to do because he couldn't get a license. Of course, that's not really peak usage, but under-provisioning of the licenses. So, pay-by-the-hour? Well, the big EDA shops will sell licenses by the month, and maybe the week, which people use to get extra simulator licenses for "crunch time". Note that EDA companies do charge more for those licenses than they do for the annual ones, so there's usage optimization analysis to be done. Of course, none of that requires the cloud. All of the experiences I've mentioned occurred with on-site software.
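To make that "usage optimization analysis" concrete, here's a small break-even sketch; the prices are hypothetical placeholders, not actual vendor quotes:

```python
# Break-even point between an annual license and stacking short-term
# 'crunch time' licenses, which carry a per-month premium.
# Both prices below are hypothetical.

annual_price = 30_000    # assumed 12-month license
monthly_price = 4_000    # assumed 1-month license (note the premium)

break_even_months = annual_price / monthly_price  # = 7.5 here

print(f"An annual seat pays off past {break_even_months:.1f} months of use.")
# Below that threshold, renting by the month wins despite the premium --
# which is why teams buy extra simulator licenses only for crunch time.
```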

As for serving startups, the big EDA companies have pretty much exactly what you're talking about already. I know for a fact that Cadence has a cloud-based offering specifically meant for startups: remote desktops running in their cloud, accessed by customers online. One can easily imagine this being particularly compatible with a "bring your own computer" policy, since the personal machine just serves as the messenger, so to speak. The real question is whether you keep this going after later rounds of funding and/or successful projects bring in revenue. In my experience, hardware companies tend to start paranoid and get worse as they grow, so when you find yourself able to afford on-site servers and pricey workstations, you get those, because no one wants the crown jewels of their business left floating around in someone else's cloud. Besides, the price of the hardware is much less than that of the software licenses.

Now, at this point, I feel obliged to mention that no one seems to have any problem locking up their IP in proprietary schematic and layout formats that you won't be able to even view, never mind edit, if you stop paying the EDA company; I suppose we have a split personality on that issue as an industry. Of course, maybe you can't view it, but neither can your competitor if it's sitting on your own hardware.
 
There has been cloud content at the past few Design Automation Conferences; I spoke at the Cloud Pavilion last year. To me it's all about business models. Clearly semiconductor design can be done in the cloud faster and more effectively than on private servers. The problem is transitioning the EDA licensing model to the cloud without EDA companies taking a big revenue hit, the same type of revenue hit EDA companies took when transitioning from perpetual to time-based licenses in the mid 1990s:

https://semiwiki.com/wally-rhines/7754-honey-i-shrunk-the-eda-tam/

DAC 2020: From EDA to Design on Cloud, Machine Learning, Embedded Systems and More
As the premier conference for the design and design automation of electronic circuits and systems, the 57th Design Automation Conference program has expanded to also include many verticals closely integrated with and/or dependent on cutting-edge electronic design automation. Along with a large exhibit floor featuring top EDA, design on cloud and IP companies, stellar keynote sessions and endless networking, the topics below will be represented on both the industry and academia portions of the DAC program.

MONDAY July 20, 10:30am - 12:00pm
TOPIC AREA: EDA
EVENT TYPE: MONDAY TUTORIAL

Tutorial 2: EDA in the Cloud – Best Practices for Cloud Migration

Speakers:
Rob Lalonde - Univa Corp., Hoffman Estates, IL
Ilias Katsardis - Google, Inc., London, United Kingdom
Whether in semiconductor design, life sciences or deep learning, building high-quality products and getting to market quickly often demands extensive computer simulation. Enterprises compete in part based on the scale, performance and cost-efficiency of their high-performance computing (HPC) environments. Yet in the world of EDA, the stakes are even higher when it comes to device validation and regression testing on designs that require massively compute-intensive operations. Modern VLSI and SoC designs comprise millions of gates, and even minor design changes can require millions of digital and analog simulations to be re-run to ensure that the device still functions and has not regressed.

Furthermore, with tape-out costs ranging from 10 to 15 million dollars, organizations cannot afford to make a mistake in device design and verification. Today’s EDA design teams are looking for new ways of doing business in order to achieve their goals at a faster pace, yet it is becoming increasingly difficult to run simulations on an existing on-premises cluster without projects taking weeks to complete. This is why cloud environments are becoming an increasingly compelling option for EDA workloads. In addition, with advances in cloud offerings and more cloud-friendly licensing models from software vendors, the barriers to running EDA in the cloud are falling away.

This talk, presented jointly by Google and Univa, will discuss best practices for moving EDA workloads to the cloud. The presenters will begin by addressing the state of the cloud market and the opportunities and challenges of cloud computing within the unique realm of EDA, and then discuss solutions to ease an organization’s journey to the cloud. To demonstrate these latter points, a use case from a leading application-specific integrated circuit (ASIC) developer that designs and produces high-end chips and semiconductor IP will be introduced. With an ever-increasing workload and demand for capacity, Google Cloud’s service-on-demand model and Univa Grid Engine were selected to help the company extend its workloads to the cloud. The presenters will also discuss how these solutions helped it manage its complex ASIC design workloads automatically, maximize shared resources and accelerate the execution of any container, application or service, all while increasing ROI and improving overall results.

Rob Lalonde Bio: Robert Lalonde is the vice president and general manager, cloud, at Univa. He brings over 25 years of executive management experience to lead Univa's accelerating growth and entry into new markets. Rob has held executive positions in multiple successful high-tech companies and startups. He possesses a unique, multi-disciplinary skill set, having worked in sales, marketing and business development, and having held CEO and board positions. Rob has completed MBA studies at York University's Schulich School of Business and holds a degree in computer science from Laurentian University.

Ilias Katsardis Bio: Ilias Katsardis is an HPC product specialist (EMEA) at Google. In this role, Ilias brings over 14 years of experience in the cloud computing and high-performance computing industries to promote Google Cloud’s state-of-the-art infrastructure for complex HPC workloads. Previously, he worked as an applications analyst at Cray Inc., where he was a dedicated analyst to the European Centre for Medium-Range Weather Forecasts (ECMWF), and, prior to that, was an HPC application specialist at ClusterVision. Ilias also founded Airwire Networks in 2006.
 
Hi Sunny, all great points. The last one in particular is intriguing; it's something I had not thought about before.
Hi. Sorry to fall silent like that; just updated my notification preferences.

Assuming you're talking about my beef with file formats, well, I suppose that's more of an analog thing. Digital design already has standardized file formats for its baseline IP: VHDL and Verilog. What comes after is normally the result of synthesis tools. Even then, things like constraint files and power intent go into text files that can be read anywhere with a text editor. And on top of that, their formats are often open standards to boot!

Analog IP, OTOH, is defined at different levels of abstraction, first by schematics, then by custom layout. All of that involves vector graphics files that have always been tool-dependent. There's some wiggle room for the mask layout, since GDSII has served as a de facto standard for decades now and many tools exist that can read it. That being said, layout device generators are proprietary and tool-dependent, and tend to depend on the foundry supporting the toolset. The baseline layout, including generated devices, again tends to sit in a tool-specific database; GDSII mostly serves as an export format.

So, what's my point with all this? Well, you mentioned cloud EDA serving chip start-ups, and EDA companies are thinking about this: look at Mentor's "design enablement" program. Personally, I think that to properly enable innovation and chip starts through the founding of new chip start-ups, we need a broader continuum of tools than we currently have. We need capable FOSS tooling to get that crazy idea off the ground, tinkering at home on the weekend on your own computer hardware. We need entry-level tooling when things get serious, to allow that core group of 3 to 6 people to get a finished chip put together on a shoestring budget while the small team is living on savings. Well-developed FOSS tools might be enough for that first effort; cheaper cloud-based EDA could serve as a next step. The more normal in-house licenses on powerful workstations and server racks can come in after that, supplemented by cloud-based simulation, especially during "crunch time".

For all of the above to work smoothly in the analog space, we really need open schematic and layout formats, and ideally, these open formats, maybe 2 competing ones, should serve as the default format all the tools use, like the current state in the RTL world with Verilog and VHDL. EDA companies won't want to give up that lock-in, but it would be the best thing for the industry.

There are interesting things happening right now, with the SkyWater fab announcing an open PDK. And in this case, they really mean open, with design rules and models available without signing an NDA. You can look at it on GitHub right now, though it's currently very incomplete. Also, as is the way with all things in this industry, the real emphasis is on the digital; see the above namedrop of Verilog and VHDL. There's a lot of talk about supporting the OpenROAD project. However, they do also talk about FOSS analog; it's just that none of the parties driving the project seem to know exactly what tools would be involved. BAG is mentioned, though in its current form that's a FOSS tool built on top of Cadence Virtuoso. Calibre is also mentioned, but again, not a FOSS tool. DARPA's Electronics Resurgence Initiative, of which the OpenROAD project is a part, is seeking to drive the founding of new companies, and it is putting a lot of emphasis on FOSS tooling. At the moment, it hasn't really focused on analog design, but maybe when (if?) it does, they'll take a moment to look at schematic and layout file formats.

Think I'll quit there; that's enough rambling. To be clear, FOSS stands for Free/Open Source Software; see the skywater-pdk repository on GitHub and the OpenROAD project.
 
Hi Sunny,

The major EDA ISVs all seem to be reaching for the clouds. I spent two years in Cadence Cloud Business Development and interacted with customers ranging from 10 employees to more than 100,000. As opposed to open file formats, the main message I heard was: designers want one cloud (rather than multiple) to set up, learn, and use. No one wanted a Mentor cloud, a Cadence cloud, a Synopsys cloud and so on; they wanted one common cloud platform across all tool flows. Given that the vendors do not allow their IP or software offerings to be hosted by a competitor, the ISV cloud initiatives will be hard pressed to gain mass proliferation and deliver the benefits of HPC in the cloud. Furthermore, the investment to create and deliver a leading-edge PaaS solution to the market would run upwards of $30M in development costs, and the EDA ISVs serve only semiconductor customers, so that investment would be next to impossible to recoup across a single market vertical.

When I left Cadence, I continued my passion to deliver the industry a cloud platform that can host all software applications and deliver the power of nearly infinite compute (constrained only by budgets and the parallelization efficiency of the applications). I had worked with PaaS providers while I was at Cadence, and I also spearheaded a partnership between Cadence and pure PaaS providers called the Cadence Passport Program. My customers would commonly have mixed-vendor workflows, so I wanted to find a PaaS that could help.

After much searching, Rescale was the only PaaS that fit the bill. Rescale naturally became the inaugural partner of that program.

In parallel to their work with Cadence, Rescale was deeply engaged with Samsung as part of an initiative to bring SAFE CDP to the cloud for Samsung Foundry customers. The SAFE CDP demonstrated a 30% improvement in time to market (TTM) for Gaon Chips.

Rescale is also an in-kind partner to Silicon Catalyst and delivers the platform to Silicon Catalyst portfolio companies. As part of that program, design startups are given access to the platform and some credits to get started. This gives companies a chance to operate with resources far beyond what they could build on-prem.

All of this hard work resulted in Rescale onboarding a majority of the market-leading products from the major EDA ISVs. Today, Rescale is able to offer the design community a rich portfolio of EDA applications, all deployed in the cloud via a “bring your own license” usage model. The beauty of this approach is that the end user can simply set up a Rescale account, access preinstalled EDA software, point to their own license server (check with your vendor) and leverage the scale of the cloud for any design-related activities. The time it takes for a new customer to first log into the Rescale platform and launch a job is generally less than one hour. So the challenge of providing access to EDA software applications in the cloud has been solved.

The next challenge is to improve the flexibility of ISV licenses, so that more people can reduce their TTM by 30% with compute in the cloud.

For example, if you needed to run 20,000 verification jobs that take an hour each on one core, you could do all 20,000 at once in the cloud with flexible on-demand style licenses, reducing your run time to just one hour. This kind of flexibility would allow companies to leverage additional parallelism when needed.
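A quick sketch of that arithmetic in Python; the job counts mirror the example above, and the license-count tiers are arbitrary:

```python
import math

# Wall-clock time for a 20,000-job regression (one core-hour per job)
# as a function of how many licenses/cores run concurrently.

jobs = 20_000
hours_per_job = 1

for licenses in (100, 1_000, 20_000):
    wall_hours = math.ceil(jobs / licenses) * hours_per_job
    print(f"{licenses:>6} concurrent licenses -> {wall_hours:>4} hour(s)")

# The total compute (20,000 core-hours) never changes; flexible
# on-demand licensing only changes how fast you can burn through it.
```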

A business model change of that magnitude must be well thought out, so I wouldn’t expect anything soon. In other segments, the ISVs all ended up with improved revenue and happier customers, so there is clear motivation to go there. Rescale’s PaaS offering has a rich feature set to enable billing by usage, so the technology is readily available if any of the EDA ISVs decide to make that move quickly; there is no reason to build when you can partner.

In summary, the challenges with EDA software in the cloud are close to being fully resolved. Customers will need to lobby their EDA vendors to allow software to be accessed in time increments that align with compute access, so that the industry can realize the productivity of applying massive compute to regression runs, full-chip STA, library cell characterization, LVS/DRC, OPC and other HPC tasks that parallelize across hundreds or thousands of cores. Then the industry will be able to fully leverage the power of near-infinite compute to deliver massive TTM gains, resulting in better silicon, sooner.

Now that engineers have access to all the compute power of the cloud, companies also need the ability to see how resources are consumed and how budgets are managed. Rescale has worked with many of our customers and partners to ensure management teams have full visibility into the budget being spent, by which project and by which engineer.

The time to design in the cloud and beat your competitors to market is now.
 