When I ask my customers about their cloud strategy, they all tell me “none”. The main reason they give is a red herring: “The legal department will never allow our IP outside our walls”.
Security issues on the cloud are largely solved, as proven by the fact that banks have no problem using external clouds. Behind the curtain, the real reason for the lack of a push toward external clouds is the mismatch between the needs of engineering computing and what the cloud offers.
Cloud providers tout the benefits of agility and elasticity of an external cloud, and how well it fits the needs of organizations with spiky workloads. This is not compelling to our most sophisticated customers: they constantly run a background load of random tests on their chips, before, during, and even after tapeout. Plus, they have multiple chips in the pipeline, so the load on the computing resources is always sustained.
In the past decade, EDA has benefited greatly from the Linux revolution. Linux brought higher speed and lower cost. Cloud Computing brings neither, at least not in engineering computing.
As technology progresses, it is possible that costs will go down and data transfer latencies will shrink. By then, the EDA licensing model may also have evolved. Today, as far as we know, the licensing model is another barrier to adoption of cloud bursting, for it does one no good to deploy 1,000 new cores in an external cloud if one does not also have 1,000 additional simulation licenses to go with them.
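The licensing bottleneck can be made concrete with a back-of-the-envelope sketch (hypothetical and illustrative; the function name and numbers are mine, not from any product):

```python
# Illustrative sketch: effective simulation throughput is bounded by
# whichever resource runs out first -- cores or licenses.
def effective_parallelism(cores: int, licenses: int) -> int:
    """Each running simulation needs one core and one license."""
    return min(cores, licenses)

# On-premises: 1,000 cores matched by 1,000 licenses.
print(effective_parallelism(1000, 1000))   # 1000 concurrent jobs

# Cloud burst: 1,000 extra cores, but no extra licenses --
# the added hardware buys no additional throughput.
print(effective_parallelism(2000, 1000))   # still 1000 concurrent jobs
```

In other words, bursting capacity scales throughput only up to the license count; past that point, the extra cores sit idle.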
This puts the big EDA vendors (SNPS, CDN, MENT) in an advantageous position as providers of cloud computing services for our community, although such offerings will be slanted toward single-vendor solutions as opposed to a best-in-class approach.
From RTDA’s point of view, as a provider of software to manage all computing resources, we remain neutral with respect to cloud computing. If it happens, we are ready: we have experimented both with a cloud cluster that shares licenses with the main cluster and with a hybrid solution that shares data between the local cluster and the cloud machines.
For now, we keep our focus on improving our NetworkComputer scheduler, in order to provide the highest possible performance for processing our customers’ workloads using all available licenses and all available computing resources.