Lip-Bu Hyperscaler Cast Kicks off CadenceLIVE
by Bernard Murphy on 09-02-2020 at 6:00 am

Lip-Bu (Cadence CEO) sure knows how to draw a crowd. For the opening keynote in CadenceLIVE (Americas) this year, he reprised his data-centric revolution pitch, followed by a talk from a VP at AWS on bending the curve in chip development. And that was followed by a talk by a Facebook director of strategy and technology on aspects of their hardware strategy. CadenceLIVE: Lip-Bu+hyperscaler cast, all delivered in 60 minutes. Not bad.

Lip-Bu on Cadence

The Cadence top-level story remains very consistent. Data in one way or another is driving every aspect of innovation: in compute, in storage, in networking and in analytics. One of the obvious trends in compute is application-driven system design; witness Amazon, Google, Facebook, Baidu, Tencent and many others building their own hardware. Some design is very domain-specific, in AI accelerators for example. Systems companies are also contributing to innovation in storage (Facebook was very instrumental in driving NVMe data caching) and in networking, with reconfigurable options for on-the-fly virtualization optimization. There's plenty of basic innovation as well: networking bandwidths soaring towards 50 Tbps, and all kinds of new warm-to-hot memory technologies: phase-change, magnetic, quasi-volatile and others.

Cadence's role in supporting this explosion of new technology continues under the theme of Intelligent System Design. "Design" encompasses the core design technologies: IP, functional verification, digital IC design and signoff, custom design and simulation. "System" covers system interconnect (Allegro, not just for PCB but also packaging and 3D), implementation analytics and high-speed RF design (this is new; I'll say more in my next CadenceLIVE blog), plus system and embedded software partnerships leveraging the Green Hills relationship. "Intelligent" applies AI and machine learning for further optimization. A consistent direction, with incremental growth around system implementation and analytics, and expansion into secure embedded software and RF.

Nafea Bshara on design at AWS

Nafea co-founded Annapurna Labs, subsequently acquired into Amazon/AWS. These are the folks who developed the Arm-based AWS Graviton processor and its follow-ons, now available in the AWS cloud. While Graviton makes headlines, they're also working on AWS Inferentia for machine learning inference and AWS Nitro for cloud hypervisor, network, storage and security.

Good stuff, but I was especially interested in his views on the benefits of design in the cloud. I wrote another blog on this topic recently, arguing that established cloud use in other departments in a design enterprise—finance, HR, legal—together with security and liability concerns, all tilt the scale towards cloud-centric use. All valid arguments but they don’t speak to many designers who aren’t directly involved in financial and legal concerns. Nafea talked about engineering concerns. Nafea’s group switched from their own datacenter to the cloud when they moved to 16nm. Yeah, they’re in AWS, but they’re still measured on design deliverables. They wouldn’t have switched if doing so didn’t accelerate meeting their goals.

The benefit of the cloud in engineering terms

Nafea talked about the relative predictability of compute demand, which allows a design team to take advantage of spot pricing for much of its activity, while still surging above that level as needed at demand-based pricing. When you're done, or when demand returns to the forecast baseline, you stop paying for what you don't need.
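To make that cost argument concrete, here is a minimal sketch of the kind of arithmetic involved. The rates, workload numbers and function names are all hypothetical (not actual AWS pricing): the point is simply that billing a predictable baseline at a discounted spot rate and only the surge at the on-demand rate beats paying on-demand rates for everything.

```python
# Illustrative cost-model sketch; rates and workloads are made up.
SPOT_RATE = 0.10       # $/core-hour, discounted interruptible capacity
ON_DEMAND_RATE = 0.34  # $/core-hour, pay-as-you-go capacity

def blended_cost(hours_by_day, baseline):
    """Cost when the predictable baseline runs on spot capacity and
    only the surge above it is billed at the on-demand rate."""
    total = 0.0
    for hours in hours_by_day:
        total += min(hours, baseline) * SPOT_RATE            # baseline at spot
        total += max(hours - baseline, 0) * ON_DEMAND_RATE   # surge on demand
    return total

def on_demand_cost(hours_by_day):
    """Cost when every core-hour is billed at the on-demand rate."""
    return sum(hours_by_day) * ON_DEMAND_RATE

# A week of core-hours: steady regressions with a mid-week signoff surge.
week = [1000, 1000, 1200, 5000, 4000, 1000, 800]
print(blended_cost(week, baseline=1200))
print(on_demand_cost(week))
```

The surge days dominate the bill either way, but the blended strategy pays the discounted rate for the steady regression load, which is where most of the hours accumulate over a project.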

He contrasted that with the classical datacenter update approach: periodic cross-group debates on what everyone wants, all different of course. Some want high-end servers, others masses of mid-range servers; lots of cold-storage disks versus a tradeoff with NVMe warm storage; support for fast remote-site access; and so on. You wrestle and wrangle and wind up with some kind of compromise which, at a big price tag, fails to completely satisfy anyone. The cloud approach is different. Every design manager gets a budget to use however they choose. They buy access to whatever they want, unconstrained by other departments' needs: the latest and greatest options the cloud provider has to offer if necessary, or many lower-priced servers for bulk regressions if that's what they need. Each design manager has complete control over how they manage their workload. That is a pretty compelling engineering motivation to switch.

Vijay Rao on hardware infrastructure at Facebook

Vijay talked about datacenter challenges at Facebook. A lot of this was on the very top-level facilities aspects of datacenters: construction, power distribution, cooling, that sort of thing. Fascinating stuff, though not directly relevant to much of my audience. I'll call out a few things that struck me. We all know that Facebook hosts huge traffic—billions of users across Facebook, Messenger, Instagram and WhatsApp—and that traffic can be pretty spiky around holidays and major world crises. Much of it is high data volume: image/video upload, web serving, video chats. Thanks to many more of us working from home now, demand is spiking to unprecedented levels. Managing all this traffic with a continued strong user experience places extraordinary demands on the hardware. Which, incidentally, is why Facebook is a leader in initiatives like NVMe and the Telecom Infra Project.

Vijay talked particularly about AI development at Facebook. They use AI for bots in Messenger, to generate video trailers, to enable VR and AR, and to run translations between languages. They use AI to catch policy violations (a sensitive topic these days). He talked about their development of a common platform for general compute and AI inference. They share this work through the Open Compute Project, an organization they founded in 2011, which is now supported by all the big names in technology certainly, but far beyond as well (Shell and Goldman Sachs, for example). Lots of leading-edge high-volume and high-performance demand.

A fascinating kickoff to CadenceLIVE 2020. Check HERE for more on Intelligent System Design.

Also Read

Quick Error Detection. Innovation in Verification

The Big Three Weigh in on Emulation Best Practices

Cadence Increases Verification Efficiency up to 5X with Xcelium ML
