Arm 2022 Neoverse Update, Roadmap
by Bernard Murphy on 09-27-2022 at 6:00 am

Arm recently provided their annual update on the Neoverse product line, targeting infrastructure from the cloud to communications to the edge. Chris Bergey (SVP and GM for infrastructure) led the update, opening with a shock-and-awe pitch on Neoverse deployment. He played up that Arm-based servers are now in every major public cloud worldwide: AWS of course, plus Google Cloud, Microsoft Azure, Alibaba and Oracle all support Arm-based instances. In 5G RAN, Dell, Marvell, Qualcomm, Rakuten and HPE announced partnerships, joining Nokia, Lenovo, Samsung and others in this space. NVIDIA announced their Grace server CPU, and HPE their ProLiant servers, also Arm-based. Like I said – shock and awe.

Arm 2022 Neoverse Update

Perspectives from cloud builders, cloud developers

The cloud/datacenter backbone was at one time purely dependent on x86-based servers. Those servers continue to play an important role, but clouds must now support a rapidly expanding diversity of workloads. CPU types have fragmented into x86-based versus Arm-based. GPUs are more common, supporting video processing, gaming in the cloud and AI training. Specialized AI platforms have emerged, like the Google TPUs. Warm storage depends on intelligent access to SSDs through Arm-based interfaces. Software-defined networking interfaces are Arm-based. DPUs – data processing units – are a thing now, a descriptor for many of these data-centric processing units. These application-specific platforms for the datacenter are all built on SystemReady® qualified Arm platforms.

Microsoft Azure made an important point: the cloud game is now about total throughput at lowest operational cost, not just about highest performance. Power is a particularly important factor; even today, power-related costs contribute as much as 40% of TCO in a datacenter. Mitigating this cost must touch all components within the center – compute instances, storage, AI, graphics, networking, everything. The Azure product VP stressed that Arm is working with them on a holistic view of TCO, helping them define the best solutions across the center. I assume Arm has similar programs with other cloud providers, shifting up to become a solutions partner to these hyperscalers.
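The 40% power share quoted above implies a simple rule of thumb for how power savings flow through to TCO. A minimal sketch of the arithmetic – the 40% share comes from the article, while the 25% power reduction is an assumed figure purely for illustration:

```python
# Power-related costs are stated as up to 40% of datacenter TCO.
power_share_of_tco = 0.40

# Assumed for illustration only: a fleet-wide 25% cut in power draw.
power_reduction = 0.25

# A saving on the power slice scales total TCO proportionally.
tco_saving = power_share_of_tco * power_reduction
print(f"TCO saving: {tco_saving:.0%}")  # prints "TCO saving: 10%"
```

This is why even modest per-instance efficiency gains matter to a hyperscaler: they multiply across a large slice of total cost.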

Arm enables cloud independence

A developer advocate at Honeycomb (which builds an analysis tool for distributed services) made another interesting point: the ubiquity of Arm-based instances in the major clouds provides cloud independence for developers. Of course, x86 platforms offer the same independence; I think the point here is that Arm has eliminated a negative through availability on a wide range of clouds. Honeycomb also incidentally highlight the cost and sustainability advantages; Arm is calling this the carbon-intelligent cloud. Young development teams like both, of course, but they also have an eye on the likely growing advantage to their businesses of deploying on more sustainable platforms.
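In practice, that cloud independence shows up in multi-architecture build and deploy tooling, where the one wrinkle is that hosts and container registries report the same architecture under different names. A hypothetical sketch (the alias table and helper are my own illustration, not from the article):

```python
import platform

# Hosts report "aarch64"/"x86_64"; container tooling uses "arm64"/"amd64".
# Normalizing lets the same deploy script target any cloud's Arm or x86 instances.
ARCH_ALIASES = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",
}

def normalized_arch(machine: str = platform.machine()) -> str:
    """Map a reported machine string to a container-style architecture name."""
    return ARCH_ALIASES.get(machine.lower(), machine)

print(normalized_arch("aarch64"))  # arm64 (e.g. a Graviton instance)
print(normalized_arch("x86_64"))   # amd64
```

Beyond a mapping like this, well-maintained language runtimes and base images mean most service code needs no architecture-specific branches at all.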

Product update

As a reminder, the Neoverse family breaks down into three classes. The V-series offers the highest performance per thread – the most important factor for scale-up workloads such as scientific compute. The N-series is designed to provide the highest performance per socket – the most important factor for scale-out workloads, good for (I’m guessing) massive MIMO basebands. The E-series is designed for efficient throughput in edge-to-cloud applications; think of a power-over-Ethernet application, for example.

The newest V-series platform is the V2, code-named Demeter. It offers improved integer performance, a private L2 cache to handle larger working datasets, and expanded vector-processing and ML capability. The platform now supports up to 512MB of system-level cache, a coherent mesh network with up to 4TB/s of throughput (!), and CXL for chiplet support, enabling 2.5D/3D coherent designs. NVIDIA Grace is built on the V2 platform, which is interesting because Grace is one half of the Grace Hopper platform, in which Hopper is an advanced GPU.

In the N-series, they plan an “N-series next” platform release next year with further improved performance per watt. They also have an E-series E2 update, and an “E-series next” release planned for next year. Not a lot of detail here.

About the competition

It seems clear to me that when Arm thinks about competition these days, they are not looking over their shoulder (at RISC-V); they are looking ahead at x86 platforms. For example, Arm compares performance on popular database applications between Graviton2 (AWS) and Xeon-based instances, measuring MongoDB running 117% faster than on the Intel instances. They also measured an 80% advantage over Intel in running BERT, a leading natural-language-processing model.
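Percentage-faster claims like these are easy to misread, so it is worth spelling out the interpretation: "117% faster" means 2.17x the baseline throughput, not 1.17x. A small sketch of that conversion (the helper is mine; the 117% and 80% figures are the article's):

```python
def relative_throughput(percent_faster: float) -> float:
    """Convert an 'X% faster' claim into a throughput multiple of the baseline."""
    return 1.0 + percent_faster / 100.0

# The quoted figures, read as multiples of the Xeon baseline:
print(relative_throughput(117))  # 2.17 (MongoDB)
print(relative_throughput(80))   # 1.8  (BERT)
```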

I’m sure Arm is also taking steps to defend against other embedded platforms, but the Neoverse focus is clearly forward, not back. You can read more HERE.
