ARM Answers Server Doubts
by Bernard Murphy on 12-07-2018 at 7:00 am

At ARM TechCon this year, the company announced the Neoverse brand, targeted at infrastructure and contrasting with the familiar Cortex brand for edge devices such as smartphones and IoT endpoints. Cortex was already used in infrastructure, in networking, base stations and the like, but Neoverse splits the infrastructure line of business from the edge line of business, complete with a roadmap for a dedicated family of processors.

(Image: datacenter)

Few would dispute that ARM has a big footprint in communication infrastructure, but the high end of the Neoverse plan, as presented at TechCon, is in servers. Here many would argue that the evidence for progress was less compelling. Servers from HPE (based on Cavium/Marvell processors) and Ampere, for example, show some progress, and ARM asserts a million units have shipped this year. But Qualcomm dropping out of servers was widely read as a negative, and some of us had started to wonder if ARM's plans for servers were more hope than substance. ARM may not be Apple or Samsung, but they're at a scale where a million units in a year is no longer particularly exciting. Where were the big wins?

ARM answered that question definitively when AWS (Amazon Web Services) very recently announced the immediate availability of Neoverse-based servers in their instance lineup. The Graviton processor at the heart of these servers was developed by Annapurna Labs (wholly owned by AWS). This news is compelling on a couple of levels. First, ARM-based servers are now part of the AWS instance lineup (EC2 A1 instances). That's about as big a win for ARM servers as you can hope to find. Second, AWS developed these processors themselves. That certainly helps with cost and power, but more importantly it signals an expanding need among cloud providers to differentiate. We knew about this around the edges: GPUs for deep learning and ASICs/FPGAs for network optimization and software-defined storage. Looking for advantage in custom servers over commercial solutions takes differentiation to a new level and is likely to cause some disruption in the value chain.
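For readers who want to try one of these instances, launching an A1 looks like any other EC2 request. Here is a minimal sketch using Python and boto3, assuming your AWS credentials and region are already configured; the AMI id is a placeholder you would replace with a real arm64 image for your region.

```python
# Minimal sketch: launch an ARM-based EC2 A1 instance with boto3.
# Assumes AWS credentials/region are configured. The AMI id below is a
# placeholder and must be replaced with a real arm64 (aarch64) image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="a1.medium",         # smallest Graviton-based A1 size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched ARM-based instance: {instance_id}")
```

The only ARM-specific detail is that the image must be built for the arm64 architecture; everything else in the request is standard EC2.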

I talked with Mohamed Awad, VP of the infrastructure line of business at ARM, to get more information on why ARM-based servers work in the AWS lineup and how this may evolve over time. Mohamed acknowledges that lower-cost instances in AWS EC2 services are a very visible advantage to cloud users, especially for ARM-based workloads. I have no doubt also that AWS is attracted to ARM's end-to-end IoT strategy, which should drive lots of traffic to their cloud. Why not make that as easy as possible?

I had to ask about performance. There are a number of comparisons of ARM-based server performance (EC2 A1 and Ampere, for example) which show these are not as fast as high-end Intel Xeon or AMD Epyc servers. Are ARM-based servers intended mostly to serve ARM-generated traffic and the low end of the cloud market?

Not at all, according to Mohamed. I'm probably not alone in thinking of datacenters as homogeneous ranks of high-end server blades with maybe a few special-case oddities like GPUs sprinkled around. But Mohamed told me that's already changing. Cloud workloads are not homogeneous, and there are multiple ways competitive providers can optimize services for those workloads, beyond deep-learning, network and storage accelerators. Less familiar may be support for web services (more data-throughput-heavy than compute-intensive, but needing to serve many clients in parallel), containerized microservices (a popular trend virtualizing components of a larger service) and application caching (like caching inside a device, but here caching state for an application across many devices). Could you do all of this with Xeon or Epyc servers? Probably. Could you do it more cost-competitively, and maybe better in distributed compute throughput, with custom servers? Absolutely.

The EC2 A1/Graviton instance is based on the ARM Cosmos platform, in turn based on the A72 and A75 cores. Following this, ARM plans to introduce, on a yearly beat, the Ares, Zeus and Poseidon architectures, each of which is intended to show ~30% incremental improvement in performance, along with new features. Can they catch up with the high-end Intel/AMD processors? Who knows, but clearly that isn't a necessary goal. There seems to be plenty of compute share they can grab in these rapidly evolving datacenter architectures.
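As a back-of-envelope check on what that cadence implies (my own arithmetic, assuming the ~30% figure compounds generation over generation rather than being measured against a fixed baseline), three generations beyond Cosmos would land at roughly 1.3 × 1.3 × 1.3 ≈ 2.2× the starting point:

```python
# Back-of-envelope: compound a ~30% per-generation gain over three generations
# (Ares, Zeus, Poseidon). Illustrative only; actual gains depend on workload.
baseline = 1.0
for gen in ["Ares", "Zeus", "Poseidon"]:
    baseline *= 1.3
    print(f"{gen}: ~{baseline:.2f}x Cosmos baseline")
# Poseidon: ~2.20x Cosmos baseline
```

That kind of cumulative gain would not necessarily close the gap with the fastest Xeon or Epyc parts, but it does suggest the performance story is meant to improve quickly.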

Finally, I asked Mohamed about the other cloud providers – Microsoft Azure, Google Cloud and the rest. He wouldn't tell me, but I have seen indications elsewhere that similar programs are underway. And frankly, if you were running those operations and you knew AWS was working on an added edge from ARM-based servers, wouldn't you be talking to ARM too?

Looks like ARM knew all along what they were doing in servers; they just didn't tell us. And we spent all our time looking in the wrong direction.
