Everyone is aware of ARM’s dominance in mobile devices and its likely dominance in IoT, but what about servers? ARM has been making a play for this area, but conventional wisdom holds that fortress Intel will protect its server market at all costs. You’ll hear that servers are not so much about compute power as about I/O, and that no one knows I/O, with all its backward-compatibility requirements, better than Intel. Is that really the case, and is ARM charging a hill it cannot take? Or are changing market needs shifting to favor new entrants?
There was a very revealing presentation at ARM Techcon this year from the Linaro group which goes to the heart (or at least an important heart) of this topic. Linaro is an open-source collaboration to drive compatibility for the Linux kernel and other software on ARM-based platforms; it formed the Linaro Enterprise Group (LEG) in 2012 to focus on compatibility for server platforms. That in turn enables server SoC development from AMD (Seattle platform), Cavium (ThunderX platform), Applied Micro (X-Gene platform) and HiSilicon among others, aside from internal server development at Amazon (for AWS), Facebook and Google.
LEG is steadily moving ARM up the ranks in server support. The first step was enablement – the stuff you have to do to even play in the server/cloud space. This means UEFI (the modern replacement for BIOS), ACPICA for configuration and power management, KVM for kernel virtualization infrastructure, Xen for the hypervisor and OpenJDK for Java support. LEG has been busy developing patches, getting these approved and having them upstreamed into releases.
They then expanded focus to workload optimization, and this is where it gets really interesting. There are a lot of capabilities you need to run well on a Linux platform: LAMP, OpenStack, Docker, Ceph and more, but one area really points to a strategic focus – establishing ARM as a first-class citizen and officially supported platform in big data, specifically around Hadoop. Hadoop itself is a complex ecosystem, including Spark, Pig, Hive, Calcite, HBase and many other pieces. (For anyone who thinks software is easy compared to hardware, your head should be spinning by now.)
A central component in this ecosystem is H2O, which provides an interactive interface to an underlying database view of the data. A lot of the statistical analysis and model-building starts here. LEG ran benchmarking to look at scaling on a cluster of 6 Seattle (AMD) 8-core nodes, using a 14 GB dataset (airport landing and departure times). They found that both file-parsing and model-building scale linearly with memory and that, surprise surprise, the speed of the external network is limiting for performance. For 1Gb Ethernet, performance flattens out after 2 nodes, but for 10Gb Ethernet it remains linear with the number of nodes (at least up through 8 nodes).
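A simple max(compute, network) model shows why scaling behaves this way. In the sketch below only the 14 GB dataset size comes from the LEG benchmark; the per-node processing rate and the fraction of data each node's NIC carries during model synchronization are illustrative assumptions, chosen to reproduce the reported shape of the curves:

```python
# Toy model of why the external network limits cluster scaling.
# DATASET_GB is from the LEG benchmark; NODE_RATE_GBPS and
# NET_FRACTION are illustrative assumptions, not measured values.

DATASET_GB = 14.0      # airline arrival/departure dataset used by LEG
NODE_RATE_GBPS = 0.5   # assumed per-node processing rate, GB/s
NET_FRACTION = 0.125   # assumed share of the dataset each node's NIC
                       # must carry for model synchronization

def job_time_s(nodes: int, link_gbits: float) -> float:
    """Job time = max(per-node compute time, per-node network time)."""
    compute = (DATASET_GB / nodes) / NODE_RATE_GBPS
    # A single node does no inter-node traffic at all.
    network = 0.0 if nodes == 1 else (DATASET_GB * NET_FRACTION) / (link_gbits / 8)
    return max(compute, network)

for link in (1.0, 10.0):
    speedups = [job_time_s(1, link) / job_time_s(n, link) for n in (1, 2, 4, 8)]
    print(f"{link:>4.0f} GbE speedup vs 1 node:", [round(s, 1) for s in speedups])
```

With these numbers the 1GbE speedup stalls at 2x beyond 2 nodes (the NIC becomes the bottleneck), while at 10GbE it stays linear through 8 nodes, matching the behaviour LEG reported.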
This has an obvious consequence for integrating server nodes on-chip: such integration is valuable only insofar as you can sustain ~10Gb Ethernet-class communication between nodes. And that’s for up to 8 nodes; performance will flatten at some point beyond that, which will drive you to even higher speeds. It is unlikely you can do any of this with traditional fabrics. The people who are building large node-count SoCs (the Calxeda team now in Amazon Web Services, for example) almost certainly see their own proprietary fabrics as their primary technology advantage.
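A back-of-envelope calculation shows what that per-node figure implies for an on-chip fabric. The ~10Gb/s per node follows from the benchmark result above; the node counts, and the worst-case assumption that every node drives its full link at once, are illustrative:

```python
def aggregate_gbyte_s(nodes: int, per_node_gbits: float = 10.0) -> float:
    """Worst-case fabric load, assuming every node drives its full
    ~10Gb/s link simultaneously (illustrative, not a measured figure)."""
    return nodes * per_node_gbits / 8  # Gb/s -> GB/s

for nodes in (8, 16, 32, 64):
    print(f"{nodes:>2} nodes -> fabric must sustain ~{aggregate_gbyte_s(nodes):.0f} GB/s aggregate")
```

Even at modest node counts the aggregate quickly reaches tens of GB/s, which is why a fast, coherent proprietary fabric, rather than a traditional I/O interconnect, becomes the differentiator.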
Given all this, does ARM have a shot? Big data support is a new and potentially large market driving server growth (and therefore worth chasing); the playing field is leveled by the need for innovation in fast, coherent on-chip fabrics (where traditional I/O interface expertise doesn’t help); and the biggest customers don’t seem to have the patience to wait for a commercial solution, so they are building their own server chips and servers (which they pretty much have to do using ARM cores).
In short, as long as the ARM ecosystem can keep pace with big data needs, yes, they have a very real shot.