Since I talked recently about AWS adding access to Arm-based server instances in their cloud offering, I thought it would be interesting to look further into other Arm-based server solutions. I had a meeting with Ampere Computing at Arm TechCon. They offer server devices and are worth closer examination as a player in this game.
First, the people at Ampere are heavy hitters. Start with Chairman and CEO Renee James, a past president of Intel. The CFO/COO is ex-Apple and ex-Intel, and almost everyone else is ex-Intel, either immediately or at some point in the past, including the architect and the VP of engineering, all with solid server backgrounds. I’ve also heard that they are raiding Marvell/Cavium for talent. I met with Matt Taylor, SVP of WW Sales and Biz Dev. Between Intel and Ampere, Matt was VP of sales for Qualcomm’s datacenter group. All in all, a pretty impressive lineup for a business targeting the cloud space. The company is funded by the Carlyle Group (first round), though no word on how much.
I had to ask Matt for his view on why QCOM exited servers. No real surprises, but it was good to hear from an insider. He said the business opportunity was strong (well, he would), but QCOM was distracted (just a bit). Paul Jacobs and Derek Aberle, who were supporters, left, and QCOM had to cut $1B in costs, for which datacenter was an easy target. Multiple reasons, fairly specific to QCOM, none of which really said anything about the general Arm-based server opportunity.
Ampere is going after the same target as Annapurna (AWS), except Ampere isn’t captive, so it is aiming at all the top-end cloud providers (the hyperscalers/super 8) – Google, Amazon, Microsoft, Facebook, Baidu, Alibaba, Tencent, and China Mobile – all of whom buy servers by the railcar load.
On specs, Matt offered that in current 16nm implementations the Ampere eMAG solution is comparable to Xeon Gold devices at half the cost, and to Epyc devices at half the power. Side-note on power: some analysts think cloud users won’t care – they just pay for usage time, so performance should be the only metric that matters. Wrong – power contributes significantly to total datacenter overhead through the cost of keeping the whole thing cooled. Your bill as a user is part runtime (and price) on the instance type you chose and part overhead, including cooling costs. So yes, power matters, even though it’s an indirect cost.
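To make the cooling-overhead point concrete, here is a minimal sketch of how power draw feeds into total datacenter electricity cost via PUE (Power Usage Effectiveness, the ratio of total facility power to IT equipment power). All the numbers below – wattages, electricity price, PUE – are hypothetical, chosen only to illustrate the arithmetic, not taken from Ampere or any vendor.

```python
# Illustrative only: how server power draw scales datacenter electricity cost.
# PUE = total facility power / IT equipment power; a PUE of 1.5 means every
# watt of compute costs an extra half-watt of cooling and other overhead.

def effective_power_cost(server_watts, hours, price_per_kwh, pue):
    """Electricity cost of running one server for `hours`, including the
    cooling/overhead captured by the facility's PUE multiplier."""
    kwh = (server_watts / 1000) * hours * pue
    return kwh * price_per_kwh

HOURS_PER_YEAR = 24 * 365  # 8760

# Two hypothetical servers with similar throughput but different draw:
high_draw = effective_power_cost(400, HOURS_PER_YEAR, 0.10, 1.5)
half_draw = effective_power_cost(200, HOURS_PER_YEAR, 0.10, 1.5)
print(f"400W server: ${high_draw:.0f}/yr, 200W server: ${half_draw:.0f}/yr")
```

At these made-up rates, halving the draw halves the yearly electricity bill per server – and across racks bought "by the railcar load," that overhead is exactly the indirect cost that shows up in what cloud providers must recover from users.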
Lenovo recently released its ThinkSystem HR350A rack server based on the eMAG processor, so it’s already possible to deploy servers based on these devices. Just like Arm, they stress scale-out applications (highly parallel operations like video serving, where it is easy to add more processors to handle more parallel requests) and similar applications where performance per dollar and performance per watt are important considerations.
Matt told me that they are at various stages (eval to deployment) with the big cloud service providers and are hearing similar themes on workloads well-suited to Arm-based servers, including storage, internal and external search, content delivery, in-memory database applications and (interestingly, in China) mobile gaming with cloud-based rendering. Some of these are accelerator options, but he stressed standard server applications as well, with differentiated capabilities that you couldn’t easily get on the usual platforms. Sadly, he didn’t want to share specific examples.
Overall, this sounds very consistent with the Arm story I wrote about earlier. Arm-based servers may not be as fast, unit for unit, as the best of the best from Intel and AMD, but (a) they’re a lot cheaper and lower power than those options and (b) you can build your own customized solutions optimized for higher throughput per dollar/watt on specific workloads. In some pretty high-traffic datacenter applications, the best of the best may not always be the best total system solution.