Google’s Road Trip to RISC-V at Warehouse Scale: Insights from Google’s Martin Dixon

by Daniel Nenni on 12-21-2025 at 3:00 pm


In an engaging presentation at a recent RISC-V summit, Martin Dixon, Google’s Director of Data Center Performance Engineering, took the audience on a metaphorical “road trip” through the company’s vision for integrating RISC-V into its massive warehouse-scale computing infrastructure. Drawing parallels with Google’s earlier transition to ARM-based servers, Dixon outlined the opportunities, challenges, and necessary ingredients for bringing RISC-V to data center scale.

Google’s heterogeneous-computing journey is rooted in commodity x86 platforms, and the company, now 27 years old, has diversified steadily as its needs evolved. In the mid-2010s it began experimenting with ARM architectures, following the 2014 ARM server specification. This led to the 2022 launch of Tau T2A ARM instances and, more recently, the custom Axion ARM-based processors. Today, Google’s data centers already mix x86, ARM, and emerging architectures, including early RISC-V components. Dixon emphasized that heterogeneity and specialization are essential to overcoming the slowdown in Moore’s Law, enabling greater efficiency and performance at scale.

RISC-V’s openness and customization potential make it exciting, but Dixon cautioned it’s a “double-edged sword” without standards. He highlighted the need for baselines like the RVA23 profile and an upcoming RISC-V server platform specification to ensure compatibility for warehouse-scale deployment.
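To make the compatibility point concrete, a profile like RVA23 essentially turns "RISC-V support" into a checkable contract: a deployment tool can reject a machine whose ISA string lacks the profile's mandatory extensions. The sketch below illustrates that idea with a small, non-exhaustive subset of extensions; the actual RVA23 profile document defines the authoritative mandatory list.

```python
# Sketch: checking a RISC-V ISA string against an illustrative subset of
# extensions the RVA23 profile makes mandatory. The subset below is for
# illustration only; the profile spec is the authoritative source.

# A few RVA23-mandated extensions (illustrative, not exhaustive):
RVA23_SUBSET = {"m", "a", "f", "d", "c", "v", "zba", "zbb", "zbs", "zicond"}

def parse_isa_string(isa: str) -> set[str]:
    """Split an ISA string like 'rv64imafdcv_zba_zbb' into extension names."""
    isa = isa.lower()
    base, _, multi = isa.removeprefix("rv64").removeprefix("rv32").partition("_")
    exts = set(base)  # single-letter extensions: i, m, a, f, d, c, v, ...
    exts.update(part for part in multi.split("_") if part)
    return exts

def missing_for_rva23(isa: str) -> set[str]:
    """Return the subset extensions this ISA string does not advertise."""
    return RVA23_SUBSET - parse_isa_string(isa)

print(missing_for_rva23("rv64imafdc_zba_zbb_zbs_zicond"))  # vector 'v' missing
```

A fleet scheduler could run a check like this at enrollment time, so software built against the profile baseline never lands on a machine that cannot execute it.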

Using the road trip analogy, Dixon outlined key “ingredients” for success:
  • A roadmap — Standardized specifications with mandatory features like branch recording (similar to Intel’s LBR or ARM’s BRBE), side-channel-hardened crypto, and MMU support for security.
  • A cool car — High-performance server-class SoCs with at least 64 cores and support for 4GB+ memory per core, prioritizing performance, reliability, and maintainability.
  • Beyoncé — A humorous nod to Google’s internal “Beyoncé Rule” (from Beyoncé’s “Single Ladies”: “If you liked it, then you shoulda put a test on it”). Dixon stressed that critical functionality must have comprehensive tests to ease multi-architecture porting.
  • Friends — Strong community collaboration for a robust software ecosystem that “compiles and runs out of the box.”
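The "Beyoncé Rule" item is worth a concrete illustration. A classic cross-architecture pitfall is code that serializes data in the host CPU's native byte order; a golden-bytes test pins the wire format so any port that changes the encoding fails immediately. The example below is hypothetical, not from the talk:

```python
# Sketch: the kind of "put a test on it" test that catches architecture
# assumptions before a port does. Pinning the wire format to explicit
# little-endian ('<') keeps the bytes identical on x86, ARM, and RISC-V
# alike. Hypothetical example, not Google code.
import struct

def encode_record(record_id: int, value: float) -> bytes:
    # '<Qd' forces little-endian uint64 + float64, independent of host CPU.
    return struct.pack("<Qd", record_id, value)

def decode_record(blob: bytes) -> tuple[int, float]:
    return struct.unpack("<Qd", blob)

def test_wire_format_is_host_independent():
    blob = encode_record(42, 1.5)
    # Golden bytes: every architecture must produce exactly this encoding.
    assert blob == bytes.fromhex("2a00000000000000000000000000f83f")
    assert decode_record(blob) == (42, 1.5)

test_wire_format_is_host_independent()
```

With tests like this in place, a port to a new ISA either passes unchanged or fails loudly at the exact behavior that differs, which is what makes multi-architecture porting tractable at scale.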

Reflecting on lessons from porting to ARM, Dixon shared that Google’s top workloads (including YouTube, Spanner, and BigQuery) represent nearly half of its compute. Porting isn’t just about the biggest services: schedulers need a mix of large and small jobs to pack machines efficiently. Google ported over 30,000 packages through central efforts, automation, and AI-generated changes, enabling self-service porting for the long tail of workloads.
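The scheduler point can be shown with a toy bin-packing sketch: large jobs alone leave each machine with stranded capacity, while small jobs backfill the fragments. The job sizes below (fractions of a machine) are illustrative, not figures from the talk:

```python
# Sketch: why a scheduler wants both large and small jobs. First-fit
# packing of jobs (each a fraction of one machine's capacity) onto
# machines: small jobs backfill the gaps large jobs leave behind.
# Numbers are illustrative, not from the talk.

def first_fit(jobs: list[float], capacity: float = 1.0) -> list[list[float]]:
    machines: list[list[float]] = []
    for job in jobs:
        for m in machines:                       # place in first machine with room
            if sum(m) + job <= capacity + 1e-9:
                m.append(job)
                break
        else:                                    # no machine fits: open a new one
            machines.append([job])
    return machines

large_only = [0.6] * 4               # 4 machines, each left 40% idle
mixed = [0.6] * 4 + [0.4] * 4        # small jobs fill the leftover 40% slots

print(len(first_fit(large_only)))    # 4 machines, 60% average utilization
print(len(first_fit(mixed)))         # still 4 machines, now 100% utilized
```

This is why porting only the flagship services isn’t enough: without the long tail of small jobs available on the new architecture, its machines run half-empty.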

Developers’ fears of toolchain breakage proved unfounded; most issues were “boring” ones such as config files, build paths, and flaky tests. Rarer potholes included floating-point precision differences (resolved by standardizing on float128) and a small number of memory-ordering bugs. Overall, the transition was smoother than expected.
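The floating-point pothole has a simple root cause: IEEE-754 arithmetic is not associative, so anything that reorders operations between architectures (different compilers, vector widths, or fused multiply-add) can change the low bits of a result. The Python sketch below demonstrates the order sensitivity itself; it is an illustration of the failure class, not Google's actual bug:

```python
# Sketch: why floating-point results can drift across ports. IEEE-754
# addition is not associative, so a compiler or ISA that reorders or
# fuses operations can change the last bits of a result even though
# every individual operation is correctly rounded.

big, small = 1e16, 1.0

left_to_right = (big + small) - big   # 'small' is absorbed by rounding: 0.0
reordered     = (big - big) + small   # same three terms, result: 1.0

print(left_to_right, reordered)       # 0.0 1.0
```

A test suite that asserts bit-exact results across architectures (per the "Beyoncé Rule") surfaces this class of difference immediately, rather than letting it show up as a subtle production discrepancy.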

Looking ahead, Google is collaborating via RISC-V International on standards like QoS and RVA23, and as a founding RISE member, accelerating upstream work on Linux and LLVM. To “autopilot” the process, Google applied its Gemini AI model to 40,000 ARM porting edits, categorizing them to automate future changes. An AI agent now handles safe, gradual rollouts, often unnoticed by teams.
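The idea of mining past porting edits for automatable patterns can be sketched in miniature: bucket each edit by the kind of change it makes, then target the biggest buckets for automation. This toy classifier is a stand-in for the Gemini-based categorization the talk describes; the categories and regexes are hypothetical:

```python
# Sketch: bucketing porting edits by pattern so recurring categories can
# be automated. A toy stand-in for the AI-driven classification described
# in the talk; categories and regexes here are hypothetical.
import re
from collections import Counter

CATEGORIES = [
    ("build-flag",  re.compile(r"-m(arch|tune|cpu)=")),      # arch-specific flags
    ("config-path", re.compile(r"(x86_64|aarch64|riscv64)[-/]")),  # triple paths
    ("intrinsic",   re.compile(r"_mm_|__builtin_ia32")),     # x86 intrinsics
]

def categorize(diff_line: str) -> str:
    for name, pattern in CATEGORIES:
        if pattern.search(diff_line):
            return name
    return "other"

edits = [
    "-  CFLAGS += -march=haswell",
    "+  target_dir = 'lib/riscv64-linux-gnu'",
    "-  acc = _mm_add_ps(acc, v)",
    "   # flaky test retry bump",
]
print(Counter(categorize(e) for e in edits))
```

Once the dominant categories are known, a tool (or an AI agent, as in the talk) can apply the corresponding mechanical fix across the long tail of packages instead of editing each one by hand.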

For RISC-V, Dixon called for ratifying server specs, delivering capable SoCs, expanding test coverage, and embracing AI. Google, with RISE and RISC-V International, is funding academics with Gemini credits to advance AI-driven porting.

Dixon closed optimistically, quoting Jack Kerouac: let’s “lean forward to the next venture” with RISC-V at warehouse scale. His talk underscores Google’s commitment to open architectures, positioning RISC-V as a key pillar in the future of hyperscale computing.
