
Eric Xu’s Keynote at Huawei Connect 2025: Redefining AI Infrastructure

by Daniel Nenni on 09-20-2025 at 8:00 am

Key Takeaways

  • Eric Xu outlined Huawei's roadmap for AI infrastructure, emphasizing the need for scalable and efficient systems to support next-generation artificial intelligence.
  • Huawei plans to release updated generations of Ascend AI chips and Kunpeng processors over the next three years, including the TaiShan 950 SuperPoD and Atlas 950/960 SuperPoDs.
  • The introduction of UnifiedBus, a proprietary interconnect, aims to enhance communication between nodes and support hybrid computing environments.
  • Huawei is transitioning to an open-source strategy for its core AI software stack to foster collaboration and lower barriers for developers within the AI ecosystem.
  • The keynote highlighted Huawei's strategic response to geopolitical challenges, focusing on system-level innovation and alternatives to U.S.-centric AI platforms.

Eric Xu at Huawei Connect 2025

At Huawei Connect 2025, held in Shanghai, Eric Xu, the Rotating Chairman of Huawei, delivered a keynote that laid out the company’s ambitious roadmap for AI infrastructure, computing power, and ecosystem development. His speech reflected Huawei’s growing focus on building high-performance systems that can support the next generation of artificial intelligence while advancing the company’s technological self-reliance.

Setting the Stage

Xu began his keynote by reflecting on the rapid evolution of AI models and how breakthroughs over the past year have pushed the boundaries of computing. He noted that the increasing complexity of large models, particularly in inference and recommendation workloads, demands not just more powerful chips, but fundamentally new computing architectures. According to Xu, AI infrastructure needs to be both scalable and efficient—capable of handling petabyte-scale data and millisecond-level inference.

He also reminded the audience of the five key priorities he had previously outlined, such as the need for sustainable compute power, better interconnect systems, and software-hardware co-optimization. This year’s keynote built upon those principles and introduced Huawei’s vision for its next-generation systems.

New Products and Roadmap

One of the most significant parts of Xu’s speech was the unveiling of Huawei’s updated roadmap for chips and AI computing platforms. Over the next three years, Huawei will roll out several generations of Ascend AI chips and Kunpeng general-purpose processors. Each generation is designed to increase performance and density while supporting the growing needs of training and inference workloads.

Xu introduced the TaiShan 950 SuperPoD, a general-purpose computing cluster based on Kunpeng processors. It offers pooled memory, high-performance storage, and support for mission-critical workloads such as databases, virtualization, and real-time analytics. The design is intended to support diverse computing needs, with significant improvements in memory efficiency and processing speed.

On the AI side, Xu announced the Atlas 950 and Atlas 960 SuperPoDs. These are high-density AI compute systems capable of scaling to tens of thousands of AI processors. The upcoming Atlas 960 SuperCluster will combine over one million NPUs and deliver computing power measured in zettaFLOPS. This marks a shift toward ultra-large-scale AI systems, designed to handle foundation models, search, recommendation, and hybrid workloads.
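The scale of that claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming (purely for illustration — these are not Huawei’s published figures) exactly 1 zettaFLOPS of aggregate compute spread evenly across exactly one million NPUs:

```python
# Back-of-envelope check of the Atlas 960 SuperCluster claim.
# Assumptions (illustrative, not Huawei specs): exactly 1 zettaFLOPS
# total compute divided evenly over exactly 1,000,000 NPUs.
ZETTAFLOPS = 1e21        # 10^21 floating-point operations per second
NPU_COUNT = 1_000_000

per_npu = ZETTAFLOPS / NPU_COUNT
print(f"{per_npu:.0e} FLOPS per NPU")  # prints "1e+15 FLOPS per NPU", i.e. ~1 petaFLOPS each
```

Even under this idealized even split, each accelerator would need roughly petaFLOPS-class throughput, which helps explain why the keynote put so much weight on interconnect and system-level design rather than single-chip performance alone.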

To enable this, Huawei developed UnifiedBus, a proprietary interconnect that supports high-bandwidth, low-latency communication between nodes. It also supports memory pooling and intelligent task coordination. According to Xu, this interconnect is critical for scaling AI systems efficiently and supporting hybrid PoDs that combine AI, CPU, and specialized compute.

Open Source and Ecosystem Strategy

Another core element of the keynote was Huawei’s strong push toward openness. Xu announced that the company will fully open-source its core AI software stack, including its CANN compiler and virtual instruction set. Toolchains, model kits, and the openPangu foundation models will also become available to developers and partners by the end of the year.

This move toward open-source infrastructure is part of Huawei’s strategy to lower adoption barriers and encourage collaboration across the AI ecosystem. Xu emphasized that AI innovation cannot happen in silos, and by opening up its tools and platforms, Huawei hopes to enable more organizations to build on its technology.

Strategic Implications

Xu’s keynote also carried strategic overtones, reflecting Huawei’s response to geopolitical challenges and technology restrictions. With limited access to advanced semiconductor manufacturing, Huawei is shifting its focus toward system-level innovation—building powerful infrastructure using available nodes while maximizing performance through architecture and software.

The message was clear: Huawei is betting on large-scale infrastructure, hybrid compute systems, and interconnect innovation to maintain competitiveness in AI. The company aims to provide alternatives to traditional U.S.-centric AI platforms and chip providers, especially in markets seeking greater technological independence.

Bottom line: Eric Xu’s keynote at Huawei Connect 2025 outlined a bold vision for the future of AI infrastructure. From SuperPoDs and interconnect breakthroughs to open-source initiatives, Huawei is positioning itself as a central player in the next phase of AI development. If the company can execute its ambitious roadmap and foster a strong ecosystem, it may reshape the global AI landscape—especially in regions looking to build homegrown compute capabilities.

The full transcript is here.

Also Read:

MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices

AI Revives Chipmaking as Tech’s Core Engine

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

 
