
Qualcomm’s AI Chip Push Targets Data Centers

Daniel Nenni

On October 27, 2025, Qualcomm unveiled the AI200 and AI250, next-generation AI inference chips set to launch in 2026 and 2027, respectively, accelerating its data center ambitions. Tailored for cost-effective AI deployment, these chips challenge Nvidia’s roughly 80% market dominance with lower total cost of ownership (TCO) and high efficiency for large language models and multimodal AI. The AI200, leveraging Qualcomm’s Hexagon NPU, supports up to 768GB of LPDDR memory per PCIe card, while the AI250 introduces near-memory computing for over 10x higher effective bandwidth.

Qualcomm’s strategic moves include acquiring Alphawave for $2.4 billion and partnering with Nvidia for NVLink integration, bolstering its ecosystem. Saudi Arabia’s Humain, committing to 200 megawatts, signals strong demand. Shares jumped nearly 15% to $182.23, reflecting market optimism. However, Qualcomm must overcome Nvidia’s ecosystem lead and Intel’s upcoming Crescent Island to gain traction. With AI inference demand soaring, Qualcomm’s modular, power-efficient chips could disrupt the $200 billion market by 2030.

 
The two AI inference chips:
  • AI200: Set to launch in 2026, available as individual chips, PCIe cards, or full liquid-cooled server racks, featuring Qualcomm’s Hexagon Neural Processing Unit (NPU) and supporting up to 768GB of LPDDR memory per card.

  • AI250: Scheduled for release in 2027, this chip introduces near-memory computing for over 10x higher effective bandwidth and reduced power consumption, enhancing efficiency for large-scale AI inference workloads.
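One reason the large per-card memory matters for inference TCO: fewer cards are needed just to hold a model's weights. A rough back-of-envelope sketch below uses the 768GB-per-card figure from the announcement; the model sizes and 8-bit precision are purely hypothetical illustrations, not Qualcomm numbers.

```python
import math

# From Qualcomm's AI200 announcement: up to 768 GB LPDDR per PCIe card.
CARD_MEMORY_GB = 768

def cards_needed(params_billions: float, bytes_per_param: float = 1.0) -> int:
    """Minimum cards required to hold the weights alone.

    Ignores KV cache, activations, and runtime overhead, which add
    substantially in practice. bytes_per_param=1.0 assumes 8-bit weights.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9
    return math.ceil(weights_gb / CARD_MEMORY_GB)

# Hypothetical model sizes, in billions of parameters:
for size in (70, 400, 1000):
    print(f"{size}B params @ 8-bit: {cards_needed(size)} card(s)")
```

Under these assumptions even a 400B-parameter model at 8-bit fits in a single card's memory, which is the kind of consolidation the TCO pitch rests on.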
From what I heard, they are on TSMC N3, but it could be N4. Does anyone know for sure? Maybe the AI200 is N4 while the AI250 is N3? Inquiring minds want to know.
 