Intel to Expand AI Accelerator Portfolio with New GPU

Brady

Well-known member
Published October 14, 2025
News, Artificial Intelligence


Intel announces future inference-optimized data center GPU code-named Crescent Island


What’s New: Today at the 2025 OCP Global Summit, Intel announced a key addition to its AI accelerator portfolio: a new Intel Data Center GPU code-named Crescent Island, designed to meet the growing demands of AI inference workloads and offering high memory capacity and energy-efficient performance.

“AI is shifting from static training to real-time, everywhere inference—driven by agentic AI,” said Sachin Katti, CTO of Intel. “Scaling these complex workloads requires heterogeneous systems that match the right silicon to the right task, powered by an open software stack. Intel’s Xe architecture data center GPU will provide the efficient headroom customers need—and more value—as token volumes surge.”

Why It Matters: As inference becomes the dominant AI workload, success depends on more than powerful chips—it requires systems-level innovation. From hardware to orchestration, inference needs a workload-centric, open approach that integrates diverse compute types with an open, developer-first software stack—delivered as systems that are easy to deploy and scale.

Intel is positioned to deliver this end-to-end—from the AI PC to the data center and industrial edge—with solutions built on Intel Xeon 6 processors and Intel GPUs.

By co-designing systems for performance, energy efficiency, and developer continuity—and collaborating with communities like the Open Compute Project (OCP)—Intel is enabling AI inference to run everywhere it’s needed most.

About the GPU: The new data center GPU code-named Crescent Island is being designed to be power- and cost-optimized for air-cooled enterprise servers and to incorporate large amounts of memory capacity and bandwidth, optimized for inference workflows.

Key features include:
  • Xe3P microarchitecture with optimized performance-per-watt
  • 160GB of LPDDR5X memory
  • Support for a broad range of data types, ideal for “tokens-as-a-service” providers and inference use cases
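For a rough sense of what 160GB of on-card memory buys for inference, here is a back-of-the-envelope sizing sketch. The model sizes and bytes-per-parameter figures below are illustrative assumptions, not numbers from the announcement, and the sketch ignores KV cache, activations, and runtime overhead.

```python
# Rough check of which dense LLM weight footprints fit in a 160 GB card.
# Model sizes and formats are illustrative assumptions only.

CAPACITY_GB = 160

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "fp8":       1.0,
    "int4/fp4":  0.5,
}

MODEL_SIZES_B = [8, 70, 180, 400]  # parameters, in billions (hypothetical)

for params_b in MODEL_SIZES_B:
    for fmt, bytes_per in BYTES_PER_PARAM.items():
        weights_gb = params_b * bytes_per  # 1e9 params * bytes / 1e9 = GB
        verdict = "fits" if weights_gb <= CAPACITY_GB else "does not fit"
        print(f"{params_b:>4}B @ {fmt:<9}: {weights_gb:7.1f} GB of weights ({verdict})")
```

The takeaway is simply that the lower-precision formats are what let models at this scale sit in a single card's memory, which is presumably why the feature list pairs the large LPDDR5X capacity with broad data-type support.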
Intel’s open and unified software stack for heterogeneous AI systems is currently being developed and tested on Arc Pro B-Series GPUs to enable early optimizations and iterations. Customer sampling of the new data center GPU code-named Crescent Island is expected in the second half of 2026.
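The announcement doesn't name the stack, but Intel GPUs (including the Arc Pro B-Series mentioned above) are commonly targeted from frameworks such as PyTorch through the XPU device backend built on oneAPI. The snippet below is a minimal sketch of that flow, assuming a PyTorch build with XPU support is installed; it is not code from Intel's announcement.

```python
import torch
import torch.nn as nn

# Use an Intel GPU ("xpu") if this PyTorch build exposes one, else fall back
# to CPU. torch.xpu is only present in builds with Intel GPU support.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = torch.device("xpu")
else:
    device = torch.device("cpu")

# Tiny stand-in model; a real inference stack would load a quantized LLM here.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
model = model.to(device).eval()

x = torch.randn(8, 1024, device=device)
with torch.inference_mode():
    y = model(x)

print(device, y.shape)
```

The same script runs unchanged on CPU or another backend, which is roughly the kind of developer continuity the post's "open software stack" language is pointing at.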



Bets on this Xe3P-driven GPU being built by TSMC or Intel Foundry?

And what does this say about an Xe3P (confirmed to be “Celestial” branded) discrete consumer GPU?
 
And what does this say about an Xe3P (confirmed to be “Celestial” branded) discrete consumer GPU?
It's getting much harder to design silicon that can do double duty for efficient data center inference as well as consumer graphics. Most of the new AI 4-bit and 8-bit floating-point datatypes and associated tensor processing, as well as the new transformer acceleration engines and memory management hardware, are useless for graphics. So lots of wasted hardware for discrete GPUs.
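For context on what those datatypes look like: FP8 E4M3, one of the common 8-bit floating-point formats used for inference, packs a sign bit, 4 exponent bits, and 3 mantissa bits into one byte. The decoder below is a small sketch written against the widely used OCP-style convention (bias 7, no infinities, all-ones exponent with mantissa 0b111 as NaN); it is only meant to illustrate the format, not any particular vendor's hardware.

```python
def decode_e4m3(byte: int) -> float:
    """Decode one FP8 E4M3 byte (sign:1, exponent:4, mantissa:3, bias 7)."""
    s = (byte >> 7) & 0x1
    e = (byte >> 3) & 0xF
    m = byte & 0x7
    sign = -1.0 if s else 1.0

    if e == 0:                     # zero or subnormal
        return sign * (m / 8.0) * 2.0 ** -6
    if e == 0xF and m == 0x7:      # NaN encoding (E4M3 has no infinities)
        return float("nan")
    return sign * (1.0 + m / 8.0) * 2.0 ** (e - 7)  # normal numbers

print(decode_e4m3(0b0_1111_110))   # 448.0, the largest finite E4M3 value
print(decode_e4m3(0b0_0111_000))   # 1.0
print(decode_e4m3(0b0_0000_001))   # ~0.00195, the smallest subnormal
```

Formats like this (and their 4-bit cousins) cover the narrow dynamic range inference can tolerate, but graphics pipelines have little use for them, which is the trade-off being described above.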
 