
Intel Sees 5,000 W GPUs Possible with Integrated Voltage Regulators

XYang2023

Well-known member
For AI and HPC workloads that demand maximum performance density regardless of power or heat constraints, Intel has a presentation scheduled for the ISSCC conference in February 2026. One of the topics is a 5,000 W GPU design that uses integrated voltage regulators (IVRs). While this may seem extreme, Intel plans to leverage advanced packaging technology, specifically the Foveros-B variant, to deliver 5 kW GPUs by 2027. As accelerators scale to support larger AI and HPC workloads, traditional board-level regulators are hitting limits in current density and transient response.

Moving voltage regulation into the package shortens current paths and reduces delivery losses. Intel Foundry and its packaging division are already exploring high-density power delivery and kW-class integrated voltage regulators. Intel's Foveros roadmap targets production-ready integrated power elements by 2027, and while Foveros-B is a 2027 product, customers could potentially evaluate multi-kilowatt IVR-enabled assemblies as early as next year. Intel is not alone in pursuing multi-kilowatt designs: NVIDIA's "Rubin" silicon is rumored to reach a TDP of up to 2.3 kW for the highest-end models, pushing rack-level power consumption past 250 kW.
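The board-level limits mentioned above come down to I²R losses in the delivery path. A back-of-the-envelope sketch, with assumed rail voltages and path resistances (illustrative numbers, not Intel's figures):

```python
# Back-of-the-envelope look at why board-level delivery struggles at 5 kW.
# All voltages and resistances below are illustrative assumptions, not Intel figures.

def delivery_loss_w(p_chip_w: float, v_rail: float, r_path_ohm: float) -> float:
    """I^2 * R loss in the delivery path for a given chip power and rail voltage."""
    i = p_chip_w / v_rail            # current the path must carry
    return i ** 2 * r_path_ohm

P = 5000.0                           # 5 kW GPU, per the article

# Board-level VRM: the full current flows at core voltage (0.8 V assumed)
# through board traces and the socket (50 micro-ohm path assumed).
board_loss = delivery_loss_w(P, v_rail=0.8, r_path_ohm=50e-6)

# IVR: power enters the package at a higher voltage (12 V assumed) and is
# down-converted right next to the die, so the long path carries 15x less current.
ivr_loss = delivery_loss_w(P, v_rail=12.0, r_path_ohm=50e-6)

print(f"current at 0.8 V: {P / 0.8:.0f} A")      # ~6250 A
print(f"0.8 V path loss:  {board_loss:.0f} W")   # ~1953 W in this toy model
print(f"12 V path loss:   {ivr_loss:.1f} W")
```

The exact numbers don't matter (real designs use multi-stage conversion), but the quadratic dependence on current does: shortening the low-voltage, high-current segment is where IVRs pay off.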

 
Intel did use FIVR in Haswell/Broadwell CPUs.


AI Overview


Intel abandoned Fully Integrated Voltage Regulators (FIVR) due to efficiency issues across different processor workloads and physical design constraints like heat and die area. The FIVR technology, first used in the Haswell generation, proved to be inefficient for high-TDP processors during light workloads and for low-TDP processors during heavy workloads. Consequently, the company removed it in favor of external motherboard-based voltage regulators, which offered a more adaptable and efficient power delivery solution for its Skylake processors and beyond.

Reasons for abandoning FIVR
  • Energy inefficiency: FIVR was not consistently energy efficient across all types of workloads. For example, it was less efficient on high-TDP processors during computationally light tasks and could be a bottleneck for low-TDP processors attempting heavier workloads like Turbo Boost.
  • Physical and thermal constraints: Integrating all voltage regulation components onto the CPU die was challenging for physical design. It created heat and took up valuable silicon real estate, which could be better used for other processor functions.
  • Workload flexibility: The external Motherboard-Based Voltage Regulator (MBVR) approach allowed for greater design flexibility. By having the voltage regulators on the motherboard, Intel could design power delivery systems optimized for specific processors and motherboards, rather than a single integrated system that might not be ideal for every scenario.
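The light-load/heavy-load tradeoff in the first bullet can be seen with a toy converter model: a fixed switching loss dominates at light load, while I²R conduction loss grows at heavy load. All parameters below are invented for illustration, not FIVR measurements.

```python
# Toy regulator model: fixed switching loss + I^2*R conduction loss.
# Parameters are invented for illustration; they are not FIVR measurements.

def converter_efficiency(p_out_w, p_fixed_w=3.0, r_cond_ohm=2e-3, v_out=1.0):
    """Output power divided by (output power + losses)."""
    i = p_out_w / v_out                       # load current
    p_loss = p_fixed_w + i ** 2 * r_cond_ohm  # fixed + conduction losses
    return p_out_w / (p_out_w + p_loss)

for p in (5, 35, 95, 150):                    # light load ... turbo-class load, watts
    print(f"{p:4d} W load -> {converter_efficiency(p):.1%}")
```

Efficiency peaks somewhere in the middle and falls off at both ends, which is one way to see why a single integrated regulator, sized for one design point, disappointed across Intel's full TDP range.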

Legacy and future
  • Skylake's return to external VRMs: Intel dropped FIVR starting with the Skylake generation, moving back to a more traditional power delivery architecture where the voltage regulation modules were on the motherboard.
  • FIVR's potential return in different forms: Although FIVR was dropped, Intel has explored related technologies. For instance, it experimented with Digital Linear Voltage Regulators (DLVR) for power saving, though this was reportedly scrapped for Raptor Lake due to performance concerns. Intel's Core Ultra processors do feature an on-die DLVR, designed to be more efficient and responsive.

Foveros-B will place the regulator on a separate die, which helps with the transistor-architecture differences; I wonder how it will improve on the other points.
 
Could we simply relate wattage to the amount of compute, i.e., FP4 TFLOPs?
 
So here is the problem: the architecture and node have to be known to give any FLOP ratings.
Related to architecture, the RAM type, bus width, and capacity could also account for a large (or small) portion of that power consumption.

Some 3080 Ti's with 12 GB of RAM were often faster than a 3090 with 24 GB in pure compute or gaming tasks, because the extra RAM ate up a larger portion of the GPU's power budget (if both GPUs were set to the same power limit).
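That power-budget split is easy to sketch. With made-up per-module figures (not measured GDDR6X numbers), doubling the memory module count at a fixed power limit visibly shrinks what is left for the cores:

```python
# Illustrative split of a fixed GPU power limit between memory and compute,
# echoing the 3080 Ti (12 GB) vs 3090 (24 GB) observation above.
# The per-module wattage is an assumption, not a measured figure.

def compute_budget_w(power_limit_w: float, n_modules: int, w_per_module: float) -> float:
    """Power left for the cores after the memory subsystem takes its share."""
    return power_limit_w - n_modules * w_per_module

LIMIT = 350.0                 # same power limit applied to both cards (assumed)
W_PER_MODULE = 2.5            # assumed watts per memory module

print(compute_budget_w(LIMIT, 12, W_PER_MODULE))   # 12 GB card: 320.0 W for cores
print(compute_budget_w(LIMIT, 24, W_PER_MODULE))   # 24 GB card: 290.0 W for cores
```

Under these assumptions the 24 GB card gives up roughly 30 W of core budget to its extra memory, which is the direction of the effect described above, whatever the true per-module numbers are.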
 
5kW per chip. Immersive cooling required (per the article) but the cooling system details still TBD. 2x higher than Nvidia Rubin plans.

I've been wondering about chip lifetimes (MTBF, etc.) for a while now with the present crop of power-dense rack designs. I know these proposals can help vendors gain interest, but do they really make sense for the data center operators and their investors? Or is it more about the FOMO and the circular investment charade?
 
Could it be paired with immersion cooling?
 