Hi folks -
For many years there have been predictions that silicon chips will hit a wall on thermal density that cannot be solved by conventional cooling. Factors like local hot spots, and not just total power draw, come into play and limit the heat output at which a die can remain stable. Yet it appears CPUs and GPUs, for example, are still increasing the amount of power dissipated per mm² of die.
For example, on Samsung 8nm, Nvidia now has GPUs approaching 1 W per mm² (3090 Ti), compared to about half that 5 years ago (Nvidia Titan X). The CPU side is similar: Alder Lake on desktop (12900K/KS) pushes 240-270 W in some workloads on a ~215 mm² die, versus the previous "power hog", the AMD FX-9590, at 220 W on a 315 mm² die back on 32nm.
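To put numbers on the CPU comparison, here is a quick back-of-the-envelope calculation using only the rough figures quoted above (the die areas and power draws are the approximate values from this post, not official spec-sheet data):

# Rough average power density (W/mm^2) from the approximate figures above.
# Average density also ignores local hot spots, which can be much worse.

def power_density(watts, die_area_mm2):
    return watts / die_area_mm2

# Alder Lake 12900K/KS: ~240-270 W on a ~215 mm^2 die
print(f"12900K/KS: {power_density(240, 215):.2f} to {power_density(270, 215):.2f} W/mm^2")

# AMD FX-9590: ~220 W on a ~315 mm^2 die (32nm)
print(f"FX-9590:   {power_density(220, 315):.2f} W/mm^2")

That works out to roughly 1.1-1.3 W/mm² for the 12900K/KS versus about 0.7 W/mm² for the FX-9590, which is what motivates the question.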
What's the current status of this, and what are some thermal density limits that we really cannot exceed? Will that limit in watts per mm² shrink as transistors get smaller?
Thanks!