The Memory Supercycle: Why Phison’s CEO Predicts a Ten-Year Shortage Amid the AI Boom

karin623

Member
On September 21, Morgan Stanley issued a sweeping report authored by 12 analysts across Korea, the U.S., Taiwan, and Japan, titled “Memory Supercycle - Rising AI Tide Lifting All Boats.” The report highlights that today’s “crazy” DDR4 rally is proof that the AI wave has spilled over into the broader memory sector.

NAND flash, in particular, may be entering a multi-year supercycle. One leading indicator: enterprise QLC SSD demand is projected to double next year, with new orders from cloud giants already surpassing this year’s total enterprise SSD demand.

Conventional DRAM is also set to tighten. Nvidia’s most stripped-down AI GPU for China (expected to launch as the B40 or RTX6000) will forgo HBM in response to U.S. export bans and instead use graphics-grade GDDR7. Because GDDR7 comes out of the same DRAM fab capacity as standard parts, that shift too will squeeze conventional DRAM supply.

So where did this unprecedented memory boom come from—and where is it heading?

To find out, TechTaiwan spoke with Pua Khein Seng, CEO of Phison Electronics, the world’s largest independent supplier of NAND flash controllers.

Phison has recently been promoting its own SSDs, paired with conventional GPUs, as an alternative to HBM in what it calls a “budget AI” solution, aiDAPTIV+. Benefiting from the flash rally, its shares have surged 67% over the past month.
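For context on how an SSD can stand in for scarce HBM at all: the general pattern behind flash-backed AI systems is to hold only the weights currently in use in fast memory and stream everything else from flash on demand. Below is a minimal, self-contained sketch of that layer-offload pattern; it assumes nothing about Phison’s actual aiDAPTIV+ middleware (which is proprietary), and the file layout, sizes, and layer math are invented for the demo.

```python
# Generic layer-offload sketch: keep one layer's weights in fast memory
# at a time and stream the rest from SSD. NOT Phison's aiDAPTIV+ code;
# sizes are shrunk so the demo actually runs.
import os
import tempfile
import numpy as np

NUM_LAYERS = 4      # a real 70B-class model has ~80 layers
HIDDEN = 256        # real hidden sizes are 8K+

workdir = tempfile.mkdtemp()

# Stage dummy per-layer weight files on disk (stand-in for the SSD pool).
for i in range(NUM_LAYERS):
    np.save(os.path.join(workdir, f"layer_{i:03d}.npy"),
            (np.random.randn(HIDDEN, HIDDEN) * 0.01).astype(np.float32))

def forward(x):
    # Each step re-streams every layer from disk: with no caching,
    # throughput is bounded by flash bandwidth, not compute.
    for i in range(NUM_LAYERS):
        w = np.load(os.path.join(workdir, f"layer_{i:03d}.npy"),
                    mmap_mode="r")   # pages pulled from disk on demand
        x = np.tanh(x @ w)           # stand-in for real attention/MLP math
    return x

y = forward(np.random.randn(1, HIDDEN).astype(np.float32))
print(y.shape)  # (1, 256)
```

The catch is that, unless the working set is cached, every generated token re-reads the streamed layers, so sustained throughput is bounded by flash bandwidth rather than GPU compute, which is exactly the concern raised in the reply below.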

re: "SSDs as an alternative to HBM"

I'm curious how they're going to make SSDs responsive enough for AI LLMs, given that even 8+ channel Optane DIMM setups are pretty slow.
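To put rough numbers on that concern: a decoder that must re-read all of its weights for every generated token can sustain at most (memory bandwidth / model size) tokens per second. A back-of-envelope sketch follows; every bandwidth figure is a ballpark public number assumed for illustration, not a vendor-verified spec, and batching or caching hot layers would improve the picture considerably.

```python
# Back-of-envelope decode throughput if a 70B-parameter FP16 model's
# weights stream once per token from each memory tier. Bandwidths are
# rough ballparks; ignores batching, caching, and KV-cache traffic.
MODEL_BYTES = 70e9 * 2        # 70B params x 2 bytes (FP16) ~= 140 GB

tiers_gb_per_s = {
    "HBM3e (one GPU)":         4000,   # ~4 TB/s
    "GDDR7 (one GPU)":         1500,   # ~1.5 TB/s
    "DDR5 (12-ch server)":      300,
    "Optane DIMMs (8ch read)":   40,
    "PCIe 5.0 NVMe SSD":         14,
}

for tier, bw in tiers_gb_per_s.items():
    tokens_per_s = bw * 1e9 / MODEL_BYTES
    print(f"{tier:24s} ~{tokens_per_s:6.2f} tokens/s")
```

On these assumptions a PCIe 5.0 SSD yields roughly a tenth of a token per second for a 70B model, which suggests flash-backed schemes fit fine-tuning and batch workloads better than latency-sensitive single-stream inference.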