Exclusive: Jensen Huang’s Remark Sparks Storage Rally—Phison CEO Responds from CES

karin623

Member
A single remark from Jensen Huang was enough to jolt the global storage market. During his CES keynote, the Nvidia CEO described a new AI storage architecture that could become “the largest storage market in the world,” triggering a sharp rally across memory stocks. As traders rushed to interpret the implications, a photo circulating online appeared to show a Phison flash controller inside Nvidia’s next-generation Vera Rubin server—fueling speculation that a major supply-chain shift was underway.

Speaking directly from CES, Phison CEO K.S. Pua offered an exclusive and more nuanced response, cutting through the market noise. This piece unpacks what Huang’s comment really means, where the true storage opportunity lies, and why flash memory is becoming unavoidable in the future of AI—even if the biggest winners are not who the market initially expects.

 
Great report.

I really do not understand why Micron does not have grander expansion plans. Are they not as competitive as Samsung and Hynix? Why isn't memory manufacturing in America bigger? Without memory, there will be no need for logic.
 
It looks like the context is AI SSDs:
Yup, data center disaggregated inference can benefit tremendously from a new tier of fast, high-capacity shared KV cache storage in place of the traditional CPU-oriented storage hierarchy.
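
A minimal, hypothetical sketch of what that tiering could look like, assuming an LRU hot tier in GPU/host memory that spills cold KV blocks to an NVMe path and promotes them back on reuse. The class name, capacities, and spill path are illustrative only, not any real framework's API:

```python
import os
import pickle
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: hot blocks in memory, cold blocks on fast flash."""

    def __init__(self, hot_capacity_blocks, spill_dir="/mnt/nvme/kvcache"):
        self.hot = OrderedDict()              # block_id -> KV bytes, in LRU order
        self.hot_capacity = hot_capacity_blocks
        self.spill_dir = spill_dir            # stand-in for the shared flash tier
        os.makedirs(spill_dir, exist_ok=True)

    def put(self, block_id, kv_bytes):
        """Insert a KV block; evict least-recently-used blocks to flash when full."""
        self.hot[block_id] = kv_bytes
        self.hot.move_to_end(block_id)
        while len(self.hot) > self.hot_capacity:
            old_id, old_bytes = self.hot.popitem(last=False)
            with open(os.path.join(self.spill_dir, f"{old_id}.kv"), "wb") as f:
                pickle.dump(old_bytes, f)

    def get(self, block_id):
        """Fetch a KV block, promoting it from flash back to the hot tier on a miss."""
        if block_id in self.hot:
            self.hot.move_to_end(block_id)
            return self.hot[block_id]
        with open(os.path.join(self.spill_dir, f"{block_id}.kv"), "rb") as f:
            kv_bytes = pickle.load(f)
        self.put(block_id, kv_bytes)          # re-admit to the hot tier
        return kv_bytes
```

The point of the sketch is only the access pattern: reused context lands on a fast, shared flash tier instead of being recomputed or pushed down a CPU-centric storage stack.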


We’re seeing GPU-centric hardware evolve into application-specific accelerators for transformer-based LLMs, built from specialized, co-optimized components (a rough sketch of the split follows the list):
* Context/prefill hardware - Rubin CPX
* Shared KV cache storage (inference context memory)
* Decode - Groq (coming) ?
* Processor interconnect/networking/switching
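
To make the prefill/decode split above concrete, here is a toy sketch of disaggregated inference around a shared KV store: a prefill (context) worker builds the KV cache once and publishes it, and a separate decode worker pulls it to generate tokens. The worker functions, the dict standing in for the shared flash tier, and the token arithmetic are placeholders, not real Rubin CPX or Groq interfaces:

```python
shared_kv_store = {}   # request_id -> KV cache produced during prefill

def prefill_worker(request_id, prompt_tokens):
    # Stand-in for the prompt/prefill pass that builds the context KV cache.
    kv_cache = [("k%d" % t, "v%d" % t) for t in prompt_tokens]
    shared_kv_store[request_id] = kv_cache          # publish to the shared tier
    return len(prompt_tokens)

def decode_worker(request_id, max_new_tokens):
    # Stand-in for a decode-optimized worker that never re-runs prefill.
    kv_cache = shared_kv_store[request_id]          # fetch context built elsewhere
    generated = []
    for step in range(max_new_tokens):
        next_token = (len(kv_cache) + step) % 50000  # dummy token choice
        generated.append(next_token)
        kv_cache.append(("k_new%d" % step, "v_new%d" % step))
    return generated

# Usage: prefill once, then decode separately without recomputing the prompt.
prefill_worker("req-1", prompt_tokens=[101, 2009, 3504])
print(decode_worker("req-1", max_new_tokens=4))
```

The design point is that the expensive context build and the latency-sensitive token generation can run on different hardware, as long as both sides share fast access to the same KV cache.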
 