
Why Micron Memory and Storage Matter in Fueling AI Acceleration | Micron Technology

Daniel Nenni

Admin
Staff member


From data centers to autonomous vehicles, discover how Micron's broad portfolio of #ai solutions collectively contributes to shaping the future of AI, powering innovation across all sectors. Today’s generative AI models require an ever-growing amount of data as they scale to deliver better results and address new opportunities.

Micron’s 1-beta memory technology leadership and packaging advancements ensure the most efficient data flow in and out of the GPU. Micron’s 8-high and 12-high #HBM3E memory further fuel AI innovation at 30% lower power consumption than the competition. The 8-high 24GB solution will be part of #nvidia H200 Tensor Core GPUs, which will begin shipping in the second calendar quarter of 2024.
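
As a quick back-of-the-envelope check on those stack heights, the capacities follow from the per-die density. A minimal sketch, assuming 24 Gb DRAM dies (the die density is an assumption here; the stack heights and the 24GB figure come from the announcement):

# Rough HBM3E stack-capacity arithmetic.
# Assumption: 24 Gb per DRAM die; the 8-high and 12-high stack heights are from the post.
GBIT_PER_DIE = 24                           # assumed die density, in gigabits
for dies in (8, 12):
    capacity_gb = dies * GBIT_PER_DIE / 8   # 8 bits per byte
    print(f"{dies}-high stack: {capacity_gb:.0f} GB")
# Prints: 8-high stack: 24 GB, 12-high stack: 36 GB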

Learn more about Micron's portfolio of products that enable AI: https://www.micron.com/markets-indust...
Learn more about Micron HBM3E: https://www.micron.com/products/memor...


 
If leading-edge AI models continue on their current trajectory, with exponential growth in parameter count and reduced-precision compute for a large fraction of those parameters (FP4, sparsity harvesting), we're likely to see logic-in-memory approaches that offer lower power and less costly hardware.
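
To put rough numbers on that, here is a minimal sketch of how precision and sparsity change the weight footprint; the trillion-parameter count and 50% sparsity are hypothetical, chosen only to illustrate why moving data in and out of memory, rather than the arithmetic itself, becomes the dominant cost:

# Illustrative weight-memory footprint versus numeric precision and sparsity.
# The parameter count and sparsity ratio below are assumptions, not figures from the thread.
PARAMS = 1e12                 # hypothetical 1-trillion-parameter model
SPARSITY = 0.5                # assume half the weights can be harvested/pruned away
for fmt, bits in (("FP16", 16), ("FP8", 8), ("FP4", 4)):
    dense_gb = PARAMS * bits / 8 / 1e9
    sparse_gb = dense_gb * (1 - SPARSITY)
    print(f"{fmt}: {dense_gb:,.0f} GB dense, {sparse_gb:,.0f} GB at 50% sparsity")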
 
The problem with logic-in-memory (it was called computational memory for a long time) is more software than hardware. There are no standards for what the commands look like, how they are communicated to the memory processing elements (e.g., write a command to a certain location, along with its operands), or how computational-memory operations are exposed from popular programming languages. If you need specialized programming at the application level, you are probably going to have a very limited market. These are messy problems. Even computational storage, which is a much easier problem, hasn't taken off after decades of talk, announcements, and even products. It is difficult to be hopeful.
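
For what the "write a command to a certain location, along with operands" step might look like from the host side, here is a purely hypothetical sketch; every opcode, field layout, and offset below is invented for illustration, since the absence of any such standard is exactly the problem being described:

import struct

# Hypothetical host-side helper for a compute-in-memory device: pack an opcode plus
# operand addresses into a fixed-layout descriptor and write it to an assumed
# memory-mapped command region. No such interface is standardized today.
CMD_QUEUE_OFFSET = 0x1000                      # assumed offset of the command region
OPCODES = {"VEC_ADD": 0x01, "VEC_MUL": 0x02, "REDUCE_SUM": 0x03}

def pack_command(opcode, src_a, src_b, dst, length):
    # 1-byte opcode, 3 pad bytes, three 64-bit operand addresses, 32-bit element count
    return struct.pack("<BxxxQQQI", OPCODES[opcode], src_a, src_b, dst, length)

def issue(mmio, opcode, src_a, src_b, dst, length):
    # 'mmio' is anything slice-assignable, e.g. an mmap'd device window
    cmd = pack_command(opcode, src_a, src_b, dst, length)
    mmio[CMD_QUEUE_OFFSET:CMD_QUEUE_OFFSET + len(cmd)] = cmd

Even this trivial sketch begs all of the hard questions raised above: who defines the opcodes, how completion is signaled, and how a compiler would ever emit these writes from ordinary application code.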
 