
Samsung, SK hynix suffer $16 billion loss in chip business [REWIND 2023]

Daniel Nenni

Samsung Electronics' chip plant located in Pyeongtaek, Gyeonggi [YONHAP]

Samsung Electronics and SK hynix incurred a combined loss of more than 21 trillion won ($16.1 billion) in their chip businesses in 2023, primarily attributable to the cyclical downturn in the semiconductor industry.

Samsung Electronics, the world's No. 1 memory chipmaker, saw its third-quarter operating profit plunge 77.6 percent year-on-year as its chip division, a cash cow for the conglomerate, continued to reel from a global supply glut.

Samsung's chip business accumulated an operating loss of 12.7 trillion won in the first three quarters of 2023.

The oversupply of memory chips began to weigh on the global semiconductor industry from the end of 2022, as demand for electronic devices, particularly smartphones and laptops, failed to rebound after the pandemic amid ongoing economic uncertainties.

SK hynix, a global top-tier memory chipmaker, was also unable to escape the industry slowdown.

SK hynix's third-quarter revenue plunged 17 percent year-on-year, and it accumulated an operating loss of 8.1 trillion won in the first three quarters of 2023.

On the bright side, there are signs of the memory chip industry recovering, thanks to surging demand for advanced memory chips such as high-bandwidth memory for AI applications. The production cuts implemented in the first half of the year have helped rein in inventory and arrest price declines.

 
I think the memory rebound will follow logic in 2024. Memory seems to have a much bigger swing though; bigger inventories? If you remove memory, 2023 was not that bad of a year for logic. Single-digit negative revenue growth for the semiconductor industry as a whole? Logic would be low single digit? Someone run the numbers...
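A quick sketch of that calculation, with placeholder revenue figures (none of these are sourced numbers; swap in WSTS or Gartner data to actually run the numbers):

# Back-of-envelope: semiconductor revenue growth with and without memory.
# All inputs are PLACEHOLDER figures for illustration, not sourced data.
total_2022 = 574.0   # placeholder: total semiconductor revenue, $B
total_2023 = 527.0   # placeholder
memory_2022 = 130.0  # placeholder: memory segment revenue, $B
memory_2023 = 92.0   # placeholder

def yoy_growth(prev, curr):
    """Year-on-year growth, in percent."""
    return (curr - prev) / prev * 100.0

ex_memory_2022 = total_2022 - memory_2022  # logic and everything else
ex_memory_2023 = total_2023 - memory_2023

print(f"Industry as a whole: {yoy_growth(total_2022, total_2023):+.1f}%")
print(f"Memory only:         {yoy_growth(memory_2022, memory_2023):+.1f}%")
print(f"Ex-memory:           {yoy_growth(ex_memory_2022, ex_memory_2023):+.1f}%")

With these placeholder inputs the industry as a whole comes out around -8 percent, memory around -29 percent, and ex-memory around -2 percent, consistent with the low-single-digit guess above.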

Double-digit semiconductor revenue growth for 2024? I think TSMC will have a double-digit year, absolutely.
 
If you remove memory, 2023 was not that bad of a year for logic.
That is not accidental. Cheaper memory leaves more budget for logic, and memory has scaled far less than logic. A cloud server a few years ago, with DRAM at $3/GB, had over half its BOM going to memory. Today, at $1.50/GB, the first qualitative fall since 2015, the balance is a bit better, although with server chips having jumped from 28 to 96 cores (x64 dual-thread equivalent) it is only just barely sensible. Memory is the anchor dragging along the bottom.

In AI the balance is different. The dominant logic chips have high cost and relatively small quantities of memory attached. At maybe $7 or $8/GB, the 144 GB of HBM3e on an H200 will be about 5 percent of the BOM. But DRAM is still the dragging anchor in bandwidth and energy per bit.
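To make those BOM shares concrete, here is the same arithmetic as a small sketch; the server DRAM capacity and both BOM totals are assumptions picked for illustration, not figures from the post:

# Rough memory-share-of-BOM arithmetic for the two cases above.
# Capacities and BOM totals are PLACEHOLDER assumptions for illustration.

def memory_share(capacity_gb, price_per_gb, total_bom):
    """Fraction of a system's BOM spent on memory."""
    return capacity_gb * price_per_gb / total_bom

# Cloud server: assume 1 TB of DRAM against a $6,000 total BOM.
for dram_price in (3.0, 1.5):
    share = memory_share(1024, dram_price, 6000.0)
    print(f"DRAM at ${dram_price}/GB -> {share:.0%} of server BOM")

# H200-style accelerator: 144 GB of HBM3e at ~$7.5/GB against an
# assumed $20,000 total BOM.
print(f"HBM3e: {memory_share(144, 7.5, 20000.0):.0%} of accelerator BOM")

Under those assumptions the server case lands at roughly 51 and 26 percent of BOM at the two DRAM prices, and the HBM3e case at about 5 percent, matching the shares quoted above.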

If you look at consumer electronics, where mobile is the vast majority, it was a slow year on both sides.

So, overall, seeing them move in opposite directions is not a surprise. Memory is not keeping up with logic.
 
That is not accidental. Cheaper memory leaves more budget for logic, and memory has scaled far less than logic. A cloud server a few years ago, with DRAM at $3/GB, had over half its BOM going to memory. Today, at $1.50/GB, the first qualitative fall since 2015, the balance is a bit better, although with server chips having jumped from 28 to 96 cores (x64 dual-thread equivalent) it is only just barely sensible. Memory is the anchor dragging along the bottom.
Agreed, and I've been wondering lately about the effect on DRAM demand if CXL 3.0 fabrics become successful in cloud data centers. So-called stranded memory in cloud servers is a significant drag on cloud computing costs, and memory pooling probably has the best business case I've seen in a long time, assuming it can be practically implemented. The latest white papers I've seen, and I've attached a link to a well-written example, focus on tiered memory strategies that use CXL 3.0 as "far memory" or "storage memory". This is a big improvement over the more general "let's create a data-center-wide NUMA system" pitch I heard from CXL member companies initially. The implementations in the Micron/AMD paper are complicated to develop at scale (the fabric managers and sub-100ns low-latency switches strike me as difficult to develop), but this path looks doable. I can't decide whether CXL 3.0 tiered memory will reduce the demand for cloud server DRAM because of improved utilization, or increase it because of the potential application performance improvements.
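As a toy model of the utilization side of that question (every number below is a placeholder assumption, not measured data), here is roughly how pooling turns stranded DRAM into avoided DRAM purchases:

# Toy model: stranded DRAM today versus a pooled near/far-tier design.
# Every number is a PLACEHOLDER assumption, not measured data.
servers = 1000
local_dram_gb = 512        # placeholder: DRAM provisioned per server today
peak_utilization = 0.60    # placeholder: fraction of DRAM actually touched

provisioned_gb = servers * local_dram_gb
stranded_gb = provisioned_gb * (1 - peak_utilization)

# Pooled design: a smaller local "near" tier per server, plus a shared
# CXL "far" tier sized for aggregate peak demand with some headroom.
near_tier_gb = 256         # placeholder: local DRAM per server
pool_headroom = 1.15       # placeholder: pool over-provisioning factor
pool_gb = provisioned_gb * peak_utilization * pool_headroom - servers * near_tier_gb

pooled_total_gb = servers * near_tier_gb + pool_gb
savings = 1 - pooled_total_gb / provisioned_gb
print(f"Stranded today: {stranded_gb / 1024:.0f} TB of {provisioned_gb / 1024:.0f} TB")
print(f"Pooled design needs {pooled_total_gb / 1024:.0f} TB ({savings:.0%} less DRAM)")

With these made-up inputs, pooling buys about a 30 percent DRAM reduction for the same peak demand; whether performance gains pull demand back the other way is exactly the open question.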

 
Agreed, and I've been wondering lately about the effect on DRAM demand if CXL 3.0 fabrics become successful in cloud data centers. So-called stranded memory in cloud servers is a significant drag on cloud computing costs, and memory pooling probably has the best business case I've seen in a long time, assuming it can be practically implemented. The latest white papers I've seen, and I've attached a link to a well-written example, focus on tiered memory strategies that use CXL 3.0 as "far memory" or "storage memory". This is a big improvement over the more general "let's create a data-center-wide NUMA system" pitch I heard from CXL member companies initially. The implementations in the Micron/AMD paper are complicated to develop at scale (the fabric managers and sub-100ns low-latency switches strike me as difficult to develop), but this path looks doable. I can't decide whether CXL 3.0 tiered memory will reduce the demand for cloud server DRAM because of improved utilization, or increase it because of the potential application performance improvements.


This is actually quite fun, and it remains to be seen. But for me, CXL is likely to cannibalize conventional server memory demand. We already know there are tons of "medium-temperature" data in data centers: for example, large (multi-terabyte) in-memory databases that are only moderately accessed by users, or cloud virtual machines with bursty workloads. These workloads are not enough to saturate multi-channel RDIMM bandwidth. In other words, RDIMM bandwidth is wasted (unused) most of the time. This is where Intel's XPoint DIMMs wanted to play their part. If Intel thought they could cannibalize the DRAM market, why not CXL expanders?
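A back-of-envelope on that wasted-bandwidth point (the channel count and speed are typical per-socket DDR5 figures; the workload demand is a placeholder I made up for a moderately accessed in-memory database):

# How much RDIMM bandwidth a medium-temperature workload leaves idle.
channels = 8                 # memory channels per socket (typical server)
channel_gbps = 38.4          # DDR5-4800: 4800 MT/s x 8 bytes = 38.4 GB/s
peak_gbps = channels * channel_gbps

workload_gbps = 20.0         # PLACEHOLDER: sustained demand of a moderately
                             # accessed in-memory database
idle_fraction = 1 - workload_gbps / peak_gbps
print(f"Socket peak {peak_gbps:.0f} GB/s, workload {workload_gbps:.0f} GB/s "
      f"-> {idle_fraction:.0%} of RDIMM bandwidth idle")

At those numbers, more than 90 percent of the channel bandwidth sits idle, which is the headroom a cheaper, narrower CXL expander could serve instead.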

Of course, there could be a new application enabled by CXL expanders (thus pulling memory demand). But which one? Maybe AI accelerators doing small-batch inference? Something that integrates FPGAs, GPUs, and other xPUs with shared memory, all of which need only mediocre bandwidth? CXL-cached NAS, hmm... Or maybe memory could cannibalize some of Intel's revenue by reducing demand for 4-socket and 8-socket servers (used for in-memory databases), making IMDB implementation easier. But everything seems niche...
 
This time, it's not an ordinary market downturn. It's a big chunk of the European market going dark overnight and not returning. It's not a natural, inventory-cycle-triggered downturn.
 