
Samsung Reportedly Trials 2nd Gen 3nm Chips, Aims for 60%+ Yield

Fred Chen

Moderator

According to industry sources cited by South Korean media The Chosun Daily, Samsung has commenced the production of prototypes for its second-generation 3nm process and is testing the chip’s performance and reliability. The goal is to achieve a yield rate of over 60% within the next six months.

TSMC and Samsung are both actively vying for customers. Samsung is preparing to commence mass production of its second-generation 3nm GAA process in the first half of the year. The key to success in the competition lies in whether Samsung can meet the demands of major clients such as Nvidia, Qualcomm, and AMD while simultaneously achieving a rapid ramp in production.

Samsung is currently testing the performance and reliability of prototypes for the second-generation 3nm process. The initial product is set to be the application processor (AP) of the soon-to-be-released Galaxy Watch 7, and the process is expected to be used in the Exynos 2500 chip for next year's Galaxy S25 series.

If the production yield and performance of the second-generation 3nm process are stable, there is a chance that customers who had previously switched to TSMC may return to Samsung, especially considering Qualcomm’s movements.

As per the report, Qualcomm is collaborating with TSMC on the production of the next-generation Snapdragon 8 Gen 3. Additionally, Nvidia's H200 and B100 and AMD's MI300X are expected to adopt TSMC's 3nm process.

Samsung announced in November of last year that it would commence mass production of the second-generation 3nm process in the latter half of 2024. While Samsung has not responded to Chosun’s report regarding the production of prototypes for the second-generation 3nm process, the timeline seems plausible.

However, the report mentions a chip yield rate of 60% without specifying transistor count, chip size, performance, power consumption, or other specifications.

Furthermore, as Tom's Hardware's report points out, the chip size, performance, and power consumption targets for processors used in smartwatches, mobile phones, and data centers are entirely different. A 60% yield rate for small chips would make commercial use challenging, but for reticle-sized chips, a 60% yield rate would be reasonably acceptable.
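To see why a bare yield figure means little without die size, here is a minimal sketch using the standard Poisson defect-yield model Y = exp(-D0 · A); the defect density chosen is purely illustrative, not a reported Samsung figure:

```python
from math import exp

D0 = 0.1  # assumed defect density in defects/cm^2 -- illustrative only

# Same defect density, very different yields depending on die area:
for area_mm2 in (10, 100, 858):  # small test die, phone SoC, ~full reticle
    area_cm2 = area_mm2 / 100
    print(f"{area_mm2:4d} mm^2 die -> yield ~ {exp(-D0 * area_cm2):.0%}")
# 10 mm^2 -> ~99%, 100 mm^2 -> ~90%, 858 mm^2 -> ~42%
```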

However, caution is advised in interpreting this report due to the uncertainties surrounding Samsung’s second-generation 3nm process production targets at its semiconductor foundries.

Nonetheless, the commencement of the second-generation 3nm process production is a significant development for both Samsung and the semiconductor industry as a whole.
 
> A 60% yield rate for small chips would make commercial use challenging, but for reticle-sized chips, a 60% yield rate would be reasonably acceptable.

In other words, the yield is crap, as it's nearly unheard of to have a reticle-sized chip.
 
> it's nearly unheard of to have a reticle-sized chip.
Most of IBM's stuff, all of NVIDIA's DC chips over the past few gens, and Intel's server CPUs. Granted, nobody makes those chips as a lead product, so if true, not good. Not that I expect the world from SF3, but for what it's worth, these yield % rumors are almost always off base; they are completely meaningless without context on how the yield is being measured and a detailed view of the chip itself.
 
> Most of IBM's stuff, all of NVIDIA's DC chips over the past few gens, and Intel's server CPUs. […]
Ok, I was thinking of wafer size, not reticle size. But if they don't specify the chip size, you should just assume it's a small Arm-core test chip, because that's what they normally use to test a new process.
 
> Most of IBM's stuff, all of NVIDIA's DC chips over the past few gens, and Intel's server CPUs. […]
Plenty of reticle sized chips at TSMC and elsewhere, but even with TSMC defect densities -- which AFAIK are the best in the business -- the yield of such monsters is always going to be low. But then if the chip ASP is a high four-digit or even five-digit number, even percentage yields of 10% or lower may still be OK... ;-)
 
> Plenty of reticle sized chips at TSMC and elsewhere, but even with TSMC defect densities -- which AFAIK are the best in the business -- the yield of such monsters is always going to be low. […]
Such large chips always have extensive redundancy so they can ship with a fraction of cores, cache, and other structures shut down. The H100 had one whole HBM stack unused in order to meet yield - after a while they started shipping top-bin versions with all 6 stacks functioning, and the numbers on AMD MI300 indicate they just turn off a couple of channels, not whole stacks.

Chips like GPUs or server CPUs are very repetitive, so tolerating faults is an effective way to raise yield.
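As a rough sketch of how that works, the model below treats each core's defects as independent Poisson events and ships a die if enough cores survive; the core count, core area, and defect density are hypothetical numbers, not figures for any real product:

```python
from math import comb, exp

def core_yield(area_cm2: float, d0: float) -> float:
    """Probability a single core is defect-free (Poisson model)."""
    return exp(-d0 * area_cm2)

def chip_yield(n_cores: int, n_needed: int, core_area_cm2: float, d0: float) -> float:
    """Probability at least n_needed of n_cores are good, assuming independence."""
    p = core_yield(core_area_cm2, d0)
    return sum(comb(n_cores, k) * p**k * (1 - p)**(n_cores - k)
               for k in range(n_needed, n_cores + 1))

# 80 cores of 0.1 cm^2 each at D0 = 0.2 defects/cm^2 (hypothetical numbers):
print(chip_yield(80, 80, 0.1, 0.2))  # all cores must work: ~0.20
print(chip_yield(80, 72, 0.1, 0.2))  # 8 spare cores:       ~1.00
```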
 
> A 60% yield rate for small chips would make commercial use challenging, but for reticle-sized chips, a 60% yield rate would be reasonably acceptable.

> In other words, the yield is crap, as it's nearly unheard of to have a reticle-sized chip.
If it is one chip using the full reticle and the yield reaches 60%, that is amazingly good, with D0 ≈ 0.06. A 20% yield would be reasonably acceptable at an early stage, with D0 = 0.18~0.2. FYI.
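Those D0 figures follow from inverting the Poisson yield model over a full-reticle die. A quick sketch, assuming the standard ~26 mm × 33 mm (858 mm²) exposure field:

```python
from math import log

RETICLE_MM2 = 26 * 33  # standard full scanner field, 858 mm^2

def d0_from_yield(y: float, area_mm2: float) -> float:
    """Defect density (defects/cm^2) implied by yield y under a Poisson model."""
    return -log(y) / (area_mm2 / 100)

print(d0_from_yield(0.60, RETICLE_MM2))  # ~0.06, matching the post above
print(d0_from_yield(0.20, RETICLE_MM2))  # ~0.19, early-ramp territory
```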
 
They are not testing a new node with reticle-size chips. The standard practice is to use small test chips. For instance, TSMC tested its 5nm node with a 5 mm² test chip: https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020
I think you may not be so familiar with chip yield and typical test-vehicle plans. If we convert a 99% yield on a 5nm 5 mm² chip (a small area for a test chip), its D0 will be ~0.2, which is reasonably acceptable.
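The same conversion reproduces that number (the 5 mm² area is taken from the post above):

```python
from math import log

# 99% yield on a 5 mm^2 (0.05 cm^2) test chip under the Poisson model:
print(-log(0.99) / 0.05)  # ~0.2 defects/cm^2
```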
 
> I think you may not be so familiar with chip yield and typical test-vehicle plans. If we convert a 99% yield on a 5nm 5 mm² chip (a small area for a test chip), its D0 will be ~0.2, which is reasonably acceptable.
ok, point taken ;)
 
Is the reticle-size chip at 4X on the mask?

Or is the chip really something like 4 or 5 inches across?
Current optical systems do a 4x reduction of the mask. So a reticle-sized die is 1/4th the size of the mask.

Random image I found from Purdue. As for why 4x reduction is the limit of projection optics (if we ignore high-NA's 4x by 8x), I have absolutely no clue. Optics and waves were always my weakest physical discipline; but I work in etch, so that is fine by me. ;)
[Attached image: diagram of 4x projection reduction from mask to wafer, via Purdue]
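For scale, a quick sketch of what 4x reduction implies, assuming the typical 104 mm × 132 mm patterned area on the mask:

```python
mask_w, mask_h = 104, 132              # patterned area on the mask, in mm (typical)
die_w, die_h = mask_w / 4, mask_h / 4  # 4x linear reduction at the wafer
print(f"max field: {die_w} x {die_h} mm = {die_w * die_h} mm^2")
# max field: 26.0 x 33.0 mm = 858.0 mm^2 -- the "full reticle" die size
```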
 
> Current optical systems do a 4x reduction of the mask. So a reticle-sized die is 1/4th the size of the mask.

Thanks, I was thinking an actual reticle-size chip would be quite mighty in size!

Actually, is there any scenario where a "mega size" chip would be of any use?

Would it be more physically durable?
 
> Actually, is there any scenario where a "mega size" chip would be of any use?
The 4 to 1 reduction allows the design of masks to be more relaxed and detailed than the final image. The mask can be made as precisely as the wafer, even more precise using ebeam lithography for mask writing, and then the fuzziness is (simplifying) only at the wafer, no additional losses due to difficulties at the mask. 4 to 1 is simply a compromise between making the mask large enough to get that clarity advantage, but an even higher ratio would make the lenses even more massive and expensive.

More than you ever wanted to know about lenses from David Shafer. He references his work for Zeiss beginning around slide 55 of this retrospective slide set: https://www.slideshare.net/operacrazy/highlights-of-my-48-years-in-optical-design

There are some larger chips. Look up Cerebras for an extreme example. Large chips are physically more difficult to handle and package, they are thin and brittle, and expansion coefficient differences with packaging become larger. I believe Cerebras spent more on the packaging solution, including power and cooling, than on the circuit design.

Stitching multiple reticles is also difficult, there tend to be scattered light effects at the edge of the mask. These are fine if that is the edge of the chip, but problematic if you want to make a seamless join. Companies like Nikon provide specialized scanners that can draw higher level wires to join multi-reticle chips, where if you look deeper in the finished chip you will find an empty gap at the lower levels.
 
> The 4 to 1 reduction allows the design of masks to be more relaxed and detailed than the final image. […]
Thanks, post-mask is not my area of knowledge, so it's always interesting to read here and get answers from those with knowledge of mask usage.
 
> Such large chips always have extensive redundancy so they can ship with a fraction of cores, cache, and other structures shut down. […]
That's true up to a point, but it can rarely completely fix yield problems, though it can certainly improve yield from extremely low to acceptable -- if it were a complete fix, there would be more full-reticle chips and they'd be cheaper... ;-)
 
> That's true up to a point, but it can rarely completely fix yield problems […]
Yeah, maybe 80 to 90% of the chip can tolerate a fault. That still leaves 20% or so that needs to be faultless - but that is just the equivalent of a 1.6 cm² chip, allowing reasonable yields.
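A quick check of that intuition under the same Poisson yield model (the defect density here is an assumption for illustration):

```python
from math import exp

reticle_cm2 = 8.58             # 26 mm x 33 mm full field
faultless = 0.20 * reticle_cm2 # ~1.7 cm^2 of the die with no redundancy
d0 = 0.1                       # assumed defect density, defects/cm^2

# If the redundant 80% can nearly always be repaired with spares, overall
# yield is dominated by the defect-free probability of the faultless part:
print(exp(-d0 * faultless))  # ~0.84 -- a reasonable yield
```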
 