Q4 2022 review of GAA-FET process from IBM


Interesting. I had forgotten about them; I saw the news articles last year.

Rapidus was formed by semiconductor veterans such as Rapidus President Atsuyoshi Koike, with backing from leading Japanese technology and financial firms, including Denso, Kioxia, Mitsubishi UFJ Bank, NEC, NTT, SoftBank, Sony, and Toyota Motor. The Japanese government is also subsidizing Rapidus. The big change compared to prior Japanese national efforts is the collaboration with international organizations; it is a recognition that Japan cannot go it alone, and it appears to mark a fundamental change in Japanese attitudes. Building a fab in Japan will be helped by Japan's strong manufacturing ecosystem of materials, equipment, and engineering talent.

So that potentially gives us TSMC, Samsung, Intel, and Rapidus as advanced foundries.
 
Are you guys (and the article) really saying 2nm literally, or are you saying "the equivalent" of, say, 2nm if extrapolated out from the current standard/SRAM cells that they made at, say, 16nm? The size of a water molecule (H2O) is about 0.27nm. A silicon atom is about 0.2nm. The article refers to a gate size of 12nm. How large is the molecule that is being used in this process? You guys think this can be reliably done? At what cost (private and government)? At what throughput? I gotta hear this. I will pick up popcorn.
 
"2nm" is just the node "class" or "generation". The number has had no relation to the size of the features since like 130/90nm when Denard scaling hit a wall. However with that said there are features that are very tiny (like Cu liner thickness, hafnium oxide thickness, fin width, and now nanosheet height/thickness, contact to gate spacing, etc). Heck if memory serves before HKMG the SiO2 gate oxides were getting to single digit atoms thick. There are also features on leading edge nodes that are way larger than the nm number (like the size of the gates, the metal lines, the whole transistor, etc).

How large are the molecules? However large the laws of physics and chemistry deem they should be for maximum stability. In the case of the Si unit crystal, the average bond length is about 0.235nm and the unit-cell volume about 0.16nm^3, but I don't feel like rifling through my old textbooks/notes to double-check the internet here. As for SRAM cells, they are nowhere near 16nm in size. Try ~21,000 nm^2 (0.021 um^2) for N5/N3E. For the gates, it depends on what materials are being used for that exact gate.
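For reference, those figures follow directly from the diamond-cubic silicon lattice constant:

```latex
% Diamond-cubic silicon, 8 atoms per unit cell
a = 0.5431\,\mathrm{nm}
\qquad
d_{\mathrm{Si\text{-}Si}} = \tfrac{\sqrt{3}}{4}\,a \approx 0.235\,\mathrm{nm}
\qquad
V_{\mathrm{cell}} = a^{3} \approx 0.160\,\mathrm{nm}^{3}
\qquad
V_{\mathrm{atom}} = a^{3}/8 \approx 0.020\,\mathrm{nm}^{3}
```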

Can it be reliably done? Of course; it is already being done today. You have to remember that "2nm" nodes (besides maybe Samsung's) don't seem to be targeting aggressive feature-size scaling. Heck, TSMC claims they will have the densest process in 2025 with only ">1.1x chip density scaling over N3E" (hint hint: the devices are probably the same or similar size to N3E's, with various block-level improvements from the move to GAA). Can it be done economically and at high volumes? I don't see why not. GAA nodes mostly borrow their FEOL flows from finFETs, with a few novel insertions and integration challenges (as shown in the linked paper). However, whitepapers I have seen on IEEE don't seem to indicate that these insertions will significantly boost structural costs.

Will there be any cost to government? No; zero; zilch. IBM is burning its own money here, not Uncle Sam's.

Who are IBM's production partners? Samsung?
The fab they co-own with SUNY Poly (SUNY Polytechnic Institute). Samsung is an R&D partner, and Rapidus is supposedly going to directly license IBM's "2nm" technology.
 
Are you guys (and the article) really saying 2nm literally, or are you saying "the equivalent" of, say, 2nm if extrapolated out from the current standard/SRAM cells that they made at, say, 16nm?
"2nm" is just the node "class" or "generation". The number has had no relation to the size of the features since around 130/90nm, when Dennard scaling hit a wall.
 
I like putting gate pitch (helps set X pitch at the lowest layers), M2 pitch (helps set stdcell height), and M5 or M6 pitch (interconnect) into the names of the processes in our PDKs, so I am mostly in agreement with the "A Better Way to Measure Progress in Semiconductors" proposal. I am irritated by the bastardization of the MKS system. The use of the pseudo-dimension of 2nm is ridiculous.
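As a toy illustration of that convention (the helper, the label format, and the pitch values below are all hypothetical, not an actual PDK naming scheme), a node could be named directly from its drawn pitches:

```python
# Hypothetical naming helper: label a process by its real drawn pitches
# (in nm) instead of a marketing "nm" class.  The format is invented here.
def pitch_label(gate_nm: int, m2_nm: int, mx_nm: int) -> str:
    """e.g. gate pitch 45 nm, M2 pitch 23 nm, M6 pitch 64 nm -> 'G45M23X64'"""
    return f"G{gate_nm}M{m2_nm}X{mx_nm}"

# An assumed "2nm-class" node, described by what it actually draws:
print(pitch_label(45, 23, 64))  # -> G45M23X64
```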

I thought what we netlist in SPICE, or draw in the layout (or possibly physically shrink to during the naming of the process), was the standard. Do modern BSEE programs teach foundry language based on the BS number system and Intel acronyms? I will have our MSEE interns educate me on this new language.
 
I have a couple of problems with that Stanford metric. It doesn't account for DTCO and other library scaling options. FinFlex is a great example of a way to squeeze out that last extra bit of density or PPW from a design over the conventional approach of every block having the same library. I also think Intel 4 is another great example: given its very small feature-size scaling, that naming scheme would not accurately communicate how large the actual density boost was from the reduction in n-p spacing and the 4:3 fin depopulation they showed off at VLSI. The metric also does nothing to communicate the PP part of PPAC. TSMC and Intel present good examples of how this can be important. For example, the initial Intel 10nm had slightly worse PPW than 14nm++; presumably this might be part of what led to the extended stay on 14nm desktop parts even after 10nm reached sufficient PPW and yield for laptop and server products. Another example would be N3 offering a 10-15% speed boost over N5, while TSMC claims N4P offers 11%. Depending on the exact product, this could make N4P better for some applications as N5 soon reaches the point where the tools should be depreciated.
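To make the FinFlex/DTCO point concrete, here is a toy calculation. All densities and area shares are made-up illustrative numbers, not TSMC data; it just shows how per-block library mixing changes chip density while the node's drawn pitches, and hence any pitch-based name, stay fixed:

```python
# Illustrative only -- made-up numbers.  FinFlex-style DTCO lets each
# block on one die pick its own cell library; chip density changes even
# though the node's drawn pitches (and any pitch-based name) do not.

# (area share of die, library density in MTr/mm^2) per block
uniform = [(0.30, 180.0), (0.40, 180.0), (0.30, 180.0)]  # all fast 3-2 fin cells
mixed   = [(0.30, 180.0),                  # CPU core stays on fast 3-2 fin cells
           (0.40, 230.0), (0.30, 230.0)]   # GPU/uncore move to dense 2-1 fin cells

def chip_density(blocks):
    """Area-weighted average density across blocks."""
    return sum(share * dens for share, dens in blocks)

u, m = chip_density(uniform), chip_density(mixed)
print(f"uniform library: {u:.0f} MTr/mm^2")
print(f"mixed libraries: {m:.0f} MTr/mm^2 ({m/u - 1:+.0%} from DTCO alone)")
```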

I wouldn't mind logic doing what DRAM does and what NAND did before they started layering: 2x or 1x nodes, with the x being a Greek or Latin letter. I also think TSMC saying "here is our alpha1 node, which will be followed by alpha2 and then the beta1 node" sounds cooler than N3 -> N3E -> N2. Either way, it doesn't matter what the name is at the end of the day. Designers know all of the capabilities of their nodes, and the implied "nm" numbers (I say implied because none of these nodes get called x-nm anymore) are close enough to each other that I don't think it is a big deal. It is also a secrecy thing: why would companies want their competitors to know their exact metal and poly pitches years in advance? And that is an extra problem whenever things get relaxed for extra PPW (see 14nm++ and N3E). Are these nodes suddenly way worse because they are slightly less dense for better PPW?
 
Since we create stdcells to work with our P&R (and hand-crafted blocks), we need to know the actual numbers. If I had a manual, I wouldn't be asking. We don't have the required $50M bank account (spread across 200 banks, of course), so I try to extract information by other means to at least get the process going... to get around the secrecy thing. Guilty as charged. We are in an industry that has been changing rapidly over the last few years, and I am forced to predict which directions to head with our limited workforce. A 10% gain in any technology is not worth a change in direction for us.

28nm => 16nm... worth it.

16nm => 7nm (DUV)... nah.

16nm => the node that you mentioned several weeks ago... yeah! We will head that way next. The numbers were public enough for us to get started on it.

There is a parallel thread where Ian communicated what he is looking for. That is in our sweet spot (within reason; Ian is on the extreme end), and I speculate that he uses custom logic and runs his own characterizations based on voltage levels. Our cells are a bit larger than what the foundry provides. We need dimensions when we make our own cells, and we are adjusting our 16/14nm process to prepare for the 6nm-ish process, a conclusion you helped push us toward. It was based on real numbers and available layers.
 