HKMG on DRAM nodes

nghanayem

Well-known member
I recently learned that the first HKMG DRAM only entered HVM relatively recently. Having scanned some IEEE papers from the early finFET era, they just say that HKMG would make for stronger, lower-power DRAM (duh) and that it would require a lot of work. More recent papers make a big deal of "DRAM compatible" HKMG schemes. For anyone on this forum who was/is clued into the world of DRAM, why did it take so long to move to HKMG? If my understanding is correct, they already etch fins and use the surface along the side/bottom between the two fins as the channel (as opposed to logic, which uses the fins themselves as the channel), and there is a move towards vertical nanowire transistors underway for DRAM. If the DRAM industry could adopt these different higher-performing/lower-leakage transistor architectures, how could they have only just now started reaching HKMG at 1x? What about DRAM nodes made them incompatible with gate-last HKMG prior to the 1x/y nodes from Samsung?
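(For readers wanting intuition on why HKMG reduces leakage at a given drive strength, here is a back-of-envelope sketch; the permittivity values are common textbook approximations, not figures from this thread.)

```python
# Back-of-envelope: why a high-k gate dielectric cuts gate leakage.
# Areal gate capacitance is C/A = eps0 * k / t_phys, so a high-k film
# can be physically thicker at the same capacitance, and direct-tunneling
# leakage falls roughly exponentially with physical thickness.

def eot_nm(t_phys_nm: float, k: float) -> float:
    """Equivalent oxide thickness: the SiO2 (k = 3.9) thickness that gives
    the same areal capacitance as a k-dielectric of thickness t_phys_nm."""
    return t_phys_nm * 3.9 / k

# A 4 nm HfO2-like film (k ~ 20 is a common textbook value) behaves,
# capacitance-wise, like sub-1-nm SiO2 -- but is far thicker physically.
print(eot_nm(4.0, 20.0))  # ~0.78 nm EOT from 4 nm of physical dielectric
```

In other words, HKMG buys the electrostatic control of very thin SiO2 without the tunneling leakage of actually thinning the oxide that far.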
 
I recently learned that the first HKMG DRAM only entered HVM relatively recently. Having scanned some IMEC papers from the early finFET era, they just say that HKMG would make for stronger, lower-power DRAM (duh) and that it would require a lot of work. More recent papers make a big deal of "DRAM compatible" HKMG schemes. For anyone on this forum who was/is clued into the world of DRAM, why did it take so long to move to HKMG? If my understanding is correct, they already etch fins and use the surface along the side/bottom between the two fins as the channel (as opposed to logic, which uses the fins themselves as the channel), and there is a move towards vertical nanowire transistors underway for DRAM. If the DRAM industry could adopt these different higher-performing/lower-leakage transistor architectures, how could they have only just now started reaching HKMG at 1x? What about DRAM nodes made them incompatible with gate-last HKMG prior to the 1x/y nodes from Samsung?
It's for the periphery CMOS, not the cell. You have to be careful with the thermal budget.
 
Wouldn’t HKMG lower heat by letting you pump less current to get the same effect? If nothing else, the lower off-current leakage should help, no?
I meant the thermal budget of the process steps that follow the periphery CMOS fabrication. Yes, the HKMG benefit is reducing short-channel effects such as leakage.
 
I meant the thermal budget of the process steps that follow the periphery CMOS fabrication. Yes, the HKMG benefit is reducing short-channel effects such as leakage.
Are the capacitors formed before the periphery CMOS, or is it like logic, where they are formed with the BEOL?
Given how long it has taken for HKMG to make its way to the periphery CMOS, am I right to assume that a simple replacement-metal-gate scheme is not enough to keep the heat from frying your devices? If that is the case, what part is at risk of getting fried?
 
Are the capacitors formed before the periphery CMOS, or is it like logic, where they are formed with the BEOL?
Given how long it has taken for HKMG to make its way to the periphery CMOS, am I right to assume that a simple replacement-metal-gate scheme is not enough to keep the heat from frying your devices? If that is the case, what part is at risk of getting fried?
It's not so serious, since the periphery FEOL is not even FinFET.
 
It's not so serious, since the periphery FEOL is not even FinFET.
Are they made at much larger sizes than the DRAM cell transistors? It was my understanding that the DRAM cell transistors were around N7-sized and moved beyond planar in an effort to minimize leakage at such small sizes.
 
The DRAM cells are (last I read) "saddle" cells which wrap under the word line (gate) to lengthen the effective channel while not increasing surface area. From your perception of a fin, it is more like a fin upside down and inside out, where the fin is the gate down into the surface, and the channel flows around it. The saddle is a W shape where the central top contact (to the data line) is shared by two cells, the dips in the W are the two word-line gates, and two capacitors connect to the outside tops of the W.

There is never any risk of these overheating during operation - the energy per operation is set by capacitor size and charge voltage levels, those transistors are unpowered switches moving around 10 to 20 fJ per operation. The main design problem is to ensure a low leakage while also having 10^7 or better conductivity on/off ratio, so that a 10ns read/write flow is matched to a typical 100ms retention time.
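Those figures hang together arithmetically. A quick sanity check, taking the ~10 fC per operation above as the stored cell charge (my working assumption for the order of magnitude):

```python
# Sanity check on the on/off ratio implied by the numbers above.
Q = 10e-15       # C, ~10 fC of cell charge (order-of-magnitude assumption)
t_rw = 10e-9     # s, the 10 ns read/write window
t_ret = 100e-3   # s, the ~100 ms retention target

I_on = Q / t_rw          # drive current needed to move the charge in one R/W window
I_off_max = Q / t_ret    # leakage that would drain the cell in one retention period

print(I_on)              # ~1e-6 A  -> about a microamp of on-current
print(I_off_max)         # ~1e-13 A -> roughly a 0.1 pA leakage ceiling
print(I_on / I_off_max)  # ~1e7    -> the 10^7 on/off ratio quoted above
```

The same arithmetic also backs the "10 fC in 10 ns is about a microamp" point made later in the thread.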

The periphery, as Fred said, is not burning up in operation either. There are power goals for operation, of course, which can be challenging on DDR signals, but the heat budget is for the capacitor etching, which is part of the BEOL and the most distinctive part of DRAM fabs. I have heard that it can take weeks for wafers to complete the meticulous near-vertical 1200 nm channels at their 40 nm spacing. These are then filled with ALD and final metal over top. The heat budget of so many hours at high temp is what has kept DRAM chips separate from logic processes for the last 30 years. The tradeoffs and material limitations for the peripheral logic have become much more performant over time, but they are still choices that make no sense for a true logic process.
 
The DRAM cells are (last I read) "saddle" cells which wrap under the word line (gate) to lengthen the effective channel while not increasing surface area. From your perception of a fin, it is more like a fin upside down and inside out, where the fin is the gate down into the surface, and the channel flows around it. The saddle is a W shape where the central top contact (to the data line) is shared by two cells, the dips in the W are the two word-line gates, and two capacitors connect to the outside tops of the W.
That was kind of what I meant when I said using the inner surface between fins rather than the fins themselves. It was my understanding that these might soon be replaced with a vertical nanowire structure for even better control and better density (and unlike logic you aren't worried about low current carrying capability).

There is never any risk of these overheating during operation - the energy per operation is set by capacitor size and charge voltage levels, those transistors are unpowered switches moving around 10 to 20 fJ per operation. The main design problem is to ensure a low leakage while also having 10^7 or better conductivity on/off ratio, so that a 10ns read/write flow is matched to a typical 100ms retention time.

The periphery, as Fred said, is not burning up in operation either. There are power goals for operation, of course, which can be challenging on DDR signals, but the heat budget is for the capacitor etching, which is part of the BEOL and the most distinctive part of DRAM fabs. I have heard that it can take weeks for wafers to complete the meticulous near-vertical 1200 nm channels at their 40 nm spacing. These are then filled with ALD and final metal over top. The heat budget of so many hours at high temp is what has kept DRAM chips separate from logic processes for the last 30 years. The tradeoffs and material limitations for the peripheral logic have become much more performant over time, but they are still choices that make no sense for a true logic process.
Maybe not the best phrasing by me; by overheating I did mean the high temps during certain etch processes. Logic had this issue as well and adopted the replacement-gate process flow to sidestep it. As for the capacitor etch times, I couldn't tell you, but that doesn't seem right to me: 200+ layer NAND stacks are etched at even higher aspect ratios, to 12 microns of depth, with etch times measured in hours (from what I can gather, anyway). Spending weeks to etch roughly 1/9 the material at ~75% of the aspect ratio doesn't add up.
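Putting the comparison in numbers, using the depths and pitches quoted in the thread (treating the 40 nm spacing as the hole width is my simplification, so take these as order-of-magnitude only):

```python
# Rough comparison of the DRAM capacitor etch vs a 3D NAND channel etch,
# using figures quoted earlier in the thread.
dram_depth_nm = 1200    # near-vertical capacitor channels, per the post above
dram_width_nm = 40      # quoted spacing, used here as the feature width (assumption)
nand_depth_nm = 12000   # ~12 microns for a 200+ layer NAND stack

dram_ar = dram_depth_nm / dram_width_nm
print(dram_ar)                        # 30.0 -> ~30:1 aspect ratio for DRAM
print(nand_depth_nm / dram_depth_nm)  # 10.0 -> the NAND etch is ~10x deeper
```

So the DRAM hole is an order of magnitude shallower than the NAND channel, which is the crux of the skepticism about week-long etch times.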
 
That was kind of what I meant when I said using the inner surface between fins rather than the fins themselves. It was my understanding that these might soon be replaced with a vertical nanowire structure for even better control and better density (and unlike logic you aren't worried about low current carrying capability).
Well, the 10^7 ratio does imply some current-carrying ability for speed when on: 10 fC flowing in 10 ns is about a microamp. It will be interesting to see if vertical GAA is more compact; perhaps modern litho resolution for these elements is finer than the pitch of the cells anyway.
200+ layer NAND stacks are etched at even higher aspect ratios, to 12 microns of depth, with etch times measured in hours (from what I can gather, anyway). Spending weeks to etch roughly 1/9 the material at ~75% of the aspect ratio doesn't add up.
Yeah, I could well be wrong, and the numbers I heard were how long the wafers took from start to finish; possibly only a small fraction of that is actual etching. I believe the holes are much smaller in diameter than NAND's, fwiw.
 
Are they made at much larger sizes than the DRAM cell transistors? It was my understanding that the DRAM cell transistors were around N7-sized and moved beyond planar in an effort to minimize leakage at such small sizes.
The DRAM array cell uses a buried word line, and the channel is not straight but curves underneath the WL gate, effectively extending Lg. The cell size is of course much smaller than SRAM's smallest cell size of 0.021 um2.
 
HKMG is used in the periphery for high-performance DRAM. Samsung started making DRAM with and without HKMG beginning at 1x, Micron at 1z, and SK Hynix at 1a. Samsung 1x is seven years old, so HKMG isn't that recent an addition. It is only used when needed for performance, due to the cost sensitivity of DRAM.
 
HKMG is used in the periphery for high-performance DRAM. Samsung started making DRAM with and without HKMG beginning at 1x, Micron at 1z, and SK Hynix at 1a. Samsung 1x is seven years old, so HKMG isn't that recent an addition. It is only used when needed for performance, due to the cost sensitivity of DRAM.
Given the high densities of modern DRAM data cells, color me impressed that the U-shaped channel is enough to keep leakage from going out of control without HKMG. Logic-node leakage started going out of control around 2000, so this is quite the feat of channel control with just SiO2.
 