Competitive Foundry Memory Discussion

Samsung may gain technological advantages over TSMC's 16nm if it can leverage its lead in memory chips by integrating memory with the 14nm A9 better than TSMC can.

However, so far I am not aware of any such developments. If anyone knows of such info, please post. Thanks.
 
I think this is an interesting topic. Embedded memory is a very big part of SoC design (SRAM), but that may change with eDRAM, ReRAM, CBRAM, etc., as well as DRAM on package. Memory offerings can be a serious competitive advantage in the foundry business between TSMC, Intel, Samsung, and GF/IBM.

Being from the SRAM side of things, I have a pretty good idea of what the future holds. As for the other memory technologies, not so much, so hopefully others will weigh in here.

One thing I can tell you is that the big SoC companies, including Apple, generally do not use commercial SRAM anymore. You can do a quick search on LinkedIn to see who has internal SRAM teams to double-check. I'm pretty sure Apple made the switch at the A8. When doing your own SRAM you still use the foundry bitcell, but you design and verify everything else around it. In-house SRAM is not just a cost saver for the high-volume SoC companies (no royalties); there are also competitive advantages.

For SoCs you may have noticed that SRAM sizes are increasing and design speed is critical, so I'm not sure eDRAM or on-package DRAM will be competitive.

 
If a customer wanted to make a custom memory chip, or some design with a ton of Flash, perhaps Samsung would have an advantage. But embedded memories in a logic process.... eh.
 
Any opinion on how the Crossbar memory that TSMC developed the production process for figures into this?

Yes, one potential challenger to Samsung's dominance in memory chips is the combination of Crossbar RRAM and TSMC's process.

However, in the past two decades, many alternative memory schemes have been proposed and researched. None has turned into a commercially competitive solution compared to DRAM and NAND. It remains to be seen whether RRAM is a viable one.

Crossbar made a presentation at the December IEDM.

Monday, December 15, 2014
Crossbar Demonstrates Breakthrough Resistive RRAM at IEDM 2014

Please post, if anyone has more RRAM info.
 
Agree, DRAM/NAND are the current champs and will have to be soundly beaten for any new memory technology to design them out. Don't get me wrong, we need a new memory technology to move the industry forward. After all, haven't the great leaps forward always been stimulated by new forms of memory?

PCs were enabled by DRAM and mobile by NAND.
 
Samsung is currently the only producer of V-NAND, a premium type of NAND that stacks 32 layers of cells built at a ~40nm node to reach densities comparable to 1x-nm planar NAND, but with speed and endurance advantages. That could conceivably provide a competitive advantage for Samsung. TSMC doesn't have V-NAND.
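As a rough sanity check on that density claim, here's a back-of-envelope sketch. It assumes an idealized 4F² cell per bit per layer; real bit density also depends on actual cell pitch, periphery overhead, and bits per cell, so these numbers are illustrative only:

```python
# Back-of-envelope NAND density: an idealized cell occupies
# cell_factor * F^2 of area per bit per layer (F = feature size in nm).
def bits_per_um2(feature_nm, layers=1, cell_factor=4.0):
    """Approximate storage density in bits per square micron."""
    cell_area_nm2 = cell_factor * feature_nm ** 2   # nm^2 per bit per layer
    return layers * 1e6 / cell_area_nm2             # 1 um^2 = 1e6 nm^2

planar_1x = bits_per_um2(16)            # 1x-nm planar NAND
vnand = bits_per_um2(40, layers=32)     # 32-layer V-NAND at a ~40nm node

print(f"planar 16nm      : {planar_1x:7.0f} bits/um^2")
print(f"V-NAND 40nm x 32 : {vnand:7.0f} bits/um^2")
```

Under these toy assumptions the 32-layer stack more than recovers the density given up by staying at a relaxed 40nm node, which is the essence of the V-NAND argument.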
 
benb, I think the discussion here is about embedded memory implemented in a logic process. IBM has been using eDRAM for a few generations as on-chip L3 cache. Starting from 22nm, Intel did eDRAM too, but it is still a standalone chip packaged with the logic in the same package. Embedded non-volatile memory, mostly in the form of NOR, typically lags a few nodes behind in integrating onto a logic process; most foundries are developing their 40nm version.
 
OK, khaki, I follow what you're saying. If IBM has eDRAM on-chip, what might Samsung or others have up their sleeves to compete? I read something a few years ago about embedded FRAM, something TI came up with. FRAM is a non-volatile memory with speed advantages over NAND; compared to NOR or SRAM, not so much.

Echoing previous comments, I think it's hard to break the hold that SRAM has on on-chip memory, DRAM on main memory, and NAND on non-volatile storage. There are believers in STT-MRAM, though: it offers speed and low power, plus non-volatility. Everspin is also listing an embedded MRAM on their home page.
 
On-chip eDRAM is a huge cost burden for a logic process because it touches the front end. RRAM involves only the back-end process, which is much cheaper.
FRAM materials might not be compatible with current lines. MRAM also introduces additional materials to the line, and the cell size could be big.
Before hailing RRAM, we need to ask whether embedded memory on chip is necessary at all. I personally feel it is much safer than stand-alone memory.
 
IBM's claim has been that as long as the L3 is at least 10% of the chip, replacing it with eDRAM leads to a net cost reduction. Of course, this is probably true for IBM's products; for the general market, something like 20-25% might be more realistic. The trench DRAM only adds one critical mask to the process. Starting from 22nm, IBM has been using special SOI wafers with an n+ epi layer under the BOX, which adds to the starting-wafer cost but simplifies the process. After the trenches are made and filled, the wafer is transparent to the downstream FEOL process.
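To make the 10% vs. 20-25% numbers concrete, here is a toy first-order cost model. The ~3x eDRAM-over-SRAM density gain and ~5% wafer-cost adder are my illustrative assumptions, not IBM's figures:

```python
# Toy model: replace the SRAM L3 (fraction f of die area) with eDRAM
# that is density_gain times denser, at the price of a slightly more
# expensive process. Die cost is assumed proportional to die area.
def relative_die_cost(f, density_gain=3.0, process_adder=0.05):
    """Cost of the eDRAM version relative to 1.0 for the all-SRAM die."""
    new_area = (1 - f) + f / density_gain
    return new_area * (1 + process_adder)

def break_even_fraction(density_gain=3.0, process_adder=0.05):
    """Solve relative_die_cost(f) == 1 for f."""
    return (1 - 1 / (1 + process_adder)) / (1 - 1 / density_gain)

print(f"cost at f=0.10         : {relative_die_cost(0.10):.3f}")
print(f"cost at f=0.25         : {relative_die_cost(0.25):.3f}")
print(f"break-even L3 fraction : {break_even_fraction():.1%}")
```

With these made-up inputs the break-even fraction lands around 7%, in the same ballpark as the ~10% claim; a larger process-cost adder (say 15%) pushes it toward the 20-25% range.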

Compatibility with the standard process is something all embedded memories need to comply with. Memories integrated in the BEOL need to limit their processing temperature to 400°C or lower; memories integrated in the FEOL, such as trench eDRAM or flash, need to withstand the FEOL thermal budget. But a more important decision factor is what we expect from the memory. If used as a cache, as is the case for eDRAM, you need high (effectively infinite) endurance, very high density (tens of MB), and fast access time (~1ns or less), but no data retention. You might be able to tweak some of the "non-volatile" memories to meet these requirements, i.e., sacrifice data retention for better access time and endurance. The typical eNVM, however, is geared for data retention at the cost of limited endurance and slow access time.
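That requirements split can be captured in a simple screen; the thresholds below are rough orders of magnitude pulled from the paragraph above, and the eDRAM-like candidate's specs are hypothetical placeholders:

```python
# Requirement sets: cache-class memory (eDRAM territory) vs. typical eNVM.
CACHE_REQS = {"endurance_cycles": 1e15,        # effectively infinite writes
              "access_ns_max": 1.0,            # ~1 ns or less
              "retention_s_min": 0.0}          # no retention needed
ENVM_REQS  = {"endurance_cycles": 1e5,         # limited endurance is fine
              "access_ns_max": 50.0,           # slow access tolerated
              "retention_s_min": 10 * 365 * 24 * 3600}  # ~10-year retention

def meets(candidate, reqs):
    """True if a candidate memory's specs satisfy a requirement set."""
    return (candidate["endurance_cycles"] >= reqs["endurance_cycles"]
            and candidate["access_ns"] <= reqs["access_ns_max"]
            and candidate["retention_s"] >= reqs["retention_s_min"])

# Hypothetical trench-eDRAM-like cell: huge endurance, ~1 ns access,
# but data lives only until the next refresh.
edram_like = {"endurance_cycles": 1e16, "access_ns": 1.0, "retention_s": 0.1}
print(meets(edram_like, CACHE_REQS), meets(edram_like, ENVM_REQS))
```

The same screen applied in reverse shows why tweaking an eNVM into a cache means trading retention away for endurance and access time.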
 