
ReRAM Revisited

by Bernard Murphy on 11-06-2019 at 6:00 am

I met with Sylvain Dubois (VP of Business Development and Marketing at Crossbar) at TechCon to get an update on his views on ReRAM technology. I'm really not a semiconductor process guy, so I'm sure I'm slower than the experts to catch on to revelations in this area. But I do care about applications, so I hope I can add an application spin on the topic, along with Sylvain's views on differentiation from the Intel Optane and Micron 3D XPoint products (in answer to a question I get periodically from Arthur Hanson, a regular reader).


I'll start with what I thought was the target for this technology and why, apparently, that was wrong. This is non-volatile memory, so the quick conclusion is that it must compete with flash. ReRAM has an advantage over flash in not requiring that a whole block or page be rewritten on a single-word update. Flash memories require bulk rewrites and should therefore wear out faster than ReRAM memories, which can be rewritten at the bit level. ReRAM should also deliver predictable update latency, since it doesn't need the periodic garbage collection flash requires. Sounds like a no-brainer, but the people who know say that memory trends always follow whoever can drive the price lowest. Flash has been around for a long time; ReRAM has a very tough hill to climb to become a competitive replacement in that market.
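The wear-out argument above comes down to write amplification: how many bytes the device must physically rewrite for each byte the host actually changes. A minimal sketch of that arithmetic, using an assumed, typical NAND erase-block size rather than any vendor's published numbers:

```python
# Illustrative write-amplification arithmetic. The geometry here is an
# assumed, typical NAND figure -- not a Crossbar or vendor specification.
WORD_BYTES = 4                   # a single 32-bit word update
FLASH_BLOCK_BYTES = 256 * 1024   # NAND rewrites in large erase blocks

def write_amplification(payload_bytes: int, granularity_bytes: int) -> float:
    """Bytes physically rewritten per byte the host actually changed."""
    units = -(-payload_bytes // granularity_bytes)  # ceiling division
    return units * granularity_bytes / payload_bytes

flash_wa = write_amplification(WORD_BYTES, FLASH_BLOCK_BYTES)
reram_wa = write_amplification(WORD_BYTES, WORD_BYTES)  # bit/word-addressable

print(f"flash: {flash_wa:.0f}x, ReRAM: {reram_wa:.0f}x")
```

With these assumed numbers, a one-word update costs flash tens of thousands of times more physical writes than a bit-addressable memory, which is why block-rewrite devices wear faster under small random updates.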

Given this, where does Sylvain see ReRAM playing today? The short answer is embedded, very high-bandwidth memory sitting right on top of an application die – no need for a separate HBM stack. He made the following points:

  • First, flash can’t do this; embedded flash is barely at 28nm today, whereas applications are already at much lower nodes. ReRAM is a BEOL (back-end-of-line) addition and is already proven at 12nm.
  • (My speculation) I wonder if this might be interesting to crossover MCUs which have been ditching flash for performance and cost reasons. Perhaps ReRAM could make non-volatile memory interesting again for these applications?
  • Power should be much more attractive than for SRAM, since ReRAM has no leakage current.

These characteristics should be attractive for near-memory compute in AI applications. AI functions like object recognition are very memory-intensive, yet must maintain the highest performance and lowest power, both in datacenters and at the edge. Even at the edge it is becoming more common to support memory-intensive training updates, such as adding a new face to recognize when someone checks in at a lobby. Requirements like this are pushing toward embedding more memory at the processing-element level (inside the accelerator) and connecting HBM buffers directly to those accelerators for bulk working storage. Both needs could be met through ReRAM on top of the accelerator, able to connect at very high data rates (50GB/sec) directly to processing elements or tiles where needed.

A different application is in datacenters as a high-density alternative to DRAM, a sort of pre-storage cache between disk/SSD and the compute engine. In this case ReRAM layers would be stacked in a memory-only device. Apparently this could work well where data is predominantly read rather than written. Cost should be attractive – where DRAM runs $5-6/GB, ReRAM could be more like $1/GB. Which brings me back to Intel and Micron. Both deliver chips, not IP, so this should be in their sweet spot. I suspect the earlier comment about size and price winning in memory will be significant here. ReRAM may succeed as a pre-storage cache, but it will most likely come from one of the big suppliers.
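The price figures quoted above make the economics easy to check. A back-of-envelope sketch, using a hypothetical cache size (the $/GB numbers are from the article; the 512GB figure is an assumption for illustration):

```python
# Back-of-envelope cost comparison using the $/GB figures quoted above.
# The cache size is a hypothetical example, not a real deployment.
CACHE_GB = 512          # assumed pre-storage cache capacity
DRAM_PER_GB = 5.5       # midpoint of the quoted $5-6/GB range
RERAM_PER_GB = 1.0      # the quoted ~$1/GB figure

dram_cost = CACHE_GB * DRAM_PER_GB
reram_cost = CACHE_GB * RERAM_PER_GB
print(f"DRAM: ${dram_cost:.0f}, ReRAM: ${reram_cost:.0f}, "
      f"saving {1 - reram_cost / dram_cost:.0%}")
```

At these prices the ReRAM option is roughly a fifth of the DRAM cost for the same capacity, which is why a read-mostly cache tier is the natural fit.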

Another AI-related application Sylvain mentioned, one especially helped by the Crossbar solution, is massive search across multi-modal datasets. We tend to think of recognizing single factors – a face, a cat, a street sign – but in many cases a multi-factor identification may be more reliable: recognizing a car type plus a license plate plus the face of the driver, for example. This can be very efficient if the factors can be searched in parallel, which is possible with the Crossbar solution since it allows accessing 8k bits at a time.
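The multi-factor idea can be sketched in a few lines: each factor's search returns a candidate set (in hardware, these lookups could run as parallel wide-word searches), and an identification is accepted only where all factors agree. The function and data below are hypothetical, purely to illustrate the combination step:

```python
# Hypothetical sketch of multi-factor identification: intersect the
# candidate sets returned by each factor's (conceptually parallel) search.
from typing import Dict, Set

def multi_factor_match(candidates_per_factor: Dict[str, Set[str]]) -> Set[str]:
    """Return only candidates that every factor's search agreed on."""
    sets = iter(candidates_per_factor.values())
    result = set(next(sets))
    for s in sets:
        result &= s
    return result

hits = multi_factor_match({
    "car_type": {"id_17", "id_42", "id_93"},  # cars matching the observed model
    "plate":    {"id_42"},                    # plates matching the OCR read
    "face":     {"id_42", "id_55"},           # faces above similarity threshold
})
print(hits)  # only the candidate satisfying all three factors survives
```

Requiring agreement across independent factors is what makes the combined identification more reliable than any single match.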

Particularly for embedded AI applications, I think Crossbar should have a fighting chance. Neither Intel nor Micron is interested in being in the IP business, and neither is likely to become the dominant player in AI solutions, simply because there are too many solutions out there for anyone to dominate, at least in the near term. Crossbar will have to compete with HBM (and GDDR6 at lower price points), but if they can show enough performance and power advantage, they should have a shot. Consumers of these solutions have very deep pockets and are likely to adopt (or acquire) whatever gives them an edge.

You can learn more about Crossbar HERE.
