
Intel and SoftBank Launch Saimemory Joint Venture Targeting High-Bandwidth Memory Alternatives for AI and Revitalizing Japan's Chip Industry

Daniel Nenni


American chip giant Intel has teamed up with Japanese tech and investment powerhouse SoftBank with a grand objective: to build a new, highly competitive alternative to High-Bandwidth Memory (HBM), the memory that feeds the majority of today's AI processors. The initiative takes shape under a brand-new joint venture named Saimemory, and it is already attracting attention.

According to reports from Nikkei Asia, Saimemory will draw heavily on Intel's technology and on patents from Japanese academic institutions, including the prestigious University of Tokyo, toward an ambitious plan: a working prototype and a solid path to mass production by 2027, with the goal of putting the new memory on the market before the decade closes.

You may ask: so what's the fuss about HBM? It can move enormous amounts of data, which is precisely what AI GPUs consume, and its speed makes it excellent for fast temporary storage. But HBM has its disadvantages:
  • It is complicated to manufacture, which makes it expensive.
  • It can get quite hot during operation.
  • It can be quite power-hungry.
The Intel-SoftBank partnership hopes to tackle all of these problems. Their idea involves stacking standard DRAM chips and finding cleverer, more efficient ways to connect them. If they pull it off, they believe the new stacked DRAM could use as little as half the power of a comparable HBM chip. That is a massive potential saving.
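The "half the power" claim can be put in rough perspective with a back-of-envelope calculation. The energy-per-bit figures below are illustrative assumptions for HBM-class memory, not numbers published by Saimemory or Intel:

```python
# Back-of-envelope check of the claimed savings, using hypothetical
# (not Saimemory-published) energy-per-bit figures.
HBM_PJ_PER_BIT = 4.0                           # assumed cost for HBM-class memory, pJ/bit
STACKED_DRAM_PJ_PER_BIT = HBM_PJ_PER_BIT / 2   # the reported "half the power" claim

bandwidth_gb_s = 819                           # roughly one HBM3 stack, GB/s, for scale
bits_per_second = bandwidth_gb_s * 1e9 * 8

def memory_power_watts(pj_per_bit: float) -> float:
    """Power drawn by the memory interface when running at full bandwidth."""
    return pj_per_bit * 1e-12 * bits_per_second

hbm_w = memory_power_watts(HBM_PJ_PER_BIT)
alt_w = memory_power_watts(STACKED_DRAM_PJ_PER_BIT)
print(f"HBM-class: {hbm_w:.1f} W, stacked-DRAM alternative: {alt_w:.1f} W")
```

Per stack the difference looks modest, but multiplied across eight or more stacks per accelerator and tens of thousands of accelerators per data center, halving the interface power adds up quickly.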

If the venture succeeds, SoftBank is reportedly expected to secure priority access to the end product, which sounds reasonable given the current AI chip frenzy. At the moment, only three manufacturers – Samsung, SK hynix, and Micron – make the latest HBM chips, and given the demand, procuring enough HBM can be quite a challenge.

Saimemory could step in with a solid alternative, especially for Japanese data centers. The venture also matters to Japan because it is a genuine attempt to re-establish the country as a major player in the memory chip market, a position it has not held for over two decades. Back in the 1980s, Japanese companies were the kings of memory, accounting for about 70 percent of the world's supply. Competition from South Korea and Taiwan changed that, but this initiative signals that the ambition is back.

Saimemory is not the only player looking to bring 3D stacked DRAM to market. Samsung has already announced plans along those lines, while another company, NEO Semiconductor, has been doing its own research on its 3D X-DRAM. Those efforts, however, seem more focused on maximizing capacity – think memory modules holding a whopping 512GB.

Saimemory, by contrast, is laser-focused on cutting power consumption – a critical need for data centers, whose energy demand keeps climbing as AI workloads intensify. A more power-efficient memory solution could be a game-changer.

Keep an eye on this alliance between Intel and SoftBank. If Saimemory takes off, it has the potential to reshape AI hardware architecture, and it would mark a grand return for Japan to the global memory chip stage.

 
HBM is way more efficient than regular JEDEC DDR DRAM in per-bit power consumption. What are they talking about?
 
More details here:

SAIMEMORY plans to develop next-generation memory that consumes significantly less power and costs less to produce while offering more than twice the storage capacity of current leading-edge memory chips. The company’s organizational structure reflects a strong collaborative effort: SoftBank will oversee financial operations as CFO, Intel will manage technology as CTO, and a University of Tokyo scientist will serve as CSO (Chief Science Officer). The CEO will be a former Toshiba executive, and operations are scheduled to begin in earnest from July 1st.

SAIMEMORY seeks to address this with a hybrid approach combining innovations from both Intel and the University of Tokyo. Intel contributes a chip stacking technology developed in collaboration with DARPA, the U.S. Department of Defense’s advanced research agency, which significantly reduces power consumption. Meanwhile, the University of Tokyo provides a high-speed data transmission technology that increases data flow between memory and GPUs by widening the data channel, thereby boosting performance and reducing costs.
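The "widening the data channel" point is easy to sanity-check: peak memory bandwidth is simply interface width times per-pin data rate. A minimal sketch, using HBM3's published 1024-bit-per-stack interface and a hypothetical doubled-width channel (Saimemory's actual figures are not public):

```python
# Peak bandwidth = interface width (bits) x per-pin data rate (Gb/s),
# which is why widening the data channel directly boosts throughput.
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a memory interface."""
    return bus_width_bits * pin_rate_gbps / 8  # divide by 8 to convert Gb to GB

hbm3 = peak_bandwidth_gb_s(1024, 6.4)    # HBM3: 1024-bit interface, 6.4 Gb/s per pin
wider = peak_bandwidth_gb_s(2048, 6.4)   # hypothetical doubled-width channel
print(f"HBM3 stack: {hbm3:.1f} GB/s, doubled-width channel: {wider:.1f} GB/s")
```

The appeal of widening the interface rather than raising the pin rate is that each pin can run slower for the same total bandwidth, which tends to reduce signaling power – consistent with the venture's low-power pitch.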

The company anticipates spending approximately 15 billion yen in its initial research and development phase. For mass production, it is considering outsourcing to semiconductor manufacturers in Taiwan. One potential future scenario is for Japan’s own next-generation chipmaker, Rapidus, to handle the production of SAIMEMORY’s new chips—an ecosystem that would consolidate Japan’s domestic semiconductor ambitions.

In an interview with TV Tokyo, SoftBank’s head of semiconductor development expressed confidence in the project, citing strong demand for low-power, low-cost memory solutions. When asked about potential collaboration with Rapidus, the spokesperson said it would depend on future technical developments but expressed optimism that the partnership could work.

As for manufacturing partners, I wonder if Micron or Winbond would be their partners? As far as I know, those are the only Taiwanese DRAM players, and this seems like an alternative memory cube/packaging technology rather than a process technology or device level innovation. But maybe they were thinking about using TSMC's upcoming BEOL oxide DRAM? But that would of course be higher risk than just using standard DRAM. But of course lower risk than using the upcoming IGZO memory from Samsung or Kioxia due to the obvious competitive concerns.
 
As for manufacturing partners, I wonder if Micron or Winbond would be their partners? As far as I know, those are the only Taiwanese DRAM players, and this seems like an alternative memory cube/packaging technology rather than a process technology or device level innovation. But maybe they were thinking about using TSMC's upcoming BEOL oxide DRAM? But that would of course be higher risk than just using standard DRAM. But of course lower risk than using the upcoming IGZO memory from Samsung or Kioxia due to the obvious competitive concerns.

Nanya Technology is another Taiwanese DRAM manufacturer.

What are the benefits for Intel, in terms of revenue and profit, if product manufacturing is eventually outsourced to Taiwan?

If Intel is heavily involved in developing and setting the standard for this new memory, it may negatively impact the industry's willingness to adopt it. The reason is simple: Intel competes with many of the same companies that are potential major buyers of this new memory product.
 
More details here:
The packaging technology co-developed with DARPA doesn't look like it can easily be licensed to any partners.
 
Nanya Technology is another Taiwanese DRAM manufacturer.
Thanks for the correction!
What are the benefits for Intel, in terms of revenue and profit, if product manufacturing is eventually outsourced to Taiwan?
Intel doesn't have a modern DRAM process technology in manufacturing to supply the dies needed to assemble a memory cube. The closest they had was the long out-of-production 22nm eDRAM used in some mobile CPUs in the 2010s. So with no internal DRAM manufacturing, outsourcing is the only option. Intel also doesn't exactly have the money to build out and fully fund uncertain moonshot projects right now. So if the options are to let Intel's more peripheral technologies rot on a shelf, or to bring in an external investor for two-thirds of the funding and use external manufacturing rather than spending more capex entering the DRAM market with fabs that risk sitting empty – at a time when Intel Products is too battered to fund any more investment – I guess you might as well do it this way.

I'm sure the Intel of 10 years ago would without a doubt have done this all internally, and done some JV to manufacture the DRAM too. But the Intel of 10 years ago would also have let it sit on a shelf, because they wouldn't make this product unless Intel's CPU design teams committed to using it; and Intel designers wouldn't use it, because they refuse to do anything new or unproven unless somebody else does it first and starts whipping their existing designs (IMC, 64-bit, the low-power focus of the 2000s, multi-core, abandoning race-to-idle, big.LITTLE, homogeneous integration to exceed reticle limits, on-package memory, the low-power focus of the 2020s – the list goes on).
If Intel is heavily involved in developing and setting the standard for this new memory, it may negatively impact the industry's willingness to adopt it. The reason is simple: Intel competes with many of the same companies that are potential major buyers of this new memory product.
1) Intel has set most of the industry's computing standards, and there was no resistance in the past due to Intel being a CPU manufacturer. Even for modern, post-monopoly-era things like UCIe, Intel has a strong hand.

2) The University of Tokyo owns the transmission IP, so I assume their part has more to do with how the stack connects to compute.

3) They said it uses DRAM and that the only special thing is how it is stacked. So presumably a chip won't really know the difference between regular HBM and Saimemory's HBM alternative.
The packaging technology co-developed with DARPA doesn't look like it can easily be licensed to any partners.
Yes, I thought the same, but here we are... So I guess it is kosher, or at the very least DARPA has given its blessing. Maybe it was developed independently and later used to meet some DARPA requirement, but isn't directly classified? Or maybe the University of Tokyo also worked with DARPA on this Intel technology, so they have the green light?
 
Now, it sounds like "we want independence from HBM suppliers"

We want HBM, but without greedy premiums from HBM vendors.
 
Nanya Technology is another Taiwanese DRAM manufacturer.

A few DRAM generations ago, the DRAM space still looked very comfortable for third-tier players that lagged by years and lived off single-digit but predictable margins.

DRAM wasn't a big user of bleeding-edge tools, and somewhat derated products from Elpida, Nanya, Hynix, etc. were very well received by garage factories in Shenzhen.

Now Samsung has EUV steppers making memory to squeeze whatever is still possible out of the physical limits. I don't think there is any way for companies without the newest tools to keep following the market on the cheap like they did before. This is why I think Elpida bailed.
 