Japan’s newly formed chip foundry venture Rapidus Corp. said it is seeking to invest several trillion yen to help reboot the country’s semiconductor industry. Backed by Toyota Motor Corp., Sony Group Corp. and six other Japanese companies, the Tokyo-based venture signed a partnership with International Business Machines Corp. to develop the US firm’s leading-edge 2-nanometer technologies. Rapidus said it will begin mass production of the chips in 2027 at a plant it plans to build in Japan.
The company secured an allocation of 70 billion yen ($510 million) in subsidies from the country’s Ministry of Economy, Trade and Industry last month. For comparison, the US government is spending more than $50 billion to rebuild its chip production capabilities.
“This is a start,” Rapidus President Atsuyoshi Koike said at a news conference Tuesday, adding that the company will seek continued government support. “We will need to invest several trillion yen.”
Sector leader Taiwan Semiconductor Manufacturing Co. plans to mass produce 2nm chips in 2025, while Samsung Electronics Co. began mass production of 3nm chips in June. While Japan is home to some of the world's leading suppliers of semiconductor-making equipment, its domestic factories are generations behind. Rapidus is also backed by auto parts maker Denso Corp., memory chipmaker Kioxia Corp., MUFG Bank, NEC Corp. and telecom firms Nippon Telegraph & Telephone Corp. and SoftBank Corp. The chip venture previously announced a partnership with Belgium-based microelectronics research hub IMEC on advanced semiconductor technologies.
“We are confident that we will be able to secure the support of the Japanese government and that of the trade ministry,” Koike said. “The ramifications will be huge if we cannot achieve 2nm capabilities.”
My guess is that this is supposed to be an integrated circuits research cartel, electric boogaloo. Their investment might get them something similar in size to IBM/NY's college of nanoscale science (especially if the tools are donated by Japanese semi firms). The problem is I don't know what the point is. The leading edge is so expensive that it is no longer like the '70s, where you could just jump in. What would they even make if they did? All of the Japanese firms are fab-lite at this point. And even if all these questions were answered, IBM is evidently not the best research partner. While I'm sure Samsung/GF are not free of blame for their process issues, at some point one must ask whether the problem is not at least partially IBM's IP not being manufacturable. Samsung has (and GF once had) a bunch of brilliant engineers, with Samsung arguably being the DRAM HVM leader. In my opinion it can't all be their fault.
This is such a strange initiative to me. Not only is the investment wholly inadequate for a task of this magnitude, but the players behind it have zero practical business being on the leading edge of semis, much less their fabrication at scale, profitable or otherwise. What is even the goal of this? To prove they can do it? What are the economic prospects of such an endeavour? Very strange.
It gets you started. Nothing wrong with baby steps. I suspect that Japan will be a popular place for Taiwanese process engineers to move to. Expertise can be acquired quickly. I wouldn't underestimate Japan. They are an industrious people.
They are probably putting their big $$$ on 28-16 and packaging, as they should.
Thanks Dan. I'm super curious about Japan semiconductor news. It is hard to get news, and my impression is Japanese firms are quietly much more sophisticated and advanced than we hear about in English-language media.
IBM keeps popping up in the news in the context of these new facilities. The cloud hyperscalers all roll their own silicon (IBM having a much longer, deeper history of this than Amazon or Google). I wonder if Amazon and Google are planning to get a share in this facility as well. As the world rapidly becomes more protectionist, is this the path to establish a Japan presence: Build silicon at Rapidus, deploy it in a cloud facility in Japan. The alternative being difficulty getting around the protections of doing business in Japan as an outside firm.
I spent part of my early career working with Japanese semiconductor companies. I love the country, the food, the people; it is one of the safest places I have been to. I was not impressed with the companies, however. This was in the 1980s and early 1990s, and coming from Silicon Valley, Japan just had a very different semiconductor culture: much more conservative, very friendly, many many more meetings, dinners, drinks, and lots of handshaking with no deals made. I have not been back in quite a few years so things may have changed. Quite the opposite of business in China; Taiwan is somewhere in the middle in this comparison. Silicon Valley was much more hard-core, driven by greed and ego, which quickly churned out one innovation after another. We will probably never again see anything like the fabless transformation we experienced in Silicon Valley. Just my opinion of course.
But yes, bragging, leaking, and click-based media is not popular in Japan, yet. I remember when Starbucks first came to Japan I did not think it would do well. I was very very wrong.
A senior Japanese lawmaker said on Friday that Taiwan Semiconductor Manufacturing Co, the world's largest contract chip maker, is considering building a second plant in Japan in addition to an $8.6 billion facility now under construction. Yoshihiro Seki, secretary general of a ruling...
You said the only company that benefits from a deal with IBM is IBM, while Intel benefitted by becoming the sole provider of microprocessors for the original IBM PC (the 8088). I doubt Intel would be the company it became as a dominant player in CPUs without the IBM relationship. If IBM had chosen the technically superior Motorola 68000, I wonder how different the industry would be today. IBM's selection of the Microsoft-sourced MS-DOS (or PC-DOS) operating system for the PC was similarly important for Microsoft's ascension in the industry. Amazing, since MS-DOS was not originally created by Microsoft, and was a cheap acquisition Gates made. While these are examples from a long time ago, Intel and Microsoft are perhaps the two most important beneficiaries of industry partnerships in computing history, and they were with IBM.
That is my personal experience. The IBM/Intel thing was before my time. I would also say IBM did not get the best of the GF deal either. I have worked with IBM on numerous occasions and never had a win/win. I was even on the right side of a patent dispute with IBM. They were clearly in the wrong, we had email proof, an actual apology from the IBM head who infringed, yet IBM legal treated us like a baby does a diaper.
I wasn't trying to argue, I was only saying that the INTC/MSFT partnerships with IBM did pay off for those two companies. Normally I wouldn't bring up a couple of exceptions, but they were history book class exceptions.
While I still have a lot of respect for the innovation I see in IBM z-systems, I don't mind mentioning that I have seldom agreed with IBM's business decisions over the years. If I were an IBM stockholder, I would have thought the decisions to give away technology leadership underlying the PC were terrible. The same goes for the sale of their ThinkPad and x86 server businesses to Lenovo, which Lenovo has managed to make very successful. Who were they afraid of? Dell? And as for the lack of broad commercial exploitation of the Power-series microprocessors, which I think might be the best server CPUs out there, well, let's just say that, contrary to what Warren Buffett thought a while back, I haven't considered and can't consider IBM investible. Too many questionable business decisions over the years, and the hits just keep on coming.
They never had tech leadership in the PC; that was kind of the problem. They required Intel to deliver a chip with a halved bus width so the performance would not challenge other IBM systems. Never mind systems with other chips, the PC was crap compared to Chuck Peddle's Sirius 1 or the Convergent workstations on the 8086, which came out earlier. All the PC had going for it was credibility in business, and the runaway hit Lotus 1-2-3, which bet on IBM for business with down-and-dirty physical compatibility requirements. dBase and WordStar early on made the same bets, sealing the business market for the "IBM compatible PC" in North America (Europe and Japan were MS-DOS-portable until after the clone era).
The arrival of clean-roomed BIOS and hardware-compatible clones that survived legal challenge broke down the business wall, not the (non-existent) technology wall. IBM in the late 80s tried to construct a technology wall with the PS/2 and its Micro Channel bus, which eventually became PCIe, only to find itself on the outside of the wall they themselves had created. With no business wall, no technology wall, and minicomputers eating the mainframe market, they retrenched to consultancy (which Ballmer referred to as their "corporate pension plan") and the sunset but forever-walled island of Power fka Mainframe. Lenovo was simply useful cash from a direction they would not go.
That's not really accurate. The original IBM PC had an I/O bus called ISA, which was invented by IBM. Intel invented PCI in the early 1990s to replace ISA, and kicked off the PCISIG industry group and donated the specifications to the SIG to align the industry, including the OS driver model, which was really the important part. Every significant company in the computer industry joined the SIG.

Since the PCISIG is a one-company-one-vote organization, the specs for PCI eventually went off in directions Intel didn't like, primarily PCI-X, and in the early 2000s Intel created a project to define what became the PCIe specification. The most important aspect of PCIe, other than it being a scalable point-to-point link modeled after the InfiniBand x-wide concept, was that it was PCI software driver compatible (no waiting for an adapter software ecosystem to emerge). To get the industry bought in, the announcement was made as if the PCIe spec was developed in a SIG working group code-named Arapahoe, which was odd, since that sounds more like an Intel-internal project name.

I'm not sure how much of the packet format and transaction layers were influenced by the industry, though I suspect very little. The industry influence in PCIe was most visible in the PHY specification, because they demanded that PCIe use the InfiniBand Architecture 1.0 PHY. Very odd. Fortunately the SIG came to its senses in PCIe Gen3 and beyond, and now the PCIe PHY is arguably on the shortlist of the most important PHY specs in the computer industry (Ethernet, PCIe, DDR, USB/Thunderbolt). UCIe and CXL use the PCIe PHY.
ISA was pretty much identical to the Intel I/O bus, which went back at least to the 8080. I was working at board level with an 8086 design before the PC was announced, and if there were differences they were minor. The signals were driven by the CPU and they used Intel standard support chips. The biggest difference I noticed was that IBM screwed up the interrupts by assigning the BIOS to vectors reserved by Intel for future chips (the 80186 in particular was a really nice integrated chip that IBM blocked), and they screwed up the 8087 by assigning it the same vector as NMI. Ugh. I'm not saying IBM did not tweak ISA somehow, just that it was not significant. The underlying design came from Intel, and they probably shared ideas with other designs of the mid-70s. No technology wall unless you were trying to make an add-on and had to meet license terms on a patent; that is business, not tech advantage.
Thanks for setting me straight on the origins of PCIe. I was confusing it with MCA, which apparently had little or no influence on PCIe. MCA did, however, cause IBM to lose momentum because IBM was not "IBM-compatible".
I'm pretty sure UCIe uses, or at least offers, a PHY designed for ultrashort distances to get the very low energy per bit. The transport and link layers must have been simplified too in order to reduce the latency.
I remember evaluating an Intel design with two chiplets adjacent in a package connected by EMIB but using PCIe, a couple of years ago. The power and latency were monstrous for that purpose. UCIe was the fix, shortly after.
I can't get a copy of the UCIe 1.0 spec, but I could have sworn I read that the UCIe 1.0 PHY was shared with PCIe 6.0, and 6.0 is different from 5.0. Perhaps I'm mistaken. I did a few searches and can't verify anything specific about the PHY, except for an Intel VP saying UCIe will be a whole lot faster and lower power per bit than PCIe, but he didn't give specifics either.
Intel has shared more details on a new interconnect that is the foundation of the company’s long-term plan for x86, Arm and RISC-V architectures to co-exist in a single chip package. The semiconductor company is taking a modular approach to chip design with the option for customers to cram...
www.hpcwire.com
Edit: This article is better, and it looks like I'm mistaken about PCIe 6.0 at the PHY level: