Dr. Gordon Moore was the Director of Research and Development at Fairchild Semiconductor when he wrote the paper “Cramming More Components onto Integrated Circuits,” published in the April 19, 1965 issue of Electronics. Following this publication, Dr. Carver Mead of Caltech dubbed Dr. Moore’s predictions “Moore’s Law.”
Very few people understand the essence of Moore’s Law or know about the myriad of tangential projections Dr. Moore made in this relatively short paper; these included home computers, automatic controls for automobiles, personal portable communications equipment and many other innovations that at the time may have seemed like science fiction to some readers.
Among Dr. Moore’s projections for Integrated Circuits (ICs) was that “by 1975 economics may dictate squeezing as many as 65,000 components on a single silicon chip.” It took a couple of years longer than the projection, but the first 64Kb DRAM (Dynamic Random Access Memory) was released in 1977 with 65,536 transistors on a “single silicon chip.” That is a remarkable projection since the first commercially viable DRAM was introduced in 1970, five years after Dr. Moore’s paper was published.
The essence of Moore’s Law
While there are a number of projections included in Moore’s Law and virtually all of them panned out to a reasonable degree, there are two projections that are the “essence” of Moore’s Law. If we do a little math, we can add some color to these projections. Below are two quotes from the original 1965 article and my extrapolation of the predictions.
- “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.” This suggests that over the next ten years, we will see transistor (component) density increase by a factor of approximately 1,024.
- “In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.” This projects that while transistor (component) density will double every year, the cost per component will decrease at a rate of about 37% per year (a tenfold reduction over five years compounds to roughly 37% annually). This is important to understand, so let’s take a moment to run through the math. With each doubling of component density there are higher manufacturing costs, but Dr. Moore correctly projected these higher costs would be far more than offset by the annual doubling of density. The result is a net compounded cost reduction of 37% per transistor (component), which works out to a 90% cost decrease in five years and a 99% cost decrease in ten years.
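The compounding behind these two projections is easy to verify with a few lines of arithmetic. A quick sketch (all figures come straight from the quotes above):

```python
# Moore's 1965 cost projection: cost per component falls ~37% per year.
# (A "tenth of the present cost" in 5 years implies 0.1**(1/5) ~ 0.63 retained/yr.)

annual_cost_factor = 1 - 0.37  # fraction of last year's cost retained each year

five_year = 1 - annual_cost_factor ** 5    # cumulative reduction after 5 years
ten_year = 1 - annual_cost_factor ** 10    # cumulative reduction after 10 years

print(f"5-year cost reduction:  {five_year:.1%}")   # ~90%
print(f"10-year cost reduction: {ten_year:.1%}")    # ~99%

# Note how the two projections fit together: density doubles (x2) while cost
# per component falls to ~0.63x, so total manufacturing cost per chip rises
# roughly 26%/yr (2 * 0.63 = 1.26) -- more than offset by the density gain.
```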
Following this ten-year run to 1975, which worked out very similarly in most ways to the projections of Moore’s Law, Dr. Moore reset forward expectations to a doubling of transistor density every 18 to 24 months versus every year. As a result of this remarkable progress, if you live at or above the middle class in a developed nation, there is a very good chance you are a “transistor trillionaire” – with all the electronic stuff you own, you likely have over a trillion transistors.
How Far Have We Come – A case study
When I entered the semiconductor industry in 1976, the dominant DRAM device was the 16Kb (16K x 1) Mostek MK4116(1) (Intel had the 2116, but Mostek was the leading provider). Its active-state power consumption was approximately 0.432 Watts (432mW). Due to the large package sizes used in 1976, you could only fit about 1.5 devices per square inch of printed circuit board (PCB) area. As best as I can recall, the MK4116 sold for about $10 (1976 dollars) in production volume.
(1) While the 64Kb DRAM was released in 1977, its cost per bit remained higher than the 16Kb DRAM until about 1980.
If we extrapolate these data we can see that the typical 16GB (16Gb x 8) memory used in consumer PCs today would cost about $80 million just for the memory chips ($400 million in 2021 dollars), require a PCB that is about 37,000 square feet in size (larger than the 35,000 square foot concourse at Grand Central Station) and would consume about 3,500,000 Watts of electricity. At $0.10 per kWh it would cost over $250,000 per month to power this memory board.(2)
(2) To keep things simple, all the calculations are based on only the 8,000,000 MK4116 DRAMs that would be required to deliver 16GB of memory. In addition to these, a myriad of additional passive and active components would also be required. These components are not included in any of the calculations.
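The thought experiment above can be reconstructed directly from the MK4116 figures in the text. A quick sketch (the exact device count is ~8.4 million; the footnote rounds to 8 million, which is why the results below run slightly higher than the article's rounded figures):

```python
# Back-of-the-envelope: building 16GB of memory from 16Kb MK4116 DRAMs.
# Figures from the text: ~$10/chip, ~0.432W/chip, ~1.5 devices per sq. inch of PCB.

BITS_16GB = 16 * 2**30 * 8            # 16 gigabytes expressed in bits
chips = BITS_16GB // (16 * 1024)      # 16Kb devices needed -> 8,388,608

cost_usd = chips * 10                 # ~$84M in 1976 dollars (text rounds to ~$80M)
power_w = chips * 0.432               # ~3.6 MW (text: ~3.5 MW on 8M chips)
board_sqft = chips / 1.5 / 144        # ~38,800 sq ft of PCB (text: ~37,000)

# Monthly electricity at $0.10/kWh, assuming a 720-hour month
monthly_power_cost = power_w / 1000 * 720 * 0.10   # ~$260,000/month

print(chips, round(cost_usd), round(power_w), round(board_sqft),
      round(monthly_power_cost))
```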
Today, you can buy a 16GB DRAM module for a laptop PC in a retail store for about $40 (about $8 1975 dollars) that is about the size of your index finger and consumes less than 3 Watts of power. This is easily powered from a laptop PC battery, but at $0.10 per kWh, the monthly cost would be a little over $0.20.
Obviously, from so many perspectives (cost, thermal, size and reliability to name a few) it would not only have been impractical, but literally impossible to build a 16GB DRAM memory board in 1976. Of course, it wouldn’t have been useful anyway – the microprocessors available in 1976 could only address 64KB of memory. However, this case study illustrates just how far the industry has come, driven by Moore’s Law, since I joined it.
If we adjust for inflation, our data tell us the advancements predicted by Moore’s Law have led to a roughly 99.99999% reduction in cost (about 30% compounded annually for 45 years) and a roughly 99.9999% reduction in power consumption. And, when you combine these advancements with an even greater reduction in the area required, you can better appreciate what Moore’s Law has not only made possible, but much more importantly, practical and affordable.
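Working backwards from the article's own endpoints (inflation-adjusted memory cost of ~$400M in 1976 down to ~$40 today, and ~3.5MW down to ~3W), we can solve for the compounded annual rates; a rough check:

```python
# Implied compounded annual reduction rates over the 45-year span 1976-2021,
# using the cost and power figures quoted earlier in the article.
import math

def implied_annual_rate(start, end, years):
    """Compounded annual reduction implied by a start/end ratio."""
    return 1 - (end / start) ** (1 / years)

cost_rate = implied_annual_rate(400e6, 40, 45)    # ~30% per year
power_rate = implied_annual_rate(3.5e6, 3, 45)    # ~27% per year

print(f"cost:  {cost_rate:.1%}/yr, total reduction {1 - 40 / 400e6:.7%}")
print(f"power: {power_rate:.1%}/yr, total reduction {1 - 3 / 3.5e6:.6%}")
```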
While it’s fairly straightforward to extrapolate that the advancements in semiconductor fabrication have driven the cost per bit of DRAM down by a factor of about 10 million, it’s much more tedious to estimate the improvement for processors. Industry luminaries who are much smarter than I am have stated that when you consider the advancements in compute architecture that have been enabled by Moore’s Law, the economic efficiency of processor ICs has improved by a factor greater than one billion since the introduction of the 4004 in 1971.
While it is hard to visualize and quantify these improvements with numbers, it is very easy to substantiate that even an average smartphone today has FAR more computing power than all of NASA did when the Apollo 11 mission landed astronauts on the moon in 1969. Think about that the next time you ask Siri, Alexa or Google a question…
Transistor Economics
There are all sorts of fancy words you can use to describe various business models, but I like to keep things as simple as possible. Within any business model, you can divide the costs between “fixed” (capital) and “variable” (marginal). If the model is heavily weighted to variable expenses, there is little scaling (leverage) and profitability runs a fairly linear line with volume. However, if the model is heavily weighted to fixed costs, the model scales (often dramatically) and profitability increases steeply as volume grows.
For example, if you were going to drill for oil, you would have to build a rig and make all the associated capital investments needed to drill for oil (fixed costs), but once it is built and the oil starts to flow, the costs to maintain that flow (variable costs) are very low. In this business model, the high fixed costs are amortized across the barrels of oil that are pumped. The obvious conclusion is the more barrels of oil that are produced, the lower the total cost per barrel (fixed costs are amortized across more barrels of oil).
The somewhat less obvious conclusion is the “marginal cost” of the “next” barrel produced is very low. Since marginal (variable) cost represents the total cost increase to produce one more unit (barrel) and there are no additional fixed costs required, only the variable costs are counted. Obviously, given these data, volume is VERY important in business models that operate with high fixed and low variable costs.
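The oil-rig economics above can be sketched in a few lines. The dollar figures here are purely hypothetical, chosen only to illustrate the shape of a high fixed / low variable cost model:

```python
# Toy illustration of operating leverage in a high fixed / low variable model.
# Hypothetical numbers: $500M fixed (the rig), $2 variable cost per barrel.

FIXED = 500_000_000
VARIABLE = 2.0

def cost_per_barrel(volume):
    """Total cost per barrel: fixed costs amortized over volume, plus variable."""
    return FIXED / volume + VARIABLE

for barrels in (1_000_000, 10_000_000, 100_000_000):
    print(f"{barrels:>11,} barrels -> ${cost_per_barrel(barrels):,.2f}/barrel")

# The marginal cost of the *next* barrel is always just $2 -- the fixed costs
# are already sunk -- which is why volume dominates profitability here.
```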
This classic example of a high fixed / low variable cost business model is more or less aligned with what we see in the classic semiconductor business model. It costs an enormous amount of money to open a leading edge semiconductor fabrication line (measured in tens of billions of dollars today) and designing a relatively complex IC for a leading edge fabrication process (5nm) could easily cost half a billion dollars. However, once the fabrication plant is operational and the IC is in production, the marginal cost for fabricating the next silicon wafer is small relative to these fixed costs.
The semiconductor industry has one huge advantage over the oil industry; unlike oil where there are limitations to the ultimate supply (discovered reserves), there is a virtually endless supply of relatively cheap silicon (the base material for most semiconductor wafers), which means there are solid reasons to continuously drive prices lower to stimulate more demand, and produce more volume.
This phenomenon is demonstrated in the data. Bell Labs produced exactly one transistor in its lab in 1947 and it would take several years beyond that before a handful were produced for limited applications. In 2022, only 75 years later, the semiconductor industry will produce literally hundreds of billions if not trillions of transistors for every man, woman and child on earth and sell them in the form of ICs for infinitesimal fractions of a penny.
There are probably a number of stories behind how this amazing growth trend was launched, but one of my favorites was told by George Gilder in his book, Microcosm.
As the story was related by George, Fairchild Semiconductor was selling a transistor (part number 1211) in relatively small volumes to military customers for $150 each. With a cost of roughly $100, Fairchild made a nice profit. However, given the stringent military specifications, it was left with scrap parts that didn’t meet the customer requirements.
To find a home for these transistors, Jerry Sanders(3), who had been recently promoted to run Fairchild’s consumer marketing group, was tasked to find a buyer willing to pay $5 for the rejects. He found some willing buyers, but in 1963, when the FCC mandated that all new televisions include UHF reception, a huge new market opportunity opened.
(3) Jerry Sanders later left Fairchild to start Advanced Micro Devices (AMD)
The problem here was that even at $5, the consumer version of the 1211 could not compete with RCA’s innovative metal-cased vacuum tube called the Nuvistor that it was offering to TV manufacturers for only $1.05. Sanders tried every angle he could to get around the $3.95 price difference – the consumer 1211 could be soldered directly to the PCB, avoiding the socket the Nuvistor required, and the transistor was clearly more reliable. However, he simply couldn’t close the deal.
Given that the market potential for TVs in 1963 was approximately 10 million units per year, Sanders went to Fairchild headquarters in Mountain View and met with Dr. Robert Noyce at his home in the Los Altos hills. He was hesitant at first to ask for the $1.05 price he needed to close the deal, but once Sanders described the opportunity, Dr. Noyce took the request in stride and after brief contemplation, approved it.
Sanders returned to Zenith and booked the first consumer 1211 order for $1.05. To drive down costs, Fairchild opened its first overseas plant in Hong Kong, designed to handle the anticipated volume, and in conjunction with that developed its first plastic package for the order (TO-92). Prior to this, all 1211s were packaged as most transistors were at the time, in a hermetically sealed (glass to metal sealed) metal can (TO-5).
Once Fairchild had production dialed in, it was able to drop the price to $0.50, and within two years (in 1965) it realized 90% market share for UHF tuners and the new plastic 1211 generated 10% of the company’s total profit. 1965 happened to also be the year that Dr. Moore wrote the article that was later deemed “Moore’s Law.”
The lesson from the 1211 transistor about how to effectively leverage low marginal costs to drive volume was tangential to Dr. Moore’s paper. However, when coupled with the prophecy of Moore’s Law that correctly predicted the cost per transistor on an IC would fall rapidly as fabrication technology advanced, the mold for the semiconductor business model was cast and capital flowed freely into the industry.
The March of Moore’s Law in Processors:
In 1968, three years after “Moore’s Law” was published, Dr. Moore and Dr. Noyce, who is credited with inventing the planar Integrated Circuit (IC) in 1959, left Fairchild to start Intel (INTC). They were soon joined by Dr. Andy Grove, who with his chemical engineering background ran fabrication operations at Intel. Following Dr. Noyce and Dr. Moore, Dr. Grove was named Intel’s third CEO in 1987.
Intel started out manufacturing Static Random Access Memory (SRAM) devices for mainframe computers (semiconductor memories were a part of Moore’s Law predictions), but quickly developed ICs for watches and calculators, and moved from there to general purpose processors. For the sake of continuity, I’ll focus mostly on the evolution of Intel processors in this section.
Intel’s first processor, the 4-bit 4004, was released in 1971. It was manufactured using 10,000nm fabrication technology and had 2,250 transistors on a 12mm2 die (187.5 transistors per mm2). Intel followed this a year later with its first 8-bit processor, the 8008. It used the same process technology as the 4004, but with better place and route, it had 3,500 transistors on a 14mm2 die (250 transistors per mm2).
Intel released its first 16-bit processor, the 8086 in 1978, which introduced the world to the x86 architecture that continues to dominate personal computing and data center applications today.
A year later, Intel released the 8088, which was virtually identical to the 8086, but used an external 8-bit data bus, which made it much more cost-effective to use in the first IBM PC. Both the 8086 and 8088 were fabricated using a 3,000nm process and both had 29,000 transistors on a 33mm2 die (879 transistors per mm2). What’s not widely known or appreciated is the 8086 and 8088 developed such a vast design base outside the PC market that Intel manufactured both ICs until 1998.
Intel released the 32-bit 80386 in 1985, which was fabricated using a 1,500nm process, and with 275,000 transistors and a 104mm2 die size (2,644 transistors per mm2), it far surpassed everything that came before. This marks the first time I remember reading a Wall Street prediction that Moore’s Law was dead. It was several years later when I realized Wall Street opinions about the semiconductor industry were almost always wrong, but that’s a story for another time…
Intel’s current CEO, Patrick (Pat) Gelsinger, covers this history in the linked article: “Pat Gelsinger Takes us on a Trip Down Memory Lane – and a Look Ahead”.
As the years passed, the cadence of Moore’s Law continued, sometimes running more efficiently than others, but with consistency when viewed over the longer term. To make it a little easier to track the progress of Moore’s Law, the following table displays PC processors fabricated on the various processes from 1,000nm to 14nm from 1989 through 2015. Since I don’t have a reliable source for data beyond 14nm for Intel, I stopped there.
| Processor | Year | Fabrication Process | Die Size | Transistor Count | Transistors per mm2 |
|---|---|---|---|---|---|
| 80486 | 1989 | 1,000nm | 173mm2 | 1.2 million | 6,822 |
| Pentium | 1993 | 800nm | 294mm2 | 3.1 million | 10,544 |
| Pentium Pro | 1995 | 500nm | 307mm2 | 5.5 million | 17,915 |
| Pentium II | 1997 | 350nm | 195mm2 | 7.5 million | 38,462 |
| Pentium III | 1999 | 250nm | 128mm2 | 9.5 million | 74,219 |
| Pentium IV Willamette | 2000 | 180nm | 217mm2 | 42 million | 193,548 |
| Pentium IV Northwood | 2002 | 130nm | 145mm2 | 55 million | 379,310 |
| Pentium IV Prescott | 2004 | 90nm | 110mm2 | 112 million | 1,018,182 |
| Pentium IV Cedar Mill | 2006 | 65nm | 90mm2 | 184 million | 2,044,444 |
| Core i7 | 2008 | 45nm | 263mm2 | 731 million | 3,007,760 |
| Core i7 Quad + GPU | 2011 | 32nm | 216mm2 | 1,160 million | 5,370,370 |
| Core i7 Ivy Bridge | 2012 | 22nm | 160mm2 | 1,400 million | 8,750,000 |
| Core i7 Broadwell | 2015 | 14nm | 133mm2 | 1,900 million | 14,285,714 |
This table, and the data above it, illustrate that Intel increased transistor density (transistors per mm2) by an amazing factor of 76,190 in the 44-year span from its first processor (4004) to its Core i7 Broadwell.
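Using the two endpoints from the text, we can back out the effective doubling period over those 44 years; a quick sketch:

```python
# Implied density-doubling period from the 4004 (1971, 187.5 transistors/mm2)
# to the Core i7 Broadwell (2015, 14,285,714 transistors/mm2).
import math

factor = 14_285_714 / 187.5     # ~76,190x density improvement
years = 2015 - 1971             # 44 years
doublings = math.log2(factor)   # ~16.2 doublings

print(f"{factor:,.0f}x over {years} years = one doubling every "
      f"{years / doublings:.1f} years")
# ~2.7 years per doubling -- somewhat slower than the revised 18-24 month
# cadence, consistent with the pace easing over the later decades.
```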
When we consider server ICs (as opposed to just PC processors in the table above), we can see significantly higher transistor counts as well as substantially larger die sizes.
Intel released its first 2 billion transistor processor, the 64-bit Quad-core Itanium Tukwila, in 2010 using its 65nm process. With its large cache memories, the die size was 699mm2 (2.86 million transistors per mm2).
Intel went on to break the 5 billion transistor barrier in 2012 with the special purpose Xeon Phi. It was fabricated using a 22nm process on a massive 720mm2 die (6.9 million transistors per mm2). This is the largest die size I can find for an Intel processor.
The Xeon Phi is one of only three monolithic processors I’ve found that used a die size larger than 700mm2. The other two are the Fujitsu SPARC VII fabricated on a 20nm process(4) in 2017, which used a massive 795mm2 die (6.9 million transistors per mm2), and the AMD (AMD) Epyc fabricated on a 14nm process using a slightly smaller 768mm2 die, but with the smaller fabrication process, it had much higher transistor density (25 million transistors per mm2). The Oracle (ORCL) SPARC M7 was probably larger than the Fujitsu SPARC VII, but I could not find die size data for the Oracle processor.
Intel has a long history of more conservatively stating its fabrication process nodes, which explains why its transistor density at 22nm is approximately the same as Fujitsu’s was for its 20nm SPARC processor.
While the days of microprocessor die approaching the size of a postage stamp are gone, advances in fabrication technology continue to enable higher and higher transistor density. The highest density I can quantify today for a processor is the Apple (AAPL) M1-Max that has 57 billion transistors on its 432mm2 die (131.9 million transistors per mm2) and is fabricated using TSMC (TSM) 5nm technology.
The transistor density of the Apple M1-Max is over 700,000 times greater than that of Intel’s first 4004 processor, and from a technical perspective, that tells us the Moore’s Law prediction of doubling transistor density is still alive, albeit at a slower pace than it once was. However, while transistor density will continue to increase, two things have happened during recent advancements of fabrication technology that are important to understand.
First, my contacts tell me the curve of lower and lower cost per transistor that has been the economic driver for Moore’s Law for over 50 years began flattening after the 10nm fabrication node. This means the days of cheaper transistors offsetting the rapidly increasing fixed costs to design and get a new IC into production are at least numbered if not gone. This means if the primary economic driver of Moore’s Law isn’t dead, it’s on life-support.
Second, the data tell us that processor manufacturers have moved away from the massive die sizes introduced between 2012 and 2017 and even leading processor manufacturers like AMD and Intel have adopted Chiplet strategies. In the case of the Intel Ponte Vecchio, the design includes 47 Chiplets using a variety of fabrication technologies.
Intel: Meteor Lake Chiplet SoC Up and Running
Intel Xeon Sapphire Rapids: How To Go Monolithic with Tiles [Chiplets]
Intel Ponte Vecchio and Xe HPC Architecture: Built for Big Data
AMD ON WHY CHIPLETS—AND WHY NOW
The king is dead, long live the king!
Defect Density (D0) for a given fabrication process is defined as the number of defects per unit of wafer area (total defects on a silicon wafer divided by the wafer’s area) that are large enough to be classified as “killer” defects for the targeted fabrication process. The problem is, as the fabrication node shrinks, so does the size of what counts as a “killer” defect.
In general, a killer defect is defined as a defect that is 20% of the size of the fabrication node. For example, a defect that is less than 9nm may be acceptable for the 45nm fabrication node, but a defect larger than 2.8nm would be defined as a “killer” defect for the 14nm fabrication node. For the 5nm fabrication node, a defect measuring only 1nm could be a killer.
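The 20% rule of thumb described above reduces to one line of arithmetic; a minimal sketch reproducing the examples in the text:

```python
# Rule of thumb from the text: a "killer" defect is ~20% of the node size.
def killer_defect_threshold_nm(node_nm, fraction=0.20):
    """Smallest defect size considered a yield killer at a given node."""
    return node_nm * fraction

for node in (45, 14, 5):
    print(f"{node}nm node -> killer defects >= "
          f"{killer_defect_threshold_nm(node):.1f}nm")
# 45nm -> 9.0nm, 14nm -> 2.8nm, 5nm -> 1.0nm
```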
This is one of the primary reasons that it has become increasingly difficult to yield large monolithic ICs (as measured in die area) when using leading edge fabrication process technology(5). We can see evidence of this in the data above, which show die sizes for processors peaked during the six-year span running from 2012 to 2017, when the state of the art was moving from 22nm to 14nm.
Memory devices, FPGAs, GPUs and some specialized Machine Learning (ML) ICs are subject to the same yield challenges. However, these ICs contain billions of identical cells (function blocks). To optimize yields, the ICs that still use monstrous die sizes are commonly designed with redundant cells that can be either masked or programmed in to replace cells that don’t yield. It is unclear if this trend will continue.
There are a variety of opinions as to when Defect Density became an insurmountable issue. From what I’ve read, it appears to have entered the equation in the 22nm-to-14nm window; below 14nm the data suggest it became significant, and beyond that, a problem that will only get worse.
Given that a large die is more likely to have a defect within its borders than a small die, there is an inverse correlation between die size and yield, and the trend will become even more vexing as fabrication technology advances to smaller and smaller nodes.
This problem was highlighted by TSMC during Q2 2020 when it was running test wafers for its new 5nm fabrication node. Following these tests, TSMC stated its average yield for an 18mm2 die was ~80%, but that yield dropped dramatically to only 32% for a 100mm2 die. As has been the case throughout the reign of Moore’s Law, TSMC has improved its yield since these early tests, but in spite of that, I’m sure the yield at 5nm remains less favorable than the yield at larger fabrication nodes, and the trend going forward is clear: the era of large monolithic die has passed.
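Those TSMC test-wafer numbers are roughly consistent with the classic Poisson yield model, Y = exp(-A x D0). A sketch (this is a textbook approximation, not TSMC's actual yield model):

```python
# Fit a defect density D0 from the small-die data point, then predict the
# large-die yield. Poisson model: yield = exp(-die_area * D0).
import math

def fit_defect_density(yield_frac, die_area_mm2):
    """Back out D0 (defects/cm^2) from an observed yield and die area."""
    area_cm2 = die_area_mm2 / 100
    return -math.log(yield_frac) / area_cm2

def poisson_yield(d0, die_area_mm2):
    return math.exp(-d0 * die_area_mm2 / 100)

d0 = fit_defect_density(0.80, 18)     # from the 18mm2 / ~80% data point
print(f"implied D0 ~ {d0:.2f} defects/cm^2")
print(f"predicted yield at 100mm2: {poisson_yield(d0, 100):.0%}")
# ~29% predicted vs the ~32% TSMC reported -- close for so simple a model.
```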
Several years before TSMC released early data on its 5nm process, AMD CEO Dr. Lisa Su presented the Defect Density problem in a very simple graph at the 2017 IEEE International Electron Devices Meeting (IEDM). This graph shows the increase in cost per yielded mm2 for a 250mm2 die size as AMD moved forward from 45nm to smaller fabrication nodes. The understated conclusion is that increasing die sizes become economically problematic, and once you go below 14/16nm, the yielded cost increases dramatically.
Defect Density is not a new problem – it has literally existed since day one. However, lessons learned have always pushed it forward beyond the current fabrication node and the ability to cure yield problems at the current node is what drove Moore’s Law for over 50 years. While you can rest assured there are continued efforts to reduce the impact of Defect Density at leading edge fabrication nodes, there are five reasons that suggest the Chiplet trend is not only here to stay, but that it is also poised to expand rapidly and enable new market opportunities.
(1) There have been very significant investments in Chiplets to reduce assembly costs and optimize performance. While there are inherent cost and performance penalties when you move a design away from a single-chip monolithic piece of silicon, it appears performance penalties will be minimized and cost penalties will be more than offset as Chiplet technology is fully leveraged.
(2) The Universal Chiplet Interconnect Express (UCIe) consortium has specified a die-to-die interconnect standard to establish an open Chiplet ecosystem. The charter members of the consortium include: ASE, AMD, Arm, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and TSMC. UCIe is similar to the PCIe specification that standardized computing interfaces. However, UCIe offers up to 100 times more bandwidth, 10 times lower latency and 10 times better power efficiency than PCIe. With this standard in place, I believe we’ll see a flood of new Chiplets come to market.
(3) With the release of its Common Heterogeneous Integration and Intellectual Property Reuse Strategies (CHIPS) program in 2017, the Defense Advanced Research Projects Agency (DARPA) was ahead of the Chiplet curve. The goal for CHIPS is to develop a large catalog of third party Chiplets for commercial and military applications that DARPA forecasts will lead to a 70% reduction in cost and turn-around time for new designs. The DARPA CHIPS program extends beyond leveraging the benefits of incorporating heterogeneous fabrication nodes to also incorporating heterogeneous materials in a Chiplet design.
(4) The magic of Moore’s Law was that the fabrication cost per transistor would decline far more than fixed costs increased as fabrication technology advanced. I can’t find data to quantify this, but I can find wide agreement that the declining fabrication cost curve flattened around 10nm and that it is heading in an unfavorable direction. Since advanced fabrication costs are increasing, a Chiplet strategy enables IC architects to target leading edge (expensive) fabrication nodes only for the portions of Chiplet designs that absolutely need the highest possible performance and target other portions of Chiplet designs to fabrication processes that are optimized for low power and/or low cost.
(5) Chiplet designs can accelerate time to market, lower fixed costs, lower aggregate fabrication costs for a given design and leverage architectures that can be extended and/or changed over time. In other words, Chiplet designs provide unique flexibilities that are not economically viable in monolithic designs. This trend will become more apparent and accelerate as we see new UCIe-compliant Chiplets introduced.
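The yield argument behind points (1) through (5) can be made concrete with the same Poisson model used earlier. The numbers here are hypothetical (an assumed D0 of 0.5 defects/cm2 and a clean four-way split with no packaging or interconnect overhead), chosen only to show the shape of the trade-off:

```python
# Why splitting a big die into Chiplets helps: Poisson yield = exp(-area * D0).
# Hypothetical: D0 = 0.5 defects/cm^2; one 700mm2 die vs four 175mm2 chiplets.
import math

D0 = 0.5  # assumed defects per cm^2

def die_yield(area_mm2, d0=D0):
    return math.exp(-d0 * area_mm2 / 100)

monolithic = die_yield(700)    # only ~3% of the big dies are good
chiplet = die_yield(175)       # ~42% of the small dies are good

print(f"monolithic 700mm2 yield: {monolithic:.0%}")
print(f"175mm2 chiplet yield:    {chiplet:.0%}")

# Silicon cost per good design scales with area / yield. Ignoring packaging
# and interconnect overhead, the chiplet split needs far less wafer area
# per good unit (~0.07x in this toy example):
print(f"relative silicon cost (chiplet / monolithic): "
      f"{(4 * 175 / chiplet) / (700 / monolithic):.2f}")
```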
Not only are manufacturers facing a Defect Density yield challenge that correlates directly with die size; as you can see from the following graph, the fixed costs associated with designing and moving a new complex monolithic IC into production have skyrocketed along with advances in fabrication technology. In other words, the data suggest we have hit a tipping point, and Chiplets are the answer: not only to the challenges of yield and higher costs, but also a way for the semiconductor industry to open new market opportunities.
While my focus in this paper has been on processor ICs (mostly Intel processors for the sake of continuity), increasing fixed costs and the inverse correlation between yields and die size are impacting System on a Chip (SoC) designs too. There is already evidence that MediaTek will move to a Chiplet design at 3nm with TSMC for its smartphone Applications Processor (AP) and my bet is Qualcomm has a Chiplet design brewing that it has yet to make public.
With UCIe standardization and the DARPA CHIPS program, SoC manufacturers that target the vast array of markets beyond smartphone APs will adopt Chiplet designs to lower costs, shorten development cycles and increase flexibility. This will open new opportunities for support chip manufacturers and a wide variety of IP companies.
I believe we will also see IP companies expand their traditional market approach by leveraging the new UCIe specification to “harden” their IP into known good die (KGD) and effectively sell their IP as a hardware Chiplet directly to semiconductor manufacturers and IC fabrication companies as well as OEM customers that develop their own Application Specific Chiplet.
One of the more interesting things I think Chiplets will enable is SoCs for new markets that don’t have the volume, or are too fragmented, to drive a several-hundred-million-dollar investment in a monolithic IC design. These include a wide variety of IoT, AI and Machine Learning (ML) opportunities where FPGA technology can be used for accelerators that quickly adapt to changing algorithms and provide the design flexibility needed to extend market reach and SoC lifecycle.
Chiplets can also enable SoC solutions for new and existing markets by providing scalable processor solutions and other customer specific options (add more processor cores, add an accelerator, add more memory, even change / update the RF section for a new standard, etc.). These sorts of changes and flexibilities were virtually impossible with monolithic IC designs.
Bottom Line: Without the benefit of declining variable costs (lower fabrication costs per transistor) offsetting sharply higher fixed costs and the increased complications of Defect Density, Moore’s Law is over as we’ve known it. However, as it has in the past, the semiconductor ecosystem is adapting and as Chiplet technology builds traction, we will very likely see a period of accelerating innovation and new market opportunities opening as we move forward.
The point here (tipping point if you will) is that Chiplets open new doors for creativity and the continued broadening of technology in how we live and work. We have reached a point where we no longer need to think only about what makes sense for monolithic IC designs that are hindered with ultra-high fixed costs and painfully long lead times; we can now focus on heterogeneous Chiplets that leverage new open standards to optimize designs for the ultimate cost and performance dictated by the use case.
When you couple these new benefits with the standardization of UCIe and the DARPA CHIPS program, there is great potential to open new markets and new use cases that have yet to even see the back of a cocktail napkin.
Also Read:
UCIe Specification Streamlines Multi-Die System Design with Chiplets
Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC
Five Key Workflows For 3D IC Packaging Success