Hi all – I'm interested in the root causes or factors that account for TSMC's ability to deliver shrink after shrink, on schedule, and with excellent performance characteristics.
Conversely, what explains the apparent reality that no one else in the world is able to deliver comparable nodes in the same time frames?
I've seen bits of speculation here and there, mostly about Intel betting too big on a dramatic jump from 14nm to their 10nm (purportedly comparable in density to TSMC's N7, though that may only hold for the nth generation of Intel's 10nm), while TSMC allegedly took a more iterative approach. I'm not sure what that explanation really amounts to, and it doesn't explain why Intel would struggle for so many years to mass produce performant and profitable 10nm products across laptop, desktop, and server. It also doesn't match the reality of the jumps TSMC delivered from its 10nm to 7nm, and then to 5nm and now 3nm. Those would be considered major advances by any foundry, not cautious iterations (especially in terms of density, maybe less so in performance). Nor does it explain why Samsung has struggled so much and evidently can't do what TSMC does, or GlobalFoundries' inability to develop any FinFET node on its own, not even at the 14nm entry point, and its cancellation of its promised 7nm process.
TSMC's ability to develop and mass produce cutting edge FinFET nodes at 7nm and below is apparently unique. That's surprising and scientifically interesting. Combined with Intel's protracted failure, I'm wondering what's going on here. Does TSMC have a large advantage in the R&D process? Does it have a new science of engineering management? What does TSMC do differently from Intel and Samsung? How deterministic is human engineering performance in this domain? For example, if Intel's CEO says that Intel is going to recapture its leadership position, is it actually possible to decide in advance that you're going to succeed at (TSMC-equivalent) 7, 5, and 3nm nodes and then go do it? They presumably decided to succeed at 14, 10, 7nm, etc. nodes in recent years, and yet failed (especially with their 10 and 7nm nodes). What do we know, scientifically, about this kind of R&D performance issue? What would Intel be able to do differently to achieve a different outcome? Is it a talent disparity across firms?
Samsung's case is interesting because they were once at the same level as TSMC in the science and engineering of FinFET processes. Their 14nm node was denser than TSMC's 16nm. I'm not sure about performance differences, but Samsung always took shots at TSMC's node for being derived from one of TSMC's 20nm-class planar nodes, implying it wasn't a true FinFET, or at least not a clean-sheet design. TSMC pulled ahead of Samsung at 10nm, at least on density. (And Samsung's consistently poor performance in SoC development is even more puzzling. Their Exynos SoCs were always dramatically worse than Apple, Qualcomm, and Arm designs. How Apple is able to design much more powerful and efficient SoCs than Qualcomm and Samsung is a related mystery. Samsung has the advantage of owning the SoC design, the process nodes, and the fabs, and yet apparently there's no discernible optimization advantage in that stack? They produced the worst chips by a wide margin. How is Apple years ahead while using someone else's nodes?)
Then TSMC never looked back. By contrast, Samsung hung around at 10nm and is still producing large volumes of its 10nm family (Nvidia's current desktop GPUs use a Samsung "8nm" process, which is part of the 10nm family). Samsung has struggled to improve much beyond that 10/8nm node, with its 7/5nm processes performing very poorly compared to TSMC's 7/6nm nodes, and well behind TSMC's 5nm.
Is there something that happens once foundries venture beyond roughly "14nm"-class FinFET processes? Intel hit a wall there. Samsung hit a wall there (their 10nm is only slightly denser than Intel's 14nm). The precipice seems to be around 50 MTr/mm² (million transistors per square millimeter). I think Intel's 10nm is closer to 50 MTr/mm² in actual products than to the 90–100 MTr/mm² they initially touted. Samsung's 8LPU might be around 60 MTr/mm². Is there something scientifically, technologically, or economically special about venturing beyond that density? How has TSMC overcome it when no one else has?
Thanks.