Interesting tidbit from the story (source is "industry analysts"):
"However, according to some industry analysts, the 18A process is roughly equivalent to TSMC's so-called N3 manufacturing technology, which went into high-volume production in late 2022."
18A used Intel's restrictive design rules to achieve parity with TSMC (and to boost yields). Once you relax the design rules to a more industry-standard set, the performance advantage disappears. This means there is a new performance gap to overcome: the de-restriction gap between how Intel designs its x86 custom parts and how fabless houses design against a standard PDK.
My fear is that a slimmed down Intel won't have the resources to choose between custom and standard PDK. If they go all-in on standard PDK, they lose yield and performance on x86, with no assurance they will gain external customers. It must be agonizing to leave yield and performance on the table.
Becoming fabless and supplying legacy nodes are both acceptable outcomes. It is good that Lip-Bu is reining in further capital spending. If it were still under PG, I don't know where he would have led Intel.
Cost-saving initiatives, including the capital spending reductions for 2025, were started under PG, not LBT! No need to twist history to suit our own frame of mind.
There are capital expenditures and there are operating expenditures.
Under PG, before he was "retired", Intel guided for $1B in cost-of-goods-sold savings for 2025, a $17.5B opex target, and a $20B capex target with $10B from partner contributions (net capex of $10B).
LBT has adjusted these targets a little so far: $17B opex ($500M of savings) and $18B capex (net capex of $8B, a $2B saving). No doubt the Q2'25 earnings call will revise these further down if another 20% head-count reduction is required (another $1B to $1.5B reduction in opex), but any more reduction in capex is dangerously low (imo) if they are serious about being a foundry.
Note that selling Altera alone will lead to some head-count reduction. The reported axing of Intel Automotive will add to that, and I recently learned that Lip-Bu is comparing Intel's org structure to Broadcom's under Hock Tan and AMD's under Lisa Su to inform head-count reductions.
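For what it's worth, the arithmetic behind those targets can be laid out explicitly (a quick sketch; all figures are the estimates quoted in this thread, not official guidance):

```python
# Budget figures quoted above (USD billions); treat them as the poster's
# estimates, not official Intel guidance.
pg_opex = 17.5           # 2025 opex target set under PG
pg_gross_capex = 20.0    # gross capex target under PG
partner_offset = 10.0    # partner contributions offsetting capex
pg_net_capex = pg_gross_capex - partner_offset    # 10.0

lbt_opex = 17.0          # revised opex target under LBT
lbt_gross_capex = 18.0   # revised gross capex under LBT
lbt_net_capex = lbt_gross_capex - partner_offset  # 8.0

opex_savings = pg_opex - lbt_opex                  # 0.5 (i.e. $500M)
net_capex_savings = pg_net_capex - lbt_net_capex   # 2.0 (i.e. $2B)
print(f"Opex savings: ${opex_savings:.1f}B, net capex savings: ${net_capex_savings:.1f}B")
```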
My semi-informed $0.02 as a process engineer and not integration or defmet. Though that does mean I've sat in plenty of meetings where performance and defect issues in my section of the line are discussed, so I'm not totally ignorant.
TSMC uses industry-standard PDKs and yields just fine. I don't believe that implementing industry-standard PDKs has to equate to reduced yields.
And there is no real magic to Intel's performance leadership. It has traditionally come from being ahead of the rest of the industry. For example, they introduced FinFETs at 22nm ahead of everyone else and then refined them at 14nm to stay ahead of the curve. They are looking to do the same thing with BSPD: introduce the first generation at 18A, then refine it at 14A to further improve performance.
It is my belief that Intel's restrictive design rules allow them to compensate for smaller design windows. Allowing smaller design windows than the industry standard makes it easier to develop the process. They can say this flow gets us what we need, we'll just pass some of the pain on to the design team with a more restrictive design rule and move on to the next thing. Then like the 22nm fin-fet example above they will refine the process over the next generation and relax some of the restrictions. At the same time they will introduce something new in that next generation that requires new restrictions to the design rules.
With TSMC using standard design rules, TSMC can't do that; they have to start with the up-front assumption that their process must fall in a less restrictive design space. As a consequence, they have to approach their process development differently. That is what will cause Intel pain. Intel has an army of integrators who have learned to do the job one way and now need to learn a new way to approach process design.
Curious - after 10nm, there's only TSMC, Samsung, and Intel. Is TSMC using "industry standard" PDKs with 7nm, 5nm, 3nm, or are they really just setting the standard because they're the leader at this point?
Does "industry standard" mean that there are certain ("less restrictive") design rules that carry across nodes, and that it's Intel with something more restrictive than this generally?
I believe that in terms of PDKs, the "industry standard" refers to the tools that are used to do the design. I've been told that Intel has historically used proprietary, home-grown tools for their design process, while TSMC uses tools that are used throughout the industry. It is the widespread use of these tools that makes them industry standard.
I have to disagree. A design rule is the result of a decision made during the design of the process flow that imposes restrictions on how the design can be laid out and still yield.
Metal gate would be a prime example of the counterpoint to your argument. TSMC and the rest of the industry chose to go gate first when metal gate was initially implemented. This resulted in a simpler process flow which I'm sure was less restrictive and more flexible.
Intel chose gate last which meant that a dummy gate had to be put in place, much of the transistor structure was built around that dummy gate and finally the dummy gate was etched out and replaced with a metal gate.
My suspicion is that due to the complexity of the gate last process there were more restrictions on how the transistors could be laid out in the circuit design to offset the added process complexity.
As things worked out there were more problems getting gate first to yield and gate last proved to be the better option. Eventually everyone went to gate last. But it was the process flow that dictated the design rules.
Where I think you do have a point is that Intel's adherence to Copy Exactly has been taken to the extreme and has become a straitjacket. It still has a place during process transfer: you need a fixed target to work against when a new fab is ramping up. However, once that fab is ramped, refusing to let it contribute to process improvement does add extra points for degree of difficulty.
I don't dispute there is a grain of truth to this, but Intel does work to improve their processes after TD is done. Failure to work on yield is failing to increase your profits and no corporation wants to do that. The catch at Intel is that until TD exits a process and moves on to the next one TD acts as the gatekeeper for what can and cannot be done. This does indeed result in a slower rate of improvement than TSMC experiences. Intel will have to change that if they want to be successful as a foundry and it is my understanding that they are starting to take steps to address this issue.
I think the notions of a “standard PDK” and “PDK quality” are both tricky because a PDK is the complete design tool and methodology embodiment of a process node (and specifically one specific variant of that node). It reflects a lot of things:
* Core process design
* EDA tool support
* Underlying foundational IP - transistors, std cells, memories, I/Os
* DTCO - design technology co-optimization of all three above for one or more design points in the PPA space
* Associated rules to get full entitlement and yield at those design points.
It used to be that Intel would develop the whole stack above for a single processor design point using a highly tuned methodology that was a mix of commercial and proprietary tools. But they have made a bunch of leaps in the past few years moving to all commercial EDA, more generalized foundational IP (not just the IP required for a specific processor design), and better DTCO using IP outside of Intel’s design space (Arm, RISC-V, 3rd party interface IP).
I would suspect that right now the challenge on the "standard PDK" and "PDK quality" side is mostly the hard stuff associated with doing DTCO for a wide range of design points using the same process, which, among many things, includes keeping the rules simple enough that they can be used for different design styles. There are also important niceties, like doing DTCO such that 3rd-party IP can survive a design shrink without redesign, with only recharacterization. It really takes having a broad swath of representative designs and IP in house (or at least passed through) to do that level of DTCO effectively.
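To make the bullet list above concrete, here is one way to picture everything a PDK variant bundles together as a single data structure (field names are invented for illustration; no real foundry deliverable looks like this):

```python
from dataclasses import dataclass

@dataclass
class PDK:
    """Illustrative sketch of what one PDK variant bundles.

    Fields mirror the list above: core process, EDA support, foundational
    IP, DTCO design points, and the rules tied to those points. All names
    here are hypothetical."""
    process_node: str                # one specific variant of a node
    eda_tools: list[str]             # supported EDA flows
    foundational_ip: list[str]       # transistors, std cells, memories, I/Os
    dtco_design_points: list[str]    # PPA corners co-optimized with the IP
    design_rules: dict[str, float]   # rules for full entitlement and yield

pdk = PDK(
    process_node="example-node-HP",
    eda_tools=["synthesis", "place-and-route", "signoff"],
    foundational_ip=["std cells", "SRAM", "I/O"],
    dtco_design_points=["high-perf CPU", "mobile SoC"],
    design_rules={"min_metal_width_nm": 20.0},
)
print(len(pdk.foundational_ip))  # 3
```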
Folks argue that Intel is important to the US given that it's one of two US-based companies doing leading-edge logic semiconductor research (the other being IBM Research). Given the broken culture, terrible execution, and unsustainable finances, should the US government just give up, withdraw CHIPS Act money from Intel, and beg TSMC to be more aggressive about ramping leading-edge nodes faster in Arizona? It seems like Intel Foundry makes Boeing look like a competent and well-run manufacturer.
Yield... Widen the metal a bit: that increases via enclosure and makes it easier on the end-of-line rule. Perhaps add a dummy at the end of the diffusion. Perhaps the fab guys can comment on the yield hit of putting the contact above the thin diffusion.
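The kind of tweak described above is easy to picture as a design-rule check. Below is a minimal sketch of a via-enclosure check (hypothetical rule values and a toy rectangle representation; real DRC decks are vastly more involved):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # Axis-aligned rectangle in nanometers: (x1, y1) lower-left, (x2, y2) upper-right.
    x1: float
    y1: float
    x2: float
    y2: float

def enclosure_ok(metal: Rect, via: Rect, min_enc: float) -> bool:
    """Check that `metal` encloses `via` by at least `min_enc` nm on every side.

    Widening the metal increases this margin, which is exactly the yield
    lever mentioned above."""
    return (via.x1 - metal.x1 >= min_enc and
            metal.x2 - via.x2 >= min_enc and
            via.y1 - metal.y1 >= min_enc and
            metal.y2 - via.y2 >= min_enc)

# Hypothetical numbers: a 40 nm via on a 60 nm metal pad, 10 nm enclosure rule.
metal = Rect(0, 0, 60, 60)
via = Rect(10, 10, 50, 50)
print(enclosure_ok(metal, via, 10.0))  # True: exactly meets the rule
print(enclosure_ok(metal, via, 12.0))  # False: would need wider metal
```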
We create the full automation tools and PDKs (primitives, std cells) and migrate analog (ADCs, DACs, SerDes, PLLs, bandgaps, WiFi) from SiGe (gf9hp and Tower) down to TSMC 16 / GF14 with optimizers and layout automation. We move between processes at almost the touch of a button, but we cannot work with foundries that don't run an MPW. We and other mixed-signal companies could have helped them.
It is ironic. I live across the fairway from a company I should be working with, but they are now run by the major investor in my enemy. This is an antitrust case waiting to happen.
What was the last IBM process that yielded well and had good volumes? This is the reason I don't have confidence in Rapidus; they should have asked TSMC or Intel.
That’s fair about IBM, but I think the question remains. Would you advise the US government to continue pouring money into Intel via the CHIPS Act while they struggle, or should the government reconsider and reallocate the money to TSMC instead?
Perhaps Intel's focus should also be more on 3D packaging as the next decisive thing for running a successful foundry, rather than only/just on the 14A PDK?
However, TSMC being located in Taiwan (and with Arizona ramping fast, including 2 new packaging fabs in their $165B GFab plans), and being tightly integrated/connected with the AI-driven 3D-packaging tech push by NVIDIA, may have some decisive advantages in the packaging ecosphere relative to Intel?
Read this always-interesting view from a Taiwan newsletter of 4 July (!!); on photonic integration the author writes:
There’s also the issue of mechanical effects. Warpage itself is a manifestation of mechanical stress. And then there’s optics. Last year, TSMC began integrating silicon photonics — optical components — into its packaging systems. Since light is more energy-efficient and transmits faster than electricity, combining optical and electrical elements can significantly reduce power consumption.
TSMC demonstrated a package that consumed 2,400 watts using only electrical signaling, but with photonic integration, the same system could be brought down to just 850 watts.
Taiwan has been working on packaging technologies for over 20 years. Today’s major technology drivers, such as AI, high-performance computing, autonomous vehicles, big data analytics, and smart healthcare, all depend heavily on computing power.
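Taking the quoted figures at face value, the photonic package works out to roughly a 65% power reduction:

```python
electrical_w = 2400  # package power with electrical signaling only (figure quoted above)
photonic_w = 850     # same system with photonic integration (figure quoted above)
reduction = 1 - photonic_w / electrical_w
print(f"{reduction:.0%}")  # prints 65%
```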
Struggle is part of development. Either Intel will come out of the struggle better or they will suffer another great setback; only time can tell. But what I can tell you is that Intel's problem is not tech: as of now it's mostly money and culture.
There’s also the issue of mechanical effects. Warpage itself is a manifestation of mechanical stress. And then there’s optics. Last year, TSMC began integrating silicon photonics — optical components — into its packaging systems. Since light is more energy-efficient and transmits faster than electricity, combining optical and electrical elements can significantly reduce power consumption.
Huh? The speed of electrical transmission through copper is 95% of the speed of light, so for all practical purposes at the circuit level optics do not transmit signals significantly faster than electricity. Also optics are only more energy efficient with silicon photonics, which are nascent. If you need optical transceivers you need more power, have more cost, and higher latency.