Surely TechInsights (Scotten Jones) has better cost information than SemiAccurate (a rumor site).
And their assumption based on the currently available information is that 18A has a cost advantage. Scotten has made this very projection on SemiWiki before. Once there are 18A and N2/A16 chips in the public to tear down, it will be fairly easy to figure out roughly where the true wafer costs lie with a high degree of confidence. The 20A/18A BEOL is supposedly cheaper than the Intel 4/3 BEOL (per Intel at VLSI 2023). The Intel 4/3 BEOL should be significantly cheaper than the N3E or N2 BEOLs (with the same number of metal layers) due to coarser pitches and simpler metallization. Both the N2/A16 and the 20A/18A FEOLs are 1st-gen GAA, and N2 is widely believed to be denser than 18A. So just with the publicly available information, I think it is also plausible to assume that 18A should be cheaper to produce than N2 and especially A16. From a structural cost perspective, the rest comes down to MEOL complexity and tool selection.

If we want to look at complete wafer costs, we would also need to look at wafer yield (note this is different from the die yield people most commonly talk about), tool uptime, uptime variability, consumables, manufacturing OPEX, total volume, wafer loadings, and whether the fab is a greenfield or brownfield site. But those latter points change over time, so structural cost is the most apples-to-apples way of evaluating a technology.
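To illustrate what I mean by a structural cost comparison, here is a minimal toy model in Python. Every layer count and per-layer cost below is a made-up placeholder, not real Intel or TSMC data; the point is only how FEOL/MEOL/BEOL complexity rolls up into a relative wafer cost.

```python
# Toy structural-cost model: all layer counts and per-layer costs are
# hypothetical placeholders, purely to show how FEOL/MEOL/BEOL complexity
# adds up into a relative wafer cost.

def structural_wafer_cost(feol_layers, meol_layers, beol_layers,
                          feol_cost_per_layer, meol_cost_per_layer,
                          beol_cost_per_layer):
    """Relative wafer cost as the sum of per-layer processing costs."""
    return (feol_layers * feol_cost_per_layer
            + meol_layers * meol_cost_per_layer
            + beol_layers * beol_cost_per_layer)

# Hypothetical processes A and B with the same metal layer count, where B's
# BEOL is denser (finer pitches -> more expensive per layer).
process_a = structural_wafer_cost(25, 4, 15, 100, 80, 60)
process_b = structural_wafer_cost(25, 4, 15, 100, 80, 85)

print(f"Process A relative cost: {process_a}")
print(f"Process B relative cost: {process_b}")
print(f"B / A cost ratio: {process_b / process_a:.2f}")
```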
However, the things I think TSMC could teach them are how to design a cost-effective process and how to run their fabs efficiently. Intel has been focused on making the fastest chips for so long that I think they have something to learn about where the balance between speed and cost lies. I'm also of the opinion that TSMC runs their fabs far more efficiently.
Intel's fab group did what was good for Intel, the CPU IDM. Take metal layers, for example. CPUs want many upper back-end metal layers due to routing complexity and high PDN requirements. If we talk iso-process, but one chip has more metal layers and the other has fewer, well yeah, the higher-layer-count wafer will ALWAYS have a higher cost and lower yield. Now, is that to say there is no room for improvement? ABSOLUTELY NOT. Even though Intel has had some cost victories over the years (like Intel 22nm being significantly cheaper to produce than the comparable 16FF from TSMC or 14LPP from Samsung), Intel 7 is a shining example of being way more expensive to produce than it has any right to be, even for a CPU-optimized 7"nm" process.

But my point was more that when what the customer wants is performance at all costs and is okay paying a reasonable premium for it, then that is how things will be. The manufacturing organization in an IDM will always optimize for its customer's needs in the same way a foundry would. When you have many customers like TSMC does, that pushes you toward something that is broadly desirable to your main customers and easily extendable to many different use cases with some additions here or there. When you are an IDM, it becomes specialized for whatever you make, because doing anything else would be a waste of time or money. Now that the incentive structure is different (i.e. the customer base is not just Intel products), the tradeoffs made will be somewhat different as the factory organization optimizes to the new reality. Not much different from how the design side said they were starting to lower test times, do more pre-Si validation so they could use fewer steppings, and design products to be more cost conscious rather than focusing everything on performance, now that they have to pay TSMC, Samsung Foundry, and Intel Foundry margins for things they used to get at cost or for "free".
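As a quick, hedged illustration of the metal-layer point: the per-layer cost adder, per-layer line yield, and base cost below are all hypothetical numbers, but they show why more metal layers always means higher wafer cost and lower line yield on the same process.

```python
# Iso-process comparison at different metal layer counts.
# All dollar figures and the per-layer line yield are hypothetical.

per_layer_cost  = 120.0    # $ added per metal layer, hypothetical
per_layer_yield = 0.998    # fraction of wafers surviving each layer, hypothetical
base_cost       = 8_000.0  # FEOL/MEOL cost before metallization, hypothetical

for metal_layers in (12, 15, 18):
    cost = base_cost + metal_layers * per_layer_cost
    line_yield = per_layer_yield ** metal_layers   # compounding per-layer losses
    print(f"{metal_layers} metal layers: wafer cost ${cost:,.0f}, "
          f"line yield {line_yield:.1%}")
```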
Intel has always run their fabs to build-to-projection based on what the client group said they would sell. Running a fab to build-to-order is a different game and again Intel could learn a lot from TSMC here.
How is this any different? Intel Products projects a demand and tells the factory group to build some capacity. Ideally the capacity gets built, and whatever the actual demand turns out to be determines utilization. TSMC gets wafer agreements from their customers, and TSMC builds factories to support that. The only difference is that TSMC customers are contractually bound to use it, and termination carries a hefty fee. If a firm does cancel its orders, TSMC still needs to find someone to use the capacity, or it sits underutilized (see TSMC's non-N3 lines after the post-COVID demand crash).
Intel also has to overcome their CPU history. CPUs allow production at lower yields early in the process life
Margin stacking is the main part of the IDM advantage. As an example, if I am Micron, I would be stupid not to run my product as soon as it is profitable to do so. Let's say my lead product for a new DRAM node is some 4GB LPDDR5 die. For easy math, say that while ramping, the die yield needed to sell this LPDDR5 die for a profit is 40%, and the yield where the cost per bit drops below the prior node is 60%. If I start HVM at 40% yield, I will make less money per chip. But I am building scale, which lowers my wafer costs dramatically and will make it easier to get my yield up above 60%. This newer DRAM node will also have lower power consumption and faster speeds, so I will have a better chance of winning business from smartphone and laptop makers with this newer LPDDR product than with my last-gen one. All in all, when you are the one operating the fab, it just makes more sense to gradually ramp this new product to market rather than waiting for it to be cheaper than the old one. Then once your cost is lower than last gen's, convert all your fabs to the new process.
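Here is a rough sketch of that arithmetic. The wafer cost, dies per wafer, ASP, and the prior node's cost per die are all hypothetical; only the 40%/60% yield points follow the example above.

```python
# Illustrative numbers only: profitable at ~40% yield, cheaper per die than
# the mature prior node at ~60% yield.

def cost_per_die(wafer_cost, dies_per_wafer, die_yield):
    """Cost of one good die = wafer cost spread over the good dies."""
    return wafer_cost / (dies_per_wafer * die_yield)

wafer_cost     = 3000.0   # $/wafer on the new node, hypothetical
dies_per_wafer = 1500     # 4GB LPDDR5 dies per wafer, hypothetical
asp_per_die    = 6.00     # $ selling price, hypothetical
prior_node_cpd = 3.50     # $ cost per die on the mature prior node, hypothetical

for die_yield in (0.40, 0.60, 0.80):
    cpd = cost_per_die(wafer_cost, dies_per_wafer, die_yield)
    print(f"yield {die_yield:.0%}: cost/die ${cpd:.2f}, "
          f"margin/die ${asp_per_die - cpd:.2f}, "
          f"{'cheaper' if cpd < prior_node_cpd else 'pricier'} than prior node")
```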
IF Intel consistently runs at lower yield than TSMC when they start their ramp, I think that has more to do with business model than with CPUs letting Intel get away with a lower standard. If the design is done and the yield is good enough to make a profit once you start ramping, you might as well pull the trigger and have a faster ramp. At the end of the day, for an IDM the end product is the chip, not the wafer; for a foundry, it is the wafer. Put another way, I wouldn't call it lower early yields, but rather starting production earlier in the process development lifetime. It may sound like a small distinction, but it feels significant to me. One implies failure and the other indicates different priorities.
With that said, I don't suspect that Intel ramps their process at a defect density (DD) that is much (if at all) higher than where TSMC starts. Over the past 3 decades, Intel has demonstrated processes with high yields out of the gate on numerous occasions. And that is with Intel's lead product almost always being more complex than TSMC's lead product (die size, layout complexity, HD and HP logic cells in the same chip, number of metal layers, MIM, high-frequency operation, etc.). In these kinds of circumstances, if Intel has the same DD on its lead product as TSMC has on theirs, Intel's yield being lower is expected. But if Intel is sitting at a similar yield, then its DD would by extension have to be lower than what TSMC is getting.
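For anyone who wants the DD-to-yield relationship made explicit, here is a sketch using the simple Poisson die-yield model Y = exp(-D0 * A). The defect densities and die areas are illustrative placeholders, not actual Intel or TSMC numbers.

```python
import math

# Poisson die-yield model: Y = exp(-D0 * A), with D0 in defects/cm^2 and
# A (die area) in cm^2. All numbers below are hypothetical.

def poisson_yield(defect_density, die_area_cm2):
    return math.exp(-defect_density * die_area_cm2)

d0 = 0.4              # defects/cm^2, hypothetical ramp-start defect density
small_lead_die = 1.0  # cm^2, e.g. a phone-SoC-sized lead product
large_lead_die = 2.5  # cm^2, e.g. a big CPU-sized lead product

print(f"same D0={d0}: small die yield {poisson_yield(d0, small_lead_die):.1%}, "
      f"large die yield {poisson_yield(d0, large_lead_die):.1%}")

# Matching the small die's yield on the large die requires a proportionally
# lower defect density: D0_large = D0_small * (A_small / A_large).
d0_needed = d0 * small_lead_die / large_lead_die
print(f"D0 needed on the large die for equal yield: {d0_needed:.2f} defects/cm^2")
```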
as you can always fuse off a core or sell it as a bin with a lower speed. Many foundry products don't allow this, so Intel needs to learn to get yields up faster than they have historically.
That is not how that works... First, different bins are variation related, as yield has zero impact on power-performance. If a die has a defect, it is just dead. That is unless you devote die area to repair circuitry, in which case you can sometimes save it by cutting the die down, depending on where the defect landed. The one part you do have right is that Intel's main product being CPUs does change the incentive structure. For a CPU or GPU, all you need to worry about is having your variability be low enough that you can get whatever SKU distribution you sell. From Intel Products' perspective, there is no added value if 90% of yielded dies make the cut to be i9s. All CCG would care about is that one wafer gives them at least as many i9s as they need (plus some extra margin to deal with slight fluctuations in the proportions of which chips are sold). Putting in effort to go way above and beyond is wasted energy that could be better spent improving yield (wafer or die) or finding cost-reduction opportunities. Whatever Cpk would be acceptable for CPU or GPU production would be insufficient for a mobile or embedded system where all units are expected to behave uniformly. Cut-down or binned parts are fine when you are AMD, Intel, or NVIDIA selling CPUs and GPUs. But if you are a systems company like Apple/Google/Sony/GM or an IDM like onsemi/TI/SK-Hynix/Sony-imaging, the same chips having highly variable power or performance is unacceptable for your end customers.
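As a hedged illustration of the Cpk/binning point, here is a toy calculation assuming a normal frequency distribution; the mean, sigma, and bin cutoffs are all hypothetical.

```python
import math

# Toy parametric-binning example. Frequency mean, sigma, and cutoffs are
# hypothetical; the point is how the same variation looks to different customers.

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 5.0, 0.15   # GHz: mean and std-dev of max frequency, hypothetical
lsl = 4.6               # slowest sellable bin's floor, hypothetical
top_bin = 5.2           # frequency needed for the top "i9-like" SKU, hypothetical

cpk = (mu - lsl) / (3 * sigma)                      # one-sided Cpk vs. the lower spec
share_top_bin = 1.0 - norm_cdf(top_bin, mu, sigma)  # dies fast enough for the top SKU
share_below_lsl = norm_cdf(lsl, mu, sigma)          # dies too slow to sell at all

print(f"Cpk vs lower spec: {cpk:.2f}")
print(f"Top-bin share: {share_top_bin:.1%}, unsellable share: {share_below_lsl:.2%}")
```

With these made-up numbers, a binning CPU vendor is perfectly happy: almost everything is sellable and the top-bin share covers the SKU mix. A single-spec mobile or embedded customer that needed every unit to hit the top frequency would see the same distribution as a yield disaster.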
The Intel CFO agrees with me. The Intel financials agree with me
No, he doesn't. Dave said Intel 7 cost to produce is less than TSMC's price (if only barely), as evidenced by Intel 7 gross margin being single-digit positive when priced against a comparable TSMC process technology. He also said that 18A wafers aren't particularly more expensive to produce than Intel 7 wafers, with ASPs 3x Intel 7 wafers. You believe, if memory serves, that Intel 7 cost is over 2x TSMC's price. And also, for some reason, that IF's operating loss cannot be explained by investment, R&D, and TWO simultaneous process ramps exceeding the meager operating income generated from poorly utilized Intel 7 fabs.
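To make that arithmetic concrete, here is a back-of-the-envelope sketch. The reference wafer price and cost ratios are hypothetical placeholders chosen only to restate the two competing claims, not actual figures.

```python
# Rough margin check; the $10k reference price and the cost ratios below are
# hypothetical, picked only to mirror the claims being compared.

tsmc_comparable_price = 10_000.0   # $/wafer, hypothetical reference price

def gross_margin(price, cost):
    return (price - cost) / price

# Claim in the post: Intel 7 cost is just under the comparable TSMC price.
intel7_cost = 0.93 * tsmc_comparable_price
print(f"Intel 7 margin at ~TSMC-comparable pricing: "
      f"{gross_margin(tsmc_comparable_price, intel7_cost):.0%}")   # single digit

# 18A: cost "not particularly more expensive" than Intel 7, ASP ~3x Intel 7.
asp_18a  = 3 * tsmc_comparable_price
cost_18a = 1.1 * intel7_cost
print(f"Implied 18A margin: {gross_margin(asp_18a, cost_18a):.0%}")

# The competing claim: Intel 7 cost over 2x TSMC price -> deeply negative margin.
print(f"Margin if cost were 2x price: "
      f"{gross_margin(tsmc_comparable_price, 2 * tsmc_comparable_price):.0%}")
```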