Assuming Broadcom was a customer, it would have received PDK 1.0: https://hwbusters.com/news/intels-1...rrow-lake-are-canceled-for-foundry-customers/
So PM's comment suggests they got the bad results with an older PDK.
Interesting - so this is saying the narrative wasn't that Pat lost the 40% discount, it was really that he was unable to regain it after it expired?

> "TSMC (2330) has seen strong demand for its advanced manufacturing processes. It is reported (private source) that TSMC has let the original discount of up to 40% on the 3nm process foundry price expire, and asked Intel to pay the original (current) price instead. This will help TSMC's subsequent profits soon."
It appears to me that, as anticipated, Pat was unsuccessful in the price-negotiation mission, which further strained Intel's outsourcing costs shortly after the grace period granted by TSMC (a quarter, or a few months). This, I can see, was likely the final tipping point for the Board of Directors to make the decisive judgment to find a CEO replacement.
Why is Pat Gelsinger commenting on Intel-related issues one week after he retired?

> Assuming Broadcom was a customer, it would have received PDK 1.0: https://hwbusters.com/news/intels-1...rrow-lake-are-canceled-for-foundry-customers/
> So PM's comment suggests they got the bad results with an older PDK.
Defending his honor, at the very least. Effectively these stories were dragging his credibility through the mud, as he was the one communicating updates on 18A node readiness.

> Why is Pat Gelsinger commenting on Intel-related issues one week after he retired?
Looked and couldn't find the reference. It was a chart showing which chips were to be produced on 18A and when, as well as expected relative volume.

> no qualcomm part anytime soon... where did you see that?
The readiness report shows 50 items that need to be fixed... just like Naga said. And they are probably at the "we are finding new issues faster than we are solving old ones" stage, due to ramping on new tools.
No production parts will be shipped until 2H 2025 at the earliest... which is as expected.
You have all the cool toys.

> Any mention of yield for a process should come along with the die size of the chip! Based on the leaked Panther Lake tile size of 8 mm x 14.288 mm and Pat's statement of D0 < 0.4 in September, the roughly calculated yield is 65%. Obviously there is more to it in reality, but at 10%, that must be a big die!
> SemiAnalysis Die Yield Calculator (semianalysis.com): "Experiment with Semiconductor Die Yield, all from the comfort of your browser."
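For reference, the rough 65% figure above can be reproduced with the standard textbook yield models. This is a sketch using the classic Poisson and Murphy formulas, not whatever the SemiAnalysis calculator actually implements; the die dimensions and D0 are the leaked/claimed numbers from the post, not confirmed figures:

```python
import math

def poisson_yield(area_mm2, d0_per_cm2):
    """Poisson die-yield model: Y = exp(-A * D0), with A in cm^2."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * d0_per_cm2)

def murphy_yield(area_mm2, d0_per_cm2):
    """Murphy die-yield model: Y = ((1 - exp(-A*D0)) / (A*D0))^2."""
    ad0 = (area_mm2 / 100.0) * d0_per_cm2
    return ((1.0 - math.exp(-ad0)) / ad0) ** 2

# Leaked Panther Lake tile size (8 mm x 14.288 mm ~ 114.3 mm^2)
# and Pat's "D0 < 0.4" (defects/cm^2) statement:
area = 8.0 * 14.288
print(f"Poisson: {poisson_yield(area, 0.4):.1%}")  # ~63%
print(f"Murphy:  {murphy_yield(area, 0.4):.1%}")   # ~64%
```

Murphy's model lands almost exactly on the quoted 65%, which suggests that (or something close to it) is the formula behind the calculation.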
SRAM is the easiest to yield due to redundant logic, and it's only 256 Mbit; a complex chip gives us a better idea.

> You have all the cool toys.
> Thanks for the analysis.
> If the defect rate is also dependent on the libraries used and (I am guessing) the mix of different types of gates used, then how can we know from Intel's information what the conditions of the reported D0 were?
> TSMC's reported 80-90% yield on 256 Mbit SRAM for N2 seems a little less arbitrary, but potentially less useful, since I am guessing it is much easier to yield SRAM than it is to yield high-speed logic (someone correct me if I am mistaken).
> Using a bit of SWAG, I am guessing that a 256 Mbit SRAM on N2 would be around 6 mm², so 80-90% would still be a significant defect rate, correct?
Intel announced Qualcomm was a customer in 2021. Nothing happened; there was no chip. It was Intel's mistake in claiming Qualcomm as a customer, not yours.

> Looked and couldn't find the reference. It was a chart showing which chips were to be produced on 18A and when, as well as expected relative volume.
It sucks getting old.
Automotive is the same. Some issues can be easily addressed within the assembly plant. Other issues require minor redesign tweaks that have to wait until the mid-year refresh. Some require a big change in design; these have to wait until the next big platform release (which happens about every 7 years).

> Most companies during development benchmark SRAM non-redundant yields, add in a couple of rows and columns for each block, and allow swapping. Turning that on makes little sense in development, but for a real product it is important, and it is often tracked as Bin1 versus Bin2, which matters for yield and DPM.
Actual yield, whatever the number, is meaningless unless one knows the die size and the process it is run on. I know the Blackwell and Hopper dies yield a very low percentage due to die size; what goes into a yield percentage relies on knowing a lot of things most people in the press have no clue about!
All yield starts low and improves. Sometimes the yield improvement for random defects follows well-known trends; other times it can make big jumps. The big question at every stage is what the Pareto is, and whether the company knows the Pareto drivers and has fixes for the items on the yield Pareto. At early stages most of the losses are systematic, and the question is whether they have fixes for the systematics and when those are coming.
18A yields are fine at Intel for where they are in the development process (still early). Naga gave the correct and accurate answer.

> Automotive is the same. Some issues can be easily addressed within the assembly plant. Other issues require minor redesign tweaks that have to wait until the mid-year refresh. Some require a big change in design; these have to wait until the next big platform release (which happens about every 7 years).
Regardless, the plant management lives and dies by the reporting system. Comparisons between "this shift" and "last shift" are real-time. This week vs last week, etc, etc, help management track the effectiveness of their process improvements.
God have mercy on your soul if you are responsible for one of the "red light" items. You get an all-expenses-paid trip to the plant manager's office and get to stay in the plant (such a lovely place, by the way) until things are back under control.
Key questions to know down cold before seeing the plant manager:
1) How many were affected before the issue was contained?
2) When was the problem contained? (ICA, "Interim Containment Action")
3) What was the root cause?
4) When will the root cause be addressed? (PCA, "Permanent Corrective Action")
Worst answer of all time? "I don't know" followed by "I can't tell you when I'll know".
Plant managers have absolutely no sense of humor.
I spent most of my career in manufacturing. It is a high-pressure, thankless, and brutal existence. One "aw sh!t" erases 100 "atta-boys".
My hat is off to those of you that still live this existence!
> SRAM is the easiest to yield due to redundant logic, and it's only 256 Mbit; a complex chip gives us a better idea.
That doesn't seem intuitive (I am not doubting that what you are saying is true, though). Do you have reasoning why SRAM would be harder to yield than logic?

> That was not my experience when I worked for an SRAM company back in the CMOS days. SRAM was much more challenging to yield than logic. With FinFETs, variability made it even harder. After SRAM I worked for an EDA company that made high-sigma simulation tools for SRAM. TSMC, Apple, QCOM, and Nvidia were early customers. On the leading edge, constraints on transistor matching and variability are a serious challenge, which is why foundries use SRAM on early test chips for process ramp.
SRAM comes in many varieties. If you look at most offerings, there are HD and HC cells as well as multiport.

> That doesn't seem intuitive (I am not doubting that what you are saying is true, though). Do you have reasoning why SRAM would be harder to yield than logic?
On the surface, SRAM doesn't need to clock nearly as high. One would think that this alone would make it much easier to get a good yield on a new process.
On the other hand, the fact that it is a very simple circuit that isn't clocked as high as a compute tile makes it much easier to compare across many different nodes.
Still, one could imagine a process that could yield SRAM better than another process, but not be able to yield a complex compute tile as well.
Please elaborate on why this is not true.
Thanks!
OT, but I'd appreciate hearing a little more about this.

> That was not my experience when I worked for an SRAM company back in the CMOS days. SRAM was much more challenging to yield than logic. With FinFETs, variability made it even harder. After SRAM I worked for an EDA company that made high-sigma simulation tools for SRAM. TSMC, Apple, QCOM, and Nvidia were early customers. On the leading edge, constraints on transistor matching and variability are a serious challenge, which is why foundries use SRAM on early test chips for process ramp.
Sure, you are quite right: "it is much easier to yield SRAM than it is to yield high-speed logic". If desired, given the 80-90% SRAM yield rate, you can compute the equivalent defect density (D0) of the full process on N2 using the "cool toys" mentioned above. You can see that even a 90% SRAM yield would not reach the "SWAG" bar for allowing risk production on N2 yet.

> You have all the cool toys.
> Thanks for the analysis.
> If the defect rate is also dependent on the libraries used and (I am guessing) the mix of different types of gates used, then how can we know from Intel's information what the conditions of the reported D0 were?
> TSMC's reported 80-90% yield on 256 Mbit SRAM for N2 seems a little less arbitrary, but potentially less useful, since I am guessing it is much easier to yield SRAM than it is to yield high-speed logic (someone correct me if I am mistaken).
> Using a bit of SWAG, I am guessing that a 256 Mbit SRAM on N2 would be around 6 mm², so 80-90% would still be a significant defect rate, correct?
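The "equivalent D0" computation described above can be sketched by inverting the simple Poisson yield model, Y = exp(-A·D0). This is an assumption about which model is meant (the calculator may use Murphy or another formula), and the 6 mm² area is the poster's own SWAG, not a published figure:

```python
import math

def implied_d0(yield_frac, area_mm2):
    """Invert the Poisson yield model Y = exp(-A * D0) to get the
    defect density D0 (defects/cm^2) implied by an observed die yield."""
    area_cm2 = area_mm2 / 100.0
    return -math.log(yield_frac) / area_cm2

# 6 mm^2 is the poster's SWAG for a 256 Mbit SRAM test die on N2
for y in (0.80, 0.90):
    print(f"SRAM yield {y:.0%} -> implied D0 ~ {implied_d0(y, 6.0):.1f}/cm^2")
```

On such a tiny die, even 90% yield implies a D0 of roughly 1.8/cm² — several times the sub-0.4/cm² figure discussed earlier in the thread — which illustrates the poster's point that 80-90% on a small SRAM is still a significant defect rate.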