Nova Lake to use TSMC N2P for all but the entry configuration, according to Moore's Law is Dead

What I am trying to clarify is whether the 8P/8E part is manufactured as a single tile, or whether that compute unit is made up of smaller compute tiles.
 
We can, but that can only be done sometime in late 2026 or 2027.
Just curious: why wouldn't Intel produce smaller tiles, like one tile with 4 P-cores and another tile with 8 E-cores? I suppose they are around the same size. Then you can mix and match to form 4P+8E or 8P+16E configurations. That way you could use 18A in all your configurations if capacity allows, and use more TSMC if capacity is not enough. Or would doing things in this manner incur too much chiplet-to-chiplet latency? Just wondering.
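As a rough way to picture that mix-and-match idea, here is a small Python sketch; the tile definitions and the rule that any tiles can be freely combined are purely my assumptions for illustration, not anything Intel has announced.

```python
# Illustrative sketch of the mix-and-match tile idea above.
# The tile definitions and free-combination rule are assumptions, not Intel plans.
from itertools import combinations_with_replacement

# Hypothetical building-block tiles: name -> (P-cores, E-cores)
TILES = {
    "P4": (4, 0),   # assumed 4 P-core tile
    "E8": (0, 8),   # assumed 8 E-core tile
}

def label(combo):
    """Total core counts for a combination of tiles, as an 'xP+yE' string."""
    p = sum(TILES[t][0] for t in combo)
    e = sum(TILES[t][1] for t in combo)
    return f"{p}P+{e}E"

# Enumerate every package built from two to four tiles.
for n in range(2, 5):
    for combo in combinations_with_replacement(TILES, n):
        print(f"{n} tiles: {'+'.join(combo)} -> {label(combo)}")
```

Both configurations mentioned (4P+8E and 8P+16E) show up in the enumeration; the open question, as the reply below notes, is whether stitching the tiles together is worth the fabric complexity.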
 
Cache coherency issues: the L3 would have to be connected through some fabric, and they would have to use an interconnect to join two different tiles, with no added benefit but more complexity. It's better to make a single tile for such small dies.
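To see why that matters for such small dies, here is a back-of-the-envelope model; every latency number is an invented placeholder just to show the shape of the trade-off, not a measured figure.

```python
# Toy model: average L3 hit latency for one monolithic tile vs. two tiles
# whose L3 slices are stitched together over a die-to-die fabric.
# All numbers are invented placeholders, not measurements.

LOCAL_L3_NS = 10.0     # assumed latency to an L3 slice on the same tile
D2D_PENALTY_NS = 15.0  # assumed extra hop over the die-to-die interconnect

def avg_l3_latency(remote_fraction):
    """Average latency when some fraction of L3 hits land on the other tile."""
    local = (1 - remote_fraction) * LOCAL_L3_NS
    remote = remote_fraction * (LOCAL_L3_NS + D2D_PENALTY_NS)
    return local + remote

# Monolithic 8P+8E tile: every L3 slice is on the same die.
print(f"single tile: {avg_l3_latency(0.0):.1f} ns average L3 hit")

# Split into a P-core tile and an E-core tile sharing one striped L3:
# roughly half of the hits would have to cross the die-to-die link.
print(f"two tiles  : {avg_l3_latency(0.5):.1f} ns average L3 hit")
```

Even with generous assumptions, the split version pays a fabric tax on every cross-tile access, which is the extra complexity with no benefit being described here.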
 
Work to improve cache coherency can be seen from Lunar Lake. Considering that Panther Lake will improve on it, it should improve further in the Nova Lake generation.
 
I think I asked you this before.

The way Intel does chiplets is not the same as the way AMD does CCD/CCX, correct? AMD can add cores; Intel has to make a new chiplet.

Please correct me if I am wrong, but I understood that to mean AMD has a whole extra level of flexibility.
 
Oh, is that the case? AMD can add additional cores to the same tiles/chiplets when needed? Is that what you mean?
 
Do you really think Intel has a chance against Nvidia and AMD as a design-only house? Not to mention the dozens of start-ups gunning for Nvidia, or the cloud companies making their own chips?
One of Intel's fundamental problems is that they failed at design. Their last true product invention was the microprocessor, and that started over 50 years ago. Now that they've fallen behind AMD in general-purpose microprocessor design, the revenue-generating portion of the business is in a near free-fall. Watch for two things: a successful 18A ramp that lands them an external partner for 14A, and the performance specs on Panther Lake. The first indicates a narrow path to foundry success; the second suggests a narrow path to "prod-co" survival.
 
Yeah, but the DMs have no way to send images, so I couldn't explain it properly :ROFLMAO: :ROFLMAO:
So, AMD uses two core complexes (CCX), aka 8-core chiplets, that are identical but binned differently; that is, one CCX can hit higher clocks while the other can't hit as high a clock on any core, so they have different V/F curves. Here is how an AMD 8-core chiplet looks. An AMD desktop CPU contains 2 CCXs and 1 IO die containing the IMC.
(attached images: AMD 8-core chiplet and desktop CPU package layout)

Here is a Zen 5 CPU with two CCXs and an IOD. The problem arises when a core in one CCX has to access data from another CCX: the request has to route through the IOD (where the IMC lies) to go from one CCX to the other. This results in poor latency. I have also added a curve for the different bins of a Threadripper with 4 CCXs.

(attached image: V/F curves for the different Threadripper bins)
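To make that cross-CCX penalty concrete, here is a tiny model; the intra-CCX and cross-CCX latency values are rough placeholders I picked for illustration, not measured Zen 5 numbers.

```python
# Toy core-to-core latency model for a two-CCX part: cores in the same CCX
# communicate through the shared L3, cores in different CCXs route via the IOD.
# Latency values are illustrative placeholders, not measurements.

CORES_PER_CCX = 8
NUM_CCX = 2
INTRA_CCX_NS = 20.0   # assumed same-CCX core-to-core latency
CROSS_CCX_NS = 80.0   # assumed latency when the request hops through the IOD

def c2c_latency(core_a, core_b):
    """Latency between two cores, numbered 0..15 with 8 cores per CCX."""
    same_ccx = (core_a // CORES_PER_CCX) == (core_b // CORES_PER_CCX)
    return INTRA_CCX_NS if same_ccx else CROSS_CCX_NS

cores = range(CORES_PER_CCX * NUM_CCX)
pairs = [(a, b) for a in cores for b in cores if a != b]
avg = sum(c2c_latency(a, b) for a, b in pairs) / len(pairs)
print(f"average core-to-core latency over all pairs: {avg:.1f} ns")
```

A thread that bounces data between the two CCXs sees the higher number most of the time, which is the "poor latency" case described above.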


Now for Intel: they are based on a similar principle. They have an SoC die containing the IMC and a compute die containing the cores, but they only have a single compute die (a CCX in AMD's terms), and a shared SoC die used with the different compute dies.

(attached image: Intel compute die plus SoC die layout)


The difference between AMD and Intel is that Intel tapes out multiple compute dies, while AMD reuses the same CCX for all the desktop SKUs, varying only the binning and the number of CCXs (though they tape out different dies for mobile).
Intel uses two compute dies shared across desktop and mobile: for example, ARL-H and ARL-S both share the 6+8 die, while the HX and S series share the 8+16 die, paired with different SoC dies, one with LP E-cores and one without. Of course there is different binning for mobile and desktop. There is a difference in packaging as well: Intel uses expensive Foveros while AMD uses cheap packaging.
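The die-sharing pattern described in this post can be written down as a simple lookup; the labels below are informal and just my reading of the post, not official Intel die names.

```python
# Sketch of the compute-die / SoC-die sharing described above (my reading of the
# post, with informal labels; not official Intel naming).
COMPUTE_DIE_SHARING = {
    "6P+8E":  ["ARL-H", "ARL-S (lower SKUs)"],
    "8P+16E": ["ARL-HX", "ARL-S (higher SKUs)"],
}
SOC_DIE_SHARING = {
    "SoC with LP E-cores":    ["ARL-H"],
    "SoC without LP E-cores": ["ARL-S", "ARL-HX"],
}

for die, families in COMPUTE_DIE_SHARING.items():
    print(f"{die} compute die shared by: {', '.join(families)}")
for soc, families in SOC_DIE_SHARING.items():
    print(f"{soc} used by: {', '.join(families)}")
```

The contrast with AMD is that the CCX entry would be the same die across the whole desktop stack, with only the count and the binning varying.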

 

No, it's not a whole extra level of flexibility; it's just a difference in the concept of a chiplet. In terms of maturity, AMD is more mature.
 
If it matures, I think Intel will be able to widen the bandwidth of its chiplet connection method. Intel uses a silicon-interposer style of packaging, so I think there is room for growth.
 
So what are the pros and cons of Intel's chiplet implementation scheme vis-à-vis AMD's?
 
Both have their advantages and disadvantages, but Intel can keep costs down by sharing parts with its highest-volume laptop products.
The disadvantage is that its approach is less mature than AMD's chiplets, since it has only recently gotten into full swing.
The packaging costs are also a bit higher than AMD's.
Well, with mass production the cost issue may be offset.
It's just my opinion.
 
It's only been a short time since Intel introduced chiplets in earnest, so it's just immature compared to AMD.
Technically, though, Intel has a lot of interesting things going on.
 
AMD's biggest pros are binning, cost, and IP reuse.
Intel's is mostly being able to mix and match different IP, so more combinations are possible, but not cost: more chiplets cost more, especially on different nodes. Intel kind of botched their chiplet implementation as well.
 