
Will the "hybrid" core approach be used in the future to minimize excess x86 baggage?

Xebec

Well-known member
Modern x86 cores can still run/do a lot of things that are no longer really needed for the vast majority of computing. Examples include 16-bit x86 code, legacy BIOS/boot methods, obsolete memory protection schemes, etc. I presume these items are replicated across all cores in a modern multicore CPU, which means that even on a 128-core Epyc chip, these functions all exist 128x.

Is it possible we will see a situation where future CPUs come with a small number of 'compatibility cores', while the remaining cores lack a vast amount of legacy? (Is that already in place, where some of the functions are handled 'at the chip level' but not integrated with each core?) (And do the E-cores eschew some legacy today?)

I recognize designing and qualifying unique core types is not cheap...

Thanks!
 

Here are tweets from guys at Intel and AMD clarifying the issue; even Jim Keller has said the ISA doesn't matter and that it has become a myth.

 
Thanks - at the hardware level does that literally mean that x86-16 instructions take almost no die space then? Like less than 1 mm²?
 
Given that Intel CPUs use microcode engines as the instruction processing hardware for most(?) operations, not state machines, I suspect the incremental die space per core for legacy operations is close to zero.
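To illustrate what I mean (purely a conceptual sketch in Python, not Intel's actual microcode; the micro-op names and the table are made up for illustration): complex or legacy instructions get expanded into sequences of simpler micro-ops held in a ROM table, so keeping another rarely-used instruction alive mostly costs a table entry rather than dedicated datapath logic.

    # Conceptual sketch only -- the mnemonics are real x86 instructions,
    # but the micro-op names and this table are invented for illustration.
    MICROCODE_ROM = {
        "AAA":   ["adjust_al_for_bcd", "update_flags"],            # legacy BCD adjust
        "ENTER": ["push_rbp", "mov_rbp_rsp", "sub_rsp_by_frame"],  # stack-frame setup
    }

    def decode(mnemonic):
        # Common instructions would take a fast hardware-decoded path;
        # complex/legacy ones fall back to their ROM sequence.
        return MICROCODE_ROM.get(mnemonic, ["hw_decoded_" + mnemonic.lower()])

    print(decode("AAA"))  # ['adjust_al_for_bcd', 'update_flags']
    print(decode("ADD"))  # ['hw_decoded_add']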

Intel used to have an initiative called X86S, which had the objective of removing some support for 16-bit operations and simplifying 32-bit execution, but for some reason it terminated the initiative. I suspect because it would have reduced design reuse at a critical time for Intel, but I really don't know.


I think Intel's biggest CPU design issues are not in the cores, they're in the interconnects (like Ring Bus). Legacy instruction set issues are probably not in their top 10 design problems.
 
It was rumored to be intended for the Royal Core project, which was apparently killed.
 
Even less than that, I think; barely 0.1 mm², considering how dense the nodes have become. It's a rounding error in terms of transistor cost, but not in terms of validation effort.
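Rough arithmetic behind that claim (the density figure below is an assumed round number, just to get the order of magnitude):

    # Back-of-envelope only; the transistor density is an assumption for a
    # leading-edge logic node, and 0.1 mm^2 is the per-core guess above.
    density_tr_per_mm2 = 120e6   # assumed ~120 MTr/mm^2
    legacy_area_mm2    = 0.1     # per-core area guess
    cores              = 128     # e.g. a 128-core Epyc-class part

    print(f"~{density_tr_per_mm2 * legacy_area_mm2 / 1e6:.0f}M transistors per core")
    print(f"~{legacy_area_mm2 * cores:.0f} mm^2 total across {cores} cores")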

x86 decoders are really big and complex; I once glanced at the AMD Bulldozer decoders.

[Attached image: AMD Bulldozer decoders]
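Part of why they are big: x86 instructions are variable length, so a decoder cannot even find where instruction N+1 starts until it has worked out the length of instruction N. A toy illustration (the length table is a tiny stand-in, not the real encoding rules):

    # Toy boundary-finder: a hard-coded subset of opcode lengths just to show
    # the serial dependence; real x86 length decoding is far messier
    # (prefixes, ModRM, SIB, displacements, immediates).
    LENGTH_OF = {0x90: 1, 0xB8: 5}   # NOP = 1 byte, MOV EAX,imm32 = 5 bytes

    def find_boundaries(code):
        offsets, pc = [], 0
        while pc < len(code):
            offsets.append(pc)
            pc += LENGTH_OF.get(code[pc], 1)  # must know this length to find the next start
        return offsets

    print(find_boundaries(bytes([0x90, 0xB8, 1, 0, 0, 0, 0x90])))  # [0, 1, 6]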


In addition, I cannot fathom how they keep the instruction scheduling constraints of x86, and its memory model, from further complicating the rest of the front-end. So the decoder size penalty should also be counted together with 20%-30% of the remaining front-end area.

What Keller said means that fixing x86's congenital deformities is a very well understood, and more or less solved, problem at AMD and Intel.

If you don't need to solve this problem at all, you are still better off. I am not talking about stuff like 16-bit math, which is trivial to emulate, but about the overall architectural assumptions.
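For what "trivial to emulate" means in practice, a minimal sketch: 16-bit arithmetic on wider hardware is just a mask plus recovering the carry from the full-width result.

    # Minimal sketch of emulating a 16-bit unsigned add on wider hardware:
    # compute at full width, wrap the result to 16 bits, recover the carry.
    def add16(a, b):
        full = (a & 0xFFFF) + (b & 0xFFFF)
        return full & 0xFFFF, full >> 16   # (result mod 2^16, carry-out)

    print(add16(0xFFFF, 1))  # (0, 1): wraps to zero with carry set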

ARM's mushrooming instruction set may not require much silicon today, but its overall architectural choices will limit the ISA in the future too.
 
I meant the area required for the legacy 16-bit support and such, not the decoders; the decoders are fairly large.
One way out of the decoder problem is clustered decoding, as used in Intel Atom and Zen 5.
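A conceptual sketch of the clustered idea (purely illustrative; the real hardware splits the fetch stream on things like taken-branch boundaries and predecode marks, not a Python list): instead of one very wide serial decoder, the stream is cut into chunks and each chunk goes to its own narrower decode cluster.

    # Conceptual toy of clustered decoding, not how Gracemont or Zen 5 actually
    # split work: cut the instruction stream at (toy) branch markers and hand
    # each chunk to its own narrow decode "cluster".
    def split_at_branches(stream):
        chunks, current = [], []
        for insn in stream:
            current.append(insn)
            if insn.startswith(("jmp", "jcc")):
                chunks.append(current)
                current = []
        if current:
            chunks.append(current)
        return chunks

    def decode_cluster(chunk):
        return ["uop(" + insn + ")" for insn in chunk]

    stream = ["mov", "add", "jcc L1", "sub", "jmp L2", "xor"]
    for cluster_id, chunk in enumerate(split_at_branches(stream)):
        print(cluster_id, decode_cluster(chunk))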
 

Did Intel stop supporting 16-bit natively with Lunar Lake?
 