
The RISC vs. CISC debate Redux

I recently came across an interesting technical article with a current take on the infamous RISC vs. CISC instruction set architecture debate, which has been ingrained in the computer science field for over 25 years. A link to the article is here.

The paper provides a good background on the origins of the debate, followed by a thorough analysis of several x86 and ARM designs targeted at different power/performance tradeoffs. The authors try to normalize their results across these different targets.

Their conclusion is that advances in computer architecture and compiler optimization technology over time have rendered the ISA debate rather moot. For general workloads, they assert that the specific ISA is not the key differentiator – other microarchitectural details are much more important in establishing the power/performance/cost design point. They highlight the use of “micro-ops” to execute complex instructions, the I-cache architecture in current core designs, etc.

Would you agree? Although x86 is no doubt the best example of a current CISC implementation, is their analysis skewed by the omission of MIPS, Power, and SPARC from their RISC repertoire?

This conclusion sheds an interesting light on the “ARM-based (micro)servers will naturally be more efficient” marketing pitch, IMHO.

-chipguy
 
One goal for the CPU architect is to match the memory cycle time to the instruction execution time so that neither side is ever idle. Back in the '60s, when it took X microseconds to read the next instruction from drum memory, you used CISC instructions to do as much work as possible in that time span.

As memory grew faster, RISC was used to keep the instruction execution time within the memory access time.
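As a back-of-the-envelope illustration of that balancing act (the timings below are invented purely for the example, not taken from any real machine), you can ask how much work one instruction has to bundle so the CPU stays busy for a whole instruction fetch:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical timings, for illustration only. */
        double fetch_us = 10.0;  /* time to read the next instruction from drum memory */
        double op_us    = 2.0;   /* time to execute one simple operation */

        /* Simple operations one instruction should bundle so the CPU is not
         * idle while the next instruction is being fetched. */
        printf("bundle ~%.0f operations per instruction\n", fetch_us / op_us);

        /* As fetch_us approaches op_us, the ratio falls toward 1 and simple,
         * single-operation (RISC-style) instructions keep the two sides balanced. */
        return 0;
    }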

Back in the early '90s, Intel came out with the i486DX2, which decoupled the CPU's clock from the I/O clock. The CPU could now run faster than the bus clock that determined the rate at which instructions entered the chip.

Internal RAMs such as caches gave architects further options beyond just RISC vs. CISC for tuning performance.

In short, you're right: the ISA really doesn't matter any more.

That's why you should check out riscv.org. Those guys have figured it out.
 
I don't think it matters if you omit MIPS and other RISC architectures, as any differences will be second-order effects at most.
 
Of course, the instruction set still matters, but is it salient?

Jim Keller, most recently of AMD, commented on how they were able to achieve better efficiency with the ARM instruction set. The video is on YouTube, so anyone can see it. He'd know more than just about anyone on the subject, since he was heading both ARM and x86 processor development, unlike anyone at any other company.

CISC made sense when memory was super expensive, and many instruction sets were even designed with a specific programming language in mind. Back in the 1970s, when the 8086 was developed, memory was extremely expensive, and processors were quite limited in how much memory they could address. Although not compatible with the 8085, the 8086 instruction set was based on it, and the 8085/8080 could only see 64K. The 8086 could see 1M, but only in 64K segments (which again saved memory, since you didn't need to store a 20-bit address, just a 16-bit offset that was then combined with the segment register in a somewhat convoluted addition).
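To make that segment arithmetic concrete, here is a minimal C sketch of 8086 real-mode address formation; the example segment and offset values are arbitrary, chosen only for illustration:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode 8086 addressing: the 16-bit segment register is shifted left
     * by 4 bits (i.e. multiplied by 16) and added to a 16-bit offset,
     * producing a 20-bit physical address (1 MB range) while pointers stored
     * in code and data stay 16 bits wide. */
    static uint32_t phys_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Arbitrary example: segment 0xB800, offset 0x0010 -> 0xB8010. */
        printf("0x%05" PRIX32 "\n", phys_addr(0xB800, 0x0010));
        return 0;
    }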

CISC was a far superior solution when the primary issue was memory size: a single register-memory instruction does work that a load/store RISC needs two or three fixed-width instructions to express, so the same program fits in fewer code bytes. Memory speed has increased extremely slowly, but capacities have grown enormously.

Anyone who thinks the ISA makes no difference doesn't understand how x86 processors (besides the lowly Atom, which is too poor to be relevant) work. They do NOT execute x86 instructions directly. Those instructions have to be decoded into RISC-ops or micro-ops, depending on the company, which are then dispatched to be executed. The decoders obviously take up transistors, use power, and add pipeline stages that increase the branch-mispredict penalty, lowering performance. The result is inescapable: x86 processors are slower, use more power, and are larger than they would be if they had a more efficient instruction set. The penalty, however, is now quite small on each count, maybe a couple of percent at this point, so it isn't salient by any means. It's far less important than many other factors in the processor, including process technology. But it is there.
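Purely as a conceptual sketch (this mirrors the textbook description of micro-op cracking, not any particular vendor's decoder), the three statements below correspond roughly to the load, ALU, and store micro-ops emitted for a single register-memory add such as add dword ptr [counter], eax:

    #include <stdio.h>

    /* Conceptual sketch of micro-op cracking. On x86, "*counter += step" can
     * compile to one register-memory instruction, yet the core internally
     * executes it as the simpler steps below. */
    static void cisc_style_add_to_memory(int *counter, int step)
    {
        int tmp = *counter;   /* micro-op 1: load the operand from memory */
        tmp += step;          /* micro-op 2: ALU add on registers */
        *counter = tmp;       /* micro-op 3: store the result back to memory */
    }

    int main(void)
    {
        int hits = 41;
        cisc_style_add_to_memory(&hits, 1);
        printf("%d\n", hits);   /* prints 42 */
        return 0;
    }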

Put another way, can anyone really believe that an instruction set designed in 1978, for an extremely different workload scenario and with extremely different design goals, works as well as one designed a few years ago with much better insight into current technologies and workloads?

And even if one gets that answer wrong, does anyone really think they know more than Jim Keller, who has designed processors based on both instruction sets? He has a level of understanding none of these researchers even approaches. It's like thinking a taster knows more about the ingredients in a dish than the cook himself.
 
Put another way, can anyone really believe that an instruction set designed in 1978, for an extremely different workload scenario and with extremely different design goals, works as well as one designed a few years ago with much better insight into current technologies and workloads?

The AMD64 instruction set was not designed in 1978, and it improves on the 32-bit instruction set.
 