I don't think it's a reasonable conclusion to draw that, because RISC architectures, namely Arm, won the mobile market, RISC is an inherently superior architecture to CISC. As with most architecture strategies, a large part of success is the design choices you make and the technical quality of the implementation. Intel's x86 CISC starts with a concept that is harder to implement and less power efficient: variable-length instructions. Variable-length instructions were a cool optimization when transistors, especially for caches, were precious and expensive, but they make instruction decoding much harder. And the CISC strategy of specialized instructions eventually gets out of hand when you think CPUs are the center of the universe (IMO, because engineers think it's the pinnacle of accomplishment to get a new instruction approved). Of course, the more specialization you put in circuitry, the more power is consumed, and before you know it you need, voila, microcode engines (talk about RISC...) to implement a bloated instruction set that isn't practical to commit to state machine logic.
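To make the decode point concrete, here's a minimal sketch (toy C, covering only three opcodes I'm sure of - nop, ret, and mov r32, imm32 - nothing resembling a real x86 decoder): with variable-length instructions you can't even find instruction N+1 until you've worked out how long instruction N is, whereas a fixed-length ISA like AArch64 lets the front end fetch and decode at fixed 4-byte strides in parallel.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Toy length decoder for a tiny subset of x86-64. A real decoder also has
 * to handle prefixes, ModRM/SIB bytes, displacements and immediates, and
 * instructions can run anywhere from 1 to 15 bytes. */
static size_t x86_insn_length(const uint8_t *p)
{
    if (*p == 0x90 || *p == 0xC3) return 1;   /* nop / ret                */
    if (*p >= 0xB8 && *p <= 0xBF) return 5;   /* mov r32, imm32           */
    return 0;                                 /* not handled by this toy  */
}

int main(void)
{
    /* mov eax, 1 ; nop ; ret  ->  5 + 1 + 1 bytes */
    const uint8_t code[] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0x90, 0xC3 };
    size_t off = 0;

    /* The serial dependency: instruction N+1's start address is unknown
     * until instruction N has been decoded far enough to know its length.
     * On AArch64 every instruction is 4 bytes, so the next start address
     * is always known and wide parallel decode is straightforward. */
    while (off < sizeof code) {
        size_t len = x86_insn_length(code + off);
        if (len == 0) break;
        printf("instruction at offset %zu is %zu byte(s) long\n", off, len);
        off += len;
    }
    return 0;
}
```

Real x86 front ends work around this with things like predecoding and uop caches, which is exactly the kind of extra circuitry and power a fixed-length ISA doesn't need.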
In the last decade, because Intel wanted to lock Microsoft into its x86 instruction set strategy (remember that for many years, by microprocessor sales volume, Windows PCs were the only market that mattered), and because the Wintel partnership was the most profitable in computer systems history, nothing - absolutely nothing - was allowed to rock that boat. Intel's fabrication lead was also an important factor in keeping its relatively clunky x86 architecture alive longer than technical reasons alone could justify. As we all know, business reasons trump technical reasons every time, and they should, but to succeed long-term you have to know when the technical risks are piling up, or the market is changing so much, that you need to revisit your assumptions and strategies.
Many senior executives at Intel didn't believe that tablets and smartphones were going to be more important than desktops and laptops. Many worried that mobile devices would have lower margins than PC and server CPUs, and that Intel should focus its fabulous fabs on the high-margin markets. Remember, Intel allocated fab capacity by projected product gross margin. Were these decisions a failure of imagination? Yeah, but Intel had and has a corporate environment where failure is not a valuable learning experience; it often ends careers, so conservative decision-making is the norm.
Of course, let's not forget Intel's biggest CPU fumble, Itanium, which was intended to be the 64-bit server processor of the future, while x86 was to remain 32-bit for the foreseeable future. Who would ever need a PC with more than 4GB of memory? And Itanium, a VLIW design... was that the 64-bit future intended for general-purpose applications? Really? Designed by HP? Ridiculous. "Databases are Itanium forever." Uh-huh.
"Intel Corp, which has long dominated computer microprocessor market it created in 1971, does abrupt about-face, announcing it will follow lead of its much smaller rival Advanced Micro Devices by building 64-bit capability into its most popular chips; strategy reversal is setback for Intel, which..." (www.nytimes.com)
Unbelievable.
For a long time Intel's x86 CISC CPUs were winning because no other company could match Intel's R&D investments in CPU design and fabrication, and computing was very CPU-oriented. Now transistors are more plentiful and cheaper, the markets are bigger, and CPUs increasingly just run mainline application logic while the difficult application-specific work is offloaded to different kinds of processors (GPU, AI, etc.) and accelerators (security, virtualization, compression, video processing, network protocols, etc.). Getting all of this stuff to work together is more important, and a bigger win, than worrying about RISC versus CISC. That's why everyone is talking about CXL and UCIe. IMO, RISC is taking over because it's good enough, and now there's a hell of a lot more software in the world than Windows and databases.
CXL is also important because DRAM has been getting an increasing share of wallet in servers for years, and if you're a cloud data center, one of your biggest efficiency challenges is turning the DRAM sequestered inside individual servers into a shareable datacenter resource. CPUs just aren't as important as they used to be, so why not use an easier-to-implement strategy and focus on where the new wins are.
I hope for Intel's sake that Gelsinger isn't stuck in the past.