With respect to the DC takeover by the big DC companies vs. purchasing Intel/AMD x86... possibly true. There are significantly fewer apps to port and optimize in this space than in client (PC and mobile).
For the cloud data centers, the cloud companies own the entire stack, including the OS distributions, which puts software-stack porting and tuning entirely under their control.
In client, businesses still make up the lion's share of x86 volume, I believe. Asking every company around the world to get all their apps working on something other than x86 will be a long journey, so I think this market is secure for Intel/AMD for at least another 10+ years.
Yup, by unit volume there's no comparison: client CPUs are a multi-hundred-million-unit market, while server CPUs are tens of millions per year, certainly fewer than 50 million, even including proprietary CPUs.
Still, losing the lucrative DC market would be a huge blow to both Intel and AMD.
Very much so.
With respect to this line of thought I do have a question though. In what way is x86 inferior to ARM in DC? Seems like there isn't an ARM out there that is competitive in this market yet.
x86 CPUs aren't inferior to Arm CPUs in any way except possibly power consumption, but "competitive" means something different when the CPU is custom-designed for the customer. The benchmarks that AMD and Intel like to tout are mostly for marketing purposes. When the customer has a target application in mind and designs and develops the software themselves, memory and I/O systems matter more to most applications than instructions per clock or clock speed. Intel and AMD talk about those things because they're trying to differentiate cores, and they have to compromise on caches and memory systems because they're building one-design-fits-many products. If you're in charge of running EC2, which consumes a huge number of Annapurna CPUs per year, you only care about overall application price-performance, response times, power consumption, reliability, and support for the features that matter most to EC2.
Furthermore, there are issues with the "lots of little efficient cores" strategy. A lot of data-center software is licensed annually per core, which is why very powerful cores are chosen for most applications. And x86's first decode stage just translates the variable-length CISC instructions into fixed-length, RISC-like micro-ops, so I don't see a fundamental decode advantage for Arm in this space. Also, features like SMT appear to be lacking in Arm cores to date.
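To make the per-core licensing point concrete, here's a toy calculation. All the numbers (license fee, throughput targets, per-core performance) are made up for illustration; the only point is that the license bill scales with core count, not with per-core speed:

```python
import math

def annual_license_cost(perf_per_core, license_per_core, target_perf):
    """License cost to hit a throughput target: fee scales with core count,
    so fewer, faster cores mean a smaller bill for the same total work."""
    cores_needed = math.ceil(target_perf / perf_per_core)
    return cores_needed * license_per_core

LICENSE_PER_CORE = 5000    # $/core/year -- hypothetical
TARGET_THROUGHPUT = 1000   # arbitrary workload units -- hypothetical

# Few big cores: 50 units/core -> 20 cores -> $100,000/year
big = annual_license_cost(50, LICENSE_PER_CORE, TARGET_THROUGHPUT)

# Many small cores: 20 units/core -> 50 cores -> $250,000/year
small = annual_license_cost(20, LICENSE_PER_CORE, TARGET_THROUGHPUT)
```

Same delivered throughput, 2.5x the license bill for the many-small-cores box, which is why per-core-licensed workloads favor the fattest cores available.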
Arm Neoverse cores can be impressive enough; that's how Ampere won Oracle over. Ampere now has an Arm architecture license for the instruction set and is designing its own custom cores, which it claims are even more impressive.
It's also a common misconception that Arm cores never use variable-length instructions; some do (Thumb-2 mixes 16-bit and 32-bit encodings). And Arm has complicated up its designs to compete with x86, with things like vector operations. If you want more RISC-like designs, I think you need to stick to the Cortex IP, but I'm not an Arm expert.
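As a concrete illustration of the variable-length point: in the Thumb-2 (T32) instruction set, an instruction's width is determined by the top five bits of its first halfword, per the Arm Architecture Reference Manual. A quick sketch (the function name is mine; the encoding rule is Arm's):

```python
def t32_length_bytes(first_halfword: int) -> int:
    """Return the byte length of a T32 (Thumb-2) instruction given its
    first 16-bit halfword. Top-5-bit patterns 0b11101, 0b11110, and
    0b11111 flag a 32-bit encoding; everything else is 16-bit."""
    top5 = (first_halfword >> 11) & 0x1F
    return 4 if top5 in (0b11101, 0b11110, 0b11111) else 2

t32_length_bytes(0x4770)  # BX LR -> 2 (a 16-bit instruction)
t32_length_bytes(0xF000)  # first halfword of a BL -> 4 (a 32-bit instruction)
```

So a T32 decoder does face mixed 16/32-bit widths, though the length check is far simpler than x86's 1-to-15-byte instruction-length determination.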
Most Arm cores don't support hardware multithreading, and I doubt they will in the near future.