You're very amusing. Itanium silicon design engineering here. A couple of questions to ask yourself.
1. Do you really think Intel would let HP design a VLIW CPU if it was intended to replace Intel's own x86 product line? Think hard.
Hint: Don't you think it's more likely Intel let HP design Itanium so they would dump the PA-RISC CPU development program and use Itanium?
2. Let's say Intel did intend to carve out a chunk of IBM's mainframe business with Itanium... where would the software stack have come from? Everything from the OS to the storage systems to the transaction-processing software and databases would have had to be IBM-compatible to bring the applications along. HP? IBM itself? Oracle... in 2001? (I think none of the above.)
You're being silly.
Let's see... who's worth $100B++ and who isn't?

Ellison had a plan. Java and its derivatives have been wildly popular, especially with applications developers. Ellison also acquired app companies... remember PeopleSoft? FWIW, I'm not fond of Java either, but I was never in the target developer population.
The only VLIW implementation I'm aware of in production is from Kalray, but I haven't looked lately. I think offloading instruction-level parallelism to compilers, to compete with superscalar hardware, is a losing strategy.
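To make that concrete, here's a minimal C sketch of why static scheduling fights an uphill battle (my own illustration; `gather_sum` is a hypothetical function, not from any real VLIW toolchain). The latency of each load depends on run-time cache behavior the compiler can't see, so a static schedule has to bake in one latency guess, while out-of-order hardware discovers the real latency as it happens and keeps multiple misses in flight:

```c
#include <stddef.h>

/* Indirect gather: a[idx[i]] may hit L1 or miss all the way to DRAM,
 * and the compiler can't know which at compile time. An out-of-order
 * core keeps many of these loads in flight at once (memory-level
 * parallelism); an in-order VLIW machine stalls at the first use of
 * a missing load unless the compiler software-pipelined far enough
 * ahead -- using a latency guess it had to make once, at build time. */
long gather_sum(const long *a, const int *idx, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[idx[i]];
    return sum;
}
```

IA-64 did try to patch around this with speculative and advanced loads so the compiler could hoist loads early, but the compiler still had to guess where hoisting would pay off.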
Ampere is designing its own server CPU core implementing the Arm instruction set. I'm anxious to see how it turns out.
Are you really going to argue that Intel did not initially want to replace x86 with EPIC? Seriously???? I mean, this is completely wrong. Intel wanted to move to 64-bit, wasn't having a great time extending x86, and a new instruction set was highly advantageous.
For one, it got them to 64-bit, and it got them off an antiquated instruction set that had to be decoded into internal micro-ops, with the inherent disadvantages in size, power use, and performance that entailed. Most importantly, it was an instruction set that would belong to them and wouldn't be shared with AMD. Plus, they expected the performance benefits to be significant. The question is, why would they not? They were going to own the processor and the intellectual property, as they did. What possible reason would they have against this???? Merced even had x86 compatibility built into it, as I remember, but it was way behind a high-performing x86 processor when running in emulation mode.
I can find about 20 articles saying the same, I guess, but hopefully one will do, since this is simply common knowledge and no one (besides you) seems to dispute it.
https://www.engadget.com/2017-05-13-intel-ships-last-itanium-chips.html
By the way, the Pentium Pro was the basis for both client machines and servers, and did quite well at both. Of course, modern processors are as well. Itanium was meant to serve both, too. It's not a foreign concept at all, even back then.
You seem to be implying that Intel could only choose between replacing PA-RISC and moving off of x86. They chose both. And more than that: several RISC designers adopted EPIC and stopped designing their own processors.
With regard to trying to encroach on mainframes, there's simply no argument, because they were trying to win mainframe business with Itanium, and even Xeon. Just as I won't take your word for your meetings, don't take mine. But take Intel's. That's fair, right?
https://www.intel.com/content/dam/d...itanium-xeon-hp-mainframe-workloads-paper.pdf. I guess you weren't at the meetings, but clearly Intel was trying to migrate people off of mainframes (probably lower-end ones) to their server products. I guess it didn't work that well.
Your remarks about $100 billion are, typically, unrelated to anything I said. I said I didn't like the language, not that Ellison made a mistake buying Sun; I don't know enough to paint with such a broad brush. I also said it's not as significant as the C derivatives, and it seems a lot less popular than it was. For a while it was the greatest thing since the atom bomb, but from what I can see, the C derivatives are clearly more widely used than Java. I can't say for sure, since I haven't seen definitive data, but I sure see C much more than I see Java.
I completely agree that compiler optimization doesn't even approximate out-of-order hardware on modern processors. For throughput-type applications I can see reasons for it, given how much simpler in-order is, but the single-threaded performance is never going to approximate a well-designed OoO processor. I guess they learned that lesson the hard way. But McKinley did a really good job for its time, if I remember correctly, and was very competitive even in single-threaded performance.
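As a rough illustration of the throughput-versus-latency point (a toy example of my own, nothing vendor-specific; `scale` and `serial_sum` are hypothetical names): the first loop below is embarrassingly parallel, which is exactly what simple in-order cores, and lots of them, handle well; the second is one long dependency chain, where per-iteration latency dominates and an OoO core can at least overlap the chain with surrounding work:

```c
#include <stddef.h>

/* Throughput-friendly: every iteration is independent, so a simple
 * in-order core (or a compiler's static schedule) can keep the
 * pipeline full, and the work splits cleanly across many cores. */
void scale(float *dst, const float *src, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Latency-bound: each iteration needs the previous result (the
 * compiler can't reassociate FP adds by default), so the loop runs
 * at one add-latency per element no matter how wide the machine is.
 * A static schedule has nothing to rearrange here; an OoO core can
 * at least execute it alongside neighboring independent code. */
float serial_sum(const float *src, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += src[i];
    return acc;
}
```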
What I've seen of Ampere is that they use relatively simple cores, and a lot of them. Maybe they'll work on a single-threaded beast and surprise us, but so far that hasn't been the case. Apple tried, and got a smackdown when Alder Lake came out. Given that they're stuck with TSMC's processes, which aren't primarily focused on performance, some of it is that, but I'm certain their architecture is also responsible, since it's also somewhat behind AMD's best, which uses similar processes. Of course, the power efficiency is quite good, and that's got to be very important to them as well.
I was really disappointed when AMD killed off K12. At this point, I think it's more likely Intel will come out with a very powerful ARM core, since it would help their foundry business if they could offer to put these together for companies. But even then, it's so expensive that I don't know enough to determine whether it would be worth the cost to them. Maybe a little down the road, when their foundry business is established enough to benefit more significantly from such a part; right now it doesn't seem like it would shift a lot of business to them yet. Again, I don't know, it's just speculation.