
How many patents and copyrights does TSM have? 68860

IBM's z-series is a unique systems-level product-line. Very innovative technology at multiple architecture layers.

Apple does not design enterprise systems. Nvidia doesn't either, as HPC is a different market altogether than enterprise business systems.

You're comparing Macs to the z-series? That's silly. Macs are used in business, especially in content creation applications, for example, but only as workstations or work-group application servers.

Itanium was not a success, but it was aimed at the 64-bit RISC Unix systems (e.g. PA-RISC), not IBM's z-series. Since all but one of the 64-bit RISC processors of 20 years ago went away (the exception being the Sun/Oracle SPARC CPU), I suppose if we squint we could call Itanium a kamikaze-like success. (I don't.) In the end AMD's x64 strategy finally killed Itanium, but you know that.

MacOS? Seriously? Nope, but it doesn't try to be. Mach for hardware interface and abstraction, a highly-customized FreeBSD Unix layer for OS guts, and the Mac user interface, file management, and networking? I don't understand why you're pressing this. These ingredients are not a recipe for highly-available enterprise servers, and they weren't meant to be.

I look at IBM's strategy as defending their gross margin, which isn't that great, considering the value they add. It's 55%. Oracle's is about 73%, to compare them to another CPU/systems/application software/cloud vendor. Put another way, if I were choosing a stock to own, it would be Oracle before IBM.
Mr. Blue, you are impressive.
 
I think you missed why I was comparing MacOS to z/OS. It's because it is ridiculous. It's not very easy, at all, to make something super reliable like z/OS. And there's nothing to suggest Apple would have even the slightest ability to do so. MacOS is a mess compared to it. So is Windows. No one makes anything remotely as reliable, and I feel like that's the biggest impediment.

Itanium was aimed primarily at workstations, but there were also attempts to compete with IBM mainframes. That didn't work too well, which is why it's barely a footnote.

Gross margin for software is pretty well known to be a lot higher than hardware. Even for IBM, their software margins are way higher than their hardware, but hardware also tends to drive software sales, so there's that.

I thought SPARC died a long, long time ago, in a galaxy far, far away. That trash is still around? Also, you left out POWER, which is still around, and still IBM, and still an excellent processor in many scenarios.
 
Really? I thought you were FinFET all the way.
finFET is a better technology for 99% of use cases and is far more scalable than planar FDSOI. All else being equal, bulk finFET is also a cheaper process flow than FDSOI planar (at least if you take intel's word on the matter and the fact that nobody has succeeded in replacing bulk finFETs with FDSOI for general purpose applications). But I do have an appreciation for FDSOI as I think it is neat.
 
I think you missed why I was comparing MacOS to z/OS. It's because it is ridiculous.
I know. You think I'm always misunderstanding you. I'm not.
It's not very easy, at all, to make something super reliable like z/OS.
I know. Long ago and far away I worked for a company which produced high-availability enterprise systems. IBM's were better, IMO.
And there's nothing to suggest Apple would have even the slightest ability to do so. MacOS is a mess compared to it.
Apple would hire/acquire the team.
So is Windows. No one makes anything remotely as reliable, and I feel like that's the biggest impediment.
I don't know enough about modern Windows internals to discuss it.
Itanium was aimed primarily at workstations, but there were also attempts to compete with IBM mainframes. That didn't work too well, which is why it's barely a footnote.
Not correct. HP, the company that did the CPU design on the Itanium, based the Itanium processor design on the next generation VLIW workstation CPU design they were working on. Intel designed the memory controller and the scalable chipset (the 870). Itanium, as in the Intel product, was aimed at the 64bit server market. I know, because I was a member of the Intel program. HP did try to sell Itanium workstations for a while, and quit in 2004.
Gross margin for software is pretty well known to be a lot higher than hardware. Even for IBM, their software margins are way higher than their hardware, but hardware also tends to drive software sales, so there's that.

I thought SPARC died a long, long time ago, in a galaxy far, far away. That trash is still around? Also, you left out POWER, which is still around, and still IBM, and still an excellent processor in many scenarios.
Oracle is mostly analogous to IBM. They designed their own chips, they design, build, and sell proprietary server systems, custom applications for those systems, and have their own cloud. Oracle has recently switched to AMD Epyc CPUs for their latest Exadata systems, and Ampere for cloud CPUs, after dumping their Intel/Optane strategy, but SPARC M7 systems were sold until relatively recently. They even (uniquely) integrated their own InfiniBand interconnect interface design on-die in the M7. I don't know if they did their own switch chips, or used Mellanox (now Nvidia). Oracle is still very much an on-prem enterprise systems vendor, but lately their cloud-based applications have eclipsed on-prem systems as a revenue generator. (Not surprising, given Oracle's infamous software licensing strategy.)

This is Oracle's 2017 Exadata product literature. I'm not a big Oracle fan, so I don't know the detailed history since then.


As for Power, I've previously posted that the Power10 was my favorite server CPU of its generation, and I'm looking forward to seeing Power11. I haven't seen any announcements lately. Have you?
 
"Intel targeted the Itanium at a high end market competing with large Unix or even mainframe systems". https://retrocomputing.stackexchang...the-intel-itanium-failed-to-take-on-the-world . So, I think that although it wasn't their main target, it was something they were also targeting to a limited extent. With all the features Itanium server chips had, that x86 based did not, it was clearly aimed a little higher.

I know it was designed by HP, and assumed by Intel, and I always found the VLIW architecture interesting. I was always suspicious that hoping the compiler could schedule instructions efficiently was an unproven concept. I'm still not convinced VLIW can't work, because I don't think the failure of Itanium definitively states that. Also, the super-fast L1 cache was really strange to me, and had to be a limitation on clock speed. I won't lie, I keep hoping it, and OS/2, make a comeback, but I know it will never happen.

Yeah, I just read Oracle killed their SPARC design about six or seven years ago. While Oracle stumbled into that market with the purchase of Sun, it was never their primary business, and they didn't approach IBM's level of hardware sophistication. I never liked Solaris, at all, but that's just an anecdote. IBM approached from hardware, and mainframes were their primary business for a very long time, even before computers (punched card machines, for example). Now, IBM is pivoting more to software, but big iron is still a very big business, and their machines are excellent.

15 or so years ago I wrote an article on mainframes on Tom's Hardware, so I guess I have a certain affection for them. But, also, I try to fight the public perception that they are dinosaurs, largely irrelevant, and essentially just legacy machines. I'm sure you end up fighting that battle too.
 
"Intel targeted the Itanium at a high end market competing with large Unix or even mainframe systems". https://retrocomputing.stackexchang...the-intel-itanium-failed-to-take-on-the-world . So, I think that although it wasn't their main target, it was something they were also targeting to a limited extent. With all the features Itanium server chips had, that x86 based did not, it was clearly aimed a little higher.
z-series (or whatever it was called twenty years ago) never came up in internal discussions I was in. UNIX-RISC was the competition, and their CPU development programs were seen as a future threat to x86.
I know it was designed by HP, and assumed by Intel, and I always found the VLIW architecture interesting. I was always suspicious that hoping the compiler could schedule instructions efficiently was an unproven concept. I'm still not convinced VLIW can't work, because I don't think the failure of Itanium definitively states that. Also, the super-fast L1 cache was really strange to me, and had to be a limitation on clock speed. I won't lie, I keep hoping it, and OS/2, make a comeback, but I know it will never happen.
I'm not a VLIW fan. I thought HP was grossly misguided, but the objective was to get them on an Intel CPU, so I shut up.
Yeah, I just read Oracle killed their SPARC design about six or seven years ago. While Oracle stumbled into that market with the purchase of Sun, it was never their primary business, and they didn't approach IBM's level of hardware sophistication. I never liked Solaris, at all, but that's just an anecdote. IBM approached from hardware, and mainframes were their primary business for a very long time, even before computers (punched card machines, for example). Now, IBM is pivoting more to software, but big iron is still a very big business, and their machines are excellent.
Sun servers and Oracle Exadata appliances were two different markets, and database servers are fertile territory for custom hardware architectures (even though that's mostly a bad idea in the long run, in my experienced opinion). Exadata was the primary market. That's where the high margins are for systems. Ellison was also absolutely fascinated by Java (which Sun owned).
15 or so years ago I wrote an article on mainframes on Tom's Hardware, so I guess I have a certain affection for them. But, also, I try to fight the public perception that they are dinosaurs, largely irrelevant, and essentially just legacy machines. I'm sure you end up fighting that battle too.
IMO, there's room for exactly one z-series class product in the IT market. I think IBM should milk it with innovation and superior technology and implementation for as long as they possibly can.
 
Not sure what part of Intel you were from, but Itanium was aimed at a lot more than RISC. They were hoping it was going to replace x86 entirely. And I don't remember it that specifically, it was a long time ago, but at least the press releases were indicating they were going after more than mid-range server. Of course, the question always is, what range of the mainframes were they going after? High-end? Probably not, neither HP nor Intel have anywhere near the expertise to compete with IBM. Not now, not then. But, there's a cross-over point where low-end mainframe usage and high-end servers meet, and Intel/HP was very much positioning Itanium systems there, above their x86 server based products. But, Big Blue stepped on them like they were roaches.

Of course, many others tried to compete with IBM, and failed. Fujitsu and Amdahl were probably the most recent, I believe, but not sure. And yeah, they found out it wasn't easy either, and that maybe IBM actually knew what they were doing, or faked it extremely well.

I hated Java, and never considered it a real programming language. No pointers? Please. Nothing about it impressed me, and it had a time, but C and its derivatives are still dominant. Java irritated me to death, and I never warmed up to it. Mickey Mouse language. But, Disney is popular too. Java certainly was, but, now, I think significantly less.

The IPC potential on VLIW is what fascinates me about it. But, in-order just seemed like too much of a leap of faith, and ultimately limited the processor in pretty fundamental ways. Also, one clock cycle L1 cache? I just wish I understood better why that was a decision they made. But, you know, the reality is, no one really makes VLIW anymore, so maybe it is fundamentally flawed, but I don't think Itanium necessarily concluded that. I think it had some very strange implementation decisions.
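As an aside, here is a back-of-envelope way to see the clock-speed tension with a single-cycle L1 that the post above wonders about. The timing numbers are purely illustrative assumptions, not Itanium's actual SRAM figures; the point is only that if the cache read must finish inside one cycle, the cycle can be no shorter than that read.

```python
# Illustrative only: assumed SRAM and overhead timings, not Itanium's real numbers.
l1_access_ns = 0.9   # assumed L1 SRAM read time, in nanoseconds
overhead_ns  = 0.1   # assumed latch/wiring overhead per cycle

# Single-cycle L1: the whole read must fit in one clock period.
single_cycle_fmax_ghz = 1.0 / (l1_access_ns + overhead_ns)

# Two-cycle (pipelined) L1: each cycle only has to cover half the read,
# at the cost of one extra cycle of load-use latency.
two_cycle_fmax_ghz = 1.0 / (l1_access_ns / 2 + overhead_ns)

print(f"1-cycle L1 -> fmax ~ {single_cycle_fmax_ghz:.2f} GHz")
print(f"2-cycle L1 -> fmax ~ {two_cycle_fmax_ghz:.2f} GHz")
```

With these made-up numbers the single-cycle constraint costs roughly half the achievable clock, which is the sense in which a one-cycle L1 "had to be a limitation on clock speed"; pipelining the access trades load latency for frequency headroom.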

Processors were too expensive to design and build for the limited scale that Oracle had for their hardware. Well, unless you get a lot of money for each part, like IBM does. It's different now though, with ARM and such doing a lot of the architectural work as the basis for most of these new designs. But, given how quickly designs get outdated, and how expensive they are to make from scratch, right now it doesn't look like we'll see too many, at least not high single-threaded performance cores. But, maybe I'm wrong, guessing on tech is always iffy, and of course, years from now the situation can change, but I don't see too many big, new CPU designs coming out. And even if they do, keeping up with Intel isn't easy. Small cores, yeah. Big ones? Good luck.
 
Not sure what part of Intel you were from, but Itanium was aimed at a lot more than RISC. They were hoping it was going to replace x86 entirely. And I don't remember it that specifically, it was a long time ago, but at least the press releases were indicating they were going after more than mid-range server. Of course, the question always is, what range of the mainframes were they going after? High-end? Probably not, neither HP nor Intel have anywhere near the expertise to compete with IBM. Not now, not then. But, there's a cross-over point where low-end mainframe usage and high-end servers meet, and Intel/HP was very much positioning Itanium systems there, above their x86 server based products. But, Big Blue stepped on them like they were roaches.
You're very amusing. Itanium silicon design engineering. A couple of questions to ask yourself.

1. Do you really think Intel would let HP design a VLIW CPU if it was intended to replace Intel's own x86 product line? Think hard.
Hint: Don't you think it's more likely Intel let HP design Itanium so they would dump the PA-RISC CPU development program and use Itanium?

2. Let's say Intel did intend to carve out a chunk of IBM's mainframe business with Itanium... where would the software stack come from, everything from OS to storage systems, and transaction processing software and databases, which are also IBM compatible to get the applications? HP? IBM itself? Oracle... in 2001? (I think none of the above.)

Of course, many others tried to compete with IBM, and failed. Fujitsu and Amdahl were probably the most recent, I believe, but not sure. And yeah, they found out it wasn't easy either, and that maybe IBM actually knew what they were doing, or faked it extremely well.
You're being silly.
I hated Java, and never considered it a real programming language. No pointers? Please. Nothing about it impressed me, and it had a time, but C and its derivatives are still dominant. Java irritated me to death, and I never warmed up to it. Mickey Mouse language. But, Disney is popular too. Java certainly was, but, now, I think significantly less.
Let's see... who's worth $100B++ and who isn't? ;) Ellison had a plan. Java and its derivatives have been wildly popular, especially with applications developers. Ellison also acquired app companies... remember Peoplesoft? FWIW, I'm not fond of Java either, but I was never in the target developer population.

The IPC potential on VLIW is what fascinates me about it. But, in-order just seemed like too much of a leap of faith, and ultimately limited the processor in pretty fundamental ways. Also, one clock cycle L1 cache? I just wish I understood better why that was a decision they made. But, you know, the reality is, no one really makes VLIW anymore, so maybe it is fundamentally flawed, but I don't think Itanium necessarily concluded that. I think it had some very strange implementation decisions.
The only VLIW implementation I'm aware of in production is from Kalray, but I haven't looked lately. I think off-loading instruction-level parallelization to compilers to compete with superscalar hardware is a losing strategy.
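To make "off-loading instruction-level parallelization to compilers" concrete, here is a minimal sketch of the EPIC/VLIW idea: a compiler-side greedy scheduler that packs independent operations into fixed-width issue bundles before the code ever runs. The tiny IR, the 3-slot bundle width, and the dependence model are simplifying assumptions for illustration, not any real compiler's internals.

```python
# Toy compile-time bundler: pack independent ops into fixed-width VLIW
# bundles, respecting read-after-write dependences within a bundle.
from dataclasses import dataclass, field

BUNDLE_WIDTH = 3  # assumed issue slots per bundle

@dataclass
class Op:
    text: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def bundle(ops):
    """Greedy, in-order packing: fill a bundle until it is full or the
    next op reads a value written earlier in the same bundle."""
    bundles, remaining = [], list(ops)
    while remaining:
        current, written = [], set()
        while remaining and len(current) < BUNDLE_WIDTH:
            op = remaining[0]
            if op.reads & written:   # depends on something in this bundle,
                break                # so it must wait for the next bundle
            current.append(remaining.pop(0))
            written |= op.writes
        bundles.append(current)
    return bundles

prog = [
    Op("ld  r1, [a]",    reads={"a"},        writes={"r1"}),
    Op("ld  r2, [b]",    reads={"b"},        writes={"r2"}),
    Op("add r3, r1, r2", reads={"r1", "r2"}, writes={"r3"}),
    Op("st  [c], r3",    reads={"c", "r3"},  writes=set()),
]

for i, b in enumerate(bundle(prog)):
    print(f"bundle {i}: " + "  ||  ".join(op.text for op in b))
```

The schedule is frozen against latencies the compiler can only guess at compile time; a cache miss on the first load stalls every bundle behind it, which is roughly why static scheduling is described above as a losing fight against out-of-order hardware that re-fills its issue slots dynamically.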
Processors were too expensive to design and build for the limited scale that Oracle had for their hardware. Well, unless you get a lot of money for each part, like IBM does. It's different now though, with ARM and such doing a lot of the architectural work as the basis for most of these new designs. But, given how quickly designs get outdated, and how expensive they are to make from scratch, right now it doesn't look like we'll see too many, at least not high single-threaded performance cores. But, maybe I'm wrong, guessing on tech is always iffy, and of course, years from now the situation can change, but I don't see too many big, new CPU designs coming out. And even if they do, keeping up with Intel isn't easy. Small cores, yeah. Big ones? Good luck.
Ampere is designing their own Arm instruction set server CPU core. I'm anxious to see how it turns out.
 
Are you really going to argue that Intel did not initially want to replace x86 with EPIC? Seriously???? I mean, this is completely wrong. Intel wanted to move to 64-bit, and weren't having a great time extending x86, and using a new instruction set was highly advantageous.

For one, it got them to 64-bit, and got them off an antiquated instruction set that required de-coupling, with the inherent disadvantages in size, power use, and performance that entailed. Most importantly, it was an instruction set that would belong to them, and wouldn't be shared with AMD. Plus, they expected the performance benefits to be significant. The question is, why would they not? They were going to own the processor and intellectual property as they did. What possible reason would they have against this???? Merced even had x86 compatibility built into it, as I remember, but it was way behind a high performing x86 processor while in emulation mode.

I can find about 20 articles saying the same, I guess, but hopefully one will do, since this is simply common knowledge and no one (besides you) seems to argue it. https://www.engadget.com/2017-05-13-intel-ships-last-itanium-chips.html

By the way, Pentium Pro was the basis for both client machines and servers. And did quite well at both. Of course, modern processors are as well. Itanium was meant to be both as well. It's not a foreign concept at all, even back then.

You seem to be implying that Intel could only choose replacing PA-RISC or move off of x86. They chose both. And more than that, as many RISC designers adopted EPIC and stopped designing their own processors.

With regards to trying to encroach on mainframes, there's simply no argument, because they were trying to win mainframe business with Itanium, and even Xeon. Just as I won't take your word for your meetings, don't take mine. But take Intel's. That's fair right? https://www.intel.com/content/dam/d...itanium-xeon-hp-mainframe-workloads-paper.pdf . I guess you weren't at the meetings, but clearly Intel was trying to migrate people off of mainframes (probably lower end ones), to their server products. I guess it didn't work that great.

Your remarks about 100 billion are typically unrelated to anything I said. I said I didn't like the language, not that Ellison made a mistake buying Sun. I don't know enough to comment with such a wide brush. I also said it's not as significant as C derivatives, and it seems a lot less popular than it was. For a while, it was the greatest thing since the Atom bomb, but from what I can see, C derivatives are clearly more widely used than Java. But, I can't say for sure, since I haven't seen definitive data on it, but I sure see C much more than I see Java.

I completely agree that compiler optimization does not even approximate out-of-order hardware on modern processors. For throughput type applications, I guess I can see reasons for it, given how much simpler in order is, but the single-threaded performance is never going to even approximate a well designed OoO processor. I guess they learned that lesson the hard way. But, McKinley did a really good job for when it was made, if I remember correctly, and was very competitive even in single-threaded performance.

What I've seen of Ampere is they use relatively simple cores, and a lot of them. Maybe they will work on a single-threaded beast and surprise us, but so far that's not been the case. Apple tried, and got a smackdown when Alder Lake came out. Given they are stuck with TSMC's processes, which are not primarily focused on performance, some of it is that, but I'm certain their architecture is also responsible given it is also somewhat behind AMD's best, which is also using similar processes. Of course, the power efficiency is quite good, and that's got to be very important to them as well.

I was really disappointed when AMD killed off K12. At this point, I think it's more likely Intel will come out with a very powerful ARM core, since it would help with their foundry business if they could offer to put these together for companies. But, even then, it's so expensive, I don't know enough to determine if it would be worth the cost to them. Maybe a little down the road when their foundry business is established enough to benefit more significantly from such a part, but it seems like right now, it wouldn't shift a lot of business to them yet. Again, I don't know, it's just speculation.
 
Are you really going to argue that Intel did not initially want to replace x86 with EPIC? Seriously???? I mean, this is completely wrong. Intel wanted to move to 64-bit, and weren't having a great time extending x86, and using a new instruction set was highly advantageous.
Yes, seriously. I told you Intel's motivation, which makes perfect business sense, and you just don't like the answer.
For one, it got them to 64-bit, and got them off an antiquated instruction set that required de-coupling, with the inherent disadvantages in size, power use, and performance that entailed. Most importantly, it was an instruction set that would belong to them, and wouldn't be shared with AMD. Plus, they expected the performance benefits to be significant. The question is, why would they not? They were going to own the processor and intellectual property as they did. What possible reason would they have against this???? Merced even had x86 compatibility built into it, as I remember, but it was way behind a high performing x86 processor while in emulation mode.
I told you why, you just don't like the answer. Your reasoning is faulty. I won't speak for or reveal what individual senior CPU designers thought at the time. If we are skeptical of the EPIC strategy, why would we be so surprised if some Intel CPU designers were too?
I can find about 20 articles saying the same, I guess, but hopefully one will do, since this is simply common knowledge and no one (besides you) seems to argue it. https://www.engadget.com/2017-05-13-intel-ships-last-itanium-chips.html
Do you realize the author of the article has no computer industry experience, and no academic background in computer science or computer engineering, and lists only a masters in English as a credential? You continue to be very amusing.
By the way, Pentium Pro was the basis for both client machines and servers. And did quite well at both. Of course, modern processors are as well. Itanium was meant to be both as well. It's not a foreign concept at all, even back then.
Back then this strategy was called "common core". I don't know if Intel is still using it or not.
You seem to be implying that Intel could only choose replacing PA-RISC or move off of x86. They chose both. And more than that, as many RISC designers adopted EPIC and stopped designing their own processors.
That was the whole idea. They made the wrong decision. How do you know what "they" chose?
With regards to trying to encroach on mainframes, there's simply no argument, because they were trying to win mainframe business with Itanium, and even Xeon. Just as I won't take your word for your meetings, don't take mine. But take Intel's. That's fair right? https://www.intel.com/content/dam/d...itanium-xeon-hp-mainframe-workloads-paper.pdf . I guess you weren't at the meetings, but clearly Intel was trying to migrate people off of mainframes (probably lower end ones), to their server products. I guess it didn't work that great.
That paper was written by HP. Yeah, I missed that meeting. :ROFLMAO: Gee, I wonder what their motivation was?
Your remarks about 100 billion are typically unrelated to anything I said. I said I didn't like the language, not that Ellison made a mistake buying Sun.
Good.
I completely agree that compiler optimization does not even approximate out-of-order hardware on modern processors. For throughput type applications, I guess I can see reasons for it, given how much simpler in order is, but the single-threaded performance is never going to even approximate a well designed OoO processor. I guess they learned that lesson the hard way. But, McKinley did a really good job for when it was made, if I remember correctly, and was very competitive even in single-threaded performance.
Amazing, we agree on something.
What I've seen of Ampere is they use relatively simple cores, and a lot of them.
Ampere currently uses Arm Neoverse CPU core IP.
 
The market has spoken, and the market cap of IBM and its partners is dwarfed by the market cap of TSM and its partners by a staggering margin.
 
Your answer makes absolutely no sense, on any level. Intel was trying to move from x86 to EPIC. Look at Wiki, or any articles from that time. I can post them ad nauseam, but because you will simply ignore them as being false, do your own research. This isn't even arguable. It's common knowledge, as evidenced by the number of articles saying the same thing. And even Wiki says in no uncertain terms it was to replace x86. But, yeah, I'll trust you over everyone else, because you claim to know more. Are you arguing just to argue? Pick a better argument.

Since I no longer put any validity into what you'd say about this, you "revealing" anything would have no meaning. It's posturing. Why was Intel so far behind AMD in releasing an extension to 64-bit x86? Why did AMD do it with Microsoft? That was Intel's intention? It makes no sense. But, yeah, I'll take your word for it.

Since HP was selling machines, of course they would try to compete with IBM. It's not the processor that makes the mainframe, not even close. It's everything around it that really makes it special, the same as with a supercomputer. My point is valid, they WERE trying to compete with mainframes, but I suspect the lower end. That is, something published by Intel, which you said was made by HP (with no proof, but it makes sense since they were a system vendor), indicates they were taking a swipe at mainframes as I remembered it. End of story.

Instead of me pretending to have some esoteric knowledge and using that as the basis for my argument, I post articles, one from the maker of the CPU. It's a stronger basis. And if you want, I can post another half dozen links to articles saying EPIC was intended for desktops. It makes a lot of sense to do it, but it just didn't live up to expectations. That happens a lot in the tech world, right?
 
Also, working with IBM equipment didn't impress me at all compared to the competition.
The competition paid IBM handsomely to use those patents. Their IP revenues continue to be huge, and that partly explains their continued R&D. But yeah, execution on manufacturing what they research has withered for decades. In the 1970s they were like TSMC today, with manufacturing capabilities others did not even dream of. But they repeatedly misjudged the importance of new tech - CMOS, minicomputers, PCs (which they grew almost as an accident and never really embraced), mass market software, ... what you see today is a ghost.
 
And they also made the mistake of not advancing certain categories of products fast enough, so as not to compete with their own products. This put those categories at a competitive disadvantage vis-a-vis companies without those artificial constraints.
 
The IPC potential on VLIW is what fascinates me about it. But, in-order just seemed like too much of a leap of faith, and ultimately limited the processor in pretty fundamental ways. Also, one clock cycle L1 cache? I just wish I understood better why that was a decision they made. But, you know, the reality is, no one really makes VLIW anymore, so maybe it is fundamentally flawed, but I don't think Itanium necessarily concluded that. I think it had some very strange implementation decisions.
Scaling did not favor VLIW. Logic got more available than SRAM, and as Seymour Cray said (in various ways) compute is all about the memory. VLIW attempted to keep the logic busy all the time, at the cost of weird SRAM. It is better to keep the SRAM busy and waste/idle logic.

But computer folks find that hard to embrace. Like in the cloud, where vendors spend 2x as much on DRAM as on CPUs, but they then try to max out the core usage even though it forces them to buy excess memory or resort to disaggregated memory. Stranding cores is a lot cheaper.
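A tiny worked example of the cost asymmetry being described here. The only input taken from the post is the roughly 2:1 DRAM-to-CPU spend ratio; the per-server dollar figures and the 15% slack are made-up assumptions for illustration.

```python
# Hypothetical per-server costs; only the ~2:1 DRAM:CPU spend ratio comes from the post.
cpu_cost_per_server  = 10_000   # assumed CPU spend per server ($)
dram_cost_per_server = 20_000   # assumed DRAM spend per server ($), ~2x the CPU

slack = 0.15  # fraction of a resource deliberately left idle as placement headroom

# Option A: keep every core busy by carrying extra DRAM so memory never
# blocks VM placement -> the idle capital is DRAM.
idle_capital_dram  = slack * dram_cost_per_server

# Option B: size DRAM to the working set and let some cores sit idle
# (stranded) when memory fills up first -> the idle capital is CPU.
idle_capital_cores = slack * cpu_cost_per_server

print(f"idle capital per server, extra DRAM:     ${idle_capital_dram:,.0f}")
print(f"idle capital per server, stranded cores: ${idle_capital_cores:,.0f}")
```

Under that spend ratio, the same fraction of idle capacity costs about twice as much when it is memory headroom as when it is stranded cores, which is the argument for idling cores (or reaching for disaggregated memory, as in the CXL reply below) rather than over-buying DRAM.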
 
Great point. I'm very anxious to see measurements of how the first CXL disaggregated memory solutions perform with real cloud VMs and apps. The business case if it's successful is amazing. If disaggregated memory is too slow or doesn't scale well, I know a bunch of people who are going to have career diversions.
 
1. Do you really think Intel would let HP design a VLIW CPU if it was intended to replace Intel's own x86 product line? Think hard.
Hint: Don't you think it's more likely Intel let HP design Itanium so they would dump the PA-RISC CPU development program and use Itanium?
Um, yes, that did indeed happen. Intel kept all 64-bit work on Itanium at the start. I was in Microsoft at the time, and we were given that road map. That was why we jumped at the chance when AMD proposed the AMD-64 architecture (which to this day is still named that in MSFT docs and code) and Intel, when they found out, was forced to follow the AMD lead.

Remember the 6-year architecture pipeline, where 2 years of dreaming locked in on a plan that started to get partner buy-in 4 years before full production. Intel planners made that bet in the 1990s when X32 was just beginning to be credible in enterprise, and a BIG machine had 128MB, when Pentium superscalar was still new and victory over RISC unsure. Intel clearly underestimated the importance of superscalar with its out-of-order micro-op based cores, and instead followed the logic that they needed "better than RISC" to beat RISC, and that meant, as many people thought at the time, the better thing was VLIW. It seemed a natural progression, beating the RISC folks at their own game of leveraging compilers to plan the instructions to work well with a simple core. Since 64-bit machines would need a new ISA and new compilers and rebuilt apps, swallowing all the medicine in one gulp became the plan. Intel started in a field free of competition (AMD almost vanished around the millennium) and they felt empowered to make a big, bold leap.

By the time 6 years had passed, AMD64 combined with the brilliant success of the Xeon micro-ops had sidelined Itanium. Not sure when you got involved, but the team doubtless changed its goals over time to aim at the high-end server market and did not keep preaching the old goals. Neither AMD nor Xeon were yet making inroads on large servers, allowing Itanium some more life. But Xeon, with AMD64 instructions and the competing threats from both AMD and Itanium, stepped up.

Was it a stupid mistake? No, it was a bet which fitted the patterns of CPU architecture evolution in the mid-90s and VLIW was widely expected to be the future. Intel was bigger in enterprise with Unix than with Windows, which was not saying much, and HP was king of enterprise Linux. The early Itaniums were astounding, if barely manufacturable. It was not stupid. It just got blindsided by brilliant execution that kept CISC ISAs relevant long after most people expected/hoped they would die out.

Xeon eventually borrowed a lot from Itanium. HP eventually championed Intel kit in the enterprise market. Intel won that era, handily. Itanium was plan A, then plan B, then simply a source of good ideas.
 
Um, yes, that did indeed happen. Intel kept all 64-bit work on Itanium at the start.
For enterprise servers. x86 work did not stop.
I was in Microsoft at the time, and we were given that road map. That was why we jumped at the chance when AMD proposed the AMD-64 architecture (which to this day is still named that in MSFT docs and code) and Intel, when they found out, was forced to follow the AMD lead.
The second part is how x86-64 happened. The name is appropriate.
Remember the 6-year architecture pipeline, where 2 years of dreaming locked in on a plan that started to get partner buy-in 4 years before full production. Intel planners made that bet in the 1990s when X32 was just beginning to be credible in enterprise, and a BIG machine had 128MB, when Pentium superscalar was still new and victory over RISC unsure. Intel clearly underestimated the importance of superscalar with its out-of-order micro-op based cores, and instead followed the logic that they needed "better than RISC" to beat RISC, and that meant, as many people thought at the time, the better thing was VLIW. It seemed a natural progression, beating the RISC folks at their own game of leveraging compilers to plan the instructions to work well with a simple core. Since 64-bit machines would need a new ISA and new compilers and rebuilt apps, swallowing all the medicine in one gulp became the plan. Intel started in a field free of competition (AMD almost vanished around the millennium) and they felt empowered to make a big, bold leap.
You're postulating. Internally, VLIW was very unpopular with the CPU technical leaders I knew. I always found it fascinating that no x86 CPU Fellows were on Itanium, at least not that I noticed. The lack of software compatibility was not popular. The emulator was too slow. The compiler dependencies and unpredictable application performance were embarrassing. Itanium was a top-down business deal. One interesting part was watching Gelsinger trying to defend it when he was appointed CTO, after he successfully fought off the i860/i960 years before to defend x86. He did not make compelling arguments.
By the time 6 years had passed, AMD64 combined with the brilliant success of the Xeon micro-ops had sidelined Itanium. Not sure when you got involved, but the team doubtless changed its goals over time to aim at the high-end server market and did not keep preaching the old goals. Neither AMD nor Xeon were yet making inroads on large servers, allowing Itanium some more life. But Xeon, with AMD64 instructions and the competing threats from both AMD and Itanium, stepped up.
2001.
Was it a stupid mistake? No, it was a bet which fitted the patterns of CPU architecture evolution in the mid-90s and VLIW was widely expected to be the future. Intel was bigger in enterprise with Unix than with Windows, which was not saying much, and HP was king of enterprise Linux. The early Itaniums were astounding, if barely manufacturable. It was not stupid. It just got blindsided by brilliant execution that kept CISC ISAs relevant long after most people expected/hoped they would die out.
Astounding? How? They were incredibly late and the software was later. Only HP sold a few systems. IBM dumped it and focused on Power. Dell puked on it. HP produced PA-RISC CPUs and systems until 2008.
Xeon eventually borrowed a lot from Itanium. HP eventually championed Intel kit in the enterprise market. Intel won that era, handily. Itanium was plan A, then plan B, then simply a source of good ideas.
Like what? Switched point-to-point coherent links? Itanium was first in Intel to have them, but Xeons used a different design. I remember a labs proposal to use page coloring, but that was never implemented. What features are you thinking of?
 