From CPUs that overheated to those with poor performance—plus some that nearly killed the companies that made them.
Credit: Javier Zayas Photography/Getty Images
Processors are built by multi-billion-dollar corporations using some of the most cutting-edge technologies known to man. But even with all their expertise, investment, and know-how, sometimes these CPU makers drop the ball. Some CPUs have just been poor performers for the money or their generation, while others easily overheated or drew too much power.
Some CPUs were so bad that they set their companies back generations, taking years to recover.
But years on from their release and the fallout, we no longer need to feel let down, disappointed, or ripped off by these lame-duck processors. We can enjoy them for the catastrophic failures they were, and hope the companies involved learned a valuable lesson.
Here are some of the worst CPUs ever made.
Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a considerable expense, the actual bug was tiny. It affected no one who wasn't already doing scientific computing, and, in technical terms, the scale and scope of the problem were never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
Intel Itanium
Credit: Intel/Wikimedia Commons
Intel's Itanium was a radical attempt to push hardware complexity into software. All the work of determining which instructions could execute in parallel was handled by the compiler before the CPU ran a byte of code.
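For a rough feel of what that means, here's a trivial sketch in plain C (not real IA-64 code, and certainly not Intel's compiler): the comments mark the kind of dependency analysis an EPIC compiler had to get right at build time, because the hardware wasn't going to reorder anything for you.

```c
/* Illustrative only: on Itanium, the compiler, not the CPU, decided which
 * operations could be issued together in the same cycle. */
int epic_style_demo(int a, int b, int c, int d) {
    int x = a + b;   /* independent of the next line...                      */
    int y = c + d;   /* ...so an EPIC compiler could bundle the two together */
    int z = x * y;   /* depends on both results, so it has to wait its turn  */
    return z;
}
```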
Analysts predicted that Itanium would conquer the world. It didn't. Compilers were unable to extract necessary performance, and the chip was radically incompatible with everything that had come before it. Once expected to replace x86 entirely and change the world, Itanium limped along for years with a niche market and precious little else.
Itanium's failure was particularly egregious because it represented the death of Intel's entire 64-bit strategy (at the time). Intel had originally planned to move the entire market to IA64 rather than extend x86. AMD's x86-64 (AMD64) proved quite popular, partly because Intel had no luck bringing a competitive Itanium to market. Not many CPUs can claim to have failed so badly that they killed their manufacturer's plans for an entire instruction set.
Intel Pentium 4 (Prescott)
Credit: JulianVilla26/Wikimedia Commons
Prescott doubled down on the Pentium 4's already-long pipeline, extending it to nearly 40 stages, while Intel simultaneously shrank it down to a 90nm die. This was a mistake.
The new chip was crippled by pipeline stalls that even its new branch prediction unit couldn't prevent, and parasitic leakage drove high power consumption, preventing the chip from hitting the clocks it needed to be successful. Prescott and its dual-core sibling, Smithfield, are the weakest desktop products Intel ever fielded relative to its competition at the time. Intel set revenue records with the chip, but its reputation took a beating.
Its reputation for running rather toasty would be a recurring issue for Intel in the future, too.
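That branch prediction problem is easier to picture with code in front of you. This isn't Prescott-specific, just a small C sketch of the kind of data-dependent branch that punished such a deep pipeline: the predictor misses roughly half the time, and every miss throws away all the work already in flight.

```c
/* A minimal sketch of a branch that defeats prediction: the condition depends
 * on random data, so roughly half the iterations mispredict, and each miss
 * flushes the pipeline. On a very deep pipeline, that wasted work adds up. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long sum = 0;
    srand(42);
    for (int i = 0; i < 10000000; i++) {
        int v = rand() & 0xFF;      /* pseudo-random value, 0-255 */
        if (v < 128)                /* ~50/50 and data-dependent: hard to predict */
            sum += v;
        else
            sum -= v;
    }
    printf("%ld\n", sum);
    return 0;
}
```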
AMD Bulldozer
Credit: AMD
AMD's Bulldozer was supposed to steal a march on Intel by cleverly sharing certain chip capabilities to improve efficiency and reduce die size. AMD wanted a smaller core with higher clocks to offset any penalties from the shared design. What it got was a disaster.
Bulldozer couldn't hit its target clocks, drew too much power, and its performance was a fraction of what it needed to be. It's rare that a CPU is so bad that it nearly kills the company that invented it. Bulldozer nearly did. AMD did penance for Bulldozer by continuing to use it. Despite the core's flaws, it formed the backbone of AMD's CPU family for the next six years.
Fortunately, during the intervening years, AMD went back to the drawing board, and in 2017, Ryzen was born. And the rest is history.
Cyrix 6x86
Credit: VIA/Wikimedia
Cyrix was one of the x86 manufacturers that didn't survive the late 1990s. (VIA now holds its x86 license.) Chips like the 6x86 were a major part of the reason why.
Cyrix has the dubious distinction of being the reason why some games and applications carry compatibility warnings. The 6x86 was significantly faster than Intel's Pentium in integer code, but its FPU was abysmal, and its chips weren't particularly stable when paired with Socket 7 motherboards. If you were a gamer in the late 1990s, you wanted an Intel CPU but could settle for AMD. The 6x86 was one of the terrible "everybody else" chips you didn't want in your Christmas stocking.
The 6x86 failed because it couldn't differentiate itself from Intel or AMD in a way that made sense or gave Cyrix an effective niche of its own. The company tried to develop a unique product and wound up earning itself a second spot on this list instead.
Cyrix MediaGX
Credit: VIA/Wikimedia Commons
The Cyrix MediaGX was the first attempt to build an integrated SoC processor for the desktop, with graphics, CPU, PCI bus, and memory controller all on one die. Unfortunately, this happened in 1997, which means all those components were really terrible.
Motherboard compatibility was incredibly limited, the underlying CPU architecture (Cyrix 5x86) was equivalent to Intel's 80486, and the CPU couldn't connect to an off-die L2 cache (the only kind of L2 cache there was, back then). Chips like the Cyrix 6x86 could at least claim to compete with Intel in business applications. The MediaGX couldn't compete with a dead manatee.
The entry for the MediaGX on Wikipedia includes the sentence "Whether this processor belongs in the fourth or fifth generation of x86 processors can be considered a matter of debate." The 5th generation of x86 CPUs is the Pentium generation, while the 4th generation refers to 80486 CPUs. The MediaGX shipped in 1997 with a CPU core stuck somewhere between 1989 and 1992, at a time when people really did replace their PCs every 2-3 years if they wanted to stay on the cutting edge.
It also notes, "The graphics, sound, and PCI bus ran at the same speed as the processor clock also due to tight integration. This made the processor appear much slower than its actual rated speed." When your 486-class CPU is being choked by its own PCI bus, you know you've got a problem.
Texas Instruments TMS9900
Credit: Texas Instruments/Wikimedia Commons
The TMS9900 is a noteworthy failure for one enormous reason: When IBM was looking for a chip to power the original IBM PC, it had two basic choices to hit its own ship date: the TMS9900 and the Intel 8086/8088 (the Motorola 68K was under development but wasn't ready in time).
The TMS9900 had only 16 bits of address space, while the 8086 had 20. That made the difference between addressing 1MB of RAM and just 64KB. TI also neglected to develop a 16-bit peripheral chip, which left the CPU stuck with performance-crippling 8-bit peripherals. The TMS9900 also had no on-chip general-purpose registers; all 16 of its 16-bit registers were stored in main memory. TI had trouble securing second-source partners, and when IBM had to pick, it picked Intel.
Good choice.
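To put that address-space gap in perspective, the arithmetic is simple enough to show as a tiny C program (purely illustrative, of course):

```c
/* Back-of-the-envelope: how much memory 16 vs. 20 address bits can reach. */
#include <stdio.h>

int main(void) {
    unsigned long tms9900 = 1UL << 16;  /* 16 address bits: 65,536 bytes (64KB) */
    unsigned long i8086   = 1UL << 20;  /* 20 address bits: 1,048,576 bytes (1MB) */
    printf("TMS9900: %lu bytes (%lu KB)\n", tms9900, tms9900 / 1024);
    printf("8086:    %lu bytes (%lu KB)\n", i8086, i8086 / 1024);
    return 0;
}
```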
Intel Core i9-14900K
Credit: PCMag
It's rare to call a top chip of its generation a "bad" CPU, and even rarer to denigrate a company's current fastest gaming CPU, but the Intel Core i9-14900K deserves its place on this list. Although it is fantastically fast in gaming and some productivity workloads, and can compete with some of the best chips available at the end of 2025, it is still a bad CPU for a range of key reasons.
For starters, it barely moved the needle. The 14900K is basically an overclocked 13900K (or 13900KS if we're considering special editions), which wasn't much different from the 12900K that came before it. The 14900K was the poster child for Intel's lack of innovation, which is saying a lot considering how long Intel languished on its 14nm node.
The 14900K also pulled way too much power and got exceptionally hot. I had to underclock it when reviewing it just to get it to stop thermal throttling—and that was on a 360mm AIO cooler, too.
The 14th generation was plagued with bugs and microcode issues, too, causing crashes and instability that took repeated BIOS updates to address.
The real problem was that the rest of the range was simply better value. The 14600K is almost as fast in gaming despite being far cheaper, easier to cool, easier to overclock, and less prone to crashes. The rest of the lineup wasn't too exciting, though the 14100 remains a stellar gaming CPU under $100 today.
The 14900K was the most stopgap of stopgap flagships. It was a capstone on years of Intel stagnation and a weird pinnacle in performance at the same time. It's not as big a dud as the other chips on this list, but it did nothing to help Intel's modern reputation, and years later, the company is still trying to course-correct.
Dishonorable Mention: Qualcomm Snapdragon 810
Credit: Qualcomm
The Snapdragon 810 was Qualcomm's first attempt to build a big.LITTLE CPU and was based on TSMC's short-lived 20nm process. The SoC was easily Qualcomm's least-loved high-end chip in recent memory—Samsung skipped it altogether, and other companies ran into serious problems with the device.
Qualcomm claimed that the issues with the chip were caused by poor OEM power management, but whether the problem was related to TSMC's 20nm process, problems with Qualcomm's implementation, or OEM optimization, the result was the same: a hot-running chip that won precious few top-tier designs and is missed by no one.
Dishonorable Mention: IBM PowerPC G5
Credit: Ryanbutterworth/Wikimedia Commons
Apple's partnership with IBM on the PowerPC 970 (marketed by Apple as the G5) was supposed to be a turning point for the company. When Apple announced the first G5 products, it promised to launch a 3GHz chip within a year. But IBM failed to deliver chips that could hit those clocks at reasonable power consumption, and the G5 was never able to replace the G4 in laptops due to its high power draw.
Apple was forced to move to Intel and x86 in order to field competitive laptops and improve its desktop performance. The G5 wasn't a terrible CPU, but IBM wasn't able to evolve the chip to compete with Intel.
Ironically, it was Intel's own inability to compete with Arm-based chips years later that led Apple to build its own silicon in the M-series.
Dishonorable Mention: Pentium III 1.13GHz
Credit: Intel/Wikimedia Commons
The Coppermine Pentium III was a fine architecture. But during the race to 1GHz against AMD, Intel was desperate to maintain a performance lead, even as its own high-end chips became harder and harder to actually buy (at one point, AMD was estimated to have a 12:1 advantage over Intel when it came to actually shipping 1GHz systems).
In a final bid to retake the performance crown, Intel tried to push the 180nm Coppermine P3 up to 1.13GHz. It failed. The chips were fundamentally unstable, and Intel recalled the entire batch.
Dishonorable Mention: Cell Broadband Engine
Credit: Sony/Wikimedia Commons
We'll take some heat for this one, but we'd toss the Cell Broadband Engine on this pile as well. Cell is an excellent example of how a chip can be phenomenally good in theory, yet nearly impossible to leverage in practice.
Sony may have used it as the general processor for the PS3, but Cell was far better at multimedia and vector processing than it ever was at general-purpose workloads (its design dates to a time when Sony expected to handle both CPU and GPU workloads with the same processor architecture). It was quite difficult to write multithreaded code that took advantage of its SPEs (Synergistic Processing Elements), and the design bears little resemblance to any other architecture.
It did end up as part of a linked-PS3 supercomputer built by the Department of Defense, which shows just how capable these chips could be. But that's hardly a daily-driver use case.
What's the Worst CPU Ever?
It's surprisingly difficult to pick an absolute worst CPU. All of the ones on this list were bad in their own way at that specific time. Some of them would have been amazing if they'd been released just a year earlier, or if other technologies had kept pace. Some of them just failed to meet overinflated expectations (Itanium). Others nearly killed the company that built them (Bulldozer). Do we judge Prescott on its heat and performance (bad, in both cases) or on the revenue records Intel smashed with it?
Evaluated against the broadest possible meaning of "worst," I think one chip ultimately stands feet and ankles below the rest: the Cyrix MediaGX. Even so, it is impossible not to admire the forward-thinking ideas behind this CPU. Cyrix was the first company to build what we would now call an SoC, with PCI, audio, video, and RAM controller all on the same chip. More than 10 years before Intel or AMD would ship their own CPU+GPU configurations, Cyrix was out there, blazing a trail.
It's unfortunate that the trail led straight into what the locals affectionately call "Alligator Swamp."
Designed for the extreme budget market, the Cyrix MediaGX disappointed just about anyone who ever came in contact with it. Performance was poor: a Cyrix MediaGX 333 had 95% of the integer performance and 76% of the FPU performance of a Pentium 233 MMX, a CPU running at just 70% of the MediaGX's clock speed. The integrated graphics had no dedicated video memory at all, and there was no option to add an off-die L2 cache, either.
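Normalize those figures per clock and the picture gets even uglier. Here's the back-of-the-envelope math, taking the percentages and clock speeds above at face value (a rough sketch, not a formal benchmark):

```c
/* Rough per-MHz comparison using the figures quoted above. */
#include <stdio.h>

int main(void) {
    double mediagx_mhz = 333.0, pentium_mhz = 233.0;
    /* MediaGX 333 scored ~95% of the P233 MMX in integer and ~76% in FPU tests. */
    double int_per_clock = 0.95 * pentium_mhz / mediagx_mhz;  /* ~0.66 */
    double fpu_per_clock = 0.76 * pentium_mhz / mediagx_mhz;  /* ~0.53 */
    printf("Per-MHz integer throughput vs. Pentium 233 MMX: %.0f%%\n", int_per_clock * 100);
    printf("Per-MHz FPU throughput vs. Pentium 233 MMX:     %.0f%%\n", fpu_per_clock * 100);
    return 0;
}
```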
If you found this under your tree, you cried. If you had to use this for work, you cried. If you needed to use a Cyrix MediaGX laptop to upload a program to sabotage the alien ship that was going to destroy all of humanity, you died.
All in all, not a great chip. Others were bad, sure, but none embody that quite like the Cyrix MediaGX.
