
Intel's Mind-boggling Refusal of ARM Tech

benb

Active member
Summary:
-Intel and AMD are now the only chip companies making x86 chips, in a mature, slow-growth or declining market
-ARM, in contrast, has 249 Cortex-A licensees, in a maturing but still growing market
-AMD has started making ARM-based chips, the Opteron A1100 series, with 8x A57 cores @ 1.7-2.0 GHz depending on the model
-AMD's ARM product is not competitive; software required for the 64-bit ARM architecture (ACPI and PCIe support) was not available until this year, whereas x86 solutions are mature
-Evidence that Intel can only support one architecture (Core) mounted when Intel discontinued Atom
-Intel can share the Core architecture among servers, PCs, tablets, and maybe eventually smartphones, meaning at least internally to Intel, Core has legs
-Intel's costs are inflated because they solely develop and support Core
-Whereas in the foundry world, ARM is a shared, co-developed technology which permits ARM, foundries, Apple, and others to share the costs
-To match the foundry world's cost structure, Intel would have to cut all Core development costs and start contributing to/sharing costs with the ARM ecosystem; this is probably impossible in Intel's business culture

Why does Intel refuse ARM technology? ARM would make them more cost-competitive, right?

First of all, no. The cost of the legacy Core line would have to be eliminated before the ARM ecosystem's low costs could take effect. Doing both ARM and Core would layer costs on top of each other, making the problem worse.

Intel would rather invest in Core than ARM, because of the money to be made by Core in servers. The Atom architecture played second fiddle, wasn't developed sufficiently, and was ultimately trashed. The same would likely happen if ARM were substituted for Atom.

It might be too late for Intel to join the ARM party anyway. AMD has not had great success with its move into ARM products either.

Hopefully Intel will continue to evolve Core so that it can scale from smartphones (2.5 W) to servers (100+ W) and compete with ARM at every thermal design point. Core has been at 4.5 W since the 14nm node (Core m3). It was at 17 W at the 22nm node, showing the tremendous improvements a new node can bring. I expect Intel to have a 2.5 W Core part ready at the 10nm node.
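The node-to-node power trend claimed above can be sanity-checked with quick back-of-envelope arithmetic. The 22nm and 14nm figures are the ones given in this post; the 10nm number is a naive extrapolation assuming the same per-node scaling, not an Intel roadmap figure:

```python
# Back-of-envelope sketch of the TDP trend described in the post above.
# Node/TDP pairs are the post's figures (22nm: 17 W, 14nm: 4.5 W);
# the 10nm projection is pure extrapolation, not an Intel roadmap number.
tdp_by_node = {22: 17.0, 14: 4.5}

shrink_factor = tdp_by_node[14] / tdp_by_node[22]   # ~0.26x TDP per node shrink
projected_10nm = tdp_by_node[14] * shrink_factor    # apply the same factor again

print(f"Projected 10nm TDP: {projected_10nm:.1f} W")
```

If the 22nm-to-14nm scaling repeated, a 10nm part would land near 1.2 W, so the post's 2.5 W expectation looks conservative under that (very rough) assumption.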
 
Intel's business model was vertical integration: protect their IP and manufacture the entire solution. This was back in the late 70s/early 80s. In those days, many/all (?) processor companies owned their own fabs and rarely signed up alternative silicon suppliers. They preferred to build their own additional fabs to supply standard products, not ASIC/SoC embeddable cores, always removing the weakest link in their ecosystem.

ARM decided to go with a royalty model and enabled ASIC/SoC products by allowing ARM IP to be synthesized and manufactured on ASIC processes. Good timing, given EDA companies and foundries and their products/services reducing barriers to development. Less investment required (no fabs, process development, etc.) and easier to make a profit.

The mobile market is gone, and for Intel to spend any additional dollars there, or even become an ARM licensee, makes no sense. Whether ARM has more licensees or units sold than Intel is irrelevant. ARM's license model has strengths and weaknesses; so does Intel's vertical integration. But it would be an interesting investigation to look at historical trends for revenue, net income, number of employees, EPS, and dividends. If I am not mistaken, Intel is at mid-$50B in revenue and $10-15B in net income: nothing to be ashamed of. Has Intel made "bad bets"? Sure. Given their cash flow, they have the resources to continue making "bets" for some time.

Disclosure:
I have never been an Intel employee but own some Intel shares.
 

simguru

Member
The battle is now for the neural-network processor market, ARM will be in the dust as much as Intel at the end of that.
 

hist78

Well-known member
The battle is now for the neural-network processor market, ARM will be in the dust as much as Intel at the end of that.

I am wondering what architecture the new Google Tensor Processing Unit is based on?
 

Jozo035

Member
I am wondering what architecture the new Google Tensor Processing Unit is based on?

Statically scheduled (VLIW, EPIC...) with a minimum number of instructions. This is the solution to get the best performance from a given number of gates.

Or even better, synthesized neural network with any degree of variability/ reconfigurability.
 

hist78

Well-known member

Here is an article that gives us some more detail:
Google Takes Unconventional Route with Homegrown Machine Learning Chips

Google didn't disclose what manufacturing node the TPU is built on, but it's most likely a 28-nanometer node, which was the standard for a new GPU last year. The new Pascal chips from Nvidia are now manufactured using a FinFET process at 16 nanometers, which wasn't available a year ago.

As for the design, Jouppi explained that the decision to do an ASIC as opposed to a customizable FPGA was dictated by the economics.
“We thought about doing an FPGA, but because they are programmable and not that power efficient–remember we are getting an order of magnitude more performance per watt — we decided it was not that big a step up to customization.”
 

simguru

Member
ASICs are best for power efficiency, but you do have to be making a lot of them for it to be worthwhile. The fact that Google aren't selling their gear as a general-purpose solution probably indicates that they had a software solution for something, but it was too slow or too power hungry, and they needed a lot of it, so they decided to build special hardware. Not sure there's anything "unconventional" going on here.
 

ippisl

Guest
Is there a possibility this is using a structured ASIC (like eASIC at 28nm)? Or do the numbers seem too good for that?
 

hist78

Well-known member
simguru: It is not that bad. Especially for a company like Google, it is much more cost effective than buying thousands of Xeons.

presentations/iccs_moore.pdf at master · adapteva/presentations · GitHub (page 16)

Google is not selling the TPU as hardware because they are selling services running on it, which is much more valuable. ;)

Google already had more than one million servers three years ago. So I assume their server count is much greater today, considering all the AI and cloud computing development. Assume each year Google needs to replace or procure 300,000 servers, and assume half of them are dual-processor servers; then we are talking about at least 450,000 processors Google needs each year.

Using this number as a baseline, Google can gain various benefits by utilizing its own hardware/software solutions, such as the TPU.
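The processor-count arithmetic above can be restated explicitly. Every figure here is this post's assumption, not a Google disclosure:

```python
# Rough restatement of the processor-count estimate in the post above.
# All inputs are the post's assumptions, not Google disclosures.
servers_per_year = 300_000              # assumed annual replacement/procurement
dual_socket = servers_per_year // 2     # half assumed to be dual-processor
single_socket = servers_per_year - dual_socket

processors_per_year = dual_socket * 2 + single_socket * 1
print(processors_per_year)  # 450000
```

That volume is the scale at which designing custom silicon like the TPU starts to amortize.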
 

hist78

Well-known member
Back to benb's subject, "Intel's Mind-boggling Refusal of ARM Tech". I think the bigger question is what exactly Intel stands for.

1. Does "Intel Inside" mean everything Intel is selling must be manufactured in-house?

2. Does "Intel Inside" mean everything Intel is selling must be based on x86?

3. Should Intel put most of their resources into the products customers want to buy, or into those products Intel can make or wants to make?

4. Should Intel avoid competing against its own customers, or just double down to become an even bigger system house selling complete server systems, storage solutions, software, and anything else Intel can make money on?

5. How will Intel perform in terms of manufacturing and sales cost, R&D cost, and economies of scale? The competitors and market conditions are changing very fast. Looking at Intel's debt burden and gross and net profit margins, Intel doesn't perform better than, and may perform worse than, many other key market players. Can Intel change fast enough to meet the challenge?
 

simguru

Member
Google already had more than one million servers three years ago. So I assume their server count is much greater today, considering all the AI and cloud computing development. Assume each year Google needs to replace or procure 300,000 servers, and assume half of them are dual-processor servers; then we are talking about at least 450,000 processors Google needs each year.

Using this number as a baseline, Google can gain various benefits by utilizing its own hardware/software solutions, such as the TPU.

As someone pointed out a while back, Intel (and I presume Google) are major consumers of electricity, which is not (currently) free; so there are major savings to be made on the power bill when you are talking about thousands of servers.
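As a rough illustration of that point, here is a sketch with entirely hypothetical round numbers; the server count, per-server savings, and electricity price are made up for illustration, not taken from the thread:

```python
# Illustrative power-bill arithmetic. Server count, per-server savings,
# and electricity price are hypothetical round numbers, not thread data.
servers = 100_000
watts_saved_per_server = 50      # e.g. a more power-efficient chip
usd_per_kwh = 0.10               # illustrative utility rate
hours_per_year = 24 * 365

kwh_saved = servers * watts_saved_per_server / 1000 * hours_per_year
annual_savings = kwh_saved * usd_per_kwh
print(f"${annual_savings:,.0f} saved per year")
```

Even a modest 50 W saved per server compounds into millions of dollars per year at fleet scale, which is why performance per watt dominates datacenter silicon decisions.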
 

simguru

Member
simguru: It is not that bad. Especially for a company like Google, it is much more cost effective than buying thousands of Xeons.

presentations/iccs_moore.pdf at master · adapteva/presentations · GitHub (page 16)

Google is not selling the TPU as hardware because they are selling services running on it, which is much more valuable. ;)

I like what Adapteva are doing too, but the fundamental problem with whizzy new hardware is that there are usually only a couple of people who know how to program it. Since I have a general solution for how to do that, I've tried talking to Adapteva and other people, but generally they persist in doing things the hard way. I worked at Inmos on "Transputer" stuff, and they died for similar reasons (a long time ago).

Intel has the opportunity to dig out of its architectural hole by leveraging the 3D XPoint memory technology and doing something more like Adapteva with "in-situ" processing, but it seems likely they'll miss that and get into some death spiral of stock-price navel-gazing.
 

hist78

Well-known member
As someone pointed out a while back, Intel (and I presume Google) are major consumers of electricity, which is not (currently) free; so there are major savings to be made on the power bill when you are talking about thousands of servers.

I totally agree. For many data center operators, cooling the server farms and managing the ever-growing electricity demand is a huge challenge. How much heat is generated and how much electricity is consumed by a processor or a server architecture is a critical factor in the design and selection process, not just performance.
 

simguru

Member
The fundamental problem for ARM & Intel is that they are still using an architecture and methodology from the 80s that was designed for small single-threaded tasks, and which only got faster because of Moore's Law and Intel's scaling (which stopped alternatives from getting traction). As an approach to implementing algorithms on silicon it's pretty poor, and equally poor for most tasks, but it was sufficiently bad for graphics (and games) that people went and built special processors for that. Now that things aren't scaling, people are working on getting software onto those other processors.

The death of CPU scaling: From one core to many — and why we’re still stuck | ExtremeTech

- I'd say the top line is flattening off now, other than 3D/die-stacking, but 3D doesn't work well for hot CPUs like Intel's.
 

benb

Active member
Intel used to have an ARM license, which came along when Intel acquired DEC's semiconductor business. Intel continued to make XScale processors for Windows CE devices for a while, then sold this business to Marvell.

ARM tech was in its infancy then. It was before the iPhone, and before mobile devices really became a force. Perhaps Intel was too early into the ARM business. Or perhaps this was a deliberate decision, based on Intel evaluating ARM tech and deciding it overlapped with their own developments, or didn't provide a unique advantage.

ARM tech primarily means "low cost", and I believe that is achieved by sharing costs among a large group of licensees. When ARM had a small number of licensees, the low-cost advantage was small.

Intel Fab 68 is in a position to produce ARM chips at a competitive cost, leveraging the cost advantages in Asia, low taxes, and the cost sharing among ARM licensees. They are in a position to produce ARM server chips, if and when a market for ARM server chips develops. So they have options.

I think for now Intel should focus on Core tech, which performs better than ARM. It can scale down to 2.5 W TDP at 10nm, just like the big ARM SoCs. Staying focused on a profitable niche, like Apple does, is a good long-term strategy. Meanwhile, the ARM licensees have to contend with each other, and many/most of them won't survive in the long run.
 

graphexec

New member
The battle is now for the neural-network processor market, ARM will be in the dust as much as Intel at the end of that.

Interestingly, the ex-head of NASA has spent 10 or so years working on what he calls a neuromorphic processor, KnuPath.

I noticed that there is an ARM logo on their chip... wonder what's going on there.

It's a 256 core chip, at around 30 Watts of power dissipation.
My guess is that the cores are clocked at around 600 MHz.

Stealthy Military Startup Launches Neural Processor | EE Times
 