
Intel 4Q results are out! Data-centric business continues to grow double digits

Subramaniam

New member
Intel (INTC) reported a $5.4 billion impact due to tax reform. It also declared a cash dividend of $0.30 per share. GAAP loss was $0.15 per share, while non-GAAP EPS came in at $1.08, an increase of 37% vs. last year. The data-centric business continues its stellar performance, while the PC-centric business remains flat. Check out the earnings highlights: AlphaStreet – Bite – INTC Infographic
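A quick back-of-envelope check of the 37% growth claim (using only the figures quoted in the post, not Intel's actual filings):

```python
# Illustrative arithmetic only; both inputs come from the post above.
non_gaap_eps = 1.08   # reported 4Q non-GAAP EPS
growth = 0.37         # claimed YoY increase

implied_prior_year = non_gaap_eps / (1 + growth)
print(f"Implied year-ago non-GAAP EPS: ${implied_prior_year:.2f}")  # ≈ $0.79
```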
 
This board hates INTC, making it an outstanding contrarian stock pick (IMHO). They are achieving growth via DC while operating a low-maintenance cash cow with few competitors (a sweet place to be). They are no longer unparalleled as the leading process developer (all true, SemiWiki), but they still invest heavily in R&D and earn decent returns on those investments, meaning they have a future as a leader, just not the only leader.
 
You can still count me in the long-term Intel bear camp. Intel's focus on the data center looks to me like a classic upmarket retreat as described by Christensen, who also acknowledged that an upmarket retreat can be profitable for a time.

2017 was the year that the foundries eliminated Intel's historic process advantage, and it won't be long until the foundries are a node ahead. How long will Intel be able to hold the high end if they fall 2-3 years behind on process?
 
If INTC can bring a node up at a cost that permits economic returns for 5 years, then perhaps their new position as a fast follower will continue to be profitable. In a DCF, it's the later years of a fab's life that make you the most money. Being the leader buys you 6-12 months of safe but expensive returns in the early innings. I think it's finding uses for older fabs that makes INTC the most money. And that's true at TSMC also; they burn their 28nm dominance in the race for advanced nodes.
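The fab-economics argument above can be sketched with a toy DCF. All numbers here are invented for illustration: the idea is that in the early years the equipment is still being depreciated and free cash flow is thin, while in the later years the fully depreciated fab runs a trailing node as a cash cow.

```python
# Toy DCF for a hypothetical fab (every figure below is made up for illustration).
# Years 1-5: leading-edge, but heavy depreciation keeps free cash flow modest.
# Years 6-10: fab is paid off; the now-trailing node throws off fat margins.
discount_rate = 0.10
free_cash_flow = [1.0] * 5 + [2.5] * 5   # $B per year, hypothetical

discounted = [cf / (1 + discount_rate) ** (t + 1)
              for t, cf in enumerate(free_cash_flow)]
npv = sum(discounted)
late_share = sum(discounted[5:]) / npv
print(f"NPV: {npv:.2f} $B; share of value from years 6-10: {late_share:.0%}")
```

Even after discounting, the depreciated-fab years contribute well over half the value in this sketch, which is the poster's point about older fabs.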
 
I grew up with Intel and have a great deal of respect for the company. Unfortunately, management missteps have cost them dearly. Intel Mobile and Custom Foundry are absolute failures. Today, Intel 10nm has been delayed, and that puts the Altera guys at a huge disadvantage against Xilinx, who will have 7nm chips soon. And I don't even know where to start with Mobileye. Very distracting acquisitions when Intel could be focusing on their core business and making a ton of cash for investors, in my opinion.
 
This board hates INTC

It has nothing to do with hate. Intel's 10nm is 2 years behind schedule, and to compound matters, Intel gives every indication that it has no clue as to when the node will be up and running. The last solid information I have heard is that Intel's 10nm prototype chips offer no measurable increase in performance over the current 14nm++ chips.

TSMC says it will be producing microprocessors on its new 7nm node in the 2nd quarter of 2018. Likewise, GlobalFoundries says it will be producing 7nm microprocessors by the 4th quarter. Let's see what they deliver. Will they be full-sized microprocessors or BB chips used for smartphones and IoT devices? I haven't read anything about Samsung, but I expect they will be in the mix, although I expect them to focus on RAM chips, which have suddenly become very lucrative.

In the end, once the foundries start with 7nm DUV, they can make the transitions to 7nm EUV and then 5nm fairly easily and quickly. And that is where Moore's Observation (Law) will hit a brick wall. Yes, TSMC is claiming they will be doing 3nm by the mid-2020s, but my sense is the laws of physics will catch up with the foundries after 7nm EUV. The smart move may be to sink resources into refining 7nm EUV before heading off to 5nm.

The last 40 years have been a helluva ride for the silicon business, but we must come to grips with the reality that the curtain is rising on the final act of this opera, and the fat lady is in the corner warming up her vocal cords.
 
Intel's focus on data center looks to me like a classic upmarket retreat as described by Christensen

Google, Facebook and the like are all about data, and this data is handled in data centers. Growth of mobile/wearables/IoT will also mean growth in the data center.
 
[...]
In the end, once the foundries start with 7nm DUV, they can make the transitions to 7nm EUV and then 5nm fairly easily and quickly. And that is where Moore's Observation (Law) will hit a brick wall. Yes, TSMC is claiming they will be doing 3nm by the mid-2020s, but my sense is the laws of physics will catch up with the foundries after 7nm EUV. The smart move may be to sink resources into refining 7nm EUV before heading off to 5nm.

Agreed on the brick wall coming up soon, although it may be 3nm (~10 SiO2 molecules), not 5, if they push hard. I have pointed this out on discussion boards for years. In essence, this will make EUV a 1.5- or 2.5-node technology, and I would not be surprised if it is eventually canceled, or at least not widely adopted, because of that. It will likely only be used by 3-4 chip makers, that's it. Hardly a cash cow for ASML.
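A sanity check on the "~10 SiO2 molecules" figure, assuming the commonly quoted rule of thumb that an SiO2 unit spans roughly 0.3 nm (my assumption, not the poster's):

```python
# Illustrative arithmetic; the 0.3 nm per SiO2 unit is an assumed rule of thumb.
sio2_size_nm = 0.3
node_nm = 3.0
print(round(node_nm / sio2_size_nm))  # → 10
```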

Not sure the fat lady will get a big, splashy Broadway show, though. Yes to ending the core paradigm of Moore's law (linear shrinkage of nodes), but I would be surprised if the industry doesn't come up with some paradigm shift, such as going 3D for DRAM, to continue doubling performance without linear shrinkage (even if not at the same price).
 
I think that Meltdown and Spectre already point to "Moore's Law" ending in the data center. The speculative execution and branch prediction tricks were responses to the end of Dennard scaling and clock speed advances. The resulting move to parallelism, and the fancy footwork needed to run serial code in this alien environment, is not going well, and we have already hit a brick wall in uniprocessor throughput. It is flat to down from here, or a move to more specialized architectures. Transputers, anyone? GPU processing?
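The parallelism brick wall described above is often summarized by Amdahl's law (my framing, not the poster's): no matter how many cores you add, speedup is capped by the serial fraction of the workload. A minimal sketch:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / N), where s is the serial
# fraction of the work and N the number of cores. The 10% serial fraction
# below is a hypothetical example, not a measured figure.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.10, cores), 2))
# With 10% serial code, even 1024 cores cannot reach a 10x speedup,
# which is one reason serial code "hits a brick wall" on parallel hardware.
```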
 