Why Intel 14nm is NOT a Game Changer!
by Daniel Nenni on 02-02-2014 at 10:00 am

On one hand the Motley Fool is saying, “Intel 14nm could change the game,” and on the other hand the Wall Street Cheat Sheet is saying, “Intel should shut down mobile.” SemiWiki says Intel missed mobile and should look to the future and focus on wearables; in this blog I will argue why.

Let’s look back to 2009 when Intel and TSMC signed an agreement to “collaborate on addressing technology platform, intellectual property (IP) infrastructure, and System-on-Chip (SoC) solutions.” Intel and TSMC ported the Atom Core to 40nm and offered it to more than 1,000 of TSMC’s customers:

“We believe this effort will make it easier for customers with significant design expertise to take advantage of benefits of the Intel Architecture in a manner that allows them to customize the implementation precisely to their needs,” said Paul Otellini, Intel president and CEO. “The combination of the compelling benefits of our Atom processor combined with the experience and technology of TSMC is another step in our long-term strategic relationship.”

Unfortunately this venture was a complete failure for business and technical reasons and was put on hold a year later. I was a frequent visitor to Taiwan at the time so I had a front row seat for this one. The excuse was that you can’t just flip a switch and be successful in the mobile market, meaning that Intel’s Atom effort would require patience and perseverance. Fast forward to 2012:

“We are moving Intel® Atom™ processors to our leading-edge manufacturing technologies at twice our normal cadence. We shipped 32nm versions in 2012, and we expect to launch the 22nm generation in 2013, and 14nm versions in 2014. With each new generation of technology, we can boost performance while reducing costs and power consumption—great attributes for any market, but particularly for mobile computing.” (Our Mobile Edge by Paul Otellini, Intel 2012 Annual Report)

Clearly that did not happen at 22nm, with Intel literally GIVING AWAY 40 million 22nm SoCs to get “traction” in the mobile market. And Intel 14nm SoCs are delayed until 2015, which puts them in lockstep with the next generation of 14nm ARM-based processors from QCOM, Apple, Samsung, and a handful of other fabless SoC companies.

As a stopgap measure to fill their new 14nm fabs, Intel dipped a toe into the shark-infested foundry business waters. Unfortunately the only taker was Altera, whose 14nm wafer demand is 3+ years out and whose volume is a fraction of what is needed to keep a fab open. Intel is lucky to have only lost a toe here, as they also risked exposing the secret manufacturing sauce they are famous for. Intel then shuttered Fab 42, which could have been filled by foundry customers.

Let us not forget the other multi-billion dollar Intel forays away from their core competency: McAfee? Intel TV? Can someone help me complete this list in the comment section please? There are just too many for me to remember.

That brings us to where we are today: Intel still does not have a competitive SoC offering and time is running out. I strongly suggest that Intel take note of Google’s recent move out of the smartphone business, selling Motorola Mobility to Lenovo:

“The smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices.” (Larry Page, Google CEO)

If Intel is going to go all-in I strongly suggest Intel focus on Quark and the wearable (embedded) market. Mobile has hit commodity status and is moving way too fast for a semiconductor giant to keep up (TI already gave up their mobile SoC business). Intel has had a historically strong position in the embedded market and it is time for them to get back to a business they truly believe in, absolutely.

More Articles by Daniel Nenni…..



RTL Sign-off – On the Edge of Becoming a Standard
by Pawan Fangaria on 02-01-2014 at 10:00 am


Ever since I first saw Atrenta’s SpyGlass platform providing a comprehensive set of tools across the semiconductor design paradigm, I have felt the need for a common set of standards to evolve for sign-off at the RTL level. Last December, when I read an EE Times article by Piyush Sancheti, VP, Product Marketing at Atrenta, where he talks about billion gate SoC designs, shrinking market windows, and design cycles down to 3-6 months, I was looking for an opportunity to talk to him in a broader sense about how the RTL-level design paradigm is proliferating and what we can expect in the future. This week I had a nice opportunity to talk to him face-to-face in Atrenta’s Noida office. Here is the conversation –

Q: SpyGlass primarily provides a platform for design at RTL and for sign-off at that stage. What has been your experience so far?

In today’s SoC design environment, the size, scale, and complexity of advanced nodes are the prime factors. Most SoCs use several soft IPs, configurable at different levels, and some hard IPs as well. Iterative design closure does not serve the purpose for such large designs. Add to that very short market windows; a new market segment is emerging around the Internet-of-Things with very short turnaround times, on the order of 3 months. RTL sign-off has become a necessity to achieve this faster design closure at lower cost.

So, to answer in short, our leading-edge customers are executing on RTL sign-off and are happy to see the value in it. Last year was the best year for us in terms of business and growth, and we are looking at a bright future from here.

Q: Considering the amount of IP re-use and third-party sourcing in SoC design, standard RTL sign-off criteria could enable reliable IP exchange, as most IPs are sourced at the RTL level. Your comments?

Yes, definitely. At the top level an SoC can be just connectivity between many IPs joined through glue logic. So the quality of the SoC will depend on the quality of the IPs, and therefore a standard criterion must exist for IPs, internal or external. We have been working with TSMC on a standard for soft IP qualification.

Q: That’s quite encouraging. Looking at your talk in EE Times about billion gate SoCs becoming a reality, I can definitely see that RTL sign-off is a must. But do you see common standard RTL sign-off criteria, or rather RTL coverage factors, evolving across the industry for semiconductor design overall?

Yes, it’s required. Even if all IPs on an SoC are qualified, that doesn’t guarantee the quality of the SoC. What if there is a clocking scheme mismatch between IPs? Even at the connectivity level between IPs, we need to look at common plane issues, consistency, synchronous versus asynchronous interfaces, and the like. So a standard for SoC-level sign-off is again a must for the industry. And we are working on it, along with some of our leading customers; it depends on a majority of the design houses adopting this path. It will take time to break that inertia; people will realize that this change in methodology is needed when they are no longer able to continue with the same old methodology.

We have talked about the problems so far; let’s talk about some solutions. We now offer a smart abstract model concept for blocks in SoC design. RTL sign-off can be done at a hierarchical level, which gives very fast turnaround. This is now in use in some of the most complex SoC designs with multiple levels of hierarchy. We have seen amazing results in performance, capacity, memory utilization, number of violations, etc. We are talking gains of one to two orders of magnitude. So we definitely would be interested in evolving the common standard for SoC sign-off at RTL.
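
To make the abstract-model idea concrete, here is a minimal sketch in Python of how a top-level check can run against interface-only block models instead of flat netlists. All names and the check itself are hypothetical illustrations, not how SpyGlass is implemented:

```python
# Minimal sketch: hierarchical sign-off using block "abstracts" that
# carry only interface facts (hypothetical names, not SpyGlass internals).
from dataclasses import dataclass

@dataclass(frozen=True)
class PortAbstract:
    name: str
    clock_domain: str     # domain this port is registered in
    synchronized: bool    # True if the block synchronizes this input itself

@dataclass
class BlockAbstract:
    """Interface-only model of an already signed-off block."""
    name: str
    ports: dict

def check_connection(drv_blk, drv_port, rcv_blk, rcv_port):
    """Flag a clock-domain crossing the receiving block does not synchronize."""
    src = drv_blk.ports[drv_port]
    dst = rcv_blk.ports[rcv_port]
    if src.clock_domain != dst.clock_domain and not dst.synchronized:
        return (f"CDC violation: {drv_blk.name}.{drv_port} ({src.clock_domain})"
                f" -> {rcv_blk.name}.{rcv_port} ({dst.clock_domain})")
    return None

# Top-level sign-off touches only the abstracts, never the blocks' full RTL.
cpu = BlockAbstract("cpu", {"irq_out": PortAbstract("irq_out", "clk_cpu", False)})
uart = BlockAbstract("uart", {"irq_in": PortAbstract("irq_in", "clk_periph", False)})
print(check_connection(cpu, "irq_out", uart, "irq_in") or "clean")
```

Because the checker never loads a block’s internals, capacity and runtime scale with the number of ports rather than the number of gates, which is consistent with the order-of-magnitude gains Piyush describes.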

Q: What should be covered in RTL sign-off?

It spans various design domains: clocking, testability, physical, timing, area, and power. There are rules to avoid congestion and ensure routing completion, such as limits on fan-in, fan-out, mux sizes, and cell pin density. On the timing side there are rules for logic depth, CDC, clock gating, etc. Similarly there are rules for power and area. We have about 300 first-order rules. These have broad applicability across a wide range of market segments.
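
To give a flavor of what such structural rules look like, here is a minimal sketch in Python of a fan-out check and a logic-depth check over a toy netlist. The netlist format and the limits are illustrative assumptions only, not SpyGlass rules or thresholds:

```python
# Minimal sketch of two structural sign-off rules on a toy netlist:
# maximum fan-out (congestion) and maximum logic depth (timing).
# Format and limits are illustrative only, not any tool's defaults.
MAX_FANOUT = 16
MAX_LOGIC_DEPTH = 12

# driver cell -> list of load cells
netlist = {
    "u_and1": ["u_or1", "u_or2"],
    "u_or1":  ["u_mux1"],
    "u_or2":  ["u_mux1"],
    "u_mux1": [],
}

def check_fanout(netlist):
    """Report cells driving more loads than the congestion rule allows."""
    return [f"{cell}: fan-out {len(loads)} > {MAX_FANOUT}"
            for cell, loads in netlist.items() if len(loads) > MAX_FANOUT]

def logic_depth(netlist, cell, memo=None):
    """Longest combinational path, counted in cells, starting at `cell`."""
    memo = {} if memo is None else memo
    if cell not in memo:
        memo[cell] = 1 + max((logic_depth(netlist, load, memo)
                              for load in netlist.get(cell, [])), default=0)
    return memo[cell]

def check_depth(netlist):
    """Report cells whose longest downstream path exceeds the timing rule."""
    return [f"{cell}: depth {logic_depth(netlist, cell)} > {MAX_LOGIC_DEPTH}"
            for cell in netlist
            if logic_depth(netlist, cell) > MAX_LOGIC_DEPTH]

violations = check_fanout(netlist) + check_depth(netlist)
print(violations or "clean")   # this tiny netlist passes both rules
```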

Q: RTL sign-off is a must at the beginning of an SoC design, and post-layout sign-off at the end. Do you see the need for any intermediate levels of sign-off, such as post-floorplan?

Yes, SoC design needs continuous monitoring at each stage. Quality and sign-off are a culture that must be exercised at each stage as the SoC passes through the design phases such as floorplan, placement, and so on. By doing sign-off at RTL, one can get to design closure much faster, more productively, and at lower cost. As we pass through lower levels of design, the cost and iteration time increase. The other advantage of RTL sign-off is that it minimizes iterations at the lower levels. Overall it can reduce design schedule risk by 30-50%.

Q: Do you see a possibility of leading organizations working at RTL, joining together to define a common standard for RTL sign-off of IPs and SoCs for the semiconductor industry? Can Atrenta take a lead? Who should own the standard?

As I said earlier, we are already working with TSMC and some of our other leading customers on this. We would be very interested in the evolution of a common standard which can benefit the whole semiconductor design industry. However, it needs about 10-12 major players from the design community, foundries, and EDA to get the ball rolling. Eventually it will become a success only when the majority of the semiconductor design community embraces it, as we have seen in other spaces. At this moment we are not limited by capability; we are limited by the number of users, which needs to be large enough to provide that kind of momentum.

So yes, we can give it a start and mature it, but going forward some standards body should own it. It may be a new standards body or one of the existing ones; we will have to see.

Q: How far from now do you see that standard evolving?

I guess it should take a minimum of 18-24 months from now. It will not fly until we have a critical mass of the community starting to use it.

I felt extremely happy after talking to Piyush, especially on learning that what I was thinking about is already in progress. This was one of my best conversations with industry leaders. I really admire Piyush’s thought process, as when he said, “we are not doing it on our own. We continuously learn from our customers and partners who provide us the right direction to do things better in this challenging environment and change in ways that can lead to better productivity.” Let’s watch what’s in store for the future.

More Articles by Pawan Fangaria…..



The Changing Semiconductor Foundry Landscape!
by Daniel Nenni on 01-29-2014 at 8:00 am

The foundry landscape is changing again and it is definitely something that should be discussed. There are some people, mostly influenced by Intel, who feel the foundry business has hit the wall at 20nm, which couldn’t be further from the truth. After spending 30 years working in Silicon Valley, I have experienced a lot of change, which is why I founded SemiWiki.com and co-authored a book on the fabless semiconductor revolution. Chronicling this change and looking towards the future is for the greater good of the semiconductor industry, absolutely.


A big change happened at 28nm, when TSMC was the only foundry to yield, resulting in wafer shortages and fab capacity issues. Of course TSMC did not initially build capacity for 90% market share. What semiconductor company would (with the exception of Intel)? Fabless companies such as Qualcomm, Broadcom, and Marvell that were used to multiple manufacturing sources were limited to a single source at 28nm, which was not a comfortable position for them at all. Pricing and delivery are everything in this business, thus the multiple-manufacturing-source business model. As it stands today, 20nm looks to be the same, with TSMC in a dominant market position.

The top fabless companies will make a correction at 14nm and use both TSMC and Samsung for competitive pricing and delivery. There really was no other choice, since GlobalFoundries does not yet have the capacity to supply a QCOM or Apple, and Intel 14nm failed to make a passing foundry grade. With the exception of Altera, NONE of the top fabless semiconductor companies will use Intel at 14nm, which is one of the reasons why the Intel Fab 42 in Arizona is being shuttered, in my opinion. If fabless companies had the choice between Samsung/Intel and GlobalFoundries they would choose GF without a doubt. Working with an IDM/foundry that competes with you is a last resort for sure.

This change is of great help to the fabless semiconductor ecosystem in regards to jobs and design enablement (EDA and IP, for example). Due to ultra-strict security measures and process differences, it will require many more engineers, tools, and IP to manufacture at both TSMC and Samsung at 14nm. This cost will of course be offset by cheaper wafers due to the pricing pressure that competition brings.

If you want a more detailed understanding of the changing foundry landscape there are three very good sources of information:

1. IC Insights’ McLean Report
2. GSA 2014 Foundry Almanac
3. Me

Why me? Because pound for pound I have access to more reports, attend more conferences, and talk to more semiconductor people than anyone else in this industry, believe it. I am connected to 17,962 semiconductor professionals on LinkedIn, so if I don’t know the answer to your question I most certainly know someone who does. Generally I make people buy me lunch for a discussion on the foundry business, but now that my book “Fabless: The Transformation of the Semiconductor Industry” is out, if you buy the book I would be happy to take your call or email and answer whatever questions you may have. Connect with me on LinkedIn, if you haven’t already, and let’s talk.

More Articles by Daniel Nenni…..



TSMC OIP presentations available!
by Beth Martin on 01-27-2014 at 6:27 pm

Are you a TSMC customer or partner? If so, you’ll want to take a look at these presentations from the 2013 TSMC Open Innovation Platform conference:

Through close cooperation between Mentor and Synopsys, Synopsys Laker users can check with Calibre “on the fly” during design to speed creation of design-rule correct layout, including electrically-aware voltage-dependent DRC checks.

• Verify TSMC 20nm Reliability Using Calibre PERC (Mentor Graphics)
  Calibre PERC was used in close collaboration with the TSMC IO/ESD team to develop an automatic verification kit to verify CDM ESD issues for the N20 node.

• EDA-Based DFT for 3D-IC Applications (Mentor Graphics)
  Testing of TSMC’s 2.5D/3D ICs implies changes to traditional Built-In Self-Test (BIST) insertion flows provided by commercial EDA tools. Tessent tools provide a number of capabilities that address these requirements while reducing expensive design iterations or ECOs, which ultimately translates to a lower cost per device.

• Advanced Chip Assembly & Design Closure Flow Using Olympus-SoC (Mentor Graphics & NVIDIA)
  Mentor and NVIDIA discuss the chip assembly and design closure solution for TSMC processes, including concurrent MCMM optimization, synchronous handling of replicated partitions, and layer promotion of critical nets for addressing variation in resistance across layers.

More articles by Beth Martin…


TSMC projects $800 Million of 2.5/3D-IC Revenues for 2016
by Herb Reiter on 01-27-2014 at 11:00 am

At TSMC’s latest earnings call, held mid-January 2014, an analyst asked TSMC for a revenue forecast for their emerging 2.5/3D product line. C.C. Wei, President and Co-CEO, answered: “800 Million Dollars in 2016”. TSMC has demonstrated great vision many times before. For me, an enthusiastic supporter of this technology, this statement represents a big morale boost. I had the opportunity to drive Synopsys’ support for the early TSMC reference flows and saw how that strategic move paid off very well for the entire fabless ecosystem. In my humble opinion, 2.5 and 3D ICs will have as great an impact on our industry as TSMC’s reference flows have had.

TSMC’s prediction for 2.5/3D revenues confirms what I see and hear: several large companies and an impressive number of smaller ones are starting to rely, or already rely, on 2.5/3D technology for products that will become available sometime between 2014 and 2016. Why rely on 2.5/3D technology? Because continued shrinking of feature sizes, even with FinFETs, is no longer economical for many applications. Likewise, wire-bonded multi-die solutions or package-on-package can no longer meet performance and power requirements.

How can busy engineering teams quickly evaluate and choose the best alternative between current and the new 2.5 or 3D-IC solutions?

Because this technology shifts a major part of the value creation into the package, packaging is becoming more important and must be considered PRIOR to silicon development. A new book captures much of the packaging expertise Professor Swaminathan has gained over the last 20 years working at IBM and teaching and researching at Georgia Tech. Together with co-author Ki Jin Han, he addresses most of the topics system and IC designers need to consider when utilizing 2.5 and 3D-IC solutions. Professor Swaminathan is also accumulating hands-on 2.5 and 3D experience as CTO of E-System Design, an EDA start-up in this field. Their 2.5/3D book is available at Amazon.com.

Chapter 1 explains why interconnect delays and the related power dissipation are constraining designers and how Through-Silicon Vias (TSVs) help to finally break down the dreaded “Memory Wall”. Either a 2.5D IC (die side-by-side on an interposer) or a 3D IC (vertically stacked die) can better meet performance, power, system cost, and other requirements. But before expensive implementation is started, the various options available in either need to be objectively evaluated. Both solutions increase bandwidth while lowering power dissipation, latency, and package height. In addition, they simplify integration of heterogeneous functions in a package, for example combining a large amount of memory with a multi-core CPU or adding analog/RF circuits to a logic die.

Chapter 2’s primary target audience is developers of modeling and design tools. It explains how to accurately simulate the impact of TSVs, solder balls, and bonding wires on high-speed designs – information also useful for package and IC designers.

Chapter 3 dives into a lot of practical considerations for designing with the above-mentioned IC building blocks.

Chapter 4 focuses on signal integrity challenges, coupling between TSVs, as well as power and ground plane requirements. Both silicon and glass interposers are covered.

Chapter 5 addresses power distribution and thermal management, and Chapter 6 looks at future concepts currently in development for solving 2.5/3D-IC design challenges.

The many formulas and examples in this book make it a great reference for experienced IC and package designers.

Herb@eda2asic



Is Altera Leaving Intel for TSMC?
by Daniel Nenni on 01-24-2014 at 9:00 am

There is a rumor making the rounds that Altera will leave Intel and return to TSMC. Rumors are just rumors, but this one certainly has legs and I will tell you why, and what I would have done if I were Altera CEO John Daane. Altera is a great company, one that I have enjoyed working with over the years, but I really think they made a serious mistake at 14nm, absolutely. Moving to Intel was not necessarily the mistake; in my opinion, it is how they went about it.

The rumor started here:

“Altera’s recent move [contacting TSMC] is probably due to its worry that the recent Intel 14nm process delay, causing delay in its new products, will let Xilinx win.”
(China Economic Daily News, 12/2/13)

It became more real when Rick Whittington, Senior Vice President at Drexel Hamilton, downgraded Intel stock (INTC) from buy to hold in a note titled “A Business Model in Flux”. There are more than a dozen bullet points, but this one hit home:

“While Altera’s use of 14nm manufacturing late this year wasn’t to ramp until mid-late 2015, it has been a trophy win against other foundries.”

A trophy win indeed; the question is why Altera allowed itself to be an Intel trophy. After working with TSMC for 25 years and perfecting a design ecosystem and early access manufacturing partnership, it was like cutting off your legs before a marathon.

The EDA tools, IP, and methodology for FPGA design and manufacturing are not mainstream, to say the least. It is a unique application which requires a custom ecosystem, and ecosystems are not built in a day or even a year. Ecosystems develop over years of experience and partnerships with vendors. FPGAs are also used by foundries to ramp new process nodes, which is what TSMC has done with Altera for as long as I can remember. This early access not only gave Altera a head start on design, it also helped tune the TSMC manufacturing process for FPGAs. Will Intel allow this type of FPGA optimization partnership for their “Intel Inside” centric processes? That would be like a flea partnering with a dog, seriously.

What would I have done? Rather than be paraded around like a little girl in a beauty pageant, Altera should have been stealthy and designed for both Intel and TSMC FinFETs. Seriously, what did Altera REALLY gain from all of the attention of moving to Intel? Remember, TSMC 16nm is in effect 20nm using FinFETs. How hard would it have been to move their 20nm product to TSMC 16nm while developing the required Intel design and IP ecosystem? Xilinx will tape out 16nm exactly one year after 20nm and exactly one year before Altera tapes out Intel 14nm. Remember, Altera gained market share when they beat Xilinx to 40nm by a year or so.

Correct me if I’m wrong here, but this seems to be a major ego fail for Altera. And if the rumor is true, which I hope it is for Altera’s sake, how is Intel going to spin Altera going back to TSMC for a quick FinFET fix?

More Articles by Daniel Nenni…..



ESD at TSMC: IP Providers Will Need to Use Mentor to Check
by Paul McLellan on 01-22-2014 at 1:24 pm

I met with Tom Quan of TSMC and Michael Buehler-Garcia of Mentor last week. Weirdly, Mentor’s newish buildings are the old Avant! buildings where I worked for a few weeks after selling Compass Design Automation to them. An odd sort of déjà vu. Historically, TSMC has operated with EDA companies in a fairly structured way: TSMC decided what capabilities were needed for their next process node, specified them, and then the EDA companies developed the technology. It wasn’t quite putting out an RFP; in general TSMC wasn’t paying for the development, and the EDA companies would recover their costs and more from the mutual customers using the next node.

The problem with this approach is that it doesn’t really allow for innovation that originates within the EDA companies. Mentor and TSMC have spent the last couple of years working very cooperatively on a flow for checking ESD (electrostatic discharge), latchup, and EOS (electrical overstress). All of these can permanently damage the chip. ESD can be a major problem during manufacture, manufacturing test, assembly, and even in the field. EOS causes oxide breakdown (some one-time programmable memories use this deliberately to program the bit cells, but when it kills other kinds of transistors it is a big problem). Like most things, it is getting worse from node to node, especially at 20nm and 16nm. The gate oxide is getting thinner and so it is simply easier to damage. FinFETs are even more fragile.


Historically TSMC has had layout design rules for these types of issues. But they required marker layers to be added to the cells to indicate which checks should be done in which areas. This causes problems. Adding the marker layers is tedious and not really very productive work. Worse, if the marker layers are wrong then checks can be omitted, often without causing any DRC violations to hint that there is a problem. Another issue is that the design rules from 20nm on are sometimes voltage dependent, again something that was historically addressed with marker layers. Even then, not all rules could be checked: previously 35% of rules could not be checked at all, and 65% required marker layers.

This is increasingly a problem. It is obviously not life-threatening if the application processor in your smartphone fails (although obviously more than annoying). But medical, automotive, and aerospace have fast growing electronic content and they have much higher reliability requirements. If your ABS system or your heart pacemaker fails it is a lot more than annoying.

So Mentor and TSMC decided that they wanted a flow for checking that didn’t require marker layers and covered all the rules. It would obviously need to pull in not just layout data, but netlist and other electrical data (voltage dependent design rules obviously require knowing the voltages). The flow is intended for checking IP as part of the TSMC9000 IP quality program.

This is built on top of Mentor’s PERC (programmable electrical rule checker). They focused on 3 areas where these problems occur:

• I/Os (ESD is mostly a problem in I/Os)
• IP with multiple power domains
• Analog

Voltage-dependent DRC checking is another area of cooperation. Many chips today have multiple voltages. In automotive and aerospace these may include high voltages and, as a general rule, widely separated voltages require widely separated layout on the chip to avoid problems. Again, the big gains in both efficiency and reliability come from avoiding marker layers.
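
To illustrate what voltage-dependent checking means in practice (this is not Calibre PERC, just a toy model), here is a minimal sketch where the required spacing between two shapes grows with the voltage difference between their nets, so the checker needs net voltages from electrical data rather than hand-placed marker layers. All numbers and names are hypothetical:

```python
# Minimal sketch of a voltage-dependent spacing check. The rule,
# net voltages, and toy 1D geometry are hypothetical, not PDK data.
from dataclasses import dataclass

@dataclass
class Shape:
    net: str
    x: float   # center coordinate in um (toy one-dimensional layout)

# Net voltages come from electrical data (netlist, operating conditions),
# which is what removes the need for hand-drawn marker layers.
net_voltage = {"VDD33": 3.3, "VDD10": 1.0, "GND": 0.0}

def required_spacing(delta_v):
    """Hypothetical rule: base spacing plus an allowance per volt of delta-V."""
    return 0.10 + 0.05 * delta_v   # um

def check_pair(a, b):
    """Flag two shapes that sit closer than their voltage difference allows."""
    delta_v = abs(net_voltage[a.net] - net_voltage[b.net])
    spacing = abs(a.x - b.x)
    need = required_spacing(delta_v)
    if spacing < need:
        return (f"{a.net} vs {b.net}: spacing {spacing:.2f}um < "
                f"required {need:.2f}um at dV={delta_v:.1f}V")
    return None

# A 3.3V net next to a 1.0V net needs more room than two same-voltage nets.
print(check_pair(Shape("VDD33", 0.00), Shape("VDD10", 0.20)) or "clean")
```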


The current status is that Calibre PERC is available for full-chip checking at 28nm and 20nm, with 16nm in development. As part of the TSMC9000 IP quality program it is available for IP verification for 20SoC, 16FF, and 28nm. Use of Calibre PERC will become a requirement (currently it is just a recommendation) at 20nm, 16nm, and below.


More articles by Paul McLellan…


TSMC Responds to Intel’s 14nm Density Claim!
by Daniel Nenni on 01-21-2014 at 9:30 pm

TSMC responded to Intel’s 14nm density advantage claim on their most recent conference call. It is something I have been following closely and have written about extensively, both publicly and privately. Please remember that the fabless semiconductor ecosystem is all about crowd sourcing and it is very hard to fool a crowd of semiconductor professionals, absolutely.

First let’s take a look at what TSMC had to say:

Morris Chang – Chairman: So I now would ask Mark Liu to speak to TSMC’s competitiveness versus Intel and Samsung:

Let me comment on Intel’s recent graph shown in their investor meetings, showing on the screen. I — we usually do not comment on other companies’ technology, but this — because this has been talking about TSMC technology and, as Chairman said, has been misleading, to me, it’s erroneous based on outdated data. So I’d like to make the following rebuttal:

On this new graph, the vertical axis is the chip area on a large scale. Basically, this is compared to chip area reduction. On the horizontal axis, it shows the 4 different technologies: 32, 28; 22, 20; 14, 16-FinFET; and 10-nanometer. 32 is Intel technology, and 28 is TSMC technology, as are the following 3 nodes, the smaller numbers: 14-FinFET is Intel, 16-FinFET is TSMC. On the view graph shown at the Intel investor meeting, it is the gray plot showing here. The gray plot showed the 32- and the 20-nanometer TSMC is ahead of the area scaling — but — however, with 16, the gray data shows a little bit of an uptick. Following the same slope, going down to the 10-nanometer, the correct data is what we show on the red line. That’s our current TSMC data. The 16, we have in volume production on 20-nanometer. As C.C. just mentioned, this is the highest density technology in production today.

We take the approach of significantly using the FinFET transistor to improve the transistor performance on top of the similar back-end technology of our 20-nanometer. Therefore, we leverage the volume experience in the volume production this year to be able to immediately go down to the 16 volume production next year, within 1 year, and this transistor performance and innovative layout methodology can improve the chip size by about 15%. This is because the driving of the transistor is much stronger so that you don’t need such a big area to deliver the same driving circuitry.

And for the 10-nanometer, we haven’t announced it, but we did communicate with many of our customers that, that will be the aggressive scaling of technology we’re doing. And so in summary, our 10 FinFET technology will be qualified by the end of 2015. The 10 FinFET transistor will be our third-generation FinFET transistor. This technology will come with industry-leading performance and density. So I want to leave this slide by saying 16-FinFET scaling is much better than Intel said, but still a little bit behind. However, the real competition is between our customers’ products and Intel’s products or Samsung’s products.

Morris Chang – Chairman: Thank you, Mark. In summary, I want to say the following: First, in 2014, we expect double-digit revenue growth and we expect to maintain or slightly improve our structural profitability. As a result, we expect our profit growth to be close to our revenue growth. In 2014, the market segment that most strongly fuels our growth is the smartphone and tablet, mobile segment. The technologies that fuel our growth are the 20-SoC and the 28 high-K metal gate, in both of which we have strong market share. In 2015, our strong technology growth will be 16-FinFET. We believe our Grand Alliance will outcompete both Intel and Samsung, outcompete.

If there is anyone out there who doubts these numbers please post in the comment section or send me a private email. I will follow up with a rebuttal blog based on feedback next week.

More Articles by Daniel Nenni…..


Intel is NOT Transparent Again!
by Daniel Nenni on 01-19-2014 at 9:00 am

Recent headlines suggest that Intel was not transparent about some of the products they showed during their CES keynote. Intel confirmed on Friday that they used ARM-based chips for some of the products but would not say which ones. When your company’s tag line is “Intel Inside” and you hold up a product during your keynote, wouldn’t one assume that Intel was actually inside?

Today, saying someone is not transparent really means they are being deceptive, and when that someone is the CEO of a publicly traded semiconductor company it is serious business, in my opinion. Even more glaring is the Intel claim of a 35% density advantage over TSMC at 14nm. This was presented during the November 21st, 2013 Intel analyst meeting. There is a barely noticeable disclaimer in the bottom right corner that says:

Sources: TSMC keynote, ARM Tech Con 2012, Oct. 30, 2012. Intel data alignment based on internal assessment.

This goes to my argument that Intel is NOT serious about the foundry business. They used a trade show marketing presentation from 2012 for this technical analysis? Is that the best the mighty Intel can do for competitive information?

Based on a thorough investigation by myself and just about every other company in the fabless semiconductor ecosystem, this claim has proven to be absolutely FALSE. I write this now so that when silicon is out and scrutinized we can go back and see who was telling the truth. Spoiler alert: it is not Intel!

The other interesting Intel news is that their big 14nm fab in Arizona will not be in production anytime soon. The delay was called a “minor correction”. The real reason for the delay, in my opinion, is so that Intel can continue to claim 80% capacity utilization so Wall Street does not downgrade INTC stock. If Intel counted idle fabs, their capacity numbers would be closer to 50% than 80%.

The other big news is that TSMC 20nm is in full production. We already knew this but it is nice to see TSMC talking about it:

“We have two fabs, fab 12 and fab 14, that complete the core of the 20nm-SoC. As a matter of fact, we have started production. We are in the [high]-volume [20nm] production as we speak right now,” said C. C. Wei, co-chief executive officer and co-president of TSMC, during a conference call with investors and financial analysts.

Do you remember last year when TSMC said on a conference call that 20nm would be in volume production in Q2 2014? And I said they were being cautious, that it would happen in Q1 2014? I know things, believe it. TSMC also said 20nm will account for 10% of wafer revenues in 2014, which would be more than $2B worth of 20nm wafers.

TSMC also gave a FinFET update:

Speaking at the company’s latest financial meeting, Mark Liu, TSMC co-CEO, claimed its 16nm FinFET process is now ready for tape-out and could be in volume production this year. “Our 16FinFET yield improvement has been ahead of our plan. This is because we have been leveraging the yield learning from 20SoC. Currently, the 16FinFET SRAM yield is already close to that of the 20SoC process.”

Let’s not forget what Mark Bohr of Intel said about TSMC last year:

“Bohr claims TSMC’s recent announcement that it will serve just one flavor of 20nm process technology is an admission of failure. The Taiwan fab giant apparently cannot make at its next major node the kind of 3-D transistors needed to mitigate leakage current, Bohr said.”

TSMC 16nm is a FinFET version of 20nm, right? Maybe Mark saw that in a marketing presentation years ago? Intel, you really are better than this. If you don’t have something transparent to say, maybe you should say nothing at all.

More Articles by Daniel Nenni…..



Intel Wafer Pricing Exposed!
by Daniel Nenni on 12-28-2013 at 12:00 pm

One of the big questions about Intel’s foundry strategy is: can they compete on wafer pricing? Fortunately there are now detailed reports that support what most of us fabless folks already know. The simple answer is no, Intel cannot compete with TSMC or Samsung on wafer pricing at 28nm, 20nm, or 14nm.

In fact, recent reports have shown that Intel 32nm versus TSMC 28nm gives TSMC a 30%+ wafer cost advantage. At Intel 22nm versus TSMC 20nm the cost advantage is 35%+. This is an apples-to-apples comparison of Atom SoC versus ARM SoC silicon. Another key metric is capacity. During the recent investor meeting Intel CFO Stacy Smith claimed Intel was at 80% capacity, so that is the number used in the wafer cost calculations for both Intel and TSMC. I question this number since Intel has three idle fabs (OR, AZ, Ireland), while TSMC 28nm was at 100% capacity until recently, but I digress…..
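
To see why utilization matters so much to wafer cost, here is a back-of-the-envelope sketch. The base cost and fixed-cost split are illustrative assumptions; none of these dollar figures come from Intel or TSMC:

```python
# Back-of-the-envelope: effective wafer cost vs. fab utilization.
# All numbers are illustrative assumptions, not Intel or TSMC data.
def effective_wafer_cost(cost_at_full_util, fixed_fraction, utilization):
    """Fixed costs (depreciation, staff) are spread over fewer wafers
    when utilization drops; variable costs scale with wafer starts."""
    fixed = cost_at_full_util * fixed_fraction
    variable = cost_at_full_util * (1 - fixed_fraction)
    return fixed / utilization + variable

BASE = 5000.0          # $/wafer at 100% utilization (assumed)
FIXED_FRACTION = 0.7   # leading-edge fabs are heavily fixed-cost (assumed)

for util in (1.0, 0.8, 0.5):
    cost = effective_wafer_cost(BASE, FIXED_FRACTION, util)
    print(f"{util:.0%} utilization -> ${cost:,.0f}/wafer")
# 100% -> $5,000; 80% -> $5,875; 50% -> $8,500
```

At the claimed 80% utilization the penalty is modest; at 50% it is crippling, which is why this number matters so much to wafer pricing.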

On the technical side we now know that, even with Intel’s superior process claims, TSMC 28nm SoCs easily beat Intel 32nm SoCs in both power and performance. TSMC 20nm SoCs will again beat Intel 22nm. 14nm SoCs have yet to launch, but one thing I can tell you is that Intel will NOT win business from TSMC’s top customers, which make up more than 50% of fabless revenues:

1. Qualcomm: TSMC and Samsung
2. Apple: TSMC and Samsung
3. NVIDIA: TSMC and Samsung
4. AMD: TSMC and GlobalFoundries
5. MediaTek: TSMC
6. Marvell: TSMC and Samsung
7. Broadcom: TSMC and Samsung
8. TI: TSMC
9. Spreadtrum: TSMC and Samsung
10. Xilinx: TSMC

As you can see, most of these customers will straddle TSMC and Samsung at 14nm to get pricing concessions, which will make it even more difficult for Intel to compete. Additionally, Intel will have the added burden of the three idle fabs, which brings utilization down to 50% (my guess, since Intel was not “transparent” about it during analyst day). I’m really looking forward to the utilization conversation on the next earnings call. Mr. Smith has some explaining to do! Let’s see what kind of answer $15M+ in CFO compensation will get us. Since TSMC 20nm and 16nm use the same metal fabric, the same fabs serve both nodes, so expect a very high utilization rate there.

Also read: Should Intel Offer Foundry Services?

Bottom line: the Intel 14nm “Fill the Fab” foundry strategy is a paper tiger to appease Wall Street. At 10nm it may be a different story altogether. If Intel does in fact deliver 10nm SoCs a year or two ahead of the foundries, they may get business at the normal Intel price premium. But at 14nm it is simply not going to happen, no way, no how.

I also question the business model where you allow your products to be manufactured by a direct competitor. It is a conflict of interest. It is a desperate business move. It is the reason why pure-play foundries exist. But these are desperate times, with only one pure-play foundry (TSMC) for leading-edge SoC silicon. If GlobalFoundries and UMC had the capacity and were able to deliver wafers in lockstep with TSMC, then Samsung and Intel would not have a chance in the foundry business, absolutely.

More Articles by Daniel Nenni…..
