
Intel 14nm Delayed Again?
by Daniel Nenni on 02-12-2014 at 9:00 am

From the same sources that confirmed the last Intel 14nm delay, I have just confirmed another: Intel 14nm is STILL having yield problems. Remember Intel bragging that 14nm is a full node and deriding TSMC because 16nm is “just” 20nm with FinFETs added? Judging by the yield graph Intel has shown, FinFETs are clearly not the problem here. Intel chose a much more aggressive metal fabric to get better density, and that is what is challenging modern lithography methods.

“People in the trenches are usually in touch with impending changes early” ― Andrew S. Grove, Only the Paranoid Survive

Meanwhile, back in the fabless semiconductor ecosystem, 20nm is yielding ahead of schedule, so TSMC will see 20nm revenue this quarter instead of next. I would put the chances of TSMC realizing its forecast of 20nm providing 10% of 2014 revenue as very good. Given the more cautious approach TSMC took to FinFETs, 16nm is also on track, with tape-outs happening now. If all goes as planned, 16nm will ramp in 2015 as 20nm does in 2014.

TSMC expects 20nm to be 2% of Q2 2014 revenue, so the ramp begins. Looking at the 28nm ramp below, 20nm is expected to ramp 20-30% faster (a rough projection sketch follows the list):


  • 28nm 2% Q4 2011
  • 28nm 5% Q1 2012
  • 28nm 7% Q2 2012
  • 28nm 13% Q3 2012
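
As a rough back-of-the-envelope illustration (not TSMC guidance), here is one way to project a 20nm ramp from those 28nm numbers, assuming “20-30% faster” simply means each quarter-over-quarter gain in revenue share is 20-30% larger. The quarter labels and that interpretation of “faster” are my assumptions, and the printed numbers are purely illustrative.

```python
# Project a hypothetical 20nm ramp from the 28nm ramp above.
# Assumption: "20-30% faster" means each quarterly gain in revenue share
# is scaled up by 20-30%. Illustrative only, not TSMC guidance.

ramp_28nm = {"Q4 2011": 2, "Q1 2012": 5, "Q2 2012": 7, "Q3 2012": 13}  # % of total revenue

def project_20nm(ramp, speedup):
    """Scale the quarter-over-quarter gains of the 28nm ramp by `speedup`."""
    quarters_20nm = ["Q2 2014", "Q3 2014", "Q4 2014", "Q1 2015"]  # assumed quarters
    shares = list(ramp.values())
    projected = {quarters_20nm[0]: shares[0]}            # ramp starts at ~2%
    for i in range(1, len(shares)):
        gain = (shares[i] - shares[i - 1]) * (1 + speedup)
        projected[quarters_20nm[i]] = round(projected[quarters_20nm[i - 1]] + gain, 1)
    return projected

for speedup in (0.2, 0.3):
    print(f"{int(speedup * 100)}% faster ramp:", project_20nm(ramp_28nm, speedup))
```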

Back to Intel: new Intel CEO Brian Krzanich committed 14nm for Q3 2013, which was later pushed out to Q1 2014, even though he had held up a laptop at the Intel Developer Forum and boasted that 14nm was in fact on track. At an analyst meeting two months later he showed the slide above and admitted there were yield “challenges” they were still working on. Well, from what I have heard, they are still working on it, so the Intel 14nm ramp may be delayed yet again.

The questions I have are: if this is true, when will Intel disclose this new yield challenge? How much will it delay 14nm products? What about Altera? I’m sure delaying this type of bad news until the problem is fixed is best for damage control, but I find this type of behavior neither transparent nor trustworthy, just my opinion of course.

Meanwhile, the Intel-pumping Seeking Alpha published an article, “Does Intel’s new CEO have what it takes?” This is pure entertainment. Thus far Intel management has made many mistakes that the author glossed over but that have been covered in painful detail on SemiWiki. The lack of transparency started with BK’s first conference call:

    Intel’s Q2 Conference Call
    Intel 14nm Delayed?
    Intel Is Continuing to Scale While Others Pause To Do FinFETs
    No Mention of 14nm at the 2013 Intel Developer Forum?
    Intel Really is Delaying 14nm Move-in. 450mm is Slipping Too. EUV, who knows?
    Intel Quark: Synthesizable Core But You Can’t Have It
    Intel Bay Trail Fail
    Yes, Intel 14nm Really is Delayed…And They Lost $600M on Mobile
    Intel’s Mea Culpa!
    Intel Bay Trail Fail II
    Intel Comes Clean on 14nm Yield!
    Intel is NOT Transparent Again!
    Why Intel 14nm is NOT a Game Changer!

    We write these articles from the trenches to set the record straight. We also write these articles as research for an upcoming book on Intel to chronicle the rise and fall and hopefully the rise again of the number one semiconductor company.

    More Articles by Daniel Nenni…..




    Designing an SoC with 16nm FinFET
    by Daniel Payne on 02-11-2014 at 9:35 pm

    IC designers contemplating the transition to 16nm FinFET technology for their next SoC need to be informed about design flow and IP changes, so TSMC teamed up with Cadence Design Systems today to present a webinar on that topic. I attended the webinar and will summarize my findings.

    Shown below is a 3D layout concept of an ideal FinFET transistor, followed by the actual manufactured device which is rotated 90 degrees from the layout:




    The Great Wall of TSMC
    by Paul McLellan on 02-03-2014 at 5:27 pm

    TSMC doesn’t just sell wafers, it sells trust. It’s the Colgate Ring of Confidence for fabless customers. This focus on trust started at the very beginning when Morris Chang founded TSMC over 25 years ago, and still today trust remains an essential part of their business.

When TSMC started, the big thing it brought was that it was a pure-play foundry: it had no product lines of its own. Foundry services had existed before, but only as semiconductor companies selling excess capacity to each other. That meant the buyer of wafers was always vulnerable: if the selling company became successful and needed the capacity itself, the buyer would get thrown out. And that was without even considering that companies might be buying wafers from a competitor, sending it the masks of their crown jewels and trusting that nobody would try to reverse engineer anything.

So when TSMC started, it brought the confidence that it wasn’t going to suddenly stop supplying wafers because it needed the capacity for itself, and that it wasn’t competing with its customers in the same end markets. That is not to say that there have never been capacity issues: TSMC cannot afford to build excess capacity “just in case” any more than anyone else, so when businesses take off better than forecast, or some other event happens, wafers can end up on allocation just as has always been the case in the inherently cyclical semiconductor business.

Not competing with its customers remains the case today (as, to be fair, it does for GlobalFoundries, SMIC, Jazz and other pure-play foundries). But it is not the case for Samsung, which is in the slightly bizarre situation of having Apple as its largest foundry customer while competing with it as the volume leader in the mobile market (never mind the lawsuits). Samsung is a large, diversified conglomerate, in effect a lot of different companies all using the Samsung brand name. Samsung makes the retina displays for the iPhone too, and doesn’t even use them itself. It is also a huge memory supplier. Apple is rumored to be moving from Samsung to TSMC for its next application processor (presumably to be called the A8).

Intel has made a lot of noise about entering the foundry business, but the only significant company that has been announced is Altera. And there are even rumors that Altera is thinking of going back to TSMC. A company like Altera using Intel for its high-end FPGA products might need 1,000 wafers a month when a fab has a capacity of 50-100K wafers a month. It won’t “fill the fab”; for that Intel needs an Apple or a Qualcomm or an nVidia. But at least Altera can be confident that no matter how successful Intel’s other businesses are, at those volumes it is unlikely to be squeezed out, since the amount of capacity it needs is in the noise.

The other area where foundries have had to invest is in creating an ecosystem around them of manufacturing equipment and materials suppliers, IP and EDA companies. This grand alliance has made a huge investment in R&D; in aggregate it has invested more than any single IDM. As a result the Grand Alliance has produced more innovation in high performance, lower power and lower cost than any single IDM.

At a modern process node, deep cooperation is required. It is not possible for everything to be done serially: get the process ready, get the tools working on a stable process, use the tools to build the IP, start customer designs using the IP and the tools, ramp to volume. Everything has to happen almost simultaneously. This requires an even greater sense of trust among everyone involved, since changing PDKs means changing IP, which means redoing designs, which inevitably means increased investment too.

    So TSMC has a competitive edge, the great wall of TSMC to keep out the barbarian hordes:

    • it sells confidence and trust, not just wafers
    • it does not compete with its customers
    • it has orchestrated a grand alliance to create an ecosystem around its factories that has made a bigger R&D investment than any single IDM.


    More articles by Paul McLellan…



    Why Intel 14nm is NOT a Game Changer!
    by Daniel Nenni on 02-02-2014 at 10:00 am

On one hand the Motley Fool is saying “Intel 14nm could change the game,” and on the other hand the Wall Street Cheat Sheet is saying “Intel should shut down mobile.” SemiWiki says Intel missed mobile and should look to the future and focus on wearables, and in this blog I will argue why.

    Let’s look back to 2009 when Intel and TSMC signed an agreement to “collaborate on addressing technology platform, intellectual property (IP) infrastructure, and System-on-Chip (SoC) solutions.” Intel and TSMC ported the Atom Core to 40nm and offered it to more than 1,000 of TSMC’s customers:

    “We believe this effort will make it easier for customers with significant design expertise to take advantage of benefits of the Intel Architecture in a manner that allows them to customize the implementation precisely to their needs,” said Paul Otellini, Intel president and CEO. “The combination of the compelling benefits of our Atom processor combined with the experience and technology of TSMC is another step in our long-term strategic relationship.”

Unfortunately this venture was a complete failure for business and technical reasons and was put on hold a year later. I was a frequent visitor to Taiwan at the time so I had a front-row seat to this one. The excuse was that you can’t just flip a switch and be successful in the mobile market, meaning that Intel’s Atom effort would require patience and perseverance. Fast forward to 2012:

“We are moving Intel® Atom™ processors to our leading-edge manufacturing technologies at twice our normal cadence. We shipped 32nm versions in 2012, and we expect to launch the 22nm generation in 2013, and 14nm versions in 2014. With each new generation of technology, we can boost performance while reducing costs and power consumption—great attributes for any market, but particularly for mobile computing.” — Our Mobile Edge by Paul Otellini, Intel 2012 Annual Report.

Clearly that did not happen at 22nm, with Intel literally GIVING AWAY 40 million 22nm SoCs to get “traction” in the mobile market. And Intel 14nm SoCs are delayed until 2015, which will be in lockstep with the next generation of 14nm ARM-based processors from QCOM, Apple, Samsung, and a handful of other fabless SoC companies.

As a stopgap measure to fill its new 14nm fabs, Intel dipped its toe into the shark-infested waters of the foundry business. Unfortunately the only taker was Altera, whose 14nm wafer demand is 3+ years out and whose volume is a fraction of what is needed to keep a fab open. Intel is lucky to have lost only a toe here, as it also risked exposing the secret manufacturing sauce it is famous for. Intel then shuttered Fab 42, which could have been filled by foundry customers.

Let us not forget the other multi-billion dollar Intel forays away from its core competency: McAfee? Intel TV? Can someone help me complete this list in the comment section please? There are just too many for me to remember.

    That brings us to where we are today: Intel still does not have a competitive SoC offering and time is running out. I strongly suggest that Intel take note of Google’s recent move out of the Smartphone business selling Motorola Mobility to Lenovo:

“The smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices…” — Larry Page, Google CEO

    If Intel is going to go all-in I strongly suggest Intel focus on Quark and the wearable (embedded) market. Mobile has hit commodity status and is moving way too fast for a semiconductor giant to keep up (TI already gave up their mobile SoC business). Intel has had a historically strong position in the embedded market and it is time for them to get back to a business they truly believe in, absolutely.

    More Articles by Daniel Nenni…..




    RTL Sign-off – At an Edge to become a Standard
    by Pawan Fangaria on 02-01-2014 at 10:00 am


Ever since I first saw Atrenta’s SpyGlass platform providing a comprehensive set of tools across the semiconductor design paradigm, I have felt the need for a common set of standards to evolve for sign-off at the RTL level. Last December, when I read an EE Times article by Piyush Sancheti, VP of Product Marketing at Atrenta, in which he talks about billion-gate SoC designs, shrinking market windows, and design cycles of 3-6 months, I was looking for an opportunity to talk to him in a broader sense about how the RTL-level design paradigm is proliferating and what we can expect in the future. This week I had a nice opportunity to talk to him face-to-face in Atrenta’s Noida office. Here is the conversation –

    Q: SpyGlass is primarily providing a platform for designs at RTL and for sign-off at that stage. What has been your experience so far?

In today’s SoC design environment, size, scale and the complexity of advanced nodes are the prime factors. Most SoCs use several soft IPs, configurable at different levels, and some hard IPs as well. Iterative design closure does not serve the purpose for such large designs. Add to that very short market windows; there is another market segment coming up for the Internet-of-Things that has a very short turn-around time, on the order of 3 months. RTL sign-off has become a necessity to achieve this faster design closure at lower cost.

    So, to answer in short, our leading edge customers are executing on RTL sign-off and are happy to see the value in it. Last year was the best year for us in terms of business and growth and we are looking at a bright future from here.

    Q: Considering the amount of IP re-use and sourcing from third party for SoC design, a standard RTL sign-off criteria can help in reliable IP exchange as most of the IPs are sourced at RTL level. Your comments?

Yes, definitely. At the top level an SoC can be just connectivity between many IPs joined through glue logic. So the quality of the SoC will depend on the quality of the IPs, and therefore a standard criterion must exist for IPs, internal or external. We have been working with TSMC on a standard for soft IP qualification.

    Q: That’s quite encouraging. Looking at your talk in EE Times about billion gate SoCs becoming a reality, I can definitely see that RTL sign-off is a must. But do you see common standard RTL sign-off criteria or rather RTL coverage factors evolving across the industry for the overall semiconductor design?

Yes, it’s required. Even if all IPs on an SoC are qualified, that doesn’t guarantee the quality of the SoC. What if there is a clocking-scheme mismatch between IPs? Even at the connectivity level between IPs, we need to look at common plane issues, consistency, synchronous versus asynchronous interfaces and the like. So a standard for SoC-level sign-off is again a must for the industry. And we are working on it, along with some of our leading customers; it depends on a majority of the design houses adopting this path. It will take time to break that inertia; people will realize that this change in methodology is needed when they are no longer able to continue with the same old methodology.

We have talked about the problems so far; let’s talk about some solutions. We now offer a smart abstract model concept for blocks in SoC design. RTL sign-off can be done hierarchically, which gives very fast turnaround. This is now in use on some of the most complex SoC designs with multiple levels of hierarchy. We have seen amazing results in performance, capacity, memory utilization, number of violations, etc. We are talking gains of one to two orders of magnitude. So we would definitely be interested in evolving a common standard for SoC sign-off at RTL.

Q: What should be covered in RTL sign-off?

It spans various design domains: clocking, testability, physical, timing, area, and power. There are rules to avoid congestion and ensure routing completion, such as fan-in, fan-out, mux sizes and cell pin density. On the timing side there is logic depth, CDC, clock gating and so on. Similarly there are rules for power and area. We have about 300 first-order rules. These have broad applicability across a wide range of market segments.
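
To make the idea of first-order RTL sign-off rules a bit more concrete, here is a minimal, purely hypothetical sketch of what a rule deck spanning those domains might look like. The rule names, thresholds and Python structure are illustrative placeholders, not SpyGlass rules or Atrenta’s implementation.

```python
# Hypothetical sketch of "first order" RTL sign-off rules grouped by the
# domains mentioned above (clocking, congestion/physical, timing, power).
# Names and limits are made up for illustration only.

from dataclasses import dataclass

@dataclass
class Rule:
    domain: str
    name: str
    limit: int
    description: str

RULE_DECK = [
    Rule("congestion", "max_fanout", 32, "Nets driving many loads risk routing congestion"),
    Rule("congestion", "max_mux_width", 16, "Very wide muxes hurt congestion and timing"),
    Rule("timing", "max_logic_depth", 20, "Long combinational paths make timing closure harder"),
    Rule("clocking", "unsynchronized_cdc", 0, "Every clock-domain crossing needs a synchronizer"),
    Rule("power", "ungated_register_bank", 0, "Large register banks should be clock gated"),
]

def check(metrics: dict) -> list:
    """Return violation messages for a block's measured RTL metrics."""
    return [f"{r.domain}/{r.name}: {metrics[r.name]} > {r.limit} ({r.description})"
            for r in RULE_DECK if metrics.get(r.name, 0) > r.limit]

# Example: metrics extracted from a (hypothetical) block
print(check({"max_fanout": 40, "max_logic_depth": 12, "unsynchronized_cdc": 2}))
```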

    Q: RTL sign-off is a must at the beginning of an SoC design and a post layout sign-off at the end. Do you see the need for any intermediate level of sign-off such as post floorplan level?

Yes, SoC design needs continuous monitoring at each stage. Quality and sign-off are a culture that must be exercised at each stage as the SoC passes through the design phases such as floorplan, placement and so on. By doing sign-off at RTL, one can get to design closure much faster, more productively and at lower cost. As we pass through lower levels of design, the cost and iteration time increase. The other advantage of RTL sign-off is that it minimizes iterations at lower levels. Overall it can reduce design schedule risk by 30-50%.

    Q: Do you see a possibility of leading organizations working at RTL, joining together to define a common standard for RTL sign-off of IPs and SoCs for the semiconductor industry? Can Atrenta take a lead? Who should own the standard?

As I said earlier, we are already working with TSMC and some of our other leading customers on this. We would be very interested in the evolution of a common standard that can benefit the whole semiconductor design industry. However, it needs about 10-12 major players from the design community, foundries and EDA to get the ball rolling. Eventually it will become a success only when the majority of the semiconductor design community embraces it, as we have seen in other spaces. At this moment we are not limited by capability; we are limited by the number of users, which needs to be large enough to provide that kind of momentum.

So yes, we can give it a start and mature it, but going forward some standards body should own it. It may be a new standards body or one of the existing ones; we will have to see.

    Q: How far from now do you see that standard evolving?

I guess it should take a minimum of 18-24 months from now. It will not fly until a critical mass of the community starts to use it.

I felt extremely happy after talking to Piyush, especially on learning that what I was thinking about is already in progress. This was one of my best conversations with industry leaders. I really admire Piyush’s thought process when he said, “we are not doing it on our own. We continuously learn from our customers and partners who provide us the right direction to do things better in this challenging environment and change the ways that can lead to better productivity.” Let’s watch what’s in store for the future.

    More Articles by Pawan Fangaria…..




    The Changing Semiconductor Foundry Landscape!
    by Daniel Nenni on 01-29-2014 at 8:00 am

The foundry landscape is changing again and it is definitely something that should be discussed. There are some people, mostly influenced by Intel, who feel the foundry business has hit a wall at 20nm, which couldn’t be further from the truth. After spending 30 years working in Silicon Valley, I have experienced a lot of change, which is why I founded SemiWiki.com and co-authored a book on the fabless semiconductor revolution. Chronicling this change and looking towards the future is for the greater good of the semiconductor industry, absolutely.


A big change happened at 28nm, when TSMC was the only foundry to yield, which resulted in wafer shortages and fab capacity issues. Of course TSMC did not initially build capacity for 90% market share; what semiconductor company would (with the exception of Intel)? Fabless companies such as Qualcomm, Broadcom, and Marvell that were used to multiple manufacturing sources were limited to a single source at 28nm, which was not a comfortable position for them at all. Pricing and delivery are everything in this business, thus the multiple-source manufacturing business model. As it stands today, 20nm looks to be the same, with TSMC in a dominant market position.

The top fabless companies will make a correction at 14nm and use both TSMC and Samsung for competitive pricing and delivery. There really was no other choice, since GlobalFoundries does not yet have the capacity to source a QCOM or an Apple, and Intel 14nm failed to make a passing foundry grade. With the exception of Altera, NONE of the top fabless semiconductor companies will use Intel at 14nm, which is one of the reasons why Intel Fab 42 in Arizona is being shuttered, in my opinion. If fabless companies had the choice between Samsung/Intel and GlobalFoundries they would choose GF without a doubt. Working with an IDM/foundry that competes with you is a last resort for sure.

This change is of great help to the fabless semiconductor ecosystem with regard to jobs and design enablement (EDA and IP, for example). Due to ultra-strict security measures and process differences, it will require many more engineers, tools, and IP to manufacture at both TSMC and Samsung at 14nm. This cost will of course be offset by cheaper wafers due to the pricing pressure that competition brings.

    If you want a more detailed understanding of the changing foundry landscape there are three very good sources of information:


  • IC Insights’ McLean Report
  • GSA 2014 Foundry Almanac
  • Me

Why me? Because pound for pound I have access to more reports, attend more conferences, and talk to more semiconductor people than anyone else in this industry, believe it. I am connected to 17,962 semiconductor professionals on LinkedIn, so if I don’t know the answer to your question I most certainly know someone who does. Generally I make people buy me lunch for a discussion on the foundry business, but now that my book “Fabless: The Transformation of the Semiconductor Industry” is out, if you buy the book I would be happy to take your call or email and answer whatever questions you may have. Connect with me on LinkedIn, if you haven’t already, and let’s talk.

    More Articles by Daniel Nenni…..




    TSMC OIP presentations available!
    by Beth Martin on 01-27-2014 at 6:27 pm

    Are you a TSMC customer or partner? If so, you’ll want to take a look at these presentations from the 2013 TSMC Open Innovation Platform conference:

    Through close cooperation between Mentor and Synopsys, Synopsys Laker users can check with Calibre “on the fly” during design to speed creation of design-rule correct layout, including electrically-aware voltage-dependent DRC checks.

    • Verify TSMC 20nm Reliability Using Calibre PERC(Mentor Graphics)
      Calibre PERC was used in close collaboration with TSMC IO/ESD team to develop an automatic verification kit to verify CDM ESD issues for the N20 node.

    • EDA-Based DFT for 3D-IC Applications (Mentor Graphics)
      Testing of TSMC’s 2.5D/3D ICs implies changes to traditional Built-In Self-Test (BIST) insertion flows provided by commercial EDA tools. Tessent tools provide a number of capabilities that address these requirements while reducing expensive design iterations or ECOs, which ultimately translates to a lower cost per device.

    • Advanced Chip Assembly & Design Closure Flow Using Olympus-SoC (Mentor Graphics & NVIDIA)
      Mentor and NVIDIA discuss the chip assembly and design closure solution for TSMC processes, including concurrent MCMM optimization, synchronous handling of replicated partitions, and layer promotion of critical nets for addressing variation in resistance across layers.

    More articles by Beth Martin…



    TSMC projects $800 Million of 2.5/3D-IC Revenues for 2016
    by Herb Reiter on 01-27-2014 at 11:00 am

At TSMC’s latest earnings call, held in mid-January 2014, an analyst asked TSMC for a revenue forecast for its emerging 2.5/3D product line. C.C. Wei, President and Co-CEO, answered: “800 million dollars in 2016”. TSMC has demonstrated great vision many times before. For me, an enthusiastic supporter of this technology, this statement represents a big morale boost. I had the opportunity to drive Synopsys’ support for the early TSMC reference flows and saw how that strategic move paid off very well for the entire fabless ecosystem. In my humble opinion, 2.5 and 3D ICs will have as great an impact on our industry as TSMC’s reference flows have had.

TSMC’s prediction for 2.5/3D revenues confirms what I see and hear: several large companies and an impressive number of smaller ones are starting to rely, or already rely, on 2.5/3D technology for products that will become available sometime between 2014 and 2016. Why rely on 2.5/3D technology? Because continued shrinking of feature sizes, including FinFETs, is no longer economical for many applications. Likewise, wire-bonded multi-die solutions or package-on-package can no longer meet performance and power requirements.

    How can busy engineering teams quickly evaluate and choose the best alternative between current and the new 2.5 or 3D-IC solutions?

Because this technology shifts a major part of the value creation into the package, packaging is becoming more important and must be considered PRIOR to silicon development. A new book captures much of the packaging expertise Professor Swaminathan has gained over the last 20 years while working at IBM and teaching and researching at Georgia Tech. Together with co-author Ki Jin Han, he addresses most of the topics system and IC designers need to consider when utilizing 2.5 and 3D-IC solutions. Professor Swaminathan is also accumulating hands-on 2.5 and 3D experience as CTO of E-System Design, an EDA start-up in this field. Their 2.5/3D book is available at Amazon.com.

Chapter 1 explains why interconnect delays and the related power dissipation are constraining designers and how Through-Silicon Vias (TSVs) help to finally break down the dreaded “Memory Wall”. Either a 2.5D IC (die side-by-side on an interposer) or a 3D IC (vertically stacked die) can better meet performance, power, system cost and other requirements, but before an expensive implementation is started, the options available in either need to be objectively evaluated. Both solutions increase bandwidth while lowering power dissipation, latency and package height. In addition, they simplify integration of heterogeneous functions in a package, for example combining a large amount of memory with a multi-core CPU or adding analog/RF circuits to a logic die.

    Chapter 2’s primary target audience is modeling and design tools developers. It explains how to accurately simulate the impact of TSVs, solder balls and bonding wires on high-speed designs – information also useful for package and IC designers.

    Chapter 3 dives into a lot of practical considerations for designing with the above mentioned IC building blocks.

    Chapter 4 focuses on signal integrity challenges, coupling between TSV as well as power and ground plane requirements. Both silicon and glass interposers are covered.

    Chapter 5 addresses power distribution and thermal management and Chapter 6 looks at future concepts currently in development for solving 2.5/3D-IC design challenges.

    The many formulas and examples in this book make it a great reference for experienced IC and package designers.

    Herb@eda2asic




    Is Altera Leaving Intel for TSMC?
    by Daniel Nenni on 01-24-2014 at 9:00 am

    There is a rumor making the rounds that Altera will leave Intel and return to TSMC. Rumors are just rumors but this one certainly has legs and I will tell you why and what I would have done if I were Altera CEO John Daane. Altera is a great company, one that I have enjoyed working with over the years, but I really think they made a serious mistake at 14nm, absolutely. Altera moving to Intel was not necessarily the mistake, in my opinion it is how they went about it.

    The rumor started here:

“Altera’s recent move [contacting TSMC] is probably due to its worry that the recent Intel 14nm process delay, by delaying its new products, will let Xilinx win”
— China Economic Daily News, 12/2/13

It became more real when Rick Whittington, Senior Vice President at Drexel Hamilton, downgraded Intel stock (INTC) from buy to hold in a note titled “A Business Model in Flux”. There are more than a dozen bullet points but this one hit home:

    While Altera’s use of 14nm manufacturing late this year wasn’t to ramp until mid-late 2015, it has been a trophy win against other foundries

A trophy win indeed; the question is why did Altera allow itself to be an Intel trophy? After working with TSMC for 25 years and perfecting a design ecosystem and early-access manufacturing partnership, it was like cutting off your legs before a marathon.

The EDA tools, IP, and methodology for FPGA design and manufacturing are not mainstream, to say the least. It is a very unique application which requires a custom ecosystem, and ecosystems are not built in a day or even a year. Ecosystems develop over years of experience and partnerships with vendors. FPGAs are also used by foundries to ramp new process nodes, which is what TSMC has done with Altera for as long as I can remember. This early access not only gave Altera a head start on design, it also helped tune the TSMC manufacturing process for FPGAs. Will Intel allow this type of FPGA optimization partnership for its “Intel Inside”-centric processes? That would be like a flea partnering with a dog, seriously.

What would I have done? Rather than be paraded around like a little girl in a beauty pageant, Altera should have been stealthy and designed to both the Intel and TSMC FinFET processes. Seriously, what did Altera REALLY gain from all of the attention of moving to Intel? Remember, TSMC 16nm is in effect 20nm with FinFETs. How hard would it have been to move their 20nm product to TSMC 16nm while developing the required Intel design and IP ecosystem? Xilinx will tape out 16nm exactly one year after 20nm and exactly one year before Altera tapes out Intel 14nm. Remember, Altera gained market share when it beat Xilinx to 40nm by a year or so.

    Correct me if I’m wrong here but this seems to be a major ego fail for Altera. And if the rumor is true, which I hope it is for the sake of Altera, how is Intel going to spin Altera going back to TSMC for a quick FinFET fix?

    More Articles by Daniel Nenni…..




    ESD at TSMC: IP Providers Will Need to Use Mentor to Check
    by Paul McLellan on 01-22-2014 at 1:24 pm

I met with Tom Quan of TSMC and Michael Beuler-Garcia of Mentor last week. Weirdly, Mentor’s newish buildings are the old Avant! buildings where I worked for a few weeks after selling Compass Design Automation to them. An odd sort of déjà vu. Historically, TSMC has operated with EDA companies in a fairly structured way: TSMC decided what capabilities were needed for its next process node, specified them, and then the EDA companies developed the technology. It wasn’t quite putting out an RFP; in general TSMC wasn’t paying for the development, and the EDA companies would recover their costs and more from the mutual customers using the next node.

The problem with this approach is that it doesn’t really allow for innovation that originates within the EDA companies. Mentor and TSMC have spent the last couple of years working very cooperatively on a flow for checking ESD (electrostatic discharge), latchup and EOS (electrical overstress). All of these can permanently damage a chip. ESD can be a major problem during manufacture, manufacturing test, assembly and even in the field. EOS causes oxide breakdown (some one-time programmable memories use this deliberately to program the bit cells, but when it kills other kinds of transistors it is a big problem). Like most things, it is getting worse from node to node, especially at 20nm and 16nm: the gate oxide is getting thinner, so it is simply easier to damage, and FinFETs are even more fragile.


Historically TSMC has had layout design rules for these types of issues, but they required marker layers to be added to the cells to indicate which checks should be done in which areas. This causes two problems. Adding the marker layers is tedious and not very productive work. Worse, if the marker layers are wrong then checks can be omitted, often without causing any DRC violations to hint that there is a problem. Another issue is that the design rules from 20nm on are sometimes voltage dependent, something that again was historically addressed with marker layers. Even then, not all rules could be checked: previously 35% of rules could not be checked at all and 65% required marker layers to check.

    This is increasingly a problem. It is obviously not life-threatening if the application processor in your smartphone fails (although obviously more than annoying). But medical, automotive and aerospace have fast growing electronic content and they have much higher reliability requirements. If your ABS system or your heart pacemaker fails it is a lot more than annoying.

    So Mentor and TSMC decided that they wanted a flow for checking that didn’t require marker layers and covered all the rules. It would obviously need to pull in not just layout data, but netlist and other electrical data (voltage dependent design rules obviously require knowing the voltages). The flow is intended for checking IP as part of the TSMC9000 IP quality program.

    This is built on top of Mentor’s PERC (programmable electrical rule checker). They focused on 3 areas where these problems occur:

    • I/Os (ESD is mostly a problem in I/Os)
    • IP with multiple power domains
    • analog

    Voltage dependent DRC checking is another area of cooperation. Many chips today have multiple voltages. In automotive and aerospace these may include high voltages and, as a general rule, widely separated voltages require widely separated layout on the chip to avoid problems. Again, the big gains in both efficiency and reliability come from avoiding marker layers.
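
To illustrate the concept only (the numbers and code below are made up and are not TSMC design rules or Calibre PERC syntax), a voltage-dependent spacing check needs net voltages as well as geometry, with the required spacing growing as the voltage difference between two nets grows:

```python
# Minimal sketch of the idea behind voltage-dependent spacing rules: the
# required spacing between two wires grows with the voltage difference of the
# nets they carry, so the checker needs electrical (netlist/voltage) data,
# not just layout geometry. The spacing table is illustrative only.

SPACING_TABLE = [        # (max |deltaV| in volts, required spacing in microns)
    (1.0, 0.10),
    (2.5, 0.20),
    (5.0, 0.45),
]
DEFAULT_SPACING = 0.80   # anything above the last threshold

def required_spacing(v1: float, v2: float) -> float:
    """Look up the minimum spacing for two nets given their operating voltages."""
    dv = abs(v1 - v2)
    for max_dv, spacing in SPACING_TABLE:
        if dv <= max_dv:
            return spacing
    return DEFAULT_SPACING

def check_pair(net_a, net_b, measured_spacing):
    """Flag a violation if two nets are drawn closer than their voltage delta allows."""
    need = required_spacing(net_a[1], net_b[1])
    if measured_spacing < need:
        print(f"VIOLATION: {net_a[0]} vs {net_b[0]}: {measured_spacing}um < {need}um required")

# Example: a 1.0V core net next to a 5.0V I/O net, drawn 0.30um apart
check_pair(("core_vdd", 1.0), ("io_vdd5", 5.0), 0.30)
```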


The current status is that Calibre PERC is available for full-chip checking at 28nm and 20nm, with 16nm in development. As part of the IP 9000 program it is available for IP verification at 20SoC, 16FF and 28nm. Use of Calibre PERC will become a requirement (currently it is just a recommendation) at 20nm, 16nm and below.


    More articles by Paul McLellan…