
Is Altera Leaving Intel for TSMC?

by Daniel Nenni on 01-24-2014 at 9:00 am

There is a rumor making the rounds that Altera will leave Intel and return to TSMC. Rumors are just rumors, but this one certainly has legs, and I will tell you why and what I would have done if I were Altera CEO John Daane. Altera is a great company, one that I have enjoyed working with over the years, but I really think they made a serious mistake at 14nm, absolutely. Altera moving to Intel was not necessarily the mistake; in my opinion, it was how they went about it.

The rumor started here:

“Altera’s recent move [contacting TSMC] is probably due to its worry of the recent Intel’s 14nm process delay causing delay in its new product will let Xilinx win”
China Economic Daily News 12/2/13

It became more real when Rick Whittington, Senior Vice President of Drexel Hamilton, released a downgrade on Intel stock (INTC) from buy to hold titled “A Business Model in Flux”. There are more than a dozen bullet points but this one hit home:

While Altera’s use of 14nm manufacturing late this year wasn’t to ramp until mid-late 2015, it has been a trophy win against other foundries

A trophy win indeed, the question is why did Altera allow itself to be an Intel trophy? After working with TSMC for 25 years and perfecting a design ecosystem and early access manufacturing partnership, it was like cutting off your legs before a marathon.

The EDA tools, IP, and methodology for FPGA design and manufacturing are not mainstream, to say the least. It is a unique application that requires a custom ecosystem, and ecosystems are not built in a day or even a year. Ecosystems develop over years of experience and partnerships with vendors. FPGAs are also used by foundries to ramp new process nodes, which is what TSMC has done with Altera for as long as I can remember. This early access not only gave Altera a head start on design, it also helped tune the TSMC manufacturing process for FPGAs. Will Intel allow this type of FPGA optimization partnership for their "Intel Inside" centric processes? That would be like a flea partnering with a dog, seriously.

What would I have done? Rather than be paraded around like a little girl in a beauty pageant, Altera should have been stealthy and designed for both Intel and TSMC FinFET processes. Seriously, what did Altera REALLY gain from all of the attention of moving to Intel? Remember, TSMC 16nm is in effect 20nm using FinFETs. How hard would it have been to move their 20nm product to TSMC 16nm while developing the required Intel design and IP ecosystem? Xilinx will tape out 16nm exactly one year after 20nm and exactly one year before Altera tapes out on Intel 14nm. Remember, Altera gained market share when they beat Xilinx to 40nm by a year or so.

Correct me if I’m wrong here but this seems to be a major ego fail for Altera. And if the rumor is true, which I hope it is for the sake of Altera, how is Intel going to spin Altera going back to TSMC for a quick FinFET fix?

More Articles by Daniel Nenni…..



Parasitic Debugging in Complex Design – How Easy?

by Pawan Fangaria on 01-23-2014 at 9:00 am

When we talk about parasitics, we are talking about post-layout design expanded in terms of electrical components such as resistances and capacitances. In a semiconductor design environment where multiple parts of a design from different sources are assembled into a highly complex, high-density SoC, imagine how difficult it would be to debug that design at the parasitic level. We definitely need smart tools that can analyse different parts of a design, at different levels of hierarchy, and at different levels of abstraction such as transistor, gate and RTL.

The good news is that we do have such tools available from Concept Engineering, which enable designers to do very fast design exploration, visualize the design at different levels, reduce complexity, and thus debug the design easily, precisely and in less time. I was delighted to go through a webinar highlighting parasitic debugging using StarVision and SpiceVision. The webinar included a demo as well, conducted very nicely by Lokesh Akkipeddi at EDA Direct. Lokesh demonstrated features with live menus that help locate the exact problem area through great navigation and cross-probing, simplify a portion of the design view to a desired level (e.g. modify symbols, move up to gate or down to transistor level, remove RC, etc.) to understand the problem, and review the Spice netlist and fix it at whatever level is appropriate.

The design can be visualized at various levels such as transistor, gate or RTL, and those can be mixed with each other as required. Parasitics for different wires can be viewed in different colors for easy correlation. Similarly, the source code can be viewed for any module or component of the design.

StarVision supports industry-leading Spice and post-layout interfaces (including those from EDA majors Synopsys, Cadence and Mentor); it can read these formats and also write out Spice netlists. Schematics can be exported to Cadence Virtuoso through SKILL.

During the demo, I could see a good level of navigation: moving through different levels of hierarchy connected through nets, viewing signal distribution, looking inside a module or at an individual pin, and a provision to hide unconnected pins to remove clutter, among many other features.

Cone extraction is a special feature which caught my attention. It can expose all inputs connected to a pin as well as all outputs from it, for closer inspection.
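The idea behind cone extraction can be sketched as a breadth-first traversal of the netlist graph. The sketch below is a hypothetical model, not StarVision's implementation; the `netlist` dictionary and `extract_cone` function are invented for illustration:

```python
from collections import deque

def extract_cone(netlist, start, direction="fanin", max_depth=None):
    """Collect the logic cone reachable from `start`.

    netlist maps each node to {"fanin": [...], "fanout": [...]};
    direction="fanin" walks toward drivers, "fanout" toward loads.
    """
    cone = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if max_depth is not None and depth >= max_depth:
            continue  # stop expanding past the requested depth
        for nxt in netlist[node][direction]:
            if nxt not in cone:  # visit each node only once
                cone.add(nxt)
                frontier.append((nxt, depth + 1))
    return cone

# Tiny invented netlist: a -> b -> d, c -> d
netlist = {
    "a": {"fanin": [], "fanout": ["b"]},
    "b": {"fanin": ["a"], "fanout": ["d"]},
    "c": {"fanin": [], "fanout": ["d"]},
    "d": {"fanin": ["b", "c"], "fanout": []},
}
```

Asking for the fan-in cone of `d` would collect all four nodes, while the fan-out cone of `a` would collect `a`, `b` and `d`; a depth limit keeps the view manageable in a large design.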


[Circuit with RC and without RC; Parallel transistors merged to recognize gates]

Similarly, there is an interesting netlist-reduction feature that filters the RC elements out of a circuit to view it in a simple form of transistors. Also, parallel transistors can be merged to make CMOS gates easy to recognize. Large resistances and capacitances can be identified and their values observed, in case they call for any modification in the circuit.
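Conceptually, merging parallel transistors is a grouping operation: devices of the same type that share the same gate, source and drain behave like one wider device. A minimal sketch, assuming an invented list-of-dicts netlist representation rather than the tool's actual data model:

```python
from collections import defaultdict

def merge_parallel(transistors):
    """Merge parallel devices: transistors of the same type sharing the
    same gate, source and drain act as one device with the summed width."""
    widths = defaultdict(float)
    for t in transistors:
        key = (t["type"], t["gate"], t["source"], t["drain"])
        widths[key] += t["w"]
    return [
        {"type": ty, "gate": g, "source": s, "drain": d, "w": w}
        for (ty, g, s, d), w in widths.items()
    ]

# Two parallel NMOS pull-downs plus one PMOS, an invented example
devs = [
    {"type": "nmos", "gate": "A", "source": "out", "drain": "gnd", "w": 1.0},
    {"type": "nmos", "gate": "A", "source": "out", "drain": "gnd", "w": 1.0},
    {"type": "pmos", "gate": "A", "source": "vdd", "drain": "out", "w": 2.0},
]
```

Here the two 1.0-wide NMOS devices collapse into a single 2.0-wide device, which makes the underlying gate structure much easier to spot.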

Then there is cross-probing up to the source level and the code can be highlighted in the same color as selected for the particular component.

Spice code can be written for any desired portion of the circuit and that can be used for external partial simulation for analysis and decision making.


An excellent extensibility feature is that APIs can be developed for customized functionality at any level (Spice, gate or RTL) by using Tcl scripts. Over 100 example APIs have been developed; some of those are shown in the table.

I could go on and on mentioning more features and still not do justice to them through these pictures. Designers would gain a much better sense of these excellent features by seeing the live presentation and demo in the webinar. Go for it!!

More Articles by Pawan Fangaria…..


Rekeying the IoT with eMTP

by Don Dingee on 01-22-2014 at 4:10 pm

For non-volatile storage in IoT devices, there is technology designed to be reprogrammed many times, and technology designed to be programmed once. The many-times mode is for application code, while the once mode is for keying and calibration parameters. We are about to enter the IoT rekeying zone, in between these two extremes.
Continue reading “Rekeying the IoT with eMTP”


Wearables the Big Hit at CES

by Paul McLellan on 01-22-2014 at 3:00 pm

There were a number of trends discernible at CES this year, one of the big ones being wearables, especially in the medical and fitness areas. I wear a Fitbit Flex and I have, but rarely wear, a Pebble Watch that links to my iPhone. I would say that at this point they are promising but are more gimmicks than truly useful. My Fitbit measures how much I walk, but it gets confused by cycling, and I have to tell it when I go to sleep and wake up, which I usually forget to do. It contains an ARM Cortex-M3, a Bluetooth interface, an accelerometer and a power control IC. The Pebble is getting better as they upgrade the software and it works with more apps. It contains a 120MHz ARM chip, a 3-axis accelerometer, and a Bluetooth 2.1 and low-energy 4.0 chip. So despite the very different applications, both the Fitbit Flex and the Pebble Watch have the same IC functionality and could almost certainly use a single-chip SoC incorporating everything. The displays are obviously different.

One thing that all these devices have is lots of IP that wasn’t particularly designed to work together, or even work at the same speed. One of the challenges is getting everything so that fast devices can communicate with slow devices without overloading them. Often different blocks are running at different voltages and in different clock domains, adding to the complexity of the interfaces. Level shifters are needed when voltages are different, and there is always plenty of opportunity for introducing subtle and not-so-subtle bugs at clock domain crossing boundaries.


Sonics has a solution that addresses all of these problems: SonicsExpress. SonicsExpress provides a high-bandwidth bridge between two clock domains, with optional voltage domain isolation. It supports AXI or OCP protocols and is capable of crossing clock boundaries, power boundaries, and large spans of physical distance. In addition, SonicsExpress is optimized for high-bandwidth, low-latency communication. It supports both single-threaded and multi-threaded configurations and can operate in either blocking or non-blocking modes.

Because clock domain boundaries also often occur at power domain crossings, tactical cells are instantiated on the signals that cross the asynchronous boundary, addressing voltage level shifting and clock domain safety. Combined with the features of the Sonics NoC, this allows IP blocks from different suppliers, operating with different data rates, different supply voltages, different clock frequencies and different protocols, to operate seamlessly together and build a working SoC.

More information on SonicsExpress is here.


Dan Niles: Strong Developed Markets, Weak Emerging

by Paul McLellan on 01-22-2014 at 2:15 pm

Yesterday was Dan Niles's economic review, which he presents quarterly for the GSA. As always, he starts from the big macroeconomic picture and ends up looking at the implications for semiconductor end-markets, and thus for semiconductors in general and the fabless ecosystem in particular.

The big picture is that the developed markets such as the US, Japan and some of Europe seem to be recovering reasonably well, if not dramatically so. But with bond yields essentially zero, hot money has moved into the developing markets, the so-called BRICs (Brazil, Russia, India, China), and they are struggling. Brazil, India and Russia are all experiencing slowing GDP, high inflation and a falling currency. China is much stronger, but not as strong as it was. It has to rebalance its economy more towards domestic consumption and slow the growth of the shadow-banking credit markets (now up to 15%). Hopefully they can avoid a Lehman-type event, but there are lots of bad loans and malinvestments around. As the Fed starts to "taper" and stop quantitative easing and, eventually, interest rates increase, some of the hot money will return to the US from the developing countries, which will make their problems worse. Japan also has an issue as interest rates rise, since its debt is over 200% of GDP. Of course, if the US did its accounting properly for future entitlements, we would be much worse off.


The big semiconductor end markets are computing and mobile. Computing is finally showing some uptick, although that is after two years of fairly major decline. In computing, flat is the new normal. The only bright spot is the buildout of datacenters. The traditional desktop market is largely gone and a lot of the notebook market is being superseded by tablets (iPad and the like) or just smartphones. More internet access was made from mobile devices last year than from traditional PCs.


Mobile will also slow, although it will still grow a lot, over 20%. But that is down from 40% and more in the last few years. The smartphone and tablet markets are starting to mature, at least at the high end. Future growth will mostly be at the low end, and a lot of that in China. There is now only one North American manufacturer in the top 10 (Apple at #2); Motorola, Palm and Blackberry used to be there. Samsung is #1. But Huawei, Lenovo, ZTE and probably Coolpad (Yulong), all from China, and LG from Korea are all there. Nokia (whose devices business was purchased by Microsoft) should just scrape into the top 10 for the year, Europe's only entrant.

After growing 4% last year, mostly due to a doubling in memory prices, and basically being flat for several years, semiconductors should grow 6% this year. There has not been overspending in capital investment, so this time there is no excess capacity.

TL;DR: developed markets in good shape; BRICs not so much. Semis should break out and have a good year after three years of being flat.


More articles by Paul McLellan…


ESD at TSMC: IP Providers Will Need to Use Mentor to Check

by Paul McLellan on 01-22-2014 at 1:24 pm

I met with Tom Quan of TSMC and Michael Buehler-Garcia of Mentor last week. Weirdly, Mentor's newish buildings are the old Avant! buildings where I worked for a few weeks after selling Compass Design Automation to them. An odd sort of déjà vu. Historically, TSMC has operated with EDA companies in a fairly structured way: TSMC decided what capabilities were needed for their next process node, specified them, and then the EDA companies developed the technology. It wasn't quite putting out an RFP; in general TSMC wasn't paying for the development, and the EDA companies would recover their costs and more from the mutual customers using the next node.

The problem with this approach is that it doesn't really allow for innovation that originates within the EDA companies. Mentor and TSMC have spent the last couple of years working very cooperatively on a flow for checking ESD (electro-static discharge), latchup and EOS (electrical overstress). All of these can permanently damage the chip. ESD can be a major problem during manufacture, manufacturing test, assembly and even in the field. EOS causes oxide breakdown (some one-time programmable memories use this deliberately to program the bit cells, but when it kills other kinds of transistors it is a big problem). Like most things, it is getting worse from node to node, especially at 20nm and 16nm. The gate oxide is getting thinner and so it is simply easier to damage. FinFETs are even more fragile.


Historically, TSMC has had layout design rules for these types of issues. But they required marker layers to be added to the cells to indicate which checks should be done in which areas. This causes two problems. Adding the marker layers is tedious and not really very productive work. But worse, if the marker layers are wrong then checks can be omitted, often without causing any DRC violation to give a hint that there is a problem. Another issue is that the design rules from 20nm on are sometimes voltage dependent, again something that was addressed historically with marker layers. Even then, not all rules could be checked. In fact, previously 35% of rules could not be checked and 65% required marker layers to check.

This is increasingly a problem. It is obviously not life-threatening if the application processor in your smartphone fails (although obviously more than annoying). But medical, automotive and aerospace have fast growing electronic content and they have much higher reliability requirements. If your ABS system or your heart pacemaker fails it is a lot more than annoying.

So Mentor and TSMC decided that they wanted a flow for checking that didn’t require marker layers and covered all the rules. It would obviously need to pull in not just layout data, but netlist and other electrical data (voltage dependent design rules obviously require knowing the voltages). The flow is intended for checking IP as part of the TSMC9000 IP quality program.

This is built on top of Mentor’s PERC (programmable electrical rule checker). They focused on 3 areas where these problems occur:

  • I/Os (ESD is mostly a problem in I/Os)
  • IP with multiple power domains
  • analog

Voltage dependent DRC checking is another area of cooperation. Many chips today have multiple voltages. In automotive and aerospace these may include high voltages and, as a general rule, widely separated voltages require widely separated layout on the chip to avoid problems. Again, the big gains in both efficiency and reliability come from avoiding marker layers.
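As a toy illustration of why voltage information matters for DRC, consider a spacing rule that grows with the voltage difference between two nets. The rule form, the constants and the net names here are all invented for illustration; they are not TSMC's actual design rules:

```python
def required_spacing(v_a, v_b, base=0.10, k=0.02):
    """Invented rule: minimum spacing in microns grows linearly with the
    voltage difference (in volts) between the two nets."""
    return base + k * abs(v_a - v_b)

def check_spacing(net_a, net_b, actual, voltages):
    """Return (passes, required) for one net pair."""
    required = required_spacing(voltages[net_a], voltages[net_b])
    return actual >= required, required

# Voltages come from the netlist/electrical data, not the layout alone
voltages = {"vdd_core": 0.9, "vdd_io": 3.3}
ok, need = check_spacing("vdd_core", "vdd_io", 0.12, voltages)
```

With these made-up numbers, 0.12 um between a 0.9 V net and a 3.3 V net fails the 0.148 um requirement; a purely geometric checker without the voltage data could never flag this, which is exactly the gap marker layers used to paper over.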


The current status is that Calibre PERC is available for full-chip checking at 28nm and 20nm, with 16nm in development. As part of the IP 9000 program it is available for IP verification for 20SoC, 16FF and 28nm. Use of Calibre PERC will become a requirement (currently it is just a recommendation) at 20nm, 16nm and below.


More articles by Paul McLellan…


Have you Tried ALDEC?

by Luke Miller on 01-22-2014 at 1:00 pm

I must admit, I was too comfortable. Let me explain: I'm a ModelSim guy from Mentor Graphics. I did not really think or care much about the other RTL simulator options. How could someone build a better tool with respect to simulation? Let me introduce you to Aldec. Aldec was founded in 1984 by Dr. Stanley M. Hyduke. 30 years later they are still in business and growing strong. I think I heard of them once in my other life but had no time nor money to perform an evaluation of their software. Well, I get a second chance. Before I get into this I must note that my opinion has not changed: HDL coding will become less and less common, BUT on every FPGA design HDL coding will always be needed to mesh together all the sub-components. Aldec seems to complement this very well, including VHDL 2008 support. So I still recommend using Xilinx's Vivado HLS to design much of your DSP and other cores and then using Aldec to tie the wrapper and the design together. Note that these tools are all pre-synthesis tools.

Aldec today has over 30,000 active worldwide licenses. Now here is a bit of writing genius: that is a lot of licenses. I know that if you are like me, one of the major concerns when adopting any new tool is the health and size of the company. That is, what if I switch over to the Aldec tool suite and they go belly up? That is a realistic concern; remember the PA Semi days! I do! Let me be the first to assert that Aldec is a very healthy company, with a large user base, active software development and world-class help. That means you can pick up a phone and talk with a real person based in the United States. That is all important. They can also tailor some of the tools and license options as well. For example, say your company needs 5 licenses, but during design sign-off you need 50 licenses. That's easy: they will pro-rate your licenses for the time you need the extra licenses without any extra penalty. Aldec licenses are worldwide and not node-locked to a particular geographic location. That equates to more cost savings.

Before I go into any deep dive, there are a few tools I would like to review over the course of this year. I highly encourage you to go to the Aldec website and study what is available to accelerate your FPGA design cycle.

Active-HDL is a Windows® based, integrated FPGA Design Creation and Simulation solution for team-based environments. Active-HDL’s Integrated Design Environment (IDE) includes a full HDL and graphical design tool suite and RTL/gate-level mixed-language simulator for rapid deployment and verification of FPGA designs.

The design flow manager invokes 120+ EDA and FPGA tools during design entry, simulation, synthesis and implementation flows, and allows teams to remain within one common platform during the entire FPGA development process. Active-HDL supports industry-leading FPGA devices including Xilinx and even Xilinx Zynq.

The ALINT design analysis tool decreases verification time dramatically by identifying critical issues early in the design stage. Smart design rule checking (linting) points out coding style, functional, and structural problems that are extremely difficult to debug in simulators, and prevents those issues from spreading into the downstream stages of the design flow. The tool features a highly tunable and intuitive framework that seamlessly integrates into existing environments and helps automate any existing design guidelines. The framework delivers configurable sets of rules, an efficient Phase-Based Linting (PBL) methodology, and feature-rich result-analysis tools that significantly improve user productivity and the overall efficiency of the design analysis and refinement process.

Riviera-PRO™ addresses the verification needs of engineers crafting tomorrow's cutting-edge FPGA and SoC devices. Riviera-PRO enables the ultimate test bench productivity, re-usability, and automation by combining a high-performance simulation engine, advanced debugging capabilities at different levels of abstraction, and support for the latest language and verification library standards.

My next blog will walk my readers through a real design using the ALDEC tools! Stay tuned!



Just Released! Fabless: The Transformation of the Semiconductor Industry

by Daniel Nenni on 01-22-2014 at 12:00 pm

The book “Fabless: The Transformation of the Semiconductor Industry” is now available in the Kindle (mobi) and iBooks (ePub) formats. We are really looking forward to your feedback before we go to print in March. This was truly a Tom Sawyer experience for me. As the story goes Tom made whitewashing a fence seem like fun so his friends would do the work for him.

Paul McLellan was my first friend. Paul and I met during the EDA’s Next Top Blogger Contest at DAC in 2009 and we lost! Paul has since proven them wrong as he is without a doubt EDA’s Top Blogger publishing 279 blogs on SemiWiki in 2013. What I discovered in writing this book is that as a blogger I can only write 500 words at a time so Paul had to fill in the rest. His depth of semiconductor knowledge and willingness to trust me is unparalleled.

My next friends were companies that work with SemiWiki. Since this is a book chronicling the rise of the fabless semiconductor ecosystem, having the companies that actually participated in that rise contribute sub-chapters made complete sense. This was much more of a challenge than I imagined. Fortunately, Mentor is a very game company and they contributed first; then came TSMC, Xilinx, Cadence, Synopsys, ARM, Imagination Technologies, VLSI Technology, Chips and Technologies, Global Unichip, GLOBALFOUNDRIES, and eSilicon.

Book Preview

The real challenge in writing a book like this is how to end it. "Chapter 8: What's Next For The Semiconductor Industry?" was simply brilliant, if I do say so myself. Getting 300-word passages from CEOs and other executives of the top companies in the fabless semiconductor ecosystem was not easy! But again, Mentor's CEO Wally Rhines went first. After that it was an avalanche:

Cliff Hou, Vice President Research and Development of TSMC (book Foreword)
Moshe Gavrielov, President and CEO of Xilinx, Inc.
Simon Segars, CEO of ARM
Aart de Geus, Chairman and Co-CEO of Synopsys, Inc.
Lip-Bu Tan, President and CEO of Cadence Design Systems
Walden Rhines, CEO and Chairman of Mentor Graphics
Subi Kengeri, Vice President Advanced Technology Architecture of GLOBALFOUNDRIES
Ajoy Bose, Chairman, President, and CEO of Atrenta Inc.
Jack Harding, President and CEO of eSilicon
Kathryn Kranen, CEO of Jasper Design Automation
Hossein Yassaie, CEO of Imagination Technologies
Sanjiv Kaul, CEO of Calypto Design Systems
Srinath Anantharaman, Founder and CEO of Cliosoft
Charlie Janac, President and CEO of Arteris
David Halliday, CEO of Silvaco
John Tanner, Founder and CEO of Tanner EDA
Xerxes Wania, President and CEO of Sidense
Ghislain Kaiser, Co-founder and CEO of Docea Power
Jodi Shelton, Co-founder and President of the Global Semiconductor Alliance
Gideon Wertheizer, CEO of CEVA, Inc.
Grant Pierce, CEO of Sonics, Inc.
Trent McConaghy, Co-founder and CTO of Solido Design
Mike Jamiolkowski, CEO of Coventor, Inc.
Joe Sawicki, Vice President and General Manager at Mentor Graphics
Martin Lund, Senior VP of the IP Group at Cadence Design Systems
Rich Goldman, Vice President of Corporate Marketing and Strategic Alliances at Synopsys
Raymond Leung, VP of SRAM at Synopsys
Richard Goering, Veteran EDA editor and author of the Cadence Industry Insights Blog

There are also passages from the SemiWiki bloggers, myself included, which I think you will find interesting. The book is a very good read and it's for the greater good of the fabless semiconductor ecosystem. I sincerely hope you enjoy it!


A Power Optimization Flow at the RTL Design Stage

by Daniel Payne on 01-21-2014 at 10:20 pm

SoC designers can code RTL, run logic synthesis, perform place and route, extract the interconnect, then simulate to measure power values. Though this approach is very accurate, it’s also very late in the implementation flow to start thinking about how to actually optimize a design for the lowest power while meeting all of the other design requirements. Ideally, you would want a flow that starts with your RTL code at the design phase and then provides a method to estimate power and even provide feedback on how to best reduce power long before physical implementation even begins.

Here is a flow-chart for such a methodology where power estimation happens early during the design phase, instead of too late:

The first box depicts how at the architectural stage a decision is made about using voltage domains, power domains, and clock gating techniques. Any IP re-used from a previous design can be quickly tallied in this new design for an early power estimate.

At the second box you should have enough RTL together that power numbers can be added up based on re-use or spreadsheet estimates. There's even commercial EDA software from Atrenta called SpyGlass Power that can quickly estimate power numbers at the RTL design phase. Either the manual or the automated approach will benefit your project early in the process by showing whether the power requirements have been met.

Most designs will not be within the power budget at this early design stage, so it’s time to move on to the third step: Power Reduction methods. The SpyGlass Power tool will help automate this power reduction process by changing an RTL design:

  • Applies gate enables, then provides an activity-based power calculation
  • Inserts clock-gates
  • Checks existing clock enable
  • Identifies new clock enables

You are likely also adding level shifters between voltage domains and inserting isolation logic. These changes and SpyGlass Power changes need to be verified against the original design intent, which is the fourth step in the flow-chart.

Implementation is the fifth step and it’s where logic synthesis transforms RTL into technology-specific cells for use by a place and route tool, driven by timing, DFT and placement constraints.

Once the IC layout is complete a final post-layout power verification is required to ensure that no surprises or errors have crept in.

Power Saving Overview

SoC power can be divided into dynamic and static categories. Examples of controlling dynamic power are:

  • Using voltage domains
  • Adding clock gating techniques

And examples for managing static power include:

  • Multiple power domains where an entire domain can be put to sleep
  • Using multi-voltage threshold transistors
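The two categories follow the textbook first-order equations P_dyn = α·C·V²·f and P_stat = I_leak·V. A minimal sketch with hypothetical block parameters (not taken from any real design) shows why lowering voltage or activity, for example through clock gating, pays off so quickly; voltage enters quadratically:

```python
def dynamic_power(c_eff, vdd, freq, activity):
    """First-order dynamic power: P = alpha * C * V^2 * f."""
    return activity * c_eff * vdd ** 2 * freq

def static_power(i_leak, vdd):
    """First-order leakage power: P = I_leak * V."""
    return i_leak * vdd

# Hypothetical block: 50 pF effective switched capacitance, 0.9 V supply,
# 500 MHz clock, 20% activity factor, 2 mA leakage current.
p_dyn = dynamic_power(50e-12, 0.9, 500e6, 0.20)          # ~4.05 mW
p_gated = dynamic_power(50e-12, 0.9, 500e6, 0.20 * 0.5)  # gating halves activity
p_stat = static_power(2e-3, 0.9)                         # ~1.8 mW
```

In this toy example, clock gating that halves the effective activity factor cuts dynamic power in half, while leakage is untouched; that is exactly why static power needs the separate techniques listed above, such as power-domain shutdown and multi-Vt transistors.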

Power Estimation in SpyGlass Power

This tool can calculate power per cycle or on average, split into leakage, internal and switching components, then display it graphically or numerically:

Power Reduction

Pawan Fangaria blogged recently about many of the clock gating techniques used for power reduction in more detail. Taking automation one step further SpyGlass Power has a feature called AutoFix that finds new clock enable opportunities then goes ahead and fixes the RTL code to take advantage of it.

To put your mind at ease about the integrity of any changes to RTL you can run a sequential equivalency checking (SEC) tool on the original RTL versus AutoFixed version.

Summary

Estimating and reducing power are popular topics for SoC designers today. Fortunately, we have EDA tools in place from vendors like Atrenta that help automate the task in a timely fashion by letting you work with early RTL to see if your power requirements are being met.

Further Reading

A 12 page white paper from Guillaume Boillet and Kiran Vittal is available at Atrenta, and you’ll need to complete a simple registration process before downloading.



TSMC Responds to Intel’s 14nm Density Claim!

by Daniel Nenni on 01-21-2014 at 9:30 pm

TSMC responded to Intel’s 14nm density advantage claim in the most recent conference call. It is something I have been following closely and have written about extensively both publicly and privately. Please remember that the fabless semiconductor ecosystem is all about crowd sourcing and it is very hard to fool a crowd of semiconductor professionals, absolutely. To see Intel’s infamous density presentation click HERE.


First let’s take a look at what TSMC had to say:

Morris Chang – Chairman: So I now would ask Mark Liu to speak to TSMC's competitiveness versus Intel and Samsung:

Let me comment on Intel’s recent graph shown in their investor meetings, showing on the screen. I — we usually do not comment on other companies’ technology, but this — because this has been talking about TSMC technology and, as Chairman said, has been misleading, to me, it’s erroneous based on outdated data. So I’d like to make the following rebuttal:

On this new graph, the vertical axis is the chip area on a large scale. Basically, this is compared to chip area reduction. On the horizontal axis, it shows the 4 different technologies: 32, 28; 22, 20; 14, 16-FinFET; and 10-nanometer. 32 is Intel technology, and 28 is TSMC technology so is the following 3 nodes, the smaller number, 20, around — 14-FinFET is Intel, 16-FinFET is TSMC. On the view graph shown at Intel investor meeting, it is with the gray plot, showing here. The gray plot showed the 32- and the 20-nanometer TSMC is ahead of the area scaling and — but — however, with 16, the data — gray data shows a little bit uptick. And following the same slope, go down to the 10-nanometer, was the correct data we show on the red line. That’s our current TSMC data. The 16, we have in volume production on 20-nanometer. As C.C. just mentioned, this is the highest density technology in production today.

We take the approach of significantly using the FinFET transistor to improve the transistor performance on top of the similar back-end technology of our 20-nanometer. Therefore, we leverage the volume experience in the volume production this year to be able to immediately go down to the 16 volume production next year, within 1 year, and this transistor performance and innovative layout methodology can improve the chip size by about 15%. This is because the driving of the transistor is much stronger so that you don’t need such a big area to deliver the same driving circuitry.

And for the 10-nanometer, we haven’t announced it, but we did communicate with many of our customers that, that will be the aggressive scaling of technology we’re doing. And so in the summary, our 10 FinFET technology will be qualified by the end of 2015. 10 FinFET transistor will be our third-generation FinFET transistor. This technology will come with industry’s leading performance and density. So I want to leave this slot by 16-FinFET scaling is much better than Intel’s set but still a little bit behind. However, the real competition is between our customers’ product and Intel’s product or Samsung’s product.

Morris Chang – Chairman: Thank you, Mark. In summary, I want to say the following: First, in 2014, we expect double-digit revenue growth and we expect to maintain or slightly improve our structural profitability. As a result, we expect our profit growth to be close to our revenue growth. In 2014, the market segment that most strongly fuels our growth is the smartphone and tablet, mobile segment. The technologies that fuel our growth are the 20-SoC and the 28 high-K metal gate, in both of which we have strong market share. In 2015, our strong technology growth will be 16-FinFET. We believe our Grand Alliance will outcompete both Intel and Samsung, outcompete.

If there is anyone out there that doubts these numbers please post in the comment section or send me a private email. I will follow up with a rebuttal blog based on feedback next week.

More Articles by Daniel Nenni…..