HDMI, DisplayPort, MHL IPs + Engineering Team = Good Move
by Eric Esteve on 02-17-2014 at 10:18 am

This news is certainly not as amazing as the acquisition of MIPS by Imagination, or of Arteris by Qualcomm… but it shows that Cadence is building a complete interface IP portfolio, brick after brick. The result is a wall rising across Synopsys’ road to monopoly in the interface IP market. In the HDMI and DisplayPort IP segment, the two big names are Synopsys and Silicon Image; Transwitch comes next, quite far behind the two leaders. Let’s hope, for Cadence’s sake, that this lagging position was due to a lack of investment rather than to the quality of the engineering team. If so, Cadence’s very strong motivation and deep pockets should allow the company to compete head to head with Synopsys in the near future, in an IP segment where Cadence so far had no product to offer… Thus, we think this asset acquisition will generate new IP sales for Cadence. To forecast the volume of these sales, it is wise to look at the starting point: the latest available business figures from Transwitch.

In fact, Transwitch has been under Chapter 11 since November 21, 2013, and the company website has been hacked. But if you keep searching, you can find Transwitch’s latest quarterly and annual reports. I read the complete 2012 annual report to discover that you have to look under the “Customer Premise Equipment” product line to find where the IP and services revenue is located:

It seems that in 2013 Transwitch finally decided to call a spade a spade and renamed this product line “IP and service revenue,” as we can see in the picture below, extracted from the last published quarterly report.

Thus, IP and services generated $3,676K of revenue in 2012, and $2,868K during the first half of 2013. As we don’t know the split between IP and services (nor what type of product the service revenue relates to), we have to dig into another source, still from the 2012 annual report: the “Consolidated Statement of Operations.” There we find the “Cost of service revenue” line ($1,274K in 2012). This helps discriminate between “service” and “IP”: IP is developed by the R&D team, so the cost of IP development can be classified as R&D cost, while service-related costs are classified separately. We then have to make an assumption: say design services generate a 50% gross profit margin (GPM). That puts service revenue at around $2.5 million in 2012, leaving at most a little over $1 million for IP revenue that year. This is consistent with the $3,676K booked under Customer Premise Equipment, i.e. HDMI, DisplayPort, MHL, HDPlay and Ethernet IP sales, in 2012.
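The arithmetic behind this estimate is worth spelling out; the 50% design-service gross margin is, again, an assumption:

```python
# Back-of-the-envelope split of Transwitch's 2012 "IP and service"
# revenue.  Revenue and cost figures are from the 2012 annual report;
# the 50% design-service gross margin is an assumption.
total_2012 = 3676          # $K, "Customer Premise Equipment" line
cost_of_service = 1274     # $K, "Cost of service revenue" line
assumed_gpm = 0.50         # assumed gross margin on design services

# At 50% GPM, service revenue is twice the cost of service.
service_revenue = cost_of_service / (1.0 - assumed_gpm)   # 2548 ($K)
ip_revenue_2012 = total_2012 - service_revenue            # 1128 ($K)
```

On these assumptions, Transwitch’s 2012 IP revenue was on the order of $1.1 million; every point of service margin added or removed shifts the split accordingly.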

If we compare the revenue generated by these various IP with Synopsys’ or Silicon Image’s HDMI revenue, Transwitch is clearly behind, with HDMI IP revenue four times lower than Synopsys’ (and six or seven times lower than Silicon Image’s). Nevertheless, the engineering team will now be part of a far healthier company, able to make the right investments to target the latest technology nodes, pay for shuttles, and develop demonstration boards. In short, to invest upfront to enhance product quality, and invest again to promote the IP so that license sales follow. It is no surprise that even the best product doesn’t really sell until you develop the right marketing plan, reposition the product if needed, and can rely on a strong sales network to reach customers worldwide.

If you take a look at the new Cadence IP portfolio, you can see an IP offering as wide as its direct competitor’s. How long could it take for Cadence to generate the same level of interface IP revenue as Synopsys? Some time… but maybe not that long.

Anyway: HDMI, DisplayPort, MHL IPs + Engineering Team = Good Move

Eric Esteve from IPNEST

More Articles by Eric Esteve…..



Dr. Cliff Hou, TSMC VP of R&D, Keynote
by Daniel Nenni on 02-16-2014 at 9:00 am

This will be my 30th Design Automation Conference. I know this because my first DAC was the same year I got married, and forgetting how many years you have been married can cost you half your stuff. I have known Cliff Hou for half of that time and he has definitely proven to be one of the most humble and honorable men I have worked with.

Cliff started at TSMC in the PDK group and produced the first TSMC Reference Flow, which really was the starting point for the fabless semiconductor ecosystem (Grand Alliance) that we have today. Cliff then took over the TSMC IP group before becoming the Senior Director of Design and Platform, which included the PDK, IP, and other design enablement groups inside TSMC. In 2011 Cliff was appointed TSMC’s Vice President of Research and Development. Clearly Dr. Cliff Hou is a rising star in the semiconductor industry and it has been an honor to work with him.

Cliff was our choice to write the foreword to the book, “Fabless: The Transformation of the Semiconductor Industry” as he and TSMC led this transformation. The foreword alone is worth the price of the book and I can’t wait to get Cliff to sign a copy for me at #51DAC where he will be keynoting:

Industry Opportunities in the Sub-10nm Era

The human thirst for connectivity and experience, as enabled by the electronics industry and the ongoing march of Moore’s Law, has already brought, and will bring even more, profound changes in the way we interact with the world and each other. This profound enhancement of the human experience enabled by constant mobile connectivity, the Cloud, and sensors, brought to an ever widening worldwide audience, will bring untold opportunity to all of us here at DAC.

All of these changes demand continued chip and wafer-based scaling to deliver the power and performance necessary to enable wondrous, new applications. In less than two years we’ll be in production at 10nm, and shortly after 7nm, all made possible by a “Grand Alliance” of design ecosystem, equipment and material suppliers. At the same time, a new paradigm is being realized: heterogeneous silicon integration combining chips from multiple process technologies with 3D packaging to deliver compelling economics for a “System in a Si Superchip.”

New design techniques will be required to make those applications reality, including ways 10nm and 7nm will support those requirements, new manufacturing techniques, and the benefits they will provide. The introduction of 10nm and 7nm processes will alter today’s ecosystem while opening greater EDA and IP opportunities, and will present new system and chip design challenges such as near-threshold design, thermal and battery limitations, and 3D IC considerations.

IC designers, ecosystem providers and foundries have been committed to open innovation and mutually beneficial teamwork for many process technology generations, but success in the sub-10nm era will require unprecedented levels of collaboration and cooperation between all of us here at DAC. Our teamwork will drive industry progress, and the more we “collaborate to innovate,” the more successful our customers and all of us will become.

More Articles by Daniel Nenni…..



Speeding Up AMS Verification by Modeling with Real Numbers
by Daniel Payne on 02-15-2014 at 7:00 pm

My first introduction to modeling AMS behavior with a language was back in the 1980s at Silicon Compilers, using the Lsim simulator. Around the same time the VHDL and Verilog languages emerged to handle the modeling of both digital and some analog behaviors. The big reason to model analog behavior with a language is improved simulation speed, which boosts productivity in both the design and verification phases; the challenge is the trade-off in accuracy versus a reference like SPICE circuit simulation.


Pulse response of an Operational Amplifier: SPICE in blue, Real Number model in orange
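To make the trade-off concrete, here is a minimal sketch (in Python rather than an HDL; gain, bandwidth, and step values are illustrative) of what a real-number model computes: a single-pole amplifier reduced to one real-valued update per time step, instead of the matrix solve SPICE performs each step:

```python
# Sketch of real-number modeling: a single-pole closed-loop amplifier
# reduced to one real-valued update per time step, the kind of
# computation an HDL "real" model runs, versus SPICE's matrix solve.
# Gain, bandwidth, and step values are illustrative, not from any part.
import math

GAIN = 10.0    # closed-loop gain (V/V), assumed
F3DB = 1.0e6   # -3 dB bandwidth in Hz, assumed
DT = 1.0e-8    # simulation time step in seconds

def pulse_response(vin_pulse, n_steps):
    """First-order response to a pulse: the output chases GAIN*vin
    with time constant tau = 1/(2*pi*F3DB)."""
    tau = 1.0 / (2.0 * math.pi * F3DB)
    alpha = DT / (tau + DT)        # discrete-time smoothing factor
    vout, samples = 0.0, []
    for i in range(n_steps):
        vin = vin_pulse if i < n_steps // 2 else 0.0  # pulse, then release
        vout += alpha * (GAIN * vin - vout)           # the real-number update
        samples.append(vout)
    return samples

resp = pulse_response(0.1, 200)   # 0.1 V input pulse
# The output settles toward GAIN * 0.1 = 1.0 V, then decays after the pulse.
```

Running millions of such updates is trivially cheap compared to repeated SPICE solves, which is where the speed-up comes from; the single-pole simplification is equally where the accuracy gap versus SPICE comes from.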


Power Control Moving into Hardware
by Paul McLellan on 02-14-2014 at 6:30 pm

Sonics has been building networks-on-chip (NoCs) for a long time and has amassed a rich patent portfolio, so being granted a new patent isn’t usually deemed press-release-worthy. However, its latest patent on power management is pretty significant: patent 8,601,288, titled “Intelligent Power Controller”.

Historically SoCs have had a manageable number of blocks and a limited number of power domains that could be powered down. The responsibility for actually powering them down and bringing them back up again fell to the embedded software. Even then, chip designers would complain that the software developers didn’t power down blocks aggressively enough, and the embedded programmers would complain that they didn’t understand all the implications of setting bits in the power registers: how much power would be saved, how long before needing a block they must power it up, and so on. Just as with the move to multicore processors, the hardware people pushed their problem up into the software world and assumed that the software people would be able to solve it without much difficulty.

Now SoCs have a couple of hundred blocks and the software architecture has also become very complex. Think of iOS or Android, with a portfolio of apps loaded onto the hardware after it shipped, compared to, say, the software in a 5-year-old digital camera with a lightweight real-time operating system (RTOS) and a single application. How can one part of the software even know whether it can power down the LTE modem, since other parts of the software may be using it to access the internet? When a phone was either making a call or not making a call, life was a lot simpler.

NoCs offer an opportunity to move the responsibility for powering blocks up and down into the hardware. The NoC knows two useful things: whether a block is receiving or transmitting anything, and whether another block is trying to send it something. Blocks can automatically be powered down when they have been idle for too long, and when another block tries to communicate with a powered-down block, the NoC can buffer the message, power up the block, and then deliver it. Of course, this needs some input from the architect. If a block takes too long to power up, this automatic approach may not work. And some blocks might need to stay powered up whether anyone is communicating with them or not, if they run some continuous monitoring process, for example.
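The buffer, wake, deliver behavior described above can be caricatured in a few lines; this is a sketch of the general idea with made-up names and timeouts, not of Sonics’ patented implementation:

```python
# Toy sketch of NoC-driven power control: gate a block off after an
# idle timeout; buffer traffic aimed at a gated block while it wakes.
# The names and the timeout are invented for illustration, not taken
# from Sonics' patent.
IDLE_TIMEOUT = 5   # idle cycles before power-down (an architect's choice)

class BlockPowerGate:
    def __init__(self):
        self.powered = True
        self.idle_cycles = 0
        self.buffer = []   # messages held while the target block wakes

    def tick(self, incoming=None):
        """Advance one cycle; return the messages delivered this cycle."""
        if incoming is not None and not self.powered:
            self.buffer.append(incoming)  # hold the message, start wake-up
            self.powered = True
            self.idle_cycles = 0
            return []
        delivered = list(self.buffer)     # drain anything held during wake-up
        self.buffer = []
        if incoming is not None:
            delivered.append(incoming)
        if delivered:
            self.idle_cycles = 0
        else:
            self.idle_cycles += 1
            if self.powered and self.idle_cycles >= IDLE_TIMEOUT:
                self.powered = False      # idle too long: gate the block
        return delivered

gate = BlockPowerGate()
for _ in range(5):
    gate.tick()                 # five idle cycles: the block powers down
gate.tick(incoming="read_req")  # arrives while gated: buffered, wakes block
```

A real controller would also model wake-up latency (many cycles, not one), which is exactly the architect’s input mentioned above: if wake-up takes too long, automatic gating may not be worth it.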

All of this is complementary to other power saving approaches such as clock-gating, multi-Vt cells and so on. Power is too important not to address it at every level from low-power processes up to choice of algorithms that are power-sensitive. But one of the biggest possible power savers is to do what your parents told you: turn the lights off when you are not using them.

Sonics press release here. Full patent here.


Happy Birthday Xilinx
by Luke Miller on 02-14-2014 at 4:00 pm

I have never done this before: wished a company happy birthday. So here goes: Happy Birthday Xilinx! How does it feel to be 30? Looking good, eh? Signing up for AARP? My family and I just sang and had cake and ice cream. They did look at me like I was nuts when I set a place at the table for a Xilinx FPGA. In all seriousness, over the years Xilinx has grown from your grandma’s plain old PLD to the only world leader in ‘System on a Chip’ All Programmable FPGAs (you know what I’m trying to say). You are not going to find a better SoC FPGA, and the gap widens as Xilinx pulls away from Altera, owns 70% of the 28nm market, and looks set to dominate the next nodes and beyond.

Xilinx’s transformation did not happen by chance or overnight. A company is made up of people, and knowing many of the people at Xilinx, ‘excellence’ describes them. The people believe in the product and their leadership, and truly have a love for the customer. They are also among the brightest and best in the world. When Moshe became CEO in 2008 he brought a necessary mindset with him: an EDA mind. FPGAs are not all about silicon. You can have the best FPGAs in the world, but it is the tools that breathe life into them. What has changed since 2008? Pretty much everything. No, the baby was not thrown out with the bath water, but I now liken Xilinx to the old Ford Motor Company: raw materials came in, and a car came out every 24 seconds. Xilinx not only has a software team, but the world’s best. Same for layout, design, DSP, analog, power, reliability, IP, HLS, anti-tamper, services, interface design… the list is nearly endless. What I mean is that Xilinx could spin off an EDA company if it wanted to, because its team is that strong. The only thing Xilinx does not do is fab the FPGAs, which is the correct business model.

In 2011, Moshe and team purchased AutoESL and created Vivado HLS, the world’s fastest way to program an FPGA in open C/C++ or SystemC. Then Xilinx committed vast amounts of resources to change the way place and route is done, from annealing to an analytic solver, not a trivial task. So, dear reader, you may think this is fluffy or biased, but the proof is in the pudding. Compare the errata for Altera’s 28nm parts to Xilinx’s 28nm parts and it’s not even a race. In my humble opinion, errata is a barometer of how well a team is executing and understanding the internal and external environments. The Xilinx 28nm node is nearly errata-free (stop and just think about that; that is amazing), with a few stragglers for the ARM core beyond Xilinx’s control. They’re not perfect, but they believe they’re the best, and say so with humility. Xilinx’s claims for SoC FPGAs and the like are factual and not manipulated. They say what they mean and strive for accuracy and transparency.

Xilinx is not just an FPGA company; it is the leader in 3D ICs, already on its second generation of SSIT. Xilinx was first to 28nm, first to a Zynq (ARM) SoC, first to an ASIC-class architecture, first to 20nm, first to 4 million logic cells (think about that), first to ASIC/SoC-class tools. Please, dear reader, do not read this blog as tooting one’s own horn, but how should one respond to Altera’s claim of ‘Industry-leading FPGA devices’? I say look to the errata and look to the market share. Or the claim of ‘the fastest compile times’? The plot below shows the reality of compile times.

Xilinx’s UltraFast methodology is the fastest FPGA router on the planet. My question to my readers is: why are you still using Altera? Maybe you had a bad experience some time ago, or someone rubbed you the wrong way; maybe it’s time for a change? All I can say is that Xilinx is not the same as it was years ago, and its reputation and leadership prove it. See if these things be so, and explore the world of Xilinx… how can they help you?

More articles by Luke Miller…



ISO 26262 driving away from mobile SoCs
by Don Dingee on 02-13-2014 at 10:00 pm

Connected cars may be starting to resemble overgrown phones in many ways, but critical differences are now leading processor teams away from the ubiquitous mobile SoC architecture, in turn causing designers to reevaluate interconnect strategies.

The modern car has evolved into a microcontroller jungle, with 100 or more devices found in high-end automobiles today. Many of these MCUs have been placed at points of control, next to actuators or sensors they oversee, and interconnected throughout the vehicle for remote control and status. This has made subsystem design and integration far easier, but has complicated the overall reliability picture of the system with many more parts and data paths subject to failure.

image via Daimler, appearing in “This Car Runs on Code”, IEEE Spectrum

Many of these MCU-powered subsystems in a car are less-than-critical. Failures in some areas may create inconvenience or discomfort compared to optimum conditions for the driver or passengers, but do not compromise the functional safety or primary operational capability of the vehicle. Some failures result in degraded operation, requiring more urgent service. For the most part, the automotive industry has gone to great lengths to prioritize safety and catastrophic failures, and has hardened electronics in critical systems to rather high reliability levels.

As the processing capability and software content of a car continue to increase, maintaining that level of critical reliability is getting more challenging. The temptation is to coalesce more functions into a centralized processor and harden it further. This may be much more feasible than hardening 100 individual MCUs over a distributed interconnect, but it immediately exposes issues that the average mobile SoC is ill-prepared to deal with.

Mobile SoCs designed to excel inside a smartphone are architected for performance and power consumption, but generally not for reliability from a system standpoint – and that goes way beyond simply dealing with harsh environmental conditions. Cores common in mobile SoCs are not designed to run redundantly, or to provide hardware lock-step or voting schemes. Memory lacks error-correcting code (ECC) able to fix single-bit and detect double-bit errors, as do many data paths to and from peripheral interfaces using conventional interconnect (MIPI has some optional ECC capability). Software can only do so much for reliability when the underlying hardware architecture doesn’t support even the basics of data integrity.
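The single-bit-correct, double-bit-detect ECC mentioned above is easy to sketch at toy scale: a Hamming(7,4) code plus an overall parity bit, the textbook SECDED construction (not modeled on any particular automotive memory controller):

```python
# Toy SECDED: Hamming(7,4) plus an overall parity bit.  Corrects any
# single-bit error and detects (without miscorrecting) any double-bit
# error in an 8-bit codeword carrying 4 data bits.  Textbook scheme,
# shown for illustration only.

def encode(d):                       # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                # parity over positions 4,5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]
    overall = 0
    for b in word:
        overall ^= b                 # extra bit enables double-error detection
    return word + [overall]

def decode(w):
    """Return (data, status): status is 'ok', 'corrected', or 'double'."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of a single error
    overall = 0
    for b in w:
        overall ^= b                 # 0 iff an even number of bits flipped
    if syndrome and overall:         # one bit flipped: correct it
        w[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome and not overall:   # two bits flipped: detect, don't fix
        return None, "double"
    elif not syndrome and overall:   # the overall parity bit itself flipped
        status = "corrected"
    else:
        status = "ok"
    return [w[2], w[4], w[5], w[6]], status
```

Real SECDED memories use wider codes (e.g. 8 check bits over 64 data bits), but the correct-one/detect-two behavior is exactly this.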

This is pretty much the same experience the avionics and industrial control industries went through with microprocessors and digital control systems in safety critical applications, and the automotive industry is responding in a similar way – with standards such as ISO 26262, AUTOSAR, and MISRA C. With much of the focus rightly on software, the question reverts to how to get more reliable hardware.

Fortunately, the fabless transformation makes designing SoCs for mid- to high-volume applications easier than ever. The first step is a different breed of processor core. Eric Esteve introduced us to the Synopsys ARC EM SEP (Safety Enhancement Package) core, and the ARM Cortex-R core also brings features targeting ISO 26262. For example, TI is leveraging the Cortex-R in their Hercules safety-critical MCU family.

Once the core is rigged for safety, the second step is redefining chip-level interconnect beyond a shared bus structure. As Kurt Shuler of Arteris puts it: “For automotive applications, we are being asked to replicate interconnect for resiliency without just doubling circuitry.” This is where network-on-chip technology really excels, with the ability to map initiators and targets at low latency without an inflexible crossbar structure that becomes unscalable for larger designs. NoCs can also provide end-to-end ECC, crucial for data integrity across a system, and can help shape traffic to meet QoS criteria.


With the SoC design itself more flexible and reliable, the third step is improving system-wide interconnect. When connections were fairly slow, LIN and CAN served well, but for faster control data rates several solutions have emerged with varying success so far: notable entrants include CAN FD (an 8x data rate expansion), FlexRay (on life support by some accounts, but still out there), MOST (high end multimedia), and one-pair Ethernet (relatively new to the game). Bluetooth, Wi-Fi, and LTE are also in the mix, along with ISM-band wireless for key fobs.

Cars are likely to need a gateway, one capable of ensuring message delivery, integrity, and security, to handle all the devices inside plus the connection to the outside world, with reliability designed in. To complicate matters further, not only do the interconnect protocols differ, but geographic preferences vary, making a single system-interconnect winner even more unlikely; higher-end protocols are likely to keep evolving for a while.

All that points to the case for designing with NoCs in automotive SoCs, not only to simplify chip-level interconnect and incorporation of new peripherals quickly, but to provide error correction, flow control, message prioritization and other features needed for this highly connected safety-critical scenario.

More Articles by Don Dingee…..



Quoting Automatically the eSilicon Way
by Paul McLellan on 02-13-2014 at 2:31 pm

Every ASIC company faces a major challenge: it has to work out what it will cost to build the customer’s product and commit to deliver at that price. Too high and you lose the business; too low and you will wish you’d lost the business. Historically this has been done largely manually, which is expensive. A typical ASIC project is quoted by two or three potential suppliers, which means that over half of the quotes produced do not result in business and are pure overhead.

Quoting is very complex, especially in a modern process. eSilicon has it even worse since they use several foundries, several testing houses, several packaging houses and so on. There are a lot of moving parts. Here are just a few:

  • die size obviously feeds into how many die a wafer will hold (gross die per wafer)
  • die size feeds into yield based on theoretical defect density models and historical databases of other parts in the same process
  • there are various options for how many metal layers
  • mask costs
  • choice of foundry/process
  • cost to manufacture depends on volume: low volume parts are more expensive due to setup time for the manufacturing equipment, especially steppers
  • cost to manufacture depends on when it will be manufactured: foundries build yield learning into their pricing
  • if wafer sort is done then a probe card needs to be built and test-time costed
  • if the part goes in a standard package that needs to be costed
  • the part may need to be bumped depending on packaging
  • if the part goes in a non-standard package (flip-chip etc) then the design cost of the package substrate needs to be included
  • choice of test house, tester, test-time
  • foundry, packaging, testing, delivery may all be geographically separate and transport costs need to be included
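Two of the items above, gross die per wafer and defect-driven yield, have standard first-order formulas that show how die size feeds a quote. A sketch with made-up numbers (real quoting flows, eSilicon’s included, use measured defect densities and historical yield data rather than these textbook models):

```python
# First-order versions of two quote inputs from the list above:
# gross die per wafer and defect-limited yield.  The die size, defect
# density, and wafer price below are illustrative assumptions.
import math

def gross_die_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300.0):
    """Classic approximation: wafer area over die area, minus an
    edge-loss term proportional to the wafer circumference."""
    die_area = die_w_mm * die_h_mm
    r = wafer_d_mm / 2.0
    return int(math.pi * r * r / die_area
               - math.pi * wafer_d_mm / math.sqrt(2.0 * die_area))

def poisson_yield(die_area_cm2, d0_per_cm2):
    """Poisson defect model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * d0_per_cm2)

gdw = gross_die_per_wafer(8.0, 8.0)     # 64 mm^2 die on a 300 mm wafer
y = poisson_yield(0.64, 0.2)            # D0 = 0.2 defects/cm^2 (assumed)
cost_per_good_die = 4000.0 / (gdw * y)  # hypothetical $4,000 wafer price
```

Layer this with mask amortization, package, test time, and logistics, and the combinatorics of the list above make clear why a manual quote took weeks.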

eSilicon had a group of four people doing nothing but quotes, and even so, each one took one to two weeks to deliver. I talked to Geoff Porter, who ran that group. He decided that things were out of hand and that the solution was to automate everything. Having worked on VLSI Technology’s DesignAssistant tool myself, I have some experience of just how hard this sort of thing is.

eSilicon had databases of all the parts it had manufactured, databases of manufacturers’ pricing, knowledge of where different test houses’ sweet spots were, and so on. They built a lot of Visual Basic functions, running in Excel (or behind a web interface), for all the pieces they needed.

The whole process is now completely automatic and standardized. They put in the customer parameters (die size, process, package, test-time and so on). The output is a contract ready for the customer to sign with all the appropriate legal clauses required and omitting the ones that are not (if bare die are to be delivered there is no verbiage about packaging, for example).

Originally the tool was used only internally in the quoting group. But even so, they spent a fair bit of time entering data supplied by the customer and updating it as the customer iterated the quote. This was especially true for multi-project wafer (MPW) quotes, which by their nature come in high volume. MPWs are a mixture of prototype SoC runs, IP qualification (nobody is going to license your SerDes without silicon data), academia, and so on.

So the next stage of “automation” was to make the customer put the data in themselves and generate their own quote. For MPW quotes, the entire process was put online and even available through an iPhone app.

They have done 300 quotes in 6 months, resulting in 250 new accounts without the involvement of sales. When the process was manual, they could do maybe 100 quotes per year. The automation frees sales to concentrate on major opportunities. And since the process is so quick and easy, customers iterate their quotes maybe a dozen times, so they can answer questions like “I want more memory if it is not too expensive.”

I’ll talk more about how that works in a future post after I’ve tried it out and generated my own quote. On my iPhone! To try it out yourself, register here. Don’t worry, you won’t get any silicon showing up at your door unless you sign the quote and place an order.


More articles by Paul McLellan…


Will Google Design Server SoCs?
by Beth Martin on 02-13-2014 at 12:22 pm

Google is search, of course, but it is also OS (Android), systems (Glass) and increasingly, maybe, hardware. Rumors are swirling that through careful acquisitions and focused internal development, Google is set to make its own server SoCs.

Google’s Larry Page has stated that they are in the hardware business. They’ve been making the server motherboards for their datacenters, and Google Glass includes a Google-branded CPU board. The Google job listings include a number of openings under the Hardware category, but they are spread over PCB, CAD, mechanical, test, etc. They want engineers to design electronics for ground-station and flight avionics, automotive systems, electricity delivery controls, Google Fiber, and other projects. There is no doubt that Google is designing hardware, but there is little in the job listings to indicate they are ramping up a processor design team.

What about acquisitions? They snapped up one semiconductor company, processor maker Agnilux, in 2010 (which was founded by folks from PA Semi after it was bought by Apple in 2008). They also bought PeakStream, a startup that made tools for programming multicore processors, in 2007. However, they didn’t buy any assets from Calxeda, a maker of ARM-based server SoCs that went under in December 2013.

I find it hard to argue that the ROI is there for Google to make server chips. Google currently uses Intel chips on server motherboards of Google’s own design. Intel has economies of scale, and the cost and complexity of designing and fabricating a server SoC is huge. What advantage over Intel chips could they get? If Google did come up with a chip that beats Intel on performance per watt, would they then sell it to Facebook, Amazon, and Microsoft to make the costs worth it? On the other hand, Google has a silly amount of money to blow on proof-of-concept projects and advanced research. I mean, they do have a space program. Could a network processor SoC be more complex and expensive? Hmm, maybe.

More articles by Beth Martin…



I switched to Aldec Active-HDL
by Luke Miller on 02-12-2014 at 3:00 pm

I have written this before: I was a ModelSim snob. That changed after trying Active-HDL from Aldec, and I have no plans to go back to ModelSim. You ask why? Well, astute reader, great question. Unfortunately these blogs are limited in length and there is no way to write about all the bells and whistles of Active-HDL. So before I continue, please go to the Active-HDL download link and evaluate it for yourself; I assure you, you will not be sorry. I know this is word salad, but they also have great customer service (real people). After installing you will be up and running in minutes without even reading the instructions.

Where do I begin…? Active-HDL is a design environment. In this environment you have a compiler, simulator, IP generator, debugger, text editor (which, by the way, highlights where your compile errors are; I love that feature, not that I have errors), test bench generator, waveform compare, code-to-graphics, version control (very cool), and the list goes on and on. Seriously, if I listed all the features I would run out of blog. Active-HDL is friendly to all FPGAs and you can link the tool to your FPGA environments.

Whether we like it or not, RTL simulation is a fact of FPGA life, so why not use a flexible environment? When I use the GUI, I feel like the designers know what I’m thinking. So here is how I began: you can start with a reference design, or use one you already have. I started with a design I already had. Simply follow the prompts, name your project, and add your Verilog or VHDL. About 20 seconds later I was compiling. The design I had did NOT have a test bench; I had been using hardware in the loop. I went to the Tools tab and clicked “generate test bench.” It worked! Then I opened the test bench file, added my clocks, resets, etc., and I was off and running. To be honest, I was a bit overwhelmed by all the swizzles the tool has. You can even custom-tweak the waveform to look just like the old green signals on a black background. That’s how I roll, creature of habit. Please pray for Mrs. Miller.

For you companies that keep track of SLOCs (may I say you are driving your engineers nuts trying to count FPGA lines of code!), I clicked HDL statistics and the tool gives you the breakdown and totals for each module. By the way, I used this tool without opening the help tab at all (I’m one of them), so it is very, very intuitive. I also tried the code-to-graphics and automated header template generation. By now I hope you have caught my enthusiasm; to be honest, I thought this was going to be just another RTL simulator, but now I see what I have been missing. From now on ‘The FPGA Expert’ is proudly using Active-HDL, and this is not marketeering (is that a word?). Contact Aldec today for pricing and licensing flexibility; they are very workable and know the FPGA design cycle very well.

More articles by Luke Miller…



Intel 14nm Delayed Again?
by Daniel Nenni on 02-12-2014 at 9:00 am

From the same sources with which I confirmed the last Intel 14nm delay, I have just confirmed another: Intel 14nm is STILL having yield problems. Remember Intel bragging about 14nm being a full node and deriding TSMC because 16nm is “just” 20nm with FinFETs added? Judging by the graph, FinFETs are clearly not the problem here. Intel used a much more aggressive metal fabric to get better density, which is challenging modern lithography methods.

“People in the trenches are usually in touch with impending changes early” ― Andrew S. Grove, Only the Paranoid Survive

Meanwhile, back at the fabless semiconductor ecosystem, 20nm is yielding ahead of schedule so TSMC will see revenue this quarter versus next. I would put the chances of TSMC realizing their forecast of 20nm providing 10% of 2014 revenue as being very good. Given the more cautious approach TSMC took to FinFETs, 16nm is also on track with tape-outs happening now. If all goes as planned, 16nm will ramp in 2015 as 20nm does in 2014.

TSMC expects 20nm to be 2% of Q2 2014 revenue so the ramp begins. Looking at the 28nm ramp, 20nm is expected to be 20-30% faster:

  • 28nm 2% Q4 2011
  • 28nm 5% Q1 2012
  • 28nm 7% Q2 2012
  • 28nm 13% Q3 2012
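One way to read “20-30% faster” against the 28nm quarterly shares above: each revenue-share milestone arrives in proportionally less time. A toy extrapolation (purely illustrative, not TSMC guidance):

```python
# Toy reading of a "20-30% faster" ramp: each 28nm revenue-share
# milestone arrives in proportionally less time.  Illustrative only,
# not TSMC guidance.
ramp_28nm = {1: 2, 2: 5, 3: 7, 4: 13}   # quarter into ramp -> % of revenue

speedup = 1.25                           # midpoint of "20-30% faster"
# Quarter at which 20nm would hit the share 28nm had at quarter q:
projected = {round(q / speedup, 1): share for q, share in ramp_28nm.items()}
print(projected)   # {0.8: 2, 1.6: 5, 2.4: 7, 3.2: 13}
```

On this crude reading, 20nm would pass the 13% mark a little past three quarters into its ramp rather than four.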

Back to Intel: new Intel CEO Brian Krzanich committed 14nm for Q3 2013, which was later pushed out to Q1 2014, even though he held up a laptop at the Intel Developer Forum and boasted that 14nm was in fact on track. At an analyst meeting two months later he showed the slide above and said there were yield “challenges” they were still working on. Well, from what I have heard, they are still working on it, so the Intel 14nm ramp may be delayed yet again.

The questions I have are: if this is true, when will Intel disclose this new yield challenge? How much will it delay 14nm products? What about Altera? I’m sure delaying this type of bad news until the problem is fixed is best for damage control, but I find this type of behavior neither transparent nor trustworthy, just my opinion of course.

Meanwhile the Intel-pumping Seeking Alpha published an article, “Does Intel’s new CEO have what it takes?” This is pure entertainment. Thus far Intel management has made many mistakes that the author glossed over but which have been covered in painful detail on SemiWiki. The lack of transparency started here, with BK’s first conference call:

Intel’s Q2 Conference Call
Intel 14nm Delayed?
Intel Is Continuing to Scale While Others Pause To Do FinFETs
No Mention of 14nm at the 2013 Intel Developer Forum?
Intel Really is Delaying 14nm Move-in. 450mm is Slipping Too. EUV, who knows?
Intel Quark: Synthesizable Core But You Can’t Have It
Intel Bay Trail Fail
Yes, Intel 14nm Really is Delayed…And They Lost $600M on Mobile
Intel’s Mea Culpa!
Intel Bay Trail Fail II
Intel Comes Clean on 14nm Yield!
Intel is NOT Transparent Again!
Why Intel 14nm is NOT a Game Changer!

We write these articles from the trenches to set the record straight. We also write them as research for an upcoming book on Intel, chronicling the rise and fall, and hopefully the rise again, of the number one semiconductor company.

More Articles by Daniel Nenni…..
