
Will Intel Have a Bigger FinFET Market Share Than TSMC in 2015?
by Daniel Nenni on 08-05-2014 at 10:00 pm

Speculation has been running rampant since last month’s conference call, where Dr. Morris Chang, often referred to as “The Chairman”, commented that at 16nm TSMC will have a smaller market share than a major competitor in 2015, though TSMC will regain the FinFET lead in 2016 and 2017. Of course the blogosphere went crazy over this, which resulted in a hefty drop in the TSM stock price and some lengthy calls between me and Wall Street. Everybody, including myself, speculated that the major competitor referenced was Samsung. Is the Chairman using strategy to motivate the troops, or does he really think TSMC will lose the first wave of FinFET designs? Now that the dust has settled, let’s take another look at this hotly debated topic. But first, a little background:

SoC design increases in complexity as the architecture changes: 32-bit to 64-bit, for example. Apple made this change with the iPhone 5s last year using the Samsung 28nm HKMG process node. Apple’s prior SoCs for the iPhone 5 and iPhone 4s were also HKMG (Samsung 32nm), so this was more of an architectural design challenge than a process design challenge. Even so, it was not a trivial engineering feat: the other SoC vendors will not have 64-bit architectures in production until 2015.

SoC design also increases in complexity as more functions are integrated. The next big integration challenge will be putting a high-speed radio (modem) on a 64-bit SoC using FinFETs. QCOM has both the leading mobile SoC and the leading mobile modem and has already integrated them at 28nm. But I would not count Apple out, since they have an experienced modem team working on it and they already have a 64-bit architecture in production.

SoC design at leading-edge nodes is extremely challenging, as we can see from the delays at 20nm and 14nm. TSMC 20nm was delayed six months and Intel 14nm is more than a year late. TSMC 16nm and Samsung 14nm are not in production yet and will no doubt be later than we all expected. Delays happen when you challenge the laws of physics as this industry does almost every day, absolutely.

Now let’s go back to the conference call and look at a key piece of information in the Q&A that most people glossed over:

Elizabeth Sun: “Randy’s question is with respect to Chairman’s comment on 2015’s market share is lower than a major competitor in 2015. So Randy’s asking why will it be lower and what is the impact to TSMC if we have a lower market share. And what gives us the confidence that we will regain the market share in following year?”

Morris Chang – TSMC – Chairman: “Oh, okay. Well, we need to go back to history a little bit. 32 — 28-nanometer followed 32 and that particular major competitor that I referred to, chose 32 and skipped 28. And then of course we came to 20 and 16, 16 for us, 14 for him. And we chose to do both. Actually we chose to do 20 first and 16 about a year or so later, but it was a pretty quick succession. And this major competitor skipped 20 and went on to 16.”

As I mentioned, Samsung did both 32nm and 28nm. Intel did 32nm and skipped 28nm, so it seems the Chairman was referring to Intel as the competitor that will have a larger 14/16nm foundry market share in 2015, not Samsung. Comments?


CEVA actively preparing the future
by Eric Esteve on 08-05-2014 at 11:00 am

I recently blogged about CEVA’s acquisition of Riviera Waves, a very positive move to consolidate CEVA’s leading position as a connectivity IP vendor (Bluetooth and WiFi). We have known CEVA for years as the leading supplier of DSP IP cores for the wireless phone market, and it looks like we will have to rethink this definition, as the company is currently redefining its addressable market. CEVA still supplies DSP IP cores to support 3G, 4G and now LTE baseband, to customers like Intel and Samsung to name a few, and enjoys a solid royalty flow. But if you take a look at the product portfolio and related target applications today and compare them to what they were in 2011, for example, you can assess the strong repositioning effort made by the company.

Before looking into the new applications targeted by CEVA, I thought it would be wise to compare the financial results for the second quarters of 2011 and 2014. The license and royalty revenues are very similar for these two quarters, and we can’t expect any seasonal effect, so you may be critical and remark that the company’s revenue is flat. If you think further, you realize that CEVA’s revenue in 2011 came mostly from the wireless phone market. Remember, in 2011 Apple was just starting to grow the smartphone business, the media tablet was still in its infancy, and the top 5 application processor makers were Freescale, Marvell, NVIDIA, Qualcomm, and TI. Moreover, smartphone shipments were 302 million units (in 2010), and the analyst projections made in 2011, as you can see below, turned out to be completely wrong! On top of this, or because of this erratic market, CEVA’s customer base has completely changed, with well-established chip makers exiting the market (TI, Broadcom, Marvell, Nvidia and more) and newcomers from China coming up to speed and shipping full-featured application processors. Staying alive in such a market is certainly a challenge!

2Q-2011
Of the eight new license agreements concluded during the second quarter of 2011, seven agreements were for CEVA DSP cores, platforms and software, and one agreement was for CEVA SATA/SAS product lines. Target applications for customer deployment are 4G and 3G baseband processors for handsets, infrastructure, smart grid, portable game consoles and SSD drives. Geographically, four of the agreements signed were in the U.S. and four were in Asia.

2Q-2014
During the second quarter of 2014, the Company concluded 11 new license agreements. Six of the agreements were for CEVA DSP cores and platforms, three for Bluetooth and two for SATA. Target applications include LTE-Advanced baseband, audio, connectivity and SSD drives. Geographically, nine of the agreements signed were in the APAC, including Japan, and two were in the U.S.

If we analyze the market data from CEVA in these two extracts, we first notice that CEVA’s customer base has definitely moved East (50% of new agreements in Asia in 2011, compared with more than 80% in 2014), which is a good sign that CEVA is moving with the market. Then, looking at the product mix, CEVA signed 8 new licenses in 2Q-2011, compared with 11 in 2Q-2014. In 2011, 7 were for DSP and 1 for SATA, whereas in 2014 the difference comes from the 3 licenses signed for Bluetooth. Here comes the positive point, illustrating that CEVA is on track for future growth. The company has not given up on the DSP IP products addressing the wireless market, and we can guess that most of the royalty flow still comes from this market, but CEVA has successively added new, or reworked existing, DSP products to address audio (ASSP and always-on audio for Android), imaging (the MM3000 family) and wireless connectivity applications.

Wireless connectivity, expanded through the Riviera Waves acquisition, is certainly the most promising product line developed by CEVA. Why? Simply think about all the future stand-alone devices populating IoT and Digital Home applications. What is the common feature, whatever the application? These stand-alone devices will have to be connected, and preferably wirelessly connected, if you don’t want your (digital) home to look like the backstage of a rock concert. If you agree that 3G or LTE is overkill for a thermostat or the like, then WiFi or Bluetooth are better candidates.

From the analyst call held by CEVA on July 31st, we learn that CEVA foresees 400 million devices shipping by 2018 and generating royalties thanks to the Riviera Waves acquisition. Such a number looks reasonable when compared with analyst forecasts of 30 billion connected devices. The important point is that CEVA is completely reworking its product portfolio, and the company will no longer be forced to rely on a single market segment (namely the rather erratic and difficult-to-forecast wireless phone market) to sustain growth and build the future.

Eric Esteve from IPNEST

More Articles by Eric Esteve…..


The Carrington Event
by Paul McLellan on 08-05-2014 at 7:01 am

Back in the pre-SemiWiki days, when I had the EDAgraffiti blog, I wrote about the Carrington Event, a solar storm in 1859 that lasted for several days. On September 1st there was a coronal mass ejection (CME) traveling directly towards earth. Normally such an ejection would take several days to reach earth, but an earlier one had cleared out the ions in space and it took less than a day to get here. It was the biggest electrical storm in recorded history. People got up thinking it was daylight. The Aurora Borealis (Northern Lights) was seen as far south as the Caribbean and Hawaii. Telegraph systems failed, in some cases shocking their operators and in other cases carrying enough induced power to continue functioning even though they were turned off.

Of course we didn’t have electronics in those days. So what would happen today? In fact, the reason I’m writing this now is that we had a near miss a few weeks ago. An event like this has the potential to knock out satellites, kill the power grid, and perhaps destroy anything connected to it. There was a huge CME, but luckily it was not pointed towards earth. If it had happened a week later we would have been in the crosshairs, with it pointed straight at us. Carrington II.

Solar flares follow an 11-year cycle, aka the sunspot cycle, and the peak of the current cycle is pretty much now. This cycle is unusual for its low number of sunspots, and there are predictions that we could be in for an extended period of low activity like the Maunder Minimum of 1645-1715 (the Little Ice Age, when the Thames froze every winter) or the Dalton Minimum of 1790-1830 (when the world was also a couple of degrees colder than normal). This might (or might not) be connected to why there has been no global warming for 17 years despite the huge increase in carbon dioxide. But for electronics, the important thing is the effect of CMEs, which seem to be associated with solar flares (although the connection isn’t completely understood). Obviously the most vulnerable objects are satellites, since they lack the protection of the earth’s magnetic field, but power grids are especially vulnerable too, because their long wires are perfect for picking up the induced electrical surge. There seems to be an event like this about every 150 years, which means we are overdue.

In 1859, telegraph systems were down for a couple of days and people got to watch some interesting stuff in the sky. But a NASA conference in Washington a couple of years ago looked at what would happen if another Carrington Event occurred: “The situation would be more serious. An avalanche of blackouts carried across continents by long-distance power lines could last for weeks to months as engineers struggle to repair damaged transformers. Planes and ships couldn’t trust GPS units for navigation. Banking and financial networks might go offline, disrupting commerce in a way unique to the Information Age. According to a 2008 report from the National Academy of Sciences, a century-class solar storm could have the economic impact of 20 Hurricane Katrinas.”

Actually it sounds worse to me. This is the result of a much smaller event: “In March of 1989, a severe solar storm induced powerful electric currents in grid wiring that fried a main power transformer in the Hydro-Quebec system, causing a cascading grid failure that knocked out power to 6 million customers for nine hours while also damaging similar transformers in New Jersey and the United Kingdom.”

Wow. Doesn’t sound good, does it? Imagine that scaled up by an order of magnitude. Lloyd’s (the London insurers) reckons the cost of a similar event could be $2.6T.

Here is a video from the University of Bristol (in England) about the cover story in Physics World, which covers solar super-storms.


More articles by Paul McLellan…


Xilinx, 100 Reasons to use them
by Luke Miller on 08-04-2014 at 4:00 pm

We all like compliments, correct? You know, the kind that go, “Glad you didn’t screw that up.” From time to time I get, “You write what you do because you’re affiliated with Xilinx.” Perhaps I will name my next child Xilinx. I have said this before: I do real work (debatable) and trade studies, and I believe Xilinx FPGAs are the best choice for what I do. So below is an unsolicited list of what other Xilinx customers think about the 28nm node; it really is impressive.

Continue reading “Xilinx, 100 Reasons to use them”


The Grand Folly of India’s Foundry Plans, Part 2
by Peter Gasperini on 08-04-2014 at 8:00 am


Image Source: Wikipedia

Authors: Pete Gasperini & Abhijit Athavale

The first article on this topic, published here on SemiWiki on July 6th, addressed New Delhi’s proposal to subsidize the construction of two silicon fabs (one at 22nm, the other at 28nm) in order to stimulate India’s high-tech sector and reduce its dependence on foreign technology imports. The argument against the proposal detailed multiple technical and economic factors that negated the purported benefits of the initiative, and it proposed alternative courses of action.

Defenders of the fab subsidy program have subsequently raised a new issue to bolster their side of the argument. In late 2012, an FPGA sold by Microsemi and used extensively in US military systems was found by Cambridge researchers to contain a hardware Trojan. This backdoor, deliberately inserted by designers at Microsemi, was accessible through the chip’s JTAG pins and permitted a third party to remotely reconfigure the device, access its code and even disable it.

Fab proponents are concerned that chip designs from Indian companies could be sabotaged in foreign foundries, with circuitry covertly embedded that would allow spies and saboteurs to steal commercial information or attack private or military systems employing those chips. The question is: how realistic is such a possibility?

A sculptor wields
The chisel, and the stricken marble grows
To beauty. –
William Cullen Bryant

As everyone here is well aware, when a chip design team develops a new product, they don’t just write some code, throw it at a software tool, press “Enter” and have a functional chip pop out of a 3D printer. The process involves ESL abstractions, Verilog descriptions, an expensive portfolio of EDA software so complex that team members need to specialize in its particular tools, a library of physical, electrical and functional models of increasing sophistication applied in multiple stages of the design flow, and multi-corner PVT characterization to ensure designs yield and perform per specification across any allowed combination of process, operating-temperature and supply-voltage variances. There are intricately detailed methodologies for timing closure, functional verification, DFT, DFM, power optimization, signal integrity, hierarchical design, clock tree integration, voltage islands, IP integration, timing domains, place & route, mixed-signal and/or analog block integration and so on. All through the development cycle, things are repeatedly tested, verified, analyzed, optimized and otherwise shaken & baked.

It takes 15-18 months of labor by 20-40 seasoned engineers to do this work – time spent using their accumulated skills to craft, sculpt and ultimately realize a functional and timing equilibrium as close to perfection as they can make it. As a consequence, any disturbance to the design is guaranteed to be catastrophic, affecting timing, layout, functionality, yield, power and interconnect.

When one takes into account that only the original development team has all the correct tools, models, code and verification suite, it becomes obvious that a third party at a foundry simply cannot meddle with a design database and successfully insert extra gates for a hardware backdoor. The design will be compromised and even the most ingenious engineering team will simply not be able to integrate the Trojan and restore the design to its original functional and timing envelope without years of reverse engineering work. By the time such an effort is completed, all the organizations that were targets of the chip saboteurs will already have bought systems with untouched, original chips in them.

The truth of the matter is that India doesn’t need New Delhi to step into its technology industry to help, but would actually benefit more if the central government did less. The chip security issue, along with ideas for New Delhi to do other, more beneficial things that will advance India’s high tech sector and improve its trade balance, are discussed in greater depth at http://vigilfuturi.blogspot.com.

Also Read: Semiconductor Manufacturing in India?


Electronic Thermal Management through Icepak
by Pawan Fangaria on 08-03-2014 at 8:30 pm

Last week my daughter had been playing games on my Google Nexus smartphone for a while when one of my friends called. When I picked up the phone, I couldn’t believe how hot it was. There is no doubt that every electronic device today emits an order of magnitude more heat than it did a decade ago. There is much emphasis on the reliability of semiconductor and electronic devices from the standpoint of electrical function, electromigration, power, performance, noise and the like. But can we really call that reliability complete without considering temperature and cooling? In my view, the way heat generation is rising in electronic devices, there will come a point when temperature takes the prime spot in designing any SoC or electronic device. It needs an integrated solution from chip to package to system level.

Although I’ve talked about Ansys’s reliability solutions for semiconductor designs and systems in the past, I haven’t specifically talked about their thermal solution for complete electronic systems. Today I came across a video of less than 3 minutes, specifically focused on Ansys’s thermal solution, which provides fast and accurate thermal results for electronic cooling applications. It was impressive, so I thought I would write about it.

Ansys Icepak provides an easy-to-use GUI which combines electrical and mechanical CAD for designing electrical as well as mechanical systems with cooling appropriate for the heat they generate. It can accurately mesh any geometry and represent the true shapes of electronic components: IC die, package, PCB or complete system. For CFD (Computational Fluid Dynamics) simulation it uses Ansys Fluent as the solver engine, which is among the most accurate and fastest engines available for the purpose. The complete performance of a product can be simulated in a Workbench environment by using Ansys mechanical and electromagnetic simulation solutions.

Icepak can be used to simulate and evaluate electronic devices in various segments, including consumer, computer, defense and space technology. The image above shows a complete computer system with chassis, which is evaluated for appropriate heat-sink placement, fan and blower selection, cold-plate selection, appropriate sizing for airflow circulation and so on. The IC layout is evaluated for heat generation and thermal considerations, and appropriate decisions are made about active and passive cooling systems.

The image above shows the analysis at the PCB level, where components that inject heat into the board are considered. The corresponding power and temperature maps are studied. With the help of Ansys SIwave and electromagnetic analysis, the effect of current flow in the region can be examined efficiently. In the image, the SIwave DC current analysis (on the left) is coupled with the Icepak thermal analysis (on the right).

Ansys provides a complete suite of tools for thermal solutions at the chip, 3DIC package and complete system levels. RedHawk and Totem provide IR drop, EM and thermal simulation and analysis at the chip level. Sentinel-TI can provide the complete thermal profile of a 3DIC package along with the distribution at each die level. And Icepak can do a complete analysis and provide cooling solutions at the system level.

Ansys, with Icepak and its other tools at the chip and package levels, provides a comprehensive solution for thermal management and cooling of complete electronic systems, based on robust and powerful CFD simulation.

More Articles by Pawan Fangaria…..


Bring Water to those in the World that need it
by Luke Miller on 08-03-2014 at 3:00 pm

Dear Reader, I need your help with something.

Necessity is the mother of invention. When I bought the Miller Farm, it came with a very shallow well. What that meant was priming the pump, a lot. Being an engineer, I weighed the options. My wife, with 5 kids to bathe, clothes to wash, meals to cook, etc., was a bit overwhelmed. Of course the worst day was about 97 degrees with no water. It’s the simple things, eh?

I realized that, overall, the well was capable of producing about 0.25 gallons per minute. I just needed a way of capturing that water and storing it in a tank, to get one day ahead on water.

0.25 gallons/min x 60 min/hr x 24 hrs = 360 gallons of water per day, potentially.

This led to the invention of the Well Doctor.

Here is a nice video to explain the Well Dr.

I used a flow sensor feeding a CPLD/FPGA board running an algorithm that senses flow, so that when the flow drops, prime is not lost. It adapts and constantly changes as the water table does. What it does is control the pump via a relay, setting the ON/OFF time based on the success of the last run. Very simple, but it has been working great for years.
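For the curious, here is a minimal sketch in C of that adaptive ON/OFF logic. To be clear, this is my own reconstruction of the idea described above, not the actual Well Doctor design; the toy well model, relay stub and tuning constants are all hypothetical.

/* Sketch of an adaptive pump controller for a low-yield well.
   A reconstruction of the idea above, not the real Well Doctor design;
   the toy well model and all tuning constants are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIN_FLOW 5U    /* pulses per reading below which prime is at risk */
#define MIN_ON   30U   /* shortest pump run, in sensor readings */
#define MAX_ON   600U  /* longest pump run */

static bool     pump_on = false;
static uint32_t well    = 160;  /* toy stand-in for the water table */

static void set_pump_relay(bool on) { pump_on = on; }

/* Toy flow sensor plus well model: pumping draws the well down,
   idling lets it recover, and flow collapses when it is nearly dry. */
static uint32_t flow_pulses(void) {
    if (pump_on) { if (well) well--; }
    else if (well < 160) { well++; }
    return well > 50 ? 8U : 2U;
}

int main(void) {
    uint32_t on_time = MIN_ON;  /* adaptive run length */
    for (int cycle = 0; cycle < 6; cycle++) {
        bool ok = true;
        set_pump_relay(true);
        for (uint32_t ran = 0; ran < on_time; ran++) {
            if (flow_pulses() < MIN_FLOW) {
                ok = false;     /* flow dropping: stop before losing prime */
                break;
            }
        }
        set_pump_relay(false);

        if (ok) {               /* good run: try a longer one next time */
            if (on_time + 30U <= MAX_ON) on_time += 30U;
        } else {                /* cut short: back off */
            on_time /= 2U;
            if (on_time < MIN_ON) on_time = MIN_ON;
        }
        printf("cycle %d: run %s, next ON time %u\n",
               cycle, ok ? "completed" : "cut short", on_time);

        for (int rest = 0; rest < 100; rest++) flow_pulses(); /* OFF period */
    }
    return 0;
}

The relay’s duty cycle converges on whatever the well can actually sustain: run times stretch while the flow holds up and back off the moment it starts to drop, which is how the device gets a day ahead on water without ever losing prime.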

What I would like to do is get this technology to others in the world who do not have the luxury of water. If a hole can be dug and it has water, then this device will get you a day ahead. So I am asking you to help me find the right channels to see if we can make this happen. I have written to the water charity orgs, etc., to no avail. Perhaps a Kickstarter?


Open Source Verilog
by Paul McLellan on 08-03-2014 at 8:01 am

Over the years there have been various open source EDA projects, but none has produced a full industrial-strength design tool that has broad adoption and is strong enough to compete with similar products from the EDA industry.

Open source is clearly a great way to develop software. Lots of people can see all the source code and there is a sort of wisdom-of-crowds effect. As Eric Raymond famously said, “given enough eyeballs, all bugs are shallow.” But there are two weak aspects to open source. First, although it is a great development model, it is not a great business model. It is hard to make money selling something that is also available for free. For sure, big companies will pay you something in support fees to make a problem go away, and perhaps there are other for-fee services that can be built on top of the product. But there is a limit to how much can be charged. If a big EDA company open-sourced all its software, nobody would sign $50M deals, since it is cheaper to set up a team of engineers for a couple of million and pull down the source and support it yourself. The second problem is that open source works best when the programmers understand the problem they are solving, which usually means that the product being developed is one programmers will use themselves. Linux, gcc, Firefox, Apache (the web server, not the subsidiary of Ansys), MySQL and so on are the most successful open source projects, written by software engineers for software engineers. When a marketing person specifies the product details and an engineering team then implements them, this model doesn’t work so well. I remember reading a quote (but I can’t find it today) that “if you need a specification, the project is already in trouble.”

Tachyon Design Automation has been in existence for years and sells a Verilog simulator called CVC (for compiled Verilog code, since it compiles Verilog straight into x86 code with lots of detailed optimization). It supports the full IEEE 1364-2005 Verilog standard. Although any simulator’s speed depends somewhat on exactly what is being simulated, it is often the fastest simulator on the market for a given workload. One customer found that they couldn’t get their nightly regressions to run overnight with one of the big-3 EDA companies’ Verilog simulators, but they could with CVC. This is not a cheap-but-adequate simulator; it is fully competitive.
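To make the compiled-code idea concrete, here is a toy sketch in C. It is emphatically not Tachyon’s actual code generation, just a hand-written illustration of how a process such as always @(posedge clk) q <= d; can become a native function driven by a small scheduler loop instead of being interpreted event by event:

/* Toy illustration of compiled-code simulation (not CVC's real output).
   The Verilog process "always @(posedge clk) q <= d;" is "compiled"
   into an ordinary C function called by a minimal scheduler. */
#include <stdio.h>

typedef struct { unsigned clk, d, q, q_next; } state_t;

/* Compiled form of the always-block: evaluate the nonblocking
   assignment into a shadow variable. */
static void proc_dff(state_t *s) { s->q_next = s->d; }

int main(void) {
    state_t s = {0, 0, 0, 0};
    for (int t = 0; t < 8; t++) {
        unsigned prev_clk = s.clk;
        s.clk = (unsigned)(t & 1);        /* stimulus: toggle the clock */
        s.d   = (unsigned)((t >> 1) & 1);
        if (!prev_clk && s.clk)           /* edge detect: posedge clk */
            proc_dff(&s);
        s.q = s.q_next;                   /* commit nonblocking update */
        printf("t=%d clk=%u d=%u q=%u\n", t, s.clk, s.d, s.q);
    }
    return 0;
}

Because the hot path is a direct call into natively compiled code rather than an interpreted event-graph traversal, this style of simulator can be dramatically faster, which is the advantage described above.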

CVC was developed by Steve Meyer, whose roots go back to Gateway (where Verilog was developed and where the roots of Cadence’s simulation technology originate) and Chronologic (where Synopsys’s VCS was originally developed). Antrim integrated the technology into their AMS simulator, which is a very demanding environment.

Tachyon’s customer base is mostly small companies that are not big enough to get the attention of the big-3 EDA companies’ salesforces, but they also have some larger customers that use CVC for additional capacity and to keep a check on their primary simulator. However, selling against the big EDA companies is difficult: the big accounts get their simulators as part of a much larger deal, and the small companies don’t consume enough licenses.

So Tachyon has decided to do something different: they are making the entire simulator open source. You can download it from their website. Big semiconductor companies are suddenly interested, since they want access to the source. Not so that they can fix their own bugs (they will pay Tachyon to do that), but so that they can integrate their own technology with the simulator to improve their verification effectiveness.

Tachyon Design Automation’s website is here.


More articles by Paul McLellan…


Enabling Higher Level Design Automation with Smart Tools
by Pawan Fangaria on 08-02-2014 at 10:00 pm

Design houses have always strived to optimize design flows according to their needs, customizing the flows with effective and efficient internal as well as external tools. This need has grown further as design scenarios widen from the transistor, gate and RTL levels to the system level. Today it is rare that a single flow connects the system level down to the transistor for design, verification or debugging; yet comprehensive methodologies are a must at all these levels, to reflect the effect of any change at the transistor, gate or interconnect level up to the system level, or let’s say the IP level, where an IP is integrated into an SoC.

A couple of weeks ago I was talking about Concept Engineering’s S-engine™, which can easily be integrated into any design automation tool for automatic schematic generation (and smart editing), enabling transistor, gate or SoC/IP interface visualization at any level of abstraction for efficient design and system exploration, analysis, debugging and integration. After learning that Verific Design Automation of Alameda, CA has integrated Concept Engineering’s Nlview™ schematic generation and visualization engine with their netlist database (press release here), I looked a bit deeper into Nlview and this integration.

The Nlview Widgets, pioneered by Concept Engineering, are used to automatically generate and visualize schematic diagrams at various levels of the design process: transistor (with electrical components), gate, RTL, and complete block and system. The schematics thus created can be interactively controlled and modified by designers as needed, with the capability to incrementally generate and add further parts of a schematic.

Look at the part of the schematic using operator signs with bus connectivity (which may come from any third-party parser). The Verific parsers (SystemVerilog, VHDL…) and VVDI-link (the connectivity package provided with the Nlview software package) give Nlview seamless access to the Verific netlist database.

In this schematic with buses, the rippers are created automatically. Nlview performs automatic net bundling from the single-bit-level connectivity; only the I/O port buses and the bus pins of the muxes need to be indicated to Nlview.

In this schematic, signals pass through hierarchical blocks. Nlview provides features such as coloring blocks differently, folding and unfolding hierarchy with +/- signs, incremental navigation and so on.

There are a host of other features, including timing annotations and incremental generation and viewing. More examples of schematics can be seen here. The schematics are optimized using robust, fast algorithms.

The Nlview Widgets are customized for different GUI environments, such as NlviewQT for the Qt development environment and NlviewTK for Tcl/Tk-based GUIs. Today, Concept provides NlviewQT, NlviewTK, NlviewJA (for the Java platform), NlviewMFC (for MS Windows, based on the MFC library), NlviewWIN (for native MS Windows), NlviewWX (for the wxWidgets cross-platform toolkit), NlviewP TK (for Perl with Tk) and NlviewCORE. NlviewCORE has no GUI and can output graphic files such as SVG, PostScript or PDF in batch mode. The core APIs and algorithms are the same in all of them; only the GUI interfaces differ.

By integrating Nlview schematic generation into EDA applications such as high-level synthesis, logic synthesis, verification, physical design and test automation, designers and tool owners can enhance their tool’s capability in terms of wider and deeper navigation, performance, on-the-fly IP/block management and integration, incremental schematic generation and viewing, greater control and visibility over the synthesis process, and an easy, integrated debugging environment, thus improving designer productivity.

The VVDI-Link in the connectivity package, aided by the standard Verilog, VHDL and SystemVerilog parsers from Verific, enables automatic schematic generation through Nlview for Verific, which acts as the front end for several EDA and FPGA tools for simulation, emulation, verification, synthesis, analysis and test of RTL designs.

With tens of thousands of installed EDA applications using Nlview Widgets, it is clearly an industry standard for schematic generation and viewing, providing unparalleled flexibility, customizability, controllability, performance and reliability to integrators.

More Articles by Pawan Fangaria…..