
SOI Future or Flop?

by Scotten Jones on 01-31-2014 at 8:00 am

Silicon On Insulator (SOI) is a technology that has been in use by the semiconductor industry for a long time. Early technologies such as Silicon On Sapphire (SOS) were reported as early as the sixties. In the eighties technologies such as V groove dielectric isolation were used. In the nineties we saw wafer bonding become the most prevalent technique for SOI fabrication, although implanted oxygen was also in use. It wasn’t until the late nineties that IBM ushered in the modern era of SOI usage for high performance CMOS.

SOI has always offered certain performance advantages, with radiation hardness, the ability to isolate both positive and negative voltages on the same substrate, and reduced power consumption being a few key examples. Although this article is focused on state-of-the-art CMOS, SOI is also an important part of many emerging Micro Electro Mechanical System (MEMS) applications and some power IC applications as well. The issue with SOI has always been cost, and as a result it has remained a niche technology.

In the late nineties IBM introduced Partially Depleted SOI (PDSOI) for high performance CMOS. The introduction of a buried oxide layer under the MOSFET reduced parasitic source/drain (S/D) capacitance, reducing power consumption. Planar MOSFETs have highly doped source and drain regions of one conductivity type separated by a lightly doped channel of opposite conductivity type. The surface of the channel is covered with an insulating layer and a gate electrode. The gate of a MOSFET really only has good control of the surface, and in the sub-surface regions there are S/D leakage paths. With partially depleted SOI, the device silicon layer on top of the insulator is still thick enough that the gate can't fully deplete the channel in the off-state, so leakage paths still exist. PDSOI also requires all of the processing of a standard bulk process plus a couple of SOI specific masks. This, coupled with the high cost of SOI substrates, yields a very expensive process.

In spite of the cost issues, IBM utilized PDSOI from 130nm to 22nm for high performance processors. Concurrently IBM developed a high performance embedded DRAM (eDRAM) technology that took advantage of the buried oxide isolation in SOI, making the process ideal for the needs of their high performance processors. In the early 2000s AMD was a development partner of IBM and also adopted PDSOI for their processors for the 90nm through 45nm generations, and at one point all three major game consoles utilized PDSOI based processors. Today IBM's 22nm process is, to the best of our knowledge, the only leading edge logic process still running on PDSOI. In IBM's case the processors are used for high margin server products and performance is far more important than cost. Other PDSOI users have concluded that any performance advantages are outweighed by the cost.

Fully Depleted MOSFETs

As MOSFET gate lengths have shrunk, S/D leakage has been increasing exponentially and a change in MOSFET architecture is now required, with the 20nm node expected to be the end for the bulk planar MOSFET. If instead of a partially depleted channel the channel of a MOSFET is fully depleted in the off-state, S/D leakage is reduced by orders of magnitude. There are two main techniques currently in use for fully depleted MOSFETs: FinFETs and Fully Depleted SOI (FDSOI).

FinFETs utilize thin fins of silicon standing up perpendicular to the wafer surface in a 3D configuration for the MOSFET channel. The fin may have gates on both sides (a classic FinFET) or on both sides and the top (Trigate). With gates on both sides, the fin thickness must be approximately one half the gate length to be fully depleted; with gates on both sides and the top, the fin thickness must be less than approximately the gate length to be fully depleted. The Trigate configuration relaxes the fin thickness requirement, making the fins easier to manufacture. FinFETs can also be fabricated on either bulk or SOI (more on that later). At 22nm Intel was the first in the industry to introduce FinFETs with their Trigate process. Intel announced at the time that the Trigate process adds 5% to the processing costs versus planar. One interesting observation here is that the FinFET process actually requires fewer masks and process steps than bulk. In an ideal FinFET the channel would be undoped, but in order to achieve multiple threshold voltages multiple gate work function metals would be required. Due to the difficulty of fabricating multiple work function metals, FinFETs today have doped channels, and this requires masks and implants for Vt tuning; however, the complexity of the Vt tuning scheme is reduced relative to bulk-planar MOSFETs due to the absence of Halos. If you just look at the processing steps for a FinFET versus a bulk-planar MOSFET, the FinFET flow is simpler. I believe the added cost is due to yield issues from the fin formation on a bulk process, where controlling the fin height is very difficult.
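The depletion rules of thumb in this paragraph can be sketched as a quick check. This is purely an illustration: the 0.5x and 1x limits are the approximations quoted above, and the example dimensions are made up.

```python
# Rule-of-thumb full-depletion check for FinFET fins, using the limits
# quoted in the text: fin thickness <= ~0.5x gate length for a double-gate
# FinFET, or <= ~1x gate length for a Trigate. Illustrative only.

def fully_depleted(fin_thickness_nm, gate_length_nm, trigate=False):
    """Return True if a fin this thick should be fully depleted in the off-state."""
    limit = gate_length_nm if trigate else 0.5 * gate_length_nm
    return fin_thickness_nm <= limit

# Hypothetical device with a ~26nm gate length:
print(fully_depleted(10, 26))                # double gate: 10 <= 13 -> True
print(fully_depleted(20, 26))                # too thick for a double gate -> False
print(fully_depleted(20, 26, trigate=True))  # relaxed Trigate limit -> True
```

This is the sense in which Trigate relaxes the fin thickness requirement: the same fin can fail the double-gate criterion but pass the Trigate one.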

The other option for fully depleted MOSFETs is planar FDSOI. In planar FDSOI the device silicon layer thickness on top of the buried oxide must be less than approximately one third the gate length. The challenges of making a sufficiently thin and uniform FDSOI device layer took many years to overcome, and FDSOI has only recently become available. There is a very technologically interesting byproduct of such a thin device silicon layer over an insulator: it is possible to fabricate a buried back gate under the MOSFET channel. The back gate can be biased to tune the MOSFET performance and/or threshold voltage. Modern System On a Chip (SOC) designs require three or even four threshold voltages. For a 20nm bulk-planar MOSFET technology each threshold voltage requires an NMOS and a PMOS threshold voltage mask and implant. Halo implants also strongly influence threshold voltage, and each threshold voltage requires a S/D Ext/Halo mask and set of implants. S/D Ext/Halos require three to four implants each. Tailoring of deep S/D contact implants can also be required. The bottom line is a bulk-planar MOSFET can require 4 to 6 masks and 9 to 15 implants for each threshold voltage! Multiply that by 3 or 4 threshold voltages and you can see what a huge process and cost driver threshold voltage is. Eliminating all of these masks and implants by biasing a back gate offsets the cost of the SOI starting substrate and yields a cost competitive technology with better performance. For the interested reader I have written a cost analysis comparing FDSOI to bulk at 22nm, available here: http://www.icknowledge.com/news/presentations.html (in the spirit of full disclosure, the work was funded by SOI producer Soitec).
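The mask and implant arithmetic in this paragraph can be spelled out in a short sketch. The per-threshold-voltage ranges are the ones quoted above; the function itself is just an illustration.

```python
# Vt-related patterning burden of a bulk-planar process, using the ranges
# quoted in the text: 4-6 masks and 9-15 implants per threshold voltage.

def vt_cost(num_vts, masks_per_vt, implants_per_vt):
    """Total Vt-related masks and implants for a given number of Vts."""
    return num_vts * masks_per_vt, num_vts * implants_per_vt

print(vt_cost(3, 4, 9))    # low end, 3 Vts:  (12, 27)
print(vt_cost(4, 6, 15))   # high end, 4 Vts: (24, 60)
```

Even at the low end, a dozen masks and over two dozen implants go into threshold voltage alone, which is exactly the saving a biased back gate offers.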

FDSOI Versus FinFET
Now that we have reviewed the options for fully depleted MOSFETs let’s take a look at the relative merits of the two approaches.

I have spent a lot of time looking at performance of FinFETs versus FDSOI. I believe it is generally acknowledged that FinFETs offer the highest ultimate speed and drive current in the smallest area. FDSOI on the other hand offers lower power and the best performance per watt. This would suggest that for very high performance applications FinFETs would be the best technology. On the other hand if you are designing a SOC IC for a mobile application FDSOI would be a better choice.

There are a number of published cost comparisons of FDSOI versus FinFETs on bulk showing FinFETs are significantly more expensive. Handel Jones of IBS, for example, has published a 20nm die cost comparison showing a 35% higher wafer cost for FinFETs versus FDSOI. When I look at the specific process steps for a bulk FinFET versus an FDSOI device I don't see this at all. I can only assume that there are some unfavorable yield assumptions in the FinFET portion of the analysis. I haven't spoken to Handel Jones about this, but one of my colleagues asked him about it at ISS this year, and my understanding is that Dr. Jones acknowledged that it was yield driven. I personally find it very hard to believe that after over two years of manufacturing experience on the process Intel wouldn't have over 90% wafer yield by now, and I therefore believe these higher cost estimates are wrong.

There have also been published analyses from IBM claiming that FinFETs on SOI are cheaper than FinFETs on bulk (although in the same presentation they also appear to acknowledge that after accounting for the more expensive SOI wafer the costs are roughly the same). Based on my own analysis and Intel's published 5% cost difference at 22nm, I believe FinFETs on bulk and FDSOI at the 22nm/20nm node have very similar cost.

I have also looked at 14nm costs for FDSOI, FinFETs on bulk and FinFETs on SOI. The following table summarizes some findings based on a detailed analysis of the processes.

|                        | FDSOI | FinFET on bulk | FinFET on SOI |
|------------------------|-------|----------------|---------------|
| Mask layers            | 44    | 46             | 43            |
| Multi patterning masks | 8     | 13             | 11            |
| Total masks            | 52    | 59             | 54            |
| Substrate cost         | High  | Low            | High          |

The FDSOI mask and cut mask count is based on an STMicro presentation that details the masks for their upcoming 14nm process. The Intel and IBM mask counts are based on my own analysis performed with other industry process experts I work with, and include all mask layers required for the full process. In the case of IBM I have removed the eDRAM related masks to create a direct comparison. All three processes are also based on 10 metal layers and 3 threshold voltages. Looking at the mask counts and process details, I don't see a big cost advantage for any of the three processes and don't believe cost will be the differentiating factor. If anything, a straightforward process and materials analysis appears to favor FinFETs on bulk. If any SemiWiki readers believe they understand FinFET processing and disagree with these counts, I would be happy to have a private conversation about it. I am also still working on a detailed cost comparison of the three processes but don't anticipate it will change my conclusions.
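For a sense of scale, the total mask counts in the table above imply only a modest spread between the three options. A quick back-of-the-envelope sketch (my own, not part of the original analysis):

```python
# Relative total-mask-count spread at 14nm, from the table above.
totals = {"FDSOI": 52, "FinFET on bulk": 59, "FinFET on SOI": 54}
base = totals["FDSOI"]
for process, masks in totals.items():
    print(f"{process}: {100 * (masks - base) / base:+.1f}% masks vs FDSOI")
# FinFET on bulk comes out about +13.5%, FinFET on SOI about +3.8%.
```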

What Leading Edge Logic Companies are Doing
To summarize this article so far: FinFETs appear to be the best solution where ultimate performance is the goal with power consumption a secondary concern, and FDSOI appears to be the best solution where power is the primary concern.

We will now examine what companies are actually doing at 14nm. The following table summarizes all of the companies pursuing 14nm logic with their announced process technology.

| Company          | 14nm technology |
|------------------|-----------------|
| Global Foundries | FinFET on bulk, although will also make FDSOI under a manufacturing agreement with ST Micro |
| IBM              | FinFET on SOI   |
| Intel            | FinFET on bulk  |
| Samsung          | FinFET on bulk  |
| ST Micro         | FDSOI           |
| TSMC             | FinFET on bulk  |
| UMC              | FinFET on bulk  |

With many years of development and hundreds of millions or even billions of dollars invested in process development, I would consider the processes in this table to be pretty well locked in at this point.

Looking at this table, FinFET on bulk represents roughly 95% of the expected 14nm capacity with FinFET on SOI and FDSOI only representing about 5%.

Looking at IBM and taking into account their eDRAM on SOI technology, a FinFET on SOI strategy makes perfect sense. IBM needs the highest possible performance plus eDRAM and there is an argument that FinFET on SOI has some performance advantages over FinFET on bulk.

For Intel performance is also a big driver although in recent years they have become more focused on performance per watt than they were previously. FinFETs on bulk make a lot of sense for Intel.

Looking at the major foundries (TSMC, Global Foundries, Samsung and UMC), their technology choice is not as easy to understand. The big business driver for the foundries these days is SOCs going into mobile devices, where FDSOI would appear to be a better process. I have spent a lot of effort trying to understand this issue. I have discussed it with a lot of knowledgeable industry experts, and as best as I can piece together, it appears that FDSOI wasn't ready back when these companies were making technology decisions. There is also a lingering concern about SOI substrate cost and the availability of enough wafers to service an Intel, TSMC or Samsung.

The bottom line is that FDSOI will be in use at ST Micro, with some production volume at Global Foundries, and IBM will utilize SOI for FinFETs. At 14nm, with process development essentially complete, SOI is unlikely to be much more than 5% of production volumes.

10nm and 7nm Forecast

Another interesting question is whether SOI could increase market share at 10nm or 7nm. I believe 10nm development is pretty far along at this point, and companies that have invested so much money and time in FinFETs are unlikely to change after one generation. There is a lot of published work from companies like TSMC on Ge PMOS fins for 10nm, and TSMC has even announced that they plan to use Ge PMOS fins at 10nm. Possibly Global Foundries could ramp up FDSOI if they see a lot of demand, but I think it is unlikely anyone else would switch.

At 7nm there are differing opinions of the viability of FDSOI. There appears to be a clear path to 10nm FDSOI but 7nm is more controversial. At IEDM in December 2013 there was an FDSOI paper and the authors appeared to be confident 7nm could be achieved (keep in mind that the device layer has to get thinner as the gate length scales down). Other FDSOI experts I have spoken to are less optimistic.

Conclusion
Based on what companies are doing today, the installed and planned capacity for the companies, the likelihood of changes at 10nm and the difficulties of scaling FDSOI to 7nm, I expect the leading edge logic market to look like this:

| Node | FDSOI | FinFET on bulk | FinFET on SOI |
|------|-------|----------------|---------------|
| 14nm | 2.1%  | 96.2%          | 1.7%          |
| 10nm | 5.0%  | 93.5%          | 1.5%          |
| 7nm  | 1.6%  | 96.9%          | 1.5%          |

SOI clearly has its place in the mobile market and is also gaining a lot of traction in the RF front end of cell phones, but given the current technology and capacity commitments of the leading edge logic producers, SOI-based processes (FDSOI plus FinFET on SOI) appear likely to peak at only 6.5% of the leading edge logic market.
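The 6.5% peak can be read directly off the forecast table: it is the 10nm sum of the two SOI-based lines. A quick check using the table's own numbers:

```python
# Per-node SOI share implied by the forecast table (FDSOI + FinFET on SOI).
forecast = {
    "14nm": {"fdsoi": 2.1, "finfet_bulk": 96.2, "finfet_soi": 1.7},
    "10nm": {"fdsoi": 5.0, "finfet_bulk": 93.5, "finfet_soi": 1.5},
    "7nm":  {"fdsoi": 1.6, "finfet_bulk": 96.9, "finfet_soi": 1.5},
}
soi_share = {node: round(v["fdsoi"] + v["finfet_soi"], 1)
             for node, v in forecast.items()}
print(soi_share)  # {'14nm': 3.8, '10nm': 6.5, '7nm': 3.1}
```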



Getting the best from MIPI IP Toolbox

by Eric Esteve on 01-31-2014 at 4:07 am

The set of MIPI specifications has grown considerably during the past year. This is a positive point: a larger set of specifications offers a wider choice, so a chip maker can implement a complex specification to differentiate from competitors, or select a specification tailored to a basic architecture and develop a low cost device. Nevertheless, such a wide specification list can be perceived as a burden for an IP vendor. When making investment decisions and building next year's development plan, how should a vendor prioritize within this large specification set? The picture below can help, but we also have to take into account that the success of MIPI has allowed MIPI specifications to spread beyond cellphone, smartphone and tablet applications, into consumer (gaming, digital home), PC peripherals (HE printers), and even video conferencing and gesture recognition!

Selecting the next MIPI specification in which to invest engineering resources is not an easy task, so I try to capitalize on the current market status. What I know for sure is that MIPI DSI and CSI-2, the Display and Camera specifications that go with D-PHY, have been widely adopted (not in 100% of cell phones, but really close to 100%). The next generation, MIPI CSI-3 and DSI-2, will be implemented with M-PHY, so developing M-PHY should make sense (such a decision was probably taken in 2011/2012). That was probably the reason why the leaders of the MIPI IP market had MIPI M-PHY on 28nm in their portfolios last year, and have enjoyed strong sales of M-PHY... to support UFS, the Universal Flash Storage specification jointly developed by the MIPI Alliance and JEDEC!

If you look at the above picture, built by Synopsys from the original MIPI Alliance picture, you can clearly see where MIPI M-PHY could be implemented (thanks to the simplification made to the original picture). You can also understand why it was a wise decision to invest in M-PHY one or two years ago. Let's look at the functional specifications associated with the M-PHY (which is protocol-agnostic by nature):

  • DigRF v4, the specification for interfacing with RF chips (supporting LTE), can be connected directly to the M-PHY.
  • Low Latency Interface (LLI) can also be connected directly to the M-PHY.
  • CSI-3, the Camera Interface, has to be connected to the M-PHY through UniPro, an "agnostic" controller.
  • DSI-2, the Display Interface, also connects to the M-PHY through UniPro.
  • So does Universal Flash Storage (UFS), a specification jointly developed by JEDEC and MIPI to support external Flash devices (cards).
  • USB 3.0 SSIC, jointly developed with USB-IF, allows connecting two USB 3.0 compatible devices directly on a board (no USB cable).
  • M-PCIe, jointly developed with PCI-SIG, supports the PCI Express protocol.

In fact, Synopsys has already enjoyed good sales of M-PHY IP in the past, in partnership with Arteris, who developed the Low Latency Interface (LLI) MIPI specification. But the real growth of MIPI M-PHY IP sales during 2013 can be attributed to the fast adoption of UFS. Synopsys can propose an integrated solution, offering UniPro support as well as support for most of the above mentioned controller (digital) specifications.

The company is continuously investing, and has launched MIPI M-PHY Gear 3 (running at 6 Gbps) A/B along with Type-I and Type-II low-speed capabilities. "The M-PHY's modular architecture allows implementation of a variety of transmitter and receiver lanes to meet a broad range of applications and all modes outlined in the protocol specification. A sophisticated clock recovery mechanism and power efficient clock circuitry are designed to guarantee the integrity of the clocks and signals required to meet strict timing requirements. The DesignWare MIPI M-PHY supports large and small amplitudes, slew rate control and dithering functionality for optimized electromagnetic interference (EMI) performance," said Hezi Saar. By the way, I would like to thank Hezi for the fruitful discussion we had yesterday, and also credit him for the "Toolbox" denomination for the set of MIPI specifications: he was the first to mention it during the discussion, even if I had it in mind just before the talk.

You can still attend the MIPI M-PHY Gear 3 webinar here

Hezi Saar, Product Marketing Manager for DesignWare MIPI IP, Synopsys
Hezi Saar serves as a staff product marketing manager at Synopsys and is responsible for its DesignWare® MIPI controller and PHY IP product line.

As a conclusion, I would note that the undisputed leader of the MIPI IP market since 2011 is Synopsys, and I am waiting for the 2013 IP revenue data. Not to check if Synopsys is still the leader (no doubt about it), but to see the revenue growth rate. Will it be 50%? The MIPI IP market is so healthy that it could even be up to 100%! I remember, back in 2011, when I wrote the first "MIPI IP Survey", predicting that MIPI IP should generate revenue as high as $70 million in 2017... and the feedback I got when sharing this data with certain people. They thought I was, at best, over optimistic (if not crazy). Today I am still on this trend, and I think we can expect this IP market to grow at a 20% CAGR over the next four years.
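The growth math in this conclusion is easy to check with a compound-growth sketch. The $70M 2017 figure is the forecast quoted above; the implied 2013 figure is my own back-calculation, purely illustrative.

```python
# What a 20% CAGR over four years implies for the MIPI IP market forecast.

def cagr_project(value, rate, years):
    """Project a value forward at a constant annual growth rate."""
    return value * (1 + rate) ** years

growth_factor = cagr_project(1.0, 0.20, 4)
print(round(growth_factor, 2))        # 2.07 -> the market roughly doubles
print(round(70 / growth_factor, 1))   # ~33.8 -> implied 2013 size ($M)
```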

Eric Esteve – See “MIPI IP Survey & Forecast” from IPNEST





Untangling snags earlier and reducing area by 10%

by Don Dingee on 01-30-2014 at 6:00 pm

With over 20 years of experience behind it, Synopsys Design Compiler is getting a new look for 2014, and we had a few minutes with Priti Vijayvargiya, director of product marketing for RTL synthesis, to explore what's in the latest version of the synthesis tool.

Previewed today, Synopsys Design Compiler 2013.12 continues to target problems on the rise.


Did Google make a huge mistake with Motorola?

by Beth Martin on 01-30-2014 at 4:33 pm

The news wires are alive today with the story that Google sold their Motorola mobility division to Chinese tech giant Lenovo for $2.9 billion. Google bought Motorola in 2011 for $12.5 billion. Did Larry Page make a $9.6 billion mistake? Probably not.

Although Motorola came with $3 billion in cash, and Google already sold the Motorola set-top box division for $2.4 billion, there is still a $4.2 billion difference between what Google paid and what they got [12.5 - (3 + 2.4 + 2.9)]. Plus, the Motorola division was losing money. So why was this still a good buy and a good sell?
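The bracketed arithmetic works out as follows, using just the numbers from the paragraph above:

```python
# Net cost of the Motorola adventure, in $ billions.
purchase = 12.5                  # paid for Motorola in 2011
recovered = 3.0 + 2.4 + 2.9      # cash on hand + set-top box sale + Lenovo sale
net_cost = purchase - recovered
print(round(net_cost, 1))        # 4.2
```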

Because Google got about 17,000 Android patents in the deal. At the time of the Motorola acquisition, both Apple and Microsoft were aggressively buying up Android patents, so Google needed to secure its turf. (Of course, the patent war has kept Google’s lawyers busy with attention from the FTC and at least one successful suit brought by Microsoft.) Those patents bring in billions in licensing fees each year.

It also got a playground and proof of concept for development of a high-end, pure Android phone. (I should mention here that a new Moto X is on its way to me as we speak.) Additionally, Google retains the Motorola Advanced Research Group and Project Ara. Project Ara, a modular phone platform, “will do for hardware what the Android platform has done for software,” according to Paul Eremenko of Google. They probably also kept Motorola’s smartwatch technology, but I couldn’t confirm that. So Google hangs onto the things that have strong growth potential.

Finally, the sale of Motorola paid dividends in other ways. The Motorola purchase caused tension with Google’s semiconductor partner, Samsung, who makes the Chromebook and owns 32% of the Android phone market. This tension over Google entering the phone market gave Google some serious leverage in negotiating the recent patent cross-licensing deal with Samsung. The deal includes provisions to bring Samsung’s Android back in line with Google’s and to feature Google’s suite of apps on their phones.

Google’s adventure with Motorola demonstrates something else: Google is much more than a search engine. Google is committed to expanding its device portfolio, which it has largely done as a system company up until now. But given a series of other acquisitions, Google is also positioned to become a player in the fabless semiconductor space. More on that later.



First Verdi Interoperability Apps Developer Forum

by Paul McLellan on 01-30-2014 at 11:47 am

Way back when SpringSoft was still SpringSoft and not Synopsys they launched Verdi Interoperability Apps (VIA) and an exchange for users to share them open-source style. I wrote about it back in 2011 when it was announced. Today, Synopsys announced the first developer forum for VIA. It will be held at SNUG on Wednesday, March 26, 2014 from 3:00pm to 7:00pm at the Santa Clara Convention Center.

The VIA Developers Forum is a no-charge event recommended for SoC design and verification engineers and managers to discuss Verdi’s open extensible debug platform and how to take debug innovation and verification productivity to the next level. Dinner and drinks are included. There will be keynotes, presentations from customers on debug innovation using VIA, plus opportunities to meet other users and teams using VIA.

The call for papers for the forum is open now through February 21st. Send abstract submissions to via@synopsys.com. Authors will be notified about acceptance by February 28, 2014.


There are a lot of areas where users might want to extend the Verdi functionality through (via?) VIA. Probably the biggest is design rule checking. Companies often have proprietary rules that they would like to enforce but no easy way, until now, to build a qualification tool. Or users might want to take output from some other tool and annotate it into the Verdi GUI rather than trying to process the raw data directly. These small programs that run within the Verdi environment are known as Verdi Interoperability Apps, or VIAs.

In addition to allowing users to create such apps, there is also a VIA exchange that allows users to freely share and reuse them. So if a user wants to customize Verdi in some way, it may not even be necessary to write a script or some code, since someone may already have done it. Or at least done something close that might serve as a good starting point.

Anyone attending SNUG is welcome to attend. If you are a Verdi user but not eligible to attend SNUG (you work for Cadence, for example) then you can still attend. They are still working out how you will register. Presumably there will be a page on the VIA Exchange website.

SNUG Silicon Valley registration opens February 13, 2014. More information here.

Learn more about VIA here. Or watch the introductory webinar.




Wanna Build a Bitcoin Miner: GlobalFoundries Will Manufacture it For You

by Paul McLellan on 01-30-2014 at 11:00 am

You may know a bit about Bitcoin, the digital currency. One part of the system is “mining” new bitcoins, analogous to mining new gold when we were on the gold standard, creating “money” out of thin air but at a cost of doing the actual mining.

Here is an interesting aside. When I lived in France the father of one of my daughter’s friends was a gold prospector for an Australian mining company. I asked him what they did about the volatility of the price and he said they didn’t care. They would borrow real gold from a bank, the actual physical stuff, and sell it. Then they used the money to build and operate the mine. And they paid back the gold as gold, the physical stuff, so they didn’t much care how the price moved in the meantime.

Anyway, back to bitcoin. The difficulty of bitcoin mining is set to make sure that new coins don't get found too fast. Mining consists of computing enormous numbers of cryptographic hashes, and mining difficulty has been going up by a factor of 10 every 3 months. And you thought Moore's law was pretty steep.

Originally people used powerful PCs to mine new coins. But it is an arms race to build faster and faster machines. With the difficulty (and hence the price) going up, people were ordering high powered machines on the basis that the price made it a no-brainer, only to find a month later that the price had dropped so much their machine was already useless. If you were lucky you might pay $5,000 for a machine, mine $25,000 worth of coins in the first few weeks and then find your machine was a boat anchor. Or you might pay $5,000 and have it be useless before you recovered the price. Currently, using a high end PC, it takes 5 years to mine one coin; that is how hard it has become. That won't even cover the electricity.
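The payback problem described above can be modeled with a toy sketch, taking the 10x-per-quarter difficulty growth quoted earlier; the dollar figures are the illustrative ones from this paragraph, not real market data.

```python
# Toy model: a fixed-speed miner's revenue shrinks as network difficulty
# grows 10x per quarter, so almost all of its earnings come up front.

def quarterly_revenue(initial_revenue, quarters):
    """Revenue per quarter if difficulty (and competition) grows 10x per quarter."""
    return [initial_revenue / (10 ** q) for q in range(quarters)]

machine_cost = 5000.0
revenue = quarterly_revenue(25000.0, 4)
print(revenue)                      # [25000.0, 2500.0, 250.0, 25.0]
print(sum(revenue) - machine_cost)  # the lucky buyer's lifetime profit
```

The unlucky buyer just shifts the same curve a quarter or two to the right, and the machine never pays for itself.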

The next step was to use GPUs and arrays of GPUs to do the mining. That failed to be competitively fast once people switched to making custom FPGA based miners. And what is faster than FPGAs? That would be 28nm ASIC silicon.

This week CoinTerra announced the first terahash-per-second product: "With blazing performance approaching two terahashes per second, the TerraMiner IV is the first self-contained Bitcoin mining solution to smash the one terahash per second barrier and with its $5999 price point it also delivers a dollar per gigahash proposition unmatched in the marketplace today."


The machine is based on CoinTerra's new GoldStrike I processor, which has a peak performance of 6500 gigahashes per second. It is a 28nm chip built by GlobalFoundries. Another company, Butterfly Labs, is also building miners using GlobalFoundries 28nm silicon. Forbes has a report that TSMC, GlobalFoundries and AMD sold over $200M of silicon for bitcoin miners. Each TerraMiner contains 2 boards with 2 chips each.
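The "dollar per gigahash" claim can be checked from the numbers quoted above (treating the ~2 TH/s figure as exactly 2000 GH/s for the arithmetic):

```python
# $/GH implied by the TerraMiner IV's quoted price and hash rate.
price_usd = 5999
hashrate_ghs = 2000                        # ~2 terahashes per second
print(round(price_usd / hashrate_ghs, 2))  # ~3.0 dollars per gigahash
```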

And how fast did GlobalFoundries build it? As Asim Salim of OpenSilicon (who did the physical design) said: "Manufactured on GlobalFoundries 28nm technology node, the silicon was delivered in a special custom package with testing completed in an unprecedented cycle time of 49 days from tapeout."

I’m guessing the economics are such that 28nm ASIC is about as fast as you can reasonably build right now, so the difficulty will stabilize. It certainly won't keep improving in performance at 10X every three months. But it will be competitive. GlobalFoundries has at least 4 other engagements. So what is the next step in the arms race for faster and faster hardware? Well, GlobalFoundries already has customers talking to them about 20nm and 14nm processors. Maybe I've just found a good niche for Intel's 14nm foundry business! If anyone knows how to build silicon for fast processors it is Intel.

CoinTerra story here and here.
Butterfly Labs story here.
Details on GlobalFoundries 28nm HPP process here.




If requirements ask for it, it had better be there

by Don Dingee on 01-29-2014 at 8:00 pm

Engineers are known for their attention to detail and precision in thinking, but sometimes still struggle during compliance audits. This is especially true the longer a list of requirements becomes, especially unstructured lists kept in spreadsheets and on Post-It notes.

It gets even more complicated, because in defense circles with standards like DO-254, one has to understand the process and “the customer”. Continue reading “If requirements ask for it, it had better be there”


A Brief History of Qualcomm

A Brief History of Qualcomm
by Paul McLellan on 01-29-2014 at 12:48 pm

Qualcomm is the largest fabless semiconductor company in the world. If you have a smartphone, there is a good chance you have a Qualcomm chip in your pocket. It is headquartered in San Diego with offices pretty much everywhere.

Qualcomm’s roots are in Linkabit, which was founded by Irwin Jacobs and Andrew Viterbi. They, along with other Linkabit alumni, founded Qualcomm in 1985. The story I heard is that one of the motivations was that Viterbi, who invented the Viterbi decoder in the 1960s (it is now widely used in cell-phones and disk drives), felt he hadn’t really made money from licensing the decoder given how widespread it was, and wanted to create a company that could license technology much more profitably. Whether that story is true or not, it is certainly one aspect of how things played out.

Qualcomm started with a CDMA-based satellite radio system for truckers called OmniTracs, which remained part of Qualcomm until late last year. You might remember that in that era they also supplied the Eudora email client, which was part of OmniTracs but also available separately.

A couple of years later they developed all the technology for CDMA cell-phones and entered both the base-station business and (in a joint venture with Sony) the cell-phone business. They also sold CDMA chips to other manufacturers and licensed CDMA technology to other chip makers. They were pretty much a one-stop shop for CDMA. In 1993 the US Telecommunications Industry Association adopted Qualcomm’s CDMA as an industry standard. Initially Sprint and Verizon were both using CDMA while most other operators were using GSM.

Qualcomm is today around a $25B company, split into two main parts: Qualcomm Technology Licensing, the patent licensing division, and Qualcomm Technologies Inc, which runs engineering and, in particular, the fabless semiconductor business.

I negotiated a technology licensing deal with Qualcomm around 1997 when I was at VLSI Technology. Just to show how fast the company has grown, given that it is $25B today, here is the story. We had been unable to get what we considered reasonable terms and had walked. Qualcomm wanted a royalty from us, which was reasonable. But they also wanted a royalty from anyone we sold chips to, for use of the same patents we had already licensed. We felt that put us at too severe a disadvantage competing against Qualcomm’s own chipsets. At the end of Q2, Qualcomm needed $2M to make their quarter. They significantly lowered the royalties and caved on some other conditions, provided we could pay them $2M in non-refundable pre-paid royalties (so they could recognize the revenue that day; this really was the end of the quarter). VLSI Technology was in the CDMA business as well as GSM, where we already had a strong presence. My guess is that the license was inherited first by NXP and then by the now-defunct ST-Ericsson. Anyway, the point is that back then $2M was make or break for their quarter; now that is about what they make in an hour.

In the late 1990s, Qualcomm got out of both the base station and handset businesses and focused completely on technology licensing and fabless SoC development.

3G and 4G wireless air interface standards all depend on various aspects of CDMA and so require patent licenses from Qualcomm, which has continued to innovate and develop more advanced CDMA technologies ahead of the competition. I believe it is not possible to build a cellphone SoC without a patent license (well, unless you are in China, where you can claim the patents are not violated). Just this week Qualcomm acquired a further portfolio of patents from HP, including the Palm patents and others.

Since 2007, their SoCs have been sold under the Snapdragon name. Qualcomm has an ARM architectural license and designs its own CPUs using the ARM instruction set; Krait is the latest incarnation. They also design their own graphics processor (GPU), called Adreno, and digital signal processor (DSP), called Hexagon. They recently purchased Arteris’s technology and engineering group, whose network-on-chip (NoC) technology they had been using.

Snapdragon chips integrate the Application Processor (AP) and modem on one die, unlike many competitors’ designs that use two separate chips. More recent Snapdragon chips also have on-chip WiFi and Bluetooth. They are used in a huge variety of cell-phones, including the Samsung Galaxy line, Xiaomi handsets and other market leaders. Although Apple builds its own Ax application processors, it uses Qualcomm modems.

Almost all their chips are built in the TSMC 28nm LP process, although they are sampling chips in TSMC 20nm too and will presumably ramp those to volume during 2014.


High Quality PHY IPs Require Careful Management of Design Data and Processes

High Quality PHY IPs Require Careful Management of Design Data and Processes
by Pawan Fangaria on 01-29-2014 at 10:05 am

In the last few years, IP design has grown significantly compared to the rest of the semiconductor industry. New IP start-ups are opening across the world, particularly in India and China. Amid this rush, I wanted to understand the actual dynamics pushing this business and whether all of these IPs follow quality standards. Quality is a must, considering IP integration into high-end SoCs. I found a very nice opportunity to talk with Ritesh Saraf, CEO at OmniPhy. OmniPhy develops specialized PHY IPs for top-tier companies, including SerDes and HDMI 2.0, Ethernet, USB, PCIe and SATA PHYs.

What I learned from Ritesh is that there are a few major reasons for the growth of IP business:
a) The number of protocols, their complexity, and the required speed of execution have all grown. This has forced SoC vendors to source IPs from third parties and integrate them into their SoCs rather than develop everything themselves. Only a few players still develop IPs in-house.
b) Emerging economies like China and India have proved their mettle in making successful IPs at lower cost. There is also good availability of talent in these regions; one can find designers with 6-8 years of experience in AMS design, which is generally difficult in the USA. This often tips the scales in a “buy vs. make” decision.

Earlier, SoC vendors were satisfied with off-the-shelf IPs from third-party vendors. But in recent times they are also demanding differentiation and customization at a faster pace.

Considering the gold rush towards developing IPs, with new entrants, short cycles and the pressure to deliver at lower cost, I was concerned about the quality of these IPs: is it being sacrificed somewhere? It was interesting to learn from Ritesh that to lower the cost of an IP, vendors may cut corners somewhere in the development process, the verification process, the tools used for design management and so on. It’s important to have designers experienced in taking designs through production; otherwise there can be failures either before production (forcing re-spins) or later in the field.

So I wanted to know what OmniPhy does to maintain the quality of their IPs. Ritesh described in detail various aspects of their quality process, such as controlled design management to use the correct versions of cell views in the entire design flow, diligent design reviews, various levels of testing and signing-off through a comprehensive checklist of procedures with extensive rules. They require their designers to have extensive experience in order to make effective decisions during the design process.

Ritesh said the use of effective quality tools makes a difference. For digital design, they have been using open-source tools such as SVN for design management. But analog designs have a different flow: development and verification proceed hand-in-hand between different designers on the team, and that needs much tighter control of design revisions. In an AMS design there are both analog and digital designers, and they think differently. So there is a need for an intelligent design management tool that eases the pressure of check-in/check-out synchronization, handles sharing of cell views between designers, and ensures that the correct views are used at higher levels of the design, while being seamlessly integrated into the AMS design flow. A lot of bugs appear during top-level verification, and procedural diligence is needed to flush them out.

Ritesh also talked about the earlier days of small analog designs (such as an IO or ADC with 10-20 cells), when designers used to manage them manually, something not possible today. Not using a good design management (DM) tool is a big risk. OmniPhy has analog designs with thousands of cells, and 20-25 designers working on one design at a time. It’s imperative to have an integrated DM and control solution for analog IP design to ensure quality.

OmniPhy uses ClioSoft’s SOS for design management. SOS is a good vehicle for controlling the design flow: the verification team does not need to wait; they can check out the DUT from the system, get all the information about the changes (who, what, why…) and continue with the verification process. The tool tracks which changes need verification and the resolution of all issues. At tape-out time, management can use SOS to freeze the design (i.e., make the completed cells read-only); any change after that is based on a management decision. In other words, it’s a nice check on creeping elegance! The DM system from ClioSoft provides a greater level of confidence in the state of the design.

Ritesh was kind enough to share some of the screenshots of their actual designs and flows.


[Analog PHY IP – Data Management complexity]

An HDMI 2.0 PHY design like the one above has 8-10 schematic designers, 8-10 layout designers and about 4 verification engineers.


[Data Flow for AMS PHY IP]

The analog design flow uses Cadence Virtuoso and ClioSoft SOS, whereas the digital flow uses the open-source SVN version control system. The digital flow is easy to maintain because there is a clear distinction between development and test, but for analog design an integrated DM is a must.

This is the top-level assembly of a PHY. All custom design views are managed in ClioSoft SOS via DFII (the integration of SOS into Virtuoso). Digital PnR blocks are checked into the same library. The DM system ensures that the blocks at the top level have passed final verification and that the top-level design data is in a stable state.

Considering this complexity in designing IPs, I asked Ritesh whether the ROI the DM tool provides justifies its cost. Ritesh happily said, “It doesn’t cost at all, considering the savings in re-spins. If you talk in monetary terms, it’s just ~2% of our total EDA spend.”

Also Read

ClioSoft at Arasan

Data Management in Russia

Managing Multi-site Design at LBNL


The Biggest Supplier in the Biggest Mobile Market is a Company You Have Never Heard Of

The Biggest Supplier in the Biggest Mobile Market is a Company You Have Never Heard Of
by Paul McLellan on 01-29-2014 at 10:05 am

If you live in the Bay Area it is easy to conclude that Apple has huge market share and is in a very strong position in the mobile market. Everyone has an iPhone.

But the truth is less flattering. Yes, Apple continues to make large profits, and it made record iPhone shipments. However, it shipped only 51M units, while Wall Street expected nearly 55M. The problem is that Apple is growing much more slowly than the overall market: Apple grew 7% year-on-year in Q4, but the market grew by about 50%. Apple’s market share is down to 16% in Q4, from 22% in Q4 of 2012. With (probably) no new models for a couple of quarters, I think that market share will continue to shrink.

The other big company is Samsung. It is #1 in unit shipments at around 90M, not far off twice Apple’s. While not as profitable as Apple due to its product mix, it is still a very profitable business. The second-tier players like Sony, LG and Nokia are all struggling too, not cheap enough to be competitive at the low end but without the fashion cachet of an iPhone or Samsung Galaxy. Meanwhile, other Chinese names (Huawei, ZTE and Lenovo) continue to do well.

The biggest market for smartphones is China, and the growth is all at the low end. Yes, Apple finally has a deal with China Mobile, but its phones are really too highly priced to win the kind of market share they have in the West. Samsung used to be #1 in pretty much every market (it is #1 worldwide), but in China last quarter it looks like the leader is Xiaomi. That’s right: a company you have almost certainly never heard of overtook Samsung in the biggest smartphone market of all. It was founded in 2010 and only released its first smartphone in 2011. This is unit sales, of course. Samsung has products at every price point and presumably sold its share of high-end Galaxy phones, which compete pretty much head-on with the iPhone, so it probably made the most money in China. That is a Xiaomi phone above (the first two characters are xiaomi, “small rice”; the next two are shouji, “hand machine”, which is what they call mobile phones; end of your Chinese language trivia for the day).

The reality is that the market is pretty mature. Android has leveled the playing field, so the user experience is very similar on all phones. The growth numbers in dollar terms are slowing too, which reflects both the maturing of the market and the transition to cheaper phones. Roughly speaking, the market grew 40% last year and is expected to grow a little over 20% this year, half the rate.

Mobile will continue to be the biggest market for chip suppliers. I think the Internet of Things (IoT) is at the overhype stage right now. Sure, Google just bought Nest for a lot of money, and wearables were the big thing at CES. But these are not going to sell in the billions of units in 2014 (over a billion smartphones were sold in 2013), and they are not going to sell at $600 price points like the high end of the smartphone market.

2014 should be an interesting year.


More articles by Paul McLellan…