
Semicon: Multiple Patterning vs EUV, round #1

by Paul McLellan on 07-21-2013 at 9:01 pm

If you want to know the state of play in lithography, there is no better place than the special session on lithography at Semicon West. This year was no exception. The session was given the punchy title “Still a tale of 2 paths: multi-patterning lithography at 20nm and below: EUVL source and infrastructure progress”.

In the blue corner for this fight were Stephen Renwick from Nikon (they make regular optical steppers) and Ben Rathsack of Tokyo Electron who is an expert on the rapidly-getting-less-esoteric technology called Directed Self Assembly (DSA). They represent the non-EUV side of things. In a second blog we’ll look at the red corner (or I suppose that should be violet corner) and what the EUV protagonists had to say.

Stephen talked about extending Argon Fluoride (ArF) lithography, which for practical purposes is the only game in town for now. As he pointed out, you can still count the number of EUV tools worldwide on your fingers. ArF immersion is the lion’s share of the business, but further increasing the resolution requires things like assisted pitch division, complementary lithography or more exotic solutions like DSA.

One interesting little fact: it turns out that 90% of the masks are used for only 20% of the wafers. Or, to put it the other way around, 80% of all production is done using just 10% of the masks. Since CD uniformity requirements keep tightening, this is a challenge when boutique masks keep being thrown into the mix.
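As a quick sanity check (my own arithmetic, not from the talk), the two ways of stating that statistic are consistent with each other:

```python
# 90% of the masks print only 20% of the wafers, so the remaining
# 10% of the masks must print the other 80%.
boutique_masks = 0.90    # share of masks used on only a few wafers
boutique_wafers = 0.20   # share of wafer output those masks account for

workhorse_masks = round(1 - boutique_masks, 2)    # 0.1
workhorse_wafers = round(1 - boutique_wafers, 2)  # 0.8

print(f"{workhorse_wafers:.0%} of wafers run on just "
      f"{workhorse_masks:.0%} of the masks")
```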

Double patterning is cheaper than a slow EUV machine, and to date they are all slow (under 50 wafers per hour). True double patterning (self-aligned) can do spacer-based pitch halving, and even pitch/6 and pitch/8. So from a technology point of view, ArF can deliver the required resolution for the foreseeable future.


Complementary lithography is where we use pitch multiplication to lay down a grating of complete lines, and then use a cut mask to split them up into the actual lengths required. Obviously this requires very restricted design rules, but we already have a lot of that at 20nm and below anyway. The cut mask is the highest-resolution mask (no pitch multiplication) but is also very sparse, so technologies like e-beam or EUV might be applicable.
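As a toy illustration (my own sketch; the pitches and cut positions are invented, not from the presentation), the grating-plus-cut-mask idea looks like this:

```python
# Toy model of complementary lithography (illustrative only):
# pitch multiplication lays down continuous lines on a fine grid,
# then a sparse cut mask breaks each line into the wire segments
# the design actually needs.
base_pitch = 80          # nm, single-exposure 193i pitch (assumed)
multiplication = 4       # spacer-based pitch division factor
line_pitch = base_pitch // multiplication   # 20 nm final pitch

# Continuous "grating" lines at the multiplied pitch.
lines = [n * line_pitch for n in range(8)]  # x-positions of 8 lines

# Hypothetical cut sites: line index -> list of (y_start, y_end) cuts.
cuts = {0: [(30, 40)], 2: [(0, 10), (50, 60)]}

for i, x in enumerate(lines):
    status = f"cut at {cuts[i]}" if i in cuts else "uncut"
    print(f"line {i} at x={x:3d} nm: {status}")
```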


With pitch multiplication it turns out that the limiting factor is not resolution but overlay. About 2nm is needed; today we are at 4nm, but getting there. Machine-to-machine overlay is what counts (different steps run in different steppers); single-machine overlay is already there.

On the 450mm transition, Nikon intends to ship early learning tools based on 193nm immersion for 450mm wafers, with shipments for production in 2017.

Ben Rathsack talked about building a collaborative ecosystem for DSA. DSA is so new that I’d better explain what it is. Take two polymers with the right properties and join them with a covalent bond. If you then bake the mixture so it solidifies, polymer A and polymer B separate themselves into distinct areas. And if the polymers have the right properties, you can remove one of them and use the other as a mask for a process step. But if you just pour this mixture on a wafer, you just get a mess.


That is where the D in DSA, which stands for directed, comes in. If, before you do this, you lay down guide structures (using traditional 193i lithography), then the polymers will align nicely into a grating at a much finer pitch. Most of this work is still going on in academia, but it is advancing rapidly.

Lines are good but you also need contacts/vias. Making small holes with self-healing approaches seems to work well too, but, as Ben pointed out “pitch pays the bills”. You need to be able to place the holes close together. Various approaches are being tried such as hole doubling (put the polymers into the right shaped rectangular slot and you get two circular vias) or making the vias in guide gratings so they form a line of contacts.

So DSA has advanced a lot since last year’s presentation. There is still a lot of work to be done, especially on defect reduction, silicon fin integration (for FinFETs), sub-20nm holes, and eventually making it work with EUV.

The really attractive things about DSA are:

  • doesn’t require double, triple… patterning
  • novel materials but existing process equipment
  • many fewer process steps than quadruple patterning
  • CD uniformity is entirely controlled by the polymer formation (1 step) whereas with multiple patterning many of the steps are very critical

Ben’s presentation is here. I’m not sure if you can access it if you didn’t attend Semicon or aren’t a SEMI member or don’t know the magic word. Unfortunately Stephen’s presentation doesn’t seem to be online.

Round 2 now up.


A Brief History of VLSI Technology, part 2

by Paul McLellan on 07-21-2013 at 9:00 pm

Part 1

VLSI’s business grew healthily but it never threw off enough cash to fund all the investment required for process technology development and capital investment for a next generation fab. They made a strategic partnership with Hitachi covering both 1um process technology and a significant investment, which meant that VLSI could build its second fab in San Antonio TX and had a competitive 1um process to run in it. At about the same time the San Jose fab was upgraded from 5” to 8” wafers.

The PC chipset business was very successful but it was clear that it would eventually become a low margin business due to competition from Asia, and probably would finally be owned by Intel who could design more and more functionality to work intimately with their own next-generation microprocessors. VLSI decided to invest in system knowledge for the GSM cellular standard that was starting to get off the ground, as well as some other attractive end markets such as digital video.

Also, in that era, Apple decided to build the Newton. They selected Acorn’s RISC processor and insisted it was spun out as a separate company. So ARM was created, with Apple, Olivetti (which by then owned Acorn) and VLSI as the owners. VLSI’s expectation was that when the Newton took off they would be building all the ARM silicon for Apple. This was still an era in which a 32-bit processor was a standalone chip. Of course, it turned out that the Newton didn’t take off and was canceled, and ARM had to create a new business model, licensing their processor cores to any semiconductor company that wanted them. Since only a few semiconductor companies had their own internal microprocessor development teams, this turned out to be a lot of companies. At the time, VLSI was not happy about having so many competitors in the ARM market, but once they discovered that they no longer had to convince customers it was safe to use a microprocessor the customer had never heard of, that business grew strongly.


Meanwhile, the market for second-generation (digital) GSM phones exploded. European companies, especially Nokia and Ericsson, were the most successful handset manufacturers. At one point Ericsson was 40% of VLSI’s entire business. VLSI also built up a GSM chipset business selling to second-tier manufacturers who didn’t have the system knowledge to develop GSM baseband chips internally.

In 1991 it was clear that VLSI was really two companies that should already have separated: an EDA company with some of the best VLSI design tools on the market, and an ASIC/ASSP company manufacturing silicon. So the design tool business was spun out as Compass Design Automation.

Compass struggled to shake off the perception that it wasn’t really independent of VLSI, and as a result it attracted a relatively thin ecosystem of semiconductor companies willing to fully support it with ASIC libraries. But Compass also had its own libraries, and by standardizing on the Passport design rules, which pretty much any fab could manufacture, it created a sizable library business of its own with standard cells, memory compilers and other foundation IP.

Compass grew to nearly $60M but it was never profitable. It had a fully integrated suite of design tools in an era when the large EDA companies, which had grown through acquisition, had educated the market to pick best-in-class point tools and use internal CAD departments to do the integration. So Compass was swimming against the tide, and despite the fact that every VLSI ASIC and every VLSI standard product was designed exclusively using Compass tools and libraries, it never shook off the perception that it was not leading edge. CAD groups were reluctant to standardize on Compass, at least partially because then they would have nothing to justify their existence.

Eventually Compass was sold to Avant! who were mostly interested in the library business to complement their own software business. Of course, Avant! in turn was acquired by Synopsys. The software part of the business by then was largely based in France and the entire group in France was hired by Cadence en masse where many of the individual engineers still work today.

VLSI’s semiconductor business, both the ASIC business and the ASSP business grew to about $600M. There was a focus on wireless (not just GSM, VLSI had a CDMA license from Qualcomm too), digital video, PC graphics and an ASIC business that was diversified into many separate segments.

In 1999, Philips Semiconductors (now called NXP) made a hostile bid for VLSI Technology. Philips was a very bureaucratic company and struggled to bring processes to market quickly along with the required libraries. As the ASIC business got more and more consumer oriented, this became a big problem. VLSI’s lifeblood was ASIC and they were much quicker at getting designs going in new process generations, so Philips figured that acquiring VLSI would shake up their internal processes and also give them a network of leading design centers (by then renamed technology centers). After some back and forth negotiation, eventually VLSI was acquired by Philips Semiconductors for just under a billion dollars and it ceased to be an independent company.

VLSI’s Wikipedia page is here.


New Book on Design Constraints

by Paul McLellan on 07-20-2013 at 10:18 pm

There is a new book out from Springer. The subtitle is actually a better description than the title: the subtitle is A Practical Guide to Synopsys Design Constraints (SDC) but the title is Constraining Designs for Synthesis and Timing Analysis. The authors are Sridhar Gangadharan of Atrenta in San Jose and Sanjay Churiwala of Xilinx in Hyderabad. The final chapter on Xilinx extensions to SDC was written by Frederic Revenu (not surprisingly, from Xilinx). Given the backgrounds of the authors, the book is equally applicable to SoC/ASIC designs and to FPGA-based designs.

As a totally off-topic aside I have actually been to Hyderabad several times. At Compass I set up a remote development group with a company called CMC (now part of Tata), which was originally set up to service IBM installations after IBM India was kinda nationalized in the 1970s. The weather is great at some times of year but at other times of year it is insanely hot.

The book is, as it says on the cover, a hands-on guide to timing constraints in integrated circuit design. You will learn to maximize performance of IC designs by specifying timing requirements correctly all within the context of SDC, which is, of course, the de facto standard format for specifying constraints. The book:

  • Provides a hands-on guide to create constraints for synthesis and static timing analysis (STA), using SDC
  • Explains fundamental concepts around SDC constraints and its application in a design
  • Explains SDC command syntax, semantics and options
  • Includes key topics of interest to a synthesis, static timing or place & route engineer
  • Explains which constraints command to use for ease of maintenance and reuse, given that there are often several options possible to achieve the same effect on timing
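For readers who have never seen SDC, here is a minimal, hypothetical constraint file in the flavor the book covers. The clock name, period, and port names are invented for illustration (not taken from the book), and a second clock is defined just so the false path has something to reference:

```tcl
# Define a 500 MHz clock on the port "clk" (names and values illustrative).
create_clock -name core_clk -period 2.0 [get_ports clk]

# A slow test clock on a hypothetical "tck" port.
create_clock -name test_clk -period 10.0 [get_ports tck]

# External input data arrives up to 0.6 ns after the launching clock edge.
set_input_delay -clock core_clk 0.6 [get_ports data_in]

# Outputs must be stable 0.8 ns before the downstream capturing edge.
set_output_delay -clock core_clk 0.8 [get_ports data_out]

# The functional and test clock domains never exchange timed data.
set_false_path -from [get_clocks core_clk] -to [get_clocks test_clk]
```

Even in five lines you can see the book’s point about maintenance: there are usually several ways to get the same timing effect, and choosing the wrong command makes the constraints fragile as the design evolves.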

The chapters in the book are:

  • Introduction
  • Synthesis basics
  • Timing analysis and constraints
  • SDC extensions through Tcl
  • Clocks
  • Generated clocks
  • Clock groups
  • Other clock characteristics
  • Port delays
  • Completing port constraints
  • False paths
  • Multi-cycle paths
  • Combinational paths
  • Modal analysis
  • Managing your constraints
  • Miscellaneous SDC commands
  • XDC: Xilinx extensions to SDC

A complete contents listing including all the subheadings is available as a pdf.

As you can see from the chapter titles, the book is pretty comprehensive, with a practical emphasis on giving the practicing engineer the knowledge to do a better job. ASIC and FPGA design flows have a heavy focus on verifying the functionality of the RTL; however, an equal emphasis on validating the timing constraints has been missing. Constraint issues can cause unpredictable design schedules, increase iterations between logical and physical design, and result in late-stage ECOs. The quality of the constraints has a direct relationship to the quality of the silicon. This book has been written to address that.

Unfortunately the book is priced like a textbook with a limited audience, which of course it is. The list price from Springer is $119, but Amazon has it for $99.82, or you can get it on Kindle for only a little less at $89.99. There is a free sample of chapter 2 available as a pdf.



    The DSP is dead! Long Live the DSP… IP core!
    by Eric Esteve on 07-20-2013 at 9:05 am

    Trying to trace the DSP’s birth as a standard IC product, you come back to the early 80s, when a certain computer manufacturer named IBM asked a certain semiconductor giant (at that time) named Texas Instruments whether they could turn a lab concept, the Digital Signal Processor, into a standard product that IBM could buy from TI, just as they used to buy TTL or DRAM. At that time IBM was TI’s number one customer, so TI said: “yes, Mr. IBM, I will develop the TMS32010 to your clever specification”. That was the beginning of a long story, still going on. I should say that the concept of digitally processing “signals” after their analog-to-digital conversion is alive and healthier than ever: you can count several DSPs in each smartphone and cell phone, and in many tablets as soon as a modem is integrated…

    To tell you more about this DSP product line at TI: the product quickly moved from NMOS to CMOS technology, becoming the TMS320C10, C25, C50 and so on. When I joined TI in the early 90s as an ASIC FAE, I had the feeling of being part of a certain technical “aristocracy”… except that the DSP application engineers had a deeper scientific know-how! But DSP did not sell in very large volumes; it sold to a large customer base, the type of business model which is not really TI’s cup of tea. But the product was available, some (crazy?) analysts were predicting huge market adoption to come, and TI decided to market it.

    I am sure that you are clever enough to foresee the next step: Ericsson, then Nokia, adopted TI’s DSP, pushing TI to integrate it into an ASIC technology to minimize both chip count (good for the BOM cost) and power consumption (good for the cell phone’s battery life). Selling a standard product integrated together with logic gates, memory and potentially a CPU core looks obvious today (we call it a SoC), but it was a revolution in the mid-90s.


    Why such a long introduction? Because TI has always claimed to be the DSP market leader, but this leadership was based on a trick: TI was counting as “DSP” any ASIC integrating a DSP… core! TI’s competitors, Analog Devices and Motorola, were selling DSP standard parts (low volumes, multiple customers), while TI was selling to a handful of customers, on top of the standard parts, very large volumes of DSP basebands developed on ASIC technologies, and so easily won the race. In fact, the DSP standard-part market probably represents almost the same percentage of semiconductor shipments today as it did 20 years ago. Not a dead market, but not really as growing and exciting a market as wireless… especially when you take into account the emergence, 10 years ago, of the DSP IP core, integrated into a SoC.

    Starting in 1995 the first IP vendors were emerging (even if the ARM CPU core had started to sell several years earlier), and at the end of the 1990s a company named DSP Group was selling the piece of IP that rang in the decline of the DSP standard product: the DSP IP core. About 15 years later, the company’s name has changed to CEVA, the DSP IP portfolio has dramatically enlarged, and the processing power has gone up to giga-instructions per second, compared with a couple of dozen MIPS for a standard product in 1995. To date, more than 4 billion CEVA-powered chips have been shipped worldwide, for a wide range of diverse end markets. In 2012 alone, CEVA licensees shipped more than 1 billion CEVA-powered products. Long live the DSP IP core! Does this mean that the DSP is dead as a product? Not necessarily, but you can guess that an OEM developing a DSP-intensive system for any market segment reaching 10 or 100 million units will quickly become a chip maker. The company will develop an ASIC (or subcontract the development) integrating a DSP IP core, instead of buying a standard product which is by nature unoptimized for power and/or performance for the targeted application. Just take a look at CEVA-powered products.

    A company like CEVA enjoys more than 200 licensees and 300 licensing agreements signed to date, and its comprehensive customer base includes most of the world’s leading semiconductor and consumer electronics companies. Broadcom, Icom, Intel, Intersil, Marvell, Mediatek, Mindspeed, Mstar, NEC, NXP, PMC-Sierra, Renesas, Samsung, Sharp, Solomon Systech, Sony, Spreadtrum, ST-Ericsson, Sunplus, Toshiba and VIA Telecom all leverage CEVA’s industry-leading platform solutions and DSP cores.

    Eric Esteve from IPNEST



    CEVA and ARM Do LTE

    by Paul McLellan on 07-19-2013 at 8:23 pm

    If you have purchased a high-end cell-phone or tablet in the last couple of years it probably has LTE, although some carriers try and blur things by showing a symbol like 4G when you are in an area that has LTE even if your phone does not support it. Don’t you love cell-phone marketing? Talking of which, if a camel is a horse designed by a committee, then Long Term Evolution is an example of a powerful brand name developed by a committee. It could just as well describe an energy drink as a mobile standard.


    Anyway, it is actually 8 different standards with data rates going from 10 megabits/s down and 5 megabits/s up all the way to 3 gigabits/s down and 1.5 gigabits/s up. At the higher data rates it can use multiple antennas. The above table gives some idea of the complexity. There is actually no separate voice channel as in previous standards, but since carriers will get most of their revenue from voice for the foreseeable future, you can expect your phone to hide the fact that it is using voice-over-IP under the hood. To be fair, initially it will be using what is called CSFB (circuit-switched fall-back), where the regular voice infrastructure is used for voice calls (and text messages) and LTE is just used for non-voice data like internet access. But as capacity goes up and carriers need to transition, this will change and VoLTE (voice over LTE) will become more common.

    By reputation LTE is a very difficult standard to implement because it requires a lot of processing to get those sorts of bandwidths but the power budget (battery life) has to remain basically the same as prior generation phones. Of course some of the power reduction comes and will come from moving to advanced process nodes but a lot has to come from using advanced digital signal processing. But a single generic DSP is not enough.


    CEVA has a white paper, created jointly with ARM, on their LTE solution which involves an ARM Cortex-R7 to handle the higher levels of the stack (2 and 3) with Ceva DSPs to handle level 1 where all the heavy lifting is done. In fact not just one Ceva DSP but 3: an XC4110 and an XC4120 for the receive side and an XC4100 for the transmit side.

    These CEVA cores are all VLIW SIMD architectures (that stands for Very Long Instruction Word, Single Instruction Multiple Data) which issue multiple instructions at a time and apply them to multiple streams of data using vector processing units. As you would expect, with an architecture involving 4 main cores and lots of other controllers, the software architecture to make everything work is also complex. In addition, VoLTE increases the complexity since the voice protocols also need to be handled.
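As a loose illustration (my own sketch in plain Python, not CEVA’s actual instruction set), the kind of kernel such a DSP accelerates is a multiply-accumulate loop like an FIR filter. On a VLIW SIMD core, the multiplies for one output below would issue together in a single wide instruction:

```python
# A 4-tap FIR filter, the bread-and-butter DSP kernel. Tap values and
# input samples are made up for illustration; plain Python just shows
# the data flow that a vector unit would execute in parallel.
taps = [0.1, 0.2, 0.4, 0.2]                       # filter coefficients
samples = [1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]  # input stream

outputs = []
for n in range(len(taps) - 1, len(samples)):
    # One output = dot product of the taps with a sliding sample window;
    # a SIMD DSP performs all four multiply-accumulates at once.
    acc = sum(t * samples[n - k] for k, t in enumerate(taps))
    outputs.append(acc)

print(outputs)
```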


    The full block diagram gives a flavor of the complexity of an LTE implementation (and ‘full’ is a misnomer, there is a lot more detail under the hood). Ceva’s white paper describes the architecture of this LTE subsystem in detail, allowing for a rapid, low-power implementation of an LTE modem.

    I first encountered what is now CEVA when I was in VLSI Technology working in the group that did M&A and licensed technology. Back then CEVA were called DSP Group (in between they were ParthusCeva) and all the cores were named after trees: Pine, Oak, Teak. Somehow XC4120 isn’t quite so catchy. The only one which seems to survive is a family of TeakLite cores. VLSI used them in VLSI’s chipsets for GSM (eventually a single chip baseband so not technically a chipset) and built up a very successful business.


    Low Cost Smartphones: How Do They Do It For $50?

    by Paul McLellan on 07-19-2013 at 12:09 pm

    The future growth in smartphones is largely going to be at the low end of the market, as Eric wrote about here a couple of weeks ago. A lot of that growth is targeted at China. Sitting in the US it is easy to underestimate the size of the Chinese market. China Mobile (the market leader) is just one company but has more than twice the number of subscribers of the entire US market. And while middle-class Chinese in Shanghai and Beijing and Guangzhou may have iPhones, the price point is too high for most Chinese. Plus phones are not subsidized by the carriers in China (or most of the world, actually), so they pay the full $600 or whatever an unsubsidized iPhone costs. The growth is in phones with $50 BOMs running open source operating systems like Android, Firefox OS or Tizen.

    Of course the high end will continue to grow, but most people in the rich parts of the world who want a smartphone already have one, so it is largely a replacement market. But a large number of sub-$100 smartphones are launching this year in China, India, SE Asia, South America and Africa. Many, if not most, of the companies designing these phones are Chinese, using silicon from Qualcomm, Samsung and, especially, Taiwan-based Mediatek. But many new entrants are going even lower cost, taping out application processors on process nodes like 45nm that are no longer leading edge and so cost less. These chips may not have as much capability as the ones in the phones we all have in our pockets (or purses), with smaller screens and less powerful graphics in particular, but if you are a farmer in western China your avian interests are more likely the market price of your chickens than playing Angry Birds.

    But how does a company without a lot of deep silicon skills create such an SoC?

    They buy the hard stuff. Building an SoC is nowhere near as simple as building a printed circuit board (PCB) was twenty years ago, but the methodology is much the same: buy all the components, and build the fabric that ties them all together. Or perhaps buy that fabric too. Pretty much anything anyone needs is available from specialist silicon IP suppliers such as Arasan, who don’t just supply the actual netlists and analog cells, but also the verification IP, the device drivers and software stack, and FPGA-based implementations for testing purposes. What Bill Davidow called, in his classic book on marketing high technology called…er, Marketing High Technology…the “whole product.” Something the late adopters, the non-Qualcomms of the world, can use.


    Application processors use a lot of standard interfaces (who in their right mind would build a non-standard interface these days?) such as JEDEC’s eMMC or the MIPI Alliance’s family such as CSI, DSI, D-PHY. These standards change all the time, and an SoC design team cannot keep up, which is another reason to leave the challenge to specialist companies that can deliver a high-quality product and amortize the high cost across many design teams.

    Arasan have a new white paper on this low-cost smartphone market covering the typical interfaces that such SoCs require, the importance of standards-based IP in general, and Arasan’s Total IP Solution in particular.


    Qualcomm Video Friday

    by Paul McLellan on 07-19-2013 at 11:30 am

    Two videos (both short) from Qualcomm. They are both amusing but also have a serious aspect to them. The first one is interesting since it is Qualcomm following in the footsteps of Intel’s “Intel Inside” campaign against AMD, which made people care about what processor was in their PC. Until that point probably nobody (ok, outside silicon valley) could even tell you whether Intel or AMD was the market leader. But the campaign was very successful and made Intel’s brand name one of the most well-known, up there with Coke, Google and other companies truly in the consumer space. I bet most people wouldn’t have a clue what the application processor in their smartphone is, except that people may have heard of the A4, A5 etc. that Apple built for the iPhones and iPads. It will be fun to see if Qualcomm are successful at making anyone care. But they are obviously investing: this video was shown during the NBA finals, which I’m guessing doesn’t come cheap.

    The second video is also about Snapdragon, showing how low-power it is in an interesting…well, you have to watch the video.

    One of those phones is getting really hot…55°C. I remember in high school chemistry learning that if something was so hot that you could no longer touch it then it was about 60°C. So that is a phone that you can’t really hold.

    So what is Snapdragon? It is Qualcomm’s application processor with integrated baseband modem, almost a single-chip phone. It also has WiFi and GPS. You can see from this table from the Linley Mobile Microprocessor Conference (I covered it here) why Qualcomm is the market leader: their chips just have more key capabilities integrated and do not require external helper chips. Snapdragon is actually a family of processors with different capabilities. The Linley table is for the S4 Plus, I believe.

                  AP    WCDMA   LTE        WiFi   GPS   NFC
    Qualcomm      Y     Y       Y          Y      Y     Sampling
    Marvell       Y     Y       In Qual    Y      Y     Y
    Broadcom      Y     Y       Sampling   Y      Y     Y
    Mediatek      Y     Y       Licensed   Y      Y     N
    nVidia        Y     Y       AT&T       N      N     N
    Intel         Y     Y       Sampling   N      N     N

    One reason that ST-Ericsson has been closed down is that it had a big customer in Nokia. When Nokia switched to Microsoft Windows Phone, ST-Ericsson basically lost the account, since Windows Phone explicitly specifies the Snapdragon family as the only chipset it runs on. So Nokia became Qualcomm’s account, and Snapdragon powered the Nokia Lumia series. Which hasn’t been doing so well, but that’s another story.

    One interesting thing I just thought of is whether Intel’s recently announced LTE modem (I covered that yesterday here) can be moved to (or is already in) their FinFET process. Because having a really good baseband but not being able to integrate the modem puts you at a disadvantage. Of course Qualcomm has to move its modems to FinFET at TSMC, Samsung and wherever else it is using as foundries, and I’ve heard stories that doing analog in the FinFET world is a challenge.


    GSA Entrepreneurship: Getting Money In and Out

    by Paul McLellan on 07-19-2013 at 1:32 am

    This afternoon and evening I was at GSA’s entrepreneurship conference at the Computer History Museum. The first two panel sessions were essentially on getting money into companies to get them started (or growing them), and getting money out when you have built the business.

    The first session was officially titled Fueling Success and Innovation with panelists:

    • Shankar Chandran of Samsung Catalyst Fund’s Samsung Strategy and Innovation Center
    • Amer Halder of Cavium
    • Keith Larson of Intel Capital
    • Angel Orrantia of SK Telecom America’s Innovation Center
    • George Pavlov of Tallwood
    • Moderator: Gunjeet Baweja of Needham

    Gunjeet opened by talking about what turned out to be the theme of the session: investment in semiconductor companies continues to fall, driven by the lack of IPOs and consequently lower M&A valuations. Companies need to think about strategics, since that’s where the money is. Strategics are companies in the ecosystem that have reasons other than financial for investing, like half the panel.

    Shankar said Samsung has created a strategy and innovation center in Menlo Park with a $100M fund to invest in core technology. There is a big capital gap for core technology, since VCs stay away from anything requiring too much time and capital.

    Amer of Cavium echoed the strategic message: strategics are the alternative for funding, and GSA’s cap-lite group has just put together a list of strategics willing to take your call.

    Keith didn’t have a cake but it was Intel’s 45th birthday today. Intel has been the largest strategic for 20 years, having invested over $10B with over 600 exits. There is lots of innovation but not so much in the traditional ASSP/component business. Materials, EDA, IP. Strategics can now add more value that traditional VCs. And the other way around so don’t forget to emphasis the non-financial aspects to a strategic investment.

    SKTA decided to get into the business after noticing a fundamental shift in semi, which is now much closer to the equipment and materials companies given the complexity of modern processes. They help find a strategic match, with the goal of reinvigorating the semi funding model.

    George of Tallwood said the old days are gone; you can’t do it any more the way you could. Chips & Technologies were making more money in a PC than Intel after just 2 years of development, at a $600-800M run rate. Only mobile is big enough and growing fast enough, but if you think you will just displace Qualcomm, that’s not going to happen. The semiconductor industry is mature (almost nobody in the room was under 40). It is not just returns for semi IPOs that are not there; VC overall has had worse returns over the last 10 years than investing in public markets. But he is worried that semi will go the way of EDA, where innovation can now only be done in an exceptionally capital-efficient way, and he doesn’t think that works for semi. Development can be done better, but it still costs real money.

    Gunjeet had a ray of hope: exit valuations are going up if the company is built in a capital-efficient manner. There is a chance that the JOBS Act will make smaller IPOs possible again, versus the mess that Sarbanes-Oxley made of the market.

    The second session was officially titled Exits – Finding Success in Semiconductor Start-ups with panelists:

    • Alan Jepsen of Comerica Bank
    • Tom Kao of IDT
    • Andy Oberst of Qualcomm
    • Stanley Pierson of Pillsbury
    • Matt Sachse of Pagemill
    • Moderator: Steve Domenik of Sevin Rosen Funds

    Steve set the scene (well, the same scene as the first session): there won’t be many IPOs, mostly M&A. Semi is in consolidation.

    Alan of Comerica pointed out that there are lots of companies with lots of cash. A big part of their valuation is their cash and they don’t make any real money on cash so acquiring companies is one way to create a better return.

    So why acquire? Tom pointed out that the main reason is to fill a hole in the company’s product or technology portfolio, but sometimes it is to roll up a market segment. So don’t be afraid to approach competitors.

    Andy from Qualcomm said they do acquisition mainly by identifying future needs and working to fill them. Sometimes that means internal development, but sometimes acquisition. They also look for opportunities to expand into adjacent categories. They don’t do acquisitions just to get their hands on patents although they expect companies to have good protection on their unique ideas. Patents are table stakes not something that affects valuation.

    Stanley talked about what can go wrong. Firstly, he said, make sure to talk to your advisers, bankers and lawyers, and drill down into the business terms before you get into exclusivity (when you sign some sort of no-shop agreement); your leverage afterwards is a lot less. The rest is good housekeeping: making sure your investment and customer contracts are in order and so on. He also pointed out that, although it is not usually a legal requirement, mergers are often problematic if the core team doesn’t stay with the business, at least at first.

    What about strategic investors? Matt of Pagemill said they can be very positive but if they are a competitor of the acquirer it can be a problem. The biggest issue is technology partnerships that can’t be unwound and have not been structured to keep the core IP in the company for the benefit of any acquirer.

    Matt talked about the best things you can do to structure deals. Firstly, series A under corporate law gets a separate vote, so you want to make sure that a strategic investor doesn’t end up with veto rights over an acquisition, whether through proxy nomination terms or otherwise; bringing financial investors into the round helps too. Strategics typically don’t want a board seat, just a board observer, so don’t be afraid to close the session and exclude them when discussing sensitive customers (e.g. competitors of the investor). One thing that will come up with a strategic investment is notice rights (effectively a right of first refusal on M&A). You don’t want to restrict yourself by having to reveal bidders and price to your investor, since that will reduce valuation (or scupper the deal). Try to settle for blind notice: you have to tell the investor that the company has received an offer, but not who from or for how much, so if they want to bid too it is a normal competitive bidding situation.

    Andy said that the biggest problem he sees is when the investor has an unfair business advantage and is a competitor of the acquirer (in this case Qualcomm), which can wreck the attractiveness of a deal.

    Tom pointed out that companies are bought, not sold. So you should always be socializing with potential buyers, since you never know when a company may want to make a move. It may be early, to get technology, or late, to get into a market where the acquirer has fallen behind.


    Intel’s Q2 Conference Call

    by Paul McLellan on 07-19-2013 at 12:47 am

    Yesterday was Intel’s Q2 conference call. I think that there are some interesting little pieces of information. The financials were what analysts expected although they did take down their guidance for the rest of the year. But that is never the interesting point of Intel conference calls (they almost always hit guidance). There is a transcript at SeekingAlpha here if you want to follow along at home.

    Oh and the call was also significant since it was Brian Krzanich’s first as CEO.

    What is clear is that the PC market is not growing even as fast as people expected at the start of 2013. The world is going mobile. When we started SemiWiki a couple of years ago, mobile access was around 10% (I’m doing this from memory of numbers Dan told me at lunch today; he can correct them if I’m wildly wrong) and now it is closer to 40%.

    Of course it is no secret that Intel wants to get into the mobile market. In fact it has to get into the market through a combination of standard products, ASIC-type business and foundry, or it is doomed to decline. My view is it can’t mix all three. If it pursues an Atom-based mobile strategy then it can’t expect to be a foundry for an Apple or a Qualcomm. If it pursues a foundry business it needs a big IP strategy to match the sort of investments that TSMC and GF have made in their ecosystems.

    It hopes to use a combination of things to do so:

    • manufacturing technology. I don’t know if it is really significant, but Intel reduced their capital spending forecast to $11B, after cutting it to $12B in Q2 from a start-of-year plan of $13B. My guess is that this is driven not by any wish to reduce capital spending but by the fact that the money pump from the PC industry is weakening and likely to continue to weaken
    • Intel already announced a win for a Samsung tablet using an Atom-based SoC. What they announced on the conference call, which is significant, is that this is also a win for Intel’s LTE modem product. Since you can’t be anyone in mobile these days without LTE, this is significant. When Intel acquired Infineon’s wireless business they were heavily criticised for two things. Loss of Infineon’s largest customer, Apple, to Qualcomm (oops) and then later their tardiness in that group developing a viable LTE modem. In the future this gives them the possibility of building a fully integrated chip including the modem.
    • Intel have also announced Silvermont, their next-generation Atom architecture, which delivers a 5X reduction in power or a 3X increase in performance. Since Intel are experts at DVFS, this trade-off can be changed dynamically on the fly. But, as always, they say the “architecture” reduces the power, when actually most of the reduction, perhaps all, comes from comparing one process generation to the next, inviting you to make the false comparison with current-generation ARM-based chips from Apple, Qualcomm etc. OMG, ARM is dead. Except next-generation ARM chips will also be on future processes (FinFET-based, so with lots of goodies in the power area).
    • Intel has always considered its capability to run Windows compatibly on smartphones, and especially tablets, to be a huge advantage. Now they can run Android too and switch between them. Many analysts who follow Intel seem to think that this is the killer app. In fact they have always said the game belongs to Intel. For instance here: “That little factoid…renders all other tablet processors obsolete. Period.” OMG, Apple is dead too. Except I don’t see it. The iPad is not going away because it can’t run Microsoft Office. People said it would never be successful for that reason. How’s that prediction working out?
    • Intel has always had an attitude that “the best transistors win” and they have done a brilliant job of executing on manufacturing and process to be out ahead. The jury is still out (despite Dan’s optimism that everything is on-track) on how fast TSMC and Global will be able to ramp 20nm and transition to 14/16nm and how competitive their processes (and especially manufacturing costs) will be against Intel. If Intel’s process really is years ahead and dramatically cheaper, it could be a game changer. Otherwise I’d bet on ARM/TSMC and the fabless ecosystem/business model.
    • Cloud and storage are growing 40% year on year. Intel are kings there and, despite ARM having processors to attack the datacenter (the business model of ARM’s licensees in the space is essentially 10% of the cost, 10% of the physical volume and 10% of the power of Intel solutions), the jury is out on whether the big internet giants will switch to a hybrid Intel/ARM solution: Intel for single-thread performance, where they are unbeatable, and ARM for high-threadcount internet servers, where throughput and power matter more than single-thread performance.
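    The Silvermont power claim above is easier to parse with the standard first-order model of dynamic CMOS power, which scales as P ≈ α·C·V²·f: because voltage enters squared, dropping voltage and frequency together compounds quickly. A minimal sketch, with illustrative numbers that are not Intel’s actual Silvermont figures:

    ```python
    # Toy model of why DVFS moves power so much: dynamic switching power
    # in CMOS scales roughly as P ~ alpha * C * V^2 * f. The capacitance,
    # voltage and frequency values below are made up for illustration.

    def dynamic_power(activity, capacitance, voltage, freq_hz):
        """Approximate dynamic switching power in watts."""
        return activity * capacitance * voltage**2 * freq_hz

    # Nominal operating point: 1.2 V at 2 GHz.
    base = dynamic_power(activity=0.2, capacitance=1e-9, voltage=1.2, freq_hz=2.0e9)

    # Scale back to 70% of the voltage and half the clock, as a DVFS
    # governor might when the workload is light:
    scaled = dynamic_power(activity=0.2, capacitance=1e-9, voltage=0.84, freq_hz=1.0e9)

    print(f"power ratio: {base / scaled:.1f}x")  # ~4x from V^2 * f alone
    ```

    Leakage, process improvements and race-to-idle effects all complicate the real picture, but the quadratic voltage term is why a process generation that allows lower operating voltage buys so much of the headline power reduction.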

    OK, that’s what I spotted in the call. Anyone spot any other interesting things I missed?


    “NoC, NoC” – Are You Listening to nVidia’s Dally?

    by Randy Smith on 07-18-2013 at 11:00 pm

    Recently Bill Dally, nVidia’s Chief Scientist & SVP of Research, and a professor of electrical engineering and computer science at Stanford University, has been out speaking quite a bit, including a “short keynote” at the Design Automation Conference and a keynote at ISC 2013. The DAC audience is primarily EDA tool users and EDA tool developers. ISC’s attendees are high performance computing (HPC) experts. While the DAC talk focused on designer productivity, the ISC talk homed in on the challenges of exascale system design. The topics, in fact, are highly interrelated.

    As Dally pointed out in his presentation at DAC, the key to greater design productivity is to work at higher levels of abstraction. While this should seem obvious, it runs a bit counter to the behavior of many engineers. Most designers work hard to squeeze out every bit of power, performance, or area (PPA), depending on the importance of each design constraint for their end product. This often means starting with a proven piece of IP in RTL form and then changing it to “perfect it” for their needs. That perfection takes time, not just on the editing, but on the subsequent implementation and verification efforts. It would be much faster if the designer used all the hardened IP they could find. While the design will not be quite as good on overall PPA, it would be ready for tape-out many months sooner.

    Of course, the same argument applies to getting designers to move up from RTL to higher-level design languages such as SystemC. High-level synthesis (HLS) technology became available more than 10 years ago, yet the migration has been slow. Many designers have built a career out of RTL design – it is what they know, and it is where they see their value-add to their employers. In the late 1980s and 1990s there were many IC custom layout engineers who felt the same way about their skills, but unless they were analog experts, the number of jobs and the relative value of those jobs has not grown at the same rate as the overall semiconductor industry, thanks to what are today pervasive technologies: standard cell libraries and place-and-route tools.

    But, according to Dally, moving up from RTL to HLS is not sufficient for getting chip design time back to a couple of weeks, nor is it enough for productive exascale design. To make that leap, Dally says to look at PCBs – fixed components you don’t modify. The challenges to using this approach in semiconductors are primarily twofold: having all of the IP you need available on the process you want, and having a standardized way of connecting it. The former is simply a business economics problem which I will leave for a later study. But the latter is simply a call for a standardized network-on-chip architecture, a NoC.

    Dally’s ISC keynote seemed to conclude that the biggest challenge in exascale computing is not performance, but power management. For me, this is where it gets really interesting. Sonics has been working hard on putting power management support into their NoC tools and architecture. After all, the NoC is aware of what data is being sent where and when, which means it should know when to wake up or shut down whole system elements. Of course, using a NoC is the best way to connect IP blocks. So a NoC is a great tool to help the designer move up a level of abstraction and to reduce power – the two main points that Dally is making.
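    The idea that the NoC can drive power gating follows directly from its position in the system: every transaction passes through it, so it always knows which blocks have gone quiet. A toy sketch of that observation (this is a hypothetical model for illustration, not Sonics’ actual architecture or API):

    ```python
    # Toy model: a NoC-side power gater that tracks the last cycle each
    # endpoint block sent or received traffic, and gates blocks that have
    # been silent longer than a threshold. Block names are made up.

    class NoCPowerGater:
        def __init__(self, idle_threshold):
            self.idle_threshold = idle_threshold  # cycles of silence before gating
            self.last_active = {}                 # block name -> last cycle seen
            self.powered = set()                  # blocks currently powered up

        def on_transaction(self, cycle, src, dst):
            # Every packet passes through the NoC, so both endpoints are live.
            for block in (src, dst):
                self.last_active[block] = cycle
                self.powered.add(block)

        def tick(self, cycle):
            # Gate any powered block that has exceeded the idle threshold.
            idle = {b for b in self.powered
                    if cycle - self.last_active[b] > self.idle_threshold}
            self.powered -= idle
            return idle  # the blocks just powered down

    gater = NoCPowerGater(idle_threshold=100)
    gater.on_transaction(cycle=0, src="cpu", dst="dram_ctrl")
    gater.on_transaction(cycle=50, src="cpu", dst="gpu")
    gated = gater.tick(cycle=120)
    print(sorted(gated))  # → ['dram_ctrl']  (idle since cycle 0)
    ```

    A real implementation has to worry about wake-up latency, in-flight transactions and retention state, but the core advantage stands: no other component sees the whole traffic picture the way the interconnect does.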

    There is yet one remaining point I should make though. In modern chip designs using a NoC, the system architect picks which NoC features/options they want to implement. Even in standard protocols, there may be optional features. In chip design now, the RTL included in the NoC is only what is needed for the options that will be used, thus minimizing gate count. To fully move Dally’s PCB-like model, those feature need to always be present, yet programmable, not just in the NoC controller elements, but in the system block component interfaces as well. That means giving up some area for improved design productivity. I am not sure how many designers are ready to do that, or how much area they would really be giving up. In any event, modern system design requires a NoC, and one that can simplify chip connectivity and reduce power consumption, like Sonics’ on-chip networks can. This type of IP can help a lot in completing Bill Dally’s vision.