Previewing Intel’s Q1 2012 Earnings
by Ed McKernan on 04-17-2012 at 9:15 am

Since November of 2011, when Intel preannounced it would come up short in Q4 due to the flooding in Thailand that took out a significant portion of the HDD supply chain, the analysts on Wall St. have been in the dark as to how to model 2012. Intel not only shorted Q4 but effectively punted on Q1 as well by starting the early promotion of Ivy Bridge ultrabooks at the CES show in January. Behind the scenes, Intel made a hard switch to ramping 22nm production at three fabs faster than is typical in order to cross the chasm and leave AMD and nVidia behind. But that is not all: I believe Paul Otellini will take considerable time discussing the availability of wafers at Intel relative to that of TSMC and Samsung in supplying the demand expected to come from this year's Mobile Tsunami.

As mentioned in previous writings, the capital expenditures put forth by Intel in 2011 and expected in 2012 point to a company that expects to nearly double in size (wafer capacity) by the end of 2013. Single-digit PC growth and mid-teens server growth cannot soak up all the new wafers; the demand has to come from another high-volume segment. I have speculated that it is Apple and other tablet and smartphone OEMs. In rough numbers it would be on the order of 400MU (million units) of mobile processing capacity, or a combination of processors and 3G/4G silicon. Either way, it was a big bet on Intel's part to go out and expand their fab footprint.

In the last few weeks there has been a series of articles on this site and in EETimes that at first argued TSMC was having yield issues at 28nm. As time has gone on, it appears that the problem was not yield but capacity, or the lack thereof. TSMC's customers made forecasts two to three years ago, during the worst part of the economic crisis, that did not account for the step-function increase in demand for leading-edge capacity to service our Mobile Tsunami build-out. The difficulty for any foundry is modulating the demands of multiple customers: TSMC has to be wary of double counting, which can lead to four or five vendors collectively expecting to own 200% of the ARM processor or wireless baseband market. Intel, however, did make the bet, probably based on the strength of its process technology.

But there are intriguing questions on Intel's side as well. For the past year, I have observed and noted that the ASPs on Intel chips no longer fall every 6 to 8 weeks as they did under the old model, which was part of a strategy to keep competitors gasping for air. It suggests that Intel can now set prices at will.

Even more interesting is the fact that the first Ivy Bridge parts to be introduced are in the mid- to high-end range, which is different from what Intel has done in the past. The low-end Ivy Bridge will not arrive until late Q3. This says there is either very high demand for Ivy Bridge, or they can't build enough, or both. Ramping production in three fabs means a lot of wafers are headed down the line with the goal of getting yield up sooner. Is the 22nm tri-gate process one that inherently has lower yield? If the answer is that Intel will get into high-yield mode this summer, then they have the flexibility of selling FREE $$$ Atoms into the smartphone space with the goal of attaching higher-ASP 3G/4G baseband chips – this is my theory as to how they ramp revenue starting in late 2012 and through 2013, which is before TSMC and Samsung can catch up on 28nm capacity. Apple, which just launched its new iPad with the A5X built on an antiquated 45nm process, will be taking lots of notes today.

FULL DISCLOSURE: I am long INTC, AAPL, QCOM, ALTR



Laker Wobegon, where all the layout is above average
by Paul McLellan on 04-17-2012 at 4:00 am

TSMC's technology symposium seems to be the new occasion for making product announcements, with ARM and Atrenta yesterday and SpringSoft today.

There is a new incarnation of SpringSoft's Laker layout family, Laker³ (pronounced "three," not "cubed"). The original version ran on its own proprietary database. The second version added OpenAccess to the mix, but with an intermediate layer to allow both databases to work. Laker³ bites the bullet and uses OpenAccess as its only native database, which gives it the performance and capacity for 28nm and 20nm flows.

There are a lot of layout environments out there. Cadence, of course, has Virtuoso. Synopsys already had one of its own and, with the acquisition of Magma, now has a second. Mentor is in the space, and so are some startups. SpringSoft had an executive pre-release party on Thursday last week (what EDA tool doesn't go better with a good Chardonnay?) and one senior person (who had better remain nameless since I don't think it was meant to be an official statement by his employer) said that he thought that by the time we get to 20nm, only a couple of layout systems will have the capability to remain standing, and SpringSoft's would be one of them.

There are three big new things in Laker³. The first is the switch to OpenAccess. But they didn't just switch: they also rewrote all the disk-access code, giving a performance increase of 2-10X on things like reading in designs or streaming out GDSII. And since many intermediate operations also read and write to disk, it is not just the obvious candidates that speed up.

The second is that previous versions of Laker had a table-driven DRC. That has been completely rewritten, since simple width- and spacing-type rules are no longer adequate ("simple" is not a word anyone would use about 28nm design rules, let alone 20nm with double patterning and other weird stuff). The new DRC can handle these kinds of rules, but it is not positioned as a signoff DRC; it is used by all the rule-driven functions and by place and route. On a "trust but verify" basis, Calibre is also built into Laker in the form of Calibre RealTime, which runs continuously in the background giving instant feedback using the signoff rule deck. Since no designer can actually comprehend design rules any more, this is essential. The alternative, as one customer of another product complained, is having to stream out the whole design every 15 minutes and kick off a Calibre run.

The third big development is an analog prototyping flow. One big difference is that most constraints (to tell the placer what to do) are recognized automatically, as opposed to the user having to provide a complex text file of constraints. Symmetrical circuits are recognized by tracing current flow, and common analog and digital subcircuits such as current mirrors are recognized too. The library of matched devices is extensible, so the prototyping flow gets smarter over time as the idiosyncrasies of the designer, design or company get captured. There have been numerous attempts to improve the level of automation in analog layout; the hillside is littered with the bodies. This one looks to me as if it strikes a good balance between automating routine work and leaving the designer in control (analog design will never be completely automatic, let's face it).

Laker was for a time regarded, somewhat unfairly, as "only used by people in Taiwan," where admittedly it has become the dominant tool. But two of the top five fabless semiconductor companies have standardized on Laker, and five of the top ten semiconductor companies are using it. And the hors d'oeuvres in the edible spoons at the launch party were pretty neat.

More details on Laker³ are here.



Soft Error Rate (SER) Prediction Software for IC Design
by Daniel Payne on 04-16-2012 at 10:00 am

My first IC design in 1978 was a 16Kb DRAM chip at Intel, and our researchers discovered the strange failures called soft errors, caused by alpha particles from the packaging and by neutrons, which are more prevalent at higher altitudes such as Denver, Colorado. Before today, if you wanted to know the Soft Error Rate (SER) you had to fabricate a chip and then submit it to a specialized testing company to measure the Failure In Time (FIT) levels. It can be very expensive to have an electronic product fail in the field because of soft errors, and SER levels are only increasing at smaller process nodes.


Intel 2117, courtesy of www.cpumuseum.com

Causes of SER
Shown below are the three causes of SER:
1. Neutrons found in nature can strike silicon, creating alpha particles
2. Impurities in packaging materials emit alpha particles
3. Boron impurities can create alpha particles

When an alpha particle strikes the IC it can upset the charge stored in a memory cell or flip-flop, causing it to change state and leading to a temporary logic failure.
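A standard first-order way to reason about this mechanism (textbook background, not something taken from iROC's announcement) is in terms of critical charge: the struck node flips only if the collected charge exceeds the charge holding its state.

```latex
% First-order single-event-upset criterion (standard textbook approximation):
% the struck node flips when the collected charge exceeds the critical charge,
% which scales roughly with node capacitance and supply voltage.
\[
Q_{\mathrm{coll}} > Q_{\mathrm{crit}}, \qquad
Q_{\mathrm{crit}} \approx C_{\mathrm{node}} \cdot V_{DD}
\]
```

Shrinking process nodes reduce both the node capacitance and the supply voltage, which is one way to see why SER gets worse at smaller geometries.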

SER Prediction Software
The good news is that today a company called iROC announced two software tools that let IC designers predict and pinpoint the layout and circuit locations that are most susceptible to high FIT levels.

    • TFIT (Transistor Failure In Time)
    • SOCFIT (SOC Failure in Time)

The TFIT tool reads in a Response Model provided by the foundry, your SPICE netlist, and the GDSII layout, then runs a SPICE circuit simulation using HSPICE or Spectre (it can be adapted to work with Eldo, etc.). The output from TFIT is the FIT rate of each cell, and it can show you which transistors are most sensitive to neutron strikes so that you can reduce your design's sensitivity. A simulation run takes tens of minutes.
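For readers new to the unit, the FIT numbers that such a tool reports follow the standard definition (this is general JEDEC-style background, not a detail from iROC's materials): one FIT is one failure per billion device-hours, and a neutron SER figure can be expressed from a sensitive cross-section and a reference particle flux.

```latex
% Standard definitions (assumed background, not taken from the iROC announcement):
\[
1~\mathrm{FIT} = 1~\text{failure per}~10^{9}~\text{device-hours}
\]
% Neutron-induced SER from a sensitive cross-section \sigma (cm^2) and a
% reference neutron flux \Phi (particles / cm^2 / hour):
\[
\mathrm{SER}_{\mathrm{FIT}} \approx \sigma \,\Phi \times 10^{9}
\]
```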

SRAM designers can add Error Correcting Codes (ECC) to their designs to mitigate FIT; a flip-flop, however, has no ECC, so one choice is to harden the FF, which results in a cell that is 2X to 3X the size and power.
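As a generic illustration of why ECC absorbs single-bit upsets in SRAM (a textbook Hamming-code sketch, nothing specific to iROC's tools or any particular memory compiler): a single flipped bit in a protected word can be located from the parity syndrome and corrected.

```python
# Minimal Hamming(7,4) sketch: corrects any single-bit upset in a 4-bit word.
# Generic textbook code, only meant to illustrate the ECC mitigation mentioned
# above; real SRAM ECC (e.g. SECDED on 64-bit words) works the same way at scale.

def encode(d):                      # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4               # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4               # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def correct(c):                     # c: 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # 1-based position of the flipped bit, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1        # flip it back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[5] ^= 1                          # simulate an alpha/neutron-induced bit flip
assert correct(cw) == word          # ECC recovers the original data
```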

    A FF netlist can be analyzed by TFIT in about 10-20 minutes.

The SER data contains FIT information for all FF and SRAM cells, as well as combinational logic.

SOCFIT can be run on either the RTL or a gate-level netlist, and has a capacity of 10+ million FFs. It uses a static timing analysis tool (Synopsys PrimeTime or Cadence), and can also use simulation tools for fault injection (Synopsys, Cadence). It first runs a static analysis on RTL or gates to determine the overall FIT rate; if your design is marginal, you can then run a dynamic analysis using fault injection (typically a 10 hour run). This approach could use emulation to speed up results in the future.
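iROC hasn't published SOCFIT's internals, but conceptually the static step it describes amounts to a derated sum: each cell population contributes its intrinsic FIT, weighted by how often an upset actually propagates to an observable failure. Here is a rough sketch of that bookkeeping; the cell names, FIT values and derating factors are invented for illustration only.

```python
# Rough sketch of static SoC-level FIT estimation as a derated sum of cell FIT.
# This is NOT SOCFIT's actual algorithm; it only illustrates the kind of
# bookkeeping a static RTL/gate-level SER analysis performs. All numbers are
# hypothetical.

cells = [
    # (cell type, intrinsic FIT per instance, instance count, derating factor)
    # Derating folds in timing and logic masking: the fraction of upsets that
    # actually propagate to an observable failure.
    ("dff_std",  0.0005,   250_000,   0.15),
    ("dff_hard", 0.0001,    10_000,   0.15),   # hardened flops: lower raw FIT
    ("sram_bit", 0.00002, 8_000_000,  0.01),   # ECC-protected: heavy derating
]

def soc_fit(cells):
    """Sum the derated FIT contributions of all cell populations."""
    return sum(fit * count * derate for _, fit, count, derate in cells)

print(f"Estimated SoC-level SER: {soc_fit(cells):.1f} FIT")
```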

    The SOCFIT tool answers the question, “Which cells are the most sensitive in my design?”

    You can even run SOCFIT before final tapeout, while logic is changing. SOCFIT has been under development for 8 years now, and they’ve seen good correlation between prediction and actual measurement.

    SER Info
    Both memory and logic have SER issues, even FF circuits, but not so much combinational logic because of its high drive.

    One particle can upset multiple memory bits now in nodes like 40nm and smaller.

SRAM is more sensitive to neutron particles than FFs, and DRAMs are less sensitive still, since alpha particle strikes there mainly affect leakage.

    Flash memory is even less sensitive than DRAM to Single Event Upsets (SEU).

    The FPGA architecture is most sensitive to SER because of the heavy use of FF cells.

    Bulk CMOS is more sensitive than SOI.

FinFET is new, so iROC is just starting to analyze it from an R&D viewpoint using 3D TCAD models. You can expect to see more data later in the year.

TFIT covers all voltages and process variations.

TSMC provides the Response Model input to TFIT; in the past, TSMC has provided SER data to customers based on testing rather than simulation.

    iROC – The Company
iROC (Integrated RObustness On Chip) has a mission to analyze, measure and improve SER on ICs. They have been providing SER testing services since 2000, bringing chips to a cyclotron and exposing them to neutron beams to replicate 10 years of life in just minutes. iROC also partners with foundries like TSMC and GLOBALFOUNDRIES.

Competition to the iROC approach comes mostly from internally developed R&D tools at IDMs.

    Some 500 chips have been tested so far, so iROC understands the problems and how to prevent them from being catastrophic.

    Summary
iROC is the first commercial EDA company to offer SER analysis tools at both the cell and SoC levels, and the tool results correlate well with actual measurements on silicon chips. This will be an exciting company to watch as it grows a new EDA tool category in the reliability analysis segment.


    Atrenta’s Spring Cleaning Deal
    by Paul McLellan on 04-16-2012 at 9:00 am

Atrenta is having a special offer to let you "spring clean" your IP for free: two weeks of free access to the Atrenta IP Kit. The offer runs from today, April 16th, until the end of May, during which qualified design groups in the US will be able to use the kit for two consecutive weeks to "spring clean" their third-party or internally developed IP blocks at no cost.

Atrenta's IP Kit is also used by TSMC to qualify soft IP for inclusion in the TSMC 9000 IP library. See my blog here. Plus, TSMC's technology symposium is tomorrow.

    The IP Kit generates two important reports: the Atrenta DashBoard and DataSheet.


    The Atrenta DashBoard provides a pass/fail status for all IP blocks. It shows the status of the block for key design objectives such as CDC, power, test, timing constraints and more. It also reflects overall readiness of the IP as measured by various quality goals. User-defined success criteria are used to report tolerance to fatals, errors and warnings. Designers are able to drill down to get additional information on the exact violations reported, as well as access trend data that shows overall progress to achieve a passing status over time. A SpyGlass Clean report has no failures reported.


    The second report is the Atrenta DataSheet. This report focuses on IP characteristics. Once the DashBoard report is “clean,” the DataSheet acts as a final handoff document that captures key information about the IP block, such as the I/O table, clock trees, reset trees, final power spec, test coverage, constraints coverage and more. Especially useful when a block is being integrated, the report gathers this key information into one easy-to-read HTML document.

    And if you really get carried away with the idea of spring cleaning, my condo could do with some attention.

Details on the IP Kit Spring Cleaning promotion are here.

    And Atrenta’s geek friend has his own take (1.5 mins):


    High Yield and Performance – How to Assure?
    by Pawan Fangaria on 04-16-2012 at 7:30 am

In today's era, high performance mobile devices are asserting their place in every gizmo we play with, and guess what enables them to work efficiently behind the scenes – large chunks of memory with low power and high speed, packed as densely as possible. The ever-growing requirements for power, performance and area have led us to process nodes like 20nm, but these bring the burgeoning challenge of extreme process variation, which limits yield. There is therefore no escape from detecting the failure rate early in the design cycle to assure high yield.

In the case of memory, there can be billions of bit cells with column selectors and sense amplifiers, and you can imagine the read/write throughput on those cells. Although redundant columns and error correction mechanisms are provided, they are not sufficient to tolerate bit cell failures above a certain number. The requirement here is to detect failures in the range of 6 sigma.

So, how do we detect failure at such high precision? Traditional methods are mostly based on Monte Carlo (MC) simulation, an idea first developed by Stanislaw Ulam, John von Neumann and Nicholas Metropolis in the 1940s. To get a feel for this, consider a bit cell of 6 transistors with 5 process variables per device, for a total of 30 process variables. Below is the QQ plot, with the distribution of bit cell read current (cell_i) on the x-axis and the cumulative distribution function (CDF) on the y-axis. Each dot on the graph is an MC sample point; 1 million samples were simulated.


    QQ plot of bit cell read current with 1M MC samples simulated

The QQ curve is a representation of the response of the output to the process variables. The bend in the middle of the curve indicates a quadratic response in that region. The sharp drop-off in the bottom left indicates the circuit cutting off in that region. Clearly, any method assuming a linear response will be extremely inaccurate.
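To make the construction concrete, here is a minimal sketch of how such a QQ plot is assembled from Monte Carlo samples. The toy output function stands in for a real SPICE-simulated bit-cell read current and is purely illustrative; it is not Solido's model.

```python
# Minimal sketch: build a normal QQ plot from Monte Carlo samples of a circuit
# output. The toy output function below is a stand-in for a SPICE simulation
# of bit-cell read current and is purely illustrative.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_vars, n_samples = 30, 1_000_000     # e.g. 6 devices x 5 process variables each

def toy_cell_current(x):
    # Mildly nonlinear response with a cutoff, to mimic the bend and the sharp
    # drop-off seen in the real QQ plot (hypothetical, not a real device model).
    y = 1.0 + 0.1 * x[:, 0] + 0.02 * x[:, 1] ** 2 - 0.05 * x[:, 2]
    return np.maximum(y, 0.0)

x = rng.standard_normal((n_samples, n_vars))   # normalized process variables
cell_i = toy_cell_current(x)

# Sort outputs and pair each with the standard-normal quantile of its rank:
# that pairing is exactly what the QQ plot in the article shows.
sorted_i = np.sort(cell_i)
probs = (np.arange(1, n_samples + 1) - 0.5) / n_samples
sigma = stats.norm.ppf(probs)                  # y-axis: CDF expressed in sigma

plt.plot(sorted_i, sigma, ".", markersize=1)
plt.xlabel("cell_i (bit cell read current)")
plt.ylabel("standard normal quantile (sigma)")
plt.title("QQ plot from Monte Carlo samples (toy model)")
plt.show()
```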

    Now consider the QQ plot for delay of a sense amplifier having 125 process variables.


    QQ plot of delay of sense amplifier with 1M MC samples simulated

The three stripes indicate three distinct sets of delays, i.e. discontinuities: a small step in process-variable space sometimes leads to a major change in performance. Such strong nonlinearities make linear and quadratic models fail completely. Note also that the result above was obtained from 1M MC samples, which only covers the circuit out to about 4 sigma. For 6 sigma, one would need about 1 billion MC samples, which is not practical.
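The arithmetic behind those sample counts is just standard normal tail probability (general statistics, not specific to the white paper): the chance of landing beyond 6 sigma on one side is about one in a billion, so plain Monte Carlo needs on the order of a billion samples merely to expect one such event, and far more to estimate the failure rate with confidence.

```latex
% One-sided standard normal tail probabilities (standard values):
\[
P(Z > 4) \approx 3.2\times 10^{-5}, \qquad
P(Z > 6) \approx 1.0\times 10^{-9}
\]
% With N Monte Carlo samples, the expected number of 6-sigma events is
% N \cdot P(Z>6), so observing even one requires roughly
\[
N \gtrsim \frac{1}{P(Z > 6)} \approx 10^{9}~\text{samples.}
\]
```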

In order to detect rare failures with fewer samples, many variants of the MC method and other analytical methods have been tried, but each of them falls short in robustness, accuracy, practicality or scalability. Some of them can only work with 6 to 12 process variables. A survey of all of them is provided in a white paper by Solido Design Automation.

Solido has developed a new method, which they call HSMC (High Sigma Monte Carlo), that is promising: fast, accurate, scalable, verifiable and usable. The method has been implemented as a high quality tool in the Solido Variation Designer platform.

The HSMC method prioritizes simulations towards the most-likely-to-fail cases through adaptive learning, with feedback from SPICE. It never discards a sample that might cause a failure, which increases accuracy. The method can produce the extreme tails of the output distributions (as in the QQ plots) using real MC samples and SPICE-accurate results in hundreds to a few thousand simulations. The flow goes something like this (a rough code sketch follows the list):

1. Extract 6-sigma corners by simply running HSMC, opening the resulting QQ plot, selecting the point at the 6-sigma mark, and saving it as a corner.
2. Try the bit cell or sense amplifier design with different sizings. For each candidate design, one only needs to simulate at the corner(s) extracted in the first step. The output performances are then at "6-sigma yield," but with only a handful of simulations.
3. Finally, verify the yield with another run of HSMC. The flow concludes if there are no significant interactions between process variables and outputs, which is generally the case. Otherwise, re-loop by choosing a new corner, designing against it, and verifying.
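Solido does not spell out HSMC's internals in this blog, so the following is only a rough sketch of the adaptive, prioritized-sampling idea described above; the surrogate model, function names and sizes are my own assumptions, not Solido's implementation.

```python
# Rough, hypothetical sketch of the adaptive idea behind high-sigma sampling:
# rank a large pool of Monte Carlo candidates by how likely they are to fail,
# using a cheap surrogate model refit on SPICE feedback, and spend the real
# SPICE runs on the most-likely-to-fail candidates first. This is NOT Solido's
# HSMC algorithm, just an illustration of prioritized sampling with SPICE in
# the loop.
import numpy as np

def prioritized_high_sigma(simulate_spice, n_pool=1_000_000, n_spice=5_000,
                           n_vars=30, batch=500, seed=0):
    rng = np.random.default_rng(seed)
    pool = rng.standard_normal((n_pool, n_vars))   # candidate MC points
    done = np.zeros(n_pool, dtype=bool)            # which candidates have real results
    results = np.full(n_pool, np.nan)

    # Bootstrap with a random batch of real SPICE simulations.
    for i in rng.choice(n_pool, size=batch, replace=False):
        results[i] = simulate_spice(pool[i]); done[i] = True

    while done.sum() < n_spice:
        # Cheap linear surrogate fit to all SPICE results so far (adaptive feedback).
        X, y = pool[done], results[done]
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        pred = pool @ w[:-1] + w[-1]
        pred[done] = np.inf                        # never re-pick simulated points
        # Assume "failure" means an abnormally LOW output (e.g. read current):
        # simulate the lowest-predicted candidates next; none are discarded.
        for i in np.argsort(pred)[:batch]:
            results[i] = simulate_spice(pool[i]); done[i] = True

    return pool[done], results[done]   # tail-focused samples for the QQ plot
```

With simulate_spice wired to an actual HSPICE or Spectre run on the bit cell, samples gathered this way are the kind that populate the extreme tails shown in the plots below.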

Let's look at the results of HSMC applied to the same bit cell and sense amplifier designs –


    Bit cell_i – 100 failures in first 5000 samples


    Sense amp delay – 61 failures in first 9000 samples


    QQ plot of cell_i – 1M MC samples and 5500/100M HSMC samples
    MC would have taken 100M samples against 5500 with HSMC


    QQ plot of sense amp delay – 1M MC samples and 5500/100M HSMC samples

The process is extended further to reconcile global (die-to-die, wafer-to-wafer) and local (within-die) statistical process variation. It is clear that this method is fast, due to the handful of samples that must be simulated; accurate, as no likely failure is rejected; scalable, as it can handle hundreds of process variables; and verifiable and usable.

The details can be found in the white paper, “High-Sigma Monte Carlo for High Yield and Performance Memory Design”, written by Trent McConaghy, Co-founder and CTO, Solido Design Automation, Inc.

    By Pawan Kumar Fangaria
    EDA/Semiconductor professional and Business consultant
    Email:Pawan_fangaria@yahoo.com


    Making your ARMs POP
    by Paul McLellan on 04-16-2012 at 6:30 am

Just in time for TSMC's technology symposium (tomorrow), ARM has announced a whole portfolio of new Processor Optimization Packs (POPs) for TSMC 40nm and 28nm. For most people, me included, the first question is 'What is a POP?'

    A POP is three things:

    • physical IP
    • certified benchmarking
    • implementation knowledge

Basically, ARM takes their microprocessors, which are soft cores, and implements them. Since so many of their customers use TSMC as a foundry, the various TSMC processes are obviously among the most important. They examine the critical paths and the cache memories and design special standard cells and other elements to optimally match the processor to the process. They don't do this just once: they pick a few sensible implementation points (highest-performance quad-core for networking, medium-performance dual-core for smartphones, lowest-power single-core for low-end devices). A single POP contains all the components necessary for all these different power/performance/area points. Further, although we all casually say things like 'TSMC 40nm', in fact TSMC has two or three processes at each node to hit different performance/power points, so they have to do all of this several times.

Then they provide the performance benchmarks that they managed to achieve, along with detailed implementation instructions showing how they did it. These are EDA tool-chain independent, since customers have different methodologies. But the combination of IP and documentation should allow anyone to reproduce ARM's results, or to get equivalent results with their own implementation after whatever changes they have made for their own purposes and to differentiate themselves from their competitors.

    Companies using the POPs get noticeably better results than simply using the regular libraries and doing without the specially optimized IP.

About 50% of licensees of the processors for which POPs have been available seem to have licensed them; currently there are 28 companies using them. Here's a complete list of the POPs (click to enlarge):
Of course ARM has new microprocessors in development (for example, the 64-bit ones already announced) and is also working closely with foundries at 20nm and 14nm (including FinFETs). So expect that when future microprocessors pop out, a POP will pop out too.

    About TSMC

TSMC created the semiconductor Dedicated IC Foundry business model when it was founded in 1987. In 2015, TSMC served about 470 customers and manufactured more than 8,900 products for various applications covering a variety of computer, communications and consumer electronics market segments. Total capacity of the manufacturing facilities managed by TSMC, including subsidiaries and joint ventures, exceeded 9 million 12-inch equivalent wafers in 2015. TSMC operates three advanced 12-inch wafer GIGAFAB™ facilities (fabs 12, 14 and 15), four eight-inch wafer fabs (fabs 3, 5, 6 and 8), one six-inch wafer fab (fab 2) and two backend fabs (advanced backend fabs 1 and 2). TSMC also manages two eight-inch fabs at wholly owned subsidiaries: WaferTech in the United States and TSMC China Company Limited. In addition, TSMC obtains 8-inch wafer capacity from other companies in which the Company has an equity interest.

TSMC's 2015 total sales revenue reached a new high of US$26.61 billion. TSMC is headquartered in the Hsinchu Science Park, Taiwan, and has account management and engineering service offices in China, Europe, India, Japan, North America, and South Korea.


    The Truth of TSMC 28nm Yield!
    by Daniel Nenni on 04-15-2012 at 7:00 pm

As I write this I sit heavyhearted in the EVA executive lounge returning from my 69th trip to Taiwan. I go every month or so, you do the math. This trip was very disappointing as I can now confirm that just about everything you have read about TSMC 28nm yield is absolutely MANURE!
    Continue reading “The Truth of TSMC 28nm Yield!”


Arteris Evangelizing High Speed Interfaces!
    by Eric Esteve on 04-15-2012 at 4:36 am

Kurt Shuler from Arteris has written a short but useful blog about the various high speed interface protocols currently used in the wireless handset (and smartphone) IP ecosystem. Arteris is well known for its flagship product, the Network-on-Chip (NoC), and the mobile application processor market segment is the first target for the NoC: it is the IP that helps increase overall chip performance by optimizing the internal interconnect, helps avoid routing congestion during place and route, and helps the SoC design team integrate the tons of various functions more quickly. Such an IP is more than welcome in such a competitive IC market segment! To be clear, the NoC supports interconnect inside the chip, while Kurt's blog deals with the various functions used to interface the SoC with the other ICs still located inside the system (smartphone or media tablet). The blog provides a very useful summary, in the form of a table listing the various features of: MIPI HSI (High Speed Interface), USB HSIC (High Speed Inter-Chip), MIPI UniPro & UniPort, MIPI LLI (Low Latency Interface) and C2C (Chip-To-Chip Link).

We will come back later to the listed MIPI specifications and USB HSIC, but I would like to highlight the last two on the list: LLI and C2C.

The first is based on high speed serial differential signaling and requires the MIPI M-PHY physical block, while the second is a parallel interface requiring only LPDDR2 I/Os, but both functions are used with the same aim of sharing a single memory (DRAM) between two chips, usually the application processor and the modem. The result is that the system integrator saves $2 in the bill of materials (BOM)… That may not sound so fantastic, until you start multiplying that saving by the number of systems built by an OEM. Multiply several dozen million units by $2 and you realize that the return on investment (against the additional cost of the C2C or LLI IP license) can come very fast, and can represent several tens of millions of dollars!

I should also add that Arteris markets both of these controller IP functions; while the company has full rights to C2C, LLI is one of the numerous MIPI specifications. To give you some insight, LLI was originally developed by one of the well known application processor chip makers; that company then offered LLI to the MIPI Alliance and asked Arteris to turn this internally developed function into a marketable IP, which Arteris is doing with undisputable success. As far as I am concerned, I think both LLI and C2C are "self selling": as soon as you know that you can save $2 on the system BOM, you can imagine that OEMs are pushing the chip makers hard to integrate such a wonderful function!
    About Arteris
Arteris provides Network-on-Chip (NoC) interconnect semiconductor intellectual property (IP) to System on Chip (SoC) makers so they can reduce cycle time, increase margins, and easily add functionality. Arteris invented the industry's first commercial network-on-chip (NoC) SoC interconnect IP solutions and is the industry leader. Unlike traditional solutions, Arteris' plug-and-play interconnect technology is flexible and efficient, allowing designers to optimize for throughput, power, latency and floorplan.

    To know more about MIPI, you can visit:

    MIPI Alliance web

MIPI wiki on SemiWiki

MIPI survey on IPNEST

    Reminder: for Kurt’s blog, just go here!

    Eric Esteve from IPNEST


    Handsets, what’s up?
    by Paul McLellan on 04-13-2012 at 3:02 pm

    So who’s in and who’s out these days in handsets?

It looks as if Samsung has finally achieved its long-held goal of being the largest handset vendor, taking over from Nokia, which had been the market leader for 14 years, since it passed Motorola in 1998. Nokia hasn't reported yet but has cut its forecast. Samsung had a record quarter: Bloomberg estimates that Samsung sold 44M smartphones in Q1, and 92M phones in total, easily beating Nokia's 83M. Samsung also has a goal of being number one in semiconductors and overtaking Intel, which it may well do, but not immediately.

Nokia, as I'm sure you know, is largely betting its future on Microsoft and, in the US, on AT&T. It launched its new Lumia phone over Easter weekend (when most AT&T stores were closed – not exactly an iPhone launch with people camped out overnight to get their hands on the new model). There were also technical glitches around not being able to connect to the internet, which is a pretty essential feature for a smartphone. My own prediction is that WP7 is too little, too late, and as a result Nokia is doomed. But maybe I underestimate the desperate need of the carriers to have an alternative to Android and iPhone that is more under their own control.

Funny, isn't it, to look back just 8 or 10 years to when the carriers were paranoid about Microsoft, worrying that it might do to them in phones what it did to PC manufacturers, where it took all the money (well, Intel got some too)? In the end it is then-tiny-market-share Apple that is taking all the money – half of the entire industry's handset profits by some reports. iPhone alone is bigger than the whole of Microsoft. Samsung is also making good money, but all the other smartphone handset makers such as HTC seem to be struggling. Now Microsoft is seen as the weakling, able to be bullied around the schoolyard by the carriers.

I still don't entirely understand Google's Android strategy. This quarter, for the first time, over 50% of new smartphones were Android based, but Google makes very little from each one, and all of that is incremental search revenue. Like the old joke that if all you have is a hammer everything looks like a nail, every business seems to look like search to Google. Amazon is certainly making money with Android, and so is Samsung. There are rumors that Microsoft makes more than Google does (through patent licenses to the major Android handset and tablet manufacturers). It remains to be seen what Google does with Motorola Mobility. If it favors Motorola too much it risks alienating its other partners and pushing them away from Android; if it doesn't favor Motorola at all, I don't see why they should suddenly become a market leader in smartphones.

The iPhone 5 is expected in June or July, presumably containing the quad-core A6, and presumably with LTE like the new iPad. But, of course, Apple isn't saying anything.


    Chip in the Clouds – "Gathering"
    by Kalar Rajendiran on 04-13-2012 at 1:29 pm


Cloud computing is the talk of the tech world nowadays. I even hear commentaries about how entrepreneurs are turned down by venture capitalists for not including a cloud component in their business plan, no matter what the core business may be. The commentary goes, "It's cloudy without any clouds." Add some clouds to your strategy and the future will be bright and sunny.

With such a strong trend, one might have expected companies within the $300B semiconductor market to have adopted "cloud" into their strategies by now, and the answer is yes, to varying degrees. Large established semiconductor companies, as well as semiconductor value chain producer companies, have built enterprise-wide clouds that let their engineers tap into vast compute farms. But access to the right number of latest-and-greatest compute resources may not always be available for the task at hand, independent of the size of the compute farm, because the compute farm is typically upgraded with new hardware on an incremental basis. So although the engineers may have their own private clouds for their chip design needs, peak-time compute resource needs are not addressed optimally. And then there is the matter of peak-time EDA tool licenses. Companies are still limited by the number of EDA tool licenses they own. If you're a major customer of an EDA tools supplier, this is not an issue, as peak-load license needs are addressed through temporary or short-term licenses. For everyone else, it is a painful negotiation with their EDA tools supplier. As much planning as can be done, peak-load needs cannot always be predicted well ahead of time, and the longer the negotiation with the EDA supplier takes, the further the customer falls behind on their tapeout schedule and consequently their time-to-market schedule.

In other words, today large and medium sized semiconductor companies have a private cloud for their compute needs and a kludge solution for their EDA license needs, given their stature with their EDA suppliers. This arrangement has several issues: (1) they shouldn't need to maintain their own compute servers, but they do so only to ensure on-demand access to compute power; (2) they have no automatic way to add EDA tool licenses on demand; and (3) if they use an EDA tool supplier's cloud, they don't get a seamless cloud-based design flow, simply because the design flow involves tools from more than one supplier.

    As for smaller semiconductor companies, they neither have their own private cloud nor do they have the same flexible access to EDA tools licenses on-demand. And if using an EDA tools supplier’s cloud, they face the same issue as the larger customers do.

If a secure cloud-based chip design platform from a third-party company provided an EDA-supplier-agnostic, seamless design flow, where the customer could tap into one particular set of tools for one chip project and a different set of tools (per the team's needs and skills) for another, that would be the ultimate offering. That ultimate offering is what I would call a "Chip in the Clouds" platform.

"Chip in the Clouds" may sound a lot like "head in the clouds." But it is not. The time has arrived for a "Chip in the Clouds" platform to play a key role in redefining how chips are designed and implemented. Why do I say this? Stay tuned for future installments of my blog, in which I'll discuss the driving factors for adoption as well as what is happening in the platform offering space.

    http://www.linkedin.com/in/kalarrajendiran