
Cadence Results: Good but Palladium under Price Pressure

by Paul McLellan on 07-21-2014 at 10:00 pm

Cadence announced their 2Q results this afternoon. I listened to the conference call.

You can read all the details of the results in the press release but the big picture is:

  • Revenue $379M, net income $23M GAAP or $64M non-GAAP (8c and 21c per share; non-GAAP beat estimates by 1c). The equivalent quarter last year was $362M, so less than a 5% increase
  • making adjustments to outlook for rest of 2014
  • increasing size of share repurchase
  • environment has improved since start of year (although as questioners pointed out, this hasn’t really shown through in the numbers)
  • high spots: differentiated IP especially into system companies, hardware/software co-design
  • Tempus and Voltus doing well: Tempus has over 30 customers, 5 new ones this quarter including Broadcom using it for sub-20nm flow; Voltus over 25 customers, Quantus QRC just announced. In the questions people were really trying to find out if Tempus was displacing PrimeTime but nobody was talking.
  • Completed acquisition of Jasper
  • Palladium demand strong, especially in mobile, driven by FinFET and 64-bit designs
  • Multiple foundry partners at 10nm
  • First EDA company engaged with ARM on the 64-bit v8 architecture
  • Introduced Protium (FPGA prototyping system)
  • Weighted average contract life was 2.2 years. This is unusually low and could be a red flag, but Geoff Ribar (CFO) said it was just the product mix this quarter and that nothing unusual was going on

Some significant IP introductions: “In Q2, we continued to introduce new IP and VIP products, including memory modules for the 3D hybrid memory cube (HMC), the industry’s first VIP for PCI Express Gen Four and silicon proven DDR4 IP for both the 28 nanometer FDSOI process and for the TSMC 16 nanometer FinFET process”

Where was all the business? 44% Americas, 23% Asia, 22% EMEA and 11% Japan. Or cutting the pie the other way, 21% functional verification, 30% digital IC (including signoff), 28% custom IC, 11% system interconnect and analysis (I think that is the trendy new name for PCB), 10% for IP.

One area to watch: Palladium shipments are up. That’s the good news. Palladium revenue is down. This is due to margins coming under pressure now that both Mentor (Veloce) and Synopsys (EVE) have competitive product lines. Cadence’s revenue mix is shifting toward software and away from hardware; however, the decline in hardware revenue is bigger than the increase in software revenue, so Cadence is slightly reducing the outlook for the rest of the year (before increasing it a little again due to Jasper) and reducing their margin outlook too.

Other factors hitting expenses: the Jasper acquisition. I’m not sure of all the details but I believe that most Jasper business that was already booked but not yet recognized as revenue was wrapped into the accounting of the acquisition. But the expense for Jasper will still come due each quarter. Jasper should be accretive next year though.

Another factor: Cadence had a voluntary retirement program which obviously included packages for people that took it. That causes a hit in the short term but reduces the expense run-rate in the long-term.

Outlook: for Q3 Cadence expects $390-400M revenue. For the whole of 2014 they expect bookings of $1.75-1.8B, which is a little up, primarily due to Jasper but also to slightly stronger software bookings. Revenue of $1.57-1.59B, the increase again being primarily due to Jasper. Operating margin is expected to be down a little, at 25-26% (versus a solid 26% before), due to declining hardware margins. They will repurchase about $300M of shares over a two-year period.

Nothing dramatic. Things to watch:

  • pressure on Palladium pricing (there was no announcement but it is clear from a couple of answers there will be a new version coming out early next year riding Moore’s law and getting more gates per $)
  • how well Tempus, Voltus and Quantus really are doing versus the competition (especially PrimeTime)
  • new product announcements in the pipeline
  • IP is now 10% of Cadence’s business but they have invested a lot to acquire it (especially Denali and Tensilica) so they need to grow it

SeekingAlpha transcript of the conference call is here.


More articles by Paul McLellan…


Intel vs AMD

by Daniel Nenni on 07-21-2014 at 6:00 pm

While listening to the Intel and AMD conference calls last week I was reminded of the ATI acquisition by AMD and the painfully long cultural assimilation that ensued. The title of this blog could just as easily have been “Custom vs Synthesizable Design Cultures” or “The Real Reason Why AMD is Fabless” because that is closer to how this blog ended up.

During my stint at Virage Logic I was the token executive assigned to ATI Technologies, which meant that if there was a serious problem I was brought in to help resolve it. As a bleeding-edge fabless semiconductor company and one of Virage’s largest customers, ATI of course had lots of problems, so I spent quite a bit of time there. ATI was a great acquisition for AMD as it provided both industry-leading GPU technology and synthesizable fabless semiconductor design expertise.

Culturally speaking however it was an “opposites attract” kind of thing. Internally the ATI portion was called AMD Red while the original part was called AMD Green. The color coding was based on logos but psychologically speaking it was more of a traffic light theme with the ATI design culture stopping. Remember, it was AMD that coined the term “Real Men Have Fabs” which could have easily read “Real Men do Custom Design”. In the end of course the ATI culture prevailed and AMD became largely a synthesizable design company. Look at AMD today and ask yourself: Self, would AMD still be in business without the ATI acquisition? My self says no, probably not.

Intel culture however is still custom design centric, which is why I was pleasantly surprised when they announced Quark at IDF last year. Quark is the first Intel synthesizable core, reported to use 1/10th the power of Atom at 1/5th the size. It is targeted at new markets such as wearable computing, disposable medical devices, and the Internet of Things (IoT). During the Q&A with Intel CEO Brian Krzanich, however, it was disclosed that only Intel can synthesize the core, so Quark will be a closely held product for now.

Here is the current Quark propaganda:

Right now Quark powers the Intel Galileo developer board and the Intel Edison microcomputer. Unfortunately I have not heard of any significant Quark design wins since the announcement but I still view this as an important first step in bringing Intel into the world of synthesizable design.

Which brings us to another question for self: Self, will Intel custom designed SoCs ever catch up with synthesizable SoCs from Qualcomm, Apple, MediaTek, Huawei, and Samsung? Again, my self says no, probably not.

More Articles by Daniel Nenni…..


When is a Million-Year MTBF Too Short?

by Jerry Cox on 07-21-2014 at 8:00 am

The reliability metric, Mean Time Between Failures (MTBF), is often misunderstood. Use of an MTBF metric generally assumes a random failure process, one that is very infrequent and has no memory of past failures. Such failure modes can occur in System-on-Chip (SoC) designs and include radiation effects, synchronizer malfunctions at clock domain crossings as well as other rare failures triggered by highly unusual combinations of events. In such a random failure process, 63% of the systems described by a particular value of MTBF will have failed before the end of the MTBF period. Viewing the system failures as occurring at “one MTBF” is quite misleading.
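The 63% figure follows directly from the memoryless assumption; here is a quick sketch, assuming only that time-to-failure is exponentially distributed with the given MTBF:

```python
import math

# For a memoryless failure process, time-to-failure is exponentially
# distributed, so P(failed by time t) = 1 - exp(-t / MTBF).
def fraction_failed(t, mtbf):
    return 1.0 - math.exp(-t / mtbf)

# By one MTBF, about 63% of units have already failed.
print(round(fraction_failed(1.0, 1.0), 3))  # 0.632
```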

To investigate the impact of using MTBF to describe such SoC failures in safety-critical products, let us create a product story that brings the business issues into focus. Consider a mass-produced product (for example, millions of cars/year), each with the same specific failure-risk that impacts multiple models and endures through multiple model years. We wish to examine the aggregate liability resulting from fatalities arising from such a defect and will use the Toyota Sudden Unintended Acceleration (SUA) experience to assess liability (40,000 SUA reports and over 200 claims, with two initial claims settled for $1.5M each).

As Toyota and a surprising number of other automobile manufacturers have found, these losses can grow dramatically as long as the defect is not eliminated in succeeding model years. To simplify the analysis, assume that wrongful-death suits are settled for S = $1.5M and that cars with the defect are produced with a total volume of V = 5M cars/year over all affected models. The probability that the defect leads to a fatal accident within a year is a very small number p. In the first year of production of cars with the defect, the estimated liability loss will be ½SVp since the average sale date will be halfway through the first year. In the second year of production, essentially all the cars from the first year are still on the road, with estimated liability loss SVp, but now a second cohort of cars has been added, with estimated liability loss of ½SVp. By the end of the second year the total estimated losses sum to 2SVp.

To extend the estimate to later years consider the following table showing the annual estimated liability losses (rows) for vehicles sold in a given year (columns).

As noted above, at the end of the second year the total estimated loss is 2∙SVp (shown within the smaller oval) and at the end of the third year it is 4.5∙SVp (shown in the larger oval). The grand total liability after N years can be found by summing all N² cells in the table (note that the average loss per cell is ½SVp).

Thus, the losses grow quadratically with the number of years that the defect goes unremedied. This is a sad fact that the automotive sector is learning the hard way.

We have assumed that the defect is one that can lead to a fatal event at any time. As a result, p = 1/MTBF (with MTBF expressed in years) is the annual probability that a given car suffers a fatal event. Because p is a very small number and because we assume no recall, the cars removed from the road because of a fatal event during a year have a negligible effect on the total cars on the road in later years.

This product story ignores several important issues:

  • Damage to the manufacturer’s reputation and other settlements may have a greater impact on the company than the liability losses. In the Toyota SUA example, settling 200 cases may cost $300M, but the liability associated with legal settlements based on the decreased resale value of Toyota vehicles is expected to exceed $2B.
  • Not all SoC failures result in serious accidents. From the Toyota SUA data, only one in 200 complaints resulted in a suit against the company.
  • An SoC may have many independent instances of the same kind of defect, increasing the probability of failure proportionately.
  • Failures and liability are likely to increase with time even more than predicted above as a result of transistor aging.
  • A 1-month post-fabrication test has less than a 1% chance of detecting an SoC that will fail in its 10-year lifetime.

Provided one is mindful of these caveats to our product story, the average resulting losses in millions of dollars in each year after introduction of the defect, up until its mitigation, are shown in the figure below:

Thus, after 10 years the predicted losses are $375M resulting from a fatal SoC defect with a million-year MTBF (p = 0.000001). Even with a billion-year MTBF (p = 0.000000001) the average 10-year losses are $0.4M. In summary, unless it is certain that a product has an MTBF of greater than a billion years, there is both a business and an ethical case to ensure that such failures are effectively minimized, carefully monitored and their undesirable effects reliably mitigated. Because of the rapid growth in liability, manufacturers should develop a replacement part and issue a recall with the minimum of delay.
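The dollar figures above can be reproduced in a few lines of code. This is just a sketch of the cohort model described earlier (S, V and p as defined in the product story), not an additional data source:

```python
# Cohort liability model: settlement S ($), volume V (cars/year),
# p = 1/MTBF = annual fatal-event probability per car.
# A cohort sold in year j contributes ~0.5*S*V*p in its partial first
# year on the road and S*V*p in each full year thereafter.
def total_liability(S, V, p, years):
    total = 0.0
    for year in range(1, years + 1):          # calendar year of exposure
        for cohort in range(1, year + 1):     # model year of the cars
            total += 0.5 * S * V * p if cohort == year else S * V * p
    return total

S, V = 1.5e6, 5e6   # $1.5M per settlement, 5M cars/year
p = 1e-6            # million-year MTBF

# Matches the closed form 0.5*S*V*p*N^2: $375M after 10 years.
print(total_liability(S, V, p, 10) / 1e6)    # 375.0
print(total_liability(S, V, 1e-9, 10) / 1e6) # ~0.375, i.e. ~$0.4M
```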


Setting the Record Straight on FD-SOI Costs

by Scotten Jones on 07-20-2014 at 7:00 pm

I recently published an article on Semiwiki “Is SOI Really Less Expensive”. That article was the result of months of careful research and analysis. I looked at planar FDSOI versus bulk planar, bulk FinFETs and FinFETs on SOI at three different nodes. I took a consistent set of assumptions with respect to the fab used to run the processes, the number of metal layers, Vts, etc. to develop the fairest direct comparison I could construct. During this process I consulted with wafer manufacturers, technical experts, technical papers, patents and other sources. I then ran all of the different processes through the commercial IC Knowledge – Strategic Cost Model tool.

My original article with comments is posted here: https://www.legacy.semiwiki.com/forum/content/3599-soi-really-less-expensive.html

Soon after I published the aforementioned article, Eric Esteve published a lengthy comment on it. His comment is available on Semiwiki appended to my original article for anyone interested in reading it, but there are a couple of key points that I would like to address here. In his comment Mr. Esteve made two key claims relative to my cost analysis and conclusions. The first was that I overestimated the cost of the SOI wafers in full production. The second was that I compared a FinFET with 3 Vts to planar FDSOI and that I should be using 4 Vts or even 5 Vts for the bulk FinFET. Mr. Esteve then goes on to conclude that the starting FDSOI wafer price should be reduced, giving a 5% reduction in cost relative to FinFETs on bulk, and that changing the FinFET on bulk to 4 Vts would also give a 5% reduction. In total he claims that the relative cost for FDSOI versus FinFETs on bulk should be 10% better than my calculation. As a side note, my calculation was that FinFETs on bulk are 6% less expensive to produce than planar FDSOI, so if Mr. Esteve’s analysis were correct FDSOI would now be 4% less expensive.

In the comment section for my original article I replied to Mr. Esteve’s comment, explaining that 3 Vts are sufficient for many designs and that, running the numbers, starting FDSOI wafer costs would have to be cut in half just to reach cost parity with bulk FinFETs.

After Mr. Esteve posted his comment, but apparently before I posted my reply, Adele Hars, editor-in-chief of Advanced Substrate News, took Mr. Esteve’s comments and made them into an article on Advanced Substrate News entitled “Is FD-SOI Cheaper? Why Yes!” For those of you who don’t know, Advanced Substrate News is a publication that promotes SOI. I first became aware of this from LinkedIn, where Ms. Hars posted a link to the article. I commented on Ms. Hars’ LinkedIn post that Mr. Esteve’s analysis was incorrect; I even went back and found that adding a 4th Vt to the FinFET on bulk only changed the result by 1% and that the FDSOI starting wafer cost would still have to be cut in half to reach cost parity with FinFETs on bulk. Ironically, Ms. Hars commented in her article that the “devil is in the details” while quoting Mr. Esteve’s analysis, which gets the details of his calculation wrong! I can now only find Ms. Hars’ post and discussion on LinkedIn in the FD-SOI design community, and my comments are missing. I have also tried twice to comment on the article directly on the Advanced Substrate News web site, and both times my comments have failed to appear.

[UPDATE – after I posted this article Adele Hars contacted me and reported she had been unable to access the Advanced Substrate News web site for several days and has now posted my comment. I want to publicly acknowledge here that I very much appreciate this support of open debate.]

The LinkedIn post is here: https://www.linkedin.com/groups/Debate-raging-on-costs-FinFET-5155646.S.5890494542842470403

The Advanced Substrate News article is here: http://www.advancedsubstratenews.com/2014/06/is-fd-soi-cheaper-why-yes/

Mr. Esteve has now taken his comments and expanded on them in a new Semiwiki article. The article is available for the interested reader on the Semiwiki web site, but he basically repeats the criticism from his comment on my article while adding that my article is “slightly biased”. In the comments he also says my results are “10% biased”.

Mr. Esteve’s new article is here: https://www.legacy.semiwiki.com/forum/content/3674-keywords-fd-soi-cost-finfet.html?postid=14967#comments_14967

In the interest of “setting the record straight” I would like to make a few comments on all of this:

  • In my original work I found a 6% cost advantage for FinFETs on bulk versus FDSOI at 14nm. This was based on 3 Vts; Mr. Esteve believes this is unfair and that I should be using 4 Vts or 5 Vts. If you read all the comments on both my article and Mr. Esteve’s recent article you will see that there are comments both agreeing and disagreeing with Mr. Esteve. It appears that this is a point where knowledgeable experts disagree, and it is likely application specific.
  • In the interest of addressing Mr. Esteve’s concerns I have rerun my analysis with 4 Vts for FinFET on bulk and found that FinFET on bulk is still 5% less expensive than FDSOI. Mr. Esteve’s 5% change for adding a Vt is incorrect; the correct value is 1% for FinFET on bulk.
  • The starting wafer prices I used in my analysis came from discussions with manufacturers of both SOI and Epi wafers and reflect current pricing for both. I agree that FDSOI prices will likely come down if and when FDSOI volumes ramp up, but they will have to come down 40% or even 50% for FDSOI to reach parity with FinFET on bulk costs. With 14nm ramping later this year that seems like an awfully big drop to me.
  • The FD-SOI design community on LinkedIn and Advanced Substrate News both appear to be blocking or deleting my comments. In my opinion that kind of censorship has no place in this kind of debate. I would like to contrast this with Semiwiki, owned by Daniel Nenni. Mr. Nenni publishes a lot of articles on Semiwiki that are hotly debated, and I have seen many comments strongly disagreeing with Mr. Nenni’s views published on Semiwiki. Whether you agree with Mr. Nenni’s viewpoints or not, I believe his lack of censorship is to be applauded. It also, in my opinion, makes Semiwiki a more credible site for technical information and debate.
  • Mr. Esteve uses the phrases “slightly biased” and “10% biased” in his article and comments when referring to my work. If you google “bias” the first definition that comes up is “prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.” I can’t speak to whether that is what Mr. Esteve actually meant when he used the word “biased”, but that is the common usage of the word. I have no allegiance to either FDSOI or FinFETs. I have clients deeply involved in both technologies. Ironically, I have actually led process development efforts on SOI, I am coauthor of two issued patents on high-voltage integrated circuit technology on SOI, and by the late nineties the operation I led had sold over 10 million ICs on SOI!
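To make the starting-wafer parity argument concrete, here is a back-of-the-envelope sketch. The dollar figures are illustrative assumptions of mine, chosen only to be consistent with the stated results (the ~6% gap and the cut-in-half conclusion); they are not figures from the cost model:

```python
# Hypothetical numbers chosen to match the stated results:
fdsoi_wafer = 5000.0        # assumed fully processed FDSOI wafer cost
bulk_finfet_wafer = 4700.0  # ~6% cheaper, per the original analysis
soi_substrate = 600.0       # assumed FDSOI starting-wafer price

# The whole cost gap must be closed by a cheaper starting wafer:
gap = fdsoi_wafer - bulk_finfet_wafer
cut_needed = gap / soi_substrate
print(f"substrate price cut needed for parity: {cut_needed:.0%}")  # 50%
```

With these assumed numbers the required cut is 50%, matching the cut-in-half conclusion; the point of the sketch is that the required cut scales with the cost gap divided by the substrate price.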

    In closing I would just like to point out that after all this analysis and debate the conclusion from my original article still stands:

    “Using the same yield per mask layer assumptions for both planar FDSOI and FinFETs it has been shown that the costs for planar FDSOI and FinFETs on bulk or SOI are all comparable at the 14nm node. Decisions on which process to pursue are therefore expected to be driven by factors other than cost.”

    Scotten W. Jones, President, IC Knowledge LLC


    Rest in Peace CPUs, Hello FPGAs

    by Luke Miller on 07-20-2014 at 9:00 am

    FPGAs in many ways are still a bit mysterious to some folk. I was at a high-level summit in April, and I realized that many there had no idea what an FPGA was. They knew at least what a CPU was and that their kids talk about GPUs. A good analogy for an FPGA compared to a CPU is something like this. Think of the FPGA and CPU as a player piano if you will. Will you? Both technologies can play any music you load into memory. BUT unlike the CPU, within the FPGA you can change not only the music but also the engine that plays the music. Using the CPU, you are stuck with the same architecture: ARM, x86 and so on. The FPGA can be ANY architecture you want and can even emulate a CPU if you so desire. Also, using the CPU you are stuck with its memory interfaces and IO. Not so with the FPGA.

    So why are FPGAs faster than a CPU even at a much lower clock frequency? I must say, dear reader, you ask some great questions. FPGAs leverage parallelism, using a much lower clock frequency (100-500 MHz) where CPUs clock in at 667 MHz to 3 GHz or so. This allows the FPGA to use drastically lower power to solve the same problem. The other major benefit of the FPGA, usually inherent in the parallelism, is low latency and determinism. As technology keeps moving forward, latency and determinism will be more important than ever.

    To highlight these points, I saw a design that really impressed me, and it uses only one FPGA. “Intilop Releases 16K Concurrent-TCP-Session Hardware Accelerator Verified and Tested on Xilinx Virtex-7 FPGA VC707 Evaluation Kit”. Read the press release carefully and you will quickly see that this is something that one or even 16 CPUs could not do. The days of relying on a CPU are slowly going bye-bye. The power savings, and the flexibility of choosing the correct architecture for the job, are creating a revolution for FPGAs. Does this mean death for CPUs? No, but they will not be so ‘central’, and I expect more Zynq-like products that merge the CPU and FPGA when needed, but they will not always be needed. I just heard a gasp! In many systems the FPGAs are doing the bulk of the work and are ‘central’ while the CPUs are performing the BIT, control and status.

    How does Xilinx play in this market? Very well; they invented the FPGA. There is no better FPGA than Xilinx, and no one can argue about how well they execute on every new product. As design schedules keep accelerating there is no doubt Xilinx will meet any future or current need. Don’t believe me? Call your local Xilinx distributor and ask for cost/dates for 20nm UltraScale, and try the same for Altera Arria 10. Xilinx is probably a good 8 months to a year ahead. It is easy to make flashy PowerPoints and brochures; it is another thing to make hardware that works. Utilizing Xilinx FPGAs is easier than ever. Xilinx’s Vivado HLS allows the programmer to use native C/C++ or even SystemC as design entry. OpenCL is another possibility that is currently under test by a handful of customers.

    The time is here on many systems to start asking, why are we using a CPU? GPU? Check out Xilinx.com today, and begin to familiarize yourself with the wonderful world of FPGAs.


    Samsung Foundry Explained!

    by Daniel Nenni on 07-19-2014 at 7:00 pm

    Rather than watch the World Cup battle for third place, my beautiful wife and I spent last Saturday afternoon at the CASPA Wearables Symposium. The most interesting presentation was from Samsung because it included slides on their foundry offering. In regards to wearables, I still don’t see the ROI I need to buy one, yet. We are getting close though and I will write more about that in another blog.

    My good friend Kelvin Low presented for Samsung. Kelvin and I first met when he was at Chartered Semiconductor, then again at GlobalFoundries; today Kelvin is Senior Director of Marketing for the company’s foundry business. He is responsible for developing the strategic product direction of the foundry’s advanced process nodes – 28nm and below. In addition, he is working to develop the foundry ecosystem. It is nice to see someone with serious foundry experience in this position, absolutely.

    One thing I noticed is that Samsung is not offering 20nm, so the rumor that TSMC will share 20nm customers with Samsung is absolutely false. As I mentioned in The iPhone6 will have TSMC 20nm, Absolutely!, the iPhone6 is all TSMC 20nm. Samsung skipped 20nm in favor of 14nm, which seems to be a wise decision given the comments from Morris Chang on the TSMC Q2 2014 conference call last week:

    Morris Chang: After two years of meticulous preparation, we began volume shipments of our 20 nanometer wafers in June. The steepness of our 20 nanometer ramp sets a record. We expect 20 nanometer to generate about 10% of our wafer revenue in the third quarter and more than 20% of our wafer revenue in the fourth quarter and we expect the demand for 20 nanometer will remain strong and will continue to contribute more than 20% of our wafer revenue in 2015.

    TSMC 20nm production is about six months later than I had predicted but is timed perfectly for the Apple iPhone6 builds. And yes, TSMC will dominate the 20nm node just like they did 28nm. The 20nm delay also delayed 16nm of course, since 16nm “leverages” the 20nm process. Even IF all goes well with 14nm, which is a big if, TSMC will have big profits from 20nm for the next few years. Any 14nm delays will make those profits even bigger.

    Morris Chang: Volume production of 16 nanometer is expected to begin in late 2015 and will be fast ramped up in 2016. The ecosystem for 16 nanometer designs is current and ready. A few years ago, in order to take advantage of special market opportunities, we chose to develop 20 SoC first and then quickly follow with 16 nanometer. We chose this sequence to maximize our market share in the 20 nanometer…

    This means that Apple is back with Samsung for the iPhone6 refresh in 2015 using 14nm. As I have mentioned before, Samsung has a 6 month FinFET lead on TSMC with production starting in 1H 2015. I have also blogged that the big fabless companies will go back to second sourcing at 14nm (28nm and 20nm were single source to TSMC). The result being, per Morris Chang:

    As the 2016 foundry competition unfolds we believe our decision to have been correct. Number one, in 20 SoC, we believe we will enjoy overwhelmingly large share in 2014, 2015 and onwards. Number two, in 16 nanometer, TSMC will have a smaller market share than a major competitor in 2015. But we’ll regain leading share in 2016, 2017 and onwards. Number three, if you look at the combined 20 and 16 technologies, TSMC will have an overwhelming leading share every year from 2014 on.

    The major competitor of course is Samsung. This level of transparency is greatly appreciated even though it caused TSM stock to drop 6% the day after the call. In the race to foundry FinFETs Samsung wins, TSMC places, and Intel barely shows. 10nm FinFETs could be a very different story however.

    Morris Chang: We work closely with our key customers to co-optimize our 10-nanometer process and design. We expect to have customer tape-outs in the second half of 2015.

    Translation: Test chips will tape-out in 2015 but I do not expect 10nm production to start until 2017. Intel and Samsung are hot on the trail for 10nm which I will blog about in more detail later. Hopefully it will be a real three horse race this time, for the greater good of the semiconductor industry!

    Also Read: Intel Custom Foundry Explained!

    More Articles by Daniel Nenni…..


    SEMICON Update: 450mm, EUV, FinFET, and More

    by Scotten Jones on 07-19-2014 at 3:00 pm

    I spent all of last week at SEMICON West meeting with customers, potential customers, partners and various industry analysts and experts. I was involved in many interesting discussions over the course of the week and I thought I would share some of the more interesting observations:

    Alternate Fin Materials Pushed Out
    I have for some time been expecting that Intel would introduce Germanium (Ge) fins for PMOS at 14nm. Furthermore, I had seen an announcement from TSMC that they would be introducing Ge fins at 10nm. My view was that Intel would be the first adopter of Ge fins at 14nm and then usage would become more widespread at 10nm. I also expected we might see Indium Gallium Arsenide (InGaAs) fins for NMOS first used at 10nm with widespread use at 7nm. What I am now hearing is that Intel has abandoned Ge fins for 14nm and TSMC has pushed them out to 7nm. It looks like adoption of Ge has been pushed out one node industry-wide, and presumably InGaAs will also be pushed out at least one node. Furthermore, what I am hearing is that Intel’s 14nm process is now a shrink of the 22nm process with no new technology introductions. I am also hearing Intel is still struggling with 14nm yields but is ramping up anyway and working on yield as they go. Running volume can be a very effective way to make rapid progress on yield if you can afford the scrap.

    450mm Still Delayed
    After SPIE earlier this year I reported that 450mm was delayed until after 2020. Everything I heard at Semicon was consistent with this. My sense is that Intel and TSMC in particular have both backed off on their timing.

    EUV Delays
    The continued delays in EUV are making insertion at 10nm less and less likely. Insertion at 7nm will either require a high-NA tool or multi-patterning. Higher NA has a whole set of problems of its own to add to the current problem list. Meanwhile companies continue to cost-reduce and perfect multi-patterning.

    28nm A Long Lived Node?
    There have been a number of comments in the media about 28nm being a long-lived node. I would just like to point out that all foundry nodes tend to be long-lived. You can still get 500nm, 350nm, 250nm, 180nm, 130nm, 90nm, 65nm and 40nm nodes from TSMC, UMC and many others. In fact it wasn’t that long ago that I was hearing that 65nm was still the sweet spot for most designs; it might still be true. I think the real point people are trying to make is that 28nm will continue at higher-than-normal levels for longer than normal because of a perceived lack of value at 20nm and below. I will discuss this more in the next two sections.

    20nm Is Ramping at TSMC
    Morris Chang has stated he expects 20nm to be the fastest-ramping node in TSMC history. There are reports that Qualcomm and Apple both have 20nm designs running at TSMC (Chipworks just announced an analysis of a 20nm Qualcomm part, presumably purchased on the open market). It is pretty clear 20nm is going to be a big node at TSMC; I think the more interesting question is whether 20nm will take off anywhere else.

    Scaling and cost
    There have been a number of reports that cost reductions will end at 28nm and that at 20nm and 14nm cost per transistor will rise. Historically each new node has resulted in an increase in wafer cost but at the same time an even greater rise in transistor density has yielded a cost per transistor reduction. I had been planning to write an article on this and I still might, but I thought I would share some observations here.

    At 20nm extensive multi-patterning is required, driving up wafer cost more than “normal” versus the 28nm node; however, TSMC is reporting a 2x increase in transistor density for 20nm versus 28nm. 20nm wafer costs are not 2x 28nm wafer costs, so cost per transistor should go down, although by less than “normal”, and I am in fact hearing that early adopters at TSMC are seeing reductions in cost per transistor.

    This brings us to 14nm (or 16nm as TSMC calls it). 14nm is a very interesting node for several reasons. First of all, it is the first FinFET node for most logic producers. Secondly, the major foundries have all decided to use the same backend for 14nm that they used for 20nm, suggesting little or no increase in transistor density will result. There have been some projections of major increases in wafer cost at 14nm due to “FinFET and multi-patterning”. Since most logic multi-patterning is in the backend and the backend isn’t shrinking, I wouldn’t expect multi-patterning to drive a lot of additional cost versus 20nm. Also, FinFETs actually have simpler processes than bulk planar (fewer mask layers, although some steps are very difficult), so I am not expecting an unusually large cost-per-wafer increase for 14nm versus 20nm. In terms of transistor density I am hearing there will be about a 5% to 10% improvement. The net result is I would expect cost per transistor to be relatively flat to slightly up at 14nm. What I am hearing is that cost per transistor will actually go down, but only slightly.
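As a rough sketch of this cost-per-transistor arithmetic: cost per transistor moves with the wafer-cost ratio divided by the density ratio between nodes. The ratios below are illustrative assumptions of mine, not reported figures:

```python
# Cost per transistor scales with wafer cost and inversely with density.
def relative_cost_per_transistor(wafer_cost_ratio, density_ratio):
    return wafer_cost_ratio / density_ratio

# 20nm vs 28nm: assume ~1.6x wafer cost (multi-patterning), 2x density.
print(relative_cost_per_transistor(1.6, 2.0))   # 0.8 -> cost per transistor down

# 14nm vs 20nm: assume ~1.05x wafer cost, ~1.075x density (same backend).
print(round(relative_cost_per_transistor(1.05, 1.075), 3))  # ~0.977, roughly flat
```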

    At 10nm all the foundries are expected to do a full shrink. 10nm will require more complex multi-patterning schemes, and I expect that the density improvement will result in a reduction in cost per transistor, although likely a smaller one than the “normal” trend.
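    The cost-per-transistor argument across these nodes can be sketched with a toy model. Cost per transistor is just relative wafer cost divided by relative transistor density; all of the multipliers below are my own illustrative assumptions (chosen to match the qualitative claims above), not published figures.

```python
# Toy cost-per-transistor model, normalized to 28nm = 1.0.
# All wafer-cost and density multipliers are illustrative assumptions.
NODES = {
    # node: (relative wafer cost, relative transistor density)
    "28nm": (1.00, 1.00),
    "20nm": (1.60, 2.00),  # multi-patterning raises wafer cost, but density doubles
    "14nm": (1.66, 2.10),  # 20nm backend reused, so only ~5% density gain
    "10nm": (2.30, 3.20),  # full shrink with more complex multi-patterning
}

def cost_per_transistor(node):
    """Relative cost per transistor versus the 28nm baseline."""
    wafer_cost, density = NODES[node]
    return wafer_cost / density

for node in NODES:
    print(f"{node}: {cost_per_transistor(node):.2f}x the 28nm cost per transistor")
```

    With these assumptions, 20nm comes out noticeably cheaper per transistor than 28nm, 14nm is roughly flat versus 20nm, and 10nm resumes the decline, though more slowly than the historical halving per node.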

    200mm Growing Again
    SEMI presented a very interesting data point: 200mm wafer volumes have gone up this year. Typically at this point in the life cycle of an older wafer size it would be in a slow, steady decline, and yet 200mm has grown recently. On top of this, presentations discussing “The Internet of Things” all seem to include a discussion of all the additional 200mm fabs that will be needed to make the sensors. 300mm represents the majority of silicon wafer area run by the semiconductor industry today and is rapidly growing; 150mm and smaller wafer sizes are all declining; but it looks like 200mm will also see some growth for at least the next few years. It will be interesting to see how this all plays out if 450mm is ever introduced. 450mm could result in a lot of low-cost 300mm surplus hitting the market that might drive applications to jump from 200mm and smaller wafer sizes to 300mm. Low-cost used 300mm equipment and fabs have already led to the TI 300mm analog fab and the Infineon 300mm discrete fab.
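    For context on why each wafer-size transition is so attractive, usable area grows with the square of the diameter, so each step (200mm to 300mm, 300mm to 450mm) delivers 2.25x the area per wafer. A quick calculation (ignoring edge exclusion and flats/notches):

```python
import math

def wafer_area_cm2(diameter_mm):
    """Wafer area in cm^2, ignoring edge exclusion and flats/notches."""
    radius_cm = diameter_mm / 10 / 2
    return math.pi * radius_cm ** 2

for d in (150, 200, 300, 450):
    print(f"{d}mm wafer: {wafer_area_cm2(d):.0f} cm^2")

# Each step up gives 2.25x the area per wafer, which is why a 450mm
# ramp could free up a lot of inexpensive used 300mm capacity.
print(f"{wafer_area_cm2(300) / wafer_area_cm2(200):.2f}x")
print(f"{wafer_area_cm2(450) / wafer_area_cm2(300):.2f}x")
```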

    3D NAND
    There was a very interesting tech spot on 3D NAND run by Mike Corbett of Linx Consulting. Samsung presented on the market, Applied Materials on deposition challenges, Tokyo Electron on etching challenges and Mark Thirsk of Linx on materials and cost. 3D NAND appears positioned to provide a future scaling path for NAND with much better performance and eventually better cost. To date 3D NAND is going into high-end (less cost-sensitive) applications, but with the introduction of 32-layer devices later this year wider usage is expected. During the question-and-answer part of the session one person commented that the 3D cell sizes are a lot bigger than people realize. If you do the math he is correct: I get something like 26F² as the effective cell size based on the size of the arrays. When you take into account 24 layers and 2 bits per cell, the area per bit is larger than current 16nm 2D NAND. However, when you get to 32 layers the area per bit is smaller, and additional layers only increase the lead. 3D NAND continues Moore’s law and scaling by going into the third dimension. This is a technology to watch, and it will be interesting to see if analogous solutions can be developed for DRAM and even logic.
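    The area-per-bit comparison above can be checked with a quick calculation. The 26F² effective cell size and the layer counts come from the session; the feature sizes (16nm for 2D NAND, and an assumed ~35nm feature size for the 3D footprint) are my illustrative assumptions.

```python
# Area per bit: 2D NAND vs 3D NAND.
# Assumptions: 2D NAND is a 4F^2 MLC cell at F = 16nm; 3D NAND has a
# 26F^2 effective cell footprint at an ASSUMED F = 35nm, with 2 bits
# per cell (MLC) shared across N stacked layers.
F_2D, F_3D = 16, 35          # nm (F_3D is an assumed figure)
BITS_PER_CELL = 2

area_2d = 4 * F_2D**2 / BITS_PER_CELL      # nm^2 per bit

def area_3d(layers):
    """nm^2 of die footprint per stored bit for an N-layer 3D NAND stack."""
    return 26 * F_3D**2 / (layers * BITS_PER_CELL)

print(f"2D 16nm NAND: {area_2d:.0f} nm^2/bit")
for layers in (24, 32, 48):
    print(f"3D NAND, {layers} layers: {area_3d(layers):.0f} nm^2/bit")
```

    With these assumptions the 24-layer device indeed comes out worse than 16nm planar NAND per bit, the crossover happens around 32 layers, and every additional layer widens the gap.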

    The Shrinking Show Floor
    Several years ago I had a booth at SEMICON West, but I didn’t find the cost-benefit trade-offs to be favorable. The last couple of years I have foregone a booth and just attended the show, setting up meetings to take advantage of so many of the people I wanted to meet being in the same place at the same time. Walking the show floor this year, it struck me as smaller than in the past. I have the impression that more and more companies are forgoing a booth on the show floor for off-site meeting space in surrounding hotels. If this is in fact an accurate view of what is happening, it strikes me as a bad trend for SEMI and the show, since they presumably get no revenue from off-site meetings.


    New Release of Semulator3D at #semiconwest

    by Paul McLellan on 07-19-2014 at 9:01 am

    One of Coventor’s flagship products is SEMulator3D, and at Semicon West they announced a new version, 2014.100.

    SEMulator3D is a powerful 3D semiconductor and MEMS process modeling platform. It uses highly efficient physics-driven voxel modeling technology. It models the physical effects of process steps, which is where all the current challenges are.

    Combining the two-dimensional design layout with the process description gives it the capability to model the process flows and determine what will be manufactured with that combination of layout and process. The basic idea, as with all modeling, is to enable experiments to be done quickly and efficiently. Since the alternative is to actually build chips and then take measurements, which requires millions of dollars of investment and months of delay, the virtual fabrication route is especially attractive. This is especially important in the early stages of process development, since it can drastically shorten the whole development and ramp-to-volume roadmap.

    Customers using the product include IDT, Infineon, IBM, Cavendish Kinetics and many others (who are being shy about going public). And talking about it in public at a presentation I attended at Semicon West was GlobalFoundries, who I didn’t even know was a customer (I’ll cover this in more detail in another blog once I get the slide set). They are also in a formal development program at imec for 7nm (which I will cover in a later blog).

    The new release has major improvements on the modeling of etch. There are also improvements in accuracy for modeling deposition and CMP (chemical-mechanical polishing), which require analysis of planarization.

    The biggest feature of the release is a new visibility-limited deposition model. This model dramatically improves the predictive accuracy for directional depositions, like Physical Vapor Deposition (PVD) and other plasma enhanced deposition processes. The key features of the visibility-limited deposition model are the “Source Sigma”, reflecting the directional distribution of the process, and the “Isotropic Ratio”, reflecting the non-visibility-limited component of the deposition process. This model enables a large variety of processes, with a wide range of results. As semiconductors get more vertical (FinFETs, TSVs, DRAM, MEMS etc) this is increasingly important.


    The diagram above shows the same piece of a design modeled with various values for the source sigma and the isotropic ratio. You can see the difference from very directional deposition in the top row to much more conformal in the bottom row.
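    To make the two knobs concrete, here is a toy version of a visibility-limited deposition model for the bottom of a trench. This is my own simplified sketch, not Coventor’s actual algorithm, and the parameter values are illustrative: the source sigma sets the angular spread of the directional flux, and the isotropic ratio sets the conformal component that deposits everywhere.

```python
import math

def bottom_thickness(depth, width, source_sigma, isotropic_ratio, nominal=1.0):
    """Toy visibility-limited deposition at the bottom of a trench.

    The directional component is attenuated by the fraction of the
    source's angular distribution (Gaussian with standard deviation
    source_sigma, in radians) that can "see" the trench bottom; the
    isotropic component deposits everywhere equally.
    """
    # Half-angle of the cone of source visible from the trench bottom.
    visible_half_angle = math.atan((width / 2) / depth)
    # Fraction of a Gaussian angular distribution falling inside that cone.
    directional_fraction = math.erf(visible_half_angle / (source_sigma * math.sqrt(2)))
    return nominal * ((1 - isotropic_ratio) * directional_fraction + isotropic_ratio)

# A highly directional source (small sigma) coats the trench bottom almost
# fully; a diffuse source (large sigma) mostly misses it.
print(bottom_thickness(depth=100, width=50, source_sigma=0.05, isotropic_ratio=0.1))
print(bottom_thickness(depth=100, width=50, source_sigma=1.00, isotropic_ratio=0.1))
```

    Setting the isotropic ratio to 1.0 recovers a fully conformal deposition (nominal thickness everywhere), matching the trend from very directional to much more conformal seen in the diagram.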


    The new release also contains two modules addressing planarity. One addresses spin-on processes. These are meant to produce a perfectly planar surface, but in fact some aspects of the underlying topography show through; how much depends on the thickness of the material being spun on and its properties. It can even handle cases where the deposition is thinner than the features in the underlying topography, which can result in exposed structures.

    Another module improves handling of CMP, predicting dishing and overpolish behavior in the presence of very non-planar underlying topography. I still find it amazing that CMP actually works without wrecking what is on the wafer and without completely contaminating everything; after all, it is pretty much grinding the wafer.

    In summary, here are the new features in this release and the prior main April release:

    • Modeling performance enhancements
      • Selective Conformal (Basic) Etch is now multi-threaded
      • Multi-Etch speed enhancement
    • Layout-Aware Rebuild
      • Improves modeling performance in response to design changes
    • New GUI for the tool’s Expeditor feature
      • Makes complex multi-dimensional DOEs easier to set up and execute
    • Crystal Etch moved into the tool’s Process Editor
    • Popular custom Python now in the process library

    More details on the new release are here.




    Winds of Change in the Custom Chip Market

    by Peter Gasperini on 07-18-2014 at 4:00 pm

    The most interesting part of the semiconductor market for me has always been the Custom Chip sector – the FPGA, ASIC and SoC companies where I have spent my entire career. These three segments provide an excellent barometer of the overall state of financial health and technological innovation for the entire High Tech industry, from chips to systems to software.

    All semiconductor markets – communications, computing, consumer, mobile, automotive, industrial, scientific, medical and defense/aerospace – use personalizable solutions from one or more of the above-mentioned Custom Chip segments. Any combination of high growth, high complexity, high software content and short system product life cycles drives the need for customizable chip technology in some form.

    The operational requirements of systems houses for custom logic offerings include rapid TTM, ease of use (both in terms of degrees of freedom in personalization and robust tools support) and low risk (both during and after the system design phase). The combined technical requirements for customizable solutions, though, make it very tough to meet these operational constraints. The technical ‘pieces of the puzzle’ are represented by the diagram below.


    The more of these requirements a given custom chip firm covers, and the deeper its coverage, the stronger its value proposition. How does each of the custom logic segments stack up against these prerequisites?

    ASIC
    For maximum flexibility and 3P (price, performance and power) optimization in hardware, the ASIC solution cannot be bested. ASIC companies provide all the hardware components a system house needs in a custom chip – standard cell & I/O libraries, embedded memory and blocks for higher functionality or standard protocols (either directly or thru third parties) – and wrap them up in tools suites, model abstractions and methodology flows so that a system design team can paint on an open canvas of silicon as it sees fit.

    The segment’s weaknesses, nonetheless, are just as glaring as its strengths are compelling. Low unit prices are more than countered by NRE costs, manpower expenses and painfully long design cycles. The other (and arguably more important) half of complex chip design – the software aspect – is almost entirely ignored by ASIC houses, with the exception of providing minimal API, driver and firmware support for certain embedded standard cores.

    The worst flaw in the ASIC model is its total lack of flexibility once a design is captured in GDSII format. Further alterations are painful and time-consuming, while bugs and flaws discovered after product release can waste tens of millions of dollars in costs and 12-18 months of effort.

    FPGA
    Programmable Logic companies live and die by their embrace of complete flexibility thru logic emulation. They have dominated the bottom right of the technical requirements puzzle for over 30 years.

    Emulation has its drawbacks, as programmable logic has been a notoriously poor solution with respect to the three P’s (though Lattice of late has provided a notable exception with its iCE product lines). The segment’s participants have tried hard to ameliorate this deficiency thru embedding hardwired CPUs, SERDES, PLL/DLL and other standards-based blocks. PLD firms have also made major, sustained efforts to address all of the other puzzle pieces. This infusion of value has allowed FPGA companies to largely preserve their historic 60% gross margins.

    Yet further technical progress and revenue growth are blocked by the very core of the FPGA value proposition – its programmability. For decades, the leading vendors have turned bit-level configurability into an obsessive cult that permeates everything they do. Challenging this technology orthodoxy internally has long been heretical, and both technical & business progress have been largely hamstrung for the last decade as a consequence.

    SoC
    Though many chip companies fancy themselves to be System-on-Chip enterprises, few genuinely have the technical expertise to back that claim. Those that do maintain incredibly rich embedded IP portfolios of both hardware and software. Their solutions consist of chip designs with a smorgasbord of processing elements, accelerators, analog & mixed-signal modules and standard blocks, along with a co-developed software stack including firmware, OS, middleware and even applications code. Personalization of the solution to a system OEM’s application is accomplished thru software, and a toolchain supporting the SoC’s embedded processors is provided as part of the software distribution.

    The founders of companies like Broadcom and Qualcomm had an abundance of both courage and vision, pursuing a level of system expertise that conventional standard product firms of the time thought was simply beyond the reach of chip houses. As a result of their efforts, an amazing amount of value was passed down from systems houses to these SoC firms – a value which is reflected in their balance sheets.

    The SoC companies have to a great extent captured the strengths of their ASIC and FPGA competitors while having none of their faults. There can be no question that the SoC segment is the king of the hill in the customizable chip market. Yet a set of circumstances has arisen which will shake the foundations of the entire sector.

    When one looks back at the last two years of market performance for the semiconductor industry, it is evident that memory prices – DRAM and, to some extent, Flash – have driven most of the revenue growth. The situation was particularly stark in 2013, as the 4.8% increase in chip business was apparently almost entirely due to a persistent DRAM shortage, with logic registering a paltry 0.4% growth for the year. The forecast for 2014 appears to be on track to repeat the experience of 2012 and 2013.

    This weakness in chip sales reflects the listlessness of markets, including the heretofore healthy smartphone and tablet segments of mobile computing – both of which appear to be saturating and are expected to plateau over the next 12 months. Pricing pressure is consequently increasing across the board.

    Combined with market weakness is the unwelcome realization that we may be on the verge of seeing the repeal of Moore’s Law. The negative implications for 3P improvement and value-add thru feature growth by following the process curve into ever deeper geometries are obvious.

    Taken together, these factors will markedly upset the current state of affairs in the custom chip sector, offering an opportunity for companies in each segment to completely overturn the current order. How they might go about doing this, and the strengths and weaknesses of each segment that could help or hinder their attempts to re-invent themselves and adjust to the new reality, is discussed extensively at the Vigil Futuri blog – http://vigilfuturi.blogspot.com.


    eSilicon and IDT Collaborate on Next-generation RapidIO Switches

    by Paul McLellan on 07-18-2014 at 9:01 am

    Earlier in the week, eSilicon and IDT announced a collaboration to accelerate development of next-generation RapidIO switches. These are used to meet the higher performance demands required for new wireless, embedded and computing infrastructures. The two companies will initially work together to develop RapidIO switches operating at 40Gbps per port, based on the RapidIO 10xN specification.

    The switches developed under this program will enable the next generation of wireless base stations, such as cloud-RAN (which I previously wrote about here if you don’t know what it is), LTE-Advanced (LTE-A), and 5G. But they will also find use in emerging architectures such as base stations co-located with high-performance computing (HPC) platforms.

    IDT’s production 20 Gbps-per-port switches are currently the de facto standard for the clustering of DSPs, microprocessors and ASICs in the 3G and 4G base stations already deployed. Existing RapidIO switches from IDT are in virtually every 4G base station in the world. But a new generation of base stations is on the way, requiring higher performance and scalability. Indeed, the new switches will offer not only 40 Gbps performance but also 100ns latency and scalability to 4 billion nodes in a network. Wow, that’s a large network.

    The plan is to combine eSilicon’s experience with 28nm implementation, including development of fast SerDes and custom memories, and complement that with IDT’s expertise in RapidIO design.


    The requirements for RapidIO are largely driven by its need for use in wireless base stations, although it does have applicability in other systems. But base stations used to implement 4G, LTE or WiMAX are a particularly demanding application. They must:

    • Maximize the number of subscribers per antenna array/base station
    • Support more data bits per subscriber in the form of data and video (beyond narrowband voice)
    • Provide real-time data, video and voice aggregating to beyond 1 Mbps per subscriber
    • Minimize power consumption
    • More data per subscriber, up to 100 Mbps
    • More processing per data bit per user by FPGA/ASIC/DSP cluster
    • Higher-speed handoffs between base stations
    • More onerous Orthogonal Frequency-Division Multiple Access (OFDMA) Physical Layer Protocol (PhY)-based processing compared with current 3G platforms

    RapidIO is not proprietary to IDT; it is an open standard developed by the RapidIO Trade Association. Here is the current RapidIO roadmap:

    The eSilicon press release is here. The RapidIO Trade Association website is here.

