
How We Got Here…

by Paul McLellan on 01-22-2013 at 12:54 pm

Over at the GSA Forum website I have an article on the history of the semiconductor industry. It is based on a couple of brief-history-of-semiconductors blog posts (here and here) that I published on SemiWiki last year, edited down a lot and tightened up.

Since the start of the year seems to be the time for predictions, here are the last couple of paragraphs, which are a look to the future. No surprises here for anyone who has been reading my stuff; I'm not as optimistic as some people:

Looking to the future, Moore's law is under pressure. Not from a technical point of view; it is clear that it is possible to go on for many process nodes. But from an economic point of view, it is not clear that the cost to manufacture a million transistors is going to keep coming down.

One major challenge is that for the foreseeable future, multiple masks are needed to manufacture some layers of the chip, pushing up costs. Extreme ultraviolet lithography (EUV) is a possible savior, but there are so many open issues that it probably will not be ready until the end of the decade. Bigger 450mm (18-inch) wafers are another possible driver to bring down costs, but they are also years away.

So it is possible that the exponential cost reduction that has driven electronics for decades is coming to an end. Electronics will still gain capability, but it may not get cheaper and cheaper in the way to which we have become accustomed.

The GSA Forum website is here. My article is here. You can download the entire December 2012 issue of GSA Forum here (pdf).


New PCI Express 3.0 Equalization Requirements

by Eric Esteve on 01-22-2013 at 9:18 am

PCI Express 3.0 increased the supported data rate to 8 Gbps, which effectively doubles the data rate supported by PCI Express 2.0. While the data rate was increased, no improvement was made to the channels. As such, an 8 Gbps channel in PCIe 3.0 experiences significantly more loss than one implemented in PCIe 2.0. To compensate for this increased loss, PCI Express 3.0 specifies enhanced equalization in the PHY with improved TX equalization, improved RX equalization, and equalization training.
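To get a feel for why doubling the data rate hurts so much, here is a toy channel-loss sketch in Python. The coefficients are invented for illustration and are not taken from the PCIe specification's channel budget; the only real facts used are that NRZ signaling puts the Nyquist frequency at half the data rate, and that skin-effect and dielectric losses grow with frequency.

```python
import math

def channel_loss_db(freq_ghz, skin_coeff=2.0, dielectric_coeff=0.8):
    """Toy lossy-channel model: skin-effect loss grows roughly with
    sqrt(f), dielectric loss roughly with f. The coefficients are
    illustrative only, not measured from any real channel."""
    return skin_coeff * math.sqrt(freq_ghz) + dielectric_coeff * freq_ghz

# Nyquist frequency is half the data rate for NRZ signaling
gen2 = channel_loss_db(2.5)  # PCIe 2.0: 5 Gbps -> 2.5 GHz Nyquist
gen3 = channel_loss_db(4.0)  # PCIe 3.0: 8 Gbps -> 4.0 GHz Nyquist
print(f"toy loss: gen2 ~{gen2:.1f} dB, gen3 ~{gen3:.1f} dB")
```

Even in this crude model the 8 Gbps signal sees noticeably more loss at Nyquist, which is the extra loss the enhanced TX/RX equalization has to claw back.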

It is critical that designers who plan to implement PCIe 3.0 understand these equalization changes and their impacts. After attending this webinar from Synopsys, registrants will understand:

  • Why improved levels of equalization are necessary at higher data rates
  • Types of equalization enhancements required for optimal performance at 8 Gbps
  • The difference between decision feedback equalization (DFE) and continuous time linear equalization (CTLE)
  • The need for equalization training and adaptability in PCIe 3.0
  • The importance of proven interoperability between the PHY and the controller

Who should attend: SoC designers and system architects. They may register here… do it quickly, as the webinar will be held on January 24 (this Thursday).

I remember, back in 2008, when Snowbush designers were implementing these advanced equalization techniques (DFE, CTLE), that one of the critical points was the number of "taps" in the equalization strategy: are two taps enough, or should we implement three? I will certainly ask Synopsys this question when attending the webinar! If you don't know the mathematical (or digital signal processing, DSP) principles of equalization and would like to learn, it will take you some brainpower and time; my advice would be to find a good teacher. I was lucky enough 20 years ago to learn equalization while working in a DSP-oriented ASIC design team with DSP experts… it took me several three-hour sessions to start (only start) understanding it! You can see one of the numerous steps in the picture above…
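To make the "taps" question concrete, here is a minimal decision-feedback equalizer sketch (a toy model: the channel coefficients, bit pattern, and tap values are all invented for illustration). Each DFE tap subtracts the inter-symbol interference (ISI) contributed by one previous decision before the slicer, so the number of taps is the number of post-cursors you can cancel:

```python
def dfe_equalize(samples, taps):
    """Minimal N-tap decision-feedback equalizer: subtract the
    post-cursor ISI predicted from previous decisions, then slice
    to +/-1."""
    decisions = []
    for s in samples:
        recent = list(reversed(decisions[-len(taps):]))  # newest first
        isi = sum(t * d for t, d in zip(taps, recent))
        decisions.append(1.0 if s - isi >= 0 else -1.0)
    return decisions

# Toy channel with two post-cursor ISI terms: h = [1.0, 0.3, 0.1]
bits = [1, -1, 1, 1, -1, -1, 1, -1]
h = [1.0, 0.3, 0.1]
rx = [sum(h[k] * (bits[i - k] if i - k >= 0 else 0) for k in range(len(h)))
      for i in range(len(bits))]

# A 2-tap DFE matched to the two post-cursors recovers the bits exactly
print(dfe_equalize(rx, taps=[0.3, 0.1]) == [float(b) for b in bits])  # True
```

In this sketch two taps suffice because the toy channel only has two post-cursors; the real two-versus-three-taps debate is about how long the actual channel's impulse response tail is.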

Coming back to the Synopsys PCIe Gen-3 PHY IP: not only is the design 100% compliant with the PCI Express specification, but the test strategy is also very solid. The test features include:

  • Unique built-in diagnostics enable visibility into link performance
  • Automatic Test Equipment (ATE) test vectors for complete, at-speed production testing
  • Built-in per channel BERTs

    • Flexible fixed and random pattern generation
    • Error counting on patterns or disparity
    • Digital phase or voltage margining (bathtub curves)
  • Built-in per-channel non-destructive scopes that capture eye diagrams and coherently capture periodic signals in situ, without disrupting link operation
  • Additional loopbacks:

    • Serial analog (for wafer probe)
    • Parallel Tx to Rx
  • Supports full analog ATE test on low cost digital tester using only pass/fail JTAG vectors

Presenters:



Rita Horner, Senior Technical Marketing Manager for Analog/Mixed Signal IP, Synopsys
Rita Horner has more than 20 years’ experience in mixed-signal circuit design, interconnect, test, and packaging of high-speed integrated circuits for consumer, computing, and high-end networking ASSP and ASIC products. As a technical and product marketing manager, she has experience in ASSP, ASIC and Fiber Optic products, focusing on High Speed Serial Interconnect. She participated and presented in multiple standards bodies including ANSI T11, IEEE 802.3, OIF, and SFF Multi Sourcing Agreements.


David Rennie, Senior Analog Design Engineer for Mixed-Signal Interface IP, Synopsys
David Rennie is a Senior Analog Design Engineer for Synopsys’ Mixed-Signal IP, developing next-generation high-speed PCIe and Ethernet SerDes technologies. David has authored and co-authored fifteen IEEE conference and journal papers and holds five granted and three pending patents. He has presented at several industry and IEEE conferences, and is an active member in the IEEE.

Eric Esteve from IPNEST


First Time, Every Time

by SStalnaker on 01-21-2013 at 7:10 pm

While this iconic advertising phrase was first used to describe the ink reliability of a ballpoint pen, it perfectly summarizes the average consumer’s attitude toward automobile reliability as well. We don’t really care how it’s done, as long as everything in our car works first time, every time. Even when that includes heated car seats, remote engine controls, power windows and locks, satellite radio, wireless communications, automated traction sensing controls, and the myriad of other electronics-based features present in today’s cars and trucks.

These systems encounter demanding design constructs and operating conditions that can challenge manufacturers’ reliability and quality goals. Given this ever-present conflict between complexity and dependability, it’s no surprise that the automotive electronics industry is constantly looking for ways to enhance the design and verification of electronic components and systems to improve their dependability, operating efficiency, and functional lifespan. I recently looked at some of the challenges they face, specifically as they relate to circuit reliability verification, and some of the new techniques and tools that have been developed to address these needs.

Below is a brief excerpt from a recent article by Dina Medhat that discusses some of these design challenges, and the tools and techniques automotive designers are turning to for solutions.

Circuit Reliability Challenges for the Automotive Industry

In the automotive industry, reliability and high quality are key attributes for electronic automotive systems and controls. It is normal for automotive applications to face high operating voltages and high electric fields between nets, which can lead to oxide breakdown. Moreover, electric fields can influence sensitive areas on the chip, because high-power areas are commonly located next to logic areas. Consequently, designers must deal with metal spacing design rules that depend on voltage drop. Trying to implement such rules throughout the design flow, from layout routing through design rule checking (DRC), is too conservative, as well as inefficient, due to the lack of voltage information on nets (in both schematic and layout). Trying to achieve this goal with traditional exhaustive dynamic simulation is simply not practical, due to the turnaround time involved, and, if the design is very large, it may not even be possible to simulate it in its entirety.

New circuit reliability verification tools such as Calibre® PERC™ provide a voltage propagation functionality that can help perform voltage-dependent layout checks very efficiently while also delivering rapid turnaround, even on full-chip designs. In addition, they provide designers with unified access to all the types of design data (physical, logical, electrical) in a single environment to enable the evaluation of topological constraints within the context of physical requirements.
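To illustrate the idea (this is a sketch of the concept described above, not the Calibre PERC interface; all net names and rule numbers are invented), static voltage propagation plus a voltage-dependent spacing check might look like this:

```python
def propagate_voltages(supplies, connections):
    """Crude static voltage propagation: push known supply voltages
    across direct connections until nothing changes. Real tools
    handle devices, drops, and ranges; this is a stand-in."""
    voltages = dict(supplies)
    changed = True
    while changed:
        changed = False
        for a, b in connections:
            for src, dst in ((a, b), (b, a)):
                if src in voltages and dst not in voltages:
                    voltages[dst] = voltages[src]
                    changed = True
    return voltages

def required_spacing_um(delta_v, base=0.1, per_volt=0.02):
    """Toy voltage-dependent spacing rule: base spacing plus an
    extra margin per volt of difference between the two nets."""
    return base + per_volt * abs(delta_v)

# Hypothetical nets: a 12 V supply net routed next to a ground net
supplies = {"VDD_HV": 12.0, "VSS": 0.0}
connections = [("VDD_HV", "net_a"), ("VSS", "net_b")]
v = propagate_voltages(supplies, connections)
print(round(required_spacing_um(v["net_a"] - v["net_b"]), 2))  # 0.34
```

The point is that once voltages are known per net, the spacing requirement between any two shapes follows directly, which is what makes full-chip voltage-aware checking tractable without exhaustive simulation.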

For a more detailed explanation of these issues, along with some examples of voltage-dependent design rule checking, negative voltage checking, and reverse current processing, read the entire article.


Double Patterning for IC Design, Extraction and Signoff

by Daniel Payne on 01-21-2013 at 3:27 pm

TSMC and Synopsys hosted a webinar in December on the topic of double patterning and how it impacts the IC extraction flow. The 20nm process node has IC layout geometries so closely spaced that traditional optical lithography cannot resolve them in a single exposure; instead, lower layers like poly and Metal 1 require a new approach using two masks per layer. Using Extreme UltraViolet (EUV) lithography, with its shorter wavelength, could get us back to one mask per layout layer; however, EUV is not ready for production use quite yet.
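Splitting a layer onto two masks is essentially a graph two-coloring problem: polygons closer than the single-exposure spacing limit must go on different masks, and a conflict loop with an odd number of polygons cannot be decomposed at all. A small sketch (polygon names invented; this is the textbook algorithm, not any vendor's decomposition engine):

```python
from collections import deque

def assign_masks(polygons, conflicts):
    """Two-color (mask A = 0 / mask B = 1) assignment for double
    patterning. `conflicts` are pairs of polygons too close for a
    single exposure. Returns None on an odd cycle (DP violation)."""
    adj = {p: [] for p in polygons}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = {}
    for start in polygons:
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return None  # odd conflict cycle: not decomposable
    return color

# A chain of conflicts is fine; a triangle of conflicts is not
print(assign_masks(["p1", "p2", "p3"], [("p1", "p2"), ("p2", "p3")]))
print(assign_masks(["p1", "p2", "p3"],
                   [("p1", "p2"), ("p2", "p3"), ("p1", "p3")]))  # None
```

This is also why double patterning reaches into extraction: the same net's wires end up on two masks with different overlay and bias, so parasitics depend on the coloring.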

Continue reading “Double Patterning for IC Design, Extraction and Signoff”


FD-SOI is Worth More Than Two Cores

by Paul McLellan on 01-20-2013 at 10:00 pm

This is the second blog entry about an ST Ericsson white-paper on multiprocessors in mobile. The first part was here.

The first part of the white-paper basically shows that for mobile the optimal number of cores is two. It is much better to use process technology (and good EDA) to run the processor at higher frequency rather than add additional cores.

ST Ericsson, as its name suggests, is partially owned by ST (although they are trying to get rid of it, as is Ericsson. Dan Nenni has the same problem with his kids: they just hang around and cost a lot of money). Unlike everyone else, who has decided the future is FinFET, ST has decided to go with FD-SOI (Chenming Hu was better at naming transistors than whoever came up with that mouthful). FD stands for fully-depleted, meaning that the channel area under the gate is not doped at all. SOI stands for silicon-on-insulator, since the transistors are built on top of a buried insulator layer. One of the advantages that FD-SOI has over FinFET is that it is essentially a planar process, very similar to all existing processes, and so can be built using very mature technology with fewer process steps than "other" processes (by which I read FinFET).


Compared to a bulk transistor, the advantages of FD-SOI are that it is faster and lower power. In a given technology node, the channel is shorter and fully depleted, both of which result in up to 35% higher operating frequency for the same power at high voltage, and up to 100% faster at low voltage. The fully depleted channel removes drain-induced parasitic effects, gives better confinement of carriers from source to drain, allows a thicker gate dielectric that reduces leakage, and so on. The result is power reductions of 35% at high performance and 50% at low operating points.

So for a processor, obviously, it can be run at a higher frequency for the same voltage/power or run at the same frequency without consuming so much power. The maximum achievable frequency is higher. And the process can operate at lower voltages with reasonably high frequencies (such as 1GHz at 0.65V).
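The voltage lever is powerful because first-order CMOS dynamic power goes as P ≈ C·V²·f. A quick sketch with illustrative numbers (the 0.65 V / 1 GHz point is from the text; the capacitance and the 1.0 V reference are my assumptions):

```python
def dynamic_power(c, v, f):
    """First-order CMOS dynamic (switching) power: P = C * V^2 * f.
    Leakage is ignored; units are arbitrary, used only for ratios."""
    return c * v * v * f

# Compare an assumed 1.0 V nominal operating point against the
# 0.65 V / 1 GHz low-voltage point mentioned for FD-SOI.
nominal = dynamic_power(1.0, 1.0, 1.0e9)
low_v = dynamic_power(1.0, 0.65, 1.0e9)
print(f"power at 0.65 V is {low_v / nominal:.0%} of nominal")  # 42%
```

The quadratic V term is why running the same frequency at a lower voltage, as FD-SOI permits, cuts power so sharply.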

This 35% increase in efficiency at high frequencies is more than enough for FD-SOI dual processors to outperform slower bulk quad processors, given the limited software scalability. It can also obviate the need to optimize power by using heterogeneous big.LITTLE-type architectures, which require complex hardware and sophisticated control mechanisms that are not yet mature.

ST Ericsson was an early adopter of dual processors but has resisted moving to quad core for all the reasons in these papers. However, this is probably academic. ST Ericsson has struggled to find major customers (partially because Apple and Samsung take most of the market and roll their own chips, and its biggest customer was Nokia, which switched to Microsoft Windows Phone, which only runs on Qualcomm chips). ST and Ericsson have both announced that they want to "explore strategic options" for STE (aka find a buyer), but the company may end up simply being shut down. The group that I worked with at VLSI Technology in the 1990s ended up inside NXP and was then folded into ST Ericsson. I notice on their blog entries the names of some people who used to work for me.

Download the white-paper here.


Wall Street Does NOT Know Semiconductors!

by Daniel Nenni on 01-20-2013 at 6:00 pm

In my never-ending quest to promote the fabless semiconductor ecosystem, I cannot pass up a discouraging word about one of the oldest financial services companies. You can consult with me for $300 per hour to answer your questions about the semiconductor industry on the phone, or you can buy me lunch and get it in person (lunch will probably cost you more). The people who hire me are usually financial types (hedge fund managers, etc.), but I also get called by semiconductor companies for market strategies and such. SoCs are a popular topic now, and things get busy when quarterly results come in for TSMC, Intel, and the fabless guys in the mobile market segment. The fun part is taking apart analyst reports, like the recent one from Morgan Stanley about TSMC.

Since its founding in 1935, Morgan Stanley and its people have helped redefine the meaning of financial services. The firm has continually broken new ground in advising our clients on strategic transactions, in pioneering the global expansion of finance and capital markets, and in providing new opportunities for individual and institutional investors. Click below to see a timeline of Morgan Stanley’s growth, which parallels the history of modern finance.

To start, look at the Morgan Stanley Wikipedia page which lists controversies and lawsuits with fines in the hundreds of millions of dollars.

TSMC released Q4 2012 numbers during last week's conference call. You can read the Seeking Alpha transcript HERE. I'm a big fan of the Seeking Alpha transcripts; reading is much better than listening. Unfortunately, the Seeking Alpha analysts don't know semiconductors either, but more on that later.

Per Morgan Stanley:

  1. 28nm is surprising on the upside (DUH)
  2. 28nm margins above expectations (DUH)
  3. 20nm node to be bigger than 28nm (WRONG)
  4. TSMC 6.5% Q1 revenue drop (WRONG)

Disclaimer: This information came from a phone call so it may not be 100% accurate, but it has been repeated by other analysts, so these are valid discussion points.

Morgan Stanley and others were surprised at the TSMC Q4 financial numbers, which they should not have been. As I blogged before, 28nm will be the most successful node we have seen in a long time (in all regards) and TSMC owns it, thus the high margins. To be fair, TSMC warned that Q4 could be soft, but I blogged otherwise. TSMC is a conservative company and can certainly play the Wall Street game. On the other hand, I would rather see ACCURATE forecasts from analysts so we can do business without shortages, layoffs, and the other things that go along with bad business decisions.

In what way will 20nm be bigger than 28nm? Compare the value proposition of 28nm over 40nm versus 20nm over 28nm with regard to price, performance, and power consumption. The value proposition of 20nm is less than half that of 28nm, meaning some companies will do limited tape-outs at 20nm, and some are even skipping 20nm completely in favor of 14nm FinFETs, which should ramp shortly after 20nm. Samsung, GLOBALFOUNDRIES, and TSMC will all use gate-last HKMG and have 20nm production simultaneously, so heated competition will be a factor. TSMC has the advantage of being on its second generation of gate-last HKMG experience, since Samsung and GLOBALFOUNDRIES used gate-first at 28nm, which did not yield as well. Samsung has the "brute force" 20nm advantage with what seems like unlimited resources and fab capacity. Samsung is also an IDM with internal SoC/VLSI design experience. GLOBALFOUNDRIES has the advantage of being the designated second-source foundry for companies like Qualcomm and other big fabless companies that see Samsung as a ruthless competitor. The GLOBALFOUNDRIES New York fab is 20nm, so customers can keep their IP protected under American law.

Bottom line: 20nm is a completely different game than 28nm, so any comparison will be much more complicated; be very careful who you listen to, in my opinion.

Q1 will be like Q4, underestimated, but that is all part of the Wall Street game. There is no stopping the mobile market, 28nm owns mobile, TSMC owns 28nm, simple as that. Seeking Alpha is still perpetuating the Apple at TSMC 28nm misinformation and, in general, I'm not impressed at all with their semiconductor coverage. If you read them, do a quick author lookup on LinkedIn. If they don't have a profile, there is probably a good reason for it. If they do, look for at least some semiconductor experience before investing your hard-earned money their way.

Also read: TSMC Apple Rumors Debunked!


Mobile SoCs: Two Cores are Better Than Four?
by Paul McLellan on 01-20-2013 at 8:00 am

I came across an interesting white-paper from ST Ericsson on two topics: multi-processors in mobile platforms and FD-SOI. FD-SOI is the ST Microelectronics alternative to FinFETs for 20nm and below; it stands for Fully-Depleted Silicon-on-Insulator. But I'm going to save that part of the white-paper for another blog entry and focus on the mobile stuff here. By the end of this blog entry you will see why these two seemingly disconnected topics are in a single white-paper.

Multi-core was originally driven by Intel's realization, in the PC market, that the power density of its processors was going to surpass that of the core of a nuclear reactor if it just kept upping the clock rate. Instead, increased computing power would have to be delivered not by a single core with more performance but by multiple cores on the same die.

Initially this worked well, since it was not hard to make good use of dual cores: one doing compute-intensive stuff and one staying responsive. But more cores turned out to be really hard to make use of (not a surprise to software types like me, since making a big computer out of lots of little cheap ones has been a major area of research for many decades). Ten years later, most software, including games, office productivity, multimedia playback, web browsing, etc., only makes use of two cores. Just a few applications, such as image and video processing, have easily partitioned algorithms that can make use of almost arbitrary numbers of cores.
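This limited scalability is Amdahl's law in action: if only a fraction of the work parallelizes, extra cores quickly stop helping. A quick sketch (the 50% parallel fraction is an illustrative assumption, not a measured number):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be spread across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If only half the workload parallelizes, quad core buys little
# over dual core, and eight cores barely more than four:
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(0.5, n), 2))
```

With a 50% parallel fraction the speedup is capped at 2x no matter how many cores you add, which matches the experience that dual core is the sweet spot for most software.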

In mobile, the multi-core change has come faster, since the same power considerations are more urgent. The cliff was not that people couldn't have a nuclear reactor in their pocket, but that pushing the clock rate too high killed battery life, so multi-core was a way of getting performance back up again.


But, as a close reading of the above graphs shows, single-core performance isn't, in fact, saturated at all; there is still strong acceleration. This is totally different from PCs, which clearly saturated very early. But, as for the PC, software only scales with single-processor performance, and less than proportionally (or, depressingly, not at all) with multiprocessors.

So why have mobile platforms jumped into multi-core well before reaching saturation on the first core? One reason is that it could piggy-back on existing experience: it was already known that dual cores could be exploited well by operating systems, and the fundamental technologies such as cache coherence (and how to build them in silicon) were well understood. The other is aggressive marketing ("mine's bigger than yours").

What is unclear is why the major platforms have gone to 4 cores (or 8, the way Samsung counts), since the PC experience already shows that more than two processors is useless for most things (and people aren't running Photoshop on their phones). It turns out that there are no strong technical reasons; it seems to be entirely down to marketing, since the number of processors is used for differentiation, even for the end-user (who, generally, wouldn't know what a core was if you asked them: is it something to do with the center of the apple?).


For web browsing, especially with HTML5 rich content, processing speed is often the critical path rather than network bandwidth (my LTE iPad downloads faster than my Mac iBook, since LTE is faster than my wireless router). You can see from the above picture that going from single to dual core gets a 30% speedup, but going from dual to quad gets only 0-11%.

On the other hand, frequency improvements always pay off: a 1.4GHz dual core beats a 1.2GHz quad core. In all the experiments STE has done, the results are the same: no benefit in going from dual core to quad core, and a 15-20% faster dual core always beats a slower quad core.

What this means is that increasing the frequency of the processor at constant power is more important than adding cores. This is where the FD-SOI part comes into play: using process technology to get the frequency up 15% is better than adding cores beyond two.
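The dual-versus-quad observation can be reproduced with simple Amdahl-style arithmetic: multiply the clock rate by the achievable parallel speedup. The ~30% parallel fraction below is my assumption for an interactive workload, not an STE figure:

```python
def effective_perf(freq_ghz, cores, parallel_fraction=0.3):
    """Toy model: effective throughput = clock rate x Amdahl speedup
    for a workload where only part of the work parallelizes."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return freq_ghz * speedup

dual = effective_perf(1.4, 2)  # 1.4 GHz dual core
quad = effective_perf(1.2, 4)  # 1.2 GHz quad core
print(f"dual {dual:.2f} vs quad {quad:.2f} -> dual wins: {dual > quad}")
```

Under this assumption the faster dual core comes out ahead, which is the same shape of result as STE's experiments: clock rate buys performance on the serial part of the work, where extra cores cannot.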

Download the white-paper here.

Also read: FD-SOI is Worth More Than Two Cores



What did CES 2013 mean for #SemiEDA?
by Don Dingee on 01-18-2013 at 4:55 pm

CES is the preeminent gadget show, and in the LVCC South Hall a wave has been building for some time. It’s now the place where chipsets are introduced, and this year saw a wide range of introductions from Atmel, Bosch, Broadcom, Intel (OK, they’re still in Central Hall), InvenSense, Marvell, NVIDIA, Qualcomm, Samsung, ST-Ericsson, Tensilica, TI, ViXS, and more.

Continue reading “What did CES 2013 mean for #SemiEDA?”


Yawn… New EDA Leader Results Are Coming
by Randy Smith on 01-18-2013 at 4:00 pm

We will soon start to see the quarterly financial reporting installments of the “Big 3” public EDA companies. I predict they will be as boring as usual. I am not sure I would want it any differently, though.

Back in the 90s there were times when it was truly interesting to wait and see what Cadence, Mentor, or later Synopsys, might announce. I still have the brass-plated letter opener which Cadence gave to every employee in September 1990 when Cadence moved to the NYSE. Heck, I was even excited to see the SVR (Silicon Valley Research, aka Silvar-Lisco) announcements. It was exciting to follow the industry then.

So what changed? The biggest change in the past 20 years is the EDA revenue model. Instead of primarily selling perpetual licenses and annual maintenance, EDA vendors sell time-based licenses (TBLs) with bundled maintenance. For customers, this somewhat lessened the gamesmanship over which enhancements they got for free and what constituted a new product for which they would have to pay more money. But the financial community loved it, since it forced a smoothing out of revenue. A 3-year license is now recognized ‘ratably’: 1/36 of the purchase price is counted as revenue each month. Under the old perpetual licenses, all of the revenue was recognized with the booking.

The effect of this change is fairly dramatic. If all revenue came in on 3-year TBLs, then more than 90% of each quarter’s revenue would already be made on the first day of the quarter. This is why all the analysts’ estimates are so close to each other, and to the announced number. How hard can it be?
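The ratable mechanics are simple enough to sketch. The contract below is hypothetical (a $3.6M, 3-year TBL invented for illustration); the 1/36-per-month rule is the one described above:

```python
def ratable_revenue(price, term_months, months_elapsed):
    """Ratable recognition: 1/term of the contract value is counted
    as revenue each month until the term is fully recognized."""
    return price * min(months_elapsed, term_months) / term_months

# A hypothetical $3.6M, 3-year time-based license booked in month 0:
print(ratable_revenue(3_600_000, 36, 3))   # revenue after one quarter
print(ratable_revenue(3_600_000, 36, 36))  # fully recognized at term end
```

Contrast with the old perpetual model, where the full $3.6M would have hit revenue in the booking quarter; the ratable model is what makes each quarter's number nearly known in advance.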

Table 1. Analyst Estimates for Q4-CY-2012

How much do the analysts like predictability? Well, it was rumored that Cadence’s Executive VP of Sales Mike Schuh (now a General Partner at Foundation Capital), who led Cadence’s sales team during its most explosive growth period, was forced out of Cadence after making his numbers yet again, because a high percentage of the deals that actually closed that quarter were not in the forecast. So, despite always driving sales higher, Mike was seen as too unpredictable. [As an aside, I learned a tremendous amount watching Mike in sales situations; he was a brilliant salesman.]

So where has this predictability gotten the big EDA companies? Well, first, they didn’t really make a quick switch to this model. Some of them cheated in the eyes of the financial community. They sold their deals to banks so they could pull all of the revenue forward. They did “respins” of deals before they would expire, making it more difficult to properly estimate their effect on revenue over time. But over the past few years it has all seemingly been cleaned up. So thanks to Lip-Bu, Aart, and Wally for giving us credibility with Wall Street. I am still waiting for the other effect that should have happened…

With a much more predictable revenue stream, I would have expected it also to be easier to plan an R&D investment stream. I expected that the Big 3 would finally come out with homegrown products that could redefine markets and upset the balance of power. That has not happened. Despite the declining number of EDA start-ups, we are still not seeing any ground-shaking new products from the Big 3 EDA vendors. More about that in a later blog.

There is yet another way this model might change. We might see more moves into the EDA space from adjacent spaces. Recently, not all EDA acquirers have been the traditional EDA vendors. That might shake things up a bit, too.

Note: I run an executive consulting business. A list of my past and present clients can be found on my website, www.randysan.com.


Oasys Has a New CEO
by Paul McLellan on 01-18-2013 at 2:21 pm

Scott Seaton is the new CEO of Oasys Design Systems. Paul van Besouw, the CEO since the company’s founding, becomes the CTO. I met Scott last year when I was doing some consulting work for Carbon Design, where he was VP of sales (the new VP of sales at Carbon is Hal Conklin, by the way).

I talked to Scott about why he had joined Oasys. After all, when you take on a CEO role (or any senior position) in an early-stage company, the most important thing is whether you think that the company is going to be successful.

He believes that, with RealTime Designer and RealTime Explorer, Oasys has breakthrough technology. When he talks to customers, they tell him that they have reduced their time to market by an average of 2 months compared to their previous methodology.

RTL designers need physical feedback. If they don’t use RealTime, the only way to get it is to run the full implementation flow, which typically takes 2-3 days. Oasys is at least 10 times faster than this. Further, since there is a single tool with a unified database, it is easy to highlight problems (such as timing constraint violations) and tie them back to the correct line of RTL. One customer told Scott that for the last timing problem they tried to correct, it took them 6 hours just to find the correct bit of RTL. Cross-probing in RealTime takes a minute.

Oasys recently introduced RealTime Explorer to address this issue, whereby RTL designers either don’t get any physical feedback (in which case their timing numbers are pretty much meaningless) or find it extremely cumbersome to get the data. This is especially critical for geographically remote RTL development groups, who may not have any access to the physical tools or, indeed, the expertise to run them if they did. Up to now, other vendors have offered either estimation tools or synthesis engines de-tuned to gain speed at the expense of quality-of-results (QoR), and thus correlation. Customers tell Scott that neither of these approaches addresses their needs. By definition, RTL engineers who are trying to push the boundaries of performance don’t have much margin for error. And at advanced nodes like 28nm, everyone is trying to push the boundaries of performance; otherwise, why bother?

Ultimately, none of this matters unless customers are deriving real value. On every metric, Oasys is gaining momentum. Sales were up 114% in 2012 over 2011, and they finished Q4 cash-flow positive. Customers include 5 of the top 10 semiconductor companies, many using RealTime for 28nm tapeouts and some qualifying RealTime for production use at the 20nm and 14nm nodes. All of them want to use RealTime for both RTL exploration and implementation.