
The Semiconductor Landscape – II

by Pawan Fangaria on 01-01-2013 at 9:15 pm

It has been a year since my article Semiconductor Landscape in Jan 2012, so I wanted to look back at the major events of the year and then anticipate what’s in store going forward. What happened over the year is much more than I could foresee. There was major consolidation in the EDA space – Synopsys acquired Magma, SpringSoft, Ciranova, and EVE – and in the semiconductor space – IBM acquired Texas Memory Systems, GlobalFoundries became independent of AMD, Micron is set to acquire Elpida, and there were others. The point is that these consolidations are along expected lines. One thing we have not yet seen is any indication of Qualcomm or Apple having their own foundries, although they could afford one profitably, if not for others’ designs, at least for their own needs. We may need to wait longer to hear on that.

My tacit understanding in that article was, and still is, that more consolidation will happen in the coming years. The main drivers of consolidation are the macroeconomic situation, business leadership, technology leadership, and IP leadership. Let’s examine the scenario from each of these perspectives and decipher what’s likely to happen –

Macroeconomics – In 2012 there was a 3% decline in semiconductor revenue, to $298B from $307B in 2011, as published by Gartner. And considering the unforeseen catastrophes of the fiscal cliff, the EU crisis, and the slowdown in the BRICS economies, I am not hopeful of any real economic growth in the next few years; rather, contraction is possible. The future is uncertain. If some corporations in the US, barring a few like Qualcomm and Apple, are well capitalized, that is due to the government and the Fed pumping money into the system. In such a situation the stronger hands will take hold of the weaker hands, and thereby consolidation will happen. Another aspect is meagre operating profit margins, which, in order to improve the bottom line, will eventually push routine work onto robots and automation. Management tools will evolve for general bookkeeping of manpower to reduce management overhead. Jobs will be measured by tools and paid accordingly, automatically.

Business Leadership – Here I would like to take a few examples. The first that comes to mind is the memory business, which is faltering: low profit, high volume. We have seen the fate of Elpida; Micron, a strong leader in this space, is coming to its rescue. Second, Freescale Semiconductor is not doing well and could be acquired, either in parts or in whole. Although it is narrowing its losses, a challenging future may prompt it to sell some of its lucrative portfolio in RF products and microcontrollers, which Qualcomm and TI would happily adopt. Another indication in business leadership is that while the majority of semiconductor vendors (e.g. Samsung, Toshiba, TI) saw declines in revenue, Qualcomm and Broadcom grew theirs. That’s a clear indication that while the strong will emerge stronger, the weak will be subdued.

Technology Leadership – This is an interesting area where technology leaders are always in a dilemma over what kind of services to outsource. When the technology becomes too complex, as in the case of sub-20nm processes, close collaboration between technology leaders (foundry, EDA, design) becomes necessary. Clear evidence is Apple building its own chip design team. That is partly about protecting IP, but technology is the driver of better efficiency. Eventually technology drives business, and hence small, mid-size, and weak players will either close or coalesce with strong technology leaders.

IP Leadership – This is a niche space where an IP owner can stay as long as it wishes, or as long as it can survive. That’s the reason it occupies a separate, unique space. ARM has established itself as a large IP leader and will continue to be one. New IP leaders will keep emerging and merging with other IP or technology leaders at will.

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email: Pawan_fangaria@yahoo.com


Wafer Costs: Out of Control or Not?

by Paul McLellan on 01-01-2013 at 8:30 pm

I didn’t attend the International Electron Devices Meeting (IEDM) earlier this month, but there have been a lot of reports on the interwebs, especially about 20nm and 14nm processes. Some of this is really geeky stuff, but I think the most interesting thing I’ve read is summarized in this chart:

This shows the wafer costs (12″/300mm wafers) for 28nm, 20nm and then 14nm with multiple patterning and, in purple, 14nm with EUV lithography. The chart comes from Luc van den Hove, chief executive of IMEC in Belgium.

These are raw wafer costs and thus haven’t been adjusted for the increase in transistor (and perhaps interconnect) density. Typically when we transition from one process node to the next, the wafer costs go up a little bit but that is completely dominated by the increase in how much we can put on a given sized die, and so the cost per transistor drops substantially. This is the economic driver of Moore’s law and is what makes it possible to have a $500 iPhone deliver more computer power than a 1980s mainframe that cost millions of dollars.

But these costs are going up dramatically. The Y-axis scale doesn’t start at zero, so the picture is a bit misleading: 14nm costs are not three times 28nm, but they are nearly twice as high. If the process truly scaled everything, then the density of transistors at 14nm would be four times that at 28nm, so cost per transistor would still be falling fast. But increasingly the transistor length is only the headline number for the process, and the interconnect is shrinking much more slowly, if at all. When you look at the pitches for the various layers in a modern process, it is impossible to see anything close to 2X the headline number.

So the key question is whether 14nm will have an economic driver, or just a technology driver for those few designs that can truly take advantage of the increased density and decreased power even though there may be a cost penalty. For Apple’s iPhone and Samsung’s Android phones, probably. For those $50 smartphones for developing countries, that won’t work.

Despite the purple bar looking optimistic, the received wisdom is that EUV is now too late for 14nm, so we will have to have a lot of double and triple patterning instead (which is one of the things that drives the cost up so much). EUV works in the sense that you can flash some wafers, but the current state of the art seems to be about 20 wafers/hour versus the 100 or 200/hour required to make the approach viable. The intensity of the light source (droplets of tin zapped with a huge laser) is too low, the mirrors (which aren’t really mirrors in the usual sense) absorb too much of the light, and there are too many reflections required between the source and the photoresist. Not much energy makes it to the resist to make the exposure.

On a more optimistic note, Intel claimed that their costs per transistor are falling with each process node. Apparently they also don’t use double patterning at 22nm, and there are two reasons for this. Firstly, they can have design rules as restrictive as they like, since they are the ultimate IDM with a limited product range. Secondly, most of the pitches at 22nm are not much different from 28nm. As I said above, only the FinFET transistor is 20nm or 22nm long.

Anyway, 2013 will be the year we find out what 20nm and 14nm can really deliver as these processes start to ramp up. As Yogi Berra said, “the future ain’t what it used to be.” (Although you have to be careful with Yogi Berra quotes. As he also (maybe) said, “I didn’t say all the things I said.”)


Happy New Year from SemiWiki!

by Daniel Nenni on 01-01-2013 at 7:05 pm

It was an amazing year for SemiWiki and I would like to sincerely thank all who participated. SemiWiki traffic doubled again which is amazing in itself. SemiWiki membership more than tripled as we continue to add vertical markets (EDA, IP, Services, Foundry). More people are blogging on SemiWiki and the Forums and Wikis are coming to life. With millions of page views I can comfortably say that social media is now an integral part of the fabless semiconductor ecosystem.

What’s in store for SemiWiki in the new year? We will continue to grow with more original content, another vertical market (FPGA), and increased exposure through new partnerships with mainstream news carriers. Today more than 450,000 people read SemiWiki, in 2013 that number will exceed 1,000,000 people, believe it.

A significant challenge we continue to face as an industry, and I do not see this changing anytime soon, is the lack of understanding of what social media is really capable of. Seriously, most of the people in our industry simply do not get it. Admittedly, I probably would not get social media either if not for my four children as they have pushed me along. As a result I have spent a considerable amount of time on SemiWiki, LinkedIn, Twitter, FaceBook, and Google+ looking at the analytics and calculating the return on investment (ROI) for different activities.

As a career salesperson I can tell you social media is all about ROI. The most limiting factor in the sales equation is the time spent educating customers on your technology and the resulting value proposition. If you call on an account that has never heard of your product or company, the sales cycle can be very expensive. Back in the day we made cold calls and calculated the ROI: you would have to call on 100 people to get 5 qualified meetings, an expense that today we as an industry cannot afford.

On the other hand, when a potential customer initiates contact (out of the blue) we call that a Bluebird, as in a sale just flew in the window which is a rare occurrence. Bottom line, social media is all about creating Bluebirds and compacting the sales cycle as much as possible.

White papers are a staple in our industry, with webinars and live seminars a close second and third. They are cost-effective customer communication channels and invitations to collaborate, but also very important sales tools.

SemiWiki bloggers are industry professionals with 25+ years of experience. By day we work inside the fabless semiconductor ecosystem, by night we blog our opinions, observations, and experiences. In doing so, we drive qualified traffic to your whitepapers, webinars and seminars. This is a collaborative real time feedback loop and the ROI is easily documented. Work with us and Bluebirds will fly through your windows, absolutely.

Happy New Year and best of luck to you in 2013! It would be a pleasure to work with you, just fly through our window and we will take care of the rest.


Cadence 3D Methodology

by Paul McLellan on 12-28-2012 at 8:20 pm

A couple of weeks ago was the 3D Architectures for Semiconductor Integration and Packaging conference in Redwood City. Cadence presented the changes they have been making to their tool flow to enable 2.5D (interposer-based) and true 3D TSV-based designs. You know what TSV stands for by now, right? Through-silicon via: a (usually) copper plug that carries signal, power, or clock from the front side of the chip, through the thinned wafer, to the back, where it can contact micro-balls on the die or interposer below.


The first people to do 3D designs managed to do it with tools that were unchanged, adding scripts and other modifications to fool the 2D tools into doing their bidding. After all, each die is a 2D design and can, in some sense, be done independently.

One of the biggest changes comes at the front of the design flow. Even assuming the partitioning between the different die has already been done (often because they use different processes: memory or analog on one die, logic on another), there is still a lot that can be done to optimize the 3D structure. Many of the TSVs will end up being used to distribute clock or power, with decoupling capacitors on the interposer. Too many TSVs drives up the cost (and wastes area); too few risks reliability failures or intermittent errors.


Since each die is largely designed independently of the others once these decisions have been made, the other big area that needs additional capability is verification. In particular, making sure that all the TSVs on the various die and interposers involved all line up, carry the correct signals and so forth. Not to mention verifying the power delivery network and the clock network behave as planned.


The final big area of difference is manufacturing test. Once the entire design is packaged up, there is only access to the pins on the package. These only go to the interposer (in a 2.5D design) or the lowest die (in a 3D design); there is no direct access to the die above. So additional TSVs are required to build a test “elevator” that gets the scan patterns in through the pins and up to the level where they are used. Wafer sort, the forgotten child of manufacturing test, is also much more important in a 3D design due to the “known good die” problem. If a faulty die slips through wafer sort and gets packaged, not only is that die discarded (it was bad anyway) but the other die and interposers (which are most likely good) get discarded too. The cost of a bad die is thus much higher than in a normal 2D design.
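The known-good-die economics can be put in numbers. A minimal sketch, where the `stack_yield` helper and the per-die yields are invented for illustration:

```python
# Why "known good die" matters: stacking multiplies yield losses.
# All yields here are assumed figures for the example.

def stack_yield(die_yields, assembly_yield=0.99):
    """Probability an assembled stack works, given post-sort die yields."""
    y = assembly_yield
    for dy in die_yields:
        y *= dy          # one bad die scraps the whole stack
    return y

# 2D case: one die at 98% effective yield after wafer sort.
print(f"single die:  {stack_yield([0.98]):.3f}")
# 3D case: four such die stacked on an interposer - losses compound,
# and every failure also throws away the good die in the stack.
print(f"4-die stack: {stack_yield([0.98] * 4):.3f}")
```

Tightening wafer sort raises the per-die figure toward 1.0, which is exactly why sort matters so much more in 3D than in 2D.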

Of course there are many other issues in 3D other than EDA. It requires a whole new ecosystem and the details of who does what haven’t even all been ironed out yet. TSVs need to be created, wafers need to be thinned, microballs need to be placed, die need to be bonded together, packaged, bonded out, tested. And when all this is done, it needs to be economic to do it for volume production, not just an elegant price-is-no-object technical solution.


Intel 22nm SoC Process Exposed!

by Daniel Nenni on 12-27-2012 at 9:00 pm

The biggest surprise embedded in the Intel 22nm SoC disclosure is that they still do NOT use Double Patterning, which is a big fat hairy deal if you are serious about the SoC foundry business. The other NOT-so-surprising thing I noticed in reviewing the blogosphere response is that the industry term FinFET was dominant while the Intel-invented term Tri-Gate was rarely used.

The transistor pitch – essentially the distance between two transistors – in the 22nm tri-gate technology is 80nm, which is the smallest pitch that can be produced using single-pattern lithography, Bohr says. “The next generation, 14,” he said, “we’re going to have to convert to Double Patterning to get tighter pitches.”

Mark Bohr is the infamous Intel Senior Fellow who mistakenly predicted the doom of the fabless semiconductor ecosystem. Mark is a funny guy. I remember him putting up an incomplete 22nm defect density trend slide at this year’s Intel Developers Forum and saying “Was it a mistake that I left the numbers out? Yes! Oh my goodness, how could I have done that? But, gee, time is up, so … ”

TSMC, on the other hand, presents their process defect density numbers every year at the TSMC Tech Symposium. Transparency equals trust in the foundry business, believe it.

Back to Double Patterning: I will defer to the experts at Mentor for a complete description. Please see the Double Patterning Exposed articles for technical detail. No registration is required, just click on over.

So the question is: why does TSMC use the extra lithography steps of Double Patterning at 20nm while Intel does not at 22nm? The answer is Restrictive Design Rules, which essentially eliminate any variability in the orientation of shapes on critical layers. Intel is very comfortable with incredibly restrictive design rules since it is a microprocessor manufacturer and not a pure-play foundry; Intel can micromanage every aspect of design and manufacturing down to the electron. TSMC, on the other hand, needs to accommodate different design requirements and intellectual property from 615 customers.

In addition to more flexible metal routing, Double Patterning also enables a tighter metal pitch, which will put TSMC 16nm head-to-head with Intel 14nm even though, as I explained in 16nm FinFET Versus 20nm Planar, 16nm FF leverages 20nm process technology.
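For intuition on why restrictive rules can sidestep the problem: double patterning is, at its core, a two-coloring of a conflict graph, where features spaced below the single-exposure limit must land on different masks. A toy sketch, with the `decompose` helper and feature names invented for illustration (real decomposers work on actual layout geometry):

```python
# Double patterning in miniature: features whose spacing is below the
# single-exposure limit ("conflicts") must go on different masks.
# That is two-coloring of the conflict graph.

def decompose(features, conflicts):
    """Assign each feature to mask 0 or 1; return None if impossible."""
    mask = {}
    for f in features:
        if f in mask:
            continue
        mask[f] = 0
        stack = [f]
        while stack:                       # breadth of one connected component
            a = stack.pop()
            for u, v in conflicts:
                if a not in (u, v):
                    continue
                b = v if a == u else u
                if b not in mask:
                    mask[b] = 1 - mask[a]  # conflicting features alternate masks
                    stack.append(b)
                elif mask[b] == mask[a]:
                    return None            # odd conflict cycle: two masks aren't enough
    return mask

# Three wires, each too close to the next: alternating masks works.
print(decompose(["w1", "w2", "w3"], [("w1", "w2"), ("w2", "w3")]))
# A triangle of mutual conflicts cannot be split across two masks.
print(decompose(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))
```

Restrictive design rules forbid the layouts that create odd cycles in the first place, which is one way an IDM can avoid the decomposition headache a foundry’s customers would otherwise hand it.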

It will be interesting to see how Intel tackles the Double Patterning challenge without the support of the mighty fabless semiconductor ecosystem.

Which brings me to another trending topic: is 20nm planar a full node, half node, or everybody-gonna-skip node?

I can tell you as a matter of fact that the top semiconductor companies around the world will NOT skip 20nm. 20nm tape-outs are happening now with production silicon late next year. 20nm will require more processing time from GDS to wafer, but it will NOT be cost prohibitive for high volume customers. You are probably familiar with the 80/20 rule, where 80% of something or other is controlled by 20% of the people; in the semiconductor industry we call it the 90/10 rule, where 90% of the silicon shipped comes from 10% of the companies, and you can bet that they will tape out at 20nm. Designing at 20nm planar will also make the transition to 16nm FinFET easier, and I can tell you that EVERYONE will be taping out at 16nm FinFET. That’s my story and I’m sticking to it.

My favorite Mark Bohr quote: “We don’t intend to be in the general-purpose foundry business, I don’t think the volumes ever will be huge for Intel”. Exactly! So what is Intel going to do with all that empty fab space?


Equipment Down 16% in 2012, Flat to Down in 2013

by Bill Jewell on 12-22-2012 at 8:30 pm

Shipments of semiconductor manufacturing equipment have been trending downward since June 2012, based on combined data from SEMI for North American and European manufacturers and from SEAJ for Japanese manufacturers. The market bounced back strongly in late 2009 and 2010 after the 2008 downturn, returning to the $3 billion a month level. Bookings and billings fell in the latter half of 2011 and recovered to the $3 billion level in the first half of 2012. The latest downturn is more severe than 2011’s, falling below the $2 billion a month level. However, the downturn may be bottoming out, with November 2012 three-month-average bookings up 1% from October.

Total semiconductor manufacturing equipment shipments in 2012 will be about $28 billion, down 16% from 2011, based on data through November. Recent forecasts for 2013 range from a decline of 4.4% from VLSI Research to flat from SEMI. However, the largest foundry company, TSMC, is bucking the trend. According to Digitimes, TSMC plans to increase capital spending in 2013 by 8% to $9 billion.

What does this mean for the semiconductor market in 2013? Since the demise of SICAS, no accurate industry capacity utilization data is available. We at Semiconductor Intelligence estimate utilization is currently in the low to mid 80% range, down from the 90% to 95% range for 2010 and 2011. Thus the semiconductor market has room to grow in the near term without significant capacity additions. Our forecast is for 9% semiconductor market growth in 2013. Semiconductor market growth will probably accelerate in 2014 to the 10% to 15% range, requiring increased capacity. Thus the semiconductor equipment market should return to healthy growth in 2014.


Winner, Winner, Chicken Dinner!

by SStalnaker on 12-21-2012 at 8:00 pm

I have no idea if chicken was actually on the menu, but on December 12, Calibre RealTime picked up its third industry award, this time the 2012 Elektra Award for Design Tools and Development Software from the European Electronics Industry. Calibre RealTime came out on top in a group full of prestigious finalists, including ByteSnap, Cadence, CadSoft, Synopsys, and Xilinx.


Calibre RealTime provides complete sign-off DRC/DFM feedback to the designer during custom layout creation and editing. Using an OpenAccess run-time model to enable integration with most third-party custom design environments, the Calibre RealTime platform gives custom/AMS designers immediate access to qualified Calibre rule decks (including recommended rules, pattern matching, equation-based DRC, and double patterning) during design creation. Instead of waiting for the full sign-off DRC iteration at the end of the design cycle, the designer can check each design choice against sign-off DRC with the same fast response time as the in-design checkers. In addition, the full range of Calibre verification support is provided, including proprietary “hinting” and error identification displays that offer precise correction suggestions. This interactive editing and verification of layouts ensures DRC-clean designs in the shortest time possible, reducing overall design cycle time and giving designers more time for design optimization, resulting in better quality and higher performance.

Calibre RealTime was previously recognized by Electronic Design with a Best award recognizing “the best technology, products and standards” of the year, and by DesignCon with its DesignVision award, honoring “the most innovative design tools in the industry.”

For more information, visit the Calibre RealTime webpage.




Apply within: four embedded instrumentation approaches

by Don Dingee on 12-21-2012 at 9:00 am

Anyone who has been around technology consortia or standards bodies will tell you that the timeline from inception to mainstream adoption of a new embedded technology is about 5 years, give or take a couple dream cycles. You can always tell the early stage, where very different concepts try to latch on to the same, simple term.

Such is the case with embedded instrumentation. At least four different post-silicon approaches have grabbed the term and are applying it in very different ways. Here at SemiWiki, our team has begun to explore the first two, but I found a couple of others while Googling around.

The basic problem all of these approaches address is how to see inside a complex design comprised of multiple IP blocks. All borrow from the board-level design world and the concept of IEEE 1149.x, JTAG. By gathering instrumented data from a device, passing it out on a simple low pin count interface, and connecting those interfaces to an external analyzer, a designer can gain visibility into what is going on without intrusive test points and broadside logic analysis techniques. (JTAG is also handy for device programming, but let’s stick to debug and visibility in this discussion.)
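The low pin count idea behind all of these is easy to picture: internal capture registers are chained together and their contents clocked out one bit at a time on a single data pin. A toy sketch, where the function, register values, and widths are all made up for illustration (real TAPs add state machines and control signals on top):

```python
# Serializing a captured scan chain in miniature: many internal register
# bits leave the chip over one data pin, one bit per clock.

def capture_and_shift(values, widths):
    """Concatenate captured register values into one serial bitstream,
    least-significant bit of each register first."""
    stream = []
    for value, width in zip(values, widths):
        for i in range(width):
            stream.append((value >> i) & 1)  # shift out one bit per clock
    return stream

# Two hypothetical 4-bit instrument registers captured as 0xA and 0x3:
# eight bits leave on one pin; the external analyzer, knowing the chain
# layout, reassembles the original values.
print(capture_and_shift([0xA, 0x3], [4, 4]))  # [0, 1, 0, 1, 1, 1, 0, 0]
```

The four approaches below differ mainly in how the instruments are described, inserted, and controlled, not in this basic serial mechanism.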

A quick snapshot of these approaches:
IJTAG, IEEE P1687 – Internal JTAG targets the problem of IP as a “black box”, especially when considering 3D and importing a block wholesale. The standard tries to rein in proprietary test approaches, with their differing and incompatible methods, into a single, cohesive framework. By standardizing the test description for an IP block, using Instrument Connectivity Language (ICL) and Procedure Description Language (PDL), a P1687-compliant block can connect into the test structure. If IP providers adopt P1687, IP blocks become plug-and-play in test development packages such as ASSET InterTech ScanWorks and Mentor Graphics Tessent.

IJTAG, Testing Large SoCs, by Paul McLellan
Mentor and NXP Demonstrate IJTAG …, by Gene Forte
One pager on IEEE P1687, by Avago Technologies

Tektronix Clarus – Tek has come at the same problem from a different direction, working on the premise that RTL is RTL, with an optimized approach that looks at an RTL design and inserts analysis instrumentation seamlessly during any EDA flow. Tektronix Clarus is built around the idea of inserting a lot of observation points – perhaps as many as 100,000 – without taking a lot of real estate. The instrumentation is always collecting data, and the analyzer capability uses compression and conditional capture techniques to bring in long traces on just the signals of interest. This approach is more about improved analysis and deeper visibility under functional conditions.

Tektronix articles from SemiWiki

Nexus, IEEE-ISTO 5001 – Recognizing that processors, especially multicore parts, are getting more complex and their software harder to debug, Nexus 5001 creates more robust debugging capability than IEEE 1149.7 JTAG alone offers. It defines a second, auxiliary Nexus port with five signals and adds higher-bandwidth capability for operations like reading and writing memory in real time. There is also a port controller which combines signaling for up to 8 cores. The latest revision of the standard, in 2012, adds support for the Xilinx Aurora SerDes, a fast pipe for large traces. (This is one of those standards taking a long time to get traction – membership in Nexus 5001 has waned a bit, and it still seems focused on automotive MCUs with staunch backing from GM.)

Nexus 5001 forum
IPextreme Freescale IP

GOEPEL ChipVORX – GOEPEL Electronic has looked at the same problem of different IP using different interfaces and created a proprietary solution using adaptive models that connect software tools to various target instruments. There’s not a lot of detail available, but the GOEPEL Software Reconfigurable Instruments page has a bit more info.

Different technologies emerge to fit different needs, and these approaches show how deep the need for post-silicon visibility and multicore and complex IP debugging goes. Which of these, or other, embedded instrumentation approaches are you taking in your SoC or FPGA designs?


Formal Verification at ARM

by Paul McLellan on 12-20-2012 at 4:34 pm

There are two primary microprocessor companies in the world these days: Intel and ARM. Of course there are many others but Intel is dominant on the PC desktop (including Macs) and ARM is dominant in mobile (including tablets).

One of the keynotes at last month’s Jasper User Group (JUG, not the greatest of acronyms) was by Bob Bentley of Intel, talking about how they gradually introduced formal approaches after the highly visible and highly embarrassing floating point bug in 1994. I already blogged about that here. Bob took time to emphasize that Intel doesn’t endorse vendors, so nothing he said should be taken as an endorsement of anyone. He did say that besides using commercial tools they also have some internal formal tools of their own.

Later in the day, ARM talked about their use of formal verification and, in particular, Jasper. Laurent Arditi of the ARM Processor Division in Sophia Antipolis (where I lived for 6 years; highly recommended as a great mix of high tech, great lifestyle, and great weather) presented. In some ways this was an update, since ARM presented on how they were gradually using more and more formal at last year’s JUG.

He characterized ARM’s use of Jasper as AAHAA. This stands for:

  • Architecture
  • bug Avoidance
  • bug Hunting
  • bug Absence
  • bug Analysis

Architecture:
Specify architectures using formal methods and verify them for completeness and correctness. For example, ARM used this approach to verify their latest memory bus protocols, which they talked about at last year’s JUG.

Bug Avoidance:
Catch bugs early, during design bring-up, usually before the simulation testbench is ready. Bugs caught early are easier and cheaper to fix.

Bug Hunting:
Find bugs at the block and system level. This is server-farm friendly (lots of runs needed).

Bug Absence:
Prove critical properties of the design. This is complex work that requires considerable user expertise.

Bug Analysis:
Investigate late-cycle bugs. Isolate corner-case bugs observed in the field or in the lab. Confirm the correctness of any fixes.
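To make “Bug Absence” concrete: proving a property means checking it over every reachable state and input, not just the cases a testbench happens to visit. A miniature sketch, with an invented two-requester arbiter standing in for real RTL (formal tools do this symbolically at vastly larger scale):

```python
# "Bug Absence" in miniature: exhaustively check a safety property over
# every state and input of a tiny model. The arbiter is a made-up example.

def arbiter(favor, req0, req1):
    """Toy round-robin arbiter: returns (grant0, grant1, next_favor)."""
    if req0 and (not req1 or favor == 0):
        return True, False, 1    # grant requester 0, favor 1 next time
    if req1:
        return False, True, 0    # grant requester 1, favor 0 next time
    return False, False, favor   # nothing requested, state unchanged

# Safety property: the arbiter never grants both requesters at once.
# Because the state space is tiny, we can simply enumerate all of it.
for favor in (0, 1):
    for req0 in (False, True):
        for req1 in (False, True):
            g0, g1, _ = arbiter(favor, req0, req1)
            assert not (g0 and g1), "mutual exclusion violated"
print("mutual exclusion holds in every state")
```

A simulation testbench samples some of these combinations; the exhaustive walk is what lets you claim the bug is *absent*, which is exactly the distinction ARM draws between Bug Hunting and Bug Absence.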

Jasper formal is now going mainstream at ARM, which is a big advance from last year, when it was just starting to get traction in some groups. The Bug Avoidance and Bug Hunting flows are leading the proliferation and give an early return on investment. They now have formal regression flows running on their server farm. Formal is no longer a niche topic: at an internal ARM conference on verification, over 100 engineers showed interest. Just like at Intel, much of the introduction of formal requires cookbooks and flows to be put together by real formal experts and then proliferated to the various design teams.


For example, the above graphic shows two IP blocks. One was done with classical techniques (aka simulation). The other, an especially hard block to verify, was done using formal techniques with no bringup testbenches. As you can see, most of the bugs were found much earlier.


Currently Jasper is heavily used for various aspects of AAHAA. You can see in the above diagram how Jasper features map onto the AAHAA taxonomy.

In the future, apart from propagating additional formal techniques, ARM wants to use formal approaches for coverage analysis, IP-XACT validation, security validation (TrustZone), and UPF/CPF (power policy) checking.