
Happy New Year from SemiWiki!
by Daniel Nenni on 01-01-2013 at 7:05 pm

It was an amazing year for SemiWiki and I would like to sincerely thank all who participated. SemiWiki traffic doubled again, which is amazing in itself. SemiWiki membership more than tripled as we continue to add vertical markets (EDA, IP, Services, Foundry). More people are blogging on SemiWiki, and the Forums and Wikis are coming to life. With millions of page views, I can comfortably say that social media is now an integral part of the fabless semiconductor ecosystem.

What’s in store for SemiWiki in the new year? We will continue to grow with more original content, another vertical market (FPGA), and increased exposure through new partnerships with mainstream news carriers. Today more than 450,000 people read SemiWiki; in 2013 that number will exceed 1,000,000, believe it.

A significant challenge we continue to face as an industry, and I do not see this changing anytime soon, is the lack of understanding of what social media is really capable of. Seriously, most of the people in our industry simply do not get it. Admittedly, I probably would not get social media either if not for my four children, who have pushed me along. As a result I have spent a considerable amount of time on SemiWiki, LinkedIn, Twitter, Facebook, and Google+ looking at the analytics and calculating the return on investment (ROI) for different activities.

As a career salesperson I can tell you social media is all about ROI. The most limiting factor in the sales equation is the time spent educating customers on your technology and the resulting value proposition. If you call on an account that has never heard of your product or company, the sales cycle can be very expensive. Back in the day we made cold calls and calculated the ROI: you would have to call on 100 people to get 5 qualified meetings, an overhead that today our industry simply cannot afford.

On the other hand, when a potential customer initiates contact out of the blue we call that a Bluebird, as in a sale that just flew in the window, which is a rare occurrence. Bottom line: social media is all about creating Bluebirds and compacting the sales cycle as much as possible.

White papers are a staple in our industry, with webinars and live seminars a close second and third. They are cost-effective customer communication channels and invitations to collaborate, but also very important sales tools.

SemiWiki bloggers are industry professionals with 25+ years of experience. By day we work inside the fabless semiconductor ecosystem, by night we blog our opinions, observations, and experiences. In doing so, we drive qualified traffic to your white papers, webinars, and seminars. This is a collaborative, real-time feedback loop and the ROI is easily documented. Work with us and Bluebirds will fly through your windows, absolutely.

Happy New Year and best of luck to you in 2013! It would be a pleasure to work with you, just fly through our window and we will take care of the rest.


Cadence 3D Methodology
by Paul McLellan on 12-28-2012 at 8:20 pm

A couple of weeks ago was the 3D Architectures for Semiconductor Integration and Packaging conference in Redwood City. Cadence presented the changes they have been making to their tool flow to enable 2.5D (interposer-based) and true 3D TSV-based designs. You know what TSV stands for by now, right? Through-silicon via: a (usually) copper plug that carries signal, power, or clock from the front side of the chip, through the thinned wafer, to the back where it can contact micro-balls on the die or interposer below.


The first people to do 3D designs managed to do it with tools that were unchanged, adding scripts and other modifications to fool the 2D tools into doing their bidding. After all, each die is a 2D design and can, in some sense, be done independently.

One of the biggest changes comes at the front of the design. Even assuming the partitioning between the different die has already been decided (often because they use different processes, memory or analog on one die, logic on another), there is still a lot that can be done to optimize the 3D structure. Many of the TSVs will end up being used to distribute clock or power, with decoupling capacitors on the interposer. Too many TSVs drive up cost (and waste area); too few risk reliability failures or intermittent errors.


Since each die is largely designed independently of the others once these decisions have been made, the other big area that needs additional capability is verification. In particular, making sure that the TSVs on the various die and interposers involved all line up, carry the correct signals, and so forth. Not to mention verifying that the power delivery network and the clock network behave as planned.
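To make that cross-die check concrete, here is a deliberately simplified sketch. It assumes each layer exports its TSV landing pads as flat (x, y, signal) records, which is an invented illustration format; real flows work on full layout databases and bump maps, and Cadence's actual tools are not shown here.

```python
# Toy cross-die TSV check: every TSV on the upper die must land on a pad at the
# same (x, y) on the layer below, and both sides must agree on the signal name.
# The flat (x, y, signal) record format and the 1-unit tolerance are made up
# for illustration; this is not a real design-database schema.

TOL = 1.0  # allowed misalignment in layout units

def index_by_location(pads):
    """Bucket pads onto a coarse grid so nearby coordinates compare equal."""
    return {(round(x / TOL), round(y / TOL)): signal for x, y, signal in pads}

def check_stack(upper_tsvs, lower_pads):
    """Return human-readable violations between two stacked layers."""
    lower = index_by_location(lower_pads)
    violations = []
    for x, y, signal in upper_tsvs:
        key = (round(x / TOL), round(y / TOL))
        if key not in lower:
            violations.append(f"TSV '{signal}' at ({x}, {y}) has no landing pad below")
        elif lower[key] != signal:
            violations.append(f"({x}, {y}): upper die says '{signal}', lower says '{lower[key]}'")
    return violations

upper = [(10.0, 10.0, "clk"), (20.0, 10.0, "vdd"), (30.0, 10.0, "rst_n")]
lower = [(10.0, 10.0, "clk"), (20.0, 10.0, "vss")]   # swapped rail, missing pad
for v in check_stack(upper, lower):
    print(v)
```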


The final big area of difference is manufacturing test. Once the entire design is packaged up there is only access to the pins on the package. These only go to the interposer (in a 2.5D design) or the lowest die (in a 3D design). There is no direct access to the die above. So additional TSVs are required to build a test “elevator” that gets the scan patterns in through the pins and up to the level where they are used. Wafer sort, the forgotten child of manufacturing test, is also much more important in a 3D design due to the “known good die” problem. If a faulty die slips through wafer sort and gets packaged, not only is that die discarded (and it was bad anyway) but other die and interposers (which are most likely good) also get discarded. The cost of a bad die is that much higher than in a normal 2D design.
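A quick bit of arithmetic shows why wafer sort matters so much more here. All of the costs and the escape rate below are invented round numbers, purely to illustrate the known-good-die effect, not data from Cadence or any foundry:

```python
# Illustrative known-good-die math: if a bad die escapes wafer sort and gets
# stacked, the whole assembly -- good die, interposer, assembly cost -- is
# scrapped with it. All figures below are made-up round numbers.

die_cost        = 20.0   # $ per logic die
memory_cost     = 15.0   # $ per memory die stacked on top
interposer_cost = 10.0
assembly_cost   = 12.0
test_escape     = 0.05   # fraction of bad die that wafer sort fails to catch

# 2D case: an escaped bad die wastes only itself plus its own assembly/packaging
loss_2d = die_cost + assembly_cost

# 2.5D/3D case: the escaped bad die drags down every other component in the stack
loss_3d = die_cost + memory_cost + interposer_cost + assembly_cost

print(f"scrap cost per escape, 2D: ${loss_2d:.0f}")
print(f"scrap cost per escape, 3D: ${loss_3d:.0f}")
print(f"expected extra cost per 1000 units at {test_escape:.0%} escape rate: "
      f"${1000 * test_escape * (loss_3d - loss_2d):.0f}")
```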

Of course there are many other issues in 3D beyond EDA. It requires a whole new ecosystem, and the details of who does what haven't all been ironed out yet. TSVs need to be created, wafers need to be thinned, microballs need to be placed, die need to be bonded together, packaged, bonded out, and tested. And when all this is done, it needs to be economical for volume production, not just an elegant price-is-no-object technical solution.


Intel 22nm SoC Process Exposed!
by Daniel Nenni on 12-27-2012 at 9:00 pm

The biggest surprise embedded in the Intel 22nm SoC disclosure is that they still do NOT use Double Patterning, which is a big fat hairy deal if you are serious about the SoC foundry business. The other NOT so surprising thing I noticed in reviewing the blogosphere response is that the industry term FinFET was dominant while the Intel-invented term Tri-Gate was rarely used.

The transistor pitch – essentially the distance between two transistors – in the 22nm tri-gate technology is 80nm, which is the smallest pitch that can be produced using single-pattern lithography, Bohr says. “The next generation, 14,” he said, “we’re going to have to convert to Double Patterning to get tighter pitches.”
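As a rough sanity check on that 80nm number (my own back-of-the-envelope figures, not from Intel): single-exposure resolution follows the Rayleigh criterion, and with ArF immersion lithography (λ = 193nm, NA ≈ 1.35) and a practical k1 floor of roughly 0.28, the minimum half-pitch works out to about 40nm, i.e. an 80nm pitch:

```latex
\text{half-pitch}_{\min} = k_1\,\frac{\lambda}{\mathrm{NA}}
  \approx 0.28 \times \frac{193\,\mathrm{nm}}{1.35}
  \approx 40\,\mathrm{nm}
  \quad\Longrightarrow\quad
  \text{pitch}_{\min} \approx 80\,\mathrm{nm}
```

Anything tighter needs a second exposure (Double Patterning) or a shorter wavelength, which is exactly the conversion Bohr is describing for 14nm.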

Mark Bohr is the infamous Intel Senior Fellow who mistakenly predicted the doom of the fabless semiconductor ecosystem. Mark is a funny guy. I remember him putting up an incomplete 22nm defect density trend slide at this year’s Intel Developer Forum and saying, “Was it a mistake that I left the numbers out? Yes! Oh my goodness, how could I have done that? But, gee, time is up, so … ”

TSMC, on the other hand, presents their process defect density numbers every year at the TSMC Tech Symposium. Transparency equals trust in the foundry business, believe it. Back to Double Patterning: I will defer to the experts at Mentor for a complete description. Please see the Double Patterning Exposed articles for technical detail. No registration is required, just click on over.

So the question is: Why does TSMC use the extra lithography steps of Double Patterning for 20nm while Intel does not for 22nm? The answer is Restrictive Design Rules, which essentially eliminate any variability in the orientation of shapes on critical layers. Intel is very comfortable with incredibly restrictive design rules since they are a microprocessor manufacturer and not a pure-play foundry. Intel can micromanage every aspect of design and manufacturing down to the electron. TSMC, on the other hand, needs to accommodate different design requirements and intellectual property from 615 customers. In addition to more flexible metal routing, Double Patterning also enables a tighter metal pitch, which will put TSMC 16nm head-to-head with Intel 14nm even though, as I explained in 16nm FinFET Versus 20nm Planar, 16nm FF leverages 20nm process technology.
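For readers who have not clicked through to the Mentor articles yet, the core decomposition problem is easy to sketch: features spaced closer than the single-exposure limit must go on different masks, which is graph two-coloring. The coordinates and the 60-unit spacing threshold below are made-up illustration values, not foundry rules:

```python
from collections import deque

# Toy double-patterning decomposition: features closer than the single-exposure
# spacing limit must be assigned to different masks, i.e. 2-coloring of the
# conflict graph. Coordinates and the 60-unit threshold are made up.

def conflict(a, b, min_space=60):
    """Two features conflict if their center-to-center spacing is too small."""
    (xa, ya), (xb, yb) = a, b
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < min_space

def decompose(features):
    """Assign each feature to mask 0 or 1 via BFS 2-coloring; None if impossible."""
    n = len(features)
    adj = [[j for j in range(n) if j != i and conflict(features[i], features[j])]
           for i in range(n)]
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in adj[i]:
                if color[j] is None:
                    color[j] = 1 - color[i]
                    queue.append(j)
                elif color[j] == color[i]:
                    return None  # odd cycle: layout cannot be split onto two masks
    return color

print(decompose([(0, 0), (40, 0), (80, 0)]))   # [0, 1, 0] -- splittable
print(decompose([(0, 0), (40, 0), (20, 35)]))  # None -- three-way conflict
```

A three-way conflict (an odd cycle) cannot be split across two masks at all, which is exactly where restrictive design rules or layout fixes come into play.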

It will be interesting to see how Intel tackles the Double Patterning challenge without the support of the mighty fabless semiconductor ecosystem. Which brings me to another trending topic: Is 20nm planar a full node, half node, or everybody-gonna-skip node?

I can tell you as a matter of fact that the top semiconductor companies around the world will NOT skip 20nm. 20nm tape-outs are happening now, with production silicon late next year. 20nm will require more processing time from GDS to wafer but it will NOT be cost prohibitive for high volume customers. You are probably familiar with the 80/20 rule, where 80% of something or other is controlled by 20% of the people; in the semiconductor industry we call it the 90/10 rule, where 90% of the silicon shipped comes from 10% of the companies, and you can bet that they will tape out at 20nm. Designing at 20nm planar will also make the transition to 16nm FinFET easier, and I can tell you that EVERYONE will be taping out at 16nm FinFET. That’s my story and I’m sticking to it.

My favorite Mark Bohr quote: “We don’t intend to be in the general-purpose foundry business, I don’t think the volumes ever will be huge for Intel”. Exactly! So what is Intel going to do with all that empty fab space?


Equipment Down 16% in 2012, Flat to Down in 2013
by Bill Jewell on 12-22-2012 at 8:30 pm

Shipments of semiconductor manufacturing equipment have been trending downward since June 2012, based on combined data from SEMI for North American and European manufacturers and from SEAJ for Japanese manufacturers. The market bounced back strongly in late 2009 and 2010 after the 2008 downturn, returning to the $3 billion a month level. Bookings and billings fell in the latter half of 2011 and recovered to the $3 billion level in the first half of 2012. The latest downturn is more severe than in 2011, falling below the $2 billion a month level. However, the downturn may be bottoming out, with November 2012 three-month-average bookings up 1% from October.

Total semiconductor manufacturing equipment shipments in 2012 will be about $28 billion, down 16% from 2011, based on data through November. Recent forecasts for 2013 range from a 4.4% decline from VLSI Research to flat from SEMI. However, the largest foundry company, TSMC, is bucking the trend. According to Digitimes, TSMC plans to increase capital spending in 2013 by 8% to $9 billion.

What does this mean for the semiconductor market in 2013? Since the demise of SICAS, no accurate industry capacity utilization data is available. We at Semiconductor Intelligence estimate utilization is currently in the low to mid 80% range, down from the 90% to 95% range for 2010 and 2011. Thus the semiconductor market has room to grow in the near term without significant capacity additions. Our forecast is for 9% semiconductor market growth in 2013. Semiconductor market growth will probably accelerate in 2014 to the 10% to 15% range, requiring increased capacity. Thus the semiconductor equipment market should return to healthy growth in 2014.


Winner, Winner, Chicken Dinner!
by SStalnaker on 12-21-2012 at 8:00 pm

I have no idea if chicken was actually on the menu, but on December 12, Calibre RealTime picked up its third industry award, this time the 2012 Elektra Award for Design Tools and Development Software from the European Electronics Industry. Calibre RealTime came out on top in a group full of prestigious finalists, including ByteSnap, Cadence, CadSoft, Synopsys, and Xilinx.


Calibre RealTime provides complete sign-off DRC/DFM feedback to the designer during custom layout creation and editing. Using an OpenAccess run-time model to enable integration with most third-party custom design environments, the Calibre RealTime platform provides custom/AMS designers with immediate access to qualified Calibre rule decks (including recommended rules, pattern matching, equation-based DRC, and double patterning) during design creation. Instead of waiting for the full sign-off DRC iteration at the end of the design cycle, the designer can check each design choice against sign-off DRC, with the same fast response time as in-design checkers. In addition, the full range of Calibre verification support is provided, including proprietary “hinting” and error identification displays that provide precise correction suggestions. This interactive editing and verification of layouts ensures DRC-clean designs in the shortest time possible, reducing overall design cycle time and giving designers more time for design optimization, resulting in better quality and higher performance.
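To picture what checking each edit as it happens means, here is a toy sketch of an incremental spacing check. This is emphatically not the Calibre API or a real rule deck; the rectangle format and the 40-unit minimum spacing are invented purely for illustration:

```python
# Toy in-design spacing check: when the designer edits one shape, only that
# shape is re-checked against its neighbors instead of re-running full-chip DRC.
# Rectangles are (x1, y1, x2, y2); the 40-unit rule value is made up.

MIN_SPACE = 40

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they touch/overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def check_edited_shape(edited, layout):
    """Return spacing violations between the edited shape and everything else."""
    return [(edited, other, spacing(edited, other))
            for other in layout
            if other is not edited and spacing(edited, other) < MIN_SPACE]

layout = [(0, 0, 100, 20), (0, 60, 100, 80)]
moved = (0, 30, 100, 50)            # the shape the designer just dragged
layout.append(moved)
for a, b, s in check_edited_shape(moved, layout):
    print(f"spacing violation: {s:.1f} < {MIN_SPACE} between {a} and {b}")
```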

Calibre RealTime was previously recognized by Electronic Design with a Best award recognizing “the best technology, products and standards” of the year, and by DesignCon with its DesignVision award, honoring “the most innovative design tools in the industry.”

For more information, visit the Calibre RealTime webpage.




Apply within: four embedded instrumentation approaches
by Don Dingee on 12-21-2012 at 9:00 am

Anyone who has been around technology consortia or standards bodies will tell you that the timeline from inception to mainstream adoption of a new embedded technology is about 5 years, give or take a couple dream cycles. You can always tell the early stage, where very different concepts try to latch on to the same, simple term.

Such is the case with embedded instrumentation. At least four different post-silicon approaches have grabbed the term and are applying it in very different ways. Here at SemiWiki, our team has begun to explore the first two, but I found a couple of others while Googling around.

The basic premise of all these approaches is how to see inside a complex design composed of multiple IP blocks. All borrow from the board-level design world and the concept of IEEE 1149.x, JTAG. By gathering instrumented data from a device, passing it out on a simple low pin count interface, and connecting those interfaces to an external analyzer, a designer can gain visibility into what is going on without intrusive test points and broadside logic analysis techniques. (JTAG is also handy for device programming, but let’s stick to debug and visibility in this discussion.)
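To make the shared mechanism concrete before the four approaches diverge, here is a minimal, purely illustrative model of serialized instrument access: register bits are stitched into one long shift path, shifted out through a narrow interface, and reassembled by the external tool. It is my own sketch, not any of the standards described below; real JTAG/IJTAG adds TAP state machines, instruction registers, and instrument selection.

```python
# Minimal model of serialized instrument access over a narrow interface.
# Purely illustrative: no TAP controller, instruction register, or per-instrument
# selection is modeled here.

class ScanChain:
    def __init__(self, register_widths):
        self.widths = register_widths          # e.g. [8, 16, 4] bits per instrument
        self.bits = [0] * sum(register_widths) # the concatenated shift path

    def capture(self, instrument_values):
        """Load each instrument's current value into its slice of the chain."""
        pos = 0
        for width, value in zip(self.widths, instrument_values):
            for i in range(width):
                self.bits[pos + i] = (value >> i) & 1
            pos += width

    def shift_out(self):
        """Shift the whole chain out one bit at a time (what the pins see)."""
        for bit in self.bits:
            yield bit

    def unpack(self, serial_bits):
        """External-analyzer side: rebuild per-instrument values from the bitstream."""
        values, pos = [], 0
        for width in self.widths:
            chunk = serial_bits[pos:pos + width]
            values.append(sum(b << i for i, b in enumerate(chunk)))
            pos += width
        return values

chain = ScanChain([8, 16, 4])
chain.capture([0xA5, 0xBEEF, 0x7])             # values captured inside the device
stream = list(chain.shift_out())               # serialized over a low pin count port
print([hex(v) for v in chain.unpack(stream)])  # ['0xa5', '0xbeef', '0x7']
```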

A quick snapshot of these approaches:
IJTAG, IEEE P1687 – Internal JTAG targets the problem of IP as a “black box”, especially when considering the problem of 3D and importing a block wholesale. The standard tries to rein in proprietary test approaches, each with differing and incompatible access methods, into a single, cohesive framework. By standardizing the test description for an IP block, using Instrument Connectivity Language (ICL) and Procedure Description Language (PDL), a P1687-compliant block can connect into the test structure. If IP providers start adopting P1687, IP blocks become plug-and-play in test development packages such as ASSET InterTech ScanWorks and Mentor Graphics Tessent.

IJTAG, Testing Large SoCs, by Paul McLellan
Mentor and NXP Demonstrate IJTAG …, by Gene Forte
One pager on IEEE P1687, by Avago Technologies

Tektronix Clarus – Tek has come at the same problem from a different direction, working on the premise that RTL is RTL, with an optimized approach that looks at an RTL design and inserts analysis instrumentation seamlessly during any EDA flow. Tektronix Clarus is built around the idea of inserting a lot of points – perhaps as many as 100,000 – without taking a lot of real estate. Instrumentation is always collecting data, but the analyzer uses compression and conditional capture techniques to bring out long traces on just the signals of interest. This approach is more about improved analysis and deeper visibility under functional conditions.

Tektronix articles from SemiWiki

Nexus, IEEE-ISTO 5001 – Recognizing that processors, especially multicore parts, are getting more complex and software for them is getting harder to debug, Nexus 5001 creates more robust debugging capability than IEEE 1149.7 JTAG alone offers. Defining a second Nexus auxiliary port with five signals and adding higher bandwidth extends the capability for operations like reading and writing memory in real time. There is also a port controller which combines signaling for up to 8 cores. The latest revision of the standard in 2012 adds support for the Xilinx Aurora SerDes, a fast pipe for large traces. (This is one of those standards taking a long time to get traction – membership in Nexus 5001 has waned a bit, and it still seems focused on automotive MCUs with staunch backing from GM.)

Nexus 5001 forum
IPextreme Freescale IP

GOEPEL ChipVORX – GOEPEL Electronic has looked at the problem of different IP using different interfaces and created a proprietary solution using adaptive models that connect software tools to various target instruments. There’s not a lot of detail available, but the GOEPEL Software Reconfigurable Instruments page has a bit more info.

Different technologies emerge to fit different needs, and these approaches show how deep the need for post-silicon visibility and multicore and complex IP debugging goes. Which of these, or other, embedded instrumentation approaches are you taking in your SoC or FPGA designs?


Formal Verification at ARM
by Paul McLellan on 12-20-2012 at 4:34 pm

There are two primary microprocessor companies in the world these days: Intel and ARM. Of course there are many others but Intel is dominant on the PC desktop (including Macs) and ARM is dominant in mobile (including tablets).

One of the keynotes at last month’s Jasper User Group (JUG, not the greatest of acronyms) was by Bob Bentley of Intel, talking about how they gradually introduced formal approaches after the highly visible and highly embarrassing Pentium FDIV floating point bug in 1994. I already blogged about that here. Bob took time to emphasize that Intel doesn’t endorse vendors and so nothing he said should be taken as an endorsement of anyone. He did say that besides using commercial tools they also have some of their own internal formal tools.

Later in the day ARM talked about their use of formal verification and, in particular, Jasper. Laurent Arditi of the ARM Processor Division in Sophia Antipolis (where I lived for 6 years, highly recommended as a great mix of high tech, great lifestyle, and great weather) presented. In some ways this was an update, since ARM presented on how they were gradually using more and more formal at last year’s JUG.

He characterized ARM’s use of Jasper as AAHAA. This stands for:

  • Architecture
  • bug Avoidance
  • bug Hunting
  • bug Absence
  • bug Analysis

Architecture:
Specify architectures using formal methods and verify them for completeness and correctness. For example, ARM used this approach for verifying their latest memory bus protocols. They talked about this at last year’s JUG.

Bug Avoidance:
Catch bugs early, during design bringup, usually before the simulation testbench is ready. Bugs caught early are easier and cheaper to fix.

Bug Hunting:
Find bugs at the block and system level. Server farm friendly (lots of runs needed).

Bug Absence:
Prove critical properties of the design. This is complex work and requires considerable user expertise (a toy illustration of the idea follows below).

Bug Analysis:
Investigate late-cycle bugs. Isolate corner-case bugs observed in the field or in the lab. Confirm the correctness of any fixes.
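To give a flavor of what proving bug absence means in practice (my own toy sketch, not ARM's flow or Jasper's engine): instead of sampling behaviors the way simulation does, a formal tool exhaustively explores every reachable state of a model and either proves the property holds everywhere or returns a counterexample trace.

```python
from collections import deque

# Toy "bug absence" proof: exhaustively explore every reachable state of a tiny
# round-robin arbiter model and prove mutual exclusion, or return a counterexample
# trace. Illustration only -- this is not ARM's flow or Jasper's engine.

def step(state, req):
    """state = (priority, grant0, grant1); req = (req0, req1)."""
    prio, _, _ = state
    g0 = bool(req[0]) and (not req[1] or prio == 0)
    g1 = bool(req[1]) and (not req[0] or prio == 1)
    new_prio = 1 - prio if (g0 or g1) else prio    # rotate priority after a grant
    return (new_prio, g0, g1)

def mutual_exclusion(state):
    return not (state[1] and state[2])             # never grant both requesters

def check(initial=(0, False, False)):
    parent, frontier = {initial: None}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not mutual_exclusion(state):
            trace = []                             # rebuild the path for debug
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for req in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            nxt = step(state, req)
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None                                    # no reachable state violates the property

print(check() or "property proven: no reachable state violates mutual exclusion")
```

Real designs have state spaces far too large to enumerate like this, which is why the engines, abstractions, and expert-built cookbooks mentioned below matter.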

Jasper formal is now going mainstream at ARM, which is a big advance from last year, when it was just starting to get traction in some groups. Bug Avoidance and Bug Hunting flows are leading the proliferation and give an early return on investment. They now have formal regression flows running on their server farm. Formal is no longer a niche topic: at an internal ARM conference on verification, over 100 engineers were interested. Just like at Intel, much of the introduction of formal requires cookbooks and flows put together by real formal experts and then proliferated to the various design teams.


For example, the above graphic shows two IP blocks. One was done with classical techniques (aka simulation). The other, an especially hard block to verify, was done using formal techniques with no bringup testbenches. As you can see, most of the bugs were found much earlier.


Currently Jasper is heavily used for various aspects of AAHAA. You can see in the above diagram how Jasper features map onto the AAHAA taxonomy.

In the future, apart from propagating additional formal techniques, ARM wants to use formal approaches for coverage analysis, IP-XACT validation, security validation (TrustZone), and UPF/CPF (power intent) checking.


IP Scoring Using TSMC DFM Kits
by Daniel Payne on 12-20-2012 at 11:00 am

Design For Manufacturing (DFM) is the art and science of making an IC design yield better in order to receive a higher ROI. Ian Smith, an AE from Mentor in the Calibre group, presented a pertinent webinar, IP Scoring Using TSMC DFM Kits. I’ll provide an overview of what I learned from this webinar.


Intel’s New Tablet Strategy Brings Ivy Bridge to the Forefront
by Ed McKernan on 12-19-2012 at 11:00 pm

In an article published this week in Microprocessor Report and highlighted in Barron’s, Linley Gwennap makes the argument that Intel should stay the course and fix the PC instead of trying to offset its declines with sales into the smartphone and tablet space. He cites that lower PC sales growth was due to a dramatic slowdown in processor performance gains (from 60%/year to 10-16%/year) and that the mobile market outside of Samsung and Apple is only $1.5B in size. I think his analysis of Intel’s focus is correct, yet restrictive. Intel has on several recent occasions subtly communicated a Firewall strategy that is firm when it comes to tablets while giving way in the smartphone space, especially if it leads to foundry business (i.e. Apple).

What is a tablet? The first iterations defined by Apple are soon to be challenged as the Ultrabook convertibles come down in size and price to compete with the iPad, and likewise the battery life, touchscreen, and baseband functions migrate upward. What began as a low-performance consumer internet device will in 2013 expand into a broad range of compute devices to satisfy the needs of consumers as well as corporations, at price points from below $300 to above $1200 and in 7” to 13” LCD-based form factors. This is the battleground, and Intel needs x86 to succeed. Its main argument for x86 will be with corporate buyers, based on its traditional strength, which is performance. Not of the Atom variety, but of the combination of a 14nm Ivy Bridge and Haswell.

Apple’s introduction of the A6X-upgraded iPads in October was a foretaste of what is to come. The tablet market is really the successor to the notebook, relying on a better balance of battery life and performance aided by SSDs and absent the high Intel processor price. Cannibalization has occurred rapidly in the consumer channel, especially following the introduction of the iPad mini. Next up is the corporate market, where performance longevity drives the CFO’s and CTO’s calculation of 3-5 year ROIs. This is why the A6X processor is key. The much-improved performance of the A6X, based on a full custom design, encroaches on Intel’s Ivy Bridge turf and thereby enables the iPad to benchmark well against Ultrabooks and convertibles, effectively leapfrogging competitive ARM solutions and Intel’s Atom.

Publicly, Intel continues to communicate that the Atom processor, which is behind in terms of schedule and process technology, will catch up by 2014 on 14nm. However, the market isn’t waiting and in fact is evolving faster than originally anticipated. As an example, I offer a little anecdote from a meeting I had with a PC customer a few weeks ago. I asked whether there were plans to build a tablet with a fan, and the engineer remarked, for sure. This should tell you that the tablet market in 2013 will be an all-out battle where performance will be pushed to its limits. It doesn’t take much to know that Intel will be driving this trend with the help of its customers. This is why I believe Intel’s roadmap will change dramatically in the months ahead as they cram peak performance into smaller enclosures while recreating extensive cooling systems, as they did in notebooks more than a decade ago.

Currently Haswell is expected to enter the market in Q2 2013 at 22nm. It is designed to win the high-end PC market, including Ultrabooks. Its larger die size, though, will prevent it from targeting the volume segment of the mobile markets until a 14nm shrink arrives with Broadwell in Q2 2014. The interim period from now until Q2 2014 presents a hole that Intel must fill. This is where I expect Intel to make a dramatic move with Ivy Bridge. First up will be a reduced-frequency (~1GHz), reduced-voltage (sub-1V) part to drop the thermal power below 10W and satisfy the cooling demands of even the smallest tablets. Following this, though, I expect Intel to deliver a 14nm shrink of Ivy Bridge to reduce power further and to build an economical part with die sizes down as small as 50mm². With this part Intel will be able to cover the four corners of lowest cost, highest performance, best performance per watt, and longest battery life.
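A rough sanity check on why lower clock and voltage get Ivy Bridge under a tablet thermal budget: dynamic power scales roughly linearly with frequency and with the square of voltage. The 17W / 1.8GHz / 0.95V baseline below is my own assumed ultra-low-voltage operating point for illustration, not an Intel specification, and leakage is ignored:

```python
# Back-of-the-envelope dynamic power scaling: P_dyn ~ C * V^2 * f.
# Baseline numbers are assumed for illustration only, not Intel specs,
# and leakage power is not modeled.

def scaled_power(p_base, f_base, v_base, f_new, v_new):
    """Scale dynamic power with frequency (linear) and voltage (quadratic)."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

p = scaled_power(p_base=17.0, f_base=1.8, v_base=0.95, f_new=1.0, v_new=0.80)
print(f"~{p:.1f} W")   # roughly 6.7 W -- comfortably under a 10 W tablet budget
```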

Given the high yield of Ivy Bridge at 22nm and the fact that it is the high-volume runner, it would be a perfect candidate to ramp as soon as 14nm is ready. The combination of Haswell and a 14nm Ivy Bridge in late 2013 can really be seen as the two pillars Intel will rely on to slow the movement away from the $35B legacy PC platform. As mentioned in a previous blog, if the Datacenter business ramps from $10B in 2011 to $20B in 2016, then Intel can sustain a PC decline of roughly 1/3 and maintain a similar cash flow. This assumes that the fabs are adequately loaded.
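The arithmetic behind that claim, using only the revenue figures cited above (treating the decline as a flat one third is my reading of the argument, not a figure from the post):

```python
# Rough revenue math behind the "Datacenter offsets a 1/3 PC decline" claim,
# using only the figures cited in the paragraph above.

pc_2011 = 35.0                   # $B legacy PC platform revenue
dc_2011, dc_2016 = 10.0, 20.0    # $B Datacenter revenue ramp

pc_2016 = pc_2011 * (1 - 1/3)    # PC declines by roughly one third
total_2011 = pc_2011 + dc_2011   # $45B
total_2016 = pc_2016 + dc_2016   # ~$43B -- in the same ballpark

print(f"2011: ${total_2011:.0f}B  vs  2016: ${total_2016:.1f}B")
```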

The aggressive mobile strategy outlined here leaves open the question of Atom’s long-term viability. I believe the value of Atom to Intel has more to do with training engineers on low-power circuit design, efficiencies in developing custom SoCs, and building out IP. Tablets and Ultrabook mobiles are vastly different from smartphones in the one area that counts most: increased space, which offers vastly more degrees of freedom in which to fit the processor, memory, and wireless components.

The smartphone market is quickly becoming a two-horse race, where volumes will eventually reach multiple billions of units. Apple and Samsung both continue to vertically integrate across almost every component. Samsung is on a path to be completely internal, while Apple is cobbling together a supply chain that includes the “over the hill gang” of formerly great Japanese suppliers about to become cost leaders thanks to the rapidly depreciating Yen, plus leading-edge semiconductor giants like TSMC and, most likely, Intel. Having both will allow Apple to have the best economics and leverage.

Intel’s retreat from the smartphone market with the exit of Atom will be necessary for Apple to sign on. Despite rumors to the contrary, Intel will be happy to build Apple’s ARM processors if it comes at the expense of Samsung or TSMC. In 12 months Intel’s expanded six-fab footprint will be complete and available to support any combination of 22nm and 14nm processes (85% of the equipment in the fab works for both process nodes). Today it takes roughly three fabs to meet the demands of the x86 market. Intel’s CapEx budget will drop dramatically in the coming year, enabling Intel to use its tremendous cash flow to “buy” foundry customers. Likewise, if the fabs remain empty, the cash flow will diminish rapidly.


Full Disclosure: I am Long AAPL, INTC, QCOM, ALTR