
No, gosh darn it, I said the NFC is near!

by Don Dingee on 03-22-2012 at 11:24 am

Getting a feature to take off in today’s smartphone market is tricky. It requires a combination of hardware support, OS support, app integration, and maybe most importantly carrier adoption. Ideas that seem ready technologically, like NFC, get stopped in their tracks by silly things like the William J. LePetomaine Thruway toll booth.

Continue reading “No, gosh darn it, I said the NFC is near!”


What’s Up with SNUG This Year in Santa Clara?

by Daniel Payne on 03-22-2012 at 11:04 am

Next week is a big deal because it’s when Synopsys has their annual user group meeting, SNUG in Santa Clara at the Convention Center from Monday through Wednesday. I’d love to hear if they have made any decisions on the new product roadmap after the Magma acquisition, although it’s probably too early to tell.

Traditionally the marketing folks are kept away from the podium, and it’s really about users talking about their design challenges and how they used one or more Synopsys tools in their design flow to get the job done.

If you haven’t registered yet online, no problem, just show up and register at the event. There’s no charge if you are a Synopsys customer.

Enjoy the free WiFi and now you can even give your feedback for each session using an online system instead of paper.

You can expect to be joined by about 2,000 other engineers at this SNUG in Silicon Valley.

Post a SNUG trip report HERE and qualify to win an iPad 2!

On Monday morning the welcome message is from John Busco, an NVIDIA engineer I follow online who is also the Technical Chair for SNUG Silicon Valley. Aart de Geus is the keynote speaker.

John Cornish from ARM is the keynote speaker on Tuesday morning.

Dr. Chenming Hu from UC Berkeley will be the keynote speaker on Wednesday.


3D-IC Testing – A 3D perspective to SoC

by Pawan Fangaria on 03-21-2012 at 9:30 am

In my last article I talked about the physical design aspects of 3D-IC. Its verification spans a wide spectrum of tests at both the hardware and software level, and the challenge goes well beyond that of an SoC on a single plane. Even a typical SoC comprising a processor core, memory controller, GPU, IP blocks and peripheral units is very difficult to test as a whole system. Cycle-accurate testing needs an RTL-level description of the whole system, which is not available at the very early stages of the SoC, so the usual practice is to start by testing the SoC on a Virtual Platform, where hardware/software co-verification can take place. At this stage, using SystemC TLM (Transaction Level Modeling), embedded software can be verified at the Untimed (UT) level, and architectural exploration and verification can be done at the Loosely Timed (LT) or Approximately Timed (AT) abstraction levels as needed. Most of the major architecture-level design issues need to be sorted out at this stage. The design then passes through verification at each stage of the design cycle until GDS. Nevertheless, provision for test access structures such as Built-In Self-Test (BIST) must be included in the chip to test for manufacturing defects.

Now consider a 3D-IC: it can have multiple processor cores, memory controllers, cache, high-speed peripherals and so on, spread across multiple planes. Each die and the whole stack need to be tested. The dies can carry IP or design components from different vendors and can be heterogeneous, so the testing effort grows enormously. One faulty die renders the whole stack useless. An individual die can be assumed tested by its vendor only if it is certified against a common test standard. Here are some of the testing complexities and methodologies that need attention:
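
The "one faulty die kills the stack" point can be made concrete with a back-of-the-envelope yield calculation (a sketch with illustrative numbers, not data from any real process):

```python
# Illustrative: why Known Good Die (KGD) testing matters for a 3D stack.
# Without pre-stack testing, the stack yield is the product of the
# individual die yields, so it collapses as the stack grows.

def stack_yield(die_yields):
    """Probability that every die in the stack is good."""
    y = 1.0
    for dy in die_yields:
        y *= dy
    return y

# Four dies, each with a respectable 95% yield, still lose
# almost one stack in five to a single bad die.
dies = [0.95] * 4
print(round(stack_yield(dies), 3))
```

This is exactly why certifying each die (and interposer, and TSV) as "known good" before stacking is worth the extra test cost.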

Testing in Virtual Platform – A Virtual Platform is very appropriate for testing the whole system together at a level of abstraction above RTL. For 3D-IC test, it needs to add the notion of design assembly across multiple planes, and the software must account for the delays contributed by TSVs (Through-Silicon Vias) and the interconnects between dies.

Die testing – It is important that each die is tested thoroughly so it can be certified as a KGD (Known Good Die). Testability must be built in by the vendor, which requires the test standards discussed below. Comprehensive die testing needs fault models more advanced than stuck-at, bridging and transition faults; Mentor, for example, has developed a user-defined fault model (UDFM) which can be cell-aware. The die should also remain accessible and testable after being packaged into the stack.

Interposer and TSV testing – Like the KGD, the interposer and TSVs also need to be certified as known good for robust inter-die connections and good yield. The die interconnect should be accessible and testable.

Package (stack) testing – This needs a test architecture that can transport test data and control signals through all the dies and interconnects in the stack. External IOs are typically located on the bottom die, so test signals for a given die must pass through the dies below it to reach the external IO. This calls for a standard, interoperable configuration of test circuitry on every die, since the dies in a stack can come from different vendors. Within the stack, testing can be done at the partial-stack as well as the full-stack level.
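
The "test signals pass through the dies below" idea can be sketched as a daisy chain of die-level test wrappers (an assumed, simplified model for illustration, not the actual IEEE 1838 architecture):

```python
# Toy model of stacked-die test access: external test IO sits at the
# bottom die, and to reach die k the serial test stream traverses the
# wrapper of every die below it, each offering a short bypass path.

class DieWrapper:
    def __init__(self, scan_len, bypass_len=1):
        self.scan_len = scan_len      # full scan chain when selected
        self.bypass_len = bypass_len  # single-bit bypass otherwise

def shift_cycles(stack, target):
    """Clock cycles to shift a pattern into the scan chain of die
    `target` (index 0 = bottom die), all other dies in bypass."""
    cycles = 0
    for i, die in enumerate(stack):
        cycles += die.scan_len if i == target else die.bypass_len
    return cycles

stack = [DieWrapper(1000), DieWrapper(800), DieWrapper(1200)]
print(shift_cycles(stack, 2))  # 1 + 1 + 1200 = 1202
```

The bypass register is what keeps the test-time overhead of the lower dies small, and it only works if every vendor's die implements a compatible wrapper, which is precisely the interoperability argument for a standard.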



IEEE standards for test
IEEE 1149 describes boundary-scan test, a mature DFT technique used at the board level that can be applied through the bottom die. IEEE 1500 was developed for test re-use and embedded core test; it has serial as well as parallel test access architectures and a rich instruction set for testing cores and associated circuitry, and it uses the Core Test Language (CTL) to describe test data and wrappers. IEEE 1687 describes a standard for access and control of instruments embedded in the chip for debug purposes. To address the issues of heterogeneous dies from multiple vendors, IEEE 1838 has been proposed; it is die-centric and leverages features from IEEE 1149, IEEE 1500 and IEEE 1687.

Standardizing the test access architecture helps IP, die and stack providers document and deliver the appropriate level of information on test access architecture and data, and lets EDA vendors provide tools to generate and use it. This is especially necessary for 3D-IC, as there are multiple parties involved in its making.

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email: Pawan_fangaria@yahoo.com


According to Cadence, PCI Express Gen-3 will be the mainstream PCIe solution as soon as 2012

by Eric Esteve on 03-21-2012 at 9:10 am

Cadence officially launched its PCI Express 3.0 controller IP about a year ago and demonstrated it at the June 2011 PCI-SIG Developers Conference, where the Cadence design IP for PCI Express 3.0, implemented as a high-performance, dual-mode, 128-bit data-path, x8 PCIe 3.0 controller configuration, was shown running in a customer's ASIC, see here.

The associated Verification IP (VIP) consists of the Compliance Management System (CMS), which provides interactive, graphical analysis of coverage results, and PureSuite, which provides the associated PCIe test cases. It clearly demonstrates that the Denali acquisition has greatly helped Cadence position itself in the advanced PCIe IP market, with both design IP (the controller) and VIP.

Maybe some history will help. Back in 2006, Denali was known for its VIP products for interface functions like PCIe, USB and SATA, when it first launched a PCI Express (Gen-1 at the time) controller IP. It was quite surprising, especially for its former partners, who suddenly became competitors! Nevertheless, Denali found a place in the market, positioning on the high-end (and expensive) side, supporting Root Port and soon Single Root I/O Virtualization (SR-IOV), a solution targeting the PC server market, while Synopsys and PLDA were positioned in the mainstream PCIe IP market. Then the PCIe 2.0 specification was issued in 2007, and Denali was still in the race. With the launch of this PCIe 3.0 solution, supporting SR-IOV, Cadence initially targeted high-end, advanced applications like storage, supercomputing, enterprise and networking. But with Intel's release of the PCIe Gen-3-ready Z68 chipset in May 2011, and the launch by motherboard manufacturers (ASUS, ASRock...) of products based on it, the 3rd generation of PCI Express is now available in products sold on the mainstream market.

Initially, Denali positioned its PCIe controller IP on the high-end, high-price market only, leaving the mainstream to the competition. With PCIe Gen-3 becoming the mainstream solution, Cadence should increase its share of the mainstream PCIe controller IP market and consolidate its share of the high end, thanks to SR-IOV support. What is SR-IOV? Briefly, it is a specification that allows a single PCIe device to appear as multiple separate physical PCIe devices. PCI-SIG created and maintains the SR-IOV specification with the goal of having a standard to promote interoperability. One milestone Cadence has achieved for its PCI Express Gen-3 design IP is proving SR-IOV interoperability in silicon against an Intel chipset.
Why is it important? The two main advantages of an SR-IOV PCIe device are:

  • It allows multiple OSes to have their own private view of the PCIe device
  • It helps improve I/O performance by reducing hypervisor latency
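
The first bullet can be sketched as a toy model (class and names are illustrative, not any real driver API): one physical function (PF) exposes several virtual functions (VFs), and each guest OS is handed its own VF as if it were a dedicated device.

```python
# Toy model of the SR-IOV idea: one physical PCIe device presents a PF
# plus N VFs; the hypervisor assigns each guest OS a private VF.

class SRIOVDevice:
    def __init__(self, num_vfs):
        self.pf = "PF0"
        self.vfs = [f"VF{i}" for i in range(num_vfs)]

    def assign(self, guests):
        """Give each guest OS a private VF (one VF per guest)."""
        if len(guests) > len(self.vfs):
            raise ValueError("not enough VFs for all guests")
        return dict(zip(guests, self.vfs))

nic = SRIOVDevice(num_vfs=4)
view = nic.assign(["guest-a", "guest-b"])
print(view)  # {'guest-a': 'VF0', 'guest-b': 'VF1'}
```

Because each guest talks to its VF directly, I/O no longer funnels through the hypervisor on every transaction, which is where the latency reduction in the second bullet comes from.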

You will find more information about SR-IOV, as well as a nice video showing how Cadence customers have used PCI Express Gen-3 SR-IOV to solve their design problems, here.

The PCIe core includes these features:

Single-Root I/O Virtualization
The PCIe core provides a Gen-3, 16-lane architecture with full support for the latest Address Translation Service (ATS) and Single-Root I/O Virtualization (SR-IOV) specifications, including Internal Error Reporting, ID-Based Ordering, TLP Processing Hints (TPH), Optimized Buffer Flush/Fill (OBFF), Atomic Operations, Re-sizable BAR, Extended Tag Enable, Dynamic Power Allocation (DPA), and Latency Tolerance Reporting (LTR). SR-IOV is an optional capability that can be used with PCIe 1.1, 2.0, and 3.0 configurations.

Dual-mode operation

Each instance of the core can be configured as an Endpoint (EP) or Root Complex (RC).

Power management

The core supports PCIe link power states L0, L0s and L1 with only the main power. With auxiliary power, it can support L2 and L3 states.

Interrupt support

The core supports all three options for implementing interrupts in a PCIe device: Legacy, MSI and MSI-X modes. In Legacy mode, it communicates the assertion and de-assertion of interrupt conditions on the link using Assert and De-assert messages. In MSI mode, the core signals interrupts by sending MSI messages when interrupt conditions occur, supporting up to 32 interrupt vectors per function with per-vector masking. Finally, in MSI-X mode, the controller supports up to 2,048 distinct interrupt vectors per function, again with per-vector masking.
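
The per-vector masking behavior described above can be sketched as follows (a toy model of the semantics, not the core's actual register interface):

```python
# Sketch of MSI/MSI-X-style per-vector masking: a vector produces an
# interrupt message only if its condition is pending AND its individual
# mask bit is clear.

class InterruptBlock:
    def __init__(self, num_vectors):
        self.pending = [False] * num_vectors
        self.mask = [False] * num_vectors

    def raise_irq(self, v):
        self.pending[v] = True

    def deliverable(self):
        """Vectors that would generate a message right now."""
        return [v for v, p in enumerate(self.pending)
                if p and not self.mask[v]]

irq = InterruptBlock(num_vectors=32)   # MSI: up to 32 vectors/function
irq.raise_irq(3)
irq.raise_irq(7)
irq.mask[7] = True                     # per-vector masking
print(irq.deliverable())  # [3]
```

Masked-but-pending vectors are typically delivered later when software clears the mask bit, which is what makes per-vector masking useful for deferring interrupt handling.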

Credit Management

The core performs all the link-layer credit management functions defined in the PCIe specifications. All credit parameters are configurable.

Configurable Flow-Control Updates

The core allows flow control updates from its receive side to be scheduled in a flexible manner, thus enabling the user to make tradeoffs between credit update frequency and its bandwidth overhead. Configurable registers control the scheduling of flow-control update DLLPs.
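
The interaction between the two sections above, credit management and scheduled flow-control updates, can be sketched as a toy model (illustrative thresholds and credit units, not the core's actual registers or DLLP format):

```python
# Sketch of PCIe-style link-layer credit flow control: the transmitter
# may only send a TLP if the receiver has advertised enough credits,
# and the receiver returns credits (via UpdateFC DLLPs) once enough
# buffer space has drained. The update threshold stands in for the
# configurable update scheduling.

class CreditLink:
    def __init__(self, credits, update_threshold):
        self.tx_credits = credits          # credits the TX believes it has
        self.consumed = 0                  # drained but not yet returned
        self.update_threshold = update_threshold

    def send(self, cost):
        if cost > self.tx_credits:
            return False                   # stall: not enough credits
        self.tx_credits -= cost
        self.consumed += cost
        return True

    def receiver_drain(self):
        """Receiver frees buffers; return credits once threshold met."""
        if self.consumed >= self.update_threshold:
            self.tx_credits += self.consumed   # UpdateFC DLLP
            self.consumed = 0

link = CreditLink(credits=8, update_threshold=4)
assert link.send(6)          # ok, 2 credits left
assert not link.send(4)      # stalls until an update arrives
link.receiver_drain()        # 6 >= threshold, credits returned
assert link.send(4)          # now succeeds
```

A low threshold returns credits quickly but spends more link bandwidth on update DLLPs; a high threshold does the opposite. That is the tradeoff the configurable registers expose.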

Replay Buffer

The controller IP incorporates fully configurable link-layer replay buffers for each link, designed for low latency and area. The core can maintain replay state for a configurable number of outstanding packets.
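
What a link-layer replay buffer does can be sketched in a few lines (a toy model of the mechanism, not the core's implementation; sizing and timer details are omitted):

```python
# Sketch of a link-layer replay buffer: transmitted TLPs are held,
# keyed by sequence number, until the far end ACKs them; a NAK replays
# everything still outstanding, oldest first.
from collections import OrderedDict

class ReplayBuffer:
    def __init__(self, depth):
        self.depth = depth
        self.outstanding = OrderedDict()   # seq -> packet
        self.next_seq = 0

    def transmit(self, pkt):
        if len(self.outstanding) >= self.depth:
            raise RuntimeError("replay buffer full: throttle the link")
        self.outstanding[self.next_seq] = pkt
        self.next_seq += 1

    def ack(self, seq):
        """ACK retires this sequence number and all earlier ones."""
        for s in list(self.outstanding):
            if s <= seq:
                del self.outstanding[s]

    def nak(self):
        """Replay all still-outstanding packets, oldest first."""
        return list(self.outstanding.values())

buf = ReplayBuffer(depth=4)
for p in ["A", "B", "C"]:
    buf.transmit(p)
buf.ack(0)            # "A" retired
print(buf.nak())      # ['B', 'C']
```

The "configurable number of outstanding packets" in the text corresponds to `depth` here: a deeper buffer tolerates a longer ACK round-trip before the transmitter must stall.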

Host Interface

The host-interface datapath is configurable as 32, 64, 128 or 256 bits wide, with either an AXI or Host Application Layer (HAL) interface.

Offering the PCIe 3.0 controller IP and VIP is a must, as Cadence is clearly and strongly positioned in Verification IP and well positioned in a few interface IPs essential to SoC design, like DDR3, DDR4, LPDDR2, LPDDR3, and PCIe. Several PCIe Gen-3 design-ins, like those at PMC-Sierra, Cray Research and Marvell, have demonstrated that Cadence's commitment to the PCI Express market has led to success. Cadence has enhanced the memory controller products inherited from Denali by moving to a hard PHY solution; will it consolidate its penetration of the PCIe IP market the same way, by adding PHY IP to its portfolio?

By Eric Esteve from IPNEST


Apple’s Leveling of the Semiconductor Industry

by Ed McKernan on 03-20-2012 at 2:03 pm

Holman Jenkins, the distinguished writer of business trends for the Wall St. Journal, recently penned an article entitled "The End of Apple's Roach Motel?" (Personally, I think that since Apple is in California, he should have used Hotel California in his title), questioning the ability of the iPhone and iPad maker to continue selling devices at high ASPs while competitors build out similarly functioning cloud-based ecosystems. The article makes a valid point that a $600 iPhone is likely to come down in price, but neglects to highlight the cost offsets that are about to come from Apple's expanding supply chain, including the raft of semiconductor companies that have been operating with 50%+ gross margins for what seems like an eternity.

The fabless model as it evolved from the 1990s to today was based on fast-turn semiconductor design married to a CapEx model that spread risk (read: cost) across the industry. Within the PC space there was no end to the demand for processor performance, so Intel was not that interested in participating beyond the processor and the chipset. The chipset was the gate that locked hundreds of Taiwanese motherboard and notebook designs into the Intel platform, completely incompatible with presumed competitors like AMD, Cyrix and Transmeta.

At the turn of this century, WiFi opened up a new avenue for fast-moving fabless vendors like Broadcom to enter the PC space with significant value, but outside Intel's "Desktop & Notebook designated Fence." With WiFi came greater portability, smaller form factors and the need to consider battery life, not MHz, as the key defining factor. Intel responded initially with Centrino but then flagged, until the smartphone and tablet markets kicked them out of bed.

The surprise Apple unleashed, though, was a vertically integrated software and semiconductor model that mirrored the PC duopoly of Microsoft and Intel; in 2H 2012 its revenue run rate will be 1.5 times as large as that duopoly's, versus roughly one fourth five years ago. With low operating overhead (especially R&D), Apple has the flexibility to trickle prices down as needed while also expanding the number of underlying platforms that reside within those nice iPad and iPhone enclosures. And here is where it will probably invest a good portion of its R&D efforts over the coming year to stay ahead of competitors.

Nothing says that the 50%+ gross margins typical of NVIDIA, Broadcom, Marvell, TI, AMD/ATI graphics and the rest of the mobile supply chain are sacrosanct. In fact, I would argue that as Apple increases market share and splits its business, these gross margins will drop into the mid 30s. Given that the fabless companies have R&D budgets of 25% of sales and higher, this is going to cause some severe pain and likely cutbacks. Intel, by contrast, invests 16% of sales in R&D, which includes process development.

Value migration is on a warpath and will come in multiple waves. The first wave will be head to head competitor price discounts. Following this is the likely scenario where Apple funds Fab CapEx to reduce the margins of the foundry as well. With demand risk removed, it is no longer viable for TSMC or Samsung or Global to charge 50% gross margins on a wafer. In effect Apple will squeeze the margins at both ends.

One could argue for the moment that Apple is still a relatively small piece of the PC and smartphone markets, therefore the collapse of margins is overstated. All true; however, on the other side of the vise is Intel, who by my estimation is willing to sell Atom processors at scorched-earth pricing based on a depreciated 32nm fab today, and at half the die size next year. The true value of Mobile Tsunami platforms, if one hasn't guessed yet, is in the kitchen-sink communications chip (4G/3G, WiFi, Bluetooth etc.) and in the processor that furthest extends the life of the battery. People value being connected – All Day.

Apple’s leveling of the semiconductor supply chain is coming, with an assist from Intel. The Fabless model is going to come under some extreme duress as it applies to leading edge process technology where many of the wafers are targeting Smartphones, Tablets and Ultrabook PCs. Expect Qualcomm and Apple to take the lead in financing leading edge fabs in partnerships with either TSMC or Samsung (and perhaps Global Foundries).

FULL DISCLOSURE: I am long AAPL, INTC, QCOM and ALTR


A Chat with John Stabenow

by Daniel Payne on 03-20-2012 at 10:57 am

John Stabenow is the marketing group director at Cadence for the Virtuoso products. It has been a while since we last talked, so we met for lunch on Friday at McMenamins in West Linn, halfway between where we each live in Oregon. I had blogged about interoperability at DAC 2010 and we had a public exchange at Chip Design Magazine. This time we had a wide-ranging discussion on IC topics.

Q: What’s new in IC EDA these days?
A: One new thing is from Orora, a start-up in Seattle: they have an analog model generation tool called Arana that looks interesting. You provide it a transistor-level netlist and it produces an Analog Behavioral Model (ABM). ST is an early user; Dr. Richard Shi is the founder.

Q: What tools does Cadence offer for generating analog models?
A: We have something called Schematic Model Generator (SMG).

Q: Why are analog models so important?
A: There’s a great need to abstract analog behavior into higher-level models that can be quickly and accurately simulated, because you cannot SPICE everything you want in a short enough time.

Q: What did you learn from the Neolinear acquisition since 2004?
A: That analog synthesis and analog layout synthesis is very difficult.

Q: What benefit will Synopsys have after acquiring Magma?
A: They can recover some of the simulation customers lost to FineSIM and P&R tools. They also need to decide which IC layout tools survive: IC Designer (a Virtuoso clone) or Magma Titan? Magma probably has more IC layout design customers than Synopsys does.

Q: As the technology nodes get smaller, what issues are your IC customers experiencing?
A: Layout Dependent Effects (LDE) is a growing area of concern in IC design. Transistor performance now depends on what is placed next to each MOS device. You need more than just DRC rules, you need some automation that can be used by a circuit designer first. We will never eliminate the IC layout designer job.

Q: How is your relationship with ClioSoft?
A: They are a great Hardware Configuration Management (HCM) partner; their product is well liked and highly regarded by customers, and it's very complementary with Virtuoso. IC Manage and Methodics are other HCM vendors. Data management for analog asks the basic question, "What changed on my schematic or layout?" Digital data management is quite different, because digital teams mostly treat the task as software source-code management.

Q: Does Cadence create analog IP for sale?
A: Yes, we have an analog IP design group in Columbia, MD that provides that service. They use Virtuoso tools and tell us unfiltered what they think. Their customers don’t reveal their identities.

Q: What is your take on Carl Icahn raiding Mentor Graphics?
A: My personal opinion is that it would be a mistake to break Mentor up into pieces.

Q: Is EDA360 still alive now that John Bruggeman left Cadence?
A: We’ve organized our product marketing people back into the business units now, instead of having them centralized. I wish John all the best. CDNLive is not promoting the EDA360 banner front and center.

Q: What were some of the highlights of CDNLive last week?
A: The GLOBALFOUNDRIES 28nm reference flow was a highlight. Freescale did a paper on LDE. Other customers presenting on the IC side include: IBM, Maxim, LSI and TSMC. ADI had a paper on using Circuit Prospector for design reuse and they showed how design constraints could be automated in just seconds. Orora has a paper on the second day showing how an AMS design optimization was reduced from 6 days to just 6 minutes.

Summary

I learn so much when I talk to a seasoned EDA executive like John Stabenow, and look forward to blogging more about how Cadence works with foundries on advanced nodes like 20nm to ensure that the tools, PDKs and methodology are in place to create successful IC designs.


GLOBALFOUNDRIES Dresden Fab 1

by Daniel Nenni on 03-18-2012 at 6:00 pm

Even though my Dresden trip was fraught with fail points it went off without a hitch. Flying over was easy, I connected through London Heathrow, flying back I connected through Frankfurt. The last time I connected through Frankfurt was right after the 9/11 attacks so I had a bit of deja vu. I was in Munich, Heathrow was closed, I was routed through Frankfurt and experienced the most frightening security procedures ever. TSA procedures today are nothing compared to Frankfurt during 9/11.

Day 1 was a visit to Fab 1. I got the VIP treatment, which is very flattering because really, I'm just a regular guy trying to put four kids through college. To be completely honest, I'm not a fan of the GFI marketing pitch where they expound on the GLOBAL part of GLOBALFOUNDRIES: having fabs in different countries equals less risk for the customer. Even suggesting that a natural disaster could take out the Taiwan fabs is a horrible mental image, especially if they mention the tsunami in Japan. I've experienced several Taiwan earthquakes, including the big one in September 1999 and again in July 2009. Thousands of people were killed and injured, yet the fabs are still there. Last year my earthquake karma was better: my March trip ended early, so I was in the air for the Thursday 6.9 earthquake, and my next trip started late with a Monday evening arrival, so again I was in the air for the Monday 6.5 quake. As I blogged before, my Taiwan friends joke that I bring California earthquakes to Taiwan. One of my better blogs on the subject, "TSMC Earthquake Damage Redo," might be worth a read.

Rather than stay in a western hotel, I stayed in a renovated mansion near the city center. This way I was able to walk downtown, get a feeling for the Dresden culture and taste the local cuisine. It was too cold and rainy for the beer gardens, but I did get out for beer and pretzels. Even though it was cloudy and dreary, it was still a very nice visit with very friendly people and a very tourist-safe environment. The only disappointment is that I did not see one Porsche in Dresden. The taxis were all Mercedes and there were lots of VWs, but not a Porsche in sight.

After the marketing presentation and a nice Dresden lunch I got a tour of the fab. I opted out of the clean-room tour this time and spent an hour or so with a materials science guy. These guys are physicists and honest to a fault, so I trust them implicitly. I also look at the equipment, and when I see state-of-the-art electron and ion-beam microscopes costing many millions of dollars I know these people are in it to win it. One thing I did not know is how much semiconductor history Dresden has. My tour guide was one of the people who opened the fab in the late 1990s, starting at .18 micron. That is a very deep pool of experience. It was also good to see university interns at the microscopes, young and old eyes looking together.

After the fab tour I went to the Dresden Military History Museum. It wasn't as depressing as the Dachau Concentration Camp tour I did in Munich, but it was very blunt about the atrocities of World War II. They gave me an iPod Touch and I roamed the halls for a couple of hours. The thorough bombing of Dresden by the Allies remains one of the more controversial actions, and it was described in detail from both points of view. Very interesting. There were also cars and military vehicles of the period, which I'm really into. Definitely time well spent.

Bottom line: GLOBALFOUNDRIES has done an incredible thing in combining the Chartered Semiconductor Fabs with the AMD fabs, integrating the IBM process technology and building a world class pure-play foundry. I think as long as they can control the public relations and marketing people and keep expectations realistic, GFI will be a major player in the semiconductor ecosystem for the long term. As an internationally recognized industry blogger that is my heartfelt opinion, believe it.


EDPS Monterey

by Paul McLellan on 03-17-2012 at 8:00 am

Every year Monterey hosts a relatively small conference that looks at the design process: EDPS, the Electronic Design Process Symposium. I gave a keynote there a couple of years ago, but you don't have to listen to me this time. The keynotes are from:

  • 1st day: Misha Buric, CTO of Altera, talking about SoC FPGAs and other things
  • Dinner: Jim Hogan, himself, talking about SoC Realization
  • 2nd day: Riko Radojcic, director of engineering at Qualcomm, talking about 3D IC roadmap

I highly recommend this conference. It covers a lot of different issues. The second day, in particular, covers a lot of information on 3D ICs, which is clearly a hot topic. Silicon interposer ICs and memory-on-processor have clearly arrived, and true 3D ICs will be coming, especially if EUV isn't ready for full production by 14nm.

The first day is everything that isn’t 3D. After Misha’s keynote are the top 5 problems of EDA:

  • Sri Granta of Broadcom on DFT at the RTL level
  • Frank Schirrmeister of Cadence on embedded software
  • Tom Spyrou of AMD on parallelized tools
  • Sangeeta Aggrwal of Synopsys on a mysterious unannounced topic
  • err…isn’t that just 4 problems

After lunch, there is a panel session on EDA in the cloud with:

  • Hans Spanjaart of Altera (moderator)
  • James Colgan of Xuropa on the CADless semiconductor company
  • Don MacMillen of Nimbic on electromagnetic simulation in the cloud
  • Kiron Pai of Intel on improving productivity in the cloud
  • Azadeh Davoudi of University of Wisconsin on highly distributed global routing
  • Naresh Seghal of Intel on optimizing a cloud

Gary Smith reviews the new ITRS power model, which took longer than expected to produce but was finally announced in January this year.

Ian Ferguson of ARM on energy-efficient servers in the data center (let me guess, ARM ones).

Qi Wang of Cadence on whether the power problem is solved (I'd say not yet).

Grant Martin of Tensilica on another mysterious unannounced topic, but if I had to guess it would be something to do with offloading the control microprocessor (usually ARM) with specialized VLIW processors optimized for the task at hand.

Then off to the wharf for dinner and Hogan.

Next day kicks off with Riko's keynote and then a series of 3D IC design topics:

  • Stephen Pateras of Mentor on BIST for 3D ICs
  • Arif Rahman of Altera on FPGA design challenges, presumably 3D ones
  • Samta Bansal of Cadence on the Wide-IO standard for putting memory stacks on processors

During lunch there is a 3D IC panel moderated by Steve Leibson:

  • Herb Reiter
  • Samta Bansal of Cadence
  • Dusan Petranovic of Mentor
  • Deepak Sekar of Monolithic 3D
  • Steve Smith of Synopsys

And with that we wrap up and most of us drive back north.


Double Patterning and Then The End of Lithography
by Paul McLellan on 03-15-2012 at 8:00 am

I went to a couple more sessions at the Common Platform Technology Forum today, on 20nm double patterning and what we will do at 14nm. Basically, this is the end of planar transistors and the end of optical lithography. One session was by IBM scientists about process, and one by Michael White of Mentor about double patterning. These two subjects turn out to be closely related.

Double patterning will be required at 20nm because we are so far below the threshold for using 193nm light to print at the level of detail that single patterning requires. And everyone considers betting on EUV being ready for 14nm very risky, so in the early days of 14nm we will use double patterning; then, if EUV works out, we can switch to it and go back to single patterning.

The scary thing about EUV is just how many things remain to be worked out. The source, which is a plasma, is currently 1-2 orders of magnitude dimmer than required. We don't yet have a good resist that responds to EUV. The masks, which are reflective, have defect issues, since we can't cover them with a pellicle as we can a transmission mask. The masks are multi-layer films, and even the blanks won't be defect-free. The scariest thing was a comment by Lars Liebmann of IBM: "I worked on X-ray lithography for years and EUV is not as far along as X-ray was when we finally discovered it wasn't going to work." We really don't know if we can make EUV work, and certainly not by the time it is needed for 14nm. A possible alternative, but equally sketchy in practice, is massively parallel e-beam.

The FinFET transistors will be much lower power. As the voltage drops, delay doesn't go up nearly as much as with a planar transistor, since FinFETs turn off so much more effectively and quickly. This means the power supply voltage (squared in the dynamic power equation) can be lower for a given performance.
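
The "squared in the dynamic power equation" remark is worth putting in numbers. Dynamic power is P = a·C·V²·f, so holding performance while dropping the supply saves power quadratically (the values below are purely illustrative, not process data):

```python
# Illustrative: quadratic V-dependence of dynamic (switching) power.
# P = activity * C_eff * Vdd^2 * f

def dynamic_power(c_eff, vdd, freq, activity=1.0):
    return activity * c_eff * vdd**2 * freq

p_planar = dynamic_power(c_eff=1e-9, vdd=1.0, freq=1e9)   # 1.0 W
p_finfet = dynamic_power(c_eff=1e-9, vdd=0.8, freq=1e9)   # 0.64 W
print(round(p_finfet / p_planar, 2))  # 0.64 -> ~36% dynamic-power saving
```

A 20% supply reduction at the same frequency cuts dynamic power by 36%, which is why a transistor that can run at lower Vdd without a delay penalty is such a big deal.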

Double patterning means that half the polygons on the chip are on one mask and half on the other. It is not possible to simply expose the wafer to first one mask and then the other; there need to be etch steps in between, and then a new photoresist is exposed to the second mask. Since we really can't live with just horizontal or just vertical metal on M1, that layer will need to be triple patterned with three masks.

Michael from Mentor started off in general terms, pointing out that at every node the designer must address more and more manufacturability concerns; 20nm and 14nm are just more of the same. But double patterning does seem to be a major change, of course.

One challenge is how to reflect errors back to the user so that they can be fixed. The problem occurs when three patterns are too close to each other, so they cannot all be put on separate masks. There are two solutions: redraw at least part of the layout, or split one of the polygons into two (cut and stitch) so that part goes on one mask and part on the other.
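
The "three patterns too close" error is the classic odd-cycle case when mask decomposition is viewed as graph 2-coloring, a standard way of framing the problem (the sketch below is an illustration of that framing, not any vendor's decomposition engine):

```python
# Double-patterning decomposition as graph 2-coloring: polygons are
# nodes, an edge joins any two closer than the single-exposure spacing,
# and the two masks are the two colors. An odd cycle (e.g. three
# mutually close shapes) has no 2-coloring and must be reported back
# to the designer to redraw or stitch.

def assign_masks(polys, conflicts):
    """BFS 2-coloring; returns {poly: mask} or None on an odd cycle."""
    color = {}
    for start in polys:
        if start in color:
            continue
        color[start] = 0
        queue = [start]
        while queue:
            u = queue.pop()
            for v in conflicts.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None        # odd cycle: undecomposable as-is
    return color

# Two shapes too close: fine, they split across the two masks.
print(assign_masks(["a", "b"], {"a": ["b"], "b": ["a"]}))
# Three mutually close shapes: a triangle is an odd cycle -> conflict.
print(assign_masks(["a", "b", "c"],
                   {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}))
```

The two fixes in the text map directly onto the graph: redrawing the layout removes a conflict edge, while cut-and-stitch splits one node into two so the cycle becomes even.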

At 20nm we will also need smart cell-based fill rather than the old polygonal dummy fill, since the fill needs to be double-pattern aware.

Mentor's own place and route tool, Olympus, has patterning-aware placement that avoids putting cells too close together and creating errors. I presume Cadence and Synopsys have, or will have, similar placement.

Mentor white papers on double patterning and other challenges are here.


No Semiconductor Design Cloud Strategy? Really?
by Andrea Casotto on 03-14-2012 at 6:00 pm


I ask my customers about their cloud strategy, and they all tell me "none." The main reason is a red herring: "The legal department will never allow our IP outside our walls."

Security issues in the cloud are largely solved, as proven by the fact that banks have no problem using external clouds. Behind the curtain, the real reason for the lack of a push toward external clouds is the mismatch between the needs of engineering computing and the cloud offering.

Cloud providers tout the agility and elasticity of an external cloud, and how well it fits organizations with spiky workloads. This is not compelling to our most sophisticated customers: they constantly run a background load of random tests on their chips before, during, and even after tapeout. Plus, they have multiple chips in the pipeline, so the load on the computing resources is always sustained.

In the past decade, EDA has benefited greatly from the Linux revolution. Linux brought higher speed and lower cost. Cloud computing brings neither, at least not in engineering computing.

As technology progresses, it is possible that costs will go down and data-transfer latencies will be reduced. By then, the EDA licensing model may also have evolved. Today, as far as we know, the licensing model is another barrier to the adoption of cloud bursting, for it does one no good to deploy 1,000 new cores in an external cloud without 1,000 additional simulation licenses to go with them.
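
The licensing argument above reduces to a one-line bottleneck calculation (a sketch with illustrative numbers):

```python
# Illustrative: cloud bursting for EDA is gated by min(cores, licenses),
# so extra cores without matching simulation licenses buy nothing.

def concurrent_jobs(local_cores, burst_cores, licenses):
    return min(local_cores + burst_cores, licenses)

# 1,000 burst cores, but still only 500 licenses: throughput unchanged.
print(concurrent_jobs(local_cores=500, burst_cores=1000, licenses=500))
```

Only when the license count grows with the core count does bursting actually increase simulation throughput.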

This puts the big EDA vendors (SNPS, CDNS, MENT) in an advantageous position as providers of cloud computing services for our community, although such offerings will be slanted toward a single-vendor solution as opposed to a best-in-class approach.

From RTDA's point of view, as a provider of software to manage all computing resources, we remain neutral with respect to cloud computing. If it happens, whether as a cloud cluster that shares licenses with the main cluster or as a hybrid solution with shared data between the local cluster and the cloud machines, we have experimented with both.

For now, we keep our focus on improving our NetworkComputer scheduler, in order to provide the highest possible performance for processing our customers' workloads using all available licenses and all available computing resources.