
Design-to-Silicon Platform Workshops!
by Daniel Nenni on 07-17-2012 at 7:30 pm

Have you seen the latest design rule manuals? At 28nm and 20nm, design sign-off is no longer just DRC and LVS. These basic components of physical verification are being augmented by an expansive set of yield analysis and critical feature identification capabilities, as well as layout enhancement, printability, and performance validation. Total cycle time is on the rise due to larger and more complex designs, higher error counts, and more verification iterations, so we have some work to do here.

Learn how to leverage the superior performance and capacity of the Calibre design-to-silicon (D2S) platform, a comprehensive suite of tools designed to address the complex handoff between design and manufacturing. The Calibre D2S platform offers fast and reliable solutions to design rule checking (DRC), design for manufacturing (DFM), full-chip parasitic extraction (xRC), layout vs. schematic (LVS), silicon vs. layout, and electrical rule checking (ERC).

Target Audience:
IC Design Engineers who are serious about an in-depth evaluation of the Calibre Design-to-Silicon platform.

What you Will Learn:

  • Reduce turnaround time with advanced Calibre scaling algorithms and debugging capabilities that work directly within your design environment
  • Execute DFM functions and visualize results using Calibre Interactive and RVE
  • Understand the benefits of hierarchical vs. flat verification
  • Highlight DRC errors in a layout environment by using Calibre RVE
  • Learn the concepts of Waivers and hierarchical LVS
  • Identify and automatically repair planarity issues in low-density regions
  • Identify antennas and understand various repair methods
  • Use Calibre’s advanced Nanometer Silicon Modeling capabilities and understand advanced hierarchical parasitic extraction
  • Address manufacturability issues by using Calibre DFM tools that help analyze Critical Areas and features
  • Understand the importance of identifying LPC hotspots on advanced design nodes

Register Now:
  • Jul 19, 2012 | Fremont, CA | Register
  • Aug 16, 2012 | Fremont, CA | Register
  • Aug 23, 2012 | Irvine, CA | Register
  • Sep 20, 2012 | Fremont, CA | Register
  • Oct 18, 2012 | Fremont, CA | Register
  • Nov 15, 2012 | Fremont, CA | Register
  • Dec 13, 2012 | Fremont, CA | Register
  • Jan 17, 2013 | Fremont, CA | Register


How about this: attend the workshop and do a detailed write-up on SemiWiki, and I will let you drive my Porsche. This one has the new Porsche Doppelkupplungsgetriebe (PDK) transmission with the Sport Chrono Package. An unforgettable driving experience for sure. Porsche….There is no substitute.


3D Thermal Analysis
by Paul McLellan on 07-17-2012 at 11:32 am

Matt Elmore of ANSYS/Apache has an interesting blog post about thermal analysis in 3D integrated circuits. With both technical and economic challenges mounting at process nodes below 28nm, product groups are increasingly looking toward through-silicon via (TSV) based approaches as a way of keeping Moore’s law on track and delivering increasingly complex systems at acceptable cost.

There are lots of challenges with 3D ICs, from floorplanning to noise to power distribution to test. But one of the big ones is thermal analysis. Once you stack several die on top of each other, heat from the silicon in the center can really only be dissipated through the other die. The TSVs themselves, which are large copper plugs (well, large by semiconductor standards), are not just an electrical interconnect between adjacent die but also a thermal connection. Heat from the center is moved along the vertical axis, which with care can be a very good thing. The biggest area where care is needed is ensuring that hot spots on one die do not align with hot spots on the die above or below. The big risk is thermal runaway, where a temperature increase in turn increases current and power and so further increases the temperature. A chip can be completely destroyed by this.

Temperature affects performance, reliability, stress and leakage. So a full analysis of a 3D design is not straightforward since everything affects everything else. In particular, temperature affects performance and performance affects temperature.

A good analysis needs a model of how temperature affects other aspects of the design at micron resolution. In turn this needs to interact with models of the chip, package, and board to arrive at a sort of “thermal closure”.
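
To make “thermal closure” concrete, here is a minimal sketch of the kind of fixed-point iteration involved. The coefficients are invented purely for illustration; the real tools do this at micron resolution across chip, package, and board.

```python
# Minimal sketch of electro-thermal closure: power depends on temperature
# (leakage grows with T) and temperature depends on power (through a thermal
# resistance), so the two models are iterated to a fixed point.
# All coefficients below are invented for illustration only.

T_AMBIENT = 45.0      # deg C at the package boundary
R_THERMAL = 2.0       # deg C per watt, die to ambient (worse when stacked)
P_DYNAMIC = 3.0       # watts, roughly temperature-independent
P_LEAK_REF = 1.0      # watts of leakage at the reference temperature
T_REF = 25.0          # deg C reference point for the leakage model
LEAK_GROWTH = 0.03    # fractional leakage increase per deg C

def total_power(temp_c):
    """Dynamic power plus a leakage term that grows with temperature."""
    return P_DYNAMIC + P_LEAK_REF * (1.0 + LEAK_GROWTH * (temp_c - T_REF))

temp = T_AMBIENT
for step in range(50):
    power = total_power(temp)
    new_temp = T_AMBIENT + R_THERMAL * power
    if abs(new_temp - temp) < 0.01:   # converged: thermal closure
        print(f"closure at {new_temp:.1f} C, {power:.2f} W after {step} steps")
        break
    temp = new_temp
else:
    # If leakage growth or thermal resistance is high enough, the loop never
    # settles: that is the thermal-runaway regime described above.
    print("no convergence: thermal runaway")
```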

Read the blog posting here.


An Approach to 20nm IC Design
by Daniel Payne on 07-17-2012 at 10:10 am

Last month at DAC I learned how IBM, Cadence, ARM, GLOBALFOUNDRIES and Samsung approach the challenges of SoC design, EDA design and fabrication at the 20nm node. Today I followed up by reading a white paper on 20nm IC design challenges authored by Cadence, a welcome relief from the previous marketing mantra of EDA 360.

Here’s a quick overview of the challenges, the approaches to overcome them, and their impact.


Laker Analog Prototyping
by Paul McLellan on 07-16-2012 at 6:18 pm

Over the years many attempts have been made to increase the level of automation in analog design. Most of these have not been especially successful. Part of the reason was probably inadequate technology, but there is also an attitude that “real” analog designers draw polygons on the bare silicon. I think two things have changed. First, the technology for analog design, balancing automation with manual control, has improved a lot. Second, as more and more is integrated onto big SoCs in leading-edge processes, analog design has had to make a huge increase in productivity to keep up, and it is no longer possible to ignore automation.

Laker Analog Prototyping is a new generation of technology. It brings together in one integrated interactive environment:

  • layout prototyping
  • constraint management
  • logic view
  • schematic view
  • integrated placer
  • integrated router

The heart of analog automation is constraint management. Interactions between analog circuits are much more complex than between digital ones, so the designer has to be able to specify them quickly and easily. Even a single transistor can be laid out in many different ways depending on performance, the space available, other devices, and so on. For accuracy, many devices and signals have to be symmetric. The analog designer may also have a very good idea of which devices should be clustered together.

However, specifying all the constraints can be time-consuming and tiresome, so in addition to allowing the designer to specify constraints manually, Laker can extract constraints automatically: it recognizes many circuit types and applies sensible default constraints, automatically matching device and signal pairs, for example.
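
As an illustration of the general idea only (this is not Laker’s engine or API), automatic constraint extraction boils down to recognizing structural patterns in the netlist and attaching default constraints to them, for example pairing identical transistors that share a source net as a symmetric differential pair:

```python
# Toy illustration of deriving default symmetry constraints from netlist
# structure (not Laker's actual engine): two same-sized transistors sharing
# a source net look like a differential pair and get a symmetry constraint.

from itertools import combinations

devices = [
    # name,  type,   (w, l),       {terminal: net}
    ("M1", "nmos", (2.0, 0.18), {"g": "inp",  "s": "tail", "d": "outn"}),
    ("M2", "nmos", (2.0, 0.18), {"g": "inn",  "s": "tail", "d": "outp"}),
    ("M3", "pmos", (4.0, 0.18), {"g": "bias", "s": "vdd",  "d": "outn"}),
]

def default_symmetry_constraints(devs):
    """Pair up identical devices that share a source net."""
    constraints = []
    for (n1, t1, sz1, c1), (n2, t2, sz2, c2) in combinations(devs, 2):
        if t1 == t2 and sz1 == sz2 and c1["s"] == c2["s"]:
            constraints.append(("symmetry", n1, n2))
    return constraints

print(default_symmetry_constraints(devices))
# -> [('symmetry', 'M1', 'M2')]   M3 has no matching partner
```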


Another important capability is the ability to cross probe in the integrated environment, picking up the same device in the constraint view, the hierarchy tree, schematic and layout.

Under the hood, of course, is a placer that can generate multiple candidate placements based on the constraints, aspect ratio, wire length, and so on. The result manager makes it easy to see which trial layouts have good wire length, area, and so forth. Good candidates can be saved for further analysis. Routing is also integrated and allows routing estimation to be used during placement.
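
Again purely as a sketch with invented numbers (not the Laker result manager itself), ranking trial placements amounts to scoring each candidate on area, estimated wire length, and how close it comes to the requested aspect ratio:

```python
# Hypothetical ranking of trial placements: each candidate carries its
# bounding-box area, estimated wire length and aspect ratio, and we score
# them with user-chosen weights (lower score is better).

candidates = [
    # name,     area (um^2), wirelength (um), aspect ratio
    ("trial_A", 1200.0,      340.0,           1.0),
    ("trial_B", 1050.0,      410.0,           1.6),
    ("trial_C", 1150.0,      300.0,           0.9),
]

def score(area, wirelength, aspect, target_aspect=1.0,
          w_area=1.0, w_wire=2.0, w_aspect=100.0):
    """Weighted sum of area, wire length and aspect-ratio deviation."""
    return (w_area * area + w_wire * wirelength
            + w_aspect * abs(aspect - target_aspect))

for name, area, wl, ar in sorted(candidates, key=lambda c: score(*c[1:])):
    print(f"{name}: score = {score(area, wl, ar):.0f}")
```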

Of course, the goal of all this automation is twofold: better layouts and a faster, more predictable schedule. Doing everything by hand the old way is no longer a good route to either of those goals. The analog prototyping is built directly into the Laker schematic-driven layout environment, so the user still has full manual control and can decide the level of automation that is appropriate.



Qualcomm’s Moment to Re-Align Globally
by Ed McKernan on 07-16-2012 at 6:00 pm

Qualcomm has a nice problem to have: too much demand for its Snapdragon and 4G LTE baseband parts. How Qualcomm realigns its manufacturing strategy around this problem will determine whether or not it can break away from the ARM camp and go toe to toe with Intel. Last week Malcolm Penn claimed TSMC was too big to fail. Really? The world’s monetary authorities truly can backstop any bank they want by running the printing presses, but there are no backstops if, for example, an earthquake rips through Taiwan or trade barriers pop up around a world fighting off weak economic conditions. This and other reasons make it interesting that Qualcomm reportedly signed an agreement with Global Foundries last week for 28nm production.

They say that too much success can breed contempt. Lately we have watched as every major semiconductor firm (sans Intel) adopted ARM’s architecture for the Mobile Tsunami market. Then, with the exception of Samsung (and Apple), everyone headed to TSMC for their 28nm production. As TSMC tried to satisfy everyone and enlarge its family, it naturally had to short an existing customer: thus the Qualcomm dilemma. If Qualcomm stays with TSMC, it will need to pony up $5B for its own guaranteed fab and supply. Paul Jacobs, the CEO of Qualcomm, acknowledged this in a recent BusinessWeek article. However, owning a fab or investing in capex to guarantee your destiny results in stock P/E compression in Wall Street’s playbook, and Qualcomm wants to avoid losing its “darling” status.

Perhaps there is another model.

Years from now we will probably all recount how AMD tried to save itself by divesting manufacturing and process development. If nVidia and Broadcom could be successful by going fabless, then perhaps it would work for the x86 market? It hasn’t, and in the end it was the radical shift to one-handed mobility that doomed low cost x86. ARM receives much of the early low cost benefit.

With the exception of the truly low cost desktop or heavy laptop, what we are witnessing is that the server market and leading edge mobile PCs (ultrabooks, Apple MacBook Airs, MacBook Pros) and smartphones (soon tablets) require leading edge process technology and integration across the board. Intel leads in maximizing performance per watt with Ivy Bridge on its 22nm process, while Qualcomm has the edge in integrating the processor, graphics, and baseband on the 28nm Snapdragon chip.

Never the twain shall they meet and partner.

And so as AMD departed Global Foundries for the wellspring of TSMC, joining the rest of the herd, an opportunity arose for Qualcomm to gain control of its destiny at a discount. The twice-yearly screams of low yields and high wafer costs emanating from Nvidia seemed louder this year, and perhaps signaled that TSMC had gained the upper hand in the fabless business model.

Qualcomm is much larger and more profitable than Broadcom, Marvell, nVidia and the rest of the ARM camp and should receive special treatment. They are the only supplier that Apple can’t dismiss or squeeze in price negotiations. They are at the moment, however, vulnerable in their dependence on TSMC. With Global Foundries they can get virtually unlimited supply at what I am guessing is a significant discount on wafer pricing. In addition, they will get the full attention of Global Foundries’ fab resources while significantly delaying any need to write the big capex check. If by mid-2013 Qualcomm recovers to be the majority supplier of Snapdragon and 4G baseband chips at the expense of the rest of the mobile ARM camp, then it will be in a much stronger position to compete with Intel across the entire range of mobility. Consolidation is coming quickly to the Mobile Tsunami.

FULL DISCLOSURE: I am long AAPL, QCOM, INTC, ALTR


Extreme Ultra Violet (EUV)
by Paul McLellan on 07-15-2012 at 8:15 pm

EUV is the great hope for avoiding triple (and more) patterning if we have to stick with 193nm light. There were several presentations at Semicon about the status of EUV. Here I’ll discuss the issues with EUV lithography itself; a separate post will cover the issues with making masks for EUV.

It is probably worth being explicit and pointing out that the big advantage of EUV, if and when it works, is that it is a single-patterning technology (for the foreseeable future), with just one mask and one photolithography process per layer.

First up was Stephan Wurm, the director of litho for Sematech (it’s their 25th anniversary this year; seems like only yesterday…). He talked about where EUV is today. First, a little background on why EUV is so difficult. At these wavelengths the photons won’t go through lenses or even air, so we have to switch from refractive optics (lenses) to reflective optics (mirrors) and put everything in a vacuum. The masks have to be reflective too, but I’ll talk about that in the next blog. Obviously we need a different photoresist than we use for 193nm. And, most critically, we need a light source that generates EUV light (around 14nm wavelength, so by the time EUV is inserted into production it will already be close to the feature size, but we’ve gotten pretty good at making small features with long-wavelength light).

The status of the resist: we now have chemically amplified resist (CAR) with adequate resolution for a 22nm half pitch (22nm lines with 22nm spaces), and it seems to be OK down to 15nm. A big issue is sensitivity: it takes too much light to expose the resist, which reduces throughput. We have had sensitivity problems in the past, but they were less severe and were solved sooner. Line width roughness (LWR) continues to be a problem and will need to be addressed with non-lithographic cleanup. Contact holes also continue to be a problem. Stephan discussed mask blank defect and yield issues but, as I said, that comes in the next blog.

Next up was Hans Meiling from ASML (with wads of Intel money sticking out of his back pocket). They have already shipped six NXE-3100 pre-production tools to customers so they can start doing technology development, and they have seven NXE-3300 scanners being built.

You can’t get EUV out of a substance in its normal state; you need a plasma. So you don’t just plug in an EUV bulb like you do for visible light. You take little droplets of tin, zap them with a very high-powered CO2 laser, and get a brief flash of light. They have run sources like this for 5.5 hours continuously. It takes a power input of 30kW to get 30W of EUV light, so it is not the most efficient process (roughly 0.1%).

Contamination of the mirrors is one challenge: putting everything in a vacuum and using a metal plasma is how we deposit interconnect, and we certainly don’t want to coat the mirrors with tin. ASML found problems with the collector optics not staying clean after 10M pulses, which sounds like a lot until you realize it is about one week of operation in a fab running the machine continuously. They have since improved that by 3 or 4 times, but there is clearly still progress to be made.

Reflectivity of the mirrors is a problem. These are not the sort of mirrors you have in your bathroom; they are Mo/Si multilayers which form a Bragg reflector, reflecting light through multilayer interference. Even with really good mirrors, only about 70% of the EUV light is reflected from each mirror, and since the optics require 8 or more mirrors to focus the light first on the mask and then on the wafer, very little of the light you start with (maybe 4%) ends up hitting the photoresist. Some of these mirrors are grazing-incidence mirrors, which bend the light along their length, a bit like a pinball machine curving the path of the ball, and can be used to focus the beam.
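
A quick back-of-envelope check of those numbers, assuming the ~70% per-mirror reflectivity quoted above and an 8-10 mirror optical path:

```python
# Rough optical budget: if each Mo/Si mirror reflects ~70% of the EUV light,
# an 8-10 mirror path passes only a few percent of the source power.
reflectivity = 0.70
for mirrors in (8, 9, 10):
    transmitted = reflectivity ** mirrors
    print(f"{mirrors} mirrors: {transmitted:.1%} of the light reaches the wafer")
# 8 mirrors: 5.8%, 9 mirrors: 4.0%, 10 mirrors: 2.8% -- consistent with the
# "maybe 4%" figure, and why source power matters so much.
```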

Currently they are managing to get 5-7W and have demonstrated up to 30W. For high throughput the source needs to be 200W, so this still looks out of reach of just tweaking the current technology.

The light source power issue is the biggest issue in building a high-volume EUV stepper. Intel is betting that a few billion dollars and ASML will solve it.


How has 20nm Changed the Semiconductor Ecosystem?
by Daniel Nenni on 07-15-2012 at 7:30 pm


What does mango beer have to do with semiconductor design and manufacturing? At a table of beer drinkers from around the world, I would never have thought a fruity beer would pass a taste test, not even close. As it turns out, the mango beer is very good! The same goes for 20nm planar devices. “Will not work,” “will not yield,” “will not scale”: as it turns out, 20nm is very good! The leading edge fabless people who were scratching their heads six months ago are now scheduling 20nm tape-outs for Q1 2013. Crowdsourcing wins again!

The 20nm process node represents a turning point for the electronics industry. While it brings tremendous power, performance and area advantages, it also comes with new challenges in such areas as lithography, variability, and complexity. The good news is that these become manageable challenges with 20nm-aware EDA tools when they are used within end-to-end, integrated design flows based on a “prevent, analyze, and optimize” methodology.

I agree with this statement 100%. It comes from a Cadence white paper, A Call to Action: How 20nm Will Change IC Design. 20nm was definitely a turning point for the semiconductor ecosystem. Now that the technical wrinkles have been ironed out, let’s look at how the fabless business model has evolved.

“There is no doubt we are at a crossroads at the most advanced process technology nodes. In order to take positive steps forward, significant monetary and collaborative investments and resources are required from both the manufacturing and design sides of the equation,” said Ana Hunter, vice president of Samsung’s North American foundry services.

I spoke with Ana before I left for Taiwan. We are in agreement: The industry is at an inflection point and the business model is changing. A simulated IDM environment is required for fabless semiconductor companies to be competitive at the advanced process nodes, absolutely. Check out the new Samsung “Best of Both Worlds: IDM and Pure-Play Foundry” brochure.

Before, during, and after DAC 2012 I asked engineers from the top fabless semiconductor companies what has changed in how they work with the foundries on new process technologies. I also asked the foundries. The IDM-like answers did not surprise me, since working with EDA and IP companies on new process nodes is what I do during the day. They may, however, surprise others who are on the sidelines and believe the latest Intel PR nonsense.

UMC actually pioneered this simulated IDM environment with Xilinx, from 0.25 micron to 40nm. Xilinx employees literally occupied an entire floor of UMC HQ for more than a decade and acted as the dutiful wife delivering many new processes. After the Xilinx 40nm divorce, UMC is no longer monogamous and now has multiple process development mistresses, including TI, Qualcomm, and IBM.

TSMC took a different approach which many people overlook. The early days of collaboration started with Reference Flow 1.0, which is now version 12.0, with many new sub-flows to be rolled out at TSMC OIP in October. This collaboration included both established and emerging semiconductor and EDA companies. Next came the IP effort, with TSMC developing physical IP for reference and production at zero cost to customers. Commercial IP was next, with the TSMC “silicon proven” program: IP companies big and small completed an exhaustive qualification program to make it into the TSMC IP catalog. Then came the TSMC “Early Access Program”, where a select group of customers and partners were included in process development activities. Correct me if I’m wrong, but I believe this started at 90nm. The qualification process for Early Access was also daunting, but clearly it was critical to the evolution of the fabless semiconductor ecosystem. The result is the TSMC DTP (Design Technology Platform) division, which employs hundreds of people and has spent hundreds of millions of dollars (my guess) building the industry-leading “simulated IDM” platform you see today!

That brings us to 40nm. At 28nm, and even more so at 20nm, the fabless people have taken up residence in Hsinchu and the foundry people have “Early Access” to the top fabless companies. Bottom line: you will be hard pressed to differentiate between a Qualcomm, Broadcom, Nvidia, or Xilinx and a modern-day IDM, except of course for the actual ownership of the manufacturing equipment. And let’s not forget, the now-fabless TI, AMD, Fujitsu, LSI Logic and others used to be IDMs, right?

Thank you again UMC and TSMC for leading the way!


Scoreboards and Results Predictors in UVM
by Daniel Nenni on 07-15-2012 at 10:56 am

If verification is the art of determining that your design works correctly under all specified conditions, then it is imperative that we are able to create an environment that can tell you if this is truly the case.

Scoreboards are verification components that determine whether the DUT is working correctly, including ensuring that the DUT properly handles all stimuli it receives. Predictors are components that implement a “golden” model of all or part of the DUT and generate the expected response against which the scoreboard can compare the actual response of the DUT. This online webinar will outline the proper architecture of scoreboards and predictors in UVM and how they relate to coverage.
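
The actual components are SystemVerilog UVM classes, but the division of labor is easy to sketch in a language-neutral way. The snippet below (Python, with a made-up saturating-adder DUT purely for illustration) shows the pattern: the predictor computes the expected response for each stimulus item and the scoreboard compares it against what the DUT actually produced.

```python
# Language-neutral sketch of the predictor/scoreboard split (the real
# components are SystemVerilog UVM classes; this only illustrates the idea).

class Predictor:
    """Golden model: turns a stimulus item into the expected DUT response."""
    def predict(self, stimulus):
        # Hypothetical DUT behaviour: an adder that saturates at 255.
        return min(stimulus["a"] + stimulus["b"], 255)

class Scoreboard:
    """Compares actual DUT responses against the predictor's expectations."""
    def __init__(self, predictor):
        self.predictor = predictor
        self.mismatches = 0

    def check(self, stimulus, actual):
        expected = self.predictor.predict(stimulus)
        if actual != expected:
            self.mismatches += 1
            print(f"MISMATCH: {stimulus} -> got {actual}, expected {expected}")

sb = Scoreboard(Predictor())
sb.check({"a": 3, "b": 4}, actual=7)        # matches the golden model
sb.check({"a": 200, "b": 100}, actual=44)   # DUT failed to saturate
print(f"{sb.mismatches} mismatch(es) found")
```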

REGISTRATION

What You Will Learn:

  • Basic scoreboard architecture
  • Basic predictor architecture
  • Integration of predictors and scoreboards
  • Using a predictor for stimulus
  • Integrating SystemC into your predictor with UVM Connect
  • Tradeoffs in including coverage in your scoreboard
  • Efficient messaging to simplify results analysis

Target Audience:

  • Design and Verification Engineers and Managers

About the Presenter:

Tom Fitzpatrick, Verification Technologist
Tom is currently a Verification Technologist at Mentor Graphics Corp., where he brings over two decades of design and verification experience to bear on developing advanced verification methodologies, particularly using SystemVerilog, and on educating users on how to adopt them. He has been actively involved in the standardization of SystemVerilog, starting with his days as a member of the Superlog language design team at Co-Design Automation through its standardization via Accellera and then the IEEE, where he has served as chair of the 1364 Verilog Working Group as well as a Technical Champion on the SystemVerilog P1800 Working Group. At Mentor Graphics, Tom was one of the original designers of the Advanced Verification Methodology (AVM) and later the Open Verification Methodology (OVM), and is the editor of Verification Horizons, a quarterly newsletter with approximately 40,000 subscribers. He is a charter member of and key contributor to the Accellera Verification IP Technical Subcommittee. He has published multiple articles and technical papers about SystemVerilog, verification methodologies, assertion-based verification, functional coverage, formal verification and other functional verification topics.


Silicon on Insulator (SOI)
by Paul McLellan on 07-14-2012 at 5:51 pm

I attended a panel session followed by a party during Semicon to celebrate Soitec’s 20th birthday. Officially it was titled An Insider’s Look at the Future of Mobile Technologies. But in reality it was a look at the future possibilities for SOI.

Silicon on Insulator (SOI) has been a sort of bastard child of the semiconductor industry. Almost all foundry work (TSMC, GF, etc.), and Intel in particular, has been bulk CMOS. But both IBM and AMD have used SOI for high performance microprocessors. The drawback of SOI is that it is more expensive to manufacture than bulk.

When the FinFET was first invented, it was an SOI-based technology. It is much easier to form the gate over the fin when the buried insulator stops the etch, and the original device only had the gate on the sides of the fin, not over the top. But TSMC worked out how to put the gate completely around the fin, and Samsung worked out how to manufacture it on bulk technology. The highest-profile FinFET proponent is Intel with its tri-gate process, which is bulk, although beyond 14nm they are apparently looking at SOI. TSMC is also going FinFET, although less aggressively than Intel.


But STMicroelectronics is taking the SOI approach and building its advanced processes using fully depleted SOI (FDSOI). As I explained in a blog earlier this year, below 28nm the whole channel must be well controlled by the gate, and that means the channel must be very thin. One way is to build it vertically (the FinFET); the other is to build the channel on a buried insulator so it is still a planar 2D structure but the channel really is thin. ST was one of the participants in the panel session.

The panelists were:

  • Ron Moore of ARM
  • Subramani Kengeri of GlobalFoundries
  • Gary Patton of IBM
  • Horacio Mendez of SOI Industry Consortium
  • Steve Longoria of Soitec
  • Philippe Margashack of STM
  • Chenming Hu of UC Berkeley

The panel session was a bit like that party game in which you get eliminated if you say “yes” or “no,” except this time you were not allowed to say Intel…even when clearly talking about them. In the end, Dr Chenming Hu, the inventor of the FinFET, gave in and said the dirty word towards the end of the panel. But I expect he consults for them on FinFETs, so I guess he had a free pass.

Clearly SOI has a strong future, with ST committed to it and GlobalFoundries planning to offer SOI as well as bulk. As we move down the process nodes (if we can get an economically viable lithography working), it seems to make some things easier, especially in the foundry business where there is a bigger separation between process and design.


Direct Write E-beam
by Paul McLellan on 07-13-2012 at 2:08 pm

One of the presenters at the standing-room-only litho session at Semicon this week was Serge Tedesco, the litho program manager at CEA-Leti in Grenoble, France. He is running a program called IMAGINE for maskless lithography. Chips today are built using a reticle (containing the pattern for that layer of the chip) which is exposed one die at a time with a flash of light; the wafer is then moved to the next die and the process is repeated. The nice thing about this process is that the whole of each die is basically processed in parallel with that flash of light (although obviously it is a serial process to get from die to die).

Direct-write e-beam takes a different approach. The e-beam is steered across the design and writes the pattern as it goes, in much the same way as a cathode-ray tube (CRT) black-and-white television worked, but with rather greater resolution. A TV has dots of around a millimeter and we need dots of around 20nm, so it is more like a scanning electron microscope. The big problem with this approach is that it is serial. No matter how fast the e-beam scans, there are a lot of points at 20nm resolution on a 300mm wafer (if I did the calculation right, about 10^15 points). So to be effective, direct-write e-beam needs more than one beam. A lot more.
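
For what it’s worth, here is the back-of-envelope version of that calculation; whether you land nearer 10^14 or 10^15 depends on the address grid you assume:

```python
# Rough pixel-count estimate for a 300mm wafer. A 20nm address grid gives
# roughly 2e14 points; a finer 10nm grid pushes it toward 1e15.
import math

wafer_radius_nm = 150.0 * 1e6          # 150 mm in nanometres
wafer_area_nm2 = math.pi * wafer_radius_nm ** 2

for grid_nm in (20.0, 10.0):
    points = wafer_area_nm2 / grid_nm ** 2
    print(f"{grid_nm:.0f} nm grid: about {points:.1e} points")
```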

The IMAGINE program started with a pre-alpha e-beam machine which could only expose 1.5mm at a rate of 0.002 wafers per hour with a single beam. The current version (beta?) has 110 beams, and the next machine will have 13,260 beams, full 300mm wafer coverage, and a throughput of 10 wafers per hour.

The big problems with the approach are:

  • Throughput. If this is going to be viable it needs to be cost-competitive with mask-based technologies. The speed of writing depends on the capabilities of the machine but also on the sensitivity of the photoresist.
  • Stitching/overlay. The pattern on one layer must be aligned onto the patterns already manufactured on the wafer, and since this is wafer-scale technology it needs to be continuously aligned; all the beams must also stay aligned with each other so that adjacent patterns line up.
  • Data handling. Obviously the mask data (I’ll continue to call it mask data even though there are no masks) doesn’t need to be stored fully rasterized, but as the machine processes a wafer the computing infrastructure needs to transmit around 10^15 bits of data. At a target speed, for a single machine, of 10 wafers per hour, that is something like 10 terabits per second (see the rough estimate after this list).
  • Data integrity. A single bit wrong in the data is enough to cause a defect.
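
Here is the rough estimate referred to in the data handling item, taking the ~10^15 bits per wafer and 10 wafers per hour at face value; it gives the sustained average, which is the same order of magnitude as the figure quoted once peaks and overhead are allowed for.

```python
# Sustained average data rate for ~1e15 bits of pattern data per wafer at a
# target of 10 wafers per hour (peak rates and protocol overhead come on top).
bits_per_wafer = 1e15
wafers_per_hour = 10

bits_per_second = bits_per_wafer * wafers_per_hour / 3600
print(f"about {bits_per_second / 1e12:.1f} Tb/s sustained")   # ~2.8 Tb/s
```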

A full machine will have a cluster of 10 wafer-handling units for a throughput of 100 wafers per hour (and 130K beams). That’s a lot of beams and a lot of data handling. Apparently over half the cost of the machine is, in fact, in the data handling; the data is transmitted to the e-beam machine itself over optical fiber.

E-beam direct write for commercial semiconductor manufacturing was actually tried in the mid-1980s by European Silicon Structures (ES2) near Aix-en-Provence (also in France), using Perkin-Elmer equipment. I know a bit about it since they tried to recruit me to work on the EDA software they knew they would need, but in the end I stayed with VLSI and went and opened an R&D center for them in Sophia-Antipolis, a couple of hours east. ES2 never got the technology to work and eventually shut down. The software group largely went to Cadence, I think. Robin Saxby, who I think ran sales, went on to spin ARM out of Acorn and so to fame and fortune.