Extreme Ultra Violet (EUV)
by Paul McLellan on 07-15-2012 at 8:15 pm

EUV is the great hope for avoiding triple (and more) patterning if we have to stick with 193nm light. There were several presentations at Semicon about the status of EUV. Here I’ll discuss the issues with EUV lithography itself, and in a separate post the issues with making masks for EUV.

It is probably worth being explicit about the big advantage of EUV, if and when it works: it is a single-patterning technology (for the foreseeable future), with just one mask and one photolithography step per layer.

First up was Stephan Wurm, the director of litho for Sematech (it’s their 25th anniversary this year, seems like only yesterday…). He talked about where EUV is today. First, a little background on why EUV is so difficult. At these wavelengths the photons won’t go through lenses or even air, so we have to switch from refractive optics (lenses) to reflective optics (mirrors) and put everything in a vacuum. The masks have to be reflective too, but I’ll talk about that in the next blog. Obviously we need a different photoresist from the one we use for 193nm. And, most critically, we need a light source that generates EUV light (around 14nm wavelength; by the time EUV is inserted into production the wavelength will already be close to the feature size, whereas today we have gotten very good at making small features with much longer-wavelength light).

The status of the resist is that we now have chemically amplified resist (CAR) with adequate resolution for a 22nm half pitch (22nm lines with 22nm spaces), and it seems to be OK down to 15nm. A big issue is sensitivity: it takes too much light to expose the resist, which reduces throughput. We have had sensitivity problems in the past, but they were less severe and were solved earlier in development. Line width roughness (LWR) continues to be a problem and will need to be addressed with non-lithographic cleanup. Contact holes also continue to be a problem. Stephan discussed mask blank defect and yield issues but, as I said, that comes in the next blog.

Next up was Hans Meiling from ASML (with wads of Intel money sticking out of his back pocket). They have already shipped six NXE-3100 pre-production tools to customers so they can start technology development, and they have seven NXE-3300 scanners being built.

You can’t get EUV out of a substance in its normal state; you need a plasma. So you don’t just plug in an EUV bulb the way you do for visible light. You take little droplets of tin, zap them with a very high-powered CO2 laser, and get a brief flash of light. ASML has run sources like this for 5.5 hours continuously. It takes a power input of 30kW to get 30W of EUV light, so it is not the most efficient process.
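
Put as a wall-plug conversion efficiency (my arithmetic from the quoted numbers, nothing more):

$$\eta = \frac{30\ \text{W EUV out}}{30\,000\ \text{W in}} = 0.1\%$$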

Contamination of the mirrors is another challenge: depositing metal from a plasma in a vacuum is essentially how we make interconnect, and we certainly don’t want to coat the mirrors with tin. ASML found that the collecting optics did not stay clean beyond 10M pulses, which sounds like a lot until you realize it is about a week of operation in a fab running the machine continuously. They have since improved that by a factor of 3 or 4, but there is clearly more progress to be made.

Reflectivity of the mirrors is a problem. These are not the sort of mirror you have in your bathroom: they are Mo/Si multilayers forming a Bragg reflector, which reflects light through multilayer interference. Even a really good mirror reflects only about 70% of the EUV light, and since the optics require 8 or more mirrors to focus the light first on the mask and then on the wafer, very little of the light you start with (maybe 4%) ends up hitting the photoresist. Some of these are grazing-incidence mirrors, which bend the light along their length, rather like a pinball machine rail bending the path of the ball, and can be used to focus a beam.
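
As a rough sanity check (my arithmetic, not a figure from the talks): with eight mirrors at roughly 70% reflectivity each, and the reflective mask counted as a ninth ~70% reflection, the cumulative transmission is

$$0.7^{8} \approx 5.8\%, \qquad 0.7^{9} \approx 4.0\%,$$

consistent with the “maybe 4%” figure above.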

Currently they are managing 5-7W and have demonstrated up to 30W. For high throughput the source needs to deliver around 200W, so this still seems out of reach by merely tweaking the current technology.

The light source power issue is the biggest issue in building a high-volume EUV stepper. Intel is betting that a few billion dollars and ASML will solve it.


How has 20nm Changed the Semiconductor Ecosystem?
by Daniel Nenni on 07-15-2012 at 7:30 pm


What does mango beer have to do with semiconductor design and manufacturing? At a table of beer drinkers from around the world I would never have thought fruity beer would pass a taste test, not even close. As it turns out, the mango beer is very good! Same goes for 20nm planar devices. “Will not work,” “will not yield,” “will not scale”… as it turns out, 20nm is very good!!! The leading edge fabless people who were scratching their heads six months ago are now scheduling 20nm tape-outs for Q1 2013. Crowd sourcing wins again!

The 20nm process node represents a turning point for the electronics industry. While it brings tremendous power, performance and area advantages, it also comes with new challenges in such areas as lithography, variability, and complexity. The good news is that these become manageable challenges with 20nm-aware EDA tools when they are used within end-to-end, integrated design flows based on a “prevent, analyze, and optimize” methodology.

I agree with this statement 100%. It comes from the Cadence white paper “A Call to Action: How 20nm Will Change IC Design.” 20nm was definitely a turning point for the semiconductor ecosystem. Now that the technical wrinkles have been ironed out, let’s look at how the fabless business model has evolved.

“There is no doubt we are at a crossroads at the most advanced process technology nodes. In order to take positive steps forward, significant monetary and collaborative investments and resources are required from both the manufacturing and design sides of the equation,” said Ana Hunter, vice president of Samsung’s North American foundry services.

I spoke with Ana before I left for Taiwan. We are in agreement: The industry is at an inflection point and the business model is changing. A simulated IDM environment is required for fabless semiconductor companies to be competitive at the advanced process nodes, absolutely. Check out the new Samsung “Best of Both Worlds: IDM and Pure-Play Foundry” brochure.

Before, during, and after DAC 2012 I asked engineers from the top fabless semiconductor companies what has changed for them in how they work with the foundries on new process technologies. I also asked the foundries. The IDM-like answers did not surprise me, since working with EDA and IP companies on new process nodes is what I do during the day. It may, however, surprise others who are on the sidelines and believe the latest Intel PR nonsense.

UMC actually pioneered this simulated IDM environment with Xilinx from .25 micron to 40nm. Xilinx employees literally consumed an entire floor of UMC HQ for more than a decade and acted as the dutiful wife delivering many new processes. After the Xilinx 40nm divorce, UMC is no longer monogamous and now has multiple process development mistresses including TI, Qualcomm, and IBM.

TSMC took a different approach, which many people overlook. The early days of collaboration started with Reference Flow 1.0, now at version 12.0, with many new sub-flows to be rolled out at TSMC OIP in October. This collaboration included both established and emerging semiconductor and EDA companies. Next came the IP effort, with TSMC developing physical IP for reference and production at zero cost to customers. Commercial IP followed with the TSMC “silicon proven” program: IP companies big and small completed an exhaustive qualification program to make it into the TSMC IP catalog. Then came the TSMC “Early Access Program,” where a select group of customers and partners were included in process development activities; correct me if I’m wrong, but I believe this started at 90nm. The qualification process for Early Access was also daunting, but it was clearly critical to the evolution of the fabless semiconductor ecosystem. The result is the TSMC DTP (Design Technology Platform) Division, which employs hundreds of people and has spent hundreds of millions of dollars (my guess) building the industry-leading “Simulated IDM platform” you see today!

That brings us to 40nm. At 28nm, and even more so at 20nm, the fabless people have taken up residence in Hsinchu and the foundry people have “Early Access” to the top fabless companies. Bottom line: you will be hard-pressed to differentiate between a Qualcomm, Broadcom, Nvidia, or Xilinx and a modern-day IDM, except of course for the actual ownership of the manufacturing equipment. And let’s not forget, the now-fabless TI, AMD, Fujitsu, LSI Logic and others used to be IDMs, right?

Thank you again UMC and TSMC for leading the way!


Scoreboards and Results Predictors in UVM
by Daniel Nenni on 07-15-2012 at 10:56 am

If verification is the art of determining that your design works correctly under all specified conditions, then it is imperative that we be able to create an environment that can tell us whether this is truly the case.

Scoreboards are verification components that determine whether the DUT is working correctly, including ensuring that the DUT properly handles all the stimuli it receives. Predictors are components that embody a “golden” model of all or part of the DUT, generating the expected response against which the scoreboard compares the DUT’s actual response. This online webinar will outline the proper architecture of scoreboards and predictors in UVM and how they relate to coverage.
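
To make the division of labor concrete, here is a minimal sketch of the predictor/scoreboard pattern. It is deliberately language-neutral Python, not UVM itself (real UVM scoreboards are SystemVerilog classes connected through analysis ports), and the “doubling” DUT behavior is a made-up example:

```python
# Minimal sketch of the predictor/scoreboard pattern (illustrative only;
# real UVM code is SystemVerilog using uvm_scoreboard and analysis ports).

class Predictor:
    """Golden model: computes the expected response for each stimulus."""
    def predict(self, stimulus):
        # Hypothetical DUT spec for this example: output = 2 * input.
        return stimulus * 2

class Scoreboard:
    """Compares the DUT's actual responses against predicted ones."""
    def __init__(self, predictor):
        self.predictor = predictor
        self.mismatches = 0

    def check(self, stimulus, actual):
        expected = self.predictor.predict(stimulus)
        if actual != expected:
            self.mismatches += 1
            print(f"MISMATCH: stimulus={stimulus} "
                  f"expected={expected} actual={actual}")

# Drive stimuli to both the DUT and the predictor; the scoreboard sees
# each (stimulus, actual-response) pair. The pair (3, 7) models a DUT bug.
sb = Scoreboard(Predictor())
for stimulus, actual in [(1, 2), (2, 4), (3, 7)]:
    sb.check(stimulus, actual)
print(f"{sb.mismatches} mismatch(es) found")
```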

REGISTRATION

What You Will Learn:

  • Basic scoreboard architecture
  • Basic predictor architecture
  • Integration of predictors and scoreboards
  • Using a predictor for stimulus
  • Integrating SystemC into your predictor with UVM Connect
  • Tradeoffs in including coverage in your scoreboard
  • Efficient messaging to simplify results analysis

Target Audience:

  • Design and Verification Engineers and Managers

About the Presenter:

Tom Fitzpatrick, Verification Technologist
Tom is currently a Verification Technologist at Mentor Graphics Corp., where he brings over two decades of design and verification experience to bear on developing advanced verification methodologies, particularly using SystemVerilog, and on educating users in how to adopt them. He has been actively involved in the standardization of SystemVerilog, starting with his days as a member of the Superlog language design team at Co-Design Automation, through its standardization via Accellera and then the IEEE, where he has served as chair of the 1364 Verilog Working Group as well as a Technical Champion on the SystemVerilog P1800 Working Group. At Mentor Graphics, Tom was one of the original designers of the Advanced Verification Methodology (AVM) and later the Open Verification Methodology (OVM), and is the editor of Verification Horizons, a quarterly newsletter with approximately 40,000 subscribers. He is a charter member of and key contributor to the Accellera Verification IP Technical Subcommittee. He has published multiple articles and technical papers about SystemVerilog, verification methodologies, assertion-based verification, functional coverage, formal verification, and other functional verification topics.


Silicon on Insulator (SOI)
by Paul McLellan on 07-14-2012 at 5:51 pm

I attended a panel session followed by a party during Semicon to celebrate Soitec’s 20th birthday. Officially it was titled An Insider’s Look at the Future of Mobile Technologies. But in reality it was a look at the future possibilities for SOI.

Silicon on Insulator (SOI) has been a sort of bastard child of the semiconductor industry. Almost all foundry work (TSMC, GF, etc.), and Intel in particular, has been on bulk CMOS. But both IBM and AMD have used SOI for high-performance microprocessors. The drawback of SOI is that it is more expensive to manufacture than bulk.

When the FinFET was first invented, it was an SOI-based technology: it is much easier to form the gate over the fin when the substrate stops the etch. The original device also had the gate only on the sides of the fin, not over the top. But TSMC worked out how to put the gate completely around the fin, and Samsung worked out how to manufacture it on bulk. The highest-profile FinFET proponent is Intel with its tri-gate process, which is bulk, although beyond 14nm they are apparently looking at SOI. TSMC is also going FinFET, although less aggressively than Intel.


But STMicroelectronics is taking the SOI approach, building its advanced processes using fully-depleted SOI (FDSOI). As I explained in a blog earlier this year, below 28nm the whole channel must be well-controlled by the gate, and that means the channel must be very thin. One way is to build it vertically (the FinFET); the other is to put the channel on an insulator so it remains a 2D structure but really is thin. ST was one of the participants in the panel session.

The panelists were:

  • Ron Moore of ARM
  • Subramani Kengeri of GlobalFoundries
  • Gary Patton of IBM
  • Horacio Mendez of SOI Industry Consortium
  • Steve Longoria of Soitec
  • Philippe Magarshack of STM
  • Chenming Hu of UC Berkeley

The panel session was a bit like that party game in which you get eliminated if you say “yes” or “no,” except this time you were not allowed to say Intel…even when clearly talking about them. In the end Dr. Chenming Hu, the inventor of the FinFET, gave in and said the dirty word towards the end of the panel. But I expect he consults for them on FinFETs, so I guess he had a free pass.

Clearly SOI has a strong future: ST is committed to it, GlobalFoundries plans to offer SOI as well as bulk, and as we move down the process nodes (assuming we can get an economically viable lithography working) it seems to make some things easier, especially in the foundry business where there is a bigger separation between process and design.


Direct Write E-beam
by Paul McLellan on 07-13-2012 at 2:08 pm

One of the presenters at the standing-room-only litho session at Semicon this week was Serge Tedesco, the litho program manager at CEA-Leti in Grenoble, France. He runs a program called IMAGINE for maskless lithography. Chips today are built using a reticle (containing the pattern for that layer of the chip) which is exposed one die at a time with a flash of light; the wafer is then moved to the next die and the process is repeated. The nice thing about this process is that the whole of each die is processed in parallel with that flash of light (although obviously stepping from die to die is serial).

Direct write e-beam takes a different approach. The e-beam is steered across the design and writes the pattern as it goes, in much the same way as a cathode-ray tube (CRT) black-and-white television worked, but with rather greater resolution. A TV has dots of around a millimeter; we need dots of around 20nm, so it is more like a scanning electron microscope. The big problem with this approach is that it is serial. No matter how fast the e-beam scans, there are a lot of points at 20nm resolution on a 300mm wafer (if I did the calculation right, about 10^15 points). So to be effective, direct write e-beam needs more than one beam. A lot more.
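
As a back-of-the-envelope check on that number (my arithmetic, not Leti’s), a quick script:

```python
# How many 20nm x 20nm "dots" fit on a 300mm wafer?
import math

wafer_diameter = 0.300   # meters
pixel = 20e-9            # meters

wafer_area = math.pi * (wafer_diameter / 2) ** 2   # ~0.071 m^2
pixels = wafer_area / pixel ** 2

print(f"{pixels:.1e} pixels per wafer")   # ~1.8e14
```

That gives roughly 2×10^14 single-pass pixels, within an order of magnitude of the 10^15 quoted (overheads such as multi-pass writing and grey levels would push the effective count higher).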

The IMAGINE program started with a pre-alpha e-beam machine that could only expose 1.5mm, with a single beam, at a rate of 0.002 wafers per hour. The current version (beta?) has 110 beams; the next machine will have 13,260 beams, full 300mm wafer coverage, and a throughput of 10 wafers per hour.

The big problems with the approach are:

  • Throughput. If this is going to be viable it needs to be cost-competitive with mask-based technologies. The speed of writing depends on the capabilities of the machine but also on the sensitivity of the photoresist.
  • Stitching/overlay. The pattern on each layer must be aligned to the patterns already manufactured on the wafer, and since this is wafer-scale technology the alignment must be maintained continuously; all the beams must also stay aligned with each other so that adjacent patterns line up.
  • Data handling. Obviously the mask data (I’ll continue to call it mask data even though there are no masks) doesn’t need to be stored fully rasterized, but as the machine processes a wafer the computing infrastructure needs to transmit around 10^15 bits of data. At a target speed, for a single machine, of 10 wafers per hour, that is something like 10 terabits per second (see the arithmetic sketched just after this list).
  • Data integrity. A single bit wrong in the data is enough to cause a “defect”.
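
For the data-rate bullet above, the arithmetic using the author’s own round numbers (10^15 bits per wafer, 10 wafers per hour per machine) is:

$$\frac{10 \times 10^{15}\ \text{bits}}{3600\ \text{s}} \approx 2.8 \times 10^{12}\ \text{bits/s},$$

i.e. single-digit terabits per second, the same order of magnitude as the figure quoted.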

A full machine will have a cluster of 10 wafer-handling units for a throughput of 100 wafers per hour (and about 130K beams). That’s a lot of beams and a lot of data handling. Apparently over half the cost of the machine is, in fact, the data handling, which is transmitted to the e-beam machine itself over optical fiber.

E-beam direct write for commercial semiconductor manufacturing was actually tried in the mid-1980s by European Silicon Structures (ES2), near Aix-en-Provence (also in France), using Perkin-Elmer equipment. I know a bit about it since they tried to recruit me to work on the EDA software they knew they would need, but in the end I stayed with VLSI and went and opened an R&D center for them in Sophia-Antipolis, a couple of hours east. ES2 never got the technology to work and eventually shut down. The software group largely went to Cadence, I think. Robin Saxby, who I think ran sales, went on to spin ARM out of Acorn, and so to fame and fortune.


Using Accurate Models to Debug Cellphones
by Paul McLellan on 07-13-2012 at 10:54 am

There is an interesting Gizmodo review of an HTC Android-based smartphone. The basically positive review (as good as the iPhone, best Android phone at the time) ends with an update:

UPDATE: After more extensive testing there’s something a little weird going on. You’ll probably only see this while gaming, but there’s a little bit of stuttering that happens. You really notice it in games like Temple Run, where the processor seems to get a little overloaded and it misses a finger-swipe which kills you dead.

It’s troubling. If we had to hazard a guess as to what’s going on, it seems that the new Snapdragon S4 processor may be the culprit. While it doesn’t really have this problem with the One S, it has many more pixels to drive on the higher resolution One X and the EVO 4G LTE screens. It seems to strain under the weight of driving those pixels while handling a lot of graphics processing at once.

These sorts of problems are just the kind of thing that virtual platforms combined with cycle-accurate processor models can discover ahead of time, so the problem gets fixed before silicon ships. There is nothing so expensive as a bug in an SoC that escapes into the field. Of course the problem might not be in the S4 at all, and so nothing to do with Qualcomm, but Gizmodo’s guess is certainly very plausible.

Carbon’s customers do indeed find these sorts of low-level problems, everything from driver bugs to performance issues like this. The use model is to start with fast models of the processor to boot the operating system and start up an application (such as a game in this case). Then the processor state is frozen and the fast models are swapped for cycle-accurate models initialized with the frozen state. If this sounds tricky, that’s because it is; I think Carbon is the only virtual platform supplier that can do it. But it is the only way to get both speed and accuracy, since you can’t get both in a single model without compromising one or the other (or usually both).
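
Conceptually the swap looks something like the following sketch. This is my own illustration of the flow just described, with invented class and function names; it is not Carbon’s actual tooling or API:

```python
# Conceptual sketch of hybrid simulation: run fast until the interesting
# point, checkpoint the processor state, resume cycle-accurately.
# (Invented names for illustration; not a real product API.)

class FastModel:
    """Instruction-accurate model: fast, but no timing detail."""
    def run_until(self, trigger):
        # ... boot the OS, launch the game, stop at the trigger ...
        return {"pc": 0x8000_0000, "regs": [0] * 32}   # frozen state

class CycleAccurateModel:
    """Cycle-accurate model: slow, but exposes stalls and contention."""
    def __init__(self, state):
        self.state = state   # initialized from the frozen checkpoint

    def run(self, cycles):
        # ... simulate cycle by cycle, collecting performance data ...
        print(f"simulating {cycles} cycles from pc={self.state['pc']:#x}")

checkpoint = FastModel().run_until(trigger="game_frame_start")
CycleAccurateModel(checkpoint).run(cycles=1_000_000)
```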

Bill Neifert, Carbon’s CTO, wrote about virtual prototypes in wireless development in EETimes, although under their usual unhelpful rule that opinion pieces can’t mention companies or products. Carbon sells virtual platforms (among other things), Bill is writing about virtual platforms, and…wink, wink…I wonder what specific products he could possibly be referring to.


Nokia: the Epic Version
by Paul McLellan on 07-12-2012 at 2:00 pm

Whenever I write about the handset industry, lots of people seem to be interested. As I’ve said before, my go-to person for the industry, and especially for Nokia, is Tomi Ahonen. He has written a long (and I mean long, it is nearly 30,000 words) indictment of Elop’s tenure at Nokia and how he has destroyed one of the most iconic and loved brands. If you are in the US (or Japan or Korea) you don’t realize how dominant Nokia used to be in the rest of the world. When Elop took over Nokia, its smartphone market share was bigger than Apple’s and Samsung’s…combined. Now it is in the noise. Microsoft has announced its own tablet and it is only a matter of time (imho) before they announce their own phones. They have to, or they will be nowhere in mobile and Windows Phone (whether this strategy of manufacturing their own hardware will work is also an open question; I’m dubious).

Even if you are not that interested in Nokia’s fall from grace, it is worth reading as a case study in how not to run a business. Elop was brought in to improve Nokia’s operational efficiency, which had deteriorated under the previous CEO: the strategy was sound but the execution was not. Instead, he changed the strategy, and operational efficiency was no longer relevant.

Since 30,000 words is long, and to celebrate his blog’s 3,000,000th visitor, Tomi ran a contest to summarize the long post in a tweet, with its limit of 140 characters. The winner:

  • Designed by Finns, improved around the world, manufactured by great people & destroyed by 1 Canadian http://bit.ly/OArMOD

Nokia’s Q2 results come out next Thursday. They have already said Q2 will be worse than Q1. The stock is trading at a fraction of their cash and near-cash.

And if you don’t understand the joke at the start of this blog, then read my blog Microsoft messes up mobile even more (I promise it’s less than 30,000 words, less than 1,000 even).


Semicon West
by Paul McLellan on 07-11-2012 at 7:08 pm

I have been spending some time at Semicon West at the Moscone center the last couple of days. Since it was only a month ago that I was there for DAC, the first contrast is the size of the show. DAC didn’t fill Moscone South. Semicon fills Moscone South, and North, and the corridor between. And Moscone West on the other side of 4th street. Admittedly there is a co-located solar show but that is not a large fraction.

The big news of the show was the Intel/ASML announcement that Intel is putting a lot of money into ASML for EUV (see below) and 450mm (18″) wafer technology. Intel was even one of the speakers at a fascinating session I attended.

I spent all morning today at a series of presentations about lithography. I blogged recently about wafer prices going up faster than transistor densities (the scariest graph I’ve seen recently), which, when you look under the hood, is essentially a graph and a story all about lithography.

Right now, continuing to use 193nm light and immersion lithography, everyone seems confident we can build 14nm chips (and smaller). But whether we can build them economically is the big question. At 20/22nm we have to double pattern the 1X layers (the higher levels of metal are on coarser grids and so don’t require it). We may have to triple pattern the first layer of metal since we really want both vertical and horizontal segments. Double patterning requires that you do some things twice and so costs more (although not twice as much). It also requires two masks so NRE is higher too, pushing up the fixed costs of taping out a design and manufacturing the first wafer.

There are three technologies that are in some level of development, and there were presentations about all of them. I will blog about each of them in more detail in the coming few days. But the executive summary is as follows:

Extreme Ultra-Violet (EUV) is shorter wavelength light (14nm or so). At that wavelength the light can’t get through lenses (or even air) so we have to switch to reflective optics and reflective masks. We don’t yet have good ways to generate the light at high enough power but a lot of work is being done. We don’t yet have photoresist that is responsive enough. We can’t build defect-free mask blanks (and never will be able to). The big advantage is that we don’t need double patterning so only one mask per layer. But unless the throughput is high enough that isn’t a big enough advantage.

Next is DWEB, direct write e-beam. This is even better on the mask front: there are no masks at all. Rather like an old CRT TV writing on its phosphor, this scans an electron beam across the photoresist, writing the design directly. The challenge again is getting throughput up and having responsive enough photoresist. Plus the data handling is a challenge, with thousands of beams writing simultaneously across a whole wafer (this isn’t reticle/stepper-type technology).

OK, so if lithography can’t hack it, how about we do something really spacey: directed self-assembly (DSA). This is something a few academics had been looking at, but a few years ago industry suddenly started to take it seriously. If you mix two substances that don’t mix (like oil and water) but that polymerize (so not oil and water, more like polystyrene), and just put the mixture on a wafer, you get a random pattern like a fingerprint. But if you first put down some guidance (hence “directed” self-assembly), such as tracks on the wafer at 80nm spacing (which is easy today), and put the mixture in between, then instead of forming random patterns it lines up into nice 14nm tracks of alternating polymers. And if you build a trench and put in some of the mixture with the right ratio, it will form a few tiny holes. This is a long way from a chip, of course, but building lines and holes is the basis of chips. Then again, immersion lithography was considered out there not that long ago.

TL;DR: EUV, DWEB, or DSA.


Atrenta Technology Forum, Japan
by Paul McLellan on 07-11-2012 at 6:32 pm

The 1st Atrenta Technology Forum in Japan (well, it used to be the user group meeting, so it’s only the first in a very technical sense) is next week, on July 19th, from 1pm until 5.15pm. It will be held at the Shin-Yokohama Kokusai Hotel (access details here).

In the unlikely event that non-Japanese are reading this blog, here’s the story of Shin-Yokohama (“shin” means new in Japanese, and in Chinese as it happens: Hsinchu). When they built the first of what people in the US call bullet trains, prosaically called shinkansen (new line) in Japan, it was impractical to route it through Yokohama station, so they built a brand-new station in the middle of rural fields outside Yokohama: Shin-Yokohama Station. That was in the early 1960s. Since then a whole new town, Shin-Yokohama, has grown up around it.

Here is the agenda for the meeting:

  • 13.00 to 13.30 Registration
  • 13.30 to 14.00 Semiconductor market and design trends. Atrenta Japan.
  • 14.00 to 14.30 TSMC quality assessment with the DMP 3D graphics IP core. Digital Media Professionals.
  • 14.30 to 14.45 What’s SpyGlass Advanced Lint? Atrenta Japan.
  • 14.45 to 15.15 Tips on asynchronous design and a case study with the SpyGlass family. Nippon Systemware Co.
  • 15.15 to 16.00 Break and demo
  • 16.00 to 16.15 What’s SpyGlass Physical Base? Atrenta Japan.
  • 16.15 to 16.30 SpyGlass Physical case study. Renesas Electronics Corporation.
  • 16.30 to 16.45 Atrenta Update. Atrenta Japan.
  • 16.45 to 17.15 Update on the 2011 STARC Design Style Guide

More information, including how to register to attend (in Japanese), is here.


Using Synopsys Analysis Tools for AMS Design
by Daniel Payne on 07-11-2012 at 12:05 pm

I attended the Synopsys webinar today for a tool called Custom Explorer Ultra (CXU). Product details on the Synopsys web site are here. The CXU tool would be used by AMS designers that want to setup, control and view results from simulators like HSPICE, CustomSim or VCS on transistor-level and AMS designs. Continue reading “Using Synopsys Analysis Tools for AMS Design”