Semiconductor Power Crisis and TSMC!
by Daniel Nenni on 03-02-2011 at 8:48 pm

Power grids all over the world are already overloaded, even without the slew of new electronic gadgets and cars coming out this year. At ISSCC, Dr. Jack Sun, TSMC Vice President of R&D and Chief Technology Officer, compared the human brain to the closest thing available in silicon: a graphics processing unit (GPU).

Dr. Sun was referring, I believe, to the NVIDIA GPU, the largest 40nm die to come out of TSMC. The human brain has more than 100 billion neurons (cells of the nervous system). Those 100 billion neurons consume about 20 watts of power, versus 200 watts for an equivalent number of transistors in silicon. The bottom line is that semiconductor technology is severely power constrained, and his suggestion is that we must learn from nature and look at technology from the bottom up.

“New transistor designs are part of the answer,” said Dr. Jack Sun. Options include a design called FinFET, which uses multiple gates on each transistor, and another design called the junctionless transistor. “Researchers have made great progress with FinFET, and TSMC hopes it can be used for the next generation of CMOS — the industry’s standard silicon manufacturing process,” Sun said.

According to Wikipedia:
The term FinFET was coined by University of California, Berkeley researchers (Profs. Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) to describe a nonplanar, double-gate transistor built on an SOI substrate, based on the earlier DELTA (single-gate) transistor design. The distinguishing characteristic of the FinFET is that the conducting channel is wrapped by a thin silicon “fin”, which forms the body of the device. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device.

So now you know as much about a FinFET as I do.

After the panel I had a conversation with Dr. Sun about power and looking at it top down, from the system level. TSMC is already actively working with ESL and RTL companies (such as Atrenta) to do just that. Designers can optimize for power consumption and efficiency early in the design cycle, at the register transfer level (RTL). Using commercially available tools, power consumption information is available at RTL and can provide guidance on where power can be reduced, in addition to detecting and automatically fixing key power management issues. Finding and fixing these types of problems later in the design cycle, during simulation or verification, is costly and increases the risk that your design project won't finish on time and within its specifications.

TSMC’s current Reference Flow 11.0 was the first generation to host an ESL/RTL-based design methodology. It includes virtual platform prototyping built on TSMC’s proprietary performance, power, and area (PPA) model, which evaluates the PPA impact of different system architectures. The ESL design flow also supports high level synthesis (HLS) and ESL-to-RTL verification. TSMC also expanded its IP Alliance to include RTL-based (soft) IP from Atrenta and others. Atrenta is known for its SpyGlass product, the de facto standard for RTL linting (analysis). If you do a little digging on the Atrenta site you will find the Atrenta GuideWare page for more detailed information.

But of course TSMC can always do more to save power and they will.


With EUVL, Expect No Holiday
by Beth Martin on 03-02-2011 at 1:12 pm

For a brief time in the 1990s, when 4X magnification steppers suddenly made mask features 4X larger, there was a period in the industry referred to as the “mask vendor’s holiday.” The party ended before it got started with the arrival of sub-wavelength lithography, and we all trudged back to the OPC/RET mines. Since then, the demands for pattern correction have grown ever more onerous. But at long last, there may be an extreme ultraviolet light at the end of the lithography tunnel.

Extreme ultraviolet lithography (EUVL) is a strong candidate for 15nm node and below. While the 13.5nm EUV wavelength seems to imply a bit of an OPC vendor’s holiday, it’s shaping up to be quite the opposite situation. EUVL is introducing two major pattern-distorting effects at a magnitude never before seen: flare and mask shadowing. These effects combined create pattern distortions of several nanometers, and they vary in magnitude across the reticle in a complex yet predictable way. So, unpack your bags, there’s no holiday with EUVL.

EUV Lithography Patterning Distortions
There are several technical challenges for EUVL, but I’ll focus on just one: correcting pattern distortions unique to EUVL, specifically flare and mask shadowing.

Flare is incoherent light produced by scattering from imperfections in the optical system, and it degrades image contrast and worsens CD control. Optical scanners suffer little from flare, but EUV scanners have flare levels of around 10% today, likely declining to around 5% in a few years. The magnitude of flare is driven by the roughness of the mirror surfaces in the scanner, the wavelength (flare is inversely proportional to the square of the wavelength), and the local pattern density on the mask (clear, open, low-density regions suffer the most).
The level of flare, and its impact on CD control, depends on local and global pattern density, which varies widely within a chip. A 1% change in the level of flare within a chip produces more than a 1nm change in the printed resist dimension.

Compensating for the Flare Effect with traditional rules-based OPC is not accurate enough for EUVL, so the EDA industry is developing model-based flare compensation. The biggest challenge in incorporating flare into OPC software is simulating the very long range of the flare effect with reasonable runtimes. New simulation engines and models designed to handle these long interaction distances have largely addressed the runtime issues. It is now possible to simulate an entire 26x33mm reticle layout with >15mm diameter flare models in about an hour.
Once a flare map is generated, we need to use this information to mitigate flare's impact on printing. This can be accomplished within existing model-based OPC software with minor enhancements to how the aerial image is computed. A simple formula, in which the nominal intensity is scaled by the total integrated scatter (TIS) and the local flare intensity is added to the image, should be an effective way of adding flare to the OPC simulator. This method is currently being validated with actual wafer data.
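As a rough illustration of those two steps, the sketch below builds a long-range flare map by convolving a coarse pattern-density grid with a scatter point-spread function, then folds the map into a nominal image using the TIS scaling described above. This is a toy example, not any vendor's implementation: the PSF shape and range, the TIS value, and the grid sizes are all illustrative assumptions.

```python
import numpy as np

# Toy flare-map sketch (illustrative assumptions, not a calibrated model):
#   1) flare_map = TIS * (pattern density convolved with a long-range PSF)
#   2) I_with_flare = (1 - TIS) * I_nominal + flare_map
def flare_map(density, pixel_um, tis=0.10, psf_range_um=2000.0, gamma=2.0):
    """Convolve a coarse pattern-density map with a long-range scatter PSF."""
    ny, nx = density.shape
    y = (np.arange(ny) - ny // 2) * pixel_um
    x = (np.arange(nx) - nx // 2) * pixel_um
    r = np.hypot(*np.meshgrid(x, y))
    psf = 1.0 / (1.0 + (r / (0.05 * psf_range_um)) ** gamma)  # assumed shape
    psf[r > psf_range_um] = 0.0          # truncate beyond the modeled range
    psf /= psf.sum()                     # normalize to unit energy
    fmap = np.real(np.fft.ifft2(np.fft.fft2(density) *
                                np.fft.fft2(np.fft.ifftshift(psf))))
    return tis * np.clip(fmap, 0.0, 1.0)

def add_flare(i_nominal, fmap, tis=0.10):
    """Scale the coherent image by (1 - TIS) and add the local flare level."""
    return (1.0 - tis) * i_nominal + fmap

# Example: a 10x10 mm density map on a 50 um grid with 10% TIS
density = np.random.rand(200, 200)       # stand-in for real pattern density
fmap = flare_map(density, pixel_um=50.0)
i_nominal = np.ones_like(fmap)           # flat image, just to exercise the call
print(add_flare(i_nominal, fmap).mean())
```

The point of this structure is that the expensive long-range convolution is done once per layout region to produce the flare map, while the per-edge OPC simulation only pays for a cheap local intensity adjustment.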

Mask Shadowing is truly unique to EUV lithography. Because everything (all solids, liquids, and gases, anyway) absorbs the 13.5nm EUV wavelength, EUV exposure tools must use mirrors for all optical elements and for the mask. Unlike the telecentric refractive projection lens systems used in optical lithography, EUV systems rely on off-axis mask illumination. Currently, the angle of incidence is 6° from normal. Even more problematic is the complication of the azimuthal angle, which varies across the scanner's illumination slit and ranges from 0° to ±22° [2].
Off-axis mask illumination wouldn't be a problem if we lived in a perfect world where the mask had no topography and was infinitely thin. Unfortunately, the mask absorber has thickness, and the mask reflector is a distributed Bragg reflector composed of alternating thin-film layers. Thus, off-axis mask illumination results in an uneven, shadowed reflection that depends on the orientation of the mask shapes and the angle of incidence. This is shown in the figure below.

Figure 1: Mask topography and azimuthal angles for the left (a), center (b), and right (c) regions of the scan slit. Detailed view (d) showing how the mask in the right region of the scan slit shadows both the incoming light from the source and the light reflected from the mask substrate.

The result is an orientation-dependent pattern bias and shift, which is also a function of the location within the scanner slit. Current estimates of the magnitude of this effect range from 1-2nm for longer lines to >5nm for small 2D shapes like contact and via holes. Variation in wafer CDs of this magnitude is obviously unacceptable.
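For a feel of where those numbers come from, here is a toy geometric estimate of my own (heavily simplified, and no substitute for the 3D mask models discussed next): an absorber of height h illuminated at polar angle θ casts a shadow of roughly h·tan(θ) at mask scale, reduced 4X at wafer scale, and only the component of that shadow perpendicular to a line edge biases the CD, which is where the orientation and slit-position dependence enters.

```python
import math

# Toy estimate of EUV mask-shadowing bias. Assumed numbers: ~70 nm absorber
# height, 6 degree chief-ray angle, 4X reduction to wafer scale. Illustrative
# only; real compensation uses rigorous 3D mask topography models.
def shadow_bias_nm(absorber_h_nm=70.0, polar_deg=6.0,
                   shadow_to_edge_normal_deg=0.0, reduction=4.0):
    """Wafer-scale edge bias from simple absorber-shadowing geometry.

    shadow_to_edge_normal_deg: angle between the ray's in-plane (shadow)
    direction and the feature edge normal. 0 means the shadow falls squarely
    across the edge; 90 means it runs along the line and adds no bias.
    """
    shadow_mask_scale = absorber_h_nm * math.tan(math.radians(polar_deg))
    perpendicular_fraction = abs(math.cos(math.radians(shadow_to_edge_normal_deg)))
    return shadow_mask_scale * perpendicular_fraction / reduction

print(round(shadow_bias_nm(shadow_to_edge_normal_deg=0.0), 2))   # ~1.8 nm
print(round(shadow_bias_nm(shadow_to_edge_normal_deg=22.0), 2))  # ~1.7 nm at the slit edge
print(round(shadow_bias_nm(shadow_to_edge_normal_deg=90.0), 2))  # ~0 nm for the other orientation
```

Even this crude estimate lands in the 1-2nm range quoted above for long lines, and it makes clear why the bias changes with both feature orientation and position in the slit.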

Compensating for the Mask Shadowing Effect requires new 3D mask topography models. Existing TCAD lithography simulators can already predict the effects accurately, but they cannot be directly adapted for use in OPC because of their very long simulation times. OPC models, by contrast, were designed to be very, very fast under certain assumptions: a thin mask with no topography (the Kirchhoff approximation), and normal illumination incidence at the mask plane. Neither assumption is valid in EUV exposure tools. The EDA industry has already introduced fast 3D mask topography models capable of simulating an entire chip with a minor runtime penalty and with accuracy approaching that of TCAD simulators. New OPC models that correctly handle off-axis incidence at the mask plane are becoming available.

However, we won’t be able to accurately model the mask shadowing effects until the software can handle the fact that the angle of illumination incidence is now a function of the location within the scanner field, as shown in Figure 1. The best solution now may be a hybrid of rules and model-based mask shadow compensation: models to handle the field-location-invariant component, with rules to compensate for the field-dependent component.

Conclusion
The upcoming availability of EUV scanners will enable further technology scaling. Initially, the features being imaged by EUV scanners will be larger than the 13.5nm illumination wavelength, and this would seem to provide some breathing room to the over-worked OPC engineers of the world. But as we’ve demonstrated here, the unique illumination source, reflective optics, and reflective masks are conspiring to ensure continued full employment amongst these engineers. So I think we can safely predict there will be no OPC engineer’s holiday for the foreseeable future.

–James Word, OPC product manager at Mentor Graphics.


Semiconductor Design Flows: Paranoia or Prudence
by Steve Moran on 02-28-2011 at 11:34 am

I was recently talking to a friend who works for a semiconductor company (I can't tell you which one or how big it is; I'm not even sure I can tell you that he/she works for a semiconductor company). I was describing SemiWiki.com to him/her, and this person thought it was a great concept but wondered how much of the site would be focused on custom design. He/she wished there were more open discussion about the techniques and tools used in custom design. Here is the problem as he/she described it:

1. There are relatively few companies that do custom IC design. As a result, compared to the number of ASIC design teams, there are only a small number of custom design teams, which means there are relatively few custom designers.

2. Custom designers and design teams typically work on the most important, most secretive, most costly, and potentially most profitable parts of a chip or system; in other words, the crown jewels of the company.

3. In an effort to protect these highly competitive pieces of technology, the companies that do custom designs almost universally have a complete embargo on sharing any information about their designs, and even their design flows.

4. Typically these information embargoes are so restrictive that even mentioning that the company uses a given tool or technique becomes a firing offense.

I find myself thinking this is secrecy gone amok.

There is no question that this technology area is fiercely competitive, that the stakes run into billions of dollars, and that being first to market with something like an iPhone or iPad can be worth billions and needs to be protected. Yet the question remains: do these companies really gain when they refuse to share basic information about the nuts and bolts of their design flows, their design bottlenecks, and their design tools?

I would argue that the competitive advantage comes not from these flow “secrets” but from truly differentiated technology: a combination of hardware, software, and concept or idea, plus brilliant demand-creation marketing. What ends up happening is that these companies waste millions of dollars recreating and reinventing the basics. Every time there is a fundamental, foundational problem, each company spends hundreds to thousands of man-hours figuring out how to solve what are really basic problems that ultimately everyone solves in one fashion or another, and each company evaluates multiple tools that solve the same problem without a clue as to whether those tools have worked or failed for other companies.

What I find so puzzling is that, given the enormous expense of building up a flow, evaluating and selecting tools, the direct and indirect engineering costs, plus the inherent risk of producing a new product or device, there is so little interest in sharing the basics. Do I have this right or wrong?


Will Thunderbolt kill SuperSpeed USB?
by Eric Esteve on 02-28-2011 at 11:09 am

And probably more protocols as well: FireWire, eSATA, HDMI…
Some of you probably know it as Light Peak, the code name for the Intel proprietary interface announced last week as Thunderbolt, which supports copper (or optical) interconnect where Light Peak was optical only. This will make Thunderbolt easier to implement, and more dangerous for USB 3.0 and the other interfaces it competes with.


Thunderbolt is a high-speed serial, differential, bi-directional interface offering 10 Gbps per port and per direction. Each Thunderbolt port on a computer is capable of providing the full bandwidth of the link in both directions, with no sharing of bandwidth between ports or between the upstream and downstream directions. See: http://www.intel.com/technology/io/thunderbolt/index.htm
Intel will provide the technology (protocol) and the silicon, the Thunderbolt controller. So far, Intel has provided the protocol controller on the host (PC) side, through the chipset, to support PCIe, USB, eSATA and so on, while other chip makers provided the controller on the peripheral side, sourcing the related protocol IP (PHY or controller) from IP vendors when necessary. Now Intel will be present on both sides of the link.


The Thunderbolt controller natively supports two protocols: PCIe and DisplayPort. Intel claims that “Users can always connect to their other non-Thunderbolt products at the end of a daisy chain by using Thunderbolt technology adapters (e.g., to connect to native PCI Express devices like eSata, Firewire). These adapters can be easily built using a Thunderbolt controller with off-the-shelf PCI Express-to-“other technology” controllers.” What does this mean? That the BOM for peripherals will increase: you add an Intel device, but you still need a PCI Express-to-other-technology controller (where there was a single controller before)!

What does this mean for IP vendors? On the PC side, for mature technologies (like USB 2.0) they were not present, because the PC chipset supported the protocol (note that this is not true for USB 3.0, which Intel does not support for now). So, no big change there. But on the peripheral side, the protocols previously supported will be reduced to PCI Express (in terms of PHY IP sales) and to a PCIe-to-“other protocol” bridge on the controller side. In other words, it will probably kill USB 2.0, the yet-to-come USB 3.0, and eSATA IP sales.

The other protocol natively supported by Thunderbolt is DisplayPort. Here the target competitor seems to be Silicon Image with HDMI. This makes sense on the PC side, where supporting only one connector will be cheaper. What is unclear is Intel's willingness to be present in HDTV-type products. If that is the case, we can expect these products to support both HDMI and DisplayPort, at least until the market decides to reject one of them. If Thunderbolt finally wins in consumer electronics (which is far from certain), it will increase Intel's chip sales and kill any IP sales for HDMI, USB 3.0, and DisplayPort in the consumer electronics segment.

Another big unknown is Thunderbolt penetration into the mobile internet device segments (smartphones, media tablets). So far, in the mobile handset segment, you find HDMI and USB 3.0 supported in high-end products. Thunderbolt could kick out these two, on two conditions at a minimum: controller price and power consumption. If you look at the application processor market for smartphones, you see that chip makers tend to integrate IP (internal or sourced from IP vendors), and letting Intel capture a part of the BOM is probably not their first priority!

From an IP market point of view, Thunderbolt does not look great, except maybe for PCI Express PHY IP vendors and developers of PCI Express-to-“other protocol” bridge controller IP, provided that market is not cannibalized by off-the-shelf (ASSP) solutions. From an OEM point of view, it means that PC peripheral manufacturers will have to pay a tribute to Intel (which they did not before), as will consumer electronics and cell phone manufacturers. Will Thunderbolt be Intel's Trojan horse into the non-PC segments?
Eric Esteve
www.ip-nest.com


Intel Sandy Bridge Fiasco and EDA
by Daniel Nenni on 02-27-2011 at 6:49 am

I purchased two Toyotas last year and both have since been recalled. Why has Toyota spent over $1B on recalls in recent years? For the same reason it will cost Intel $700M (not counting reputation damage) to recall the Sandy Bridge chipsets: someone did not do their job! The WHAT has been discussed; let's talk about HOW it happened.



Intel Identifies Chipset Design Error, Implementing Solution

SANTA CLARA, Calif., Jan. 31, 2011 – As part of ongoing quality assurance, Intel Corporation has discovered a design issue in a recently released support chip, the Intel® 6 Series, code-named Cougar Point, and has implemented a silicon fix. In some cases, the Serial-ATA (SATA) ports within the chipsets may degrade over time, potentially impacting the performance or functionality of SATA-linked devices such as hard disk drives and DVD-drives. The chipset is utilized in PCs with Intel’s latest Second Generation Intel Core processors, code-named Sandy Bridge. Intel has stopped shipment of the affected support chip from its factories. Intel has corrected the design issue, and has begun manufacturing a new version of the support chip which will resolve the issue. The Sandy Bridge microprocessor is unaffected and no other products are affected by this issue.

Coincidentally, Mentor Graphics recently published an article:

New ERC Tools Catch Design Errors that Lead to Circuit Degradation Failures
Published on 02-11-2011 12:18 PM: Today’s IC designs are complex. They contain vast arrays of features and functionality, in addition to multiple power domains required to reduce power consumption and improve design efficiency. With so much going on, design verification plays an important role in assuring that your design does what you intended. Often, verification will include simulations (for functional compliance) and extensive physical verification (PV) checks to ensure that the IC has been implemented correctly, including DRC, LVS, DFM and others. A growing number of reports highlight a class of design errors that is difficult to check using more traditional methods, and can potentially affect a wide range of IC designs, especially where high reliability is a must.

Note the comment by SemiWiki Blogger Daniel Payne, who used to work at both Intel and Mentor Graphics:


A transistor-level checking tool like PERC can certainly catch reliability failure issues like a PMOS transistor with bulk node tied to the wrong supply (let’s say 2.5V instead of 1.1V).

Since Intel announced that a single transistor reliability issue is to blame for their recent re-spin, we can only guess at the actual reliability issue that they found and fixed with a single metal layer.

I'm sure that for all new chips Intel will now run their reliability tool checks before tapeout instead of during fabrication.

Back in the 1980s Intel did have an internal tool called CLCD (Coarse Level Circuit Debugger) that could crawl a transistor netlist and look for any configuration. You would write rules for each reliability or circuit issue that you knew of, then run your netlist through CLCD to detect them.
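To make the idea concrete, here is a minimal sketch of such a transistor-level rule, using the PMOS bulk-node example Daniel Payne gives above. This is my own illustration, not CLCD or Calibre PERC; it parses a bare-bones SPICE-style netlist and flags any PMOS device whose bulk terminal is not tied to the expected supply.

```python
# Minimal transistor-level ERC sketch (illustration only, not a real tool):
# flag PMOS devices whose bulk terminal is not tied to the expected supply.
EXPECTED_PMOS_BULK = {"VDD1V1"}   # assumed supply name for this example

def check_pmos_bulk(netlist_text):
    violations = []
    for line in netlist_text.splitlines():
        tok = line.split()
        # SPICE MOSFET card: Mname drain gate source bulk model ...
        if len(tok) >= 6 and tok[0].upper().startswith("M"):
            name, _drain, _gate, _source, bulk, model = tok[:6]
            if model.lower().startswith("p") and bulk.upper() not in EXPECTED_PMOS_BULK:
                violations.append((name, bulk))
    return violations

netlist = """
M1 out in VDD1V1 VDD1V1 pch W=1u L=0.1u
M2 out in VDD2V5 VDD2V5 pch W=1u L=0.1u
M3 out in gnd    gnd    nch W=1u L=0.1u
"""
for name, bulk in check_pmos_bulk(netlist):
    print(f"ERC violation: {name} bulk tied to {bulk}, expected VDD1V1")
```

A real rule deck would of course resolve hierarchy, voltage domains, and level shifters, but the crawl-the-netlist-and-match-a-pattern structure is the same.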

Now for the HOW: Based on my personal experience, Intel has a classic case of NIH (not invented here) syndrome when it comes to EDA tools and methodologies. Even when Intel purchases commercial EDA tools, they are often not used with the prescribed methodology. Intel also does not collaborate well with vendors and is very secretive about its usage of purchased tools. Bottom line: someone at Intel did not do their job. Just my opinion of course, but this $700M fiasco could have and should have been avoided.


ISSCC Semiconductors for Healthy Living
by Daniel Nenni on 02-26-2011 at 4:19 pm

Not only do I enjoy San Francisco, I really enjoy the International Solid-State Circuits Conference, which was held in San Francisco again last week. This was ISSCC #57 I believe. ISSCC attracts a different crowd than other semiconductor conferences, probably because there are no exhibits and no sales and marketing nonsense, just serious semiconductor people. Here are some interesting ISSCC stats:

  • 3k people attended
  • 30 countries represented
  • 669 submissions
  • 211 selected
  • 32% acceptance rate
  • 50/50 industry versus academia

The conference theme for 2011 was Electronics for Healthy Living. Electronics play a significant role in enabling a healthier lifestyle. Technology in the hospital enables doctors to diagnose and treat illnesses that might have gone undetected just a few years ago. External monitors provide us with a good assessment of our health risk and vital-sign status. Those with chronic diseases can live a more normal life with implanted devices that sense, process, actuate and communicate. Body Area Networks can be connected to a monitoring program running on our mobile phone. Those with disabilities also benefit from electronics that improve their lifestyle.

Wireless communications for healthy living has arrived. My running shoes talk to my heart monitor, my bathroom scale talks to my personal fitness program, my refrigerator talks but it lies so it is password protected. Your smartphone is the gateway and will process this “healthy” data affording us all a more “comfortable” lifestyle.

Of course it goes beyond that. The original heart pacemaker was both a medical and technological breakthrough that has saved millions of lives. Now we have deep brain stimulators, active embedded diagnostics and wearable sensors to prevent emergency health care situations. It’s a government conspiracy really, in an effort to not only reduce Medicare costs but to make us healthier and more productive so we can pay off the U.S. National Debt!

Significantly increasing our lifespan and raising the retirement age to 80 is the only hope for the social security system! But I digress…..

At the semiconductor level the technological challenges are daunting: power efficiency, energy harvesting, 3D ICs, analog/digital integration, RF, signal integrity, die size, and memory. I attended a few of the sessions and will post trip reports; hopefully other ISSCC attendees will post as well. I also spoke with Dr. Jack Sun, TSMC Vice President of R&D and Chief Technology Officer, and Philippe Magarshack, TR&D Group Vice President and Central CAD & Design Automation GM at STMicroelectronics, and I will write more about that in the trip reports.


Semiconductor Ecosystem Collaboration
by Daniel Nenni on 02-24-2011 at 8:30 am

After 27 years in semiconductor design and manufacturing I actually had to look up the word collaboration. Seriously, I did not know the meaning of the word.

Collaboration: a recursive process where two or more people or organizations work together to realize shared goals (this is more than the intersection of common goals seen in co-operative ventures; it is a deep, collective determination to reach an identical objective).

The type of collaboration I would like to see is between the PANEL and the AUDIENCE. There will be no PowerPoint slides or lengthy introductions. We will go right to the audience to better define the type of collaboration that will be required for semiconductor design and manufacturing to be successful in the coming nodes. Bring your toughest questions and they will get answered, believe it. Registration for EDA Tech Forum is FREE. This event is filling up fast so register today!

PANEL: Enabling True Collaboration Across the Ecosystem to Deliver Maximum Innovation

At advanced nodes manufacturing success is highly sensitive to specific physical design features, requiring more complex design rules and more attention to manufacturability on the part of designers. Experts will discuss how collaboration among EDA vendors, IP suppliers, foundries and design firms is the key to enabling efficient design without over-constraining and limiting designers’ creativity. The discussion will touch on what has been accomplished, including industry initiatives under way, and where we need to go in the future.

Moderator:
Daniel Nenni, SemiWiki.com

Panelists:
Walter Ng, Vice President, IP Ecosystem, GLOBALFOUNDRIES
Rich Goldman, Vice President Corporate Marketing & Strategic Alliances, Synopsys
John Murphy, Director of Strategic Alliances Marketing, Cadence
Prasad Subramaniam, VP of R&D and Design Technology, eSilicon
Michael Buehler-Garcia, Director of Marketing, Mentor Graphics

Walter’s position on collaboration:
– “Collaboration” has now become a buzz word
– Chartered/GF pioneered definition of “Collaboration” in semiconductor manufacturing space with origination of the JDA model for process development
o Benefits in bringing the best expertise worldwide from multiple companies to innovate at the leading edge of process development
o Best economic model for access to leading edge technology in the industry which is also scalable
– Efficiency in the value chain for design enablement support of a common process technology
– “Collaboration” is now a requirement in almost everything we do
– “Concurrent Process development and design enablement development” now the focus area of Leading Edge
– “Unencumbered Open, collaborative” model with Eco-system is a requirement; need to address concerns around leakage and differentiation

Rich’s position on collaboration:
Collaboration has been key in Semiconductors since the advent of ASIC and EDA in the early 80s. Our industry depends on it, and you can’t get anywhere without it. It has just been getting more critical every year, and every new node. Without it, forget about Moore’s Law, and forget about your own company. Doing it right IS hard, especially between competitors. It is an essential skill, and will just continue to get more critical. Get used to it. Love it. Embrace it.

John’s position on collaboration:
Enabling fast ramp by optimizing for rules, tools, and IP together. The success of a semiconductor manufacturer depends upon ramping process nodes to volume production as fast as possible to achieve attractive ROI for the staggering capital investment required to build fabrication facilities. SOC designers on the bleeding edge of advanced nodes need early access to process information, design and sign-off tools optimized for the node, and IP they trust will work in real silicon, to ramp their products to volume quickly to access ever-shrinking market windows. The age of enablement which focused primarily on process rules has ended; a new age has emerged that requires industry-level collaboration that optimizes rules, tools and IP together. This requires industry-level collaboration on a scale and depth never seen nor achieved before.

Prasad’s position on collaboration:
With the disaggregation of the semiconductor industry, it is no longer practical for a single company to have all the expertise required to design and manufacture integrated circuits. As a result, collaboration is a basic necessity for successful chip design. It is an ongoing process where every piece of the supply chain needs to collaborate with the others to ensure that the needs of the industry are met.

Michael’s position on Collaboration:

Market Scenario:
o Each IC node advance introduces more interactions between design and manufacturing processes, and consequently more complexity in the design flow and design verification process.
o These interactions are occurring at the same time as the business model has moved almost exclusively to fabless/COT. This means more companies doing their own design flow integration vs. just running a completely scripted flow (aka ASIC) provided by the silicon supplier.
o Each major EDA vendor has an interface program specifically designed to help facilitate integration between their products and other EDA tools.
Discussion Topics:
o How many designers know about these programs and are taking advantage of them?
o As the complexity of the design and process grows with each node, do we need more and tighter interactions?
o What roles should the foundry play, without becoming an ASIC supplier and controlling too much of the ecosystem?
o What should be the role and responsibility of the fabless company?


Wally Rhines DVCon 2011 Keynote: From Volume to Velocity
by Daniel Nenni on 02-23-2011 at 1:49 pm

Abstract:
There has been a remarkable acceleration in the adoption of advanced verification methodologies, languages and new standards. This is true across all types of IC design and geographic regions. Designers and verification engineers are surprisingly open to new approaches to keep pace with the relentless rise in design density. They are looking beyond simply increasing the volume of verification to instead using advanced techniques to improve the velocity of verification.

The result is design teams have not lost ground on meeting schedule goals or first-pass silicon success even as design complexity has grown. Now the focus is shifting to appreciably improving those metrics while shrinking verification costs.

Wally Rhines, Chairman and CEO of Mentor Graphics, will discuss the state of verification past, present and future. After examining the results from a leading verification survey, he’ll look at how advanced techniques are taking hold in mainstream design. Rhines will then offer his insights on where verification needs to evolve to ensure continued improvement.

Wally Rhines is not only my favorite EDA CEO, he is also one of the most inspirational speakers in EDA, so you don’t want to miss this one.

D.A.N.


Evolution of process models, part I
by Beth Martin on 02-23-2011 at 1:15 pm

(Figure: Dill parameters)

Thirty-five years ago, in 1976, the Concorde cut transatlantic flying time to 3.5 hours, Apple was formed, NASA unveiled the first space shuttle, the VHS vs. Betamax wars started, and Barry Manilow's "I Write the Songs" saturated the airwaves. Each of those advances, except perhaps Barry Manilow, was made possible by the first modern-era, high-production ICs.

During those years, researchers were anticipating the challenges of fabricating ICs that, according to Moore’s Law, would double in transistor count about every two years. Today, the solution to making features that are much smaller than the 193nm light used in photolithography is collectively referred to as computational lithography (CL). Moving forward into double patterning and even EUV Lithography, CL will continue to be a critical ingredient in the design to manufacturing flow. Before we get distracted by today’s lithography woes, let’s look back at the extraordinary path that brought us here.

ICs were fabricated by projection printing in the early ‘70s by imaging the full wafer at 1:1 magnification. But, linewidths were getting smaller, wafers getting larger, and something had to change. As chip makers dove into the 3um process node, they got some relief by using the newly introduced steppers, or step-and-repeat printing. Still, the threat of printing defects with optical lithography spurred research into modeling the interactions of light and photoresist.

In 1975, Rick Dill and his team at IBM introduced the first mathematical framework for describing the exposure and development of conventional positive tone photoresists: the first account of lithography simulation. What is now known as the Dill Model describes the absorption of light in the resist and relates it to the development rate. His work turned lithography from an art into a science, and laid the foundation for the evolution of lithography simulation software that is indispensable to semiconductor research and development today.
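For the curious, the Dill model reduces to two coupled equations for the light intensity I(z, t) and the normalized photoactive compound concentration M(z, t), characterized by the resist's A, B, C parameters: dI/dz = -I(AM + B) and dM/dt = -ICM. The sketch below integrates them with a crude explicit scheme; the parameter values and grids are illustrative, not those of any real resist.

```python
import numpy as np

# Crude numerical sketch of the Dill exposure model (illustrative parameters):
#   dI/dz = -I * (A*M + B)   absorption: bleachable (A) + non-bleachable (B)
#   dM/dt = -I * C * M       destruction of the photoactive compound (PAC)
A, B, C = 0.55, 0.05, 0.013        # assumed Dill parameters (1/um, 1/um, cm^2/mJ)
I0 = 100.0                         # incident intensity, mW/cm^2
thickness_um, nz = 1.0, 200        # resist thickness and depth grid
t_total_s, nt = 0.2, 2000          # exposure time and steps (20 mJ/cm^2 dose)

dz = thickness_um / nz
dt = t_total_s / nt
M = np.ones(nz)                    # PAC starts fully unexposed (M = 1)

for _ in range(nt):
    # Attenuate the intensity down through the resist for the current M(z)
    I = np.empty(nz)
    I[0] = I0
    for k in range(1, nz):
        I[k] = I[k - 1] * (1.0 - (A * M[k - 1] + B) * dz)
    # Bleach the PAC; I*dt is a dose in mJ/cm^2 since mW*s = mJ
    M -= I * C * M * dt

print("Remaining PAC at resist top/bottom:", round(M[0], 3), round(M[-1], 3))
```

The bleaching term is what made Dill's formulation so useful: as the PAC is destroyed the resist becomes more transparent, so exposure and absorption have to be solved together, as the loop above does.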

The University of California at Berkeley developed SAMPLE as the first lithography simulation package, and enhanced it throughout the ‘80s. Several commercial technology computer-aided design (TCAD) software packages were introduced to the industry through the ‘90s, all enabling increasingly accurate simulation of relatively small layout windows. As industry pioneer, gentleman-scientist, and PROLITH founder Chris Mack has described, lithography simulation allows engineers to perform virtual experiments not easily realizable in the fab, enables cost reduction by narrowing process options, and helps troubleshoot problems encountered in manufacturing. A less tangible but nonetheless invaluable benefit has been the development of “lithographic intuition” and fundamental system understanding for photoresist chemists, lithography researchers, and process development engineers. Thus patterning simulation is used for everything from design rule determination, to antireflection layer optimization, to mask defect printing assessment.

At the heart of these successful uses of simulation were the models that mathematically represent the distinct steps in the patterning sequence: aerial image formation, exposure of photoresist, post-exposure bake (PEB), develop, and pattern transfer. Increasingly sophisticated “first principles” models have been developed to describe the physics and chemistry of these processes, with the result that critical dimensions and 3D profiles can be accurately predicted for a variety of processes through a broad range of process variations. The quasi-rigorous mechanistic nature of TCAD models, applied in three dimensions, implies an extremely high compute load. This is especially true for chemically amplified systems that involve complex coupled reaction–diffusion equations. Despite the steady improvements in computing power, this complexity has relegated these models to be used for small area simulations, on the order of tens of square microns or less of design layout area.

TCAD tools evolved through the ‘90s, and accommodated the newly emerging chemically-amplified deep ultraviolet (DUV) photoresists. At the same time, new approaches to modeling patterning systems were developing in labs at the University of California at Berkeley and a handful of start-up companies. By 1996, Nicolas Cobb and colleagues created a mathematical framework for full-chip proximity correction. This work used a Sum of Coherent Systems (SOCS) approximation to the Hopkins optical model, and a simple physically-based, empirically parameterized resist model. Eventually these two development paths resulted in commercial full-chip optical proximity correction (OPC) offerings from Electronic Design Automation (EDA) leaders Synopsys and Mentor Graphics. It is interesting to note that while the term “optical” proximity correction was originally proposed, it was well known that proximity effects arise from a number of other process steps. The label “PPC” was offered to more appropriately describe the general problem, but OPC had by that time been established as the preferred moniker.
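As a rough illustration of the SOCS idea (my own toy example, not Cobb's implementation): the partially coherent aerial image is approximated as a weighted sum of a small number of coherent systems, I(x,y) ~ sum_k w_k |(mask convolved with kernel_k)(x,y)|^2, where the kernels and weights would normally come from an eigendecomposition of the Hopkins transmission cross coefficients. The sketch below uses made-up Gaussian kernels purely to show the structure of the computation.

```python
import numpy as np

# Toy SOCS-style image computation: I(x,y) ~ sum_k w_k * |mask (*) kernel_k|^2
# The kernels and weights below are invented placeholders; in a real OPC engine
# they come from an eigendecomposition of the Hopkins TCC for the illuminator
# and projection optics.
def gaussian_kernel(n, sigma):
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def socs_image(mask, kernels, weights):
    """Sum of Coherent Systems approximation of the aerial image."""
    f_mask = np.fft.fft2(mask)
    image = np.zeros_like(mask, dtype=float)
    for kern, w in zip(kernels, weights):
        f_kern = np.fft.fft2(np.fft.ifftshift(kern), s=mask.shape)
        field = np.fft.ifft2(f_mask * f_kern)   # coherent field for this kernel
        image += w * np.abs(field) ** 2         # incoherent sum of intensities
    return image

# A simple binary mask: one vertical line on a 128x128 grid
mask = np.zeros((128, 128))
mask[:, 60:68] = 1.0
kernels = [gaussian_kernel(128, s) for s in (2.0, 4.0, 8.0)]   # placeholder kernels
weights = [0.7, 0.2, 0.1]                                      # placeholder weights
aerial = socs_image(mask, kernels, weights)
print("Peak aerial intensity:", round(aerial.max(), 3))
```

Because each term is just a convolution followed by a squared magnitude, the per-kernel cost in this sketch is a pair of FFTs, which hints at why the approximation made full-chip application feasible.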

Demand for full-chip model-based OPC grew along with the increase in computing capability. With some simplifying assumptions and a limited scope of prediction, these OPC tools could handle layout areas several orders of magnitude larger than TCAD tools could. An important simplification was the reduction in problem dimensionality: a single Z-plane 2D contour is sufficient to represent the proximity effect that is relevant for full-chip OPC. The third dimension, however, is becoming increasingly important in post-OPC verification, in particular for different patterning failure modes, and is an area of active research.

Today, full-chip simulation using patterning process models is a vital step in multi-billion dollar fab operations. The models must be accurate, predictable, easy to calibrate in the fab, and must support extremely rapid full-chip simulation. Already, accuracy in the range of 1nm is achievable, but there are many challenges ahead in modeling pattern failure, model portability, adaptation to new mask and wafer manufacturing techniques, accommodating source-mask optimization, 3D awareness, double patterning and EUV correction, among others. These challenges, and the newest approaches to full-chip CL models, affect the ability to maintain performance and yield at advanced IC manufacturing nodes.

By John Sturtevant

Evolution of Lithography Process Models, Part II
OPC Model Accuracy and Predictability – Evolution of Lithography Process Models, Part III
Mask and Optical Models–Evolution of Lithography Process Models, Part IV


Custom and AMS Design
by Daniel Payne on 02-21-2011 at 10:06 pm

(Figure: Samsung 3D IC roadmap)

For IC designers creating full-custom or AMS designs there are plenty of challenges to getting designs done right on the first spin of silicon. Let me give you a sneak peek into what’s being discussed at the EDA Tech Forum in Santa Clara, CA on March 10th that will be of special interest to you:

3D TSVs (through-silicon vias) are gaining much attention, and for good reason: they help make our popular portable electronics slim and cost effective.

Panelists from the following companies will discuss: Is 3D a Real Option Today?

Rob Aitken, ARM Fellow, ARM
Bernard Murphy, CTO, Atrenta
Simon Burke, Distinguished Engineer, Xilinx
Kuang-Kuo Lin, Ph.D., Director, Foundry Design Enablement, Samsung
Juan Rey, Senior Engineering Director, Design to Silicon Division, Mentor Graphics

The TSMC AMS Design Flow 1.0 was announced back in June 2010, so come and find out what’s changed in the past 9 months. In contrast, the digital flow is already at rev 11.0, which indicates a much more standardized approach to convergence in the digital realm.

TSMC and Mentor will present on their AMS design flow:

Tom Quan, Deputy Director of Design Methodology & Service Marketing, TSMC
Carey Robertson, Director LVS and Extraction Product Marketing, Mentor Graphics

Custom physical design is becoming more interoperable as standards like OpenAccess take hold:

Learn how SpringSoft and Mentor are working together on signoff-driven custom physical design:

Rich Morse, Technical Marketing Manager, SpringSoft
Joseph C. Davis, Calibre Interface Marketing Manager, Mentor Graphics

The EDA Tech Forum is free to attend, but you'll want to sign up to reserve your spot today. This is a full-day event with other sessions that may answer some nagging questions you have about AMS design tools and flows.