
Lithography from Contact Printing to EUV, DSA and Beyond
by Paul McLellan on 03-05-2013 at 6:21 pm

I used my secret powers (being a blogger will get you a press pass) to go to the first day of the SPIE conference on advanced lithography a couple of weeks ago. Everything that happens with process nodes seems to be driven by lithography, and everything that happens in EDA is driven by the semiconductor process. It is the place to find out whether people believe EUV is going to be real (lots of doubt), whether e-beam will ever make it into production, and whether directed self-assembly is really a thing.

The keynote was by William Siegle, called Contact Printing to EUV, and covered the history and lessons learned over a career spanning from when lithography started to the present day (well, a little bit into the future even).

Back when ICs first started, the technology was 1X contact printing. The mask would physically be in contact with the wafer (like a contact print in photography before everything went digital). The design was actually hand-cut out of a red plastic called rubylith (originally a trademark, like escalator) at 5X the actual size. This would then be photographically reduced to create the master mask. The master mask would then be used to create submaster masks, and the submaster masks used to make working masks. The working masks didn't last very long because they were physically in contact with the wafer and so would get damaged and pick up defects fast. One lesson: beware of 1X printing.

IBM decided that the thing to do was to use the technology used to make the mask, photoreduction, to build a stepper (although I don't know if they used that name back then). But it turned out to be much harder than they thought and the project failed. Perkin-Elmer were the first to build a successful optical projection stepper. It was still 1X, with the polygons on the mask the same size as on the wafer, but the mask didn't contact the wafer, so the damage/defect problem was much reduced.

In the late 1970s, steppers finally made it into production. Embarrassingly for the US, Nikon and Canon soon had better machines than GCA and PE and dominated the market by the late 1980s. In the mid-1980s ASML emerged (and would later absorb PE's lithography business).

The US got seriously worried about the Japanese, not just in steppers but in semiconductor memory too. Until that point there had been no co-operation among companies, but that was about to change, with the creation of SRC, Sematech, the DoD VHSIC program, IMEC, LETI, and the College of Nanoscale Science and Engineering (CNSE) at the University at Albany. Another lesson: sharing enables faster progress.

There were dramatic advances in the 1990s as we went from 365nm to 248nm to 193nm light (we are still at 193nm today), along with continual improvement in photoresists, most notably the invention of CAR (chemically amplified resist), and the adoption of excimer lasers. IBM actually had all of this but kept it secret, until they realized that the equipment industry would never build the equipment IBM needed unless good resist was widely available.

There were some blind alleys too. E-beam was used in the mask shop and everyone wondered if it would make it into the production flow. The challenge with e-beam is resolution versus throughput: if the beam is small, the throughput is low. As we moved to smaller nodes, e-beam became non-competitive.

The big blind alley was X-ray lithography (around 1nm wavelength). This was killed by three things. First, it needed a synchrotron as a source of X-rays. Second, it required a 1X membrane mask (and remember, beware of 1X printing). But mostly it was built on the false assumption that we would not be able to get beyond 0.25um using optical technology. Well, we are pretty much at 14nm using that technology, which is 0.014um in the old currency. So one more lesson: never underestimate the extendability of existing technology.

What has enabled light to last so long is a combination of optimizing the mask (optical proximity correction, OPC) and holistic litho optimization.

There are also promising future technologies. Nano-imprint (where the mask is essentially pressed onto the wafer) has remarkable image fidelity. But it has all the same problems as the rubylith era of 1X contact printing, meaning that masters, submasters and working masks are required.

Of course the big hope for the future is EUV, with a wavelength of 13.5nm and a 4X-reduction reflective reticle. But we need to get the light source power up to 100-250W, and we need production-worthy masks, resists and metrology. The justification is economic, to avoid multiple patterning, but it won't be adopted until it can beat that cost ceiling.

And the future? Another rule: it is impossible to predict 10 years ahead. So at this point we can't tell whether EUV will make it, whether directed self-assembly will turn out to be a breakthrough, or whether carbon nanotubes can be manufactured into circuits economically. We can really only see out about 3 years.


Verification the Mentor Way
by Paul McLellan on 03-05-2013 at 3:05 pm

During DVCon I met with Steve Bailey to get an update on Mentor's verification products; they were also announcing some new capabilities. I also attended Wally Rhines' keynote (primarily about verification, of course, since this was DVCon; I blogged about it here) and the Mentor lunch on the verification survey they had recently completed (it was pretty much Mentor all day for me).

Verification has changed a lot over the past few years. Techniques that were once used only by the most advanced groups doing the most advanced designs have become mainstream. Of course this has been driven by necessity, as verification has expanded to take up more and more of the schedule. This is evident in the 75% increase in the number of verification engineers on a project since 2007, compared to the minor increase in the number of design engineers.


The specification for many designs is that each block must have 100% coverage, or waivers are required. Generating and justifying waivers to "prove" that certain code is unreachable, and so does not need to be covered, is very time consuming. NVIDIA estimated that code coverage took 9 man-years on a recent project. One new development is Questa CoverCheck, which automates coverage closure. Formally generated waivers for unreachable code reduce the effort of writing manual tests and also eliminate the tedious manual analysis needed to justify waivers to management.
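As a rough back-of-the-envelope illustration (the numbers below are made up, and the snippet is only the arithmetic of closure, not Questa's reporting): once code is formally proven unreachable it drops out of the coverage denominator, so the remaining gap is only the code that genuinely needs tests or hand-written waivers.

    # Toy coverage-closure arithmetic with made-up numbers (not Questa output).
    total_items        = 12_000   # e.g. statements/branches in a block
    covered_items      = 11_300   # hit by the existing regression
    proven_unreachable = 550      # excluded by formal analysis, waivers auto-generated

    raw      = covered_items / total_items
    adjusted = covered_items / (total_items - proven_unreachable)

    print(f"raw coverage:      {raw:.1%}")        # ~94.2% -- looks like lots of work left
    print(f"adjusted coverage: {adjusted:.1%}")   # ~98.7% -- only reachable gaps remain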


Another new capability is in the area of interconnect verification. Setting up all the blocks on a modern SoC so that they generate the required traffic on the interconnect is very time-consuming to do by hand, and the resulting simulation is large, needs a lot of memory and runs slowly. Instead, inFact can be used to generate the traffic directly, replacing the actual blocks of the design with traffic generators.


Mentor also has tools for rules-based verification that give verification engineers and, especially, project management insight into how far along verification really is. When this is tracked ad hoc, verification always seems to be nearly complete for most of the schedule. As the old joke goes, it takes 90% of the time to do the first 90% of the design, and then the second 90% of the time to do the remaining 10%. Switching to rules-based verification makes the visibility both better and more accurate.


Watch the Clock
by Paul McLellan on 03-05-2013 at 2:24 pm

Clock gating is one of the most basic weapons in the armoury for reducing dynamic power on a design. All modern synthesis tools can insert clock gating cells to shut down clocking to registers when the contents of the register are not changing. The archetypal case is a register which sometimes loads a new value (when an enable signal is present, for example) and otherwise recirculates the old value back from the output. This can be replaced with a clock gating cell driven by the same enable: the register is clocked only when a new value is loaded, and instead of recirculating the old value it is simply not clocked at all, so it retains the old value.
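A minimal behavioral sketch of that transformation (Python standing in for RTL purely to illustrate the logic; it is not how a synthesis tool represents it): the enable-mux register and the clock-gated register hold exactly the same value on every cycle, the only difference being that the gated one receives no clock edge when the enable is low.

    # Enable-mux register: clocked every cycle, recirculates its value when en=0.
    def mux_register(d_stream, en_stream, q=0):
        out = []
        for d, en in zip(d_stream, en_stream):
            q = d if en else q          # mux in front of the D input
            out.append(q)
        return out

    # Clock-gated register: only gets a clock edge when en=1, so it simply holds otherwise.
    def gated_register(d_stream, en_stream, q=0):
        out = []
        for d, en in zip(d_stream, en_stream):
            if en:                      # no edge, no switching activity when en=0
                q = d
            out.append(q)
        return out

    d  = [3, 5, 7, 2, 9, 4]
    en = [1, 0, 0, 1, 0, 1]
    assert mux_register(d, en) == gated_register(d, en)   # same values, far fewer clock edges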

The efficiency of clock gating can be measured by clock-gating efficiency (CGE). Static CGE simply counts the percentage of registers that are gated. But not every clock gate has much effect. In the archetypal example mentioned earlier, there is little power saving if the register loads a new value almost all the time, and a huge saving if a new value is almost never clocked in. A much better measure is dynamic CGE: the percentage of time that the clocks are actually shut off.
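A small worked example with made-up per-register numbers shows how different the two metrics can look: static CGE only asks whether a register is gated at all, while dynamic CGE weights each register by the fraction of cycles its clock is actually suppressed.

    # Per-register toy data: (is_gated, fraction of cycles the clock is suppressed).
    registers = [
        (True,  0.95),   # gated and almost always idle -- a big win
        (True,  0.05),   # gated but almost always loading -- little real saving
        (True,  0.50),
        (False, 0.00),   # not gated at all
    ]

    static_cge  = sum(gated for gated, _ in registers) / len(registers)
    dynamic_cge = sum(frac_off for _, frac_off in registers) / len(registers)

    print(f"static CGE:  {static_cge:.0%}")    # 75% -- three of four registers are gated
    print(f"dynamic CGE: {dynamic_cge:.0%}")   # 38% -- clocks are actually off far less often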

But even dynamic CGE ignores just how much power is actually saved. If the enable signal shuts off a large part of the clock tree then the power saving can be large and it is worth the effort to try and improve the enable signal so that it captures all the times that the clock can be suppressed. On the other hand, if an enable only applies to a small part of the design (perhaps just a single flop) then there is little point in trying to optimize the enable (and, in fact, just clock gating the register may not even save power versus leaving the multiplexor to recirculate the output bit).
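A rough way to quantify that uses the standard dynamic-power relation P ≈ C·V²·f for a clock net, scaled by the fraction of cycles the clock is gated off (the capacitance, voltage and frequency values below are invented for illustration): a gate in front of a large sub-tree saves orders of magnitude more power than one in front of a single flop, even at a lower gating percentage.

    # Rough per-clock-gate saving estimate: P_saved ~ C_gated * V^2 * f * fraction_off.
    # All numbers are illustrative, not from any real library or design.
    V_DD  = 0.9      # supply voltage in volts
    F_CLK = 1.0e9    # clock frequency, 1 GHz

    clock_gates = [
        # (name, gated clock-tree capacitance in farads, fraction of cycles gated off)
        ("big_bus_bank", 40e-12,   0.60),   # large sub-tree: worth improving the enable
        ("single_flop",   0.05e-12, 0.90),  # tiny load: little to gain even at 90% off
    ]

    for name, c_gated, frac_off in clock_gates:
        p_saved_mw = c_gated * V_DD**2 * F_CLK * frac_off * 1e3
        print(f"{name:13s} saves roughly {p_saved_mw:.3f} mW")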

Performing this analysis most accurately requires clock-tree synthesis (CTS) to have been completed. But CTS is part of the physical design flow, and by then it is too late to go back and optimize the RTL to incrementally reduce power. Instead, Apache's PowerArtist allows the analysis to be done at the RTL level using models of the clock tree and the associated interconnect capacitance. The enable efficiency can then be calculated for each clock gate, highlighting the cases where a gate controls a large amount of capacitance and so is a candidate for additional effort to improve the enable efficiency and further reduce power.

See Will Ruby’s blog on clock gating here.


Integrating Formal Verification into Synthesis
by Paul McLellan on 03-05-2013 at 1:29 pm

Formal verification can be used for many things, but one is to ensure that synthesis performs correctly and that the behavior of the output netlist is the same as the behavior of the input RTL. But designs are getting very large, and formal verification tools are complex to use, especially if the design is too large for the tool to handle in a single run. This is an especially severe problem for Oasys RealTime Designer because its capacity is so much larger than that of other synthesis tools. Using formal verification typically requires complex scripting and manual intervention to get results with reasonable runtimes.

Oasys and OneSpin Solutions have just announced an OEM agreement. Now, in EDA, OEM agreements really only work when the product being sold is integrated inside another (such as Concept Engineering’s schematic generator). Otherwise customers always prefer to buy the two products from their respective companies. This OEM is a tight integration. OneSpin is licensing a portion of its OneSpin 360 EC technology, automated functional equivalence checking software, to Oasys to integrate with RealTime Designer.


The integrated product allows RealTime Designer to drive the formal verification process automatically, dividing the design up into portions that can then be verified in parallel using multiple licenses. For example, a nearly 5 million instance design (so perhaps 30 or 40 million gates) can be verified in just over 2 hours using 10 licenses. The integration is fully compatible with the low power and DFT flows in RealTime Designer, correctly handling clock gating and scan chain insertion.

OneSpin EC equivalence checking ensures that the RTL design and the output gate-level netlist will produce the same results for the same inputs under all circumstances. It doesn't use simulation-type approaches but is based on mathematically proving that this is so. In the event that it is not (which would indicate a bug in RealTime Designer, unless manual intervention has taken place), it produces a counterexample.
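To make the "mathematically proving" point concrete, here is a minimal sketch of what equivalence checking means (this is not OneSpin's algorithm; real equivalence checkers use formal engines such as SAT or BDDs rather than enumeration, and the functions and partitioning below are toys): each partition compares a reference cone of logic against its implementation over every possible input, and any disagreeing input is exactly the counterexample described above. The partitions can be checked in parallel, much like the multiple-license flow mentioned earlier.

    # Toy combinational equivalence check: exhaustive comparison of two logic cones.
    # Real tools prove this with SAT/BDD engines rather than enumeration.
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def rtl_mux(a, b, c):           # "golden" RTL behavior: q = b when a else c
        return b if a else c

    def gate_mux(a, b, c):          # synthesized gate-level version of the same mux
        return (a & b) | ((1 - a) & c)

    def gate_mux_buggy(a, b, c):    # a deliberately wrong netlist, to show a counterexample
        return (a & b) | c

    # Each "partition" pairs a reference cone with its implementation.
    PARTITIONS = [("mux_ok", rtl_mux, gate_mux), ("mux_bad", rtl_mux, gate_mux_buggy)]

    def check_partition(idx):
        name, ref, impl = PARTITIONS[idx]
        for a, b, c in product((0, 1), repeat=3):    # exhaustive, not sampled
            if ref(a, b, c) != impl(a, b, c):
                return name, (a, b, c)               # the counterexample
        return name, None                            # proven equivalent

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=2) as pool:   # think "licenses"
            for name, cex in pool.map(check_partition, range(len(PARTITIONS))):
                print(name, "equivalent" if cex is None else f"counterexample at {cex}")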


Image Sensor Design for IR at Senseeker
by Daniel Payne on 03-05-2013 at 10:30 am

Image sensors are all around us, the cell phone camera being a popular example and the 35mm DSLR being another. Last week I spoke with Kenton Veeder, an engineer who started Senseeker, his own image sensor IP and consulting services company. Instead of focusing on the consumer market, Kenton's company does sensor design work for the military and scientific markets.


Read Out Integrated Circuit (ROIC)


Cavium Adopts JasperGold Architectural Modeling
by Paul McLellan on 03-05-2013 at 7:00 am

Cavium designs some very complex SoCs containing multiple 32-bit and 64-bit ARM or MIPS cores. This complexity leads to major challenges in validating the overall chip architecture, to ensure that their designs, with performance as high as 100Gbps, will meet the requirements of their customers once they are completed.

Cavium have decided to use Jasper's JasperGold Architectural Modeling App to allow their architects to better specify, model and verify the complex behavior of these bleeding-edge designs. I've written before on how ARM has been using Jasper's architectural modeling to verify their own cache protocols (and, indeed, found some corner-case errors that all their other verification had missed). Cavium's multi-core, multi-processor chips certainly have very complex interconnection protocols between the processors and memories.


The JasperGold Architectural Modeling App provides an easy, well-defined methodology for efficient modeling and verification of complex protocols. Jasper's Modeling App models a large part of the protocol much faster and with less effort than other modeling and validation methods. It captures protocol specification knowledge at the architectural level, performs exhaustive verification of complex protocols against the specification, creates a golden reference model that can be used to verify the RTL implementation of the protocol, and automates protocol-related property generation and debugging aids.


Synopsys ♥ FinFETs
by Daniel Nenni on 03-03-2013 at 6:00 pm

FinFETs are fun! They certainly have kept me busy writing over the past year about the possibilities and probabilities of a disruptive technology that will dramatically change the semiconductor ecosystem. Now that 14nm silicon is making the rounds I will be able to start writing about the realities of FinFETs, which is very exciting!


From Moore’s Law, we can infer that FinFETs represent the most radical shift in semiconductor technology in over 40 years. When Gordon Moore came up with his “law” back in 1965, he had in mind a design of about 50 components. Today’s chips consist of billions of transistors and design teams strive for “better, sooner, cheaper” products with every new process node. However, as feature sizes have become finer, the perils of high leakage current due to short-channel effects and varying dopant levels have threatened to derail the industry’s progress to smaller geometries.

Synopsys published an article, FinFET: The Promises and the Challenges, which is a very good primer and describes the FinFET promise:

Leading foundries estimate the additional processing cost of 3D devices to be 2% to 5% higher than that of the corresponding Planar wafer fabrication. FinFETs are estimated to be up to 37% faster while using less than half the dynamic power or cut static leakage current by as much as 90%.

The foundries, on purpose or by accident, made the right decision in taking the 20nm planar process and adding FinFETs. Ramping a new process and a new 3D transistor at the same time would have been daunting for the SoC-based fabless semiconductor ecosystem. Even Intel, which has 22nm Tri-Gate microprocessors, has yet to show me a 3D SoC. FinFET design enablement (EDA and IP) is a big part of that transition, and I have to give Synopsys the advantage here.

The foundries' intent is to ensure the transition to FinFETs is as transparent as possible, allowing users to seamlessly scale designs to increasingly smaller geometry processes. Getting the maximum benefit from this technology will require implementation tools that minimize power consumption and maximize utilization and clock speed. FinFETs require specific enhancements in the following areas: TCAD tools, mask synthesis, transistor models, SPICE simulation tools, RC extraction tools and physical verification tools.


Synopsys's building of critical IP mass over the years, especially the acquisition of Virage Logic, has given them an early and intimate look at the bleeding edge of process development. Yes, I have seen fluffy 14nm test-chip press releases from all vendors, but foundation IP (SRAM) is where the rubber first meets the road, and that gives Synopsys a lead on tool development.

That is why I asked Raymond Leung, VP of SRAM development at Synopsys, to present at the EDPS Conference FinFET Day that I'm keynoting. Not only does Raymond have deep SRAM experience from Virage, he also led SRAM development at UMC. At Synopsys, Raymond now gets first silicon at the new process nodes at ALL of the foundries, so his presentation on FinFET design challenges will be something you won't want to miss!

Don't forget to log into the webinar I'm moderating, Unlocking the Full Potential of Soft IP, with Atrenta, TSMC, and Sonics on Tuesday, March 5, 2013 at 9 a.m. Pacific Time. You just never know what I'm going to say, so be sure to catch the live uncensored version!


DVCon 2013 – Hope For EDA Trade Shows
by Randy Smith on 03-03-2013 at 2:04 pm

Those of us who spend a lot of time at EDA marketing events cannot help but notice the dramatic shrinking of the floor space, and to some extent attendance, at the major EDA shows such as DAC and DATE. DAC used to occupy both the north and south halls of Moscone Center when in San Francisco, but now only takes up one hall. So, I did not have high expectations when going to DVCon 2013 in San Jose, California this week – but I was very pleasantly surprised.

First, the decline in the number of exhibitors at DAC is not the fault of MP Associates, the company that runs DAC (and DVCon). Very simply, there are far fewer EDA companies now than there were ten years ago. I believe the cause of this contraction was a combination of a bad international economy and Mike Fister's misguided belief that Cadence would not make any more significant acquisitions. Upon arriving at Cadence in 2004, Mr. Fister announced that Cadence would not use acquisitions as part of its development strategy, and he also shut down Telos Ventures, Cadence's own venture capital company. By then Telos had three EDA veterans – Bruce Bourbon, Jim Hogan, and Charlie Huang – making investments in EDA and other high technology areas. Closing Telos took tens of millions of dollars in early round funding off the market and signaled to other investors that it was time to exit EDA. Of course we can now easily see that Mr. Fister's prediction was wrong, but the damage was done, and seed and A-round money has simply dwindled to near zero.

Another reason for the decline is the rise of the private shows offered by the big EDA companies. Customers have a limited travel budget for attending trade shows, and some may choose to attend only SNUG and CDNLive. In particular, the customers' EDA department management can only attend so many shows, so some will not make it to DAC or DATE. The open shows therefore see lower attendance because of the pull from private shows as well.

But, while DAC has seen a decline, DVCon is breaking its previous attendance records, and this year's exhibit area was hopping with customers and vendors – 33 vendors with only one unused booth. There are several reasons for this. First, design verification is one of the fastest growing segments of EDA, as it is one of the crucial elements in the overall system design space. With ever-increasing content in chips, the job of verifying all that content gets more difficult every year. Customers are investing in simulation environments, formal techniques such as assertion-based verification, simulators, emulators, verification services, and verification intellectual property (VIP). Second, DVCon is a very focused show. The attendees and the vendors' employees at the show all have a tight focus on verification. They are passionate about it, but there is also a greater sense of the need for collective solutions. Yes, Synopsys and Cadence had larger booths, but they did not suck attendees off the floor and keep them away from the other vendors, as they often do at DAC. There were lots of discussions between the vendors and signs of cooperation in this segment. I think Accellera plays a significant role in this behavior as well.

Part of the reason the focus on this segment works so well is that it covers just the right breadth of technology. Contrast this with DesignCon. The product range at DesignCon spans at least from logic synthesis to design rule checking (DRC), with many design creation and analysis tools in between, plus a significant array of semiconductor IP vendors. It is so large that it hardly seems like there is any focus at all; its scope is still more than half that of DAC. DVCon's focus adds to the buzz because all the attendees are talking about the same thing – their energy builds on one another's.

I hope that MP Associates and other trade show organizers will take note of DVCon’s success and try to come up with other events of similar scope. It was nice to feel the excitement around a segment of EDA again. Thank you, DVCon.


TSMC ♥ Atrenta (Soft IP Webinar)
by Daniel Nenni on 03-02-2013 at 4:00 pm

Back in 2011, TSMC announced it was extending its IP Alliance Program to include soft, or synthesizable, IP. Around that time it was also announced that Atrenta's SpyGlass platform would be used as the sole analysis tool to verify the completeness and quality of soft IP before it is admitted to the program. Since then, the program has grown quite a bit. At present, I believe TSMC is closing in on 20 IP partners that have qualified for inclusion in the program.

Why would TSMC want to focus on soft IP, and why the love affair with Atrenta? If you dig a little, it all makes sense. The third-party IP content in most chips today is 80 percent or more. The winner is no longer the company with the most novel circuit design; it's the company that picks the best IP and successfully integrates it first. Because of the need for competitive differentiation, soft IP is becoming the preferred technology: you can tweak the content or function of soft IP, and it's a lot harder to do that with hard IP.

“Atrenta will be known for its relentless focus to deliver high quality, innovative products that help to enable design of the most advanced electronic products in the world. Our customers routinely benefit from improved quality, predictability and reduced cost. We maximize value for every customer, employee and shareholder.”

So TSMC is on to something. Why not close the customer earlier in the design flow? If I have a choice of two foundry vendors, and one tells me about soft IP quality and one doesn’t, I know who I’m calling back. In sales terms, TSMC is expanding the reach of their “funnel”. So why is SpyGlass the only tool used at the top of that funnel? The aforementioned love affair between TSMC and Atrenta seems to be based on a one-stop shopping approach. TSMC’s quality check for its Soft IP Alliance looks at a lot – power, test, routing congestion, timing, potential synthesis issues and more. SpyGlass has been around a long time and covers all of those requirements. The other option is to work with multiple vendors to get the same coverage. It seems to me as long as SpyGlass is giving reliable answers, it will continue to be the sole tool at the gate to the Soft IP Alliance.

This doesn't necessarily mean Atrenta has a monopoly on the program. TSMC recently announced an endorsement of Oasys as another tool in the Soft IP program (see TSMC ♥ Oasys). I expect more such announcements. It's a good idea for soft IP suppliers to have multiple options to help achieve the quality and completeness TSMC is requiring.

If you want to learn more about what TSMC is up to with this program, I'm moderating a webinar on March 5th that will cover all the details. See Unlocking the Full Potential of Soft IP (Webinar) for more information.

Agenda:

  • Moderator opening remarks – Daniel Nenni (SemiWiki)
  • The TSMC Soft IP Alliance Program – structure, goals and results – (Dan Kochpatcharin, TSMC)
  • Implementing the program with the Atrenta IP Kit – (Mike Gianfagna, Atrenta)
  • Practical results of program participation – (John Bainbridge, Sonics)
  • Questions from the audience (10 min)

Anyone who is contemplating the use of soft IP for their next SoC project should attend this webinar, absolutely!


SoC Derivatives Made Easier
by Paul McLellan on 03-01-2013 at 2:44 pm

Almost no design these days is created from scratch. Typical designs can contain 500 or more IP blocks. But there is still a big difference between the first design for a new system or platform, and later designs which can be extensively based on the old design. These are known as derivatives and should be much easier to design since they can leverage not just the pre-existing IP but much of the way that it has been interconnected (not to mention much or all of the software that today forms so much of the investment in an SoC).


Atrenta's GenSys is a tool that structures the whole process of doing derivative designs. It reads in an existing design in RTL (or a standard format like IP-XACT) and brings the design database into a structured object model. This is a more reliable, more flexible and simply easier way to make changes according to a derivative design specification.

GenSys makes it easy to add, delete and update the existing IP blocks and their interfaces. Only a few clicks are required to remove a block or to update one. It also provides a very interactive way to build connections which can be used to update the existing connectivity with full transparency. Altering the hierarchy to group or ungroup blocks is also straightforward.

Taping out the design is also much easier. The design can be saved in interoperable formats such as IP-XACT or XML, and the RTL netlist and schematics for the derivative design are then produced. The high-level internal view means that it is simple to generate extensive design documentation.
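As a purely hypothetical sketch of the "structured object model" idea (the classes and methods below are invented for illustration and are not GenSys's API or file formats): once the design lives as objects rather than raw RTL text, a derivative becomes a handful of targeted edits, after which the RTL, IP-XACT and documentation can be regenerated.

    # Hypothetical toy object model for derivative edits -- invented names only.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        name: str
        ports: list                  # port names; widths omitted for brevity

    @dataclass
    class Design:
        name: str
        blocks: dict = field(default_factory=dict)
        nets: list = field(default_factory=list)    # (driver "block.port", load "block.port")

        def add_block(self, blk):
            self.blocks[blk.name] = blk

        def remove_block(self, name):
            self.blocks.pop(name)
            # drop any connection touching the removed block
            self.nets = [n for n in self.nets
                         if not any(p.startswith(name + ".") for p in n)]

        def connect(self, driver, load):
            self.nets.append((driver, load))

    # Derivative flow: start from the parent design, swap one IP block, rewire.
    soc = Design("parent_soc")                        # in reality: imported from RTL or IP-XACT
    soc.add_block(Block("clkgen", ["clk_usb"]))
    soc.add_block(Block("usb2_ctrl", ["clk", "data"]))
    soc.connect("clkgen.clk_usb", "usb2_ctrl.clk")

    soc.remove_block("usb2_ctrl")                     # a few targeted edits, not RTL surgery
    soc.add_block(Block("usb3_ctrl", ["clk", "data"]))
    soc.connect("clkgen.clk_usb", "usb3_ctrl.clk")
    # ...then regenerate the RTL netlist, IP-XACT and documentation from the edited model.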

GenSys is a framework for creating derivative designs that boosts productivity through quick iterative cycles while eliminating errors in the design. Its key capabilities include:

  • Ease of data entry
  • Fast design build time
  • Ability to accept large modern designs
  • Support for last-minute ECOs
  • Automated generation of RTL netlist, design reports and documentation
  • Customizable to support in-house methodologies
  • Standard database backend allowing API-based access

GenSys has been used to tape out many chips at leading semiconductor companies. The GenSys white paper on derivatives is here.