FineSim Webinar
by Paul McLellan on 02-07-2012 at 2:00 pm

FineSim is Magma’s circuit simulator, and it has been doing extraordinarily well. In my opinion it is one of the big reasons that Synopsys is acquiring Magma (presumably, still subject to approval of course). FineSim is especially strong in the memory market, with over 70% of the top 5 DRAM manufacturers and the top 10 flash manufacturers using it, plus over half of the top 20 semiconductor manufacturers. For a relatively new product this is impressive growth.

FineSim was written from the start to be scalable and to take advantage of multi-core workstations and racks of servers. This means that it scales to simulate large analog designs that could not have been verified with previous SPICE engines. It is actually two products, FineSim Pro and FineSim SPICE.

There is a huge explosion in the need for analog, RF and mixed-signal solutions. For example, your smartphone may have as many as 10 radios in it: 4 GSM bands, GPRS, EDGE, 3 UMTS bands, HSDPA, WLAN, GPS, Bluetooth. Plus modern processes require characterization at many more corners than the traditional four we could get away with just a few process generations ago.

There is a new FineSim webinar that covers the use of FineSim for various kinds of simulation. It is 2-5X as fast as the competition on a single CPU and, of course, gets faster still with multiple CPUs.

Some of the things that will be covered in the webinar are:

  • multi-threaded/multi-machine performance and scalability that allows you to simulate 1.7 million transistors in just 16 hours with SPICE-accurate results
  • support for industry standard formats, enabling seamless integration into existing design and verification environments
  • extensive reliability analysis to ensure design quality
  • superfast runtime that allows you to increase test coverage without having to trade off accuracy
  • AMS (analog/mixed-signal) verification
  • Fast Monte Carlo (FMC) flow
  • FineSim RF

Register for the webinar here.


Virtuoso has got you cornered
by Paul McLellan on 02-07-2012 at 1:33 pm

Things you don’t know about Virtuoso: we’ve got you cornered.

That is the title on a Cadence blog item last week. It is actually about variability and how to create various corners for simulation and analysis, but given Cadence’s franchise for Virtuoso, its lock-in through SKILL-based PDKs and so forth, it is not perhaps the ideal message to be sending. There is plenty of resentment at both foundries and customers about Cadence’s lack of openness in this area.

The blog is actually about the new features in Virtuoso supporting process variation and the need in a modern design to characterize it at dozens of different points, not just in the traditional PVT (process, voltage, temperature) realm but also device parameters and even data collected from Monte Carlo analysis.

Most of the blog is about how to expand various corners without creating a combinatorial explosion where every parameter appears with every combination of others, which is not normally all that useful.
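To make the combinatorial problem concrete, here is a minimal sketch in Python (in no way Cadence's actual algorithm; the parameter values are invented) of how a naive cross-product of corner variables explodes, and how a curated subset keeps the run count manageable:

```python
# A toy illustration of corner explosion; all values are hypothetical.
from itertools import product

process = ["ss", "tt", "ff", "sf", "fs"]
voltage = [0.9, 1.0, 1.1]          # V
temp    = [-40, 25, 125]           # C
mc_runs = range(20)                # hypothetical Monte Carlo samples

# Naive expansion: every parameter combined with every other.
full_cross = list(product(process, voltage, temp, mc_runs))
print(len(full_cross))             # 5 * 3 * 3 * 20 = 900 simulations

# A curated set: worst-case PVT extremes plus Monte Carlo at nominal only.
pvt_extremes = [(p, v, t, 0) for p, v, t in product(("ss", "ff"), (0.9, 1.1), (-40, 125))]
mc_nominal   = [("tt", 1.0, 25, s) for s in mc_runs]
print(len(pvt_extremes) + len(mc_nominal))   # 8 + 20 = 28 simulations
```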


Synopsys latest acquisitions: ExpertIO (VIP) and Inventure (IP)… Any counter-attack from Cadence?
by Eric Esteve on 02-07-2012 at 12:29 pm


Even if the acquisition of ExpertIO by Synopsys, coming after the nSys acquisition a couple of months ago, will not have a major impact on Synopsys’ balance sheet, it will again change the Verification IP market landscape. The acquisition of Inventure, a subsidiary of Zuken, will have a major impact on the Interface IP market, even if only in Japan, where Inventure has been very successful in its domestic market. This acquisition will also have an impact on the balance sheet of an IP vendor based in Canada; we will see why.

As already explained in a previous post, Synopsys’ strategy was to offer “bundled” VIP around IP sales, which is not the best way to valorize the VIP product, as Design IP customers expect to get a bundled VIP almost for free. After the nSys acquisition, the acquisition of ExpertIO most likely reflects a real strategy inflection: Synopsys has decided to attack Cadence in the field where Cadence has been the strong leader, especially after its acquisition of Denali (May 2010), facing competition made up of small companies only (nSys, ExpertIO, PerfectVIP, Avery…).

Another side effect is that the old “Yalta” description of the market (Cadence dominant in VIP, Synopsys in IP) is not true anymore! The VIP market is, by definition of “verification”, limited to protocol-based functions like USB, PCIe, SATA, AMBA, MIPI and Ethernet, or to memory interfaces like DDRn, GDDRn, Flash and so on. In other words, the VIP market is far from being a huge market (even if we still don’t know the market size, as no survey has been done so far); IPNEST’s evaluation is between $50M and $100M, so please don’t expect double-digit precision! Going after this market can be a way for Synopsys to apply the “barbed wire fence strategy” described by Ed McKernan: to protect their Interface IP market share, Synopsys is expanding their presence (extending the size of the ranch) to make it more difficult for the competition to attack the core business (IP). That’s one explanation; the other could simply be that Synopsys needs to expand into VIP to guarantee a higher growth rate in a market of limited size. You choose!

The acquisition of Inventure is easier to understand. Anybody who has tried to develop business in the Japanese market knows that it’s not easy; the go-to-market rules are different from those in the Western world. Advertising is not enough: your customers expect a very high quality product (NOT just a well marketed one) and an outstanding level of technical support. Needless to say, they also expect you to speak Japanese… The success of Inventure in PCI Express IP since 2007, and in SuperSpeed USB more recently, was certainly linked to their ability to serve their Japanese customers best. I don’t know how successful Synopsys was in the Japanese market, but I am sure that after this acquisition, they will be!

The side effect of this acquisition is that Snowbush (the Canadian IP vendor), which had built a strong partnership with Inventure by bringing its high quality PHY IP to complement the controller IP sold by Inventure, will most probably see its PHY IP sales in Japan vanish. IPNEST’s evaluation was that about 25% of Snowbush’s revenue came from this partnership (initiated in 2008, thanks to a well known consultant; guess who). But Snowbush’s future will change anyway: being part of Gennum, they have been acquired by Semtech, ironically two days after Inventure’s acquisition by Synopsys!

By Eric Esteve from IPNEST


AMD and GlobalFoundries?
by Daniel Nenni on 02-05-2012 at 1:00 pm

One thing I do as an internationally recognized semiconductor blogger is listen to the quarterly conference calls of companies that drive our industry. TSMC is always interesting; I really like the honesty and vision of Dr. Morris Chang. Cadence is good; I always want to hear what Lip-Bu Tan has to say. Then there are Oracle and Larry Ellison, Synopsys, Intel, AMD, Qualcomm, Broadcom, Altera, Nvidia, and a couple of others.

If I miss the actual call I get the transcript from Seeking Alpha. Here is the most recent AMD call Q4 2011. I post this blog as an observation and discussion rather than a report of facts and figures. I respect GlobalFoundries and hope they succeed but I do not understand the relationship between AMD and GFI. But then again, I’m just a blogger so help me out here:

Granted, the “spin-off” of a new corporate entity is a difficult endeavor, especially when AMD retained a substantial % of GFI (and ATIC, GFI’s parent company, received a substantial % of AMD).

For a while, AMD would routinely incorporate a loss in their quarterly results, based upon their percentage ownership of GF which made sense to me. Prior to the spin-off, AMD’s losses reflected 100% of the fab expense, and immediately after the spin-off, AMD’s one-third ownership of GF resulted in roughly 1/3 of the previous losses still being reported quarterly.

However, AMD’s % ownership of GFI declined, due to the increased investment by ATIC in GFI, and the acquisition of Chartered Semi. When AMD’s ownership was reduced below 15%, the declaration was that “we will no longer incorporate the ongoing financial results of our ownership in GFI in quarterly reports… the investment in GF will be treated as a long-term asset.” OK, that makes sense too.

Then, there were different classes of GFI shares issued. And, throughout 2010-11, there were repeated updates in AMD’s GAAP quarterly financials, based upon updates to the book value of the investment in GFI, in contradiction to the earlier declaration.

In a couple of cases, AMD reported a significant gain in the value of its investment, due to a recalculation of the value of its (diminished) percentage share in GF, during the acquisition of Chartered:

http://www.sec.gov/Archives/edgar/data/2488/000119312511163112/filename1.htm

However, in the most recent 4Q11 fiscal quarter, AMD recorded a loss of $209M. It is unclear to me how AMD intends to represent the ongoing value of their investment in GlobalFoundries.

Actually, it’s hard for me to believe that their value in GFI could increase, as was reported in a couple of recent quarters. AMD no longer invests in the ongoing operations of GFI, ATIC does. I highly doubt GFI is profitable, based upon the losses incurred prior to spin-off plus the integration of Chartered Semi, lacking new sources of external customer revenue. Yet, AMD has recently reported both substantial quarterly GAAP gains and losses with regards to GFI, amounts which far exceed their operating profit each quarter. This financial reporting method is very puzzling to say the least.

The “cost-plus” wafer purchase agreement that AMD established with GFI is clearly an opportunistic one for AMD, which leads to a discussion of a very unusual financial agreement:

http://semiaccurate.com/2011/04/04/amd-and-global-foundries-agreement-not-what-it-seems/

AMD is contractually bound to provide additional payments (up to $400M) to GFI this year, above and beyond the wafer purchase agreement between the two entities. The explanation for these payments was “based upon obtaining sufficient 32nm yields”. Even for a foundry blogger it is hard to understand how a wafer-purchase agreement requires an additional “bonus payment”, up to $100M quarterly. AMD must be assuming it can move lots of additional (32nm SOI) product, to make a committed payment based upon wafer yield, not wafer volume. The amount of $100M per quarter is dangerously close to AMD’s quarterly free-cash flow and non-GAAP profits.

And now IBM “quietly” starts to make chips for AMD?

So, it is not clear to me what relationship AMD and ATIC have maintained, in terms of the value of AMD’s holding in GFI, and the financial obligations (beyond customer and supplier) that AMD has to ATIC in 2012. This lack of transparency is troubling, and in my mind it brings into question the credibility of each quarterly financial report.

For that reason alone, I would consider AMD to be an unsound (long-term) investment, although it certainly makes for interesting “short-term trading”. This is an observation and opinion, for entertainment purposes only; I do not own AMD stock, nor do I have investments in related companies.

There is also some good and some perplexing news from GF, unrelated to its relationship with AMD:

GFI announced that the new fab in Malta, NY, will be providing prototype wafers to IBM in mid-2012. That’s the good news.

However, it’s not really a “big win” for GFI, which may not be clear from the press release. Chartered Semi has been a second source for the processor parts used in the Microsoft Xbox 360 family. Microsoft insisted, of course, that IBM Microelectronics have a viable second source, and IBM ensured that Chartered was a qualified supplier of the corresponding SOI technology.

So, in my opinion, this current announcement is really just an extension of that second-source agreement – Microsoft clearly demanded a second source for the processor in the upcoming Xbox 720 product. However, the Xbox 360 parts were never really a large source of profit for Chartered – it was more a way for Microsoft to negotiate the best pricing from IBM. Although additional revenue for GFI is a good thing, the parameters of this agreement are likely not very different from the previous second-sourcing deal, and thus, not an exclusive, nor high-margin revenue opportunity.

The perplexing thing is that the resources invested in Malta on 32nm SOI bring-up as a second source to IBM will be diverted from 28nm bulk technology bring-up. In the 2013-2015 time frame, TSMC has made it clear that 28nm is going to be a very important source of revenue for them and I know this to be true.

Last week I sent a version of this to GFI for clarification / comment but have not heard back yet. If somebody else out there has more information or can correct me please post in the comment section or email me directly: dnenni at SemiWiki dot com.




DVCon: Formal Verification with lunch
by Paul McLellan on 02-03-2012 at 6:03 pm

At DVCon on Thursday March 1st (St David’s day for any Welsh readers) Jasper is sponsoring lunch from 12pm to 1.30pm. It will take place in the Cascade/Sierra ballrooms.

During lunch there will be a panel discussion, Formal Verification from Users’ Perspectives, with real users on how they mitigate risk in their designs while meeting the tight schedules of modern projects. The panel will share their experiences with formal verification and how the formal approach has helped them in different ways in their design and verification methodologies.

The panelists are:

  • Jon Michaelson from nVidia
  • Ambar Sukar from ParadigmWorks
  • Someone from ARM
  • Probably one other company too

Details of the event are here. There is no need to register for the lunch itself, although you must be registered for DVCon.

Immediately following the lunch, at 1.30pm in the Siskiyou ballroom, Jasper is running a tutorial, Leveraging Formal Verification Throughout the Entire Design Cycle. The tutorial lasts until 5pm and is conducted by Lawrence Loh and Norris Ip. They will talk about the benefits of using formal technology in such areas as:

  • Stand-alone verification of architectural protocols
  • Designer sandbox testing for RTL development
  • End-to-end data packet integrity
  • SoC connectivity and integration verification
  • Root-cause isolation and full proofs during post-silicon debug

Formal verification can be a valuable addition to traditional verification methods. For example, applying formal techniques early in the design cycle to exhaustively verify block-level design functionality can produce higher quality RTL delivered to unit and system level verification. Attendees will learn about new formal technologies and flows that enable designers and verification engineers to augment existing flows. Also included will be discussions about how effort applied to one application can be leveraged in others. When applied intelligently, formal technologies can enhance traditional design and verification flows to help reduce the risks associated with increasing SoC complexity.
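To give a flavor of what “exhaustive” means here, below is a toy block-level proof using the open-source Z3 SMT solver from Python. It is only an illustration of the general idea, not Jasper’s technology, and the saturating-adder design and property are invented for the example:

```python
# Exhaustively prove a property of a small "RTL" block with an SMT solver.
# Requires the z3-solver package; the design and property are hypothetical.
from z3 import BitVec, BVAddNoOverflow, If, UGE, prove

WIDTH = 8
a, b = BitVec("a", WIDTH), BitVec("b", WIDTH)
MAX = 2**WIDTH - 1

# Design intent: an unsigned saturating adder that clamps instead of wrapping.
out = If(BVAddNoOverflow(a, b, False), a + b, MAX)

# Property: the result is never smaller than either operand (no silent wrap),
# checked for all 2**16 input combinations rather than a sampled subset.
prove(UGE(out, a))
prove(UGE(out, b))
```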

Details of the tutorial are here. You can add this tutorial to your DVCon pass with tutorials, or even register just to attend this one tutorial. Details of registration are here.


Using "Apps" to Take Formal Analysis Mainstream

Using "Apps" to Take Formal Analysis Mainstream
by Daniel Payne on 02-02-2012 at 12:47 pm


On my last graphics chip design at Intel the project manager asked me, “So, will this new chip work when silicon comes back?”

My response was, “Yes, however only the parts that we have been able to simulate.”

Today designers of semiconductor IP and SoC have more approaches than just simulation to ensure that their next design will work in silicon. Formal analysis is an increasingly popular technology included in functional verification.

DVCon 2012


I received notice of DVCon 2012 coming up in March, and saw a tutorial session called: Using “Apps” to Take Formal Analysis Mainstream. I wanted to learn more about the tutorial so I contacted the organizer, Joe Hupcey III from Cadence and talked with him by phone.


Joe Hupcey III, Cadence

Q: What is an App?
A: An app is a well documented capability or feature that solves a difficult, discrete problem. An app has to be more efficient to use (for example, formal can be more efficient than a simulation test bench alone), and it has to be easy enough to use without having a PhD in formal analysis.

Q: Who should attend this tutorial?
A: Design and verification engineers that could benefit from formal; they can quickly and easily take advantage of the exhaustive verification power that formal and assertion-based verification has to offer, with little coaching and documentation needed to get up to speed. Formal experts can also benefit: those that want to branch out and make all their colleagues more productive. Plus, for the apps that tie into Metric-Driven Verification flows, the contribution made by formal can be mapped into simulation terms.

Q: Does it matter if my HDL is Verilog, VHDL, SystemVerilog or SystemC?
A: All languages benefit from formal; PSL and SystemVerilog Assertions are discussed and used.

Q: What are the benefits of attending this tutorial?
A: Everyone on the design and verification team gets some value out of formal tools and methodology. We’ll be showing 5 or 6 apps that are available for use today. As I noted above, the “apps” approach starts with hard problems where formal, or formal and simulation together, are more efficient than simulation alone, then structures a solution that’s laser focused on the problem. There are quite a few apps available today, so if you are a Cadence customer this tutorial will help you get the most out of the licenses you already have.

… plus we are hoping to include a bonus guest speaker from a worldwide semiconductor maker who will speak about the app he created for a current project. (The engineer is working with his management to get approval now.)

Our lead example app is the one for SoC connectivity: we show how to validate the connectivity throughout the entire SoC, including BIST and low-power mode controls. You could create a test bench, simulate and verify that connectivity is correct, but you couldn’t exhaustively test all combinations. The SoC Connectivity app accepts the connectivity as an Excel spreadsheet, turns it into assertions, and the formal tool then verifies that the assertions are true for all cases (or finds counter-examples where the design fails). This takes only hours to run, not weeks like simulation. This is just part of the Cadence flow; assertion-driven simulation is somewhat unique to Cadence (take formal results and feed them into the coverage profile to help improve test metrics).
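As a rough sketch of the spreadsheet-to-assertions idea (the column names and the generated SystemVerilog template below are purely illustrative assumptions, not the app’s actual formats):

```python
# Turn a connectivity table into assertion strings a formal tool could prove.
# The CSV columns and the SVA template are assumptions for illustration only.
import csv, io

spreadsheet = io.StringIO(
    "source,destination,condition\n"
    "uart0.tx_irq,intc.irq_in[3],1'b1\n"
    "cpu.dbg_out,dbg_mux.in0,soc_ctrl.dbg_en\n"
)

for row in csv.DictReader(spreadsheet):
    # Each row becomes a conditional check: whenever the condition holds,
    # the destination must follow the source. A formal engine can prove this
    # for every reachable state or return a counter-example.
    print(f"assert property (@(posedge clk) ({row['condition']}) |-> "
          f"({row['destination']} == {row['source']}));")
```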

Q: Why should my boss spend the $75?
A: Because these apps can help you save design and verification time compared with running pure simulation alone. Case studies are used to provide measured improvements. You can leave the tutorial, go back to work, and start using the formal approaches. The main presenters are experts in each area.

Christopher Komar – Formal Solutions Architect at Cadence Design Systems, Inc.

Dr. Yunshan Zhu – President and CEO, NextOp Software. They have an assertion synthesis tool that reads the TB and the RTL for the DUT, then creates good assertions (not a ton of redundant ones). BugScope will be shown along with case studies.


Vigyan Singhal – CEO at Oski Technology. They make formal apps for both design and verification engineers and will talk about assertion-based IP.


Source: Oski Technology

Summary
To learn more about formal analysis applied to IP and SoC design, consider attending the half-day tutorial at DVCon on March 1 in San Jose. You’ll hear from people at the three companies listed above.

For just $75 you receive the slides on a USB drive and they provide coffee and feed you lunch.


Design & Verification of Platform-Based, Multi-Core SoCs

Design & Verification of Platform-Based, Multi-Core SoCs
by Daniel Payne on 02-02-2012 at 11:16 am

Consumer electronics is a new driver in our global semiconductor economy as we enjoy using smartphones, tablets and Ultrabooks. The challenge of designing and then verifying these electronic systems in time to meet market windows is a daunting one. Instead of starting with a blank sheet for a new product, most electronic design companies choose to start with a platform and then integrate ready-built IP.



Amazon Kindle Fire – Tear Down

An example of a platform-based consumer product is the Kindle Fire from Amazon. The ICs included in the design of the Kindle Fire are:

  • Samsung KLM8G2FEJA 8 GB Flash Memory
  • Hynix H9TKNNN4K 512 MB of Mobile DDR2 RAM
  • Texas Instruments 603B107 Fully Integrated Power Management IC with Switch Mode Charger
  • Texas Instruments LVDS83B FlatLink 10-135 MHz Transmitter
  • Jorjin WG7310 WLAN/BT/FM Combo Module
  • Texas Instruments AIC3110 Low-Power Audio Codec With 1.3W Stereo Class-D Speaker Amplifier
  • Texas Instruments WS245 4-Bit Dual-Supply Bus Transceiver
  • Texas Instruments OMAP 4430 1 GHz processor
  • Texas Instruments WL1270B 802.11 b/g/n Wi-Fi

So, how do you create an SoC like this and what are the costs and power challenges?

DVCon


I spoke with Stephen Bailey of Mentor Graphics this week to learn about a half-day tutorial that he is part of at DVCon called: Design & Verification of Platform-Based, Multi-Core SoCs. Platform-based design is when you create a new SoC with pre-defined processor subsystems (think ARM) and semiconductor IP, and then add some of your own new blocks (maybe as little as 10% of the design).


Stephen Bailey, Director of Product Marketing, Mentor Graphics DVT

Clearly SW integration is now the bottleneck, and the exploding state space makes verification difficult to automate.

We all love our mobile devices to have a battery life of at least one full business day, so we need to design with that constraint in mind.

Tools and Methodology
Here’s a methodology flow that can help address the design and verification challenges listed so far:

Specific EDA tools for each block shown above:

  • Vista for SoC architectural design and SW development virtual prototyping
  • Certe for register-memory map specification
  • Catapult for HLS of the new subsystem and Calypto for sequential LEC
  • ARM’s AMBA Designer for fabric implementation
  • Questa for simulation (with Vista for SC/TLM, new subsystem verification pre/post HLS and sign-off verification of the SoC)
  • Veloce for sign-off verification (SoCs require far more cycles than is practical with SW simulation alone) and SW development
  • We also use Questa/Veloce Verification IP, inFact with VIP to create traffic generators to verify (re-validate) performance at RTL, and Codelink for synchronized SW/HW debug in both Questa and Veloce
  • Codebench embedded software tools that can be used with the SW virtual prototype, plus CDC and Power-Aware verification; due to time constraints, we can only mention these as part of the complete flow

Summary
To learn more about design and verification of platform-based, multi-core SoCs, consider attending the half-day tutorial at DVCon on March 1 in San Jose. You’ll hear from experts at three different companies.

The tutorial will cost you $75 and in return you receive the slides on a USB drive and they feed you lunch and provide coffee.


3D Standards
by Paul McLellan on 02-01-2012 at 5:06 pm

At DesignCon this week there was a panel on 3D standards organized by Si2. I also talked to Aveek Sarkar of Apache (a subsidiary of Ansys) who is one of the founding member companies of the Si2 Open3D Technical Advisory Board (TAB), along with Atrenta, Cadence, Fraunhofer Institute, Global Foundries, Intel, Invarian, Mentor, Qualcomm, R3Logic, ST and TI.

The 3D activities at Si2 are focused on creating open standards so that design flows and models can all inter-operate. In the panel session Riko Radojcic of Qualcomm made the good point that standards have to be timed just right. If they are too early, they attempt to solve a problem that either there is no consensus needs solving, or whose solutions are not yet known and thus cannot be standardized. If standardization is too late, then everyone has already been forced to come up with their own ways of doing things and nobody wants a standard unless it is simply to pick their solution. Riko reckons that 3D IC is about a year behind where he would like to see it, which risks the standards being too late and everyone having to do their own thing. The Si2 Open3D page is here.

One standard that does now exist, as of earlier in January, is the JEDEC Wide I/O single data rate standard for memories. It includes the ball positioning and signal assignments that allow up to 4 DRAM chips to be stacked on an SoC, and permits data rates of up to 17GB/s across four 128-bit channels at significantly lower power than traditional interconnect technologies. The standard is here (free PDF, registration with JEDEC required). This should allow memory dice from different DRAM manufacturers to be used interchangeably, in the same way as we have become accustomed to with packaged DRAM.
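As a back-of-the-envelope check (the 266 MHz SDR clock below is my assumption, not a figure from the press release), the quoted aggregate bandwidth follows directly from the channel arithmetic:

```python
# Aggregate Wide I/O bandwidth from the channel configuration (assumed clock).
channels = 4
bits_per_channel = 128
clock_hz = 266e6                  # assumed single-data-rate clock
bits_per_second = channels * bits_per_channel * clock_hz
print(f"{bits_per_second / 8 / 1e9:.1f} GB/s")   # ~17 GB/s aggregate
```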

Apache is most interested in power delivery and thermal issues, of course. Multiple tiers of silicon mean that the power nets on the upper tiers are further from the interposer and the package pins. In a conventional SoC, the IO power may make up 30-50% of all power and the clock another 30% or so. There is a lot of scope in 3D for power reduction due to the much shorter distances and the capability to have very wide buses. Nonetheless, microbumps and TSVs all have resistance and capacitance that affect the power delivery network and general signal integrity.

Thermal analysis is another big problem. Reliability, especially metal migration, is severely affected by temperature (going from 100 degrees to 125 degrees reduces the margin by a third), so overall reliability can be very negatively affected if the temperature in the center of the die stack is higher than expected and modeled.
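The temperature sensitivity of metal migration is conventionally modeled with Black's equation; the sketch below only shows the form of that relationship, with an assumed activation energy. The exact factor depends heavily on the process (lower activation energies give gentler numbers, closer to the one quoted above):

```python
# Relative electromigration MTTF per Black's equation: MTTF ~ J**-n * exp(Ea/kT).
# The activation energy and current-density exponent are illustrative values.
from math import exp

K_EV = 8.617e-5            # Boltzmann constant in eV/K
EA = 0.7                   # assumed activation energy, eV (process dependent)

def relative_mttf(temp_c, j_scale=1.0, n=2.0):
    """Unnormalized MTTF; only ratios between conditions are meaningful."""
    return j_scale**-n * exp(EA / (K_EV * (temp_c + 273.15)))

print(f"MTTF at 125C relative to 100C: {relative_mttf(125) / relative_mttf(100):.2f}")
```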

The big attraction of 3D is the capability to get high bandwidth at low power. It has the potential to deliver 1-2 orders of magnitude of power reduction on signalling versus alternative packaging approaches, as much as 1/2 Terabit/s between adjacent die.

Everyone’s focus in 3D standardization at the moment is to standardize the model interfaces so that details of TSVs, power profile of die, positioning of microbumps and everything can work cleanly in different tool and manufacturing flows. Note that there is no intention to standardize what the models describe (so, for example, no effort to standardize on a specific TSV implementation).


21st Century Moore’s Law Providing Unforeseen Boost to Silicon Valley
by Ed McKernan on 01-30-2012 at 10:00 pm

It has been a great conundrum to many of the 20th-century-trained economists and Harvard’s Kennedy School of Government folks as to why a government-led massive spending spree and Ben Bernanke’s non-stop printing presses can’t at least engender a mediocre economic recovery.

I blame 21st century Moore’s Law!

Today’s process technology is not just 4 times what it was when the downturn began in 2008; it is at a magnitude that has given companies the freedom to move beyond the tax and regulatory grasp of many sovereign nations that are now having difficulty paying their bills. Moore’s Law is the Overwhelming Force that is bypassing the immovable object known as too-big-to-fail governments. As we gear up for another election season, a realization has emerged that the place where things are going swimmingly and money is piling up is none other than Silicon Valley. For politicians and governments to get access to this pile of money they will have to play nice and offer significant tax cuts that allow the trillions of dollars that sit overseas to come home. Apple, Cisco, Google, Intel and the rest of the high flying Silicon Valley firms can unleash this tidal wave of cash in increased investments at home while paying off politicians from both parties. This, as Obama has recently communicated, will be the major storyline of the 2012 Presidential Campaign.

Winston Churchill once remarked, “You can always count on the Americans to do the right thing – after they have tried everything else.” Now that just about everything else has been tried, the politicians will try something completely different: letting the strong, thriving high tech companies be America’s primary economic engine for the coming decade, like they were in the 1990s.

The political dance that started 12 months ago between the politicians and Silicon Valley didn’t become serious until just recently. All expectations of economic revival were cut short by Europe’s Sovereign Debt Crisis and Wall St tanked again. Elections are coming soon and politicians need money from new sources as the old ones dry up. The trillions of Silicon Valley dollars sitting overseas are not an accident. They sit there because bringing them home would incur a 35% tax. If the rate were dropped to 5-10%, then the floodgates would open. Expect the miracle wrapped in a nice fig leaf story about exchanging lower rates for the promise that companies invest in new buildings, equipment and jobs. I say expect a miracle, because Apple and the rest of the mentioned companies are gushing cash at a rate that is astronomical and politicians would just hate to see it not end up in the pockets of the people who need it most. The well won’t run dry for years.

To give one a sense of how times have changed since the 65nm process node was in fashion, recall that at the beginning of Obama’s term, the focus was on saving the unions and investing in the future slam dunk industry called solar. Meanwhile, California continued to bleed companies, jobs and money. Without a vibrant Silicon Valley with lots of IPOs, California can’t afford to stay in business. The Democrats without a thriving California are out of office and out of money. Obama now realizes that he needs to show extreme favoritism to Apple, Google, Facebook, Cisco, Intel and the rest of the who’s who crowd.

The upside to the President’s need to win an election and to put in place a campaign funding source for many cycles is that the current Silicon Valley, not the one that wanted to be left alone in the 1990s (think TJ Rodgers), will likely get considerations beyond the tax cut. Taken at face value, Obama’s proposal calls for taxes to be reduced on companies who invest in the US and raised on those who invest overseas (think fabless semiconductor companies). However, some companies with high R&D like Intel and Google will likely push for relief there as well. And why not? Increasing the engineering head count in Silicon Valley is a good thing for the President’s party. Apple though might counter and request a break for opening up retail stores or a data center. Google and Facebook would concur on the data center subsidy. Intel on the other hand would love to get a break on its new 14nm fabs or future 450mm fab, especially since Paul Otellini says they cost $1B more to build in the US than in Taiwan. This is where congressional sausage making gets to be interesting.

For Intel to remain at a 27% tax rate while fabless vendors are as low as 11% makes no sense. Nor does Apple’s 24% tax rate look fair up against Google’s 7%. When the bubble burst in 2000, Silicon Valley lost 200,000 jobs, many in the semiconductor industry. It was the IP of the valley that has kept it in the technology lead, but those jobs are sorely needed. We may finally get the attention needed to turn Silicon Valley into a bigger driver of the economy, much bigger than in 2000. Nothing makes the waiting opportunity more glaring than the startling fact that Apple has $100B in the bank and is adding to it at the rate of $15B a quarter. We should remove any and all roadblocks.

With Intel, the storyline gets much more interesting as Obama and Otellini have struck up a special bond in the past year. Two years ago Otellini was excoriating the President and now he is on Obama’s Council on Jobs and Competitiveness. The only other tech related person on the council is John Doerr. This council was formed after Obama visited Intel’s Oregon site last year. Last week, the day after the State of the Union, Obama paid a visit to the construction site of Intel’s new 14nm fab located in Arizona. The purpose of his visit was to emphasize his new tax proposal and to start broadcasting his election year economic theme of bringing jobs home.

Imagine during these visits that Otellini whispers in Obama’s ears that with the right incentives the whole future of the semiconductor industry can reside in the US and with it thousands of jobs and the associated tax revenue. When combined with the Bernanke printing presses depreciating the currency in a daily drip-by-drip manner, the US government is going to make life more difficult for fabless vendors to be invested in Taiwan instead of the US. This is why Qualcomm, with its $21B in cash has to consider building a fab in the US. AMD, Broadcom, nVidia, Altera, Xilinx, Marvell and others will be pleading for Morris Chang to build in the US or alternatively make peace with Intel and enter a foundry agreement. Unless, of course, the Obama tax agreement that develops only applies to US Multinationals, then it is a completely new ballgame for Fabless Vendors. The Silicon Valley playing field could end up being tilted towards Intel.

FULL DISCLOSURE: I am Long AAPL, INTC, ALTR, QCOM


The Future of Lithography Process Models
by Beth Martin on 01-30-2012 at 4:02 pm

Always in motion is the future. ~Yoda

For nearly ten years now, full-chip simulation engines have successfully used process models to perform OPC in production. New full-chip models were regularly introduced as patterning processes evolved to span immersion exposure, bilayer resists, phase shift masking, pixelated illumination sources, and much more. The models, in other words, have kept up with and enabled the relentless march into the lithographic nanosphere.[1]

“Hello? 1983 calling.” Perhaps this is what Yoda was talking about—technology such as this Motorola DynaTAC 8000x ushered in the age of microelectronics.


We learned from Yoda that the future is not set, it is always in motion. Still, I feel confident that the industry can predict several areas where full-chip models will need to evolve and improve.[2] As process margins continue to narrow at lower k1, models will need to more faithfully predict all failure modes which loiter at the process window corners. In addition to pinching and bridging, models will need to accurately predict behaviors you may be less familiar with: sub-resolution assist feature (SRAF) scumming/dimpling, side-lobe dimpling, and aspect-ratio-induced mechanical pattern collapse (Figure 1). These can all lead to defects in the etched layer.

Figure 1. Emerging patterning failure modes.

While full-chip OPC models based on a 2D contour simulation have so far been sufficient to meet the task of correction and verification, we may need some 3D awareness in these models. For example, we might need to account for underlying pattern topography/reflectivity for implant layer patterning, or want an etch model to predict bias as a function of lithographic focus (which imparts resist profile changes). One thing to be certain of – 3D mask topography effects will continue as target and SRAF dimensions shrink, and improvements in the accuracy of 3D mask models must keep pace.

Another emerging technology, Source Mask Optimization (SMO), may place greater demands upon the portability of process models. With SMO, the illumination source is dynamically changed on a design-by-design basis in manufacturing, yet a single calibrated resist model is preferred for optimum cycle time. Full-chip mask process models may be needed to facilitate portability, and to enable maximum flexibility for process evolution.

New processes are emerging for double patterning, including litho-etch-litho-etch and litho-freeze-litho-etch, sidewall image transfer, and negative tone develop. Novel chemical and thermal pattern shrink processes will continue to find their way into manufacturing. These represent a wide range of complex physicochemical processes, but the phenomenological compact model approach, based upon relatively few optical parameter inputs and empirical CD/contour outputs, will no doubt be able to represent them accurately for full-chip simulation.
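As a minimal sketch of what such a phenomenological compact model looks like (the Gaussian-plus-threshold form and all the numbers below are a generic textbook-style illustration, not any vendor's production model):

```python
# Toy compact resist model: convolve a 1D aerial image with a Gaussian
# diffusion kernel, then apply a constant threshold to get a printed contour.
import numpy as np

x = np.linspace(-200, 200, 401)                      # position, nm
aerial = 0.5 + 0.4 * np.cos(2 * np.pi * x / 180.0)   # toy aerial image

sigma = 25.0                                         # nm, fitted in practice
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()
latent = np.convolve(aerial, kernel, mode="same")

threshold = 0.5                                      # empirical, fit to CD data
printed = latent > threshold                         # 1D "contour"
print(f"printed fraction of the cut: {printed.mean():.2f}")
```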

Finally, another emerging process is EUV lithography. In order to accurately perform full-chip simulation, optical models that account for flare and field-dependent mask shadowing will be required. These models are already in mature development. It is important to highlight that “OPC” will indeed be required for EUV, despite the fact that the lower wavelength will deliver a substantially higher k1 factor than 193 nm lithography.

Model accuracy and predictive capability requirements will surely continue to shrink below today’s 1.0 nm, and additional requirements beyond simple single-plane CD will be required. Perhaps it’s time to increase our accuracy budget 10X by converting to units of Angstroms—it will make us feel like there is more room at the bottom of the scaling curve!

As a final note, the SPIE Advanced Lithography meeting in San Jose (12-16 February) has an ever-expanding conference focused on design for manufacturability through design-process integration. As the co-chair of this conference, I can say with certainty that the technical presentations are of the highest quality. If you want to engage more deeply in the interface between IC design and manufacturing, attend the keynotes, paper presentations, and poster session on Wednesday, 15 February, and the joint optical microlithography/DFM sessions on Thursday, 16 February.

— John Sturtevant, Mentor Graphics

[1] Lots of interesting information about process models in my previous posts: Part I, Part II, Part III, and Part IV.

[2] This series was inspired by this paper I presented at SPIE Advanced Lithography in 2011.