
Yes, there is such a thing as a free…model

by Paul McLellan on 02-09-2012 at 8:18 pm

I have been saying for years, ever since I started working at VaST, that the biggest barrier to adoption of virtual platform technology for what I like to call virtualized software development is the availability of models. If models do not already exist when they are needed there are two issues: it takes money to develop them but, probably more importantly, it takes time. Since a large part of the attraction of virtual platforms is that they can potentially be available before reference boards, chips, real cell phones and so on, anything that soaks up that time makes the approach less attractive. A month developing models is a month's slip in the software development schedule.

There have been a number of attempts to address this problem, the most serious of which is Synopsys’s TLMCentral. When this started last year, it was focused on being a central location to find models that were commercially available either through the site, or, more often, elsewhere. There are now over 800 such models available.

This year they are adding a push to encourage model developers to upload open-source models. Obviously nobody will do this for something proprietary like a processor, nor something expensive to develop that therefore can easily be sold. But there is really little downside to sharing a model for a UART or a timer, and there is little point in every company developing their own generic timer model.

To encourage people to get started, Synopsys have uploaded 16 or so models that can be freely downloaded. Some have already been downloaded 30 times so there is definitely demand there. These are open-source models with no licensing fees.

And as a more direct form of encouragement there are several competitions to win an iPad2 (and I’m sure they’ll switch it to iPad3 if Apple announce a new one as some people expect).

  • submit any model. One of the first 50 submitted will win (although confusingly the front page of TLMCentral says upload the 50th model and win; maybe you win another one for that too)
  • be the person who uploads the most models by the end of March
  • upload the model that gets the highest vote by end of April
  • upload the model that gets the most downloads by the end of May
  • or upload a model of a sensor interface such as accelerometer, gyroscope, magnetometer or proximity sensor (deadline February 17th)

Of course there are some conditions, the main one being that the models need to be written in SystemC TLM 2.0.
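To give a flavor of what such a generic, shareable model looks like, here is a rough register-level sketch of a down-counting timer in plain C++ (my own illustration; a real TLM-2.0 model would wrap this behavior behind a SystemC b_transport target socket, and the register map here is invented):

```cpp
#include <cstdint>

// Hypothetical register map for a generic down-counting timer, the kind
// of block the article suggests is pointless for every company to
// re-model independently.
class TimerModel {
public:
    enum Reg : uint32_t {
        REG_LOAD  = 0x0,  // reload value
        REG_VALUE = 0x4,  // current count (read-only)
        REG_CTRL  = 0x8   // bit 0 = enable
    };

    // Bus write: in a real TLM-2.0 model this would be driven from
    // b_transport() decoding a tlm_generic_payload.
    void write(uint32_t addr, uint32_t data) {
        switch (addr) {
        case REG_LOAD: load_ = data; value_ = data; break;
        case REG_CTRL: enabled_ = (data & 1u) != 0; break;
        default: break;  // writes to REG_VALUE are ignored
        }
    }

    uint32_t read(uint32_t addr) const {
        switch (addr) {
        case REG_LOAD:  return load_;
        case REG_VALUE: return value_;
        case REG_CTRL:  return enabled_ ? 1u : 0u;
        default:        return 0;
        }
    }

    // Advance by n timer ticks; reload on underflow, as simple timers do.
    void tick(uint32_t n) {
        if (!enabled_) return;
        while (n--)
            value_ = (value_ == 0) ? load_ : value_ - 1;
    }

private:
    uint32_t load_    = 0;
    uint32_t value_   = 0;
    bool     enabled_ = false;
};
```

The behavioral core is trivial, which is exactly the point: the value of sharing such models is in agreeing on them once rather than in any engineering difficulty.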

More details about all of this, including the competitions, here.


DFM at SPIE Advanced Litho show

by Beth Martin on 02-09-2012 at 6:40 pm

This year’s SPIE Advanced Lithography is loaded with interesting keynotes and sessions. To help me narrow down what to see, I spoke with John Sturtevant. John is co-chair of the Design for Manufacturability through Design-Process Integration conference, and the director for technical marketing for RET products at Mentor Graphics.

12-15 February, 2012
San Jose Convention Center
All the cool kids will be there

I asked him how the conference has changed since it was first introduced six years ago. John has been co-chair of the DFM conference for the last five years, so he has perspective. Here’s what he told me.
The conference saw a steady upsurge in number of papers in years 1-4, reflecting to a certain extent the hype that was DFM early on, as startups appeared on the scene, and “DFx” appeared on the business cards of an increasing number of engineers and VPs. As we separated the wheat from the chaff, most startups disappeared, and SPIE paper submissions to the conference dropped off somewhat to a steady state of around 40 papers. This was consistent with the predictions of Joe Sawicki in his invited paper in 2004, the second year of the conference.


The wheat which is left now grows in the fields of:

  • Multi-patterning implications to design and manufacturing
  • Implications of EUV
  • Design variability effects in manufacturing
  • Litho-friendly design
  • Restricted design rules

John recommends these papers in the DFM conference at SPIE:

Wednesday 10:40a, “Layout optimization through robust pattern learning and prediction in SADP gridded designs.”
UC Santa Barbara and Mentor researchers present their study of placement-level optimization, including how to build a predictive model for layout pattern classification, and applying the model to find and eliminate printing hotspots. [8327-04]

Wednesday 1:50p, “Fully integrated litho aware PnR design solution.”
STMicroelectronics and Mentor engineers present the STMicroelectronics back-end CAD solution for litho hotspot search and repair that is based on pattern matching and local re-route abilities in place and route tools. [8327-09]

Wednesday 2:30p, “Smart double-cut via insertion flow with dynamic design-rules compliance for fast new technology adoption.”
Mentor and GLOBALFOUNDRIES engineers introduce an automatic redundant-via insertion flow. [8327-11]

Thursday 1:40p, “Thickness-aware LFD for the hotspot detection induced by topology.”
Samsung and Mentor engineers present a method for advanced process window simulations with awareness of chip topology. [8327-24]

Thursday 2:00p, “The complexity of fill at 28nm and beyond.”
Mentor and AMD engineers discuss modern fill challenges and advances in technology for 28nm. [8327-25]

And, if you’re interested to see how triple patterning will work for 14nm designs, go see “14nm M1 triple patterning” at 5:10p on Wednesday. [8326-38]


Why X-Fab uses 3D Resistance Extraction and Analysis

by Daniel Payne on 02-09-2012 at 11:18 am

At DAC in 2011 I visited an EDA company called Silicon Frontline Technology because they offered 3D field solver tools used to create the highest-accuracy netlists, which can then be simulated with a SPICE circuit simulator to predict timing, power and IR drop. A recent press release from X-FAB and Silicon Frontline looked interesting, so I contacted Thomas Hartung, VP of Marketing, and Joerg Doblaski, team leader of the Design Technology Group at X-FAB, to better understand their IC design process and why it required a 3D resistance extractor.


DVCon: Hardware/software Co-design from a Software Perspective

by Paul McLellan on 02-09-2012 at 4:56 am

The EDAC Emerging Companies Committee (would that be the EDACECC?) is organizing a free panel session one evening at DVCon: Monday, February 27th, from 6pm to 8:30pm. I don’t yet have a room, but it will be at the DoubleTree Hotel where DVCon is being held.

EDA companies often address hardware/software co-design from a hardware point of view, as if the software somehow is going to be put together once the chip is available and is a relatively small part of the design of the system (real men design chips). But, in fact, software is often much longer-lived than any individual chip. Much of the software in your phone may be a decade old and running on the fifth or sixth iteration of the hardware. Apple’s iOS alone ran first on a Samsung chip, and then Apple’s own A4 and A5 chips, for example. The result is that hardware and software teams look at the importance of software and software development methodology very differently.

Michel Genard of Wind River was going to moderate it, but Wind moved their sales kickoff and so I’ve been drafted in. I was VP marketing at both VaST Systems Technology and at Virtutech, both of which were heavily involved in the space (VaST is now part of Synopsys, Virtutech now part of Wind River, in turn owned by Intel).

The panellists are:

  • Atul Kwatra, principal engineer, Intel Corporation (Intel own Wind River and Virtutech)
  • Michael James, senior staff engineer, Lockheed-Martin Space Systems Company (Michael used to work with me at Virtutech)
  • Don Williams, head of core technology, Skype (and previously headed up a software development team at Cisco)
  • Bill Neifert, chief technology officer, Carbon Design Systems (a virtual platform and modeling company)

Grab a beer and come along. I hope to see you there. The panel is free. You don’t need to be registered for DVCon, nor an EDAC member.

To register for the EDAC panel, go here
To register for DVCon, go here
For more information about the panel, go here


Powering the Platforms: ARM’s 2012 Approach

by Don Dingee on 02-08-2012 at 4:30 pm

A client turned me on to a great new book, “The Age of the Platform” by Phil Simon. It’s about how Amazon, Apple, Facebook, and Google have radically transformed the landscape. For me, it’s not just social networking – it’s social computing, changing how things are designed.

I’m borrowing this right from Phil’s description of why he selected these four companies as the platforms at the center of this shift:

“1) They are rooted in equally powerful technologies—and their intelligent usage. In other words, they differ from traditional platforms in that they are not predicated on physical assets, land, and natural resources.

2) They benefit tremendously from vibrant ecosystems (read: partners, developers, users, customers, and communities).”

All these platforms are making this kind of impact in large part because they run on devices powered by one company: ARM. They match the above description, as a purveyor of intellectual property with virtual design and fabrication allies. ARM processors power social computing.

ARM’s technology does one thing better right now: it puts intelligent use in a person’s hand. The ARM approach is unique in that most tasks can be done in less than 1W, and some very complex things can be done in less than 2W – which correspond to the points of interest for smartphones and tablets respectively.

A vibrant ecosystem is an understatement, and it’s building smartphone and tablet momentum. Apple’s epic success, driven by dual Cortex-A9 cores in their A5 processor, is fueling a whole new set of changes driven by BYOD and consumer trends like mHealth and the “quantified self”. Qualcomm, about to set a new bar with their Krait implementation of the ARM instruction set, has taken a lead in smartphone processing innovation. NVIDIA and Freescale have gone quad Cortex-A9, with NVIDIA rumored to have taken the socket in the next-gen Amazon Kindle Fire. TI and ST-Ericsson are on dual Cortex-A15, with first silicon just seeing the light of day. Microsoft, having swung the Windows 8 machine in the direction of ARM, is part of the equation – and we’re about to find out more about that February 29th at Mobile World Congress.

The upper right of ARM’s “Gods and Giants” roadmap shows new 64-bit cores, Atlas and Apollo, targeting 20nm implementations of ARMv8 with an eye on growing into the server market. Here the ecosystem isn’t as thriving yet, with names like AppliedMicro and Calxeda at the front. Crossing Intel’s Sandy Bridge in that direction may prove just as difficult as Intel coming across toward the smartphone side.

Where things are thriving is the center of the roadmap, for smartphones and tablets. ARM tipped its highly efficient Cortex-A7 core last fall and is now looking deeply at a “big-little” processing approach where the A7 does all but the biggest tasks, with a Cortex-A15 dozing patiently until needed for brief heavy lifting. They’ve hinted at a similar big-little approach with two new graphics cores, Skrymir and Tyr, with few details so far. It seems they are looking at smooth, dynamic shifting between cores sized for the task instead of simply tossing more cores into the mix. It’s all about the 1W.
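To make the big-little idea concrete, here is a toy governor of my own devising (not ARM’s actual scheduling logic; the thresholds are invented): stay on the little core by default, wake the big core above a high-load threshold, and use hysteresis so the handoff is smooth rather than thrashing back and forth:

```cpp
#include <string>

// Toy big-little governor (a simplification, not ARM's real scheduler).
// Hysteresis avoids thrashing: switch up to the big core above 0.8 load,
// but only drop back to the little core once load falls below 0.5.
class BigLittleGovernor {
public:
    std::string select(double load) {
        if (!onBig_ && load > 0.8)      onBig_ = true;   // brief heavy lifting
        else if (onBig_ && load < 0.5)  onBig_ = false;  // back to sipping power
        return onBig_ ? "Cortex-A15" : "Cortex-A7";
    }
private:
    bool onBig_ = false;  // start on the efficient little core
};
```

The hysteresis band is the design choice that matters: without it, load hovering near a single threshold would bounce work between cores and waste the power the scheme is meant to save.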

What do you think of the state of social computing, the idea of these as the new platforms, the contrast between ARM and Intel, and what’s on the horizon from ARM?


The Old Order Changeth

by Paul McLellan on 02-07-2012 at 3:02 pm

It is interesting watching as changes in technology bring giants to their knees. Far and away the best book on the subject is Clayton Christensen’s The Innovator’s Dilemma. If you haven’t read it then rush out and buy it immediately. In tech, you are not educated if you haven’t.

Two things made me think about this recently. One is Kodak filing for bankruptcy. Some commentary is basically critical that Kodak were blindsided by the digital revolution. But in fact they not only pioneered early digital cameras, but also predicted almost exactly the speed of adoption by government (think spy satellites), then business, and then the consumer. The problem was that nobody knew how to make money on digital cameras, which are a one-off low-margin sale. It was classic Innovator’s Dilemma stuff: everyone saw it coming and, even so, nobody knew what to do about it. Curiously, FujiFilm, Kodak’s big competitor, has fared better and managed to diversify into other markets. But that is always a risky strategy. Most forest product companies that try to get into electronics do not become Nokia, one of the most amazing business transformations ever. Now, in turn, the standalone digital camera industry is itself under threat, at least at the low end. When your cell phone has an 8 megapixel camera and is always in your pocket, why do you want another camera that is only marginally better?

Talking of cell phones, here are a couple of anecdotes about changes in technology leadership. Apple’s iPhone business is now larger than Microsoft (in revenue). Not just Office or Xbox, but all of Microsoft, a company that recently was regarded as so powerful that it should be broken up, since it would otherwise be a perpetual monopoly. But here’s an even more amazing fact. Apple is now criticized as being too dependent on the iPhone, a one-product company. But if you take away Apple’s iPhone business completely, the rest of Apple is still bigger than Microsoft, and pretty well diversified.


Of course for EDA, companies that ship huge volume are not a dream but more of a nightmare. Apple designs one or two chips per year and so has comparatively modest demands for EDA tools. Of course they also buy lots of silicon, from Qualcomm, Broadcom and memory suppliers in particular. But since EDA doesn’t share in volume, only in design starts, it benefits from less concentrated market power. Lots of competitors all designing their own chips is the EDA dream (and EDA doesn’t really care if the chips go into production). As a handful of companies become more and more dominant in the end markets, shipping enormous quantities of relatively few designs, I think it will be a challenge for EDA to maintain its business models unchanged. And, as has been pointed out on this site many times, it will also have a major impact on where the chips are manufactured and where the capital for the fabs comes from.


FineSim Webinar

by Paul McLellan on 02-07-2012 at 2:00 pm

FineSim is Magma’s circuit simulator that has been doing extraordinarily well. In my opinion it is one of the big reasons that Synopsys is acquiring (presumably, still subject to approval of course) Magma. FineSim is especially strong in the memory market with over 70% of the top 5 DRAM manufacturers and the top 10 flash manufacturers using it. Plus over half of the top 20 semiconductor manufacturers. For a relatively new product this is impressive growth.

FineSim was written from the start to be scalable and to take advantage of multi-core workstations and racks of servers. This means that it scales to simulate large analog designs that could not have been verified with previous SPICE engines. It is actually two products: FineSim Pro and FineSim SPICE.

There is a huge explosion in the need for analog, RF and mixed-signal solutions. For example, your smartphone may have as many as 10 radios in it: 4 GSM bands, GPRS, EDGE, 3 UMTS bands, HSDPA, WLAN, GPS, Bluetooth. Plus, modern processes require characterization at many more corners than the old four that we could get away with just a few process generations ago.

There is a new FineSim webinar that covers the use of FineSim for various kinds of simulation. It is 2-5X as fast as the competition on a single CPU and, of course, gets faster still with multiple CPUs.

Some of the things that will be covered in the webinar are:

  • multi-threaded/multi-machine performance and scalability that allows you to simulate 1.7 million transistors in just 16 hours with SPICE-accurate results
  • support for industry standard formats, enabling seamless integration into existing design and verification environments
  • extensive reliability analysis to ensure design quality
  • superfast runtime that allows you to increase test coverage without having to trade off accuracy
  • AMS (analog/mixed-signal) verification
  • Fast Monte Carlo (FMC) flow
  • FineSim RF

Register for the webinar here.


Virtuoso has got you cornered

by Paul McLellan on 02-07-2012 at 1:33 pm

Things you don’t know about Virtuoso: we’ve got you cornered.

That is the title on a Cadence blog item last week. It is actually about variability and how to create various corners for simulation and analysis, but given Cadence’s franchise for Virtuoso, its lock-in through SKILL-based PDKs and so forth, it is not perhaps the ideal message to be sending. There is plenty of resentment at both foundries and customers about Cadence’s lack of openness in this area.

The blog is actually about the new features in Virtuoso supporting process variation and the need in a modern design to characterize it at dozens of different points, not just in the traditional PVT (process, voltage, temperature) realm but also device parameters and even data collected from Monte Carlo analysis.

Most of the blog is about how to expand various corners without creating a combinatorial explosion where every parameter appears with every combination of others, which is not normally all that useful.
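To see why the naive approach blows up, here is a small illustration of my own (not Cadence’s tooling or API): crossing every parameter range with every other multiplies the corner count, which is why a curated corner list is usually what actually gets simulated:

```cpp
#include <string>
#include <vector>

// Hypothetical corner descriptor: process name, supply voltage, temperature.
struct Corner {
    std::string process;
    double      voltage;
    double      temperature;
};

// Full cross-product: every process with every voltage with every
// temperature. The count multiplies: |P| * |V| * |T|, before device
// parameters or Monte Carlo samples are even considered.
std::vector<Corner> crossAll(const std::vector<std::string>& procs,
                             const std::vector<double>& volts,
                             const std::vector<double>& temps) {
    std::vector<Corner> out;
    for (const auto& p : procs)
        for (double v : volts)
            for (double t : temps)
                out.push_back({p, v, t});
    return out;
}
```

With just 5 process corners, 3 voltages and 3 temperatures the cross-product is already 45 simulations per testbench, and each additional swept parameter multiplies that again; a hand-picked subset of the combinations that actually stress the design is almost always more useful.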


Synopsys latest acquisitions: ExpertIO (VIP) and Inventure (IP)… Any counter-attack from Cadence?

by Eric Esteve on 02-07-2012 at 12:29 pm


Even if the acquisition of ExpertIO by Synopsys, coming after the nSys acquisition a couple of months ago, will not have a major impact on Synopsys’ balance sheet, it will again change the Verification IP market landscape. The acquisition of Inventure, a subsidiary of Zuken, will have a major impact on the Interface IP market, even if only in Japan, as Inventure was very successful on that domestic market but nowhere else. This acquisition will also have an impact on the balance sheet of an IP vendor based in Canada; we will see why.

As already explained in a previous post, Synopsys’ strategy was to offer “bundled” VIP around IP sales, and this is not the best way to monetize the VIP product, as Design IP customers expect to get a bundled VIP almost for free. After Synopsys’ acquisition of nSys, the acquisition of ExpertIO likely reflects a real strategy inflection: the company has decided to attack Cadence in the field where Cadence has been the strong leader, especially since its acquisition of Denali (May 2010), facing competition made up of small companies only (nSys, ExpertIO, PerfectVIP, Avery…).

Another side effect is that the old “Yalta description” (Cadence dominant in VIP, Synopsys in the IP market) is no longer accurate! The VIP market is, by definition of “verification”, limited to protocol-based functions like USB, PCIe, SATA, AMBA, MIPI, Ethernet… or to memory interfaces like DDRn, GDDRn, Flash and so on. In other words, the VIP market is far from being a huge market (we still don’t know the market size, as no survey has been done so far). IPNEST’s evaluation is between $50M and $100M; please don’t expect double-digit precision! Going after this market can be a way for Synopsys to apply the “barbed wire fence strategy” described by Ed McKernan: to protect their Interface IP market share, Synopsys is expanding their presence (extending the ranch size) to make it more difficult for the competition to attack the core business (IP). That’s one explanation; the other could simply be that Synopsys needs to expand into VIP to guarantee a higher growth rate in a limited-size market. Take your pick!

The acquisition of Inventure is easier to understand. Anybody who has tried to develop business on the Japanese market knows that it’s not easy; the go-to-market rules are different from the Western world. Advertising is not enough: your customers expect a very high quality product (NOT merely a well-marketed one) and an outstanding level of technical support. Needless to say, they also expect you to speak Japanese… The success of Inventure in PCI Express IP since 2007, and in SuperSpeed USB more recently, was certainly linked to their ability to best serve their Japanese customers. I don’t know if Synopsys was successful on the Japanese market, but I am sure that after this acquisition, they will be!

The side effect of this acquisition is that Snowbush (the Canadian IP vendor), who had built a strong partnership with Inventure by bringing their high quality PHY IP to complement the Controller IP sold by Inventure, will most probably see their PHY IP sales vanish in Japan. IPNEST’s evaluation was that about 25% of Snowbush revenue was made thanks to this partnership (initiated in 2008 thanks to a well-known consultant; guess who). But Snowbush’s future will change anyway: as part of Gennum, they have been acquired by Semtech, ironically two days after Inventure’s acquisition by Synopsys!

By Eric Esteve from IPNEST


AMD and GlobalFoundries?

by Daniel Nenni on 02-05-2012 at 1:00 pm

One thing I do as an internationally recognized semiconductor blogger is listen to the quarterly conference calls of companies that drive our industry. TSMC is always interesting, I really like the honesty and vision of Dr. Morris Chang. Cadence is good, I always want to hear what Lip-Bu Tan has to say. Oracle and Larry Ellison, Synopsys, Intel, AMD, Qualcomm, Broadcom, Altera, Nvidia, and a couple of others.

If I miss the actual call I get the transcript from Seeking Alpha. Here is the most recent AMD call Q4 2011. I post this blog as an observation and discussion rather than a report of facts and figures. I respect GlobalFoundries and hope they succeed but I do not understand the relationship between AMD and GFI. But then again, I’m just a blogger so help me out here:

Granted, the “spin-off” of a new corporate entity is a difficult endeavor, especially when AMD retained a substantial % of GFI (and ATIC, GFI’s parent company, received a substantial % of AMD).

For a while, AMD would routinely incorporate a loss in their quarterly results, based upon their percentage ownership of GF which made sense to me. Prior to the spin-off, AMD’s losses reflected 100% of the fab expense, and immediately after the spin-off, AMD’s one-third ownership of GF resulted in roughly 1/3 of the previous losses still being reported quarterly.

However, AMD’s % ownership of GFI declined, due to the increased investment by ATIC in GFI, and the acquisition of Chartered Semi. When AMD’s ownership was reduced below 15%, the declaration was that “we will no longer incorporate the ongoing financial results of our ownership in GFI in quarterly reports… the investment in GF will be treated as a long-term asset.” OK, that makes sense too.

Then, there were different classes of GFI shares issued. And, throughout 2010-11, there were repeated updates in AMD’s GAAP quarterly financials, based upon updates to the book value of the investment in GFI, in contradiction to the earlier declaration.

In a couple of cases, AMD reported a significant gain in the value of its investment, due to a recalculation of the value of its (diminished) percentage share in GF, during the acquisition of Chartered:

http://www.sec.gov/Archives/edgar/data/2488/000119312511163112/filename1.htm

However, in the most recent 4Q11 fiscal quarter, AMD recorded a loss of $209M. It is unclear to me how AMD intends to represent the ongoing value of their investment in GlobalFoundries.

Actually, it’s hard for me to believe that their value in GFI could increase, as was reported in a couple of recent quarters. AMD no longer invests in the ongoing operations of GFI, ATIC does. I highly doubt GFI is profitable, based upon the losses incurred prior to spin-off plus the integration of Chartered Semi, lacking new sources of external customer revenue. Yet, AMD has recently reported both substantial quarterly GAAP gains and losses with regards to GFI, amounts which far exceed their operating profit each quarter. This financial reporting method is very puzzling to say the least.

The “cost-plus” wafer purchase agreement that AMD established with GFI is clearly an opportunistic one for AMD, which leads to a discussion of a very unusual financial agreement:

http://semiaccurate.com/2011/04/04/amd-and-global-foundries-agreement-not-what-it-seems/

AMD is contractually bound to provide additional payments (up to $400M) to GFI this year, above and beyond the wafer purchase agreement between the two entities. The explanation for these payments was “based upon obtaining sufficient 32nm yields”. Even for a foundry blogger it is hard to understand how a wafer-purchase agreement requires an additional “bonus payment”, up to $100M quarterly. AMD must be assuming it can move lots of additional (32nm SOI) product, to make a committed payment based upon wafer yield, not wafer volume. The amount of $100M per quarter is dangerously close to AMD’s quarterly free-cash flow and non-GAAP profits.

And now IBM “quietly” starts to make chips for AMD?

So, it is not clear to me what relationship AMD and ATIC have maintained, in terms of the value of AMD’s holding in GFI, and the financial obligations (beyond customer and supplier) that AMD has to ATIC in 2012. This lack of transparency is troubling, and in my mind it brings into question the credibility of each quarterly financial report.

For that reason alone, I would consider AMD to be an unsound (long-term) investment, although it certainly makes for interesting “short-term trading”. This is an observation, opinion, for entertainment purposes only, I do not own AMD stock nor do I have investments in related companies.

There is also some good and some perplexing news from GF, unrelated to its relationship with AMD:

GFI announced that the new fab in Malta, NY, will be providing prototype wafers to IBM in mid-2012. That’s the good news.

However, it’s not really a “big win” for GFI, which may not be clear from the press release. Chartered Semi has been a second source for the processor parts used in the Microsoft Xbox 360 family. Microsoft insisted, of course, that IBM Microelectronics have a viable second source, and IBM ensured that Chartered was a qualified supplier of the corresponding SOI technology.

So, in my opinion, this current announcement is really just an extension of that second-source agreement – Microsoft clearly demanded a second source for the processor in the upcoming Xbox 720 product. However, the Xbox 360 parts were never really a large source of profit for Chartered – it was more a way for Microsoft to negotiate the best pricing from IBM. Although additional revenue for GFI is a good thing, the parameters of this agreement are likely not very different from the previous second-sourcing deal, and thus, not an exclusive, nor high-margin revenue opportunity.

The perplexing thing is that the resources invested in Malta on 32nm SOI bring-up as a second source to IBM will be diverted from 28nm bulk technology bring-up. In the 2013-2015 time frame, TSMC has made it clear that 28nm is going to be a very important source of revenue for them and I know this to be true.

Last week I sent a version of this to GFI for clarification / comment but have not heard back yet. If somebody else out there has more information or can correct me please post in the comment section or email me directly: dnenni at SemiWiki dot com.