
Happy Birthday Dear Cadence…
by Paul McLellan on 08-14-2013 at 8:30 pm

Cadence is 25 years old this year, on June 1st if you want to be precise.

The most direct ancestor of Cadence was SDA (which might or might not have stood for Solomon Design Automation). SDA was founded by Jim Solomon in 1983. It turns out that one of the early employees was Graham Wood, a guy I shared an office with while we were both doing our PhDs in Edinburgh, Scotland. After getting his PhD he worked at Bell Labs in Murray Hill for a time before moving out to California. I got my PhD and came straight to California to join VLSI Technology, itself a pre-IPO startup when I joined, although it went public in 1983 before SDA even existed. Graham suggested that I should come and interview at the new SDA, but I decided I was happy at VLSI Technology, where we were creating the ASIC revolution and had, at the time, the best IC design tools available. I wasn’t smart enough to realize that the money was all in the software and that customers would not want their EDA tools to be tied to their manufacturing. Graham went on to invent SKILL which, even today over 25 years later, is at the heart of Cadence’s layout environment.

SDA was funded in a novel way, by subscriptions from a handful of semiconductor companies, initially National Semiconductor (where Jim Solomon worked) and General Electric. Subsequently they would add Harris Semiconductor, Ericsson, Toshiba and SGS (in Italy; today it is the S in STMicroelectronics). These companies knew that semiconductor design was changing but that they didn’t have strong enough internal groups to develop their own toolchains.

SDA had analog design tools, early place and route, and an early framework to tie all the graphics and data management together (although I don’t think they used the word framework back in that era).

SDA filed to go public. They did the roadshow. They decided the IPO would take place on October 19th, 1987. Oops. That day is known as Black Monday, the day of the 1987 stock-market crash when the DJIA fell almost 23%.

Meanwhile, Glen Antle had formed a company called ECAD that provided design rule checking. By formed I mean that it was spun out of System Engineering Laboratories, SEL. That DRC was called Dracula, of course. They would continue to develop LVS (layout-versus-schematic) and other products.

Earlier in 1987 they also filed to go public, did their roadshow and decided the IPO would take place on June 10th. The stock market didn’t crash and ECAD went public successfully.

In mid-1988, SDA and ECAD merged. The deal was structured as an acquisition of SDA by ECAD since ECAD was already public. The new company was called Cadence. Somewhat surprisingly, given that he wasn’t the CEO of SDA and that ECAD was, at least on paper, the acquiring company, the new CEO of Cadence was Joe Costello.

There were a lot of mergers over the years but I think that three were especially important:

  • Tangent: place & route. The Tangent products became cell-ensemble, gate-ensemble and cell3 (3 layers of metal, count them) and were the cornerstone of Cadence’s leadership in automated physical design.
  • Gateway: simulation. Gateway’s Verilog language and simulator were the foundations of Cadence’s entire simulation product line and what it has grown into today.
  • Valid: front-end design. The acquisition of Valid made Cadence the largest EDA company.

I was unable to escape the gravitational tractor beam of Cadence forever. I was the VP of Engineering at Ambit when we were acquired in 1998 and ended up staying for 3 years.

More details on Cadence’s 25th anniversary are here.


450mm Wafers are Coming!
by Daniel Nenni on 08-14-2013 at 8:05 pm

The presentations from the 450mm sessions at SEMICON West are up now. After talking to equipment manufacturers and the foundries I’m fairly confident 450mm wafers will be under our Christmas trees in 2016, absolutely. TSMC just increased CAPEX again and you can be sure 450mm is part of it. SEMI has a 450mm Central landing page HERE. The SEMICON West 450mm Transition presentations are HERE. The Global 450mm Consortium is HERE. Everything you ever wanted to know about 450mm wafers is just a click away; you’re welcome.

Intel, Samsung, and TSMC have invested heavily in 450mm and will have fabs built and operational in 2015 (my opinion). Given the pricing pressures and increasing capacity demands of the mobile semiconductor market, 450mm wafers will be mandatory to maintain healthy margins. Based on the data from SEMICON West and follow-up discussions, this is my quick rundown on why moving from a 12” wafer (300mm) to an 18” wafer (450mm) is the next technical innovation we will see this decade.

First and foremost is timing. 14nm wafers will begin production in 2014, with 10nm slated for 2016. Ramping already-production-worthy 14nm wafers in a new 450mm fab reduces risk, and the semiconductor industry is all about reducing risk. Second is wafer margins. As I mentioned before, there will be a glut of 14nm wafers, with no fewer than six companies (Intel, Samsung, TSMC, GLOBALFOUNDRIES, UMC, and SMIC) manufacturing them 24/7. The semiconductor industry has never seen this kind of total capacity increase for a given node. Add in that the mobile electronics market (phones and tablets) has reached commodity status, and wafer margins will be under more pressure than ever before. Just as the top criteria for investing in real estate are location, location, location, the wafer purchasing criteria at 20nm and below will be price, price, price.


According to Intel a 450mm fab will cost twice as much as a 300mm fab, with equipment accounting for the majority of the delta. The wafer handling equipment is a good example: the additional size and weight of 450mm wafers will require complete retooling. If you have never been in a fab, let me tell you it is something to see. The wafers zip around on ceiling-mounted shuttles like something out of a Star Wars movie. As much as I would like to change our dinner plates at home from 12” to 18” to accommodate my increasing appetite, I certainly don’t want to buy a new dishwasher and new cabinets to store them.

The ROI of 450mm wafers, however, is compelling. A 450mm fab with equal wafer capacity to a 300mm fab can produce more than 2x the number of die. If you roughly calculate die cost, a 14nm die from a 450mm wafer will cost 23% less than one from a 300mm wafer. This number is an average of numbers shared with me by friends who work at an IDM, a foundry, a large fabless company, and an equipment manufacturer. Sound reasonable?
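For a sense of where a number like that could come from, here is a back-of-the-envelope sketch. The die-per-wafer ratio follows from wafer area alone; the 1.7x processed-wafer cost premium is purely an assumption for illustration (it is not a figure from any of the sources above), chosen to show how a saving in the low twenties of percent can fall out of the arithmetic.

```c
/* Back-of-the-envelope 300mm vs 450mm die cost comparison.
 * The 1.7x processed-wafer cost premium is an assumed, illustrative number. */
#include <stdio.h>

int main(void) {
    double area_ratio = (450.0 * 450.0) / (300.0 * 300.0); /* ~2.25x die per wafer */
    double wafer_cost_300 = 1.0;  /* normalized */
    double wafer_cost_450 = 1.7;  /* assumption, not a quoted figure */

    double cost_per_die_300 = wafer_cost_300 / 1.0;
    double cost_per_die_450 = wafer_cost_450 / area_ratio;

    printf("Gross die ratio (450mm/300mm): %.2fx\n", area_ratio);
    printf("Cost per die on 450mm: %.0f%% of 300mm (a %.0f%% saving)\n",
           100.0 * cost_per_die_450 / cost_per_die_300,
           100.0 * (1.0 - cost_per_die_450 / cost_per_die_300));
    return 0;
}
```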



Compressing OpenGL ES textures
by Don Dingee on 08-14-2013 at 6:00 pm

The 80s called, and they want lazy programming back. Remember “Mr. Mom”? Michael Keaton is talking about rewiring the house, and Martin Mull asks if he’s going to use all 220V, and Keaton responds “Yeah, 220, 221, whatever it takes.” Not knowing what’s inside can make you look silly.

Such is the case with OpenGL ES. Taking a look at how the device actually supports graphics can mean big differences in results. A key area of differentiation is how texture maps are stored, and the support for that in hardware and software. Storing uncompressed or lightly compressed stuff works, but is not very efficient – modern implementations rely on texture compression algorithms to get faster results with less bandwidth and better memory utilization.
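To put rough numbers on that, here is a quick arithmetic sketch of the footprint difference for a single 1024x1024 texture. The texture size is just an example and mipmaps are ignored; the bits-per-pixel figures are the standard rates for each format.

```c
/* Rough texture memory arithmetic for a 1024x1024 texture, no mipmaps. */
#include <stdio.h>

int main(void) {
    const double pixels = 1024.0 * 1024.0;
    const double mb = 1024.0 * 1024.0;

    double rgba8888 = pixels * 32.0 / 8.0;  /* 32 bpp, uncompressed */
    double bpp4     = pixels *  4.0 / 8.0;  /* ETC1 / DXT1 / PVRTC 4bpp */
    double bpp2     = pixels *  2.0 / 8.0;  /* PVRTC 2bpp */

    printf("Uncompressed RGBA8888: %.2f MB\n", rgba8888 / mb);
    printf("4 bpp compressed:      %.2f MB\n", bpp4 / mb);
    printf("2 bpp compressed:      %.2f MB\n", bpp2 / mb);
    return 0;
}
```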

One reason the graphics on Apple smart devices are so beautifully consistent is TextureTool, native support in the iOS SDK for compressing textures into the PVRTC format. In and of itself that isn’t remarkable, until you add that PVRTC is supported directly in hardware in the Imagination Technologies PowerVR SGX family of GPUs found in Apple devices. Hardware acceleration for software features almost always wins, and if an accelerator is available it should be utilized. With one choice for texture compression in iOS, everyone and everything optimizes for it, and all is well.

Now move over to the Android camp for comparison. There are several texture compression formats in play, and a given phone may or may not have hardware acceleration for them. With underlying support in OpenGL ES, all Android versions since Froyo support ETC1, and most GPU hardware supports it (or ETC2). Four other formats are in common use, supported through OpenGL ES extensions:

  • ATITC, found in Qualcomm Adreno GPU and AMD implementations (yes, as in ATI);
  • DXTC, also known as S3TC, favored by NVIDIA, Microsoft et al (yes, as in DirectX);
  • ASTC, developed by ARM for its Mali GPUs;
  • PVRTC, for Imagination PowerVR SGX implementations.


(image courtesy Intel)

Are you beginning to sense why Android graphics benchmarking results vary so wildly? (Before going off on the visual quality the compression produces, take into consideration that there is a lighting difference between the left and right sides of that image. What I’d like to see is slices of the left-eye side, side by side, in each of the formats.)

The naïve Android developer plows into this like they are heading out to body surf waves at Newport Beach, thinking they will get decent results just from watching what other people are doing. It’s harder than it looks. I was reading a game developer forum discussing the merits of ETC1 versus the other texture compression schemes, and the inconclusive response was startling: if you pick a format not supported in hardware, what you get is software decompression on load, using CPU, not GPU, cycles. Ouch. “That may explain our loading times” might be the understatement of the decade. Even Apple advises app developers to check for the texture compression extensions before using them.
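For illustration, a minimal runtime check of that kind might look like the following in C against OpenGL ES 2.0. The extension names and enum values come from the standard GLES extension registry, but the priority order and fallback behavior are just example choices, not a recommendation.

```c
/* Pick a compressed texture format the GPU actually supports, falling back
 * to uncompressed textures if none of the common extensions is present.
 * Illustrative sketch only; a real app would also handle alpha variants. */
#include <string.h>
#include <GLES2/gl2.h>

GLenum choose_compressed_format(void) {
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext == NULL)
        return 0;
    if (strstr(ext, "GL_IMG_texture_compression_pvrtc"))
        return 0x8C00;  /* GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG */
    if (strstr(ext, "GL_EXT_texture_compression_s3tc"))
        return 0x83F0;  /* GL_COMPRESSED_RGB_S3TC_DXT1_EXT */
    if (strstr(ext, "GL_OES_compressed_ETC1_RGB8_texture"))
        return 0x8D64;  /* GL_ETC1_RGB8_OES */
    return 0;           /* none found: upload uncompressed and pay the price */
}
```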

The good news is we have general agreement on OpenGL ES for mobile devices, and using some form of texture compression. If you are an Apple developer, the choice of PVRTC is clear for the foreseeable future. If you are developing for Android, Windows, or other environments, PVRTC is one of several options. Selecting a texture compression scheme is more than a software decision, hitting four dimensions that can make or break an implementation: visual quality, memory footprint and bandwidth, power consumption, and overall performance – all functions of a GPU and software support working together.

I have an interesting assignment: to take an objective look at PVRTC, combining information from Imagination Technologies and third party sources. We’ll start with some background: four high-level discussions of some of the issues involved.

Imagination’s overview of PVRTC:
PVRTC: the most efficient texture compression standard for the mobile graphics world

Apple’s OpenGL ES programming guide:
Best Practices for Working with Texture Data

The Android view of OpenGL ES including texture compression:
OpenGL ES | Android Developers Guide

A good overview written by an Intel intern (with some “consider the source” caveats):
Android Texture Compression

Future posts will look at how PVRTC works, what it clearly does well (example: producing small files), where there is some debate (example: visual quality on some images and different algorithms), and more. I’d be happy to hear from those with experience on these issues, and other useful resources we should be looking at.



An EDA Acquisition that Worked
by Daniel Payne on 08-14-2013 at 5:30 pm

I first heard about Andrew Yang back in 1993 when he founded a Fast SPICE company called Anagram, which was then acquired by Avant! in 1996. Andrew’s latest EDA company, Apache Design, Inc., was started in 2001, then acquired by ANSYS in 2011. Most EDA mergers simply don’t work, for one or more reasons like:

  • Incompatible corporate cultures
  • Product overlap, rendering EDA tools redundant
  • Loss of key people, necessary for continued success
  • Loss of a dedicated sales channel
  • Product line neglect, leading to a stagnant EDA tool which becomes less competitive
  • Lack of product or sales synergy


Andrew Yang, Apache co-founder

Being curious, I set out to uncover why the ANSYS acquisition of Apache was successful and did not fall apart.

Q&A

Q: You’ve gone through more than one acquisition in EDA, so why did the ANSYS acquisition of Apache turn out successful?

We’ve had a simple strategy: a non-overlapping product line and non-disruptive execution. Our Apache team has a great track record, so there was no disruption in how we did our business.

Other EDA companies can acquire overlapping product lines, which causes problems.

Q: How do products from Apache and ANSYS have synergy?

ANSYS always had the culture of expanding into new industries, like the semiconductor space. With Apache, now ANSYS adds chip-aware tools.

From Apache’s view we look at Chip-Package-System type of EDA analysis tools, so joining ANSYS helped us grow into new markets.

Q: Often in EDA, the company being acquired will have little control over product and sales direction. Why was this different with ANSYS?

The strategy was to not disrupt what was working with Apache in terms of product and sales. Apache customers continue to work with their same account managers and AE’s, so they’ve had a continuous experience.

Q: Is Apache growing?

Yes, in the past 8 quarters we have met or exceeded our sales targets, we’re growing faster than the EDA industry rate. Also our margins have been growing, not just sales.

Q: What is new at Apache since the acquisition?

We have been constantly innovating with new products and efficiency, because we have to keep up with Moore’s Law in terms of capacity. We are certified for FinFETs at TSMC and Samsung. Our Distributed Machine Processing (DMP) technology allows us to scale to the capacity requirements of the next several years.

Other approaches like High Performance Computing (HPC) are focused on parallel processing; with DMP, however, we are solving highly connected power meshes that demand the highest accuracy.

Q: Who is using Apache tools today?

IC Insights published a list of the top 20 semiconductor companies, and we are serving all 20 of these leading companies with Apache tools. Most of our growth is coming from the mobile SoC companies; we serve 9 of the top 10 mobile SoC companies.

Among automotive companies, we are also serving 9 of the top 10.

Q: What are Apache’s plans for the next 12 months to ensure continued growth?

Keep our sales team focused on Apache products and our AEs focused on supporting customers, so no structural changes. Our strategy is to continue to keep up with Moore’s Law.

Q: What would you say to new EDA start-ups?

There are lots of challenges to be solved going down to the 10nm node.

Q: Does Apache have any lawsuits?

No, we have been at the forefront of technology, so nobody has filed suit against us.

Q: How about patents?

We do have a good number of patents, and within ANSYS we continue to protect our intellectual property by adding more.

Further Reading

Dr. Yang blogged about the two year anniversary of the acquisition earlier this month.



Save the Dates
by Paul McLellan on 08-13-2013 at 3:22 pm

There are several events in Silicon Valley coming up of general interest to people working in EDA and the semiconductor industry.

SEMI 16th Annual Valley Lunch Forum. August 22nd, 11.30am to 1.30pm, Santa Clara Marriott

  • What are the Opportunities for Advanced Semiconductor Devices?
  • Where will the year end for 2013?
  • Will we have a double-digit increase in 2014?

Speakers: Dan Freeman (Gartner), Mike Corbett (Lynx Consulting), Brian Matas (IC Insights)
Details and registration here.

GSA Executive Forum. September 25th, 12pm to 6pm, Rosewood Sand Hill Hotel

  • Keynote: David Small, Verizon
  • Keynote: Condoleezza Rice
  • Panel session moderated by Aart de Geus

Details and registration here.

EDAC Back to the Future. October 16th, 5.30pm to 9pm, Computer History Museum
Join your colleagues on Wednesday, October 16, 2013 for an evening of networking to celebrate the EDA industry!
Details (not many yet) here.


Wanna Buy A Blackberry?
by Paul McLellan on 08-13-2013 at 2:26 pm

So Blackberry (formerly known as Research In Motion, or RIM) is up for sale. Basically, apart from some cash in the bank, its main value now seems to be patents and, perhaps, some security technology. The murderers are in Cupertino and Mountain View: Apple’s iPhone (and iPad) and Google’s Android, along with its licensees, most especially Samsung.

When the iPhone was first released, Blackberry’s CEO said, “In terms of a sea-change for Blackberry, I think that is overstating it.” Of course he wasn’t the only person to underestimate the impact that the iPhone would have on the entire mobile market. Both Kallasvuo, then CEO of Nokia, and Ballmer, CEO of Microsoft, made disparaging remarks about the iPhone being just a handset and unappealing without a physical keyboard (like Blackberry had). Within a very short time Apple was making over half the profits of the entire mobile industry (with just 4% market share) and Samsung was making most of the rest.

Blackberry’s stock peaked at $236 soon after the iPhone launch, has gradually declined, and is now under $10. They did several things wrong, not least assuming that their strong position in the enterprise market would not be affected by whatever happened in the consumer market. In fact the era of BYOD, bring your own device, started, and people were not prepared to carry an iPhone for personal use and a Blackberry for business use. After all, email is email, and when Blackberry was just about the only mobile device to support it (Nokia had some devices too, but mainly in Europe) it was a killer app. When every Android and iPhone had email, not to mention internet access, maps, videos and more, email was no longer enough to get every venture capitalist and company executive to use a Blackberry. The addictiveness went out of the Crackberry.

They made an ill-advised move into tablets with the PlayBook, astoundingly the only tablet that didn’t support email, Blackberry’s killer app; the PlayBook needed to be paired with a Blackberry phone for that. Soon they were out of tablets.

Finally they introduced their first touchscreen devices and a completely reworked version 10 of their operating system (built on top of QNX, which they had by then acquired). But it was too little, too late: consumers already had smartphones and, by and large, would upgrade to newer versions of the phones that they were already used to.

So now Blackberry is like a twenty-year-old Triumph TR7, only useful for spare parts. And worth next to nothing. So who might buy them? I’ve no real idea. Other analysts have the usual suspects: Microsoft, Samsung, Amazon, Cisco, HP and IBM. All make some sense, but Blackberry is a tarnished brand based on aging technology.


How Resistant to Neutrons Are Your Storage Elements?
by Paul McLellan on 08-13-2013 at 1:01 pm

There are two ways to see how resistant your designs are to single-event effects (SEE). One is to take the chip, or even the entire system, put it in a neutron beam and measure how many problems occur in this extreme environment. While that may be a necessary part of qualification in some very high-reliability situations, it is also too late in the design cycle in most circumstances. What is needed is software to estimate the reliability during design, when there is still time to do something about it.

iRoc has two tools for doing this. TFIT is used to evaluate individual cells, such as flip-flops and memory elements, and assess their failure rate, known as FIT (failures in time, the number of failures expected per billion device-hours). The second program, SoCFIT, is used at the chip level once TFIT has been run on all the cells. It works out the FIT for the entire design based on how the various cells have been connected. A neutron that misses a flop may still cause a problem if it hits a cell connected to the flop and the resulting current spike causes the storage element to change its value.
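As a toy illustration of the kind of roll-up such a chip-level tool automates (this is not iRoc's actual SoCFIT methodology, just the basic arithmetic of combining per-cell FIT rates; all numbers are invented):

```c
/* Toy chip-level FIT roll-up: sum per-cell-type FIT contributions, weighted
 * by instance count and an assumed derating factor (the fraction of upsets
 * that actually propagate to a visible error). Illustrative numbers only. */
#include <stdio.h>

struct cell_type {
    const char *name;
    double fit_per_instance;  /* FIT per instance, e.g. from cell-level analysis */
    long   instances;
    double derating;          /* assumed architectural derating, 0..1 */
};

int main(void) {
    struct cell_type cells[] = {
        { "flip-flop", 0.0010, 2000000, 0.30 },
        { "SRAM bit",  0.0002, 8000000, 0.10 },
        { "latch",     0.0008,  500000, 0.25 },
    };
    double chip_fit = 0.0;
    for (size_t i = 0; i < sizeof(cells) / sizeof(cells[0]); i++) {
        double contribution =
            cells[i].fit_per_instance * cells[i].instances * cells[i].derating;
        printf("%-10s contributes %.1f FIT\n", cells[i].name, contribution);
        chip_fit += contribution;
    }
    printf("Estimated chip-level FIT: %.1f\n", chip_fit);
    return 0;
}
```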


iRoc has just released TFIT 3.0, the latest version of the cell-level analysis tool, with some major changes:

  • the necessary design layout parameters are automatically extracted from the cell layout
  • the temperature of the device may now be set and is taken into account during soft-error simulation
  • it can analyze a named cell in the middle of a design without the cell having to be moved to a separate file
  • output can now be exported as an XML file, which can later be used by SoCFIT or by other analysis programs


Most SoC designers do not create their own cell libraries, so they are unlikely to use TFIT directly themselves. TFIT is intended to be used by library and memory designers so that they can create libraries with acceptable reliability. Note that it is not possible to design a library that is completely immune to SEE; what is important is to create a library with fairly uniform FIT scores. Like building a chain, you want all the links to have roughly the same strength. A cell with an especially good FIT is like an extra-strong link in the chain: it is probably a waste of resources, since it is the weakest links that determine the strength of the chain, just as it is the cells with the poorest FIT scores that it makes sense to focus on improving in order to improve the reliability of the library.


Electronic System Level: Gary Smith
by Paul McLellan on 08-12-2013 at 5:07 pm

Gary Smith has been talking about how the electronic system level (ESL) is where the future of EDA lies, as design teams move up to higher levels encompassing IP blocks, high-level synthesis, software development using virtual platforms and so on. At DAC this year in Austin he talked about how the fact that EDA controls the modeling process for semiconductors is the secret sauce that should allow EDA to move up into the embedded software space and start to improve its productivity in the same way that semiconductor design productivity has improved over the last few decades.

I don’t think that the transition to ESL took place the way we expected, nor did it take place as early as we expected. I worked for two virtual platform software development companies in the last decade, and Gary himself was famous for calling the move up to ESL imminent several times before it really happened in a big way.

I think most of us expected that high-level synthesis (HLS) from C/C++/SystemC would take over from RTL synthesis, and originally that was what people envisaged when they talked about ESL. Although HLS is indeed growing, and it has certain niches such as video processing where it is very strong, it turned out that for most SoC design, IP-based design was the way that we moved up a level. Many chips today contain very little “original” RTL, consisting largely of lots of IP blocks connected up using a network-on-chip (NoC), itself a form of IP.

Up another level from the IP blocks is the software component of a modern system. For many designs, this is a huge proportion of the entire design effort, often dwarfing the semiconductor design. Software is also longer lived. Any particular system, such as a smartphone, will go through many iterations of the underlying hardware, in this case what we call the application processor, while much of the software will be inherited from iteration to iteration. iOS, Android and their Apps have obviously continued to develop, but large amounts of code from several generations back are still shipped with each phone.

In fact there is a view that the only purpose of the application processor SoC is to run the software efficiently, fast enough and without consuming too much power. In this view, the specification of the design is almost all software to be run on a microprocessor such as an ARM. Only when that is either too slow or, more likely, consumes too much power, is specialized hardware used, either by creating a custom block or by using a customizable specialized processor such as CEVA, Tensilica or ARC that can offload the main microprocessor and implement special functions such as wireless modem processing, video encoding/decoding and so on, at a much superior PPA point.

On Monday August 19th from 11am to 11.45am Gary will be presenting a webinar entitled ESL—are you ready? along with Jason Andrews and Frank Schirrmeister from Cadence and Mike Gianfagna from Atrenta.

The ESL flow has been evolving and Gary believes that there have been significant breakthroughs that now mean the ESL flow is real. Gary will review these breakthroughs and go into detail on what today’s ESL tools look like and what they are capable of. “Vendors will be named and ESL heroes will be recognized.”

Registration for the webinar is here.


The Most Disturbing Economic Graphs Ever!
by Daniel Nenni on 08-11-2013 at 7:00 pm

Having driven to Silicon Valley for the past 30 years, I am acutely aware of traffic patterns, and to me they directly relate to the economy. The recession of 2009 really hit traffic patterns, with what I would estimate as a 20% unemployment rate in Silicon Valley. I could leave my home in Danville at any time of day and have no traffic problems. That is certainly no longer the case, and I blame the mobile electronics boom, absolutely.

One of the websites I frequently visit, second only to SemiWiki, is Business Insider. Henry Blodget has a staff of researchers and puts out the most interesting content on the internet today, with lots of interesting graphs too.

In the back of my mind I wonder: “Where does all the money my family is spending go?” Mobile electronics and the monthly service plans, home electronics and the monthly service plans. Everywhere I look money is being spent, profits are being made, so why are so many American families still struggling financially?

According to Henry Blodget, corporate greed is the culprit, and given these graphs I agree 100%:

Corporate profits and margins are spiking:

Wages as a percentage of the economy are at an all-time low:


Employment rates fell off a cliff in 2009 and still have not recovered:

The majority of the national income is going to the executive ranks:

Graphs were created by Henry Blodget’s minions using FRED Graph, the Federal Reserve Economic Data tool.

As I mentioned before, I think the stock market is a racket where insiders profit at the expense of the masses. Publicly traded companies are at the mercy of Wall Street so by my definition they are part of the racket. One of the reasons why I favor GlobalFoundries is that they are privately held and can make decisions based on the greater good of the fabless semiconductor ecosystem versus the short term gains Wall Street favors. Just my opinion of course.

Wall Street the Movie, Gordon Gekko:

The richest one percent of this country owns half our country’s wealth, five trillion dollars. One third of that comes from hard work, two thirds comes from inheritance, interest on interest accumulating to widows and idiot sons and what I do, stock and real estate speculation. It’s bullshit. You got ninety percent of the American public out there with little or no net worth. I create nothing. I own. We make the rules, pal. The news, war, peace, famine, upheaval, the price per paper clip. We pick that rabbit out of the hat while everybody sits out there wondering how the hell we did it. Now you’re not naive enough to think we’re living in a democracy, are you buddy? It’s the free market. And you’re a part of it. You’ve got that killer instinct. Stick around pal, I’ve still got a lot to teach you.



RTL Design For Power
by Daniel Payne on 08-11-2013 at 2:25 pm

My Samsung Galaxy Note II lasts about two days on a single battery charge, which is quite an improvement over the Galaxy Note I, which only lasted a day. Mobile SoCs are being constrained by battery life limitations, and consumers love longer-lasting devices.

There are at least two approaches to Design For Power:

  • Gate-level techniques
  • RTL-level techniques


Continue reading “RTL Design For Power”