Executive Opinion: The Future of EDA is Bright
by pravin on 06-19-2012 at 7:30 pm

The days following a major conference like DAC are a good time to reflect on the overall health and vibrancy of the electronic design automation (EDA) industry. I’ve been in EDA for 21 years and built two successful startups, and over the last couple of years I have witnessed some decline in both new talent and venture investment coming into EDA.


However, I am very optimistic that the future for EDA is brighter today than ever for a variety of reasons – the mobile computing revolution, a surge in semiconductor revenue, the transition to 20nm, and the emergence of 3D-IC manufacturing, just to name a few. These new industry trends and advanced process node challenges will likely make the EDA industry look more interesting to engineers, vendors, entrepreneurs, and investors.

Advanced process node challenges
The transition to smaller geometry nodes has enabled the semiconductor industry to create high performance systems-on-a-chip (SoC) with lots of functionality without compromising on overall power consumption. These versatile new SoC chips have helped spawn new market segments, the most successful being the smartphone and tablet markets. At 20nm, your mobile application processor will have a billion transistors, four processor cores, and processing power equivalent to what a typical desktop PC had just 3 years ago. It will have the ability to play full HD video with the graphics performance of a gaming console, while burning less power and providing longer battery life than ever before. 3D-IC manufacturing will enable stacking of ICs, resulting in packing even more functionality in a smaller footprint with overall lower power. The transition to smaller process nodes and the move to 3D-IC is accompanied by a myriad of technical challenges that should trigger the interest of the “problem solving” talent pool.

EDA is central to modern semiconductor advances
The revolution in semiconductor SoCs that is powering the mobile market would not have been possible without innovations in design automation software. 20nm brings a new variety of design challenges that stress every aspect of design automation. Billions of transistors and SoCs with multiple power domains mean new verification challenges. Simulation software has become more versatile over the past few years and has added new verification techniques like property checking and IP modeling, while continuing to speed up the core simulation. Newer emulation boxes provide billion-gate capacity for faster hardware bring-up and software debug of large SoCs. IC implementation offers new challenges in the areas of capacity, low power, and multi-mode, multi-corner timing closure of gigahertz chips. 20nm also brings new manufacturing requirements, like double patterning, that need to be handled effectively during design implementation.

EDA Upsides
To thrive in the EDA industry, engineers need a unique skill set that blends electronics engineering and computer science. Specialization in design knowledge, along with algorithm and software expertise, creates a strong, sustainable, and marketable skill set. There are over 70 niches in the EDA market, giving engineers opportunities to explore and contribute in many new areas. EDA is one of the very few industries that remain stable during recessions and downturns, as customers always continue to invest in new design starts, providing predictable, long-term employment.

On the vendor front, EDA companies have been able to build a strong business around their anchor products. The move from perpetual to term licenses has resulted in a much more predictable revenue model for EDA vendors. Innovative products and increasing design challenges have also resulted in improved pricing and better revenue for all EDA companies, large and small.

EDA is a great industry for entrepreneurs, and I can vouch for that from my personal experiences. Every transition to a smaller process node brings new disruptions to the design flow, opening new doors for entrepreneurs to challenge the incumbents. EDA is perhaps the only industry where a leading edge, multi-billion dollar company is willing to use software from a startup on its next generation design. At this year’s DAC there were more than ten new startups exhibiting. From an investor perspective, the cost of building an EDA startup is relatively small compared to most industries, and many EDA startups have reached first revenue with fewer than 10 engineers. Since EDA startups can reach profitability fairly quickly, investment in EDA can produce superior returns if managed carefully. In the last three years, EDA startups have had over $800 million in exits, resulting in great returns for investors.

Conclusion
The EDA Consortium, a representing body of the EDA industry, has a slogan “EDA: Where Electronics Begins.” It sums up the importance of this industry and how it affects the $300 billion semiconductor market. EDA technical challenges are more dynamic now than ever, and will continue to be a hotbed of engineering innovation, a driver of semiconductor advances, and a vibrant business.

Pravin Madhani is the General Manager, Place and Route Division, at Mentor Graphics.


It takes an act of Congress…
by Beth Martin on 06-19-2012 at 4:29 pm

Foreign students earn roughly two-thirds of all engineering Ph.D.s granted in the U.S., yet there is no policy to allow, let alone encourage, them to stay in the U.S. after graduation. I was aware of this problem 14 years ago when I started working in EDA, but haven’t paid much attention to it since.

So, I scoured the congressional websites to learn what our leaders have been up to in terms of making it easier for foreign engineers to work and create jobs in the U.S. I also stumbled on a couple of bills that promote and foster science and technology innovation. I found less than I expected, and much of it already dead or stalled. There are imponderable feats of politics involved in getting anything to the president’s desk for signing, but here are the main bills you should know about:

The Innovate America Act of 2011 (S. 239) was introduced in January 2011 by Senator Amy Klobuchar (D-MN) with bipartisan cosponsors. It would expand research tax credits, create tax credits for donating equipment to schools, fund 100 new STEM high schools, and remove regulatory barriers for exporting industries. It was referred to the Committee on Finance, of which Klobuchar is a member, and there it died.

But, a similar bill, the America Innovates Act of 2012 (H.R.4720), introduced by Rush Holt (D-NJ), would establish an Innovation Bank to fund science and technology job training. There has been no action on this bill yet.

There’s been more action specifically around immigration reform, with equally little to show for it. Zoe Lofgren (D-CA) and 25 cosponsors introduced the Immigration Driving Entrepreneurship in America (IDEA) Act of 2011 (H.R. 2161) in June 2011. It would have given priority worker visas to immigrants with a master’s degree or higher in science, technology, engineering, or mathematics (STEM workers). I say “would have” because the IDEA Act was sent to some committee to die.

But wait! The Fairness for High Skilled Immigrants Act (H.R. 3012) was introduced by Jason Chaffetz (R-UT) in November 2011 and passed the US House with a stunning 89% aye vote, even though the bill doesn’t have a catchy acronym. However, it’s been blocked in the Senate by Charles Grassley (R-IA). I think there are ways to get around the hold (through a cloture motion), but what do I know? Stay tuned.

Then there is the Startup Act 2.0 (S. 3217), introduced by Senator Jerry Moran (R-KS) in May 2012; hearings should happen soon. This bill reforms immigration law to create new STEM visas, creates an entrepreneur’s visa, and eliminates the per-country cap for work visas. You can read a thoughtful opinion on it at TechCrunch. There is a fair amount of press on this one, from the Washington Post to the Huffington Post.

Also introduced in May was the SMART Jobs Act (S. 3192) from Senator Lamar Alexander (R-TN). The Sustaining our Most Advanced Researchers and Technology Act (really? All that just to get the SMART acronym?) is designed to keep foreign-born US grad students here to work. A neat part is that green cards issued under this visa rule wouldn’t count towards the existing per-country caps. It’s getting a lot of support from technology companies and organizations, so maybe it will get some traction.

If the SMART Jobs Act fails, there’s always the similar STAR Act of 2012 (ready for this? STAR = Securing the Talent America Requires for the 21st Century Act of 2012 [S. 3185]. Seriously, I imagine some high-fives going around the marbled halls for that one.) This one was introduced by Senator John Cornyn (R-TX), and would provide additional green cards for skilled immigrants in STEM fields.

Considering that only about 4% of introduced bills ever get past the committee stage (based on 2009-2010 data), there’s little hope any of these bills will become law. The best chance to exercise your influence (aside from handing over a chunk of your wealth) is to call or email your senator and representative and urge them to act on these bills. You can find your senator and representative through this website.

M. Beth



Selecting Non-Volatile Memory IP: dynamic programming from Novocell Semiconductor leads to a lower “Cost of Ownership”
by Eric Esteve on 06-19-2012 at 9:07 am

The NVM IP offering from Novocell Semiconductor is based on SmartBit, an antifuse, One Time Programmable (OTP) technology. The OTP blocks are embedded in standard logic CMOS without any additional process or post-process steps, and can be programmed at the wafer level, in package, or in the field, as the end user requires. What makes SmartBit technology unique is its “breakdown detector,” which precisely determines when the voltage applied to the gate (which programs the memory cell by breaking down the oxide and thereby allowing current to flow through it) has effectively created an irreversible oxide breakdown, the “hard breakdown,” as opposed to a “soft breakdown,” which is an apparent, reversible oxide breakdown. This is the first advantage: when the OTP programming step is completed, the user can be sure that every bit has been set to the intended value. Apparently this is not the case with all of the various OTP architectures, even though it is what you would naively expect from any NVM technology!

How do Novocell’s competitors program an OTP block without this “breakdown detector” feature? They have to determine an “optimal programming time” (the set time) for each specific foundry process (by technology node and variation) and program the block according to this set time. Unfortunately, this set-time programming method will always leave some “remnant” bits un-programmed after the initial programming cycle. To be sure that every bit in the NVM block has been set to the correct value, at least one additional programming cycle is necessary to bring up the yield. This approach is obviously time consuming (it takes at least twice the initial programming time), which can be pretty costly if, for example, you have to program the OTP on the tester, at wafer or packaged-IC test, given the cost of test time on multi-million dollar test equipment. Another good point for the SmartBit architecture: not only is the programming more reliable and deterministic, the cost of ownership linked to the programming step is also lower.
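
To make the contrast concrete, here is a minimal sketch of the two programming strategies. This is my own illustration, not Novocell’s implementation, and the breakdown-time distribution and timing numbers are invented for the sake of the example:

```python
import random

# Toy model (all numbers invented): each bit's hard breakdown arrives at a
# random time; real distributions depend on process, voltage, and oxide.
def breakdown_time():
    return random.lognormvariate(0.0, 0.5)  # arbitrary time units

def program_with_detection(t_step=0.05, t_max=10.0):
    """Closed loop: poll a breakdown detector, stop at hard breakdown."""
    t_bd, t = breakdown_time(), 0.0
    while t < t_max:
        t += t_step
        if t >= t_bd:       # detector senses the irreversible breakdown
            return True, t  # bit is known good; stop immediately
    return False, t_max     # flagged explicitly instead of failing silently

def program_fixed(t_set=2.5):
    """Open loop: apply the pulse for a pre-characterized 'set time'."""
    return breakdown_time() <= t_set, t_set  # slower bits are left 'remnant'

random.seed(1)
n = 100_000
for name, run in (("detector", program_with_detection), ("set time", program_fixed)):
    results = [run() for _ in range(n)]
    first_pass_yield = sum(ok for ok, _ in results) / n
    t_avg = sum(t for _, t in results) / n
    print(f"{name}: first-pass yield {first_pass_yield:.4f}, avg time per bit {t_avg:.2f}")
```

Even in this toy model, the closed loop finishes sooner on average and never reports a bit as programmed unless breakdown was actually detected, which is exactly the reliability and cost-of-ownership argument above.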

Novocell’s competitors do try to offer a safer approach, based on error correction circuitry or redundancy. Consider what this means in practice if an IP includes full 2X bit redundancy in order to achieve yield and performance requirements: full redundancy can effectively double the chip area required for the customer-ordered bit density! It is a safer approach, even if theoretically not as safe as SmartBit, bought at the price of extra silicon. That area overhead can be highly penalizing when the NVM block is large and the device ships in the volumes we see in the consumer electronics or wireless handset segments: millions if not tens of millions of units! Again, a costly tradeoff for an inferior programming methodology versus the Novocell dynamic approach based on “breakdown detection”…



Clearly, one of Novocell’s differentiators is reliability, thanks to the “breakdown detector.” We have also seen that choosing SmartBit technology can dramatically reduce the total cost of ownership, by cutting programming time by a factor of two or more, and by using a smaller NVM IP block, leading to less real estate and a smaller chip than competitors, who are forced to add redundancy to reach the same level of reliability – or yield.

Eric Esteve from IPNEST



What’s new with HSPICE at DAC?
by Daniel Payne on 06-18-2012 at 5:50 pm

One year ago I met with Hany Elhak of Synopsys to get an update on what was new with HSPICE in 2011, so this year at DAC Hany met me at the Synopsys booth for a quick update.

HSPICE has a capability called Precision Parallel, so with 16 cores your IC circuit simulations run about 10x faster than on a single core.
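
Taking that 10x-on-16-cores figure at face value, Amdahl’s law gives a feel for how parallel the solver must be. This back-of-envelope calculation is mine, not a Synopsys number:

```python
# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N), p = parallelizable fraction
N, S = 16, 10.0
p = (1 - 1 / S) / (1 - 1 / N)  # solve the speedup equation for p
print(f"implied parallel fraction: {p:.1%}")                        # ~96%
print(f"predicted speedup on 32 cores: {1/((1 - p) + p/32):.1f}x")  # ~14x
```

In other words, roughly 96% of the simulation work would have to parallelize cleanly to hit that number.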


TSMC Theater Presentation: Solido Design Automation!
by Daniel Nenni on 06-17-2012 at 9:00 pm

For a small company, Solido has some very large customers and partners, TSMC being one of them. Why? Because of the demand for high yield and memory performance on leading-edge technologies, that’s why.

Much has been made of, and will continue to be said about, the march of Moore’s Law. While economies of scale and performance vs. power are the main justifications, there are increased design challenges that make the designs of prior decades seem quaint by comparison. Smaller transistors allow lower cost per function and more power efficiency, but they also come with increased variation effects, making performance vs. power vs. yield tradeoffs a necessary part of the design flow.

With each successive process shrink, there is a corresponding increase in the number of SPICE simulations required to push design performance while ensuring manufacturability. Solido Design Automation provides solutions for reducing the number of simulations needed during design and verification, while still providing the same or greater visibility into design choices and their impact on yield and risk. As a leading provider of efficient variation analysis tools, Solido continues to collaborate with TSMC to deliver effective analysis capabilities on the latest nanometer technologies, supporting designers of memory, standard cell, low power, and analog/RF circuits.

Memory designers have perhaps the greatest challenge in maximizing their design performance within the capabilities of a particular process technology, needing to validate yield and performance to 4-6 sigma on bit cells and sense amps and 2-4 sigma at the array level. While Monte Carlo is the preferred solution, it’s simply impractical to simulate the billions of points needed for 6-sigma analysis. Since the analysis still has to be done, a number of approaches have evolved that seek to bypass Monte Carlo, but they each suffer limitations in accuracy, scalability and, especially, verifiability.
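
A quick calculation shows why (this arithmetic is mine, not Solido’s): the one-sided Gaussian tail probability at 6 sigma is about 1e-9, so brute-force Monte Carlo needs on the order of a billion samples just to see a single failure, and orders of magnitude more to estimate the failure rate with any confidence:

```python
from math import erf, sqrt

def tail_prob(sigma):
    """One-sided standard-normal tail probability P(X > sigma)."""
    return 0.5 * (1 - erf(sigma / sqrt(2)))

for s in (3, 4, 6):
    p = tail_prob(s)
    # observing ~100 failures gives roughly 10% relative error on the estimate
    print(f"{s}-sigma: p_fail ~ {p:.2e}, ~{100 / p:.1e} samples for ~100 failures")
```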

Solido’s Memory+ Suite goes back to the core Monte Carlo analysis designers trust and handles the billions of samples with intelligent adaptive techniques that focus simulation resources on the high-sigma tails of the distribution. Since Memory+ uses actual Monte Carlo samples, it can provide simulation results around the target sigma, high-sigma corners for use in design development, and even the full PDF. These options give designers the detailed insight into non-linear effects and design sensitivities that they need to make informed sizing decisions.
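
Solido’s adaptive algorithm is proprietary, but a classic mean-shifted importance-sampling sketch illustrates the general idea of steering true Monte Carlo samples toward the tail and reweighting them (the 4.5-sigma failure threshold is borrowed from the example below; the “circuit” here is a one-line stand-in for a SPICE run):

```python
import numpy as np

rng = np.random.default_rng(0)
SHIFT = 4.5  # shift the sampling density toward the failure region

def fails(x):
    # stand-in for a simulated measurement: fail past 4.5 sigma of variation
    return x > 4.5

n = 200_000
x = rng.normal(loc=SHIFT, size=n)          # sample from the shifted density
w = np.exp(-SHIFT * x + 0.5 * SHIFT**2)    # weight = N(0,1) pdf / N(SHIFT,1) pdf
p_hat = np.mean(np.where(fails(x), w, 0.0))
print(f"estimated tail probability: {p_hat:.2e}")  # exact value is ~3.4e-06
```

Plain Monte Carlo would need hundreds of millions of samples to resolve a probability that small; here the reweighted samples remain genuine simulation points, which is the property the Memory+ description emphasizes.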

Unlike other approaches, Solido’s Memory+ can handle more severely non-linear responses, making it applicable to a broad range of memory cells. In one example Solido showed, a 3- or 4-sigma analysis would appear linear, with extrapolation completely missing the failure regions occurring at +/- 4.5 sigma.

Additionally, with the full PDF available for both the bit cell and the sense amp, Memory+ can provide 3-sigma analysis at the system level, allowing designers to explore performance vs. yield tradeoffs directly. In one demonstration, a 3-sigma analysis of a 256Mb SRAM array using the Memory+ System Memory tool gave visibility into the tradeoff between timing and system-level yield in a matter of minutes. The tool is also applicable to system-level DRAM analysis.

Using memory design as just one example, Solido is able to provide designers with the necessary tools to analyze yield and performance, faster and with more consistent quality than before. As shown with Memory+, memory designers can quickly analyze designs at the cell level to 6 sigma and the system level to 3 sigma, while keeping Monte Carlo and SPICE-level accuracy.


Cadence IP Strategy 2012
by Daniel Nenni on 06-17-2012 at 7:00 pm

As I mentioned in a previous blog, Cadence Update 2012, Martin Lund is now in charge of the Cadence IP strategy. Martin read my first blog and wanted to exchange IP strategies, so we met at DAC 2012 for a chat. Not only did Martin connect with me on LinkedIn, he also joined the SemiWiki LinkedIn group, which now has 4,000+ members. So yes, he is serious about social media and the IP business.

During his 12 years at Broadcom, Martin grew the business to become the global leader in Ethernet switch SoCs for the data center, service provider, enterprise, and SMB markets, and successfully drove several strategic acquisitions. His silicon and system-level experience equips him well to scale the Cadence SoC Realization business.

Prior to Broadcom, Lund held various marketing and senior engineering management positions in the Network Systems Division of Intel Corporation and at Case Technology, a European networking equipment manufacturer acquired by Intel in 1997. Lund is an inventor on 26 issued and pending US patents. He holds a technical degree from Frederiksberg Technical College and Risø National Laboratory at the Technical University of Denmark.

Since Martin is Danish, it should not have surprised me when he used the Lego analogy for IP. Lego is a Danish company, and its bricks are often compared to semiconductor IP as the building blocks of modern semiconductor design. Legos were my number one toy as a kid. My father was convinced I would be an architect, until of course I got my hands on a Commodore PET computer, sorry about that Dad. Did you know that Lego is the largest tire manufacturer in the world? Martin did.

As a father of four, I kept my Lego habit going through my kids, and Lego blocks evolved into Lego subsystems with optimized sets targeted at vertical markets. I remember spending hours with my son building a Space Shuttle kit, and not one part was left over. A good analogy for the emerging semiconductor IP subsystems: plug and play, no parts left over.

What happened next to the space shuttle is the future of the IP business, according to Martin Lund, and I agree wholeheartedly. My son made changes and incrementally optimized the shuttle for many different uses, until of course it was reduced to a pile of building blocks by his baby sister, and we were on to the next project, which was even bigger, more complex, and in desperate need of optimizing.

Bottom line: For advanced semiconductor design, complete IP kits (off-the-shelf subsystems) will not work. There must be a significant level of optimization for differentiation and ease of integration. Trade-offs are an integral part of modern semiconductor design (power, performance, area, yield), and IP subsystems will be held to the same standard. Off-the-shelf subsystems will not win in competitive markets. Mass customization will be required. Software will be the key enabler. Clearly this is a Cadence IP versus Synopsys IP strategy, which I will blog more about later.

Side note: My oldest son, the Lego Master, very quickly mastered the computer and the internet. He is co-architect and lead administrator of SemiWiki and just received his Master’s degree in Education. Moving forward, he will prepare the legions of Lego Masters for the mathematical challenges of the new world order.


What Will Happen to Nokia?
by Paul McLellan on 06-15-2012 at 3:06 pm

News today is that Moody’s has downgraded Nokia to junk status. Nokia also announced that it will lay off 10,000 people (including about 1 in 4 of the people it employs in Finland, where it is headquartered).

For those of you who don’t know all the inside-baseball stuff about Nokia, here is a little recent history. The current CEO of Nokia, Stephen Elop, came from Microsoft at the start of last year. He wrote a famous (infamous) memo known as “burning platform” (it opened with the choice a man on a burning oil platform faces: stay or leap off). Since that memo, sales have fallen for 5 consecutive quarters, wiping out $13B in revenues and $4B in profits.

The first thing the memo did was say that the current products were not good, and as a result people stopped buying them. This is known as the Ratner effect, after a very profitable British jewellery chain called Ratners which had a store on every high street in Britain. At a dinner with finance types, Gerald Ratner said that the stuff they sold was so cheap because it was “total crap.” It got into the press, people stopped going there, and the chain quickly cratered in value and almost went bankrupt.

The next thing Elop did was decide that all smartphones would be based on Microsoft’s Windows Phone. This at a time when Nokia still dominated the smartphone market (outside the US) with phones based on Symbian and another internal operating system called MeeGo (despite the memo saying sales of Symbian-based smartphones were in terminal decline, they were actually growing strongly and outselling Apple 2:1). The only problem was that these WP-based phones would not be available for the best part of a year. This is the Osborne effect, named after a Silicon Valley businessman (coincidentally also British) who announced that his next product would be compatible with the IBM PC. This was obviously desirable, so everyone stopped buying the current products and the company ran out of money before it could deliver the sexy IBM-compatible one.


The effect of all of this is that Nokia has had the biggest loss of market share of any major business anywhere ever. By that measure Elop has to qualify as one of the worst CEOs of any company ever.

And worse, once the Lumia (WP-based) smartphone was available, it didn’t sell very well. If you are going to put all your wood behind one arrow, it had better be a good one, and this one is not.

My go-to guy on anything to do with the phone market in general, and Nokia in particular, is Tomi Ahonen (who used to work at Nokia, is Finnish, but lives in Hong Kong these days). His views on what is going on are always worth reading. Cell phones are not sold like other products (DVD players, say). You can’t just go and buy any old cell phone and then go find a carrier (at least in most countries). The carriers control which phones get sold and which do not. Many cell-phone carriers also have landline businesses (often one cellphone license would go to the incumbent telecom operator in a country), and there is one thing that they absolutely hate: Skype. And who owns Skype? Microsoft. So the carriers hate Microsoft and aren’t going to do anything to help them. Hence the lukewarm launch of Lumia (on Easter Sunday, when all the stores were closed, wonderful). Even Elop admits this, saying that retail salespeople “are reticent to recommend Lumia smartphones to potential buyers.” So this hate goes all the way to the front lines.


So Nokia is in trouble, as I’ve said before. Of course they won’t go bankrupt and shut down; someone will buy them. So who? Tomi’s pick is Samsung, although that was before the current round of layoffs and firings (he thinks they are damaged goods now). Samsung is the only company other than Apple making any real money in handsets, and would gain huge market share, pick up some additional manufacturing lines in other parts of the world, and so on. Microsoft could buy them, but it would be easier for Microsoft to keep Nokia on life support by sending them money. Lots of people, Apple in particular, could buy them for patents (although Nokia has been selling a lot of patents to keep afloat) and to stop Samsung getting them. Facebook could buy them if they really wanted to be in the handset business (but a lot of Nokia’s business is “feature phones,” i.e. dumb phones). Lots of other choices, of course. But this blog has gone on long enough.


TSMC Theater Presentation: Ciranova!
by Daniel Nenni on 06-14-2012 at 9:00 pm

Ciranova presented a hierarchical custom layout flow used on several large advanced-node designs to cut total layout time by about 50%. Ciranova itself provides automated floorplanning and placement software with only limited routing; but since the first two constitute the majority of custom layout time, and strongly influence the remainder, the overall impact can be substantial. Designs sensitive to nanometer effects like layout-dependent effects (LDE) and poly density are particularly well suited to automation; one example was a 28nm, 40,000-device mixed-signal IP block that was completely placed by one engineer in 8 days, including density optimization.

The Ciranova-enabled flow has two main phases. In the first phase, the software automatically generates a first-pass set of constraints for the entire design hierarchy, and a range of accurate floorplans. This phase is “push button” – it starts with a schematic and requires no intervention or user constraint entry. In the second phase, the user interactively refines the initial constraints, running and rerunning hierarchical placement until the entire layout matches the user’s floorplan targets and other criteria. The whole process is very fast; since the layouts are DRC-correct irrespective of rule complexity, tens of thousands of devices can be placed accurately in a few days. Ciranova’s output is an OpenAccess database which can be opened in any OA environment.

Two major advantages of this flow over normal schematic-driven layout are (1) the DRC-correct-by-construction aspect, and (2) that the entire layout is optimized at once. This approach lends itself especially well to handling proximity-related effects like LDE, where the behavior of a given device changes depending on what happens to be nearby. Since Ciranova optimizes entire regions at once, multiple LDE spacing constraints are managed together.

For a TSMC design, TSMC provides tools at the schematic level to help a user identify LDE-sensitive devices in his or her schematic and determine the spacing constraints those devices need to perform correctly. Ciranova then takes this information and produces a correct-by-construction layout that satisfies not only the LDE directives but also any other requirements: design rules, density, designer guidance such as symmetry, etc. The approach is a general one, not limited to individual modules like current mirrors and differential pairs.

Ciranova also showed a post-placement simulation study with alternate layouts of the same design: one with LDE rules applied and one without (net result: the LDE-optimized placement clocks slightly faster). Most users never get to see a comparison like this, because hand layout takes so long that few people ever do it more than one way. But an automated flow makes this kind of study and tradeoff analysis easy.

Using this approach, even very large custom IC designs under very complex design rules can be completed quickly, and typically at quality equal to or better than handcrafted layout, since much broader optimizations can be achieved than a human mask designer normally has time to explore.