
The Biggest EDA Company You’ve Never Heard Of

by Paul McLellan on 05-02-2012 at 8:30 pm

There’s this EDA company. They have over 100 tapeouts. They have $28M in funding. They have 250 people. And you’ve never heard of them. Or at least I hadn’t.

They are ICScape. They started in 2005 with an investment from Acorn Campus Ventures and delivered their first product, ClockExplorer, in 2007 and their second, TimingExplorer, in 2009. They have since gone on to develop a complete OpenAccess-based place and route system including placement, clock-tree synthesis, routing, static timing analysis, parasitic extraction and…

In 2008-2010, during the technology downturn, they survived purely on product revenue. They turned their attention to China, one market that was still buoyant. Also in China is a 20-year-old EDA company called Huada Empyrean Software (HES), which has an OpenAccess-based analog environment. HES was a subsidiary of China Electronics Corporation (CEC), China’s largest electronics conglomerate (and an SOE). HES wanted to expand outside of China and become a global player, so it was spun out of CEC, merged with ICScape and provided with $28M in funding. They have one engineering organization. HES sells the whole product line in China and Taiwan; ICScape sells it everywhere else (the US, Korea and Japan today, and Europe soon).

They have plans to become a big global EDA player. I have no idea how good their technology is, but they claim that over 100 chips have been taped out, including some at the 28nm node, so it should be pretty solid. Customers include Marvell, Huawei, ZTE, NHK and more.

The SoC product line is based around accelerating design closure by reducing the number of iterations by 50%. It consists of four tools:

  • TimingExplorer, a physically aware multi-corner, multi-mode timing ECO tool
  • ClockExplorer, which can reduce clock insertion delay by up to 50% and clock-tree power by 40%
  • Skipper, a high-performance and ultra-large capacity chip finishing solution
  • FlashLVL, a high-speed layout comparison tool

The analog product line is now in its 6th generation. It is focused on big-A small-D designs with lots of analog and limited amounts of digital. It contains:

  • interconnect-aware layout editing
  • high-capacity parallel circuit simulation
  • hierarchical parallel physical verification
  • mixed-mode, multi-corner parasitic extraction and analysis

Going forward, the plan is to bring all the technologies together, which is not as daunting a task as it might be since both product lines are natively OA-based, and at the same time to expand their channel to complete coverage everywhere.


Use a SpyGlass to Look for Faults

by Paul McLellan on 05-02-2012 at 5:24 pm

There is a famous quote (probably attributed to Mark Twain, who gets them all by default): “When looking for faults use a mirror, not a spyglass.” Of course, if you have the RTL of your IP or your design, then using a SpyGlass is clearly the better way to go. And it is getting even better, since there is a new enhanced release, SpyGlass 4.7.

Of course there are enhancements to speed and capacity to keep up with the increase in design sizes: some users have been running 280-million-gate designs through flat, overnight. There is some bottom-up hierarchical design support (with more coming in the future).

But the biggest changes are in the power area. There are some detailed improvements in UPF support, and how clock-domain-crossing analysis interacts with it.

The RTL power reduction capability has improved by a factor of two compared to the previous release. It typically achieves around 12% power reduction, and nearly 25% at times (and, of course, there are some designs where there just are not any gains to be had). The sequential equivalence checking engine has also been improved to do a better job of verifying RTL that has been modified to reduce power, whether the modification is done by hand or automatically.

Another new capability is that SpyGlass can now estimate design complexity using cyclomatic metrics, which is a measure based on branching analysis (usually in software but adapted to RTL). This is a good predictor for the time and effort that will be required to create a verification test bench for complete functional verification.
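As a rough illustration of the metric (my own sketch, not Atrenta’s actual implementation), cyclomatic complexity for a single-entry, single-exit block is simply the number of binary decision points plus one:

```python
# McCabe cyclomatic complexity: M = D + 1, where D is the number of
# binary decision points. Adapted to RTL, each conditional assignment
# or case arm contributes decisions.
def cyclomatic_complexity(num_decisions: int) -> int:
    return num_decisions + 1

# A hypothetical RTL block with 3 if-statements and one 4-way case
# (an n-way case contributes n - 1 binary decisions).
decisions = 3 + (4 - 1)
print(cyclomatic_complexity(decisions))  # 7
```

A block with higher complexity needs more test scenarios to cover all paths, which is why the metric predicts verification effort.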

There are also improvements to SpyGlass Physical: in particular, improved estimation of routing congestion and an early estimation of area, both of which give early, and therefore actionable, feedback about problems that would otherwise surface later in physical design.


Carl Icahn Blinks in Bid for Mentor Graphics

by Daniel Payne on 05-02-2012 at 3:42 pm


One year ago activist investor Carl Icahn started a hostile takeover bid for Mentor Graphics and managed to place three new board members. However, yesterday we read that:

  • Mentor Graphics will hold its annual shareholder meeting on May 30th
  • Two of Icahn’s board members are not on the roster for renewal
  • Mr. Icahn has no new board members to offer up

Read the complete proxy here.

To me this spells defeat for Carl Icahn in taking over Mentor Graphics because he is not offering up any new replacements on the board of directors. Had he been able to get just 5 out of 8 board members to agree with him, then he could’ve controlled the company. Now it appears that Carl’s three board members will be reduced to just 1, a very noticeable minority.

Mentor has a strong poison pill provision in place and only 1 Icahn board member will probably remain after the votes are tallied on May 30th.


Carl Icahn, AP Photo

Mr. Icahn does still own 14.6 percent of MENT shares, which he acquired at prices between $8 and $9 per share, so the present share price of $14.45 gives him a paper profit of over 50%, not too shabby.


If history is any indicator, then buy MENT in August, sell in April

I’ll attend the May 30th annual shareholder meeting at Mentor Graphics and let you know if there is any more drama left in this story. Hopefully we can then all turn our attention to creating value for customers through EDA tools that enable the SoC revolution.


IC design at 20nm with TSMC and Synopsys

by Daniel Payne on 05-02-2012 at 10:25 am


While the debate rages on about 28nm yield at foundry juggernaut TSMC, on Monday I attended a webinar on 20nm IC design hosted by TSMC and Synopsys. Double Patterning Technology (DPT) becomes a requirement for several layers of your 20nm IC design, which then impacts many of your EDA tools and much of your methodology.


ARM Models: Carbon Inside

by Paul McLellan on 05-01-2012 at 10:00 pm

ARM used to build their own models. By hand. They had an instruction-set simulator (ISS) called ARMulator that was largely intended for software development, and cycle-accurate models that were intended to run within digital simulators for development of the hardware of ARM-based systems.

There were two problems with this approach. Firstly, ARMulator was built using interpretive technology and consumed approximately 1000 host instructions to simulate each ARM instruction, so it ran at a few MIPS. Modern virtual platform models use Just-In-Time (JIT) compilation, cross-compiling the ARM code into host instructions and so avoiding any interpreter overhead (the compiler creates overhead, but that is minimal for instruction sequences executed many times). They thus run hundreds of times faster: fast enough to boot operating systems and debug industrial-sized loads of embedded software.
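A toy sketch of the difference (using an invented three-instruction mini-ISA, nothing like a real ARM model): the interpreter re-decodes every instruction on every execution, while the translator pays the decode cost once and thereafter runs plain host callables.

```python
# Hypothetical mini-"ISA": a program is a list of (opcode, operand).
PROGRAM = [("add", 5), ("add", 7), ("mul", 2)]

def interpret(program, acc=0):
    for op, arg in program:            # decode on every execution
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

def translate(program):
    """Compile once into a chain of host callables (the JIT idea)."""
    ops = {"add": lambda a, n: a + n, "mul": lambda a, n: a * n}
    compiled = [(ops[op], arg) for op, arg in program]  # paid once
    def run(acc=0):
        for fn, arg in compiled:       # no per-instruction decode
            acc = fn(acc, arg)
        return acc
    return run

run = translate(PROGRAM)
assert interpret(PROGRAM) == run() == 24  # (0 + 5 + 7) * 2
```

In a real JIT the compiled form is native host machine code rather than Python closures, which is where the hundreds-of-times speedup comes from.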

ARMulator was reasonably cheap to maintain since it simulated the instruction set which didn’t change much, apart from its half-hearted attempt at cycle-counting. The same could not be said for the cycle accurate models. These were expensive to develop and had to be re-implemented for each separate ARM processor. Verification, in particular, was a huge challenge. It is comparatively easy to get the basic model to work, but handling all the corner cases (pipeline interlock, delayed writeback, branch prediction, bus contention etc) is a real challenge.

In 2004, ARM acquired Axys Design in an attempt to leverage their models more as true virtual platforms. They renamed it SoC Designer. At the time I was working for VaST (so competing with Axys) and predicted that this would fail. And it was nothing to do with the technology, which was perfectly fine.

I had watched at VLSI Technology as customers balked at using our design tools because they didn’t want to get locked into VLSI-only solutions, and I felt ARM customers wouldn’t want to get locked into ARM-only solutions. If you were using, say, a CEVA (then still DSP Group, I think) Teak DSP, then how were you going to get that into the platform? And what sort of support would you get from ARM if there were problems?

In 2008 ARM gave up and sold SoC Designer to Carbon. This fixed the ARM-only issue, since Carbon is an independent company. Further, at the same time, ARM gave up trying to hand-build their own cycle-accurate models. Instead, Carbon used existing technology to build the models automatically from ARM’s RTL. ARM shifted their modeling focus to concentrate on their own JIT technology, which they introduced as ARM Fast Models. These TLM models don’t attempt to model cycle accuracy and therefore execute much faster.

Since then Carbon has moved from being an RTL acceleration company to a true virtual platform company, with a complete set of ARM models (and later MIPS, Imagination and others) and some powerful technology for both bringing in transactional level models such as ARM’s Fast Models, where they exist, or creating models automatically from RTL where they do not. In some cases where both models exist, such as with ARM’s Cortex A processors, Carbon has even introduced dynamic swap technology to enable a virtual platform to start running with TLM models (to do the operating system boot, for example) and then switch to accurate models at a point of interest.


RedHawk: On to the Future

by Paul McLellan on 05-01-2012 at 6:00 am

For many, maybe most, big designs, Apache’s RedHawk is the signoff tool for analyzing issues around power: electromigration, power supply droop, noise, transients and so on. But the latest designs have some issues: they are enormous (so you can’t just analyze them naively any more than you can run a Spice simulation on them) and increasingly there are 3D designs with a whole new set of electrical and thermal effects that result from stacking very thin die very close on top of each other.

Apache has announced the latest version of RedHawk called RedHawk 3DX (the previous versions were SD for static-dynamic, EV for enhanced version, NX for next version and now 3DX for 3D extensions — Apache didn’t spend a lot on branding consultants!). This attacks some of the big issues connected with 3D, in particular the thermal problem (how much heat is there and what happens when you don’t get it all out) and issues concerned with how you can analyze designs which are too large to analyze the old way.

There are 3 main changes:

  • in keeping with its name, RedHawk 3DX is indeed ready for 3D and 2.5D ICs
  • there is a gate-level engine in RedHawk now, enabling the RTL based analysis of PowerArtist to be pushed further
  • there are changes in capacity and performance to keep up with Moore’s law and take RedHawk down to 20nm designs

In 3D there are two big problems (apart from the sheer capacity issue of analyzing multiple die at once). The first is die-to-die power and die-to-die thermal coupling, and the other is modeling TSVs and interposers (which obviously you don’t have unless you are doing 3D).

With a multi-pane GUI you can now look at the different die and the interposer and see the thermal effects. TSVs are obviously an electrical connection between adjacent die, but since they are (often) made of copper they are a good thermal connection too. This can be good (the next die is a sort of heatsink) or bad (DRAM doesn’t like to get hot). You can look at voltage-drop issues to make sure you don’t have power supply integrity problems, and also at current density and thermal maps.



The second big change is that the RTL analysis approach of PowerArtist is pushed into RedHawk. PowerArtist does RTL analysis and prunes the simulation vectors down to perhaps a few hundred critical ones where power transitions occur or where power is very high (for example, the inrush current when a powered-down block is re-activated). These few vectors then get more detailed analysis at the level at which RedHawk operates. There is also a vectorless mode. You can see in the pictures below how well the RTL-level analysis matches the gate-level analysis, both in the numbers, which differ by only a few percent, and visually in the voltage-drop color maps.

Of course on a large chip you want the capability to analyze different blocks at different levels, some at gate, some at RTL, some vectorless, and you can do that.

The third aspect of the new generation of RedHawk is keeping up with design size. There is a new sub-20nm electromigration signoff engine which is current direction aware, metal topology aware and temperature aware (all things that affect EM).


Plus, a new extraction reuse view (ERV) enables a reduced model of part of the power delivery network. This gives up to a 50% (occasionally more) reduction in the simulation node-count without reducing accuracy, enabling full-chip simulation including the package impact, with blocks of interest analyzed in detail and other blocks reduced using the ERV approach.


Book Review – Quantum Physics: A Beginner’s Guide

by Daniel Payne on 04-30-2012 at 8:00 am

It’s been 34 years since I graduated from the University of Minnesota with a degree in Electrical Engineering, so I was curious about what has changed in quantum physics since then. Alastair Rae is the UK-based author of Quantum Physics: A Beginner’s Guide. I read it on my Kindle Touch e-reader and on an iPad, preferring the Kindle for actual reading and the iPad for better-looking images.

In nine chapters I refreshed my memory on:


  • Quantum physics is not rocket science
  • Waves and particles
  • Power from the quantum
  • Metals and insulators
  • Semiconductors and computer chips
  • Superconductivity
  • Spin doctoring
  • What does it all mean?
  • Conclusions

Since 1978 physicists have given names to the quarks and found most of them.

I found it interesting to see the surface of silicon through a scanning tunnelling microscope.

Fusion (the sun and stars) and fission were explained.

When a neutron enters a nucleus of the uranium isotope U235, the nucleus becomes unstable and undergoes fission into two fragments, along with some extra neutrons and other forms of radiation, including heat. The released neutrons can cause fission in other U235 nuclei, which can produce a chain reaction.

Chapters 4 and 5 pertain most directly to our semiconductor industry, with explanations of conductors, insulators and semiconductors. When talking about transistors the author writes about bipolar instead of MOS devices, so if he ever does another edition I’m hoping that he will add the dominant MOS device, because it is so central to all of consumer and industrial electronics today.

Quantum bit, or qubit, was a new term that I hadn’t really heard before: a quantum object that can exist in either one of two states or in a superposition made up from them.
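As a small illustration of the definition (my own sketch, not from the book), a superposition a|0> + b|1> yields measurement probabilities given by the squared magnitudes of the amplitudes:

```python
import math

# A qubit |psi> = a|0> + b|1>. After normalizing so |a|^2 + |b|^2 = 1,
# measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.
def measure_probs(a: complex, b: complex):
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    a, b = a / norm, b / norm
    return abs(a) ** 2, abs(b) ** 2

# Equal superposition: a 50/50 chance of reading 0 or 1.
p0, p1 = measure_probs(1, 1)
print(round(p0, 2), round(p1, 2))  # 0.5 0.5
```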

Chapter 8 delves into some philosophical discussions of quantum behavior, where the act of measuring an object erases its past state or spin. The popular sci-fi concept of multiple universes comes into play, based on the notion of objects being in a superposition of two states at once, then applying that to people or worlds.

My Conclusions

I found the book easy to read, and it was almost like being back at the university again, although this time with no homework to turn in. If you want a physics refresher, or have avoided physics entirely and are curious, then this book will help your learning.


Such a small piece of Silicon, so strategic PHY IP

by Eric Esteve on 04-30-2012 at 6:05 am

How could I talk about the various Interface protocols (PCIe, USB, MIPI, DDRn…) from an IP perspective and miss the PHY IP! Especially these days, when the PHY IP market has been seriously shaken, as we will see in this post, and will probably continue to be shaken… but we will have to wait and look at the M&A news over the next few weeks.

Before looking at these business-related moves, let’s do some quick evangelization about what exactly a PHY is. The acronym comes from “PHYsical Media Attachment” (PMA), which describes the part of the function dealing with the “medium” (PCB or optical). Today, the vast majority of protocols define high-speed differential serial signaling where the clock is “embedded” in the data, with the notable exception of the DDRn protocols, where the clock is sent in parallel with the (non-differential, parallel) data signals. The first reaction on seeing the layout view of an IC including a PHY function is that it’s damn small! A nice picture is always more efficient than a long talk, so I suggest you look at the figure below (please note that the chip itself is a mid-size IC, in the 30-40 sq. mm range).

If we zoom into the PHY area, we see three functional blocks: the PMA, usually mixed-signal; the Physical Coding Sublayer (PCS), purely digital; and the block interfacing with the controller, named PIPE (Physical Interface for PCI Express) in the PCI Express specification, SAPIS (SATA PHY Interface Specification) for SATA, and so on for the various specifications. Whatever the name, the functionality is always the same: interfacing the high-speed, mixed-signal and digital functions usually located in the pad ring (PMA and PCS) with the functional part of the specification (the controller), which is always digital and located in the IC core. The picture below shows a layout view of a x4 PCIe PHY. The Clock Management Unit (CMU) is a specific arrangement where the PLL and clock-generation blocks are shared by the four lanes.

Zooming again within the above picture, we come to the PLL and the SerDes, the two functions representing the heart of the PHY and requiring the most advanced know-how to design. When sending data from the chip, you receive digital information from the controller, say a byte (in parallel), then encode it (adding 2 bits of encoding overhead) and serialize it to send 10 serial bits; this is the case for 8B/10B encoding, valid for SATA and PCIe gen-1 or gen-2. For PCIe gen-3 the encoding becomes 128B/130B, while for 10 Gigabit Ethernet it’s 64B/66B. Precisely describing the various mechanisms used to build the different SerDes would take pages. What is important to note here is that both PLL and SerDes design require a high level of know-how, and that Moore’s law allows the design of faster and faster (but not smaller) SerDes: in 2005, designing a 2.5 Gbit/s SerDes in a 90nm technology was state of the art; now we are talking about 12.5 Gbit/s PHYs designed in 28nm, and the 25 Gbit/s PHY able to support 100 Gigabit Ethernet using only four lanes (instead of ten 10G lanes today) is not that far off!

Looking at the business side, it’s easy now to understand that only a few teams around the world are able to design PHY functions, and an even smaller number of companies sell advanced PHY IP. Not because the market is small (in fact the massive move from parallel to serial interface specifications has created a very healthy Interface IP segment worth more than $300M in 2011, see this blog), but because it requires highly specialized design engineers: the type of engineers who start to do a decent job after five or ten years of practice, and are good after fifteen or twenty years of experience!

Until very recently (the end of 2011), the PHY IP vendor landscape consisted of a large EDA and IP vendor, Synopsys, selling PHY + controller IP to the mainstream market but not so comfortable with the most advanced PHY products like PCIe gen-3 (8G) or 10 Gigabit Ethernet, competing with a handful of companies: Snowbush (the IP division of Gennum), MoSys (after the acquisition of Prism Circuit in 2009) and V Semiconductor, founded in 2008 by former Snowbush employees, as well as Analog Bits.

Snowbush, founded in 1998 by Prof. Ken Martin (a professor of microelectronics at the University of Toronto, the best location to hire young talent!), was considered by the market the most advanced PHY IP vendor when it was bought in 2007 by Gennum, a fabless company. Gennum realized they had acquired a nugget, and quickly grew sales based on the existing portfolio. They also realized they were missing the controller IP needed to offer an integrated solution, and bought Asic Architect in 2008 for a very low price. At the same time, they fired the founders, including Prof. Ken Martin, and tried to develop and sell an integrated solution, competing with Synopsys. Although Snowbush’s revenues quickly moved into the $10-12M range in 2008 and 2009, the company was still at the same level of revenue in 2011, in a market which grew 40% over the same period… As a matter of fact, while the PHY IP was still competitive, the controller IP never reached the same level of market acceptance.

Prism Circuit was founded in 2007 and competed in the same market as Snowbush (the most advanced PHYs developed in the latest technology nodes). The company was already doing well, with revenue in the $5-6M range, when it was bought (for $25M, more than Snowbush!) in 2009 by MoSys. The rationale was for MoSys to use Prism Circuit’s SerDes technology to build an innovative product, the Bandwidth Engine IC, moving to a fabless positioning while still selling PHY IP… As we will see in a minute, this strategy was not as fruitful as expected.

In fact, both companies were succeeding in a niche, lucrative segment (very high speed PHY) when a newcomer, V Semiconductor, was started in 2008 by former Snowbush employees. V Semi was able to capture a good market share, as their revenue (not publicly disclosed) could be estimated at around $10M in 2011. Among other design-ins, they designed the multi-standard SerDes on Intel’s 28nm technology for an FPGA start-up. This type of sale represents a multi-million dollar deal. Not bad for supposedly not-efficient-enough business development people!

Until 2011, we had a large EDA and IP vendor enjoying the larger market share, challenged by a few PHY IP vendors, so customers still had a choice, especially when needing an above-10G PHY solution designed in an advanced technology node.

Then, at the end of 2011, Gennum was acquired by Semtech, another fabless company. A month or so ago (March 2012), it became clear that Semtech had decided to keep the Snowbush PHY for internal use, which is a good strategy as such IP can be a strong differentiator. The result: PHY IP vendors, minus one. Then, still in March, came the announcement that Synopsys (see the 8-K form) had acquired an unlimited license to MoSys’ PHY IP for a mere $4.2M (the former Prism Circuit was bought for $25M in 2009), preventing MoSys from granting new licenses. The result is now: PHY IP vendors, minus two!

There are still one or two vendors (V Semi and Analog Bits), you’d say… But Analog Bits is oriented more toward design services than pure IP, supporting various mixed-signal products, from SerDes to SRAM/TCAM by way of PLLs. And, last but not least, rumors about an acquisition of V Semi are multiplying, day after day. As I don’t want to relay an unverified rumor, I will not disclose the company name I have heard, but such an acquisition would not favor more competition in the PHY IP market…

By Eric Esteve from IPnest



GSA 3DIC and Cadence

by Paul McLellan on 04-29-2012 at 10:00 pm

At the GSA 3D IC working group meeting, Cadence presented their perspective on 3D ICs. Their view will turn out to be important since the new chair of the 3D IC working group is going to be Ken Potts of Cadence. Once GSA decided the position could not be funded, an independent consultant like Herb Reiter had to bow out and the position needed to be taken by someone funded by the company they work for. So thanks, Cadence. And thanks for the beer and wine after the meeting too.

Ken would have been there, but unfortunately he has broken both his elbows (get well soon, Ken) and driving over Highway 17 from Santa Cruz is dangerous enough with both arms working. I know Ken from both my Compass and my Cadence days. He also owns a car repair shop in Santa Cruz with his family, but somehow the gravitational attraction of EDA pulled him back in.

John Murphy presented Cadence’s perspective. First, Murph pointed out that Moore’s law has been going on for a century. I’ve pointed this out before; indeed, we both used the same graphic, stolen from Ray Kurzweil, that covers five technologies (mechanical, relay, tube, transistor, IC). From the system-performance point of view, if not from the purist view of lithographic advance, this is likely to continue for the foreseeable future.

Cadence sees two “killer apps” for 3D-IC in the short term.

The first is yield enhancement. The same defect density has a hugely different impact on yield depending on die size, so there is a lot of upside to building a system out of several smaller die versus one huge one. Indeed, this is the motivation for Xilinx’s excursion into 3D (2.5D on a silicon interposer). eSilicon used their own cost models to estimate that Xilinx is getting a huge bump in yield. And as Liam Madden of Xilinx had pointed out earlier in the day, they don’t need to put all the slowest die on one interposer, so there is a performance gain too, since all 4 slices are never worst case, as might happen with a single large die (processed with slightly-too-long gate lengths). This shows it graphically (obviously exaggerated; nobody is making 4″ chips, too big for the reticle anyway).
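The yield argument can be sketched with the simple Poisson yield model Y = exp(-A * D0); the numbers below are illustrative, not eSilicon’s. The key point is that because slices are tested before assembly, good systems per wafer scale with the per-slice yield rather than requiring four specific die to all be good:

```python
import math

# Poisson yield model: Y = exp(-A * D0). A monolithic die must be
# defect-free over its whole area, while interposer slices are tested
# individually and only known-good die are assembled.
def poisson_yield(area_cm2: float, d0_per_cm2: float) -> float:
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.5                             # defects per cm^2 (illustrative)
monolithic = poisson_yield(4.0, D0)  # one 4 cm^2 die: ~13.5% yield
per_slice = poisson_yield(1.0, D0)   # one 1 cm^2 slice: ~60.7% yield
# Good systems per unit wafer area scale with per-slice yield, so
# splitting the design gives roughly a 4.5x improvement here.
print(round(per_slice / monolithic, 1))  # 4.5
```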


The second killer app is the memory subsystem, where by putting memory on logic (or stacks of memory on logic) power can be significantly reduced while performance is increased.


Cadence claims they have 8 test chips and 1 in production (Altera). In fact, they said there is a second production design but they cannot say with whom. One thing that is clear is that 3D requires a lot of cooperation between many different partners, not just in EDA. In fact, designing a 3D IC is not that different from designing several separate ICs. Yes, some new stuff for TSVs is required, and test is complicated. But it is nothing compared to what the OSATs have to do: putting a 3D chip stack together is nothing like just putting a chip on a BGA, with a whole new set of challenges to do with alignment, not breaking the very thin die, bonding and debonding them from something to make them handleable, making all the microball connections and so on.

Like everyone else, Cadence sees 3D coming as a series of steps: first 2.5D silicon-interposer designs, then memory on logic, with true 3D some way off.