Tablets & smart phones driving electronics growth
by Bill Jewell on 03-19-2013 at 8:10 pm

Worldwide electronics bounced back strongly in 2010 after the recession of 2008-2009. Every region experienced solid growth, ranging from high single digits in the U.S. to over 20% in the key Asian countries. However, in the last two years electronics growth has slowed significantly. Several factors contributed to this weakness: a sluggish recovery in the U.S., the European debt crisis, the Japan earthquake and tsunami, and slowing growth in China. Recent signs point to an improvement in electronics. The chart below shows government data on three-month-average electronics production versus a year ago for China, the U.S. and Japan. Total industrial production is shown for Europe and South Korea since electronics production statistics are not available. The black line shows the three-month-average change versus a year ago for worldwide semiconductors, from WSTS.

China remains the key driver of electronics. December 2012 growth picked up to 12.7% after falling to 10% in August through October. South Korea’s industrial production growth was 3% in January 2013 after five months below 1%. U.S. electronics remains lethargic with January 2013 down 1.3%, the sixth consecutive month of year-to-year declines. Europe industrial production (for the 27 countries in the European Union) has shown year-to-year declines for 12 straight months. Japan electronics production recovered to positive year-to-year growth in April 2012, but has since fallen to a 16% decline in December 2012.

The recent moderate growth in overall electronics is reflected by three-month-average world semiconductor shipments from WSTS. December 2012 and January 2013 each showed 3.8% growth versus a year ago. Previously the WSTS data showed 16 months of year-to-year declines from July 2011 to October 2012.

What are the key drivers of this pickup in electronics? The PC has been a major factor in the electronics and semiconductor industries for 30 years, but in the last two years PC units have been stagnant, based on reports from International Data Corporation (IDC). Over the same period, growth of media tablets has been explosive. Since the current wave of tablets began with Apple’s iPad in 2Q 2010, shipments have grown to 52.5 million units in 4Q 2012, according to IDC (as shown in the chart below). Tablet units in 4Q 2012 were equal to 58% of PC units. Tablets are obviously displacing some PC sales as well as creating a new market. Adding together the unit shipments of PCs and tablets reveals the healthy growth of the combined markets. The blue line shows the change versus a year ago for PCs plus tablets. The combined growth rate was over 20% in the second half of 2011, moderating to low double digits in the first half of 2012. Growth dropped to 1% in 3Q 2012 due to slow Apple iPad shipments as consumers waited for new models, then bounced back to 13% in 4Q 2012.

Mobile phones are another major driver of electronics and semiconductors. Overall mobile phone unit growth was weak in 2012, up only 1% from 2011 according to IDC. All the growth has been driven by smart phones, which grew 41% in 2012 as basic phones declined 15%. As shown in the chart below, smart phones accounted for 45% of total mobile phone units in 4Q 2012. Smart phones should account for the majority of mobile phone units in 2013. The high semiconductor content of smart phones compared to basic phones will drive higher semiconductor growth.

Continued growth in the electronics and semiconductor markets is dependent on improvement in the world economy. As shown in last month’s newsletter, the International Monetary Fund (IMF) expects improving economic growth in 2013 and 2014. (See “Semiconductors Down 2.7% in ’12, May Grow 7.5% in ’13” at http://www.semiconductorintelligence.com). Although PCs and total mobile phones are experiencing slower growth, media tablets and smart phones will be key elements in the electronics and semiconductor market recoveries.


A Brief History of Chips and Technologies
by Paul McLellan on 03-19-2013 at 4:26 pm

I talked to Dado Banatao today. He is now managing partner at Tallwood Venture Capital, but back in the mid-1980s he was the founder of Chips and Technologies, the first fabless semiconductor company. The rumors that they had a hard time raising money because VCs couldn’t comprehend a fabless semiconductor company are true. Even his friends told him it “wasn’t a real semiconductor company.” In fact the first $1M was raised from a real-estate investor! Only once they were further along were they able to raise another $3M from various Japanese investors, including Mitsui.

Dado decided to use gate-arrays to get to market fast, since the PC market was developing quickly and the opportunity to build chipsets to serve it was there for the taking. They went with Toshiba, who they reckoned had the best gate-array technology at the time. But the design was too large for even the biggest gate-array, so they partitioned it into a CMOS gate-array for the logic and a separate bipolar chip, fabbed by Hitachi, for all the drivers. Hitachi had a completely empty fab due to the semiconductor downturn at the time; C&T filled it completely. Since Hitachi was desperate for something in that fab, C&T got unbelievably low prices.

The business took off fast. By the time they had their IPO, they still had $1M of their original $4M investment in the bank. The fact that Mitsui was an investor turned out to be fortuitous, since it meant that they could just order from Toshiba and Hitachi, without having to pay up-front with working capital that they didn’t have. Mitsui financed $50M in inventory.

There was no really competitive product for two years, until VLSI had its first chipset. The bipolar chip turned out to be an edge: at that time ESD protection on CMOS was in its infancy and still at least a potential problem, which meant C&T could create FUD about reliability against VLSI’s all-CMOS solution. Three years later C&T had an all-CMOS solution too, but by then ESD protection was up to 20kV and those issues had gone away.

C&T got into Dell very early and the two rode that rocket together. Compaq, meanwhile, was king but didn’t believe in using chipsets at that point. Then Taiwan, Korea and Japan were suddenly all making PCs, Compaq couldn’t compete, and it had to switch too. Interestingly, at that point in industry history, C&T was making more on each PC than Intel was. C&T was eventually acquired by Intel in 1997.

Dado went on to found S3 (graphics processors), again using gate-arrays initially to get to market fast once they decided what the market needed. They looked around for who had the biggest arrays at the time and found one at Seiko-Epson that they decided to use. In order to move data around fast enough they developed their own interconnect, which they called Advanced Chip Interconnect; when Intel basically adopted it, it became PCI and later PCIe.


RealTime Register Retiming
by Paul McLellan on 03-19-2013 at 7:00 am

I was at the EDAC CEO forecast meeting last week and one of the questions that was asked of EDAC members was “which is the hottest EDA startup?” The one with the most nominations was Oasys. So Oasys is hot.

But register retiming is hotter.

The latest announcement from Oasys this morning is that register retiming is now available in the RealTime synthesis engine that underlies all of Oasys’s products. This is driven especially by Oasys’s customers designing high-performance graphics processors (GPUs) since these have very complex pipelines that are next to impossible to balance by hand in the RTL. However, it is also applicable to many other domains especially in communications.


Register retiming involves moving logic from before a register to after it (or vice versa) in a way that preserves functionality but improves timing, power and/or area. For a datapath, this typically means balancing the amount of logic between all stages of a pipeline so that the entire pipeline can be clocked at the ideal frequency, rather than being limited by especially long paths in some stages. This may involve adding (or removing) registers, as in the example above, to hold intermediate values in the combinational logic.
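
To make the balancing arithmetic concrete, here is a minimal sketch (in Python, with made-up stage delays; this is just the timing argument, not Oasys's algorithm): the clock period of a pipeline is set by its slowest stage, so redistributing the same total logic across stages raises the achievable frequency.

```python
# Toy illustration of why retiming helps: the clock period of a pipeline
# is bounded by its slowest stage, so moving logic across register
# boundaries to equalize stage delays lets the whole pipeline run faster.
# Delays in ns are hypothetical example values.

def min_clock_period(stage_delays_ns):
    """The pipeline can be clocked no faster than its slowest stage."""
    return max(stage_delays_ns)

unbalanced = [9.0, 3.0, 3.0]   # one long stage limits the clock
retimed    = [5.0, 5.0, 5.0]   # same 15 ns of total logic, rebalanced

print(min_clock_period(unbalanced))   # 9.0 ns -> ~111 MHz
print(min_clock_period(retimed))      # 5.0 ns -> 200 MHz
```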


In fact retiming makes the design easier, since extra registers can be added almost trivially, typically at the end of a datapath or just after a large cone of combinational logic, leaving the synthesis engine to pull the registers into the logic cones (duplicating them if necessary) in order to balance the pipeline stages, as in the example above.

Note that although the behavior at the inputs and outputs is identical, the sequential behavior will not be: the number of registers and the register contents may differ. A trivial example is pushing an inverter from before a register to after it. The final output will be the same, but the contents of the register will be inverted relative to the original design.
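
That trivial inverter example can be checked in a few lines. The following sketch (my own illustration, not Oasys code) simulates both forms and confirms the outputs match cycle for cycle while the register state is complemented; note the reset value must be inverted along with the logic, which is exactly why sequential equivalence needs care here.

```python
# Circuit A: inverter *before* the register; Circuit B: inverter *after*.
# Same input/output behavior, complemented register contents.

def simulate(inputs, invert_before):
    # When the inverter moves across the register, the reset value must be
    # inverted too -- the sequential state differs even though I/O matches.
    q = 1 if invert_before else 0
    outputs, reg_trace = [], []
    for x in inputs:
        outputs.append(q if invert_before else 1 - q)  # output this cycle
        q = (1 - x) if invert_before else x            # clock-edge update
        reg_trace.append(q)
    return outputs, reg_trace

stim = [1, 0, 1, 1, 0]
out_a, regs_a = simulate(stim, invert_before=True)
out_b, regs_b = simulate(stim, invert_before=False)
assert out_a == out_b                                   # identical outputs
assert all(a == 1 - b for a, b in zip(regs_a, regs_b))  # inverted state
```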


A tour of today’s Mixed-Signal solution
by Pawan Fangaria on 03-18-2013 at 10:00 pm


Mixed-signal design is one of the earliest design methodologies, pioneered by Cadence through its lead in custom design, and it is now taking centre stage in the world of SoCs. Its growth is substantial, as it finds a place in most high-growth electronics: smart phones, automotive applications, networking and communications, bio-medical engineering, safety and security applications, precision instrumentation, and so on. With design sizes increasing, large analog content sitting alongside digital, technology nodes shrinking, power becoming critical, and timing as demanding as ever, the complexity of design and verification has increased tremendously.

This event is an opportunity to learn the latest mixed-signal methodologies and techniques from Cadence experts in this domain, and to see how Cadence provides a complete solution for mixed-signal design. It’s also a forum where one can network with other technologists, which can, at times, help in meeting the challenges of this complex task.

It’s a complete one-day session; here is the program:

Dates: 02 Apr 2013 – 09 Apr 2013

Locations:
  • Ottawa, Ontario – April 2, 2013
  • Baltimore, MD – April 4, 2013
  • Chelmsford, MA – April 9, 2013

Register »

Who should attend?

  • Circuit designers
  • AMS and SoC verification engineers
  • Analog/custom layout engineers
  • Digital P&R engineers
  • CAD engineers and managers
  • Design managers
  • Anyone involved with realizing mixed-signal designs in silicon

What is there to learn?

  • Techniques and tips to enhance your mixed-signal flow
  • Insight into the latest mixed-signal verification and implementation methodologies
  • Recommendations, based on silicon-proven successes, for effectively deploying new methodologies in your design environment today
  • Modeling analog behaviour with highly effective real number models
  • Applying assertion-based, metric-driven verification
  • Verifying low-power intent with dynamic and static methods
  • Floorplanning and integrating designs in a seamless, OA-interoperable flow
  • Analyzing timing and power for complex SoCs to prevent silicon re-spins

To give attendees real confidence, the session includes success stories from Cadence in the form of case studies. IBM will also be present as foundry partner, with its latest technologies and world-class process design kits that enable high productivity and faster turn-around time. It’s a day well spent! Register »

Complete details about the agenda can be found at
http://www.cadence.com/cadence/events/Pages/event.aspx?eventid=768

Any question about this event?

Send email to events@cadence.com


Mobile: A Death in the Family
by Paul McLellan on 03-18-2013 at 3:49 pm

So ST-Ericsson is to be shut down, with the loss of around 1600 jobs. Not to mention the billions of dollars that ST and Ericsson sunk into the joint venture in an attempt to create a competitor to Qualcomm.

The history actually goes back a bit further. Nokia originally had an internal semiconductor design group, and in 2007 it decided to get out of doing its own chip design and rely on Texas Instruments and STMicroelectronics instead. This included transferring most of its chip designers to ST.

Then NXP, the old Philips Semiconductors, had a mobile design group, many of whom were the old VLSI Technology engineers in Sophia Antipolis. They merged this group into ST but retained an ownership share, although later ST bought out this remaining share.

Next Ericsson had an internal group called Ericsson Mobile Platforms (EMP). Its business plan was to create IP (software, silicon IP etc) to license to people who wanted to get into mobile, especially in upcoming markets like China.

The final step in the creation of ST-Ericsson happened 4 years ago when Ericsson set up a joint-venture with ST and merged the ST design group along with the EMP group.

STE had two lead customers: Sony-Ericsson (based across the traffic circle from EMP in Lund, Sweden) and Nokia (based in Finland, of course). But both these companies initially screwed up the transition to smartphones, while Apple and Samsung took all the profits. STE suffered too as Nokia and Sony-Ericsson dramatically lost market share.

Last year Ericsson decided to get out of the Sony-Ericsson joint venture and sold their half to Sony. Sony is now doing OK in the smartphone market but I don’t think it is primarily built on STE chips.

Meanwhile, Nokia decided to put all their eggs in the Microsoft Windows Phone basket. But Windows Phone only runs on Qualcomm chips so STE lost that customer.

So despite apparently having reasonable technology, including for 4G LTE, STE didn’t really have any customers. Both Ericsson and ST announced that they would look for a strategic buyer but nobody was interested. Today they announced that they would shut the company down completely. Some personnel would be repatriated and brought back into the parent companies but not everyone. And some technology, such as the LTE modem design, will live on.


What killed them? Firstly, the transition from featurephones to smartphones blindsided them. Nokia was doing well in smartphones but then decided to switch to Microsoft, and Windows Phone is explicitly specified to run only on Qualcomm Snapdragon processors. STE was screwed as a side-effect: they lost their biggest customer. And Sony-Ericsson only made the transition to smartphones work after Ericsson bailed on that JV.

Apple and Samsung design their own silicon and/or use Qualcomm for the modems, so there is only a limited market for 3rd-party silicon, although with companies like Huawei and Lenovo starting to get traction it might be increasing. It is an interesting case study: a market that is booming (semiconductor is flat outside mobile) but where it is still possible to fail.


Create Beyond the NoC Solutions!
by Eric Esteve on 03-18-2013 at 9:09 am

The Network on Chip (NoC) concept is recent, about 10 years old, and the first commercially available NoC IP appeared in 2006. Should we drop the concept so quickly after it has been introduced? In fact, I don’t think so… But we could brainstorm and imagine new functions that could be implemented within or around the NoC, benefiting from the existing NoC architecture. Ever-increasing SoC complexity generates the need for new functionalities, like error correction and resiliency to increase data robustness, software observability, or cache coherency and distributed virtual memory in multi-CPU SoCs. The NoC provides a ready-to-use infrastructure (the physical links on which to move data) and a packet-based communication protocol (the logical way to transport data). So why not use the existing NoC physical and logical architecture and go “Beyond the NoC”: implement various service functionalities, opportunistically using the NoC to save real estate, power consumption and design resources?

Let’s have a look at additional SoC features and services which are not the SoC’s primary functions, but which need to be implemented to meet SoC requirements, in some cases specific to the market segments being addressed.

  • Error correction and resiliency are needed in markets like automotive, medical and industrial:

    • If you want to add parity bits, ECC checking and other capabilities to ensure no data corruption, you can add them to the NoC transport protocol (a minimal sketch follows this list)
    • Resiliency: current resilient SoCs duplicate much of the interconnect within the chip to be able to test for errors. Using the existing NoC infrastructure (logic and wires), you can implement identical error detection and correction functionality without having to duplicate the entire NoC, thereby providing the same functionality while saving real estate (die size) and minimizing power consumption.

  • SoC power management: in large SoCs designed in the latest technology nodes (28 nm and below), power consumption is almost taking precedence over pure performance. Today’s SoC designs addressing the various mobile electronics applications have to provide multiple high-performance features (video, imaging, voice, broadband data) within the smallest possible power budget, so that battery-operated systems can run for days instead of hours. Implementing power management techniques is still a hand-crafted design process, and you need a dedicated team to do it.
  • I found a very interesting article, presented at DAC 2009, titled “NoC Topology Synthesis for Supporting Shutdown of Voltage Islands in SoCs”. In this article the authors created a NoC to connect all the power islands, supporting power gating of the islands, which sounds like a good idea: they automated tasks previously done by hand. It could be an even better idea if the SoC architect could use the existing NoC as a foundation for power management.

  • Security: every company has its own proprietary on-chip security schemes, including secure boot, data encryption/decryption (PKI), etc. It is already possible to create special “placeholders” within the NoC where designers can insert their own proprietary security IP and logic. This allows the designer to retain 100% control of their security IP, and we know how sensitive such a function can be today. But this topic is also becoming a real concern for 3rd-party IP providers, who could also take advantage of such “placeholders” to have their security IP inserted as well.

  • Software observability for debug: TTM requirements are pushing for more efficient H/W and S/W co-development, and faster S/W integration and debug. Various S/W tracing mechanisms are already in use, for example ARM’s CoreSight on-chip trace and debug architecture, but the NoC can offer additional probing capabilities. By definition, a Network on Chip reaches every important block within the SoC, not only the CPU-related blocks.
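
As promised above, here is a minimal sketch of how parity protection could be layered onto NoC packets. The flit structure and function names are my own illustration, not Arteris’s actual transport protocol:

```python
# Illustrative parity protection on a NoC flit; a real NoC would use ECC
# (detect and correct) rather than bare parity (detect only).

def parity(bits):
    """Even parity over a list of 0/1 payload bits."""
    return sum(bits) % 2

def send_flit(payload_bits):
    # Transmitter side: attach a parity bit to the payload.
    return {"payload": list(payload_bits), "parity": parity(payload_bits)}

def receive_flit(flit):
    # Receiver side: recompute parity and flag any single-bit corruption.
    if parity(flit["payload"]) != flit["parity"]:
        raise ValueError("NoC transport error: parity mismatch")
    return flit["payload"]

flit = send_flit([1, 0, 1, 1, 0, 0, 1, 0])
flit["payload"][2] ^= 1              # inject a single-bit error in flight
try:
    receive_flit(flit)
except ValueError as e:
    print(e)                         # the corrupted flit is caught here
```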

We have proposed a few examples where the NoC can be a foundation for additional SoC features and services, and I am sure creative designers will come up with other ideas. Architects first implement a NoC to benefit from its communication capabilities and to optimize their SoC architecture, as we have discussed here on SemiWiki. Once this NoC infrastructure is on chip, it becomes possible to go “Beyond the NoC” and implement additional features or services. Because you are then reusing already “amortized” logic gates and wires, using the NoC as a foundation to introduce features you would have introduced anyway (think on-chip power management, for example) saves silicon, and not only that: you also save resources and creative energy, and ultimately launch a better SoC faster. Beyond the NoC is a concept: Arteris, a creative 3rd-party start-up, or even an SoC architect could develop new features or services mapped onto the existing NoC infrastructure and its proven communication scheme, to create denser and smarter SoCs.

By Eric Esteve from IPNEST


Schematic Migration Across Foundries and Processes
by Daniel Nenni on 03-17-2013 at 8:10 pm

A dedicated schematic migration tool can save weeks of effort and allow companies to explore new foundry opportunities. Unfortunately, moving analog and mixed-signal design data between foundries and processes is a complex business. While engineers would rather spend their days creating new circuits, many spend time translating existing designs by re-creating their data using components from new process design kits (PDKs).

Translating design data is usually a complex process and endless hours of expensive engineering time are wasted copying schematics, swapping new components for old and modifying parameters to satisfy a new PDK. With the increasing reliance on analog circuits in SoCs and an expanding market in analog IP, engineers need to migrate their schematics quickly to run simulations using models in the new PDK. A business can’t respond to new opportunities if it takes weeks before testing can begin in a new process or foundry.

Though it may first appear relatively simple, migrating schematics is much more involved than it looks. Tools must handle physical differences in component symbols and then deal with the hidden mysteries of parameter interactions and component description format (CDF). What starts out looking like a simple case of swapping a few components ends up becoming more and more complex and it’s little wonder that many designers end up copying their schematics manually.

At IN2FAB, we have spent many years building design migration technology as both a migration service company and an EDA tool vendor. Our OSIRIS schematic migration tools have been developed using hundreds of PDK variants for designs from IDMs and foundries, and this has given us a great deal of insight into the challenges of schematic migration. All of our technology is built on the Cadence platform and runs directly with foundry PDKs.

In an ideal world, symbols for components would all look exactly the same; at least for the common ones. If they were all the same size, drawn with the same origin and with pins in exactly the same place, an engineer would just have to change the name of the component library to be well on the way to getting a new circuit. Even if the component names were different, it would just be a case of writing a mapping script to swap one to the other.

Resistor migrated from source to target processes. Pin names are mapped between the PDKs and a new bulk pin tied to a net, all using the migration tools.

Unfortunately, this is rarely the case, especially when moving from one foundry to another. Symbols change size, position and orientation, new pins appear and old ones are removed, so swapping one symbol for another just isn’t good enough. Migration tools must take account of physical differences and include an automatic re-wiring capability to reconnect pins that move around, or the engineer is left with a circuit full of badly placed components and a string of unconnected wires.

New pins such as bulk connections sometimes appear and they need to be connected while redundant wires that connected old pins that disappeared should be removed. Bulk connections can also be set as a property so this must also be addressed as part of the migration process.

Re-located or new pins can also short with existing wires, so a short locator must find and fix places where new pins bump into old wires. Even when circuits go between radically different PDKs, they should have clean connectivity after migration without extended manual clean-up.

Even when the physical connections are all made, the hidden world of parameters and CDF can present a new round of problems. Entering values for new components using schematic tools is straightforward, but mapping properties from one PDK to another is not as easy as it looks. At the simplest level, we need to know the names of the properties on the source and target symbols, and this is usually simple enough to find: e.g. old name = “w”, new name = “width”. With a little trial and error, a mapping system will come together, but we also need to get the parameter type to match: e.g. the original is a string (w = “1u”) while the new one is a floating-point number (w = 1e-06), so this must be adapted by the tools. The callbacks that set other parameters must also be triggered to make sure that everything is set correctly for netlisting and simulation.
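
To illustrate the name and type mapping just described, here is a hypothetical sketch. The property names and the unit-suffix handling are illustrative only; real CDF mapping lives inside the Cadence environment and is considerably richer:

```python
# Hypothetical PDK-to-PDK parameter mapping: rename the property and
# coerce a CDF string like "1u" into a plain float like 1e-06.

SUFFIX = {"f": 1e-15, "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3}

def to_float(value):
    """Convert '1u' -> 1e-06, '250n' -> 2.5e-07, or pass numbers through."""
    if isinstance(value, str) and value[-1] in SUFFIX:
        return float(value[:-1]) * SUFFIX[value[-1]]
    return float(value)

# old property name -> (new property name, type converter)
PARAM_MAP = {"w": ("width", to_float), "l": ("length", to_float)}

def migrate_params(source_params):
    target = {}
    for name, value in source_params.items():
        new_name, convert = PARAM_MAP[name]
        target[new_name] = convert(value)
    return target

print(migrate_params({"w": "1u", "l": "250n"}))
# {'width': 1e-06, 'length': 2.5e-07}
```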

Migration tools should handle all of the complexities of CDF and resolve differences between source and target PDKs automatically

These problems become more pronounced when calculating passive values. Migrating the width and length of a resistor is usually pointless, as the resistance coefficient is going to change. It is far better to map the resistor value and width and let the new PDK work out what the length should be, but that involves parameter manipulation and callback triggers, which are another level of complexity again. A migration tool must analyse the PDKs and present clear information to the user through a GUI, then deal with all of the triggers and callbacks automatically during the migration process.
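
The underlying arithmetic is simple; it is the callback plumbing around it that is hard. As a sketch (with made-up sheet resistances): since R = Rs·(L/W), the target PDK can recompute the length from the preserved resistance and width:

```python
# Why mapping resistance beats mapping geometry: the target PDK derives
# length from its own sheet resistance. The 50 and 120 ohm/sq values are
# made-up example numbers, not any real foundry's figures.

def resistor_length(r_ohms, width_m, sheet_ohms_per_sq):
    # R = Rs * (L / W)  =>  L = (R / Rs) * W
    return (r_ohms / sheet_ohms_per_sq) * width_m

r, w = 1000.0, 2e-6                       # design intent: 1 kohm, 2 um wide
print(resistor_length(r, w, 50.0))        # source process: 4e-05 m (40 um)
print(resistor_length(r, w, 120.0))       # target process: ~1.67e-05 m
```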

Lastly, we need to know whether our new schematic matches the old one in order to use it for simulation or other work. It’s possible to write netlists for the old and new circuits and run those into an LVS tool, but then we must allow for the differences in component and parameter value names. That can probably be fixed with some sort of Perl script, but that just makes the job more complex again. A far better way is a dedicated schematic comparator that understands all of the mapping and can identify differences between source and target in seconds; one is built in to the schematic migration tools.

Larger companies may have a CAD department that will put some sort of customized schematic porting capability in place when they have a big foundry move, doing part of the work and leaving the designers to clean up the rest. If a big corporation has enough spare time and engineers to dedicate to the problem then it may spare the designers some of the pain, but what does a designer do when they want to try out a foundry or two? Or a boutique IP company that’s offering a circuit in a new process they’ve never used before?

The increasing range of foundry and process options, along with the dynamic nature of the analog IP market, demands that engineers be able to migrate and test their designs extremely quickly if they are to take advantage of a market window. A fast and intuitive schematic migration capability gives engineers the flexibility to move circuits and simulate in a new process in a fraction of the time of other methods.

Tim Reagan
President & CTO at IN2FAB Technology



Cadence IP Report Card 2013
by Daniel Nenni on 03-17-2013 at 7:00 pm

The challenge of developing IP blocks, integrating them correctly, and hitting the power, performance, area, and time-to-market requirements of a mobile SoC is a growing problem. At 20nm and 14nm the probability of a chip re-spin due to an error is approaching 50%, and we all know how disastrous a re-spin can be; those are not good odds even in Las Vegas.

Cadence talked a bit about IP during the CDNLive keynotes last week and even more so during a press lunch. Paul McLellan and I also spent time with Cadence IP Commander in Chief Martin Lund. Given the recent IP acquisitions it is clear that Cadence is serious about scaling their business so I have to give them an A+ on IP strategy thus far.

My first meeting with Martin is referenced in Cadence IP Strategy 2012, I liked him then and after two most excellent acquisitions I love the man. Great for IP, great for EDA, Cadence is now the #3 IP company behind ARM and Synopsys.

Unfortunately, assembling a robust IP offering is the easy part, especially when you have a CEO (Lip-Bu Tan) who can raise money in his sleep. Selling commercial IP into a consolidating industry, however, is much more of a challenge than you might think. By my best guesstimate, 80% of today’s silicon is shipped by the top 20 semiconductor companies, and that is being generous. It could be fewer than 20 companies, and of the top 20 companies listed below only UMC and GLOBALFOUNDRIES do NOT have sizable internal IP groups.


Clearly Martin Lund knows this since he worked at Broadcom for 12+ years and Broadcom has a VERY large IP group. So what is the Cadence IP strategy moving forward? In my opinion it is two-fold: IP Subsystems, which explains the Tensilica acquisition, and FinFETs, which is what the Cosmic Circuits acquisition is all about.

Dr. Paul McLellan covers Tensilica HERE and Dr. Eric Esteve covers CEVA HERE for SemiWiki. Click on over to the landing pages and you will read all about IP subsystems, because that is what they do. That is how they differentiate themselves from the mighty ARM.

Cosmic Circuits does foundation IP which is the connection between the interface IP and the semiconductor process technologies. FinFETs are changing the foundation IP world as we speak. For layout people, the F word now stands for FinFETs because FunFETs they are not. There is an interesting thread in the SemiWiki forum HERE which talks about the FinFET layout challenges ahead. Bottom line: Not everybody will be successful with FinFETs so adopting commercial foundation IP is much more viable if you want to hit the power, performance, area, and time to market requirements of mobile SoCs.

Given that the Cadence Virtuoso dynasty has a good 90% AMS market share (my opinion) and probably a 99.9% FinFET layout market share thus far, I give Cadence a real shot at moving some commercial IP into the 20% of the companies that are shipping 80% of the silicon. They certainly have access to the top 20 IP groups through Virtuoso and IP subsystems fit right on top of that. Sound reasonable?

For the best detailed coverage of the CDNLive keynotes see Richard Goering’s posts:

Lip-Bu Tan at CDNLive 2013: Opportunities and Challenges for Electronics

Samsung CDNLive Keynote: Innovation and Challenges in the Post-PC Era

Martin Lund CDNLive Keynote: Why SoCs Need “Application Optimized” IP


Plotting to take over the time-domain only world
by Don Dingee on 03-16-2013 at 10:00 am

The state machine nature of many digital designs has made time-domain debugging the favorite tool for most designers. We provide a set of inputs, data gets clocked in, and a set of outputs appears. We look for specific patterns in parallel paths, or sequences on serial lines.



EDAC CEOs: consolidation, clouds, and whether Intel will buy Synopsys
by Paul McLellan on 03-15-2013 at 5:12 pm

Yesterday evening was the annual EDAC CEO forecast meeting. Actually, it is not really a forecast meeting any more, more a sort of CEO response to survey questions asked of EDAC members. Rich Valera of Needham moderated, with Lip-Bu, Aart and Wally on the panel, along with Simon Segars representing the IP arm(!) of the business and Raul Camposano representing startup companies.

I’m not going to try and cover everything, just pick and choose things that I found interesting.


The first question asked was whether consolidation in EDA has helped innovation. 74% of those surveyed said “no,” but the CEOs all pushed back, noting for a start that lots of innovation takes place in the bigger companies. Nobody really made the point I would have made, which is that people start little EDA companies in order to be acquired, and if acquisitions don’t happen, people won’t create startups. To some extent we see that already: as acquisition prices have come down, the willingness to invest in startups has also reduced.

The next question was whether acquisitions have “helped” pricing, in the sense of whether prices for tools have increased. Wally pointed out that Moore’s law is really a learning curve, with cost per transistor coming down, as you would expect, as the cumulative number of transistors shipped grows. EDA’s curve is exactly the same, and EDA has held steady at 2% of semiconductor revenue for 15 years.

To a question about whether EDA would consolidate down to two companies, Wally pointed out that EDA has always been a triumvirate (first Calma, Applicon and Computervision; then the DMV of Daisy, Mentor and Valid; now Synopsys, Mentor and Cadence). That structure seems to be very stable, so further consolidation seems unlikely (although, of course, Cadence did make an attempt to acquire Mentor a few years ago).

Next was whether Moore’s law breaks down at the sub-20nm level. The survey was split 50-50. Aart said that until now Moore’s law has driven opportunity, but now opportunity is driving Moore’s law, so even if prices go up it will continue. I’m not so sure myself: if the cost per transistor increases, then that quad-core cell-phone will cost more than the old dual-core one. Sometimes that will be viable, but not always.

How about verticalization, with companies like Apple and Samsung taking design in-house? Of course they buy tools to do this, but they leave other companies (such as TI) in their wake. The survey seemed to think it was negative, but the CEOs were positive. Wally pointed out that concentration in the cell-phone industry was greater in 2007, when Samsung and Nokia had a greater combined market share than Samsung and Apple do today.

EDAC members were asked which was the hottest EDA startup. Most often mentioned was Oasys, but also OneSpin, BDA, ICScape, Jasper, Calypto, AtopTech, Forte, Breker and DeFacTo. Funny how many of the “hot startups” have been around for a dozen years or more.

Asked about the funding environment for startups, the EDAC membership came up with “bleak,” “dead,” “poor,” “you have to turn to Qualcomm, Xilinx, Apple and Intel.”

Raul pointed out that semiconductor in general and EDA too suffer from an image problem. If you ask what the next big thing is going to be, few people say “semiconductor.”

Peggy asked about a comment from the CEO of a small company who said it would not be long before Intel acquired Synopsys. Aart pointed out that the economics wouldn’t work, but added that if Intel or anyone else wanted Synopsys at the revenue multiple that Cadence just paid for Tensilica, then he’s available.

To a question on cloud computing, Aart said that Synopsys had made $0 on it. Not just a low number, but actually zero. Luckily, all the infrastructure they needed is what is required for deploying on internal clouds, so it wasn’t a completely bad investment. Raul agreed. Nimbic expected big companies to be attracted to the cloud since, even in companies with tens of thousands of servers, getting 100 at the same time is problematic. But he also made $0 on it, although smaller companies designing little circuit boards (and who don’t have big internal server farms) represent a long tail that is more accepting.