EDA mergers: Accelicon acquired by Agilent
by Daniel Payne on 12-06-2011 at 4:51 pm

Agilent acquired EEsof back in 1999; now the EEsof group has acquired Accelicon, on December 1, 2011. The terms of the deal were not disclosed.

SPICE circuit simulators are only as accurate as their models and algorithms. On the model side, Accelicon provides EDA tools to create SPICE models based on silicon measurements:

  • Model Quality Assurance
  • Model Builder Program
  • DFM-aware PDK verification
  • SPICE Model Services


Capacitance and IV curves for MOS devices

Accelicon has partnered with many other EDA companies to fit into standard flows:

  • Synopsys HSPICE
  • Cadence Spectre
  • Mentor Eldo
  • Berkeley DA AFS
  • IPL Alliance
  • Device models: BSIM3v3, BSIM4, BSIMSOI, BJT
  • DRC/LVS tools: Assura, Calibre, Hercules

The Advanced Model Analysis flow:

SPICE model services include a test chip design, measurements from silicon, then running through Model Builder Program (MBP) and Model Quality Assurance (MQA):

Competitors
We have several EDA companies competing in this space:

  • Silvaco – UTMOST IV
  • Agilent – IC-CAP
  • ProPlus – BSIMPro, BSIMProPlus
  • Synopsys – Aurora
  • Accelicon – MQA, MBP

Summary
I don’t see any disruption in the EDA business with this acquisition because we have so many sources for SPICE models. Accelicon founder Dr. Xisheng Zhang started the company in 2002 and hopefully received a fitting reward for building up the business over the past 10 years.

See the Wiki page of all known EDA mergers and acquisitions.


Mark Milligan joins SpringSoft
by Paul McLellan on 12-06-2011 at 2:01 pm

Mark Milligan recently joined SpringSoft as VP Corporate Marketing. I sat down with him on Monday to get his perspective on things.

He started life, as so many of us did, as an engineer. He was an ASIC designer working on low-level microcode for the Navy Standard Airborne Computer at Control Data; it was actually the first ASIC they had done. It was the early days of RTL languages, and Mark worked on the simulation environment to verify the ASICs.

Teradyne had bought several companies and built the first simulation backplane, so Mark switched to marketing and did the technical marketing for that product line.

Then he moved to the West Coast and joined Sunrise Test Systems, which Viewlogic acquired (and which eventually ended up in Synopsys). There he was an early advocate of DFT and scan. Funnily enough, one pushback they used to get on running fault simulation was that designers didn’t want to know how bad coverage was: there wasn’t time, or good enough tools, to get coverage up, and it was typically too late to bite the bullet and switch to a full-scan methodology on the fly.

A spell at CoWare and then VirtualLogix gave him a new perspective on embedded software (as did my own tours of duty at VaST and Virtutech).

When Mark arrived at SpringSoft they had commissioned a customer satisfaction survey. He was pleased to discover that their customer satisfaction was 25% higher than any of Synopsys, Cadence, Mentor or Magma.

One challenge he feels SpringSoft faces is that products like Laker and Verdi have better name recognition than SpringSoft does itself.

From 2008 to the present, SpringSoft has continued to grow and has stayed profitable. In fact, he believes they are the most profitable public EDA company (as a percentage of revenue, of course; for sure Synopsys makes more total profit). They are around 400 people, about 3/4 of them in Taiwan.

This profile, profitable and growing and medium sized, allows them to focus on specific pain areas to bring innovation to customers. They are big enough to be able to develop solutions and deliver them but small enough that they don’t have to try and do everything.

One similarity Mark noticed with a previous life: in the past, people wanted to know how good their test solution was (fault simulation); now everyone wants to know how good their testbenches are, which is a harder problem.

We talked a bit about innovation. Historically most innovation has come from small startup companies, but these are no longer being funded in EDA. On the other hand, we have a few medium-sized EDA companies: SpringSoft, Atrenta, Apache (now part of Ansys but still being run as an EDA company). In the areas they cover, which for sure is not everything, there has been a lot of innovation, broadening out from single products to a portfolio that addresses a problem area.


Nvidia’s Chris Malachowsky on "Watt’s next"
by Paul McLellan on 12-06-2011 at 12:31 pm

The video and slides of the CEDA lunch from a month or two ago are now (finally) up here. Chris Malachowsky presented “Watt’s next.” Chris is one of the founders of nVidia and is currently its senior VP of research. He started by talking a bit about the nVidia product line but moved on to supercomputers and their power requirements. Of course nVidia builds graphics chips that go in PCs and phones, but the basic parallel compute engine in those chips can be harnessed for other tasks.

Given the title of the talk, you won’t be surprised to know he spent most of the presentation on the challenges of power. Unless you’ve been under a rock for the last decade you have to know that power is one of the biggest challenges in chip design today. Computer architecture is one area that can make a big contribution, along with all the techniques that have been developed at the SoC level; the microarchitecture can make an enormous difference. But moving data around is the really big problem: which uses more power, a 64-bit floating point multiply-add, or moving one of the operands 20mm across the die? Moving the data is already 5 times as costly, and by 10nm it will be 17x as costly (not to mention hundreds of times as costly to move it off chip).

Science needs a 1000x increase in computing power, but without requiring a power station to provide the power and remove the heat. He ended up talking about the Department of Energy program to build a 1000 petaflop computer (1 exaflop) consuming “only” 20MW. By comparison, we are currently at 2 petaflops consuming 6MW, so a 500x increase in speed for only about a tripling of power. The entire talk was recorded on video and is synchronized with the slides. Click on the thumbnail to get a graphic that is large enough that you can read the details.
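
To put those numbers in energy-per-operation terms (my own arithmetic from the figures above, not a slide from the talk):

\[ \frac{6\ \mathrm{MW}}{2\ \mathrm{PFLOPS}} = 3\ \mathrm{nJ/FLOP} \qquad \text{vs.} \qquad \frac{20\ \mathrm{MW}}{1\ \mathrm{EFLOPS}} = 20\ \mathrm{pJ/FLOP} \]

So hitting the exascale target actually requires roughly a 150x reduction in energy per operation, which is exactly why data movement, rather than arithmetic, dominates the power discussion.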


HSPICE – I Didn’t Know That About IC Circuit Simulation
by Daniel Payne on 12-05-2011 at 11:14 am

HSPICE is over 30 years old, which is a testament to how solid the circuit simulator is and how widely it is used. To stay competitive the HSPICE developers have to innovate, or the product will slowly lose ground to the many other simulator choices. I listened to a webinar last week to find out what was new with HSPICE.

Szekit Chan was the webinar presenter and his title is HSPICE Staff Corporate AE at Synopsys.

User-specified options can still be used in your netlists, but first look at your .st0 file to see what all of the settings are. You can probably just remove these settings, because the default values are now suitable for the vast majority of IC designs. I didn’t know that you could really safely remove the .options; they were usually set by some expert, and we were told, “Don’t ever touch these settings or you will get the wrong results.”

The .lis file shows how much time each analysis takes: operating point, transient, etc.

Use a single option instead of all those individual options: .option RUNLVL = 1 | 2 | 3 | 4 | 5 | 6

1 – Fastest
3 – Default
6 – Most accurate

This approach of simplifying the speed-versus-accuracy tradeoff to a single number reminds me of HSIM, the hierarchical Fast SPICE tool, which does the same thing. It certainly lowers the learning curve for a tool and doesn’t require an expert to experiment with arcane tool settings.

Remove the old convergence options and just let the built-in auto-convergence do its job instead.

If you have extracted netlists with thousands or millions of RC elements, then consider using the new RC reduction: .option SIM_LA=PACT. This also works on parasitic files like SPF and SPEF.

Avoid probing all signals with v(*); be more selective instead. If you use v(*) you tend to fill up the disk with too much data.
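
Pulling these tips together, here is a minimal sketch of what the top of a cleaned-up deck might look like (the .option names are the ones from the webinar; the include file and probed nodes are hypothetical, just for illustration):

* cleaned-up HSPICE deck -- illustrative sketch only
.option RUNLVL=3               $ one speed/accuracy knob: 1=fastest ... 6=most accurate
.option SIM_LA=PACT            $ RC reduction for large extracted netlists
* note: no legacy convergence or accuracy options -- the defaults now do that job
.include 'post_layout.spf'     $ hypothetical extracted netlist
.probe tran v(out) v(clk)      $ probe selected nodes instead of v(*)
.tran 10p 100n
.end

The striking thing is how short the .option section becomes compared to the old decks full of inherited expert settings.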

Want Results Faster?
Try Multi-core with HSPICE Precision Parallel (HPP) technology. One license uses two threads.

For Monte Carlo or corner runs, use a compute farm. Use the “-dp” switch for distributed processing; both SGE and LSF are supported.

The webinar wrapped up with a summary of other command-line options and netlist statements that can speed up long circuit simulation run times.

During the Q&A:

Q: Does RC reduction only apply to post-layout?
A: No, you can use it for both pre and post-layout.

Q: Any limitation on circuit size for HPP?
A: Not really, we’ve seen up to millions of elements.

Summary
The single biggest thing that I learned was that speed versus accuracy is now a simple integer in HSPICE from 1 to 6, so goodbye to the old way of tweaking .options for every different netlist topology. Using up to 8 cores to simulate a design also looks very efficient, returning speed improvements of up to 7.43x versus a single core.

Since the Magma acquisition announcement occurred on the Friday after the webinar on Wednesday, I’ve been thinking about the product overlap in the SPICE and Fast SPICE categories:

Product      Synopsys     Magma
SPICE        HSPICE       FineSim SPICE
Fast SPICE   CustomSim    FineSim Pro

HSPICE has a much larger installed base because of its age; however, according to some, FineSim SPICE offers better speed than HSPICE.

On the Fast SPICE side of things, CustomSim offers hierarchical simulation and co-simulation with the Verilog simulator VCS, while FineSim Pro is a flat simulator with a less efficient co-simulation using the Verilog API. If you have a hierarchical design or need Verilog co-simulation, then I’d use CustomSim.

It will be interesting to learn how Synopsys sorts out the new product roadmap in SPICE and Fast SPICE; hopefully by DAC they will have a story to tell us that makes sense. Possible scenarios are that:

  • HSPICE users can swap for FineSim SPICE
  • FineSim SPICE users can swap for HSPICE
  • FineSim Pro users can upgrade to CustomSim
  • HSPICE and FineSim SPICE are merged and given a new name
  • FineSim Pro and CustomSim are merged and given a new name

We’ve started a lively discussion on the Magma acquisition here on the forums.

There’s also a Wiki page listing all known SPICE and Fast SPICE tools.


Taiwan Trip Report: Semiconductors, EDA, and the ASIC Business!
by Daniel Nenni on 12-04-2011 at 7:00 pm

Just returning from my monthly trip to Taiwan and I find myself energized! Semiconductors, EDA, and the ASIC business have never been more exciting! The travel itself is not so exciting but since I make frequent trips the airline and hotel treat me like a king. And let me tell you, it is good to be a king!

Speaking of royalty, I saw Dr. Morris Chang at the Royal Hsinchu Hotel on Wednesday night. He made it up the two flights of lobby stairs faster than I did! It truly would be an honor to write his biography; let’s hope someone does it soon.

Speaking of biographies, the new book on Steve Jobs is a great read. He not only transformed the computer industry, the music industry, and mass media; most importantly, Steve Jobs transformed the semiconductor industry. Where would we, as semiconductor professionals, be without the iMac, iPod, iPhone, iPad, and Apple TV? If you have any doubts about the future of the semiconductor industry, read the book. It is one of the best books on innovation I have ever read. It also gives you an intimate look at who Steve Jobs really was. And yes, I read it on an iPad 2.

The big news in EDA last week was the $500M+ acquisition of Magma by Synopsys. There is a spirited discussion on the SemiWiki forum here. Please add your thoughts when you get a chance; it’s important. Communication is the key to success in any industry, even more so for EDA.

I asked pretty much everybody I met with in Taiwan last week what they thought about the acquisition, which was one of the hot topics of the trip. The most common theme of EDA discussions is why our industry is a mere crumb of the total semiconductor pie. During a lunch conversation last week I proposed that ALL of the EDA software licenses be deactivated for one month so the semiconductor industry better appreciates EDA. With the recent acquisitions and lack of capital investment in new EDA companies, that scenario is now much more plausible!

Spending the day with the Global Unichip (GUC) team was the highlight of my week. As you may have read, GUC announced itself as the “Flexible ASIC Leader,” taking direct aim at the traditional ASIC market led by the likes of IBM, ST Micro, TI, Renesas, and Samsung. GUC HQ is directly across the street from TSMC Fab 12, so I literally walked there.

After three very technical presentations on IP solutions, high-speed/low-power ARM design, and SiP and 3D IC services, a flash of marketing genius ran through my head. Above and beyond the technical elegance GUC offers the system houses around the world, GUC is selling insurance: insurance that your leading-edge SoC will arrive on time, within specifications, and at the expected cost (first-silicon success). GUC is really selling a NO RISK SoC SOLUTION.

Dinner with Jim Lai, President of GUC, and his team highlighted the point. It was an elegant Japanese dinner with French wine and the best service you could ask for. And the cost was much less than one would have expected. It is good to be a king in Taiwan, even for just a week!



Interoperability Forum
by Paul McLellan on 12-03-2011 at 3:19 pm

Earlier this week I went to the Synopsys Interoperability Forum. The big news of the day turned out to be Synopsys wanting to be more than interoperable with Magma, but that only got announced after we’d all gone away.

Philippe Magarshack of ST opened, reviewing his slides from a presentation at the same forum 10 years earlier. Back then, as was fashionable, he had his “design productivity gap” slide showing silicon capacity increasing at 58% CAGR while design productivity increased at only 21% CAGR. The things he was looking to in order to close the gap were: system-level design, hardware-software co-design, IP reuse, and improvements to the RTL-to-layout flow and to analog IP.

We’ve made a lot of progress in those areas but, of course, you could put up almost the same list today. ST has been a leader in driving SystemC and virtual platforms and now has over 1000 users. But the platforms still suffer from a lack of standardization for modeling interrupts, specifying address maps, and other things.

One specific example that he went over was a set-top-box (or DVR) chip called “Orly” that can do 4 simultaneous HD video decodes on a single chip. The software was up, with all their demos running, just 5 weeks after they received silicon.

Next up was John Goodenough of ARM, who also took the slides from a presentation by his boss (slides that he says he actually put together) and compared them to the situation today. “Everything has changed but nothing has changed” was the theme. Ten years ago they needed to simulate 3B clocks to validate a processor. Now it is two orders of magnitude bigger, doing deep soak validation on models, on FPGA prototypes and, eventually, on silicon. Back then they had 350 engineers; now 1400. They had 250 CPUs for validation; now they have tens of thousands of job slots for simulation, multiple tens of thousands of CPUs, multiple emulators, and FPGA prototype farms.

Jim Ho talked about standards living for a long time (as I did recently, but with different examples). He started from Roman roads and how the railway gauge came from that and so, in turn, the space shuttle boosters that had to travel by rail. That US railways are the same gauge as the UK’s, 4′ 8.5″, is not surprising, since the first railroads were built to run British locomotives. But the original gauge in Britain was based on the gauge used in the coal mines, which was 4′ 8″ (arrived at by starting from 5′ and using rails 2″ wide). As with the Romans, they were choosing a width that worked well behind a horse, although there is no evidence that the Roman gauge was copied; in fact in Pompeii the ruts are 4′ 9″ apart. And as for the space shuttle booster, it doesn’t depend on the track gauge but on the loading gauge (how big a wagon can be and still clear bridges and tunnels). The US loading gauge is very large and the UK one is very small (which is why US trains, and even French ones, cannot run on UK rails despite the rails being the same distance apart).

Mark Templeton, who used to be CEO of Artisan before it was acquired by ARM, talked about making money. In almost all markets there is a leader who makes a lot of profit, a #2 who makes some, and pretty much everyone else makes no money and struggles to even invest enough to keep up. So it’s really important to be #1. He talked about going to a conference where John Bourgoin of MIPS presented and went into the many neat technical details of the MIPS architecture. Robin Saxby of ARM, at the time about the same size as MIPS, presented and talked not about processors but about the environment of partners they had built up: silicon licensees, software partners, modeling and EDA partners, and so on. For Mark it was a revelation that winning occurs through interoperation with partners. Today MIPS has a market cap of $300M and ARM’s is $11.7B.

Michael Keating talked about power (“just in time clocking, just enough voltage”) and how, over the last few years, CPF and UPF (why do we need two standards?) have improved the flows so that features like multi-voltage regions, power-down, and DVFS are usable. But power remains the big issue if we are going to be able to use all the transistors that we can manufacture on a chip.

Shay Gal-On talked about multi-core, and especially programming multi-core. I remain a skeptic that high-core-count multi-core chips can be programmed for most tasks. They work well for internet servers (just put one request on each core) and for some types of algorithm (a lot of Photoshop stuff, for instance) but not for most things. Verilog simulation, placement, etc. all seem to fall off very fast in their ability to make use of cores. The semiconductor industry is delivering multi-core as the only way to control power, but making one big computer out of a lot of little ones has been a research project for 40 years. He had lots of evidence showing just how hard it is: algorithms that slow down as you add more cores, different algorithms that cap out at different core ceilings, and so on. But it’s happening anyway.
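
A classic way to quantify that fall-off (my addition, not one of Shay’s slides) is Amdahl’s law: if a fraction p of an algorithm parallelizes and the rest is serial, the speedup on n cores is

\[ S(n) = \frac{1}{(1-p) + p/n} \]

So even a 95%-parallel algorithm tops out at 20x no matter how many cores you throw at it, and gets only about 15x from 64 cores; and that is before counting the communication overhead that makes some algorithms actually slow down as cores are added.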

And don’t forget Coore’s Law: the number of cores on a chip is increasing exponentially with process generation, it’s just not obvious yet since we are on the flat part of the curve.

Shishpal Rawat talked about the evolution of standards organizations. There are lots of standards organizations. Some of them are merging. There will still be standards organizations in… I’m afraid it was a bit like attending a standards organization meeting.


Microsoft’s New Tablet Strategy: Here, There and Everywhere
by Ed McKernan on 12-03-2011 at 10:33 am

As mentioned in a previous post, Microsoft has started to come clean on its software strategy as it relates to Windows 8 for PCs and tablets. The strategy has been changing quite rapidly since their first admission in September. Essentially, the Windows 8 O/S will be forked based on whether the mobile device runs on an x86 or an ARM processor, with legacy apps only being supported on Intel processors.

The change in Microsoft’s strategy, I believe, is based as much on economics and profits as on the degree of difficulty of porting apps to a new processor architecture. Intel and Microsoft are in the process of stringing along the world’s longest divorce proceedings, because every now and then they wake up to see that they still need each other to remain heavily profitable. The common thread, or customer base, is the corporate world, which fears any breakup. In addition, the HP drama that unfolded this summer shows what happens when a big company cuts off its right arm.

Microsoft realizes that there can be two Windows 8 operating systems, and perhaps three. The consumer market, both PCs and tablets, is growing at an incredible rate as costs continue to drop. Emerging PC markets, as shown by every Intel presentation to analysts, are growing at a mid-teens rate because notebook prices continue to drop, making them more affordable for new purchasers. On the flip side, Apple has carved out a leading position in tablets and in greater-than-$1000 notebook PCs at retail. Microsoft will be under pressure to shave O/S prices in the consumer space to blunt Apple’s charge.

When Meg Whitman rejoined HP, the immediate strategic recognition was that they would not catch Apple in the consumer tablet market and that they needed to create a firewall in the corporate market where, outside of printers, most of the revenue and profits are derived. Microsoft and Intel have a similar business model, where most of their profits come from corporate. If Microsoft issued one O/S across the board, then they would be leaving a huge pile of corporate profits on the floor. In addition, Microsoft would not be able to follow the traditional model of upselling the security blanket. The right strategy called for Intel and Microsoft to join hands with HP and Dell to issue a corporate tablet that costs a few hundred dollars more than a consumer model but retains the x86 processor and full Windows compatibility.

These corporate tablets that arrive in 2H 2012 will in reality be very similar in hardware to the ultrabook platform being pushed by Intel. The key difference is that they will have a smaller, lower-power LCD and likely a slower, lower-power, and lower-cost Intel processor. The cost will end up being less than a similar ultrabook but closer in range to an iPad. However, unlike the consumer market that demands non-iPad tablets be in the $199–$299 range, these tablets will find a home in the corporate world at prices closer to $500.

If Microsoft is not successful with the new corporate tablet model, then they run the risk of losing O/S and Office business to Apple in corporate PCs. It is still too early to tell how fast Apple will be able to penetrate the corporate world over the next few years. However, to make sure they have all the bases covered, Microsoft, according to this article, appears to be working on an Office suite for the iPad. There is no mention of whether this is specifically for ARM processors or whether it will be independent of the underlying hardware. In addition, there is no mention of backward compatibility.

As the tectonic plates shift in the PC and tablet industry, one can see that, for all the changes, there is still a lot of money to be tapped in the corporate market. Existing suppliers like Dell and HP are trying desperately to hold onto hardware sales as they increase services and cloud-based computing. Apple has decisions to make as well. Corporate buyers are used to buying on 3-5 year cycles and will overbuy in terms of PC performance to get the best ROI over the life of the PC. Apple may decide that they will have to ramp the performance of their next-version ARM processor (A6) dramatically so as to stay within sight of Intel’s 22nm tablet processor that arrives with Windows 8 next year. 2012 should be very interesting, indeed.

FULL DISCLOSURE: I am Long AAPL and INTC


IP-SoC 2011: prepare the future, what’s coming next after IP based design?
by Eric Esteve on 12-03-2011 at 2:58 am

IP-SoC 2011 is the 20th anniversary of the first conference completely dedicated to IP. The IP market is a small world, as is EDA, if you look at the revenue generated… but both are essential building blocks for the semiconductor industry. It was not clear back in 1995 that IP would become essential: at that time, the IP concept was devalued by some products exhibiting poor quality, and by inefficient technical support, leading program managers to be very cautious about simply deciding to buy; making the function in-house was sometimes more efficient… In the meantime, the market has been cleaned up, with the poor-quality suppliers disappearing (going bankrupt or being sold for their assets), and the remaining IP vendors have understood the lesson. None of the remaining vendors marketing a protocol-based (digital) function would take the chance of launching a product that has not passed an extensive verification program, and the vendors of mixed-signal IP functions know that the “Day of Judgment” will come when the silicon prototypes are validated. This leaves very little room for low-quality products, even if you may still find some newcomers deliberately launching a poor-quality RTL function, naively thinking that lowering the development cost will allow them to sell at a low price and buy market share, or some respected analog IP vendor failing to deliver a function “at spec,” just because… analog is analog, and sometimes closer to black magic than to science!

If you don’t trust me, just look at products like application processors for wireless handsets or set-top boxes: these chips are made of 80% reused functions, whether internal or coming from an IP vendor. This means literally that several dozen functions, digital or mixed-signal, are IP. Should only one of these fail, a $50+ million SoC development will miss its market window. That said, will the IP concept, as it stands today in 2011, be enough to support the “More than Moore” trend? In other words, if IP in the 2000-10s is what the standard cell was in the 1980-90s, what will be the IP of the 2020s? You will find people addressing this question at the IP-SoC conference! Just have a look at the program, with presentations such as:
  • The past and the next 20 years? Scalable computing as a key evolution
  • IP’s 20 year evolution – adaptation or extinction
  • Interface IP Market Birth, Evolution and Consolidation, from 1995 to 2015. And further?

Obviously, you first have to look at the past to be able to forecast the future, but the latter is the most important reason to attend the conference: as we have moved from transistor-based design to standard-cell-based design, and then from standard cells to IP, we will have to invent the next move.

So, the interesting question is where the IP industry stands on the spectrum that starts at a single IP function and ends at a complete system. Nobody would allege that we have reached the upper end of the spectrum and claim that you can source a complete system from an IP vendor. The death of EDA360 is a clear illustration of this status. Maybe it is because the semiconductor industry is not ready to source a complete IP system (what would be the added value of the fabless companies if/when that occurs?), and most certainly because the IP vendors are far from able to do it (it would require a strong understanding of the specific application and market segment, the associated technical know-how, and, even more difficult to meet, adequate funding to support up-front development while accepting the risk of missing the target…). This is why an intermediate step may be to offer IP subsystems. According to D&R, who organize IP-SoC, the market is already there: “Over the year IPs have become Subsystems or Platforms and thus as a natural applicative extension IP-SoC will definitively include a strong Embedded Systems track addressing a continuous technical spectrum from IP to SoC to Embedded System.” So IP-SoC 2011 will no longer be IP-centric only, but IP-subsystem-centric!

It will be interesting to hear the different definitions of what exactly an IP subsystem is. If I offer a PCI Express controller with an AMBA AXI application interface, may I call it a subsystem? I don’t think so! But should I add another IP function (like, for example, Snowbush offering PCI Express plus SATA) to call it a subsystem? Or should I consider the application first, and pick – or design – the different functions needed to support this specific application? Then, how do I market the CPU, the memories, and probably other IP which belong to my competitors? The answer is far from trivial, and this will make the next IP-SoC conference worth attending! You probably should not expect to come back home with a 100% definite answer (if anybody knows the solution, he should start a company a.s.a.p.), but you will have the chance to share the experience of people who have explored different tracks, and learn from them.

If you plan to attend, just register here, and send me a note (eric.esteve@ip-nest.com); it will be a pleasure to meet you there!

By Eric Esteve from IPnest


It’s not just handsets
by Paul McLellan on 11-30-2011 at 7:58 pm

I usually write about the handset business (terminals, in wireless-speak) because it is a consumer business and drives, directly and indirectly, a large part of the semiconductor business. But there is another part to the business: base stations.

The largest supplier of wireless networking equipment is Ericsson. Ericsson used to be a big supplier of handsets too, I think #2 behind Nokia at one point. They were a major customer of VLSI for ASICs, at one point making up 40% of VLSI’s business. Then they decided that VLSI was charging them too much and was also using the profit from the Ericsson business to create its own business supplying GSM chipsets. So they decided to buy libraries from Compass and go the COT foundry route. They didn’t know enough about semiconductor design, screwed it up, missed a generation of handsets, and were never a force again. Eventually they created a JV with Sony, Sony-Ericsson, and a platform company, Ericsson Mobile Platforms (EMP), which sold reference designs and software. EMP was never really very successful and, after rounds of layoffs, was folded in with ST’s wireless business and NXP’s wireless business to create ST-Ericsson. Just a week or two ago Ericsson announced it was selling its half of Sony-Ericsson to Sony, and finally it was completely out of the handset business at both the device and IP levels. They now focus entirely on base stations (and other non-wireless stuff).

Two other big players on the wireless network side were Nokia and Siemens. But after the initial buildout of wireless networks they both struggled. They also created a JV, Nokia Siemens Networks (NSN), which in turn acquired the networking side of Motorola’s business (Google, of course, acquired the handset side). NSN has struggled too, recently announcing that it is exiting the WiMax business and some others, and having a large layoff of 17,000 people (23 percent of the company).

The two other big players are Huawei and ZTE, both based in China. Originally they focused on selling cheap hardware, but gradually they have built up a reputation for good products and have caused trouble for all the western network companies.

I actually had a front-row seat at a little bit of network buildout. Across the street from where I was living this fall, some scaffolding went up. But not very much, and it seemed a bit pointless: there wasn’t anything where it went, just a blank wall on an office building. Then three antennas appeared, and a couple of days were spent connecting them up. Obviously a new base station was going in. Then the antennas disappeared: that day they had been boxed in, and the box painted so you had to look twice to see anything. About a week later, I suddenly got a text message on my iPhone from AT&T telling me that a new base station had just gone live on Franklin Street. My service just got a whole lot better, especially for 3G data.


Synopsys acquires Magma
by Paul McLellan on 11-30-2011 at 4:41 pm

So Synopsys announced today that it has signed an agreement to acquire Magma. There will be a regulatory delay, etc., before it finally closes.

So why did they do it? Despite Magma being thought of as a place and route company, they have two other products that are perhaps more significant for Synopsys: FineSim and Tekton.

FineSim, Magma’s circuit simulator, has been eating Synopsys’s lunch. According to their financial filings, Synopsys has lost about $50-70M in the fast SPICE market, some of it to Berkeley Design Automation but a lot to FineSim. I’ve heard, though I’ve not seen any definitive data anywhere, that FineSim is actually a bigger business for Magma than place and route. It also has a lot of momentum, and the market is less fragmented, especially for digital and memory circuit simulation where FineSim is strong. It is less strong in the analog markets, since Magma doesn’t have an analog environment of its own.

Tekton is Magma’s static timing analyzer. Earlier this week Magma announced that 25 companies have adopted Tekton, the fastest rate of adoption for any product in Magma’s history (it has been out for about 18 months). It seems to be a real threat to PrimeTime’s dominance of the signoff timing space. My guess is that the Tekton technology will be slotted in under the hood of PrimeTime, and it will continue to be called PrimeTime.

In place and route it is hard to know what will happen. Synopsys is supposedly developing a new router internally, and Magma’s place and route may fit in with that.

The other major product area is analog design and custom layout. Synopsys and Magma (along with SpringSoft and others) are all competing against the Cadence Virtuoso franchise and the proprietary SKILL language that gives it a lot of lock-in (especially since Virtuoso has been tweaked to not accept non-SKILL PCells under some circumstances).

Funnily enough, I was at Synopsys all morning while this was going on, at the Interoperability Forum. Aart appeared on video. Now we know one reason he had some other stuff on his plate today!