

IP-SoC 2011: prepare the future, what’s coming next after IP based design?
by Eric Esteve on 12-03-2011 at 2:58 am

IP-SoC 2011 is the 20th anniversary of the first conference completely dedicated to IP. The IP market is a small world, and EDA a small market if you look at the generated revenue… but both are essential building blocks for the semiconductor industry. It was not clear back in 1995 that IP would become essential: at that time, the IP concept was devalued by some products exhibiting poor quality and inefficient technical support, leading program managers to be very cautious before simply deciding to buy. Making was sometimes more efficient… In the meantime, the market has been cleaned up, the poor-quality product suppliers disappearing (going bankrupt or being sold for assets), and the remaining IP vendors have understood the lesson. None of the surviving vendors marketing a protocol-based (digital) function would take the chance of launching a product that has not passed an extensive verification program, and the vendors of mixed-signal IP functions know that the “Day of Judgment” will come when the silicon prototypes are validated. This leaves very little room for low-quality products, even if you may still find some newcomers deliberately launching a poor-quality RTL function, naively thinking that lowering the development cost will allow them to sell at a low price and buy market share, or some respected analog IP vendor failing to deliver an “at spec” function, just because… analog is analog, and sometimes closer to black magic than to science!

If you don’t trust me, just look at products like application processors for wireless handsets or set-top-boxes: these chips are made of 80% reused functions, whether internal or coming from an IP vendor. This means literally that several dozen functions, digital or mixed-signal, are IP. Should only one of these fail, a $50+ million SoC development would miss the market window. That said, will the IP concept, as it is today in 2011, be enough to support the “More than Moore” trend? In other words, if IP in the 2000-10’s is like the Standard Cell was in the 1980-90’s, what will be the IP of the 2020’s? You will find people addressing this question at the IP-SoC Conference! Just have a look at the program, with some presentations:
  • The past and the next 20 years? Scalable computing as a key evolution
  • IP’s 20 year evolution – adaptation or extinction
  • Interface IP Market Birth, Evolution and Consolidation, from 1995 to 2015. And further?

Obviously, you first have to look at the past to be able to forecast the future, but the latter is the most important reason to attend the conference. Because, just as we have moved from transistor-based design to Standard Cell based design, then from Standard Cell to IP, we will have to invent the next move.

So, the interesting question will be to know where the IP industry stands on the spectrum starting from a single IP function and ending at a complete system. Nobody would allege that we have reached the upper end of the spectrum and claim that you can source a complete system from an IP vendor. The death of EDA360 is a clear illustration of this status. Maybe because the SC industry is not ready to source a complete IP system (what would be the added value of the fabless companies if/when that occurs?), and most certainly because the IP vendors are far from being able to do it (it would require a strong understanding of the specific application and market segment, the associated technical know-how for that application and, even more difficult to meet, adequate funding to support up-front development, accepting the risk of missing the target…). This is why an intermediate step may be to offer IP subsystems. According to D&R, who organize IP-SoC, the IP market is already here: “Over the year IPs have become Subsystems or Platforms and thus as a natural applicative extension IP-SoC will definitively include a strong Embedded Systems track addressing a continuous technical spectrum from IP to SoC to Embedded System.” So IP-SoC 2011 will no longer be IP-centric only, but IP subsystem centric!

It will be interesting to hear the different definitions of what exactly an IP subsystem is. If I offer a PCI Express controller with an AMBA AXI application interface, may I call it a subsystem? I don’t think so! But should I add another IP function (like, for example, Snowbush offering PCI Express plus SATA) to call it a subsystem? Or should I consider the application first, and pick – or design – the different functions needed to support this specific application? Then, how do I market the CPU, the memories and probably other IP which belong to my competitors? The answer is far from trivial, and this will make the next IP-SoC conference worth attending! You probably should not expect to come back home with a 100% definite answer (if anybody knows the solution, he should start a company a.s.a.p.) but you will have the chance to share the experience of people who have explored different tracks, and learn from them.

If you plan to attend, just register here, and send me a note (eric.esteve@ip-nest.com), it will be a pleasure to meet you there!

By Eric Esteve from IPnest



It’s not just handsets
by Paul McLellan on 11-30-2011 at 7:58 pm

I usually write about the handset business (terminals in wireless-speak) because it is a consumer business and drives, directly and indirectly, a large part of the semiconductor business. But there is another part to the business: base stations.

The largest supplier of wireless networking equipment is Ericsson. Ericsson used to be a big supplier of handsets too, I think #2 behind Nokia at one point. They were a major customer of VLSI for ASICs, at one point making up 40% of VLSI’s business. Then they decided that VLSI was charging them too much and was also using the profit from the Ericsson business to create its own business supplying GSM chipsets. So they decided to buy libraries from Compass and go the COT foundry route. They didn’t know enough about semiconductor design, screwed it up, missed a generation of handsets and were never a force again. Eventually they created a JV with Sony, Sony-Ericsson, and a platform company, Ericsson Mobile Platforms (EMP), which sold reference designs and software. EMP was never really very successful and after rounds of layoffs was folded in with ST’s wireless business and NXP’s wireless business to create ST-Ericsson. Just a week or two ago Ericsson announced it was selling its half of Sony-Ericsson to Sony and finally was completely out of the handset business at both the device and IP levels. They now focus entirely on base stations (and other non-wireless stuff).

Two other big players on the wireless network side were Nokia and Siemens. But after the initial buildout of wireless networks they both struggled. They also created a JV, Nokia-Siemens Networks (NSN), which in turn acquired the networking side of Motorola’s business (Google, of course, acquired the handset side). NSN has struggled too, recently announcing that it is exiting the WiMax business and some others, and laying off 17,000 people (23 percent of the company).

The two other big players are Huawei and ZTE, both based in China. Originally they focused on selling cheap hardware but gradually they have built up a reputation for good products and have caused trouble for all the western network companies.

I actually had a front row seat at a little bit of network buildout. Across the street from where I was living this fall some scaffolding went up. But not very much and it seemed a bit pointless, there wasn’t anything where it went, just a blank wall on an office building. Then three antennas appeared and a couple of days were spent connecting them up. Obviously a new base station going in. Then the antennas disappeared. That day they boxed them in, and painted the box so you had to look twice to see anything. About a week later, I suddenly got a text message on my iPhone from AT&T telling me that there was a new base station just gone live on Franklin Street. My service just got a whole lot better, especially for 3G data.



Synopsys acquires Magma
by Paul McLellan on 11-30-2011 at 4:41 pm

So Synopsys announced today that it has signed an agreement to acquire Magma. There will be a regulatory delay etc before it finally closes.

So why did they do it? Despite Magma being thought of as a place and route company, they have two other products that are perhaps more significant for Synopsys: FineSim and Tekton.

FineSim, Magma’s circuit simulator, has been eating Synopsys’s lunch. According to Synopsys’s financial filings they have lost about $50-70M in the fast SPICE market, some to Berkeley Design Automation but also a lot to FineSim. I’ve heard, but I’ve not seen any definitive data anywhere, that FineSim is actually a bigger business for Magma than place and route. It also has a lot of momentum and the market is less fragmented, especially for digital and memory circuit simulation where FineSim is strong. It is less strong in the analog markets since Magma doesn’t have an analog environment of its own.

Tekton is Magma’s static timing analyzer. Earlier this week Magma announced that 25 companies have adopted Tekton, the fastest rate of adoption for any product in Magma’s history (it has been out for about 18 months). It seems to be a real threat to PrimeTime’s dominance of the signoff timing space. My guess is that the Tekton technology will be slotted under the hood of PrimeTime and it will continue to be called PrimeTime.

In place and route it is hard to know what will happen. Synopsys are supposedly internally developing a new router and Magma’s place and route may fit in with that.

The other major product area is analog design and custom layout. Synopsys and Magma (along with Springsoft and others) are all competing against the Cadence Virtuoso franchise and the proprietary SKILL language that gives it a lot of lock in (especially since Virtuoso has been tweaked to not accept non-SKILL Pcells under some circumstances).

Funnily enough I was at Synopsys all morning when this was going on, at the interoperability conference. Aart appeared on video. Now we know one reason he had some other stuff on his plate today!



100 USB 3.0 IP Design-In…Is PLDA rocketing SuperSpeed USB technology?
by Eric Esteve on 11-29-2011 at 10:19 am

Did we (the analysts) completely underestimate the SuperSpeed USB take-off, or is the company tweaking the meaning of “USB 3.0 IP Design-In”? This PR from PLDA could be understood as a claim from the IP vendor that they have achieved the 100th design win for their USB 3.0 IP… Let’s try to understand how PLDA can make more design wins than the Total Available Market for SuperSpeed IP.

In fact, as an analyst providing a “USB 3.0 IP Market Forecast”, I feel very uncomfortable, as the cumulated forecast for 2009, 2010 and 2011 gives 12 + 20 + 59, or 91 ASIC design starts including a USB 3.0 IP (sold by an IP vendor). IPNEST thinks we will see Smartphones and Media Tablets supporting USB 3.0 on the market as soon as next year, that USB 3.0 enabled external HDDs and SSDs are shipping now, and that there will be a second wave of consumer electronics devices to transition, namely the Digital TV, Set-Top-Box and Blu-ray players to ship in 2012-2013. This means IP sales starting now and continuing in 2012 to allow for a minimum development time. In fact we have built a forecast for USB 3.0 IP sales based on a bottom-up analysis, looking at the different applications in every market segment which could transition to USB 3.0 and, even more important, we have tried to determine when the IP sales will happen, application by application. The result is a very complete 50-page document, where you can find this type of information, like the design start evaluation (generating USB 3.0 IP sales) up to 2015.
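To make the bottom-up approach concrete, here is a minimal sketch in Python of how such a forecast is assembled: estimate design starts per application and per year, then sum across applications to get the annual count of designs expected to license a USB 3.0 IP. The application names and all numbers below are illustrative placeholders, not IPNEST’s figures.

    # Bottom-up forecast sketch: per-application estimates of ASIC design starts
    # expected to license a USB 3.0 IP, summed into annual totals.
    # All values below are illustrative placeholders, NOT IPNEST data.
    design_starts = {
        "external HDD/SSD":        {2011: 10, 2012: 8,  2013: 5},
        "smartphone/media tablet": {2011: 6,  2012: 15, 2013: 20},
        "DTV/STB/Blu-ray":         {2011: 2,  2012: 9,  2013: 14},
    }

    years = sorted({year for per_app in design_starts.values() for year in per_app})
    for year in years:
        total = sum(per_app.get(year, 0) for per_app in design_starts.values())
        print(f"{year}: {total} design starts licensing a USB 3.0 IP")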

The first point is that PLDA is not the only vendor selling this IP: Arasan, Evatronix, Faraday, Inventure, NEC (now Renesas), Synopsys and Snowbush are all active in this market. Even if the forecast is wrong by 10% (yes, this can happen!), it’s not possible for a single company to enjoy 100% market share. Especially when you remember that Synopsys claimed a few weeks ago to have made 40 USB 3.0 IP design wins, which is rather realistic. The first outcome of this investigation: a PLDA “design win” does not mean the sale of a USB 3.0 IP linked with an ASIC design start. It looks like some design wins are potatoes, while others are counted like tomatoes. The next question is: which is the tomato, and which is the potato?

Going further into PLDA’s PR, you read that the company is marketing USB 3.0 IP in different forms:

  • USB 3.0 Host and Device controller IP for implementation in ASIC,
  • USB 3.0 Device controller IP for implementation in FPGA,
  • USB 3.0 Development boards and kits based on Altera and Xilinx FPGA

Potatoes could be USB 3.0 IP for implementation in ASIC, and tomatoes for implementation in FPGA? Unfortunately not! SuperSpeed USB is a protocol addressing market segments like PC peripherals (external HDD or SSD…), Consumer Electronics (video cameras…) or Wireless Handsets (smartphones), at least for the time being, until a wider pervasion of the protocol occurs in other segments. Considering the targeted production volumes (several million units), none of these applications could afford the high ASP of FPGA devices. We could imagine a scenario where the chip maker decides to validate the concept by targeting an FPGA implementation first; this strategy makes sense and is used by chipmakers. But, in this case, they tend to use exactly the same IP as for the ASIC implementation. This would not generate 50 “tomato” design wins, as it would mean that PLDA has made 50 “potato” (ASIC) design wins as well, which is very doubtful: how could the challenger in USB 3.0, who was not present in the USB IP market so far, do better than the historical leader? So, the answer is: neither potato nor tomato! The next question is: what is the vegetable hidden in these 100 design wins?

SuperUSBV6-550 USB 3.0 Development Kit from PLDA

SuperUSBC3-55 USB 3.0 Development Kit from PLDA

As I did not want to provide Semiwiki readers with corrupted information based on my own guess, I simply asked PLDA. The vast majority of these 100 design wins have been made by selling FPGA-based boards! Because PLDA’s SuperSpeed controller IP was implemented in the FPGA, PLDA could claim that they have sold not only the Xilinx or Altera based board, but also the USB 3.0 IP. Every board sale counts as a design win for the IP! Which is true, when you consider that the customer buying the board can “play” with PLDA’s IP and interconnect it with his own design. In that sense, PLDA is doing USB 3.0 evangelization. The drawback is that the revenue generated by one of these design wins is in the few $K range, when the revenue generated by a SuperSpeed controller source code IP license is in the few $100K range! So it’s likely that these 100 design wins generate far less money than the 40 design wins from Synopsys… The comparison was between potatoes and tomatoes, but the latter turned out to be a USB 3.0 enabled FPGA board, not a USB 3.0 IP license for FPGA implementation, the potato still being a SuperSpeed USB IP license for ASIC implementation.

Eric Esteve from IPNEST – Table of Content for “USB 3.0 IP Forecast 2011-2015” available here



Blitz, blazing fast layout
by Paul McLellan on 11-29-2011 at 8:00 am

One of the challenges with today’s SoCs is that chip-finishing, putting the final touches on the SoC while working at the chip level, stresses layout editors to the limit. Either they run out of capacity to load the entire chip, or they can handle the entire chip but everything is like wading through molasses: it takes an awfully long time to get anything done.

As a result there are a number of chip viewer tools that focus just on being able to load the SoC and display it very fast. The problem with these is that editing capabilities are either non-existent (it is strictly a viewer) or extremely limited.

Laker’s Blitz is a tool that offers the best of both worlds. It can handle extremely large designs and is between 5 and 20 times faster than regular layout editors. However, it has most of the editing features that regular layout editors have since it is built on top of Laker Custom Layout. It has the same user-interface, same basic editing, same in-memory schema, same integration with DRC/LVS, and the same Tcl extensions.

Blitz is optimized for chip-level operations on very large chips, basically reading GDS, editing, and then writing the GDS back out again. The four big tasks it has been optimized for are:

  • Chip-finishing: chip-level review, editing, assembly and debugging
  • IP merging: replacing IP black-boxes with physical layout
  • SoC assembly and review: assemble IP blocks, trace critical nets, verify and fix boundary DRC errors
  • DRC review and repair: chip-level signoff DRC

Of course not every single thing that you can do in Laker Custom Layout is supported, otherwise it would make no sense to have two tools. In particular, Laker Blitz is 64-bit only, it cannot create or modify Pcells, and so on. It is focused strictly on the typical tasks done at the chip level with today’s advanced technology node SoCs.
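To give a feel for the kind of chip-level GDS round-trip these four tasks imply, here is a minimal sketch using the open-source gdstk Python library. This is emphatically not Laker Blitz’s own API (Laker is scripted in Tcl); the file names and the assumption that an empty cell marks an IP black box are purely illustrative.

    # Illustrative only: NOT Laker Blitz's API. Uses the open-source gdstk
    # library to show the workflow the post describes: load a full-chip GDS,
    # review the top-level cells, flag empty "black box" cells that still need
    # their IP layout merged in, then stream the database back out.
    import gdstk

    lib = gdstk.read_gds("full_chip.gds")      # load the chip-level layout

    tops = lib.top_level()                     # cells not referenced by any other cell
    print("Top-level cells:", [c.name for c in tops])

    # A black-box placeholder typically contains no geometry and no references.
    black_boxes = [c for c in lib.cells
                   if not c.polygons and not c.paths and not c.references]
    print("Cells still awaiting IP merge:", [c.name for c in black_boxes])

    # ... chip-finishing edits (fill, seal ring, logo, boundary fixes) would go here ...

    lib.write_gds("full_chip_edited.gds")      # write the edited database back out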




Will Amazon’s Kindle Fire Force x86 Processors To Revisit the 1980s?
by Ed McKernan on 11-29-2011 at 12:07 am

What if Amazon’s new Kindle Fire, priced at $199 and using a sub-$10 TI processor, has effectively started the ball rolling towards forcing Intel and AMD to build a Very Low Cost (perhaps even <$10) x86 mobile processor? A recent article entitled “Amazon’s Risky Strategy” explores the ramifications of Amazon selling Kindle Fires at a loss in order to keep eyeballs stuck on Amazon’s web pages for longer and thereby increase sales. The article goes on to speculate that Amazon will sell a smartphone in the future to further its strategy. Amazon’s sub-$200 subsidized hardware model may be like a magnet pulling mobile PCs into its price range, which would have an impact on x86 CPU features, power and price.
Continue reading “Will Amazon’s Kindle Fire Force x86 Processors To Revisit the 1980s?”



A Review of an Analog Layout Tool called HiPer DevGen
by Daniel Payne on 11-28-2011 at 1:11 pm

My last IC design at Intel was a Graphics Chip and I developed a layout generator for Programmable Logic Arrays (PLA) that automated the task, so I’ve always been interested in how to make IC layout more push-button and less polygon pushing. Today I watched a video about HiPer DevGen from Tanner EDA and wanted to share what I learned. This technology came from IC Mask, a Dublin-based services and training company.
Continue reading “A Review of an Analog Layout Tool called HiPer DevGen”



GlobalFoundries Versus Samsung!
by Daniel Nenni on 11-27-2011 at 7:00 pm

Some call it co-opetition (collaborative competition), some call it keeping your enemies close. Others call it for what it is: unfair competition and/or other types of legally actionable behavior. GlobalFoundries calls it “Fab Syncing”, which in reality will SINK their FABS!

“With this new collaboration, we are making one of the industry’s strongest manufacturing partnerships even stronger, while giving customers another platform to drive innovation in mobile technology. Customers using this new offering will gain accelerated time to volume production and assurance of supply, and they will be able to leverage significant learning from the foundry industry’s first high-volume ramp of HKMG technology at 32nm in H1 2011,” said Jim Kupec, senior vice president of worldwide sales and marketing at Globalfoundries.

Unfortunately Jim Kupec no longer works for GlobalFoundries and Samsung may be one of the reasons why. In 2010 Globalfoundries and Samsung Electronics said they would synchronize global semiconductor fabrication facilities to produce chips based on a gate-first implementation of 28nm HKMG technology. They will do the same at 20nm, switching to gate-last HKMG. As a result, Globalfoundries and Samsung will be able to make 28nm and 20nm chips for the SAME customers?!?!?!? Putting aside the gritty technical details, what this means is that GFI will have to compete not only with superpower TSMC, but also with their PARTNER Samsung. Samsung is not only the second largest semiconductor company, Samsung is also one of the most fiercely competitive companies in the world. Is that really a good idea?

As it turns out it was a very bad idea for a number of reasons. First and foremost is yield. Samsung is the only “Fab Syncing” partner yielding at 28nm Gate-First HKMG (IBM and GFI are not). Remember Samsung is the largest memory maker so they know how to ramp yield quickly at any node. Are they sharing that manufacturing expertise with GFI and other Common Platform members? Not now, not ever. Samsung is aggressively targeting TSMC and GFI 28nm top customers including AMD, Nvidia, Qualcomm, Broadcom, Marvell, and Xilinx.

Cost and delivery are the key components of a wafer manufacturing contract and Samsung is an expert in both areas. Especially since margins for the Samsung foundry business are not broken out so they could literally dump wafers to get market share. TSMC on the other hand has the biggest wafer margins in the industry which they could cut in half and still make money.

The Samsung cut-throat culture is inside the company as well. Multiple Samsung groups compete for a given market. Samsung has phones and tablets based on Nvidia, Qualcomm, and TI processors as well as having their own ARM based processors. They compete in the same way with their largest customer Apple. Apple will purchase close to eight BILLION dollars in parts from Samsung for the iSeries of products this year alone, making Apple Samsung’s largest customer. Samsung is also Apple’s largest competitor and now they are engaged in a mega legal battle which will literally change the face of consumer electronics, believe it.

Even the marketing guys are mixing it up with Samsung firing the first shot:

I’m looking forward to Apple’s response and the Samsung response to that etc…

Let’s not forget the Samsung corruption scandal that engulfed the government of South Korea. Let’s not forget the chip dumping probes. The book “Think Samsung”, by an ex-Samsung legal counsel, accuses Samsung of being the most corrupt company in Asia.

This battle will be bloody entertaining to say the least! Not so much for GFI though, or the other second source foundries, as they see their already thin margins get thinner. For us consumers however it means two things: semiconductor manufacturing innovation and CHEAP CHIPS! w00t!



Did Apple Influence AMD’s TSMC Foundry Switch?
by Ed McKernan on 11-27-2011 at 7:00 pm

During the weekend, I read two articles that highlighted Apple’s LCD supply chain build-out and started to think about how this would look if Apple were to do the same on the x86 side of the ledger. The two articles, one related to Hitachi and Sony building a new 4” LCD for iPhones and a more extensive one on Sharp building a new LCD for the iPad 3 due in 2012, highlight the extent of Apple’s involvement in design and investment to guarantee supply at a much reduced cost, so that competitors are left gasping. Turning to the processor world, we know Apple has selected TSMC to fab their 28nm A6 processor. Why not pull AMD into the Apple-TSMC supply chain ecosystem in order to outmaneuver the raft of Intel-based Ultrabook PCs that are headed to the market in 2012?

Continue reading “Did Apple Influence AMD’s TSMC Foundry Switch?”



December 1st – Hands-on Workshop with Calibre: DRC, LVS, DFM, xRC, ERC (Fremont, California)
by Daniel Payne on 11-24-2011 at 9:57 am

I’ve blogged about the Calibre family of IC design tools before:

Smart Fill replaced Dummy Fill Approach in a DFM Flow
DRC Wiki
Graphical DRC vs Text-based DRC
Getting Real time Calibre DRC Results with Custom IC Editing
Transistor-level Electrical Rule Checking
Who Needs a 3D Field Solver for IC Design?
Prevention is Better than Cure: DRC/DFM Inside of P&R
Getting to the 32nm/28nm Common Platform node with Mentor IC Tools

If you want some hands-on time with the Calibre tools then consider attending the December 1st workshop in Fremont, California.