
Cosmic Circuits acquisition by Cadence: IP battle with Synopsys has officially started!
by Eric Esteve on 02-09-2013 at 6:18 am

If you missed the announcement that Cosmic Circuits has been acquired by Cadence, dated February 7, you may want to read this PR. In any case, by reading this article you will understand why this acquisition marks the start of the “IP battle” between Cadence and Synopsys. Let me remind you of what I wrote on SemiWiki last September (at the time I described the IP battle as a chess game in my article):

“Coming back to the chess game, my personal conviction is that the “Queen” will be the PHY IP, as the company able to provide an integrated IP solution, PHY and Controller, should be able to run the game. You may prefer to put it this way: the company unable to provide a PHY (supporting the latest standard release like PCIe Gen-3 or MIPI M-PHY) on the most advanced technology node will most certainly lose the game in the long term…”

In fact, the IP war between Cadence and Synopsys really started in 2010, when Cadence acquired Denali (DDRn memory controller IP, PCIe controller IP and a large VIP portfolio) and Synopsys acquired Virage Logic, both companies selling for more than $300 million, or about 6X to 7X their 2009 revenue. We don’t know how much Cadence spent to buy Cosmic Circuits, but we doubt it was cheap, as the company has had a very good run:

  • Founded in 2005 and based in Bangalore, India, Cosmic Circuits has been profitable from its first year of operation and has more than 75 customers worldwide. The company received TSMC’s 2010 and 2012 awards for Analog/Mixed Signal IP Partner of the Year.
  • Provides IP for mobile devices, including USB, MIPI, audio and WiFi, at advanced process nodes including 40nm and 28nm
  • Its top-tier customer base shipped more than 50 million ICs containing Cosmic Circuits IP in 2012

Cosmic Circuits started, like ChipIdea, by selling mixed-signal IP (ADC, DAC, PLL, WiFi…), and recently added interface PHY IP to the portfolio. They put a strong focus on MIPI D-PHY and M-PHY, the latter being supported on TSMC 90nm, 65nm, 40nm, 28nm and finally 20nm! They also support a complete SuperSpeed USB PHY, including both USB 2.0 and USB 3.0 PHY IP.

If we look at the interface IP business during the last five years, USB, PCIe and DDR were the most important segments, followed by HDMI and SATA. But the forecast for 2012-2016 (see above) clearly shows that, while USB and PCIe will still be part of the Top 5, DDRn and MIPI IP will see the highest growth (DDRn IP also being the largest segment). This simply means that Cadence is well positioned to fight Synopsys in the near future: the company was already strong in DDRn IP and present in PCIe IP, and thanks to this acquisition is now strong in MIPI PHY IP and present in USB PHY IP.

So, I would agree with Martin Lund, Cadence’s senior vice president of R&D for SoC Realization, when he says “The addition of Cosmic Circuits’ stellar technology and talent enhances Cadence’s position as a leading provider for analog/mixed-signal IP. The combination of Cadence and Cosmic Circuits will provide customers with high quality IP to accelerate getting products to market.”

And please don’t forget: PHY IP support will be an important part of the winning strategy in this IP battle between Synopsys and Cadence!

Eric Esteve from IPNEST


Please Help Me Understand IBM – Common Platform Technology Forum 2013
by Zvi on 02-08-2013 at 11:00 pm

“Innovations for Next Generation Scaling”

The 2013 Forum (held Feb 5, 2013) started with a presentation by Dr. Gary Patton, VP, IBM Semiconductor Research & Development Center. Gary very clearly articulated the two irresolvable challenges the industry now faces:

  • On chip interconnect
  • Lithography

These two challenges connect very well with our recent blog IEDM 2012 – The Pivotal Point for Monolithic 3D IC. Gary showed both the exponential increase in RC delay that results from scaling copper interconnect below 100nm width and the high cost associated with double and quadruple patterning. In addition, he showed how extreme scaling of the copper metallization creates reliability challenges such as fatal EM modes, while scaling of the insulator’s dielectric constant (k) brings TDDB and mechanical-strength issues. As a reminder, in the recent IEDM (Dec. 2012) short course, IBM presented the following slide indicating that interconnect now dominates device power!

L. Chang, D.J. Frank – IEDM 2012 Short Course – IBM Watson Research Center

Gary also presented a multi-decade, past-to-future slide resembling the one shown below. The decade ending in 2000 was the good old days of easy planar-transistor scaling, bounded by what he called the gate oxide limit. The industry then followed with a decade of “Material Innovation,” bounded by what he called the planar device limit, and 2010 marked the beginning of the “3D Era” – 3D transistors and stacked devices.

Finally he shared with us his vision of 3D devices with three planes of devices:

  • Logic Plane
  • Memory Plane
  • Photonic Plane

A vision we mutually share.

Now, here is my failure to understand. As a company that has been at the forefront of 3D and TSV research, IBM is well aware of the severe limitations of TSVs as a vertical interconnect. The following cross-sectional picture by IBM, presented at the recent GSA Summit, clearly illustrates how large a TSV is in comparison to an interconnect via.

IBM Systems and Technology Group – GSA Silicon Summit 2012 (S. S. Iyer) – © 2012 IBM Corporation

With TSVs of 5 micron diameter (and 15 micron pitch, due to keep-out zones from stress issues) vs. vias of less than 50 nm, the ratio in vertical connectivity is about 1:10,000, as illustrated in the following chart by Perrine Batude of CEA-Leti.
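As a rough sketch of where that 1:10,000 figure comes from: vertical connections per unit area scale with the inverse square of the pitch. The 15 micron TSV pitch is quoted above; the ~150nm inter-tier via pitch is my own illustrative assumption, since only the via size (under 50nm) is given.

```python
# Back-of-the-envelope check of the ~1:10,000 vertical-connectivity ratio.
# The TSV pitch is from the article; the monolithic inter-tier via pitch
# (~150 nm) is an assumption for illustration only.

tsv_pitch_um = 15.0        # TSV pitch, limited by keep-out zones
mono_via_pitch_um = 0.15   # assumed pitch for a <50 nm inter-tier via

# Connections per unit area go as 1/pitch^2, so the density ratio is the
# square of the pitch ratio.
ratio = (tsv_pitch_um / mono_via_pitch_um) ** 2
print(f"Monolithic vias per TSV footprint: ~{ratio:,.0f}")  # ~10,000
```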

Clearly IBM’s technologists are well aware of the many research papers showing that TSVs, with their huge size relative to every other on-chip element, diminish the performance and power benefits of folding designs into 3D. For example, the chart below was presented by Kim at the 2011 IEEE International Interconnect Technology Conference. It illustrates the performance benefit of folding a design twice (four tiers of transistors) as a function of via size. At a via size of 5 microns there is actually no benefit, while at a via size of 0.1 micron the benefit is equivalent to two nodes of dimensional scaling!

So can someone please explain to me why IBM is still talking about TSVs as if they were the only representative of the “3D Era”?

And particularly now, when monolithic 3D is finally practical, and the NAND Flash memory vendors are adopting it across the board!?


Another Winner at DesignCon
by Daniel Payne on 02-08-2013 at 5:44 pm

After a show like DesignCon wraps up, we get a chance to ask ourselves what it all meant and how this year was different from last. Reading through many posts about DesignCon, I discovered that the awards at DesignCon are less contentious than those at CES, and also that ANSYS received a DesignVision award for the 2nd year running. Their winning tool in the Modeling and Simulation category is called HFSS for ECAD.

Typically the design engineer doing PCB layout would use Cadence tools, export the design data, and ask the SI expert to import it into a field solver like HFSS, set up and run the solver, interpret the results, and report back verbally with recommendations. Not a very productive flow, so the process has now been integrated quite a bit.

You can now stay working in your familiar Cadence layout tools and launch HFSS for analysis:

  • Allegro PCB Designer
  • Allegro Package Designer
  • SiP Layout
  • Virtuoso Layout Suite

The software that enables this integration is called:

  • ANSYS HFSS, or HFSS solver-only option
  • ANSYS Designer Pre-Post, or ANSYS DesignerSI
  • Ansoft Links for ECAD

Using this combination of ECAD tools from Cadence and field solver tools from ANSYS lets engineers design and quickly analyze chips, packages or boards where a 3-D model is needed to account for high-speed interconnect or complex packaging structures. The manual steps have been eliminated and the setup streamlined, so that you can stay in the Cadence tools to run field solver jobs like:

  • HFSS port drawing and setup
  • Automated clipping of nets
  • Setup of HFSS meshing frequency
  • Frequency sweep type, and range setup
  • HFSS convergence criteria
  • Solver and basis function selection
  • Airbox definition

Summary
If you want to save some time and agony, then consider looking at this integration between Cadence and ANSYS tools for your chip, board and packaging design challenges where a 3-D field solver is required. I’m glad that Mark Ravenstahl of ANSYS talked with me two weeks ago about what to look for at DesignCon in 2013.


9 Micron Wooden Gate
by Paul McLellan on 02-08-2013 at 4:55 pm

When I started in this business, we were at 3 micron HMOS. A few other things are close to that size: a red blood cell is about 9 microns across, and a human hair is about 100 microns. And in a bizarre “only in Japan” video, people compete to plane the thinnest shaving off a plank of wood. It turns out the answer is 9 microns. That’s a sharp blade!


Navigating the new patent landscape
by Beth Martin on 02-08-2013 at 12:00 am

If you are considering filing a patent, you should know about the new patent rules effective on March 16, 2013. Most importantly, patent rights will switch from “first-to-invent” to “first-to-file.” Before we continue, I am not a lawyer; I’m just a dumb blogger. Seek actual legal advice about the new patent laws if you think you might be affected.

As I understand it, the first-to-invent model worked basically like this: You invent rocket shoes on April 1, 2010 and start working on a patent application, which you file one month later. The guy next door independently invents rocket shoes (what are the odds!) on April 10, 2010 and files a patent application one week later. He got his application in first, but under first-to-invent, you, not your neighbor, might have the right to the patent. You would still, I assume, have the right to contest his application.

The new rule, which puts the US in line with European, Canadian, and Japanese patent law, is “first-inventor-to-file.” Under this model, you should use those new rocket shoes to race to the patent office.

The America Invents Act also changes the definition of “prior art,” i.e., information that has been made available to the public that might be relevant to a patent’s claims of originality. So, if someone (say, another of your terrible neighbors) describes your rocket shoes in a journal a day before you get to the patent office, her publication could have a bearing on your claim. In practice, this means that corporate lawyers will need to do much more diligent prior-art discovery. You should consult your legal advisor to understand these changes, and, as always, be careful about what you share publicly before filing for a patent.

The US patent office also now offers a fast track option that moves your application through in 1 year. Entrepreneurs and small businesses get a discount on the additional fast-track filing fee. In fact, the fee schedule changes as of March 19. All the fees are reduced, and there are two discount categories; one for small entities and one for ‘micro’ entities. There is no mention of ‘nano’ entities, so your tiny devices apparently have to pay full price…

Oh, and to reiterate: IANAL (I am not a lawyer) and this post should in no way be construed as legal advice. In fact, the opinions set forth here are only my own and could be completely false (like the idea that those rocket shoes would actually work). If you actually attempt to navigate the incredibly complex world of patent law based on this blog, and you fail, it’s your own dumb fault. If you plan to file a patent, go see a lawyer for goodness sake.


Apple and Samsung Do It Again
by Paul McLellan on 02-07-2013 at 11:54 pm

The numbers are starting to come in for how everyone did in Q4. According to Canaccord Genuity, Apple made 69% of the profit and Samsung made 34%. What do you notice about those numbers? They add up to more than 100%. HTC supposedly made 1% of the profit and everyone else either broke even or lost money. Basically Apple and Samsung have rendered everyone else in the market irrelevant and pretty much doomed to run out of money.
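How can the shares add up to more than 100%? Because each share is computed against the industry’s net profit, and the companies that lose money subtract from that total. A toy illustration with made-up numbers (chosen only so the shares come out like the ones quoted):

```python
# Toy illustration (hypothetical numbers) of how profit shares can total more than 100%:
# loss-makers shrink the industry's net profit, which is the denominator.
profits = {"A": 69, "B": 34, "C": 1, "D": -4}   # arbitrary units, made up for illustration
industry_net = sum(profits.values())            # = 100
for vendor, p in profits.items():
    print(f"{vendor}: {p / industry_net:.0%} of industry profit")
# A (69%) and B (34%) alone sum to 103% because D's losses reduce the total.
```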

In terms of smartphone units, IDC’s numbers have Samsung at #1 with 63.7M units (up 70% on last year), Apple is #2 with 47.89M and a surprising #3, admittedly a long way behind, is Huawei with 10.8M.

Surprisingly, Google’s Android, which accounts for about 70% of smartphone activations, is completely dominated by Samsung when it comes to making any money. How long all the other Android players can continue without making any profit remains to be seen. Google’s own Motorola is losing money still, and it will be interesting to see what they bring to market this year since they claim that the existing product pipeline is now flushed and so future products should be ones designed after the Google acquisition of Motorola Mobility. It will also be interesting to see if Google keeps the Android playing field level or gives Motorola Mobility something that Samsung doesn’t get its hands on so quickly.

Also worryingly for Google, Samsung has a third operating system, Tizen, that it is just starting to bring to market. This is the metamorphosis of the MeeGo project, which Samsung took over when Nokia dropped it to focus on its Microsoft strategy (how’s that working out, by the way?). Since carriers apparently would really like a third ecosystem to keep Apple and Google off-balance, it will almost certainly be successful. For sure Microsoft/Nokia is not that third ecosystem and, while the recently renamed BlackBerry (the company used to be called Research In Motion, or RIM) has just come out with a brand-new version of its operating system, it seems unlikely to do more than shore them up inside major corporate strongholds such as government and some huge companies.

Tablets are big. So big that many people are predicting that in 2013 tablets will outsell notebooks and desktop PCs. According to Canalys, Apple sold 22.9 million tablets for a 49% share, Samsung sold 7.6 million, Amazon sold 4.6 million, and Google’s Nexus line sold 2.6 million. What happens to the tablet market next is going to be interesting. Amazon presumably sells its tablets close to cost and makes it up on other revenue streams. Apple, which certainly isn’t selling anywhere near cost, also has content revenue streams. Embarrassingly for Google, Amazon is selling more Android tablets than Google is, despite Google’s strategy of flooding the market with cheap tablets and making the money on search and other monetization schemes. Amazon is flooding better. Samsung is doing fine for now but could get squeezed, since it has a pure hardware business, at least for now.


Tubes of the Future
by Paul McLellan on 02-07-2013 at 10:00 pm

So what is a silicon nanowire? It is basically a FET where the active element is a wire 3-20nm in diameter. Where a FinFET has the gate wrapped around three sides of the channel, a nanowire (NW) has it wrapped all the way around. In essence, the wire runs through the middle of the gate.

There seem to be three issues in building a silicon nanowire: suspending the wire above the substrate, depositing the gate around the wire, and controlling the shape of the wire (you’d like it to be as circular as possible). One new thing you can do with NWs is run several wires through the same gate, either to switch multiple signals simultaneously or to get higher drive current.

Beyond 7nm, silicon is not the ideal material for nanowires and it is better to use carbon nanotubes. A carbon nanotube FET (CNTFET) carries much higher current than silicon, with the potential for an enormous 3-10X improvement in power and/or performance. A carbon nanotube is a rolled-up sheet of carbon about 1nm in diameter. The bandgap can be adjusted, which means we can have both normally-on and normally-off transistors (like the “p” and “n” transistors in CMOS), so we can continue to use complementary logic.

The fabrication process is completely different from that for silicon NWs. The carbon nanotubes are created as a sort of raw material away from the wafer fab. One issue is that some percentage of them are metallic rather than semiconducting, so repeated purification steps via column chromatography are required to ensure that (almost) none of the metallic ones get through.

The carbon nanotubes are then laid onto the wafer in a single layer at 6nm spacing. Using cut masks these can then be patterned.


One big challenge still to be solved with CNTs is that we currently don’t have a good way to contact them without requiring an extremely long (say 300nm) contact area. This will need a lot of work before CNTs are practical.


ARMs in the Clouds
by Paul McLellan on 02-07-2013 at 7:10 pm

The most interesting session at the Linley Tech Data Center Conference last week was the last one, on Designing Power Efficient Servers. What this was really about was whether ARM would have any success in the server market and what Intel’s response might be.

Datacenters are now very focused on power efficiency, and many track Power Usage Effectiveness (PUE), the ratio of the total power drawn by the facility (IT equipment plus cooling, power distribution, backup and so on) to the power consumed by the IT equipment itself. A PUE of 2 is about average and new facilities target 1.5. Power is generally the limiting factor on the size of a rack and the size of a datacenter, so further improvements are required. More than a third of the cost of ownership of a datacenter is proportional to its electricity usage. So despite the obvious issues with a change of architecture (porting software), if big savings can be made they can be truly compelling.
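A minimal sketch of the PUE calculation, with illustrative numbers only:

```python
# Minimal sketch of the PUE calculation described above (numbers are illustrative).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1000 kW in total to deliver 500 kW to its IT gear sits at the
# "average" PUE of 2.0; cutting the overhead in half reaches the 1.5 target.
print(pue(1000, 500))  # 2.0
print(pue(750, 500))   # 1.5
```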

Historically, server processors have focused on complex, highly superscalar CPUs designed for the best possible single-thread performance. But all that instruction reordering wastes power, as do very long pipelines and high clock rates. For datacenters focused on heavy computation this is the right type of server. But many other datacenters run highly threaded workloads that can easily take advantage of more cores per chip/server/rack. There are also opportunities to integrate high-speed I/O and networking on the same chip.

The obvious beneficiary of this way of thinking is ARM. They announced the 64-bit Cortex-A57 (no, I don’t understand ARM’s numbering system either) focused on this opportunity. Intel has responded with Centerton, its first server processor based on Atom. But, as Linley pointed out, it only has the same level of integration as Xeon and so requires external USB, Ethernet, disk controllers, etc. TDP is 6.3W at 1.6GHz, which looks nice until you consider that its performance is so much lower than Xeon’s that it is, in fact, less power efficient than Xeon. Intel’s next generation will be Avoton (no, I don’t understand Intel’s naming system either) in 22nm, with a second-generation Atom architecture and an “integrated system fabric.” But details have not yet been announced.

So who is working on alternatives? Tilera has repurposed its massively multicore processor for cloud servers. Calxeda is shipping an ARM-based server processor. AppliedMicro and Cavium are developing 64-bit ARM CPUs, and AMD has announced that it will use the Cortex-A57 in 2014 server products.

Calxeda presented their roadmap. Today they can put 3,000 servers in a single rack with 12,000 cores and 12TB of DRAM, cutting power requirements by 90% and eliminating 9 miles of cabling and 125 Ethernet switches. That’s the sort of thing that will get the attention of Google, Facebook and Amazon.

They had an interesting example: server capacity to service 10,000,000 HTTP requests per second on a 1Gb network infrastructure. The densest x86 solution requires 1997 servers in 4 racks with 44 switches and consumes 37kW. Using Calxeda’s ARM-based SoCs, this becomes 1535 servers in 1.6 racks with 2 switches and 13kW of power: 40% lower TCO, 61% less space, 95% fewer switches, 65% less energy. Close to their elevator pitch: 1/10th of the energy in 1/10th of the space at 1/2 the TCO, with all the performance.
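For what it’s worth, most of those percentages check out directly against the raw figures quoted (the 40% TCO number depends on cost assumptions that weren’t given, so it isn’t reproduced here); a quick sanity check:

```python
# Quick sanity check of the savings quoted above, using the raw figures from the talk.
x86 = {"racks": 4.0, "switches": 44, "power_kw": 37}
arm = {"racks": 1.6, "switches": 2,  "power_kw": 13}

for metric in ("racks", "switches", "power_kw"):
    saving = 1 - arm[metric] / x86[metric]
    print(f"{metric}: {saving:.0%} less")

# racks: 60% less (the talk rounds this to 61% less space)
# switches: 95% fewer
# power_kw: 65% less energy
```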

AppliedMicro had a similar message. There are computationally intensive workloads such as high-frequency trading or data-mining. But many cloud workloads are not like that and can take advantage of lots more cores even if the scalar performance of each one is not the maximum. Their X-Gene microserver integrates network, I/O and storage all on one chip.

They have their own comparison example too. A traditional server architecture providing 2560 cores requires 160 nodes and 28kW in 2 racks. With X-Gene, the same 2560 cores fit in 320 (much smaller) nodes drawing 19kW in 1 rack: half the rack space and about a third less power.

The server processor market is about $10B today so it is a prize worth fighting over. And, unlike the smartphone processor market, the major players are not designing their own SoCs so the whole market is available for merchant suppliers.


Notes from Common Platform: Collaborate or Die
by Beth Martin on 02-07-2013 at 2:16 pm

FinFETs are hot, carbon nanotubes are cool, and collaboration is the key to continued semiconductor scaling. These were the main messages at the 2013 Common Platform Technology Forum in Santa Clara.

The collaboration message ran through most presentations, like the afternoon talk by Subi Kengeri of GLOBALFOUNDRIES and Joe Sawicki of Mentor Graphics.

Subi talked about technology development, identifying the market drivers of technology (mobile), the technology challenges (power density, metrology for FinFETs), and the future device architectures (III-V, SiGe, carbon nanotubes). Listening to him lay out the technology landscape, one starts to understand why cooperation between Common Platform Alliance members (IBM, Samsung, GLOBALFOUNDRIES) is so important.

While the industry was once full of vertically integrated semiconductor companies that could conceive of, design, and manufacture their ICs under one roof (“real men have fabs”), today the fabless semiconductor industry depends on an “ecosystem” of IP, EDA, manufacturing, test and packaging. The EDA part of the equation, design enablement and manufacturing ramp, was covered by Joe Sawicki.

Joe emphasized that DFM today involves pulling manufacturing knowledge into the design flow as early as possible. Mentor is keenly aware of the importance of manufacturing on all stages of design, so much so that they built a new website focused on foundry solutions.

To get a manufacturable design, you have to go far beyond simply reading in a technology file with basic spacing rules. You need enough information flowing between tools for a true design/manufacturing co-optimization. From the EDA point of view, this means merging physical verification with place & route (Calibre InRoute), fusing DFM with test and yield analysis, streamlining final verification (pattern matching, DFM scoring), improving metal fill, developing technologies to improve circuit reliability (PERC), and improving test coverage for low- or 0-defect applications (cell-aware ATPG). It begins to look as if the nice, distinct boxes of the IC design flow are escaping their boundaries, blending together. Is it chaos? Is it cats and dogs living together? No, it’s the future; everything working better through mashups and collaborations.

In fact, my analysis of word usage in the day’s presentations revealed that “collaboration” was the most frequently used term of the day. Keep in mind that this reporter’s analysis is based on general recollections during happy hour, but still, I stand by it.


No EUV before 7nm?
by Paul McLellan on 02-07-2013 at 1:31 pm

I was at the Common Platform Technology Forum this week. One of the most interesting sessions was IBM’s Gary Patton giving an overview of the state of semiconductor fabrication. Then, at lunchtime, he was one of the people the press could question. In this post, I’m going to focus on Extreme Ultra-Violet (EUV) lithography.

You probably know that the biggest challenge is the light source. The EUV light is actually generated from droplets of tin: one laser shapes the droplet and then another, high-powered laser zaps it to create a plasma that emits EUV light. This light is collected and reflects off six mirrors (each with fairly low efficiency) and a reflective mask. Very little, maybe 5%, of the original light actually makes it to the wafer to expose the photoresist. Right now the power of the light source is up to about 30W, but it needs to be more like 250W, so it is off by nearly an order of magnitude.
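The rough arithmetic on those source numbers, for what it’s worth:

```python
# Rough arithmetic on the EUV source figures quoted above.
source_now_w = 30       # demonstrated source power
source_needed_w = 250   # roughly what production throughput requires
path_efficiency = 0.05  # ~5% of the light survives the mirrors and mask

print(source_needed_w / source_now_w)   # ~8.3x short of the target
print(source_now_w * path_efficiency)   # only ~1.5 W reaches the resist today
```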

Gary also mentioned two other problems that I’ve written about before but which don’t get mentioned nearly so much. After all, these are only problems if we get a usable EUV light source that works reliably ten billion times a day with adequate power.


The two other problems are that masks will not be defect free and that it is not possible to put a pellicle on the mask (since it would absorb all the EUV).

First, the defect problem. There are two issues. The mask blank will have defects that will print, so either the design needs to be redundant enough that it doesn’t matter, or the pattern needs to be placed so that the defects land on parts of the mask where they don’t matter. Either way, that requires knowing where the defects are, which is the second issue. You can’t just look for them with visible light since they are too small, so you need to scan the mask at very high resolution. But the scale of a mask versus the scale of a defect is huge: it is equivalent to searching 10% of California to make sure there are no golf balls there.

In refractive optics (what we use today) the mask is covered with a thin transparent film called a pellicle. One purpose of the pellicle is that any contamination that would otherwise land on the mask ends up on the pellicle instead. It is thus out of the focal plane of the stepper and so doesn’t print. EUV masks cannot have a pellicle, which means that any contamination will print. Worse, it won’t be obvious that this is happening, and it will continue to print until the mask is cleaned. This is potentially very expensive, since hundreds more fabrication steps may take place on a wafer that is actually already a dud. The whole wafer. Every die.

Some people have said that getting EUV to work is “just engineering.” Gary says this is not the case: “It is not just hard engineering work, it is hard physics problems too.” At lunch someone asked Gary if it was something like 80% engineering and 20% physics problems to be resolved. He said he wasn’t really sure how you put a number on it like that, but his guess would be 60% physics problems and 40% engineering.

If EUV becomes workable, IBM thinks that it will be for the 7nm node. Not 14nm or 10nm. It seems that “EUV will eventually work because it has to.” But reality doesn’t always cooperate on that basis.