
Kadenz Leben: CDNLive! EMEA
by Paul McLellan on 04-27-2012 at 2:01 am

If you are in Europe then the CDNLive! EMEA user conference is in Munich at the Dolce Hotel from May 14th to 16th. Like last month’s CDNLive! in Cadence’s hometown San Jose, the conference focuses on sharing fresh ideas and best practices for all aspects of semiconductor design from embedded software down to bare silicon.

The conference will start on Tuesday May 15th with a keynote presentation by Lip-Bu Tan (Cadence’s CEO, but you knew that). This will be followed by an industry keynote from Imec’s CEO Luc van den Hove.

After that are more than 60 technical sessions, tutorials and demos. There will also be an expo where companies such as Globalfoundries, Samsung and TSMC will exhibit.

Further, the Cadence Academic Network will host an academic track providing a forum to present outstanding work in education and research from groups or universities. The Network was launched in 2007 to promote the proliferation of leading-edge technologies and methodologies at universities renowned for their engineering and design excellence.

The conference fee is €120 + VAT (€142.80 total).

The conference hotel is:
Dolce Hotel
Andreas-Danzer-Weg 1
85716 Unterschleißheim
Germany

To register for CDNLive! EMEA go here.


Intel’s Ivy Bridge Mopping Up Campaign
by Ed McKernan on 04-26-2012 at 9:03 pm

In every Intel product announcement and PR event, there are hours of behind-the-scenes meetings to decide what to introduce, what the messages are, and how to maximize the impact on the marketplace. The Ivy Bridge product release speaks volumes about what Intel wants to accomplish over the coming year. To summarize: first, as I mentioned in a recent blog, they are in a mopping-up operation with respect to nVidia and AMD, though with a different twist. And second, Intel is ramping 22nm at a very rapid pace, which means Ivy Bridge ultrabooks and cheap 32nm Medfields are on their way.

When Ivy Bridge was launched, I expected a heavy dose of mobile i5 and i7 processors and a couple of ULV parts as well. We didn’t get that. What we got instead was a broad offering of i5 and i7 desktop processors and high-end i7 mobile parts with high TDP (Thermal Design Power) geared towards 7lb notebooks. I believe Intel shifted their mix in the last couple of months in order to take advantage of the shortfall of 28nm graphics chips that were expected to be available from nVidia and AMD. When the enemy is having troubles, it is time to run right over them. You see, Intel believes it may be able to prove to the world, especially corporate customers, that external graphics is unnecessary for anyone except teenage gamers, so why wait for the parts from nVidia and AMD?

In a couple weeks, Intel will have their annual Analyst Meeting. I expect that they will highlight Ivy Bridge and the future Haswell architecture in terms of overall performance, low power and especially graphics capabilities. The overriding theme is that Intel has found a way to beat AMD and nVidia with a combination of leading edge process and a superior architecture. Rumors are that Haswell will include a new graphics memory architecture in order to improve performance. I am not a graphics expert, but I do know improved memory systems are always a performance kicker.

But back to the short term, which is what I find so interesting. Product cycles are short, and Intel knows that shipping a bunch of Ivy Bridge PCs into the market without external graphics for the next 3-6 months can put a dent in nVidia’s and AMD’s revenue and earnings, which will hurt their ability to compete long term. I expect both to recover somewhat by fall, but by then Intel will shift its focus to ultrabooks and mobile, making a stronger argument that Ivy Bridge graphics is sufficient and the best solution for maximizing battery life in mobile devices. At that point they should start to cannibalize the mobile side of AMD and nVidia’s business. Both companies could be looking at reduced revenues for the remainder of the year.

The bottom line to these tactical moves is that Intel seeks to generate additional revenue off the backs of AMD and nVidia. In the near term it should come from the desktop side, and later in the summer and fall from the mobile side. There is $10B of combined nVidia and AMD revenue for Intel to mine, which is much more than it could accrue by pushing its smartphone strategy ahead of its PC mopping-up operation. A weakened AMD and nVidia will have a hard time fighting off the next stage: the Haswell launch in 1H 2013 and the coming 22nm Atom processors for smartphones and tablets.

FULL DISCLOSURE: I am long AAPL, INTC, ALTR and QCOM


Do you need more machines? Licenses? How can you find out?
by Paul McLellan on 04-26-2012 at 9:00 pm

Do you need more servers? Do you need more licenses? If you are kicking off a verification run of 10,000 jobs on 1,000 server cores then you are 9,000 cores and 9,000 licenses short, but you’d be insane to rush out with a purchase order just on that basis. Maybe verification isn’t even on the critical path for your design, in which case you may be better off pulling some of those cores and re-allocating them to place and route or DRC. Or perhaps another 50 machines and another 50 licenses would make a huge difference to the overall schedule for your chip. How would you know?

When companies had a few machines, not shared corporate wide, and a comparatively small number of licenses, these questions were not too hard to get one’s head around, and even if you were out by a few percent the financial impact of an error would not be large. Now that companies have huge server farms and datacenters containing tens if not hundreds of thousands of servers, these are difficult questions to answer. And the stakes are higher. Computers are commoditized and cheap, but not when you are considering them in blocks of a thousand. A thousand simulation licenses is real money too, never mind a thousand place & route licenses.

The first problem is that you don’t necessarily have good data on what your current EDA license usage actually is. RunTime Design Automation’s LicenseMonitor is specifically designed for this first task: getting a grip on what is really going on. It pulls data from the license servers used by your EDA software and can display it graphically, produce reports, and generally give you insight into where licenses are sitting unused and where you are tight, with licenses being denied or holding up jobs.
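To make the raw data involved concrete, here is a minimal sketch in Python of pulling utilization numbers from a FlexLM-style license server with `lmutil lmstat`. This only illustrates the kind of data a tool like LicenseMonitor starts from, not how the product itself works; the server address and the output-parsing regex assume a standard FlexLM setup and are placeholders for illustration.

```python
import re
import subprocess

# Minimal sketch: summarize FlexLM license usage from `lmstat -a` output.
# Illustration only; not a description of how LicenseMonitor is implemented.
USAGE_RE = re.compile(
    r"Users of (?P<feature>\S+):\s+\(Total of (?P<issued>\d+) licenses? issued;\s+"
    r"Total of (?P<in_use>\d+) licenses? in use\)"
)

def license_usage(lmstat_output: str) -> dict:
    """Return {feature: (in_use, issued)} parsed from `lmstat -a` text."""
    usage = {}
    for m in USAGE_RE.finditer(lmstat_output):
        usage[m.group("feature")] = (int(m.group("in_use")), int(m.group("issued")))
    return usage

if __name__ == "__main__":
    # The license server address below is a made-up placeholder.
    out = subprocess.run(
        ["lmutil", "lmstat", "-a", "-c", "27000@license-server"],
        capture_output=True, text=True, check=True,
    ).stdout
    for feature, (in_use, issued) in sorted(license_usage(out).items()):
        pct = 100.0 * in_use / issued if issued else 0.0
        print(f"{feature:30s} {in_use:5d}/{issued:<5d} ({pct:4.1f}% busy)")
```

Running something like this every few minutes and storing the results is essentially the raw material; the value of a dedicated tool is in the history, the reporting and the denial tracking on top of it.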


The second challenge is to decide what would happen if you had more resources (or, less likely, fewer, such as when cutting outdated servers from the mix): more servers, more software licenses. This is where RunTime’s second tool, WorkloadAnalyzer, comes into play. It can take a historical workload and simulate how it would behave under various configurations (obviously without needing to actually run all the jobs). Using a tool like this on a regular basis allows tuning of hardware and software resources.
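As a toy illustration of the "replay the historical workload instead of re-running it" idea, here is a drastically simplified version in Python: jobs are just (submit time, runtime) pairs competing first-come-first-served for a single pool of identical licenses. A real tool such as WorkloadAnalyzer models multiple license types, queues, priorities and machines; the trace below is invented purely to show the shape of the result.

```python
import heapq

# Toy what-if replay: estimate total wait time for a historical job trace under a
# given license/slot count, without re-running the jobs. A drastic simplification
# of what a workload-analysis tool does; FIFO scheduling, one resource type only.
def total_wait(jobs, capacity):
    """jobs: iterable of (submit_time, runtime) in seconds."""
    busy = []                                   # min-heap of finish times
    wait = 0.0
    for submit, runtime in sorted(jobs):
        while busy and busy[0] <= submit:
            heapq.heappop(busy)                 # free slots that already finished
        if len(busy) < capacity:
            start = submit                      # a slot is free: start immediately
        else:
            start = heapq.heappop(busy)         # wait for the earliest finisher
        wait += start - submit
        heapq.heappush(busy, start + runtime)
    return wait

if __name__ == "__main__":
    # Hypothetical trace: a job submitted every second, each running 300 seconds.
    trace = [(float(i), 300.0) for i in range(1000)]
    for cap in (50, 100, 200, 400):
        print(f"{cap:4d} licenses -> total wait {total_wait(trace, cap)/3600:8.1f} h")
```

Even this toy version shows the diminishing returns: beyond the peak concurrency of the trace, extra licenses buy nothing.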

And it is invaluable coming up to the time to re-negotiate licenses with your EDA vendors, allowing you to know what you need and to see where various financial tradeoffs will leave you.

In the past this would all have been done by intuition, but this approach is much more scientific. The true power of the tool lies in its Sensitivity Analysis Directed Optimization capability. This tells you which licenses or machines to purchase (or cut), and by how much, given a specific budget, such that doing so produces the greatest decrease (or smallest increase) in total wait time. Scenario Exploration lets you see the impact of adding 10 more licenses of a tool, removing a group of old machines, increasing company-defined user limits, and so on.
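A hand-rolled flavor of that sensitivity-directed idea might look like the greedy sketch below: given a budget and per-unit prices, keep buying whichever resource shaves the most simulated wait time per dollar. This is my own toy illustration of the concept, not RTDA’s algorithm; `simulate` stands in for a workload replay like the one sketched earlier, and the resource names and prices are invented.

```python
# Toy greedy sensitivity-directed purchasing: not RTDA's actual algorithm.
# `simulate(config)` must return the total wait time for a resource configuration.
def greedy_purchase(simulate, config, prices, budget):
    config = dict(config)
    baseline = simulate(config)
    while True:
        best = None
        for resource, price in prices.items():
            if price > budget:
                continue
            trial = dict(config, **{resource: config[resource] + 1})
            new_wait = simulate(trial)
            gain = (baseline - new_wait) / price        # wait-time saved per dollar
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, resource, price, new_wait)
        if best is None:                                # out of budget or no benefit left
            return config, baseline
        _, resource, price, baseline = best
        config[resource] += 1
        budget -= price

if __name__ == "__main__":
    # Hypothetical model: wait time falls as licenses and machines are added.
    def simulate(cfg):
        return 1e6 / (cfg["sim_licenses"] * cfg["machines"])
    cfg, wait = greedy_purchase(simulate,
                                {"sim_licenses": 100, "machines": 200},
                                {"sim_licenses": 20000, "machines": 3000},
                                budget=200000)
    print(cfg, f"remaining total wait {wait:.0f}")
```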


More details on LicenseMonitor are here.
More details on WorkloadAnalyzer are here.


TSMC 28nm Beats Q1 2012 Expectations!
by Daniel Nenni on 04-26-2012 at 9:00 am

TSMC just finished the Q1 conference call. I will let the experts haggle over the wording of the financial analysis, but the big news is that 28nm accounted for 5% of TSMC’s Q1 wafer revenue, beating my guess of 4%. So all of you who bet against TSMC 28nm, it’s time to pay up! Coincidentally, I’m in Las Vegas, where the term deadbeat is taken literally!

Per my blog The Truth of TSMC 28nm Yield!:

28nm Ramp:

  • 2% 1/18/2012
  • 4% 4/26/2012 (my guess)
  • 8% 7/19/2012 (my guess)
  • 12% 10/25/2012 (my guess)


    “By technology, revenues from 28nm process technology more than doubled during the quarter and accounted for 5% of total wafer sales owing to robust demand and a fast ramp. Meanwhile, demand for 40/45nm remained solid and contributed 32% of total wafer sales, compared to 27% in 4Q11. Overall, advanced technologies (65nm and below) represented 63% of total wafer sales, up from 59% in 4Q11 and 54% in 1Q11.”
    TSMC Q1 2012 conference call 4/26/2012.

“Production using the cutting-edge 28 nanometer process will account for 20 percent of TSMC’s wafer revenue by the end of this year, while the 20 nanometer process is being developed to further increase speed and power,” Morris Chang, TSMC Q1 2012 conference call, 4/26/2012.

So tell me again, Mr. Mike Bryant, CTO of Future Horizons, that “foundry Taiwan Semiconductor Manufacturing Co. Ltd. is in trouble with its 28-nm manufacturing process technologies”. Tell me again, Mr. Charlie Demerjian of SemiAccurate, that “TSMC halted 28nm for weeks” in Q1 2012. And special thanks to Dan Hutcheson, CEO of VLSI Research, John Cooley of DeepChip, and all of the other semiconductor industry pundits who propagated those untruths.

Let’s give credit where credit is due here: I sincerely want to thank you guys for enabling the rapid success of SemiWiki.com. We could not have done it without you! But for the sake of the semiconductor ecosystem, please do a better job of checking your sources next time.

    During the TSMC Symposium this month, Dr. Morris Chang, Dr. Shang-Yi Chiang, and Dr. Cliff Hou all told the audience of 1,700+ TSMC customers, TSMC partners, and TSMC employees that TSMC 28nm is: yielding properly, as planned, faster than 40nm, meeting customer expectations, etc…

Do you really think these elite semiconductor technologists would perjure their hard-earned reputations in front of a crowd of people who know the truth about 28nm but are sworn to secrecy? Of course not! Anyone who implies they would, just to get clicks for their website ads, is worse than a deadbeat and should be treated as such. Just my opinion of course!

TSMC also announced a 2012 CAPEX increase to between $8B and $8.5B, compared to the $7.3B spent in 2011. My understanding is that the additional money will be spent on 20nm capacity and development activities (FinFETs!?!?). In Las Vegas that may not qualify as “going all in” but it is certainly a very large bet on the future of the fabless semiconductor ecosystem!


    Non Volatile Memory IP: a 100% reliable, anti fuse solution from Novocell Semiconductor
    by Eric Esteve on 04-25-2012 at 9:36 am

In this pretty shaky NVM IP market, where articles frequently mention legal battles rather than product features, it seems interesting to look at Novocell Semiconductor and their NVM IP product offering and try to figure out what makes these products specific, what the differentiators are. Before looking at the SmartBit cell in detail, let’s have a quick look at the size of the NVM IP market. NVM as a technology is not so young: I remember my colleagues designing Flash memory back in 1984 at MHS, then moving to STM to create a group that has since generated billions of dollars in Flash memory revenue. But the concept of an NVM block that can be integrated into an ASIC is much more recent; Novocell, for example, was created in 2001. I remember that, in 2009, analysts predicted this IP market would be worth around $50 million in 2015. The NVM IP market is not huge, probably worth a couple of dozen million dollars today, but it’s a pretty smart technology: integrating from a few bytes up to Mbits into an SoC can help reduce the number of chips in a system, increase security, allow Digital Rights Management (DRM) for video and TV applications, or provide encryption capability.

To come back to embedded NVM technology, the main reason for its lack of success in ASICs in the past (in the 1990s) was that integrating a Flash memory block into an SoC required adding specific mask levels, leading to a cost overhead of about 40%. I remember trying to sell such an ASIC solution in 1999-2001: it looked very attractive to the customer, until we talked about pricing and the customer realized that the cost of the entire chip would be impacted. I made few, very, very few sales of ASICs with embedded Flash! The current NVM IP offering from Novocell Semiconductor does not incur such a cost penalty; the blocks can be embedded in standard logic CMOS without any additional process or post-process steps, and can be programmed at the wafer level, in package, or in the field, as the end use requires.

An interesting feature of the Novocell NVM family, based on antifuse One Time Programmable (OTP) technology, is the “breakdown detector”. It allows the programming logic to determine precisely when the voltage applied to the gate (which programs the memory cell by breaking the oxide and letting current flow through it) has actually created an irreversible oxide breakdown, the “hard breakdown”, as opposed to a “soft breakdown”, which is an apparent but reversible oxide breakdown. If the oxide has not been stressed for long enough, the hard breakdown is not achieved and the user can’t program the memory cell. The two pictures below help explain the mechanism:

• In the first, the current (plotted against time) only rises sharply once the thermal breakdown is effective.

• The second picture shows the current behavior of a memory cell for different cases; we can see that when the hard breakdown is effective, the current is about three orders of magnitude higher than for a progressive (or soft) breakdown.

Thus, we can say that one of Novocell’s differentiators is reliability. The patented dynamic programming and monitoring process of the Novocell SmartBit™ avoids the limitations of traditional embedded NVM technology by ensuring that 100% of customers’ embedded bit cells are fully programmed. The result is Novocell’s unmatched 100% yield and unparalleled reliability, guaranteeing customers that their data is fully programmed initially and will remain so for an industry-leading 30 years or more. Novocell NVM IP scales to meet the NVM size and complexity challenges that grow as SoCs continue to move to advanced nodes such as 45nm and beyond.
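To make the dynamic program-and-monitor idea concrete, here is a purely conceptual program-and-verify loop in Python pseudocode: keep stressing the gate oxide and watch the sensed current, declaring the bit programmed only once the current jumps by roughly three orders of magnitude, i.e. once a hard, irreversible breakdown has occurred. This is my own illustration of the general principle, not Novocell’s SmartBit circuit; the thresholds, pulse counts and function names are invented.

```python
# Conceptual program-and-verify loop with breakdown detection. Illustration only;
# thresholds and timings are invented, not Novocell parameters.
SOFT_BREAKDOWN_CURRENT = 1e-9   # amps: leakage through a stressed but intact oxide
HARD_BREAKDOWN_RATIO = 1000     # hard breakdown current ~3 orders of magnitude higher

def program_bit(apply_program_pulse, measure_cell_current, max_pulses=50):
    """Keep stressing the oxide until the sensed current confirms an irreversible
    (hard) breakdown, then stop. Returns True if the bit ended up programmed."""
    for _ in range(max_pulses):
        apply_program_pulse()                  # stress the gate oxide for one interval
        current = measure_cell_current()
        if current >= HARD_BREAKDOWN_RATIO * SOFT_BREAKDOWN_CURRENT:
            return True                        # hard breakdown: bit is programmed
    return False                               # never reached hard breakdown

if __name__ == "__main__":
    # Fake cell model: current jumps ~3 decades after enough programming pulses.
    state = {"pulses": 0}
    def apply_program_pulse():
        state["pulses"] += 1
    def measure_cell_current():
        return 2e-6 if state["pulses"] >= 7 else 1e-9
    print("programmed:", program_bit(apply_program_pulse, measure_cell_current))
```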

    Eric Esteve from IPNEST –


    Mentor’s New Emulator
    by Paul McLellan on 04-25-2012 at 8:00 am

Mentor announced the latest version of their Veloce emulator at the Globalpress briefing in Santa Cruz. The announcement is in two parts. The first is that they have designed a new custom chip with twice the performance and twice the capacity: it supports designs of up to two billion gates and many software engineers working simultaneously. Surprisingly, the chip is only 65nm, but Mentor reckons it outperforms competing emulators based on 45nm technology. I’m not sure why they didn’t design it at 45nm and go even faster, but this sort of chip design is a treadmill, so it is not really a surprising announcement. In fact, I can confidently predict that in 2014 Mentor will announce the 28nm version with more performance and more capacity!

Like most EDA companies, Mentor doesn’t do a lot of chip design; after all, they sell software. But emulation is the one area that actually uses the tools to build chips. Since one of the big challenges in EDA is getting hold of good test data for real chips, the group is very popular with other parts of Mentor, since the proprietary nature of the data is less of an issue inside the same company.

    The other thing that they announced is VirtuaLAB. I assumed that this was already announced since Wally Rhines talked about it in his keynote at the Mentor Users’ Group U2U a week or two ago and I briefly covered it here. Historically, people have used an in-circuit-emulation (ICE) lab with real physical peripherals. These suffer from some big problems:

    • expensive to replicate for large numbers of users
    • time consuming to reconfigure (which must be done manually)
    • challenging to debug
    • doesn’t fit well with the security access procedures for datacenters (Jim Kenney, who gave the presentation, said he had to get special security clearance to go and get a picture inside the datacenter since even the IT guys are not allowed in)
    • is never where you want it (you are in India, the peripherals are in Texas)

VirtuaLAB is a software implementation of peripherals. They run on Linux and are hardware-accurate. They can easily be shared; after all, it’s just Linux. They can be reconfigured by software. You don’t need to go into the datacenter on a regular basis to reboot or reconfigure anything. Of course, the purpose of all this is so that you can develop, debug and test device drivers and so on using the models. For example, one of the models is a USB 3.0 mass storage peripheral (aka thumb drive).

Afterwards I talked to Jim. He confirmed something I’ve been hearing from a number of directions. Although people have been saying for years that simulation is running out of steam and you need to switch to emulation (especially people whose job is to sell emulation hardware), it does finally seem to be true. You can’t verify a modern state-of-the-art SoC, including the low-level software that needs to run against the hardware, without emulation. For example, a relatively small camera chip (10M gates) requires two weeks to simulate but only 20 minutes to emulate, roughly a 1,000X speedup.

I asked him who his competition is. Cadence is still the most direct competition. Customers would love to be able to use an emulator at Eve’s price point, but it seems that for many designs, getting the design into the emulator is just too time-consuming. And EDA has always been a bit like heart surgery: it’s really difficult to market yourself as the discount heart surgeon.


    Fast buses at DAC
    by Paul McLellan on 04-24-2012 at 10:05 pm

    UPDATE: there is free WiFi on all buses.

    OK, these are not the 128 bit 1GHz buses we have to hear about every day. They go roughly 40 miles in roughly an hour. But they take you from Silicon Valley to DAC and back, and they are cheaper than BART or Caltrain.

    For the first time this year, DAC has buses from Silicon Valley to Moscone for DAC. They depart from the Cadence parking lot at 2655 Seely Avenue (where you can leave your car all day even if you are not a Cadence employee). The buses run Monday through Wednesday.

    Into San Francisco there are buses at:

    • 7.30am
    • 8.00am
    • 8.30am.

    Return buses are at:

    • 6.15pm
    • 6.45pm
    • 7.15pm.

    I am trying to find out if there is WiFi on the buses. Or maybe Google and co already have all the WiFi enabled buses for their daily fleet of Gbuses that trawl from San Francisco to Mountain View and vice-versa.

    You can’t just show up to get on the bus. You need to attach a bus ticket to your DAC registration. If you are already registered then use the link in your confirmation email. If you are about to register, then don’t forget to add the bus before you check out.

    Full details and registration links are here.


    Audio, not your father’s MP3
    by Paul McLellan on 04-24-2012 at 9:26 pm

    Chris Rowen, Tensilica’s CTO, presented in Santa Cruz at the Globalpress briefing. He was basically presenting Tensilica’s audio strategy, which I’ve written about before. But he provided an interesting perspective. Globalpress (which flies journalists in from all over the world and then fills the few remaining empty seats with a few of us local guys) has been going ten years.

Ten years ago, when Globalpress started, the audio processing world looked like this:

• the leading-edge process everyone was talking about was 0.13um (or 130nm).
• the first MP3 player, the Rio, was just 3 years old.
• the iPod was out…just…not even for a year. The one with the mechanical touch-wheel.
• VHS would outsell DVD for another couple of years.
• the first cell phone with a built-in camera had just been released in the US (by Samsung), with VGA resolution (OK, that’s not audio).

And Tensilica had not yet introduced an audio core. But one year later, 9 years ago, they had. Now it is accepted wisdom that audio processing on a general-purpose processor (i.e. an ARM) is silly: you should offload it onto a specialized core such as Tensilica’s (or the ARC-based Synopsys one that we also recently covered). I was at the Linley Tech Mobile Conference (née Microprocessor Forum) and a Tensilica demo by Wolfson Microelectronics (yeah, Edinburgh, one of my alma maters) showed it dramatically. The ARM would sleep for over 9 seconds, wake up for less than a second to feed a Tensilica core with data, and then go back to sleep. I forget the precise power reduction (there were ammeters and oscilloscopes to keep everyone honest) but it was dramatic.
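For a rough sense of why that duty cycling matters, here is a back-of-the-envelope calculation. The roughly 9-seconds-asleep, under-a-second-awake split comes from the demo described above; every current figure below is a hypothetical placeholder (the actual measurements weren’t disclosed), so only the shape of the result matters, not the numbers.

```python
# Illustrative duty-cycle arithmetic for the audio-offload demo. All current values
# are hypothetical placeholders, not Wolfson or Tensilica measurements.
period_s = 10.0          # one feed cycle: ~9 s asleep + ~1 s awake (approximate)
awake_s = 1.0
arm_active_ma = 100.0    # hypothetical ARM active current
arm_sleep_ma = 2.0       # hypothetical ARM sleep current
dsp_ma = 5.0             # hypothetical audio DSP current, always on during playback

duty = awake_s / period_s
offloaded_ma = arm_active_ma * duty + arm_sleep_ma * (1 - duty) + dsp_ma
arm_only_ma = arm_active_ma                 # ARM decoding continuously, no DSP
print(f"offloaded average: {offloaded_ma:.1f} mA")
print(f"ARM-only average:  {arm_only_ma:.1f} mA")
print(f"reduction:         {100 * (1 - offloaded_ma / arm_only_ma):.0f}%")
```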

The problem is that voice requirements are going up faster than Moore’s Law (or More than Moore, as we are learning to say). Basic voice runs at 200MHz today but will go up to 600MHz in 2-3 years. And those power budgets, not so much.

    We are looking forward to much higher sound performance, especially on the voice receiving side:

    • active noise control (knocking out ambient noise with its inverse)
    • beam forming microphone arrays
    • always-on voice recognition

The point of Chris’s story wasn’t so much that voice is different, but that it is a pioneer, going where all the other technologies (video, radio processing such as Bluetooth, wireless and LTE, camera image processing and so on) are headed. And it is exploding…



    Smart mobile SoCs: Texas Instruments
    by Don Dingee on 04-24-2012 at 9:00 pm

    TI has parlayed its heritage in digital signal processing and long-term relationships with mobile device makers into a leadership position in mobile SoCs. They boast a relatively huge portfolio of design wins thanks to being the launch platform for Android 4.0. On the horizon, the next generation OMAP 5 could change the entire mobile industry. Continue reading “Smart mobile SoCs: Texas Instruments”


    Broadcom announces an HFC
    by Paul McLellan on 04-24-2012 at 8:00 pm

    For a long time Cisco had a very high end product whose official internal name during its years of development was HFR, which stood for Huge F***ing Router (the marketing department insisted it stood for ‘fast’). Eventually it got given a product number, CRS-1, but not before I’d read an article about it in the Economist under its old name. Wikipedia is on it. I was at the Globalpress briefing in Santa Cruz today and Broadcom announced their next generation network processor, definitely a chip deserving of the HFC appellation.

    Unless you are a carrier equipment manufacturer such as Alcatel-Lucent, Ericsson or Huawei then the precise details of the chip aren’t all that absorbing. If you are, it’s called the BCM88030.

    What I think is most interesting is the scale of the chip. It’s an amazing example of just what can be crammed onto a 28nm chip. Not just in size, but also in performance and power (or lack of it).

Firstly, this chip is a 100Gbps full-duplex network processor. That means it handles 300M packets per second, or a packet roughly every 3ns. Since its clock rate is 1GHz, a new packet arrives in the time it takes to execute about 3 instructions, so the only way this is workable is through parallelism. Indeed, the chip contains 64 custom processors. Even that is not enough: each processor can handle up to 32 packets at a time through advanced hardware multi-threading. Even that is not enough: some specialized functions just aren’t suited to general-purpose processors, so they are offloaded to one of 7 specialized engines that perform functions like lookup (MAC addresses, IP addresses etc.), policing and timing. All this while reducing power and area compared to previous-generation solutions by 80%.
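Here is the parallelism arithmetic spelled out. The 1GHz clock, 300M packets per second, 64 processors and 32 threads per processor are from the announcement; the per-packet cycle budget derived from them is my own back-of-the-envelope math, not a Broadcom figure.

```python
# Back-of-the-envelope check of the parallelism argument above.
clock_hz = 1e9            # 1 GHz clock
packets_per_s = 300e6     # 300M packets/second
processors = 64
threads_per_proc = 32

cycles_per_packet_serial = clock_hz / packets_per_s          # ~3.3 cycles per packet
packets_in_flight = processors * threads_per_proc            # 2,048 packets at once
cycles_per_packet_parallel = cycles_per_packet_serial * packets_in_flight

print(f"serial budget:     {cycles_per_packet_serial:.1f} cycles per packet")
print(f"packets in flight: {packets_in_flight}")
print(f"parallel budget:   {cycles_per_packet_parallel:,.0f} cycles per packet")
```

With roughly 2,000 packets in flight, each packet gets a budget of several thousand cycles rather than three, which is what makes the processing feasible at all.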

    That’s just the digital dimension. The chip also contains the interfaces to the outside world with 24 10Gb/s Ethernet MACs, 6 50Gb/s Ethernet MACs and 2 100Gb/s Ethernet MACs.

    What is driving the need for this amount of bandwidth is that carriers are switching completely to using Ethernet as their internal backbone between the different parts of their networks, from the base-station to the access network, to the aggregation network and in the core. This extremely high performance chip is targeted at aggregation and the core.

    In turn this is driven by 3 main things:

    • millions of smartphones and tablet computers
    • upgrade of networks from 3G to 4G with increased bandwidth
    • increasing use of video

These are causing an explosion in mobile backhaul, the (mostly) wired network that hooks all the base-stations into the carrier’s network and on to the core backbone of the internet.

The growth is quite significant. A smartphone generates 24X the data of a regular phone (I’m not sure if that includes the voice part, although in terms of bits per second that is quite low with a modern vocoder). Tablets generate 5X the data of a smartphone (and so 120X a regular phone). And the number of units is going up fast. By 2015 it is predicted that the number of connected devices will be 2X the world population. As for that video, by 2015 one million minutes of video will cross the network each second. That’s a lot of cute kittens. In total, mobile data traffic is set to increase 18-fold between 2011 and 2015.

This is driving 100G Ethernet adoption, forecast to have a 170% CAGR over the next 5 years. Hence Broadcom’s development of this chip. But, like any other system of this complexity, the chip development is accompanied by an equally challenging software effort: developing a tool chain and a complete reference implementation so that customers can actually use the chip.