
Cadence ♥ TSMC
by Daniel Nenni on 04-19-2013 at 6:00 pm

TSMC has been investing in the fabless semiconductor ecosystem for 25+ years, and that is why they are the #1 foundry and lead this industry (my opinion). I’m a big fan of joint webinars. Not only is the collaboration open to the masses, it is also a close collaboration between the two sponsoring companies. Having worked on the TSMC AMS reference flows for the past four years, I can tell you that these webinars are definitely worth your time.

Interested in advanced node designs?
Enhance your expertise with two new webinars from TSMC and Cadence!

Addressing Layout-Dependent Effects: At 9am and 6:30pm PDT on April 25, Manoj Chacko and Bala Kasthuri of Cadence and Jason Chen from TSMC will present “Variation-Aware Design: Detecting and Fixing Layout-Dependent Effects Using the Cadence® Virtuoso® Platform, Part II,” a sequel to Variation-Aware Design, Part I. You’ll learn about:

  • The solutions jointly developed by Cadence and TSMC to provide a complete layout-dependent effect (LDE) flow for circuit and layout designers working at 28nm and below
  • When, why, and how you should incorporate TSMC’s LDE-API with Cadence Virtuoso tools into an analog, custom, or mixed-signal design flow to achieve the most efficient design cycle time

Register Now: https://www.secure-register.net/cadence/TSMC_Q2_2013

Managing Design Complexity at 20nm: At 9am and 6:30pm PDT on May 23, Rahul Deokar and John Stabenow of Cadence and Jason Chen from TSMC will present, “20nm Design Methodology: A Completely Validated Solution for Designing to the TSMC 20nm Process Using Cadence Encounter®, Virtuoso, and Signoff tools.” You’ll learn about:

  • The TSMC-Cadence solutions in the TSMC 20nm Reference Flow, tools certification, and Cadence tools and methodology to enable 20nm design with double patterning technology (DPT)-aware capabilities, to reduce design complexities and deliver required accuracy
  • How in-design DPT and design rule checking (DRC) can improve your productivity
  • How both colored and colorless methodologies are supported, and data is efficiently managed in front-to-back design flows
  • How local interconnect layers, SAMEMASK rules, and automated odd-cycle loop prevention are supported
  • How mask-shift modeling with multi-value SPEF is supported for extraction, power, and timing signoff

Register today: https://www.secure-register.net/cadence/TSMC_Q2_2013

Cadence enables global electronic design innovation and plays an essential role in the creation of today’s integrated circuits and electronics. Customers use Cadence software, hardware, IP, and services to design and verify advanced semiconductors, consumer electronics, networking and telecommunications equipment, and computer systems. The company is headquartered in San Jose, Calif., with sales offices, design centers, and research facilities around the world to serve the global electronics industry. More information about the company, its products, and services is available at www.cadence.com.



Andes, ARM, Imagination, MIPS
by Paul McLellan on 04-19-2013 at 12:44 pm

The last session of the day at Linley Mobile was about processors that go into smartphones. One surprise was a core that nobody seems to have heard of, since until now it has only really been used in Taiwan, and it is used in several Mediatek chips.

The most “glamorous” processor in a smartphone is the one in the application processor chip (or the one exposed to the apps in an integrated AP+BB chip). However, there may be as many as 15 more processors in a smartphone, inside things like the GPS, WiFi, and power management. These processors are not automatically ARM, since their code is purely internal to the chip and is not exposed to the user. It is hard for anyone to win an AP processor socket from ARM (although Intel is trying), since even on Android, where apps are written in Java and are supposed to be portable, in reality many apps, especially games, contain ARM assembly.

The new processor that nobody had heard of is the Andes core. I was sitting next to a strategic marketing guy from Qualcomm and he hadn’t heard of it either. Andes were actually announcing a new ultra-low-power core, the Hummingbird N705, at the conference. They claim its performance is 30% better than the ARM Cortex-M0 as measured by Dhrystone MIPS/mW (although I thought the Dhrystone benchmark was regarded as obsolete these days compared to benchmarks more focused on browsing and the like). They also said they have over 60 licensees, and that their development environment has over 5,000 installations.

Next up was ARM. They emphasized that when it comes to process, one size does not fit all. For the high end of the market (iPhone, Galaxy, etc.) ARM’s roadmap is dual- and quad-core Cortex-A57/A53. But 20nm and below is not viable for anything except premium products, since it is not a cost reduction from 28nm. So reducing area and power at the low end, and thus staying roughly on the Moore’s Law trajectory, requires micro-architectural innovation.

The Cortex-A7 has a very efficient power architecture, with an in-order 8-stage partial dual-issue pipeline and an improved integrated L2 cache subsystem. It consumes less than 100mW at 1GHz (I don’t know on what process; I’m assuming 28nm).


ARM’s view on quad-core is that although four cores scale well on threaded benchmarks, these don’t correlate with user experience. Since the 3rd and 4th cores handle background and OS threads, they do not need to be big. The current big.LITTLE architecture doesn’t allow this: in any pair either the big core is running or the little one, but not both. However, that will change soon, and it will be possible to use both cores of any pair. ARM believes this will be the most efficient way to build a quad-core (or six or eight cores if required for the high end), delivering energy savings of as much as 75% for the same peak performance. If a Mali GPU is added, it offloads so much of the performance needs that the high-performance graphics drivers can all run on just a little processor, for power efficiency (or higher FPS at the same power).

Finally, to close out the day, was Imagination who, of course, have just recently acquired MIPS. Their view is that the GPU is the heart of a smartphone. Although most of the software runs on the regular CPU (and ARM almost always), over half the die area is taken up with the GPU and it is the GPU that provides the ‘wow’ factor. Performance is moving towards 1 TFLOP on-chip for mobile.

Although there are lots of standard APIs, especially for graphics, the inherent architectural efficiencies remain important. Frequency, power and area all impact the user experience (or the price, which I suppose is part of the experience).

Of course Imagination would love to use MIPS to displace all those ARM application processors, but that is clearly not going to happen. Both ARM (who have Mali) and Imagination (who have MIPS) know that they have to work with each other. The iPhone doesn’t use ARM’s graphics processor, nor MIPS for the control processor, so it is definitely coopetition: they compete and cooperate at the same time.


Linley Mobile
by Paul McLellan on 04-19-2013 at 12:11 pm

I was at the Linley Mobile Microprocessor conference earlier in the week. Well, just the first day since the second day overlapped with the GSA Silicon Summit. The first surprise was seeing Mike Demler in a suit. It turns out that he has joined the Linley Group as a full-time analyst in the mobile space.


Linley Gwennap started the day with his overview of the whole space. Smartphone forecast is increasing but there is a lot of growth in low-cost (sub $100 BOM) smartphones. By 2017 these will even be cannibalizing basic phones with about half of basic phones switched. So the smartphone market has a sort of “dumbbell” shaped distribution, with the high end (iPhone, Galaxy) part of the market already approaching saturation, and with a lot of future growth at the low end of the market.


Vendor consolidation is continuing as the standalone application processor (AP) model collapses. Although high end designs like iPhone have a separate application processor, Apple designs it themselves and so they are not a customer for merchant AP chips. The merchant market is almost entirely integrated baseband (BB) and perhaps more (wireless, GPS etc).

TI and Freescale exited the smartphone and tablet markets. ST-Ericsson finally collapsed, unable to survive the loss of Nokia to Qualcomm. Mediatek shipped more than 100 million AP+BB chips. Spreadtrum released their first integrated AP+BB. In a break with this trend, Intel got some smartphone AP wins (without integrated BB). They have not yet announced any roadmap for AP+BB (which they could do using the Infineon wireless technology they acquired a few years ago).

The two big winners at present seem to be Qualcomm at the high end, replacing TI (and ST) at Nokia and Marvell at RIM. And at the low end, Mediatek is gaining market share fast.

While the high end top-selling phones like iPhone and Galaxy use a best-of-breed approach, picking and choosing vendors for various components, most others use a single vendor reference design. Obviously this reduces R&D cost for lower-volume models but only vendors with a complete portfolio can offer a complete reference design.

Vendor     AP  WCDMA  LTE       WiFi  GPS  NFC
Qualcomm   Y   Y      Y         Y     Y    Sampling
Marvell    Y   Y      In Qual   Y     Y    Y
Broadcom   Y   Y      Sampling  Y     Y    Y
Mediatek   Y   Y      Licensed  Y     Y    N
nVidia     Y   Y      AT&T      N     N    N
Intel      Y   Y      Sampling  N     N    N


Moore, or More Than Moore?

Moore, or More Than Moore?
by Paul McLellan on 04-19-2013 at 12:05 pm

Yesterday was the 2013 GSA Silicon Summit, which was largely focused on contrasting which advances in delivering systems will depend on marching down the ladder of process nodes, and which will depend on innovations in packaging technology. So essentially contrasting Moore’s Law with what has come to be known as More Than Moore: 2.5D interposer-based designs using TSVs and other innovations.

The first panel was focused on communication. The second on Internet of Things (IoT). Finally, the third was on integration challenges.

The industry is preparing for a 1000X increase in traffic (for example, data volumes doubled between 2010 and 2011). But there are major challenges. Moore’s Law is slowing down, and cost reduction (per transistor) with each process node is dubious or not happening. On air interfaces we are now close to the Shannon limit of information/Hz. Integrating RF onto CMOS (especially with FinFET) is both an opportunity and a challenge. Advances in packaging technology, especially those that allow a mix of die in the same package, offer alternative ways of assembling systems, if the cost can be brought under control.

There seemed to be general acceptance that 20nm is not going to be significantly cheaper than 28nm, which has a few implications:

  • 28nm will be a very long lived node
  • there may be opportunities for innovation at 28nm such as fully-depleted options to get many of the advantages (especially low power) of moving to 20nm without the additional cost (due to double patterning in particular)
  • cost-sensitive designs will not go to 20nm unless the volumes are enormous but premium products that can take advantage of the extra gates and lower power will eat the cost
  • old nodes such as 0.13um and 0.18um will continue to be important, and in fact these are both currently growth nodes
  • the cost of moving to 20nm is not just the wafer cost but the development cost, so it needs $Bs of revenue to justify
  • IP availability may be as important as process availability when moving to a new node; SoC design groups cannot afford to design all their own SerDes, PHYs, etc.
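The economics behind these bullets can be made concrete with a toy cost-per-transistor model. All of the numbers below are purely hypothetical (none come from the summit); the point is only to show the mechanism: double patterning raises wafer cost and hurts early yield faster than the density gain shrinks the die.

```python
# Toy cost-per-transistor model with hypothetical numbers, illustrating why
# 20nm may not be cheaper per transistor than 28nm despite higher density.

def cost_per_mtransistor(wafer_cost, die_per_wafer, yield_fraction, mtrans_per_die):
    """Cost in dollars per million transistors for one process node."""
    good_die = die_per_wafer * yield_fraction
    return wafer_cost / (good_die * mtrans_per_die)

# 28nm (hypothetical): $4,500 wafer, 500 die/wafer, mature 90% yield,
# 300 Mtransistors per die
c28 = cost_per_mtransistor(4500, 500, 0.90, 300)

# 20nm (hypothetical): the same chip shrinks ~1.9x (so ~950 die/wafer),
# but double patterning pushes the wafer cost to $7,500 and early yield
# down to 60%
c20 = cost_per_mtransistor(7500, 950, 0.60, 300)

print(f"28nm: ${c28:.4f} per Mtransistor")
print(f"20nm: ${c20:.4f} per Mtransistor")
```

With these assumed figures 20nm comes out roughly 30% more expensive per transistor, which is exactly why only premium, high-volume products would absorb the move.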

There seemed to be a general feeling that true 3D won’t happen any time soon, but 2.5D interposer-based designs will play a big role in the next few years. One big challenge is that silicon interposers are expensive, and organic interposers (what some people call 2.1D) may be better. The other big problem is thermal: getting the heat out of the design.

However, another driver for the More Than Moore packaging approach may be that “split chips” become important because analog turns out to be too difficult in FinFET, which has quantized transistor sizes. So analog (and RF) may need to be left outside the main SoC as we move down the process nodes.

Another packaging driver in the mobile market appears to be that RF front ends won’t have integrated filters (on-chip) so they will need to be integrated into the package to get close to the IC.

So Moore and More Than Moore are going to be required in the coming years.


Semiconductor PLM – Needs to be smart for techies
by Pawan Fangaria on 04-18-2013 at 8:15 pm

During my long career in semiconductor and EDA, I have heard, believed, and experienced that this is a knowledge industry swamped with rapid innovation and technology drivers; typical manufacturing product development processes, Gantt charts and the like, do not apply here. The fallback is that most of the time estimations are ad hoc, based on gut feel or expert opinion. And not just scheduling: most processes are run by individual preference; in other words, the whole process is more people-driven than process-driven. Naturally, we see missed targets, re-spins, cost overruns, lost market opportunities, and so on. It is said that the success rate to first silicon is 0%! And we attribute the Product Lifecycle Management (PLM) issues to the high complexity of designs at nanometer scale, high density, mixed analog and digital signals, and so on.

After I read the Kalypso white paper on semiconductor lifecycle management on the Dassault website –
Semiconductor Product Lifecycle Management Industry Adoption, Benefits and The Road Ahead
– my perspective changed. Yes, I believe that if we take PLM as a strategic direction towards improving the product lifecycle in the semiconductor space, many of the issues related to the short window of opportunity, time-to-market, design cost, profitability, etc. can see significant improvement.

So where is the problem? Why is PLM adoption slow in the semiconductor space? As I have experienced myself, there is no common standard packaged PLM tool in this space, a gap amplified by lack of awareness and limited understanding of PLM’s value proposition. It is important that a product be looked at not only from a technical angle, but also from a commercial angle.

[Components of a comprehensive PLM solution for Semiconductor industry]

The white paper describes in great detail the analysis by Kalypso researchers (who interviewed semiconductor industry executives) and proposes a comprehensive PLM method specifically for the semiconductor industry, covering the complete value chain: design, software, data management, supply chain, and so on. A key message here is “Think big, start small, move fast and build incrementally”. By following this principle the PLM strategy can be implemented successfully. Unless it is fully implemented, we will not see major results; often we try to do it all at once and end up with dissatisfaction. If implemented gradually and completely, I am sure we will start seeing results project after project.

With a strategic PLM program in place, companies are seeing time-to-market and time-to-profit condensed by 5 to 30%, meaning more revenue due to improved market penetration. It also frees up R&D resources to work on other new product development earlier in the cycle. PLM also helps in commercializing and launching products globally and concurrently. A summary of PLM benefits:

PLM provides a single comprehensive repository of complete data about any project which includes overall value chain. Once implemented this provides great ease in re-use, process improvement and implementation of subsequent projects.

Some of the early adopters have seen the benefits of PLM in semiconductor space. A long term strategy needs to be built starting with a high impact business problem and then building on it to realize the full potential of the system for the overall business.


Using Android Multimedia Framework drastically reduces power consumption in mobile
by Eric Esteve on 04-18-2013 at 8:05 pm

The multiplication of chips capable of running multimedia processing (sound, image, or video) in a mobile device, smartphone or media tablet, such as the application processor (AP), baseband (BB), codec, or companion chip, each embedding one or more processors, can be seen as a good opportunity to simplify the device architecture… However, running a multimedia task on a processor not optimized for that task results in higher power and limits the device’s processing capacity, for example:

  • Running audio/voice tasks on CPU, instead of the DSP
  • Running imaging and vision tasks on GPU, instead of the imaging platform

This is especially true in the presence of an operating system that is unaware of the multimedia DSPs available in the system. CEVA has developed the Android Multimedia Framework (AMF) to solve this system integration problem.

AMF is a system-level software solution that allows offloading of multimedia tasks from the CPU/GPU to the most efficient application-specific DSP platforms. When running Android, you need either to develop such a solution yourself or to use a ready-made framework that lets you exploit deeply embedded programmable hardware engines and the software modules optimized for them. Thanks to its OS-agnostic standard API, CEVA’s AMF should comply with any Android-endorsed mechanism for multimedia offloading (e.g. KLP).

In the picture above, we see how the standard OpenMAX API is used (AMF complies with the current Android 4.x versions), and how the DSP, hardware drivers, and Power Scaling Unit resources are accessed through a Host Link Driver located in the CPU subsystem. The benefits and functions enabled by AMF include:

  • Multimedia tasks are abstracted from the CPU and are physically running on the DSP. Furthermore, tasks can be combined (“tunneled”) onto the DSP – saving data transfer, memory bandwidth and cycles overhead on the CPU
  • Scalability – developers can utilize multiple DSPs in the system through AMF, e.g. multiple CEVA-TeakLite-4 DSPs, or CEVA-TeakLite-4 for audio/voice and CEVA-MM3101 for imaging/vision tasks
  • Utilization of the PSU (Power Scaling Unit) available for CEVA DSPs to significantly lower power consumption further, when running multimedia tasks
  • Easy activation of CEVA-CV computer vision (CV) software library for the development of vision-enabled applications targeting mobile, home, PC and automotive
  • Support for future standards such as OpenVX, a hardware acceleration API for computer vision
  • Automatic tile management for multimedia tasks which includes managing memory transfers and organization into DSP memory for efficient processing
  • An optional Real-Time Operating System (RTOS) is offered for the DSP

The benefit of using AMF to run the API on the CPU and the software on the DSP, instead of running both on the CPU, is crystal clear: the latter drains CPU resources and, worse, is not power efficient at all. Power inefficiency is not just a design issue or a fault, it’s a real crime when the system has been designed for battery-powered mobile applications! Just think of Intel, almost unable to sell application processors based on their own CPUs, just because these are too power hungry for the emerging and exploding smartphone and media tablet markets. On the right side of the picture above we can see an AMF-based architecture where the API runs on the CPU, within the Stagefright framework, and the software runs on the DSP, offloading the CPU for other tasks and drastically reducing power consumption.
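The power argument can be illustrated with a toy back-of-the-envelope model. The power figures below are hypothetical (they are not CEVA or Android data); the mechanism is what matters: a dedicated DSP does the same multimedia work at far lower power, and the idle CPU can drop into a low-power state.

```python
# Toy model of CPU-vs-DSP offload energy with hypothetical power figures.

# Hypothetical power figures in milliwatts (not vendor data):
CPU_ACTIVE_MW = 400      # CPU running an audio decode entirely in software
CPU_IDLE_MW = 20         # CPU mostly asleep while the DSP does the work
DSP_ACTIVE_MW = 30       # audio DSP running the same decode

def energy_mj(power_mw, seconds):
    """Energy in millijoules for a task at constant power."""
    return power_mw * seconds

task_seconds = 60.0  # one minute of audio playback

# All on the CPU, versus offloaded to the DSP with the CPU idling:
on_cpu = energy_mj(CPU_ACTIVE_MW, task_seconds)
offloaded = energy_mj(DSP_ACTIVE_MW, task_seconds) + energy_mj(CPU_IDLE_MW, task_seconds)

saving = 1 - offloaded / on_cpu
print(f"on CPU:    {on_cpu:.0f} mJ")
print(f"offloaded: {offloaded:.0f} mJ ({saving:.1%} saving)")
```

Even with these rough numbers the offloaded path uses an order of magnitude less energy, which is why a framework that makes the DSPs visible to the OS matters so much on battery-powered devices.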

Eric Esteve from IPNEST



The Nokia Diet: shedding pounds and adding margin
by Don Dingee on 04-18-2013 at 12:40 pm

I was trying to find the comment one of my counterparts here made eight months ago, that given the cash burn rate Nokia would be out of business in eight months, so I could gloat a bit. Bzzzzzzt – Nokia’s cash position actually increased in 1Q13. The numbers tell a painful story about a company on a difficult diet to survive, one that analysts and pundits don’t like much.



Prediction is very difficult… is it a reason for writing down everything that springs to mind?
by Eric Esteve on 04-18-2013 at 10:20 am

Smartphone and media tablet markets are exploding, generating huge profits for the likes of Apple, Samsung, and Qualcomm, and we don’t yet see when the growth will stop. That’s a given. But is it a reason for an analyst to write down everything that springs to mind, including obvious insanity? Let’s have a look at the diagram below and try to figure out if it makes any sense…

This figure is supposed to demonstrate that worldwide shipments of integrated chip solutions for mobile devices will grow from almost nothing to 46% of the market from 2012 to 2018 (integrated AP/BB/WC, in brown). In fact, if you consider that such integrated solutions will serve the low-end side of the smartphone market, this trend makes sense. Let’s now look at the way the figure is built, more specifically at the Y scale and the multi-color bar heights:

  • To describe the market adoption for one solution (here Integrated AP/BB/WC) in competition with other solutions (like Standalone AP, Integrated AP/BB etc.), an analyst will tend to use a percentage based graphic; is it the case here?

    • No, because the Y scale clearly indicates (Millions)
    • No, because the seven bars are not of equal height
  • So, in this case, the bar height represents the shipment value in millions of units, which is a very acceptable representation…
  • Except that, in this case, the analyst’s forecast for “Total Mobile Device Integrated Platform” grows by only… 10 or 15% from 2012 to 2018! Here I really have a problem trusting this analyst. Why? Just see below!

I started building this forecast back in 2010, initially to help calculate the MIPI IC forecast (see the other graphic below), and I keep updating it using “actual” values, the smartphone shipments effectively measured each quarter. To be honest, I must say that I have had to revise this forecast many times, because (like many of us) I had under-estimated the incredible growth of smartphone shipments, in the 40% range year after year. Will this growth halt during the next couple of years, as regions like Europe, Japan, and North America reach around 60% smartphone penetration? I don’t think so at all, because the reservoir of new adopters (people buying their first smartphone) is simply huge when you count just China, India, and Brazil. These new adopters will buy enough smartphones that the market will double within the next five years; at least, that is my prediction. That’s why I don’t understand the prediction made by the above-mentioned analyst. But I am open to any suggestion, if anybody can figure out why the “Total Mobile Device Integrated Platform” graphic forecasts such minimal growth…
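The arithmetic behind this disagreement is worth a quick sanity check (my own calculation, not the analyst’s): doubling in five years corresponds to a compound annual growth rate of only about 15%, far below the ~40% annual growth smartphones have shown so far.

```python
# Sanity check on the growth arithmetic: what CAGR doubles a market in
# 5 years, and what does 5 years of 40%/yr growth actually deliver?

doubling_cagr = 2 ** (1 / 5) - 1     # CAGR needed to double in 5 years (~14.9%)
five_year_at_40pct = 1.40 ** 5       # total growth factor at 40%/yr (~5.4x)

print(f"CAGR to double in 5 years: {doubling_cagr:.1%}")
print(f"5 years at 40%/yr:         {five_year_at_40pct:.2f}x")
```

So even the “conservative” doubling scenario implies the total-platform bars should grow far more than the 10-15% the analyst’s chart shows, which is the inconsistency flagged above.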

Eric Esteve from IPNEST



Kilopass Sidense Legal Battle
by Eric Esteve on 04-17-2013 at 4:12 am

The decision by the United States Court of Appeals for the Federal Circuit, affirming the District Court for the Northern District of California’s summary judgment of non-infringement of Kilopass’ patent claims and its dismissal, with prejudice, of all remaining claims against Sidense, is certainly good news for IP and EDA vendors playing a fair sales and marketing game. Assume you have not infringed anybody else’s rights but have developed an innovative product (an IP function or EDA tool), and have been clever enough in marketing it to generate numerous design wins, so your sales revenue starts growing fast, leading your direct competitor to prefer the legal field to fair market competition… We have seen many legal cases in recent years in the EDA and IP ecosystem, and I am almost sure that some of them were initiated to compensate for a marketing weakness.

The “typical” case is as follows: an IP (or EDA) vendor used to hold the highest market share in a specific segment, enjoying good sales revenue because it was the first to position itself there. Being the market leader, the company neglected to develop new products. Suddenly, a newcomer jumps into the segment with a really innovative product. If the product is really good, and the company builds a top-class marketing and sales organization, sales eventually rocket (even if this process takes at least 3 to 5 years), and the historic vendor sees its market share shrink and starts losing its best customers. Is it too late for this vendor to come back with a really innovative product? Probably not, but it looks much easier to initiate a legal case: management just needs to select and pay a law firm instead of investing in R&D, searching for a design guru, rebuilding a team, and developing and launching a competitive product…

Do you know what? I am sure this biased strategy works from time to time, since neither a judge nor a jury is expected to have a PhD or to be able to make a decision on a topic they are far from understanding!

I don’t know the NVM IP market history well enough to judge whether the legal case between Sidense and Kilopass was similar to the typical case described above. What I do know is that Kilopass was well established in this market segment, that Sidense had been enjoying fast-growing sales for the last couple of years when the case was initiated by Kilopass, and that Kilopass has now lost this case twice: “in summary judgment, and now on appeal, United States courts have agreed with Sidense that Kilopass’ lawsuit against it was entirely without merit.”

We can easily understand why litigation counsel for Sidense, Roger Cook, partner at Kilpatrick, Townsend and Stockton, was gratified by the win.

“Judge Illston ruled in Sidense’s favor on the patent infringement claims for four separate reasons, each one of which was soundly based and by itself sufficient to defeat Kilopass’ infringement claims. Outcome of the appeal was never in doubt. In more than 40 years of handling patent infringement cases, this one stands out. Sidense is seeking and richly deserves recovery of its attorney fees from Kilopass on the basis of bad faith and baseless patent litigation. Companies who engage in this type of anti-competitive litigation need to pay the price.”

Eric Esteve
