
Linley Mobile

by Paul McLellan on 04-19-2013 at 12:11 pm

I was at the Linley Mobile Microprocessor conference earlier in the week. Well, just the first day since the second day overlapped with the GSA Silicon Summit. The first surprise was seeing Mike Demler in a suit. It turns out that he has joined the Linley Group as a full-time analyst in the mobile space.


Linley Gwennap started the day with his overview of the whole space. Smartphone forecast is increasing but there is a lot of growth in low-cost (sub $100 BOM) smartphones. By 2017 these will even be cannibalizing basic phones with about half of basic phones switched. So the smartphone market has a sort of “dumbbell” shaped distribution, with the high end (iPhone, Galaxy) part of the market already approaching saturation, and with a lot of future growth at the low end of the market.


Vendor consolidation is continuing as the standalone application processor (AP) model collapses. Although high end designs like iPhone have a separate application processor, Apple designs it themselves and so they are not a customer for merchant AP chips. The merchant market is almost entirely integrated baseband (BB) and perhaps more (wireless, GPS etc).

TI and Freescale exited the smartphone and tablet markets. ST Ericsson finally collapsed, unable to survive the loss of Nokia to Qualcomm. Mediatek shipped more than 100 million AP+BB chips. Spreadtrum released their first integrated AP+BB. In a break with trend, Intel got some smartphone AP wins (without integrated BB). They have not yet announced any roadmap for AP+BB (which they can do using the Infineon Wireless technology they acquired a few years ago).

The two big winners at present seem to be Qualcomm at the high end, replacing TI (and ST) at Nokia and Marvell at RIM. And at the low end, Mediatek is gaining market share fast.

While the high end top-selling phones like iPhone and Galaxy use a best-of-breed approach, picking and choosing vendors for various components, most others use a single vendor reference design. Obviously this reduces R&D cost for lower-volume models but only vendors with a complete portfolio can offer a complete reference design.

| Vendor   | AP | WCDMA | LTE      | WiFi | GPS | NFC      |
|----------|----|-------|----------|------|-----|----------|
| Qualcomm | Y  | Y     | Y        | Y    | Y   | Sampling |
| Marvell  | Y  | Y     | In Qual  | Y    | Y   | Y        |
| Broadcom | Y  | Y     | Sampling | Y    | Y   | Y        |
| Mediatek | Y  | Y     | Licensed | Y    | Y   | N        |
| nVidia   | Y  | Y     | AT&T     | N    | N   | N        |
| Intel    | Y  | Y     | Sampling | N    | N   | N        |


Moore, or More Than Moore?

by Paul McLellan on 04-19-2013 at 12:05 pm

Yesterday was the 2013 GSA Silicon Summit, which was largely focused on contrasting which advances in delivering systems will depend on marching down the ladder of process nodes, and which will depend on innovations in packaging technology. So essentially contrasting Moore’s Law with what has come to be known as More Than Moore: 2.5D interposer-based designs using TSVs and other innovations.

The first panel was focused on communication. The second on Internet of Things (IoT). Finally, the third was on integration challenges.

The industry is preparing for a 1000X increase in traffic (for example, data volumes doubled between 2010 and 2011). But there are major challenges. Moore’s Law is slowing down, and cost reduction (per transistor) with each process node is dubious or not happening. On air interfaces we are now close to the Shannon limit of information per Hz. Integrating RF onto CMOS (especially in FinFET) is both an opportunity and a challenge. Advances in packaging technology, especially those that allow a mix of die in the same package, offer alternative ways of assembling systems if the cost can be brought under control.
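
On the Shannon-limit point: the capacity bound per unit of bandwidth depends only on the signal-to-noise ratio, which is easy to check numerically. A quick sketch (the 20 dB operating point is just an illustrative value, not from the panel):

```python
import math

def shannon_capacity_bps_per_hz(snr_db: float) -> float:
    """Shannon limit on spectral efficiency: C/B = log2(1 + SNR) bits/s/Hz."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

# At 20 dB SNR the hard ceiling is about 6.66 bits/s/Hz. Once an air
# interface operates near this bound, extra throughput has to come from
# more spectrum or more antennas, not cleverer modulation.
print(f"{shannon_capacity_bps_per_hz(20):.2f} bits/s/Hz")
```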

There seemed to be general acceptance that 20nm is not going to be significantly cheaper than 28nm which has a few implications:

  • 28nm will be a very long lived node
  • there may be opportunities for innovation at 28nm such as fully-depleted options to get many of the advantages (especially low power) of moving to 20nm without the additional cost (due to double patterning in particular)
  • cost-sensitive designs will not go to 20nm unless the volumes are enormous but premium products that can take advantage of the extra gates and lower power will eat the cost
  • old nodes such as 0.13um and 0.18um will continue to be important, and in fact these are both currently growth nodes
  • the cost of moving to 20nm is not just the wafer cost but the development cost, so it needs $Bs of revenue to justify
  • IP availability may be as important as process availability when moving to a new node. SoC design groups cannot afford to design all their own SerDes, PHYs, etc.
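
The last two bullets can be made concrete with a back-of-envelope amortization. All figures below are hypothetical, chosen purely to show the shape of the trade-off, not actual 20nm costs:

```python
def cost_per_chip(nre_dollars: float, wafer_cost: float,
                  chips_per_wafer: int, volume: int) -> float:
    """Total per-chip cost = amortized development (NRE) cost + per-unit wafer cost."""
    return nre_dollars / volume + wafer_cost / chips_per_wafer

# Hypothetical numbers: $150M development cost for a 20nm SoC,
# $6,000 wafers yielding 500 good die each.
low_volume = cost_per_chip(150e6, 6000, 500, 5_000_000)     # 5M units
high_volume = cost_per_chip(150e6, 6000, 500, 100_000_000)  # 100M units

# At low volume the NRE dominates; only enormous volumes (billions in
# revenue) push the per-chip cost back toward the raw wafer cost.
print(f"5M units:   ${low_volume:.2f}/chip")    # $42.00
print(f"100M units: ${high_volume:.2f}/chip")   # $13.50
```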

There seemed to be a general feeling that true 3D won’t happen any time soon, but 2.5D interposer-based designs will play a big role in the next few years. One big challenge is that silicon interposers are expensive, so organic interposers (what some people call 2.1D) may be a better option. The other big problem is thermal: getting the heat out of the design.

However, another driver for the More Than Moore packaging approach may be that “split chips” become important because analog turns out to be too difficult in FinFET, which has quantized transistor sizes. So analog (and RF) may need to be left outside the main SoC as we move down the process nodes.

Another packaging driver in the mobile market appears to be that RF front ends won’t have integrated filters (on-chip) so they will need to be integrated into the package to get close to the IC.

So Moore and More Than Moore are going to be required in the coming years.


Semiconductor PLM – Needs to be smart for techies

by Pawan Fangaria on 04-18-2013 at 8:15 pm

During my long career in semiconductor and EDA, I have heard, believed and experienced that this is a knowledge industry swamped with rapid innovation and technology drivers; typical manufacturing product development processes like Gantt charts do not apply here. The result is that most of the time, estimates are ad hoc, based on gut feel or expert opinion. And it is not only schedules: most processes are run by individual preference; in other words, the whole flow is more people-driven than process-driven. Naturally, we see missed targets, re-spins, cost overruns, lost market opportunities and so on. It is said that the success rate to first silicon is 0%! And we attribute the Product Lifecycle Management (PLM) issues to the high complexity of designs at nanometer scale, high density, mixed analog and digital signals and so on.

After I read the Kalypso white paper on Semiconductor Lifecycle Management, available on the Dassault website as
Semiconductor Product Lifecycle Management Industry Adoption, Benefits and The Road Ahead,
my perspective changed. Yes, I believe that if we take PLM as a strategic direction for improving the product lifecycle in the semiconductor space, many of the issues around the short window of opportunity, time-to-market, design cost, profitability, etc. can see significant improvement.

So where is the problem? Why is PLM adoption slow in the semiconductor space? As I have experienced myself, there is no common, standard, packaged PLM tool in this space, a gap amplified by lack of awareness and limited understanding of PLM’s value proposition. It is important that a product be looked at not only from a technical angle, but also from a commercial one.

[Components of a comprehensive PLM solution for Semiconductor industry]

The white paper describes in great detail the analysis by Kalypso researchers (who interviewed semiconductor industry executives) and proposes a comprehensive PLM method specifically for the semiconductor industry, covering the complete value chain: design, software, data management, supply chain and so on. A key message here is “Think big, start small, move fast and build incrementally”. Following this principle, a PLM strategy can be implemented successfully. Unless it is fully implemented, we do not see major results; often we try to do it all at once and end up dissatisfied. If implemented gradually and completely, I am sure we will start seeing results project after project.

With a strategic PLM program in place, companies are seeing time-to-market and time-to-profit condensed by 5 to 30%, meaning more revenue due to improved market penetration. It also frees up R&D resources to work on other new product development early in the cycle. PLM also helps in commercializing and launching products globally and concurrently. A summary of PLM benefits:

PLM provides a single comprehensive repository of complete data about any project which includes overall value chain. Once implemented this provides great ease in re-use, process improvement and implementation of subsequent projects.

Some of the early adopters have seen the benefits of PLM in semiconductor space. A long term strategy needs to be built starting with a high impact business problem and then building on it to realize the full potential of the system for the overall business.


Using Android Multimedia Framework drastically reduces power consumption in mobile

by Eric Esteve on 04-18-2013 at 8:05 pm

The multiplication of chips capable of running multimedia processing (sound, image or video) in a mobile device such as a smartphone or media tablet, like the Application Processor (AP), Baseband (BB), codec or companion chip, each embedding one or more processors, can be seen as a good opportunity to simplify the device architecture. However, running multimedia tasks on a processor that is not optimized for the task results in higher power consumption and limits the device’s processing capacity, for example:

  • Running audio/voice tasks on CPU, instead of the DSP
  • Running imaging and vision tasks on GPU, instead of the imaging platform

This is especially true in the presence of an operating system that is unaware of the multimedia DSPs available in the system. CEVA has developed the Android Multimedia Framework (AMF) to solve this system integration problem.

AMF is a system-level software solution that allows offloading of multimedia tasks from the CPU/GPU to the most efficient application-specific DSP platforms. When running Android, you either need to develop such a solution yourself, or benefit from a ready-to-use framework that lets you use deeply embedded programmable hardware engines and the software modules optimized for them. Thanks to its OS-agnostic standard API, CEVA’s AMF should comply with any Android-endorsed mechanism for multimedia offloading (e.g. KLP).

In the picture above, we see how to use the standard OpenMAX API (AMF complies with the current Android 4.x versions) and how to access the DSP, hardware drivers and Power Scaling Unit resources through a Host Link Driver located in the CPU subsystem. The benefits and functions enabled by AMF include:

  • Multimedia tasks are abstracted from the CPU and are physically running on the DSP. Furthermore, tasks can be combined (“tunneled”) onto the DSP – saving data transfer, memory bandwidth and cycles overhead on the CPU
  • Scalability – developers can utilize multiple DSPs in the system through AMF, e.g. multiple CEVA-TeakLite-4 DSPs, or CEVA-TeakLite-4 for audio/voice and CEVA-MM3101 for imaging/vision tasks
  • Utilization of the PSU (Power Scaling Unit) available for CEVA DSPs to significantly lower power consumption further, when running multimedia tasks
  • Easy activation of CEVA-CV computer vision (CV) software library for the development of vision-enabled applications targeting mobile, home, PC and automotive
  • Support for future standards such as OpenVX, a hardware acceleration API for computer vision
  • Automatic tile management for multimedia tasks which includes managing memory transfers and organization into DSP memory for efficient processing
  • An optional Real-Time Operating System (RTOS) is offered for the DSP
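
As a rough illustration of why this offloading matters for battery life, here is a toy energy model. All power numbers below are hypothetical, not CEVA or vendor data; the point is only that a DSP tuned for a multimedia task burns far fewer milliwatts per MHz than a general-purpose CPU, so moving most of the task’s cycles onto the DSP cuts energy even when the total cycle count is unchanged:

```python
def task_energy_mj(mcycles: float, frac_on_cpu: float,
                   cpu_mw_per_mhz: float = 0.5,
                   dsp_mw_per_mhz: float = 0.1) -> float:
    """Energy (mJ) for a task split between CPU and an audio/vision DSP.

    X mW/MHz is numerically X mJ per million cycles, so energy is just
    cycles on each engine times that engine's efficiency figure.
    Efficiency numbers here are illustrative placeholders.
    """
    cpu_mcycles = mcycles * frac_on_cpu
    dsp_mcycles = mcycles * (1 - frac_on_cpu)
    return cpu_mcycles * cpu_mw_per_mhz + dsp_mcycles * dsp_mw_per_mhz

all_on_cpu = task_energy_mj(1000, 1.0)   # entire task on the CPU
offloaded  = task_energy_mj(1000, 0.05)  # only the thin API layer stays on the CPU
print(all_on_cpu, offloaded)  # 500.0 vs 120.0 mJ in this toy model
```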

The benefit of using AMF to run the API on the CPU and the software on the DSP, instead of running both on the CPU, is crystal clear: the latter approach drains CPU resources and, worse, is absolutely not power efficient. Power inefficiency is not just a design issue or a fault; it is a real crime when the system has been designed for battery-powered mobile applications! Just think of Intel, almost unable to sell application processors based on their own CPU, simply because these are too power hungry for the emerging and exploding smartphone and media tablet markets. On the right side of the picture above, we can see an AMF-based architecture where the API runs on the CPU, within the Stagefright framework, and the software runs on the DSP, offloading the CPU for other tasks and drastically reducing power consumption.

Eric Esteve from IPNEST



The Nokia Diet: shedding pounds and adding margin

by Don Dingee on 04-18-2013 at 12:40 pm

I was trying to find the comment one of my counterparts here made eight months ago that given the cash burn rate, Nokia would be out of business in eight months, so I could gloat a bit. Bzzzzzzt – cash position actually increased in 1Q13. The numbers tell a painful story about a company on a difficult diet to survive, one that analysts and pundits don’t like much.



Prediction is very difficult… is it a reason for writing down everything that springs to mind?

by Eric Esteve on 04-18-2013 at 10:20 am

Smartphone and media tablet markets are exploding, generating huge profits for the likes of Apple, Samsung and Qualcomm, and we don’t yet see when the growth will stop. That’s a point. But is it a reason for analysts to write down everything that springs to mind, including obvious insanity? Let’s have a look at the diagram below and try to figure out if it makes any sense…

This figure is supposed to demonstrate that worldwide shipments of integrated chip solutions for mobile devices will grow from almost 0% to 46% of the market between 2012 and 2018 (integrated AP/BB/WC in brown). In fact, if you consider that such integrated solutions will serve the low-end side of the smartphone market, this trend makes sense. Let’s now look at the way the figure is built, more specifically at the Y scale and the multi-color bar heights:

  • To describe the market adoption of one solution (here integrated AP/BB/WC) in competition with other solutions (like standalone AP, integrated AP/BB, etc.), an analyst would tend to use a percentage-based graphic; is that the case here?

    • No, because the Y scale clearly indicates (Millions)
    • No, because the seven bars are not of equal height
  • So, in this case, the bar height represents the shipment volume in millions, which is a perfectly acceptable representation…
  • Except that, in this case, the analyst’s forecast for “Total Mobile Device Integrated Platform” grows by only… 10 or 15% from 2012 to 2018! Here I really have a problem trusting this analyst. Why? Just see below!

I started building this forecast back in 2010, initially to help calculate a MIPI IC forecast (see the other graphic below), and I keep updating it with “actual” values, the smartphone shipments effectively measured each quarter. To be honest, I must say that I have had to revise this forecast many times because, like many of us, I had under-estimated the incredible growth of smartphone shipments, in the 40% range year after year. Will this growth halt during the next couple of years, as regions like Europe, Japan and North America reach around 60% smartphone penetration? I don’t think so at all, because the reservoir of new adopters (people buying their first smartphone) is simply huge when you count China, India and Brazil alone. These new adopters will buy enough smartphones for the market to double within the next five years; at least, that is my prediction. That’s why I don’t understand the prediction made by the above-mentioned analyst. But I am open to any suggestion, if anybody can figure out why the “Total Mobile Device Integrated Platform” graphic forecasts such minimal growth…
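
The arithmetic behind that comparison is easy to check: doubling over five years corresponds to a far lower compound annual growth rate than the ~40% per year the market has actually been showing. A quick sketch:

```python
def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by growing `multiple`x over `years` years."""
    return multiple ** (1 / years) - 1

# Doubling in five years needs only ~14.9% growth per year, well below
# the ~40%/year smartphone shipment growth observed in recent years.
print(f"double in 5 years: {cagr(2, 5):.1%}/year")

# Conversely, 40%/year sustained for five years multiplies shipments ~5.4x.
print(f"40%/year for 5 years: {1.4 ** 5:.1f}x")
```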

Eric Esteve from IPNEST



Kilopass Sidense Legal Battle

by Eric Esteve on 04-17-2013 at 4:12 am

The decision by the United States Court of Appeals for the Federal Circuit, affirming the District Court for the Northern District of California’s summary judgment of non-infringement on Kilopass’ patent claims and its dismissal, with prejudice, of all remaining claims against Sidense, is certainly good news for IP and EDA vendors playing a fair sales and marketing game in this field. Assume you have not infringed anybody else’s rights, but have developed an innovative product (IP function or EDA tool), been clever enough in marketing it, and generated numerous design wins, so your sales revenue starts growing fast, leading your direct competitor to prefer the legal field to fair market competition… We have seen many legal cases in the EDA and IP ecosystem in recent years, and I am almost sure that some of them were initiated to compensate for a marketing weakness.

The “typical” case is as follows: an IP vendor (or an EDA vendor) used to hold the highest market share in a specific segment, enjoying good sales revenue because the company was the first to position itself there. Being the market leader, the company neglected to develop new products. Suddenly, a newcomer jumps into the segment, bringing a really innovative product. If the product is really good, and the company is able to build a top-class marketing and sales organization, sales eventually rocket (even if this process takes at least 3 to 5 years), and the historic vendor sees his market share shrinking and starts losing his best customers. Is it too late for this vendor to come back with a really innovative product? Probably not, but it looks much easier to initiate a legal case: management just needs to select and pay a law firm instead of investing in R&D, searching for a design guru, rebuilding a team, and developing and launching a competitive product…

Do you know what? I am sure this biased strategy works from time to time, as neither a judge nor a jury is expected to have a PhD or to be able to make a decision on a topic they are far from understanding!

I don’t know the NVM IP market’s history well enough to judge whether the legal case between Sidense and Kilopass was similar to the typical case described above. What I do know is that Kilopass was well established in this market segment, that Sidense had been enjoying fast-growing sales for the last couple of years when the case was initiated by Kilopass, and that Kilopass has now lost this case twice: “in summary judgment, and now on appeal, United States courts have agreed with Sidense that Kilopass’ lawsuit against it was entirely without merit.”

We can easily understand why litigation counsel for Sidense, Roger Cook, partner at Kilpatrick Townsend & Stockton, was gratified by the win.

“Judge Illston ruled in Sidense’s favor on the patent infringement claims for four separate reasons, each one of which was soundly based and by itself sufficient to defeat Kilopass’ infringement claims. Outcome of the appeal was never in doubt. In more than 40 years of handling patent infringement cases, this one stands out. Sidense is seeking and richly deserves recovery of its attorney fees from Kilopass on the basis of bad faith and baseless patent litigation. Companies who engage in this type of anti-competitive litigation need to pay the price.”

Eric Esteve



Denali+Tensilica+Cosmic = Cadence

by Paul McLellan on 04-17-2013 at 1:00 am

I won’t be able to attend Chris Rowen’s presentation here at the GlobalPress Electronic Summit since I’m going to the first day of the Linley Mobile Microprocessor conference. In fact I wonder if Chris himself will make it since he was running in the Boston marathon on Monday. He finished about 10 minutes before the explosions but was close enough to hear them and see the smoke.

Anyway, I have an advance copy of his presentation, which looks at a couple of things. One is Tensilica’s recently announced video processor, which I already covered here. So I’m not going to cover that again.

Of course, Tensilica is in the process of being acquired by Cadence, and so the other topic is how Tensilica fits in, or technically will fit, since I don’t believe the acquisition has closed. Cadence is also in the process of acquiring Cosmic Circuits, which may take some time to close since it is an Indian company.


Along with other IP Cadence already has, in particular resulting from acquiring the Denali product line and its expertise, Cadence has a much more rounded out IP offering than was the case as early as the beginning of this year. Martin Lund seems to have a fat wallet and likes going shopping.

The combination of the earlier Denali memory interfaces along with additional interfaces from Cosmic Circuits gives a rich portfolio of connectivity. Then the Tensilica offering, along with other partners such as ARM, gives a range of different specialized processors for particularly attractive markets. ARM and Tensilica are complementary, in the sense that the ARM processor is the control processor for a design and then one or more Tensilica processors can be used to offload, for example, video compression or hi-fi audio.


Having talked to Martin recently, Cadence has a sort of factory view of IP. If you just go out and acquire random IP based on price, there are basically limited quality standards and low expectation that everything will work well together. If IP is basically completely pre-characterized with no opportunity to make incremental change, it is hard to differentiate and the IP is not well matched to the design (for example, there may be a lot of silicon used up that implements features that are not used in the design). By adding a service component, the IP is customized to what is required without compromising quality or performance.

Where Tensilica fits in is accelerating time-to-market with silicon-proven customizable design IP, optimized for various high-volume applications such as audio, video, and cell-phone LTE modems. It is very complementary to Cadence’s other IP in connectivity, AMS design, VIP and so on. For the key market segments where Tensilica has customizable application-specific dataplane processors, the acquisition will really strengthen the Cadence IP offering and make it much easier to go seamlessly from architectural definition to tapeout.


FPGAS – The New Single Board Computers?

by Luke Miller on 04-16-2013 at 10:00 pm

I have always felt that FPGAs are the red-headed stepchild of Silicon Valley. Software weenies have hated them; they are mysterious and take too long to route. Even though they can be massively parallel and are the most deterministic piece of silicon you can buy besides a million-dollar ASIC, the GPU steals their glory, for now. Until the System Architect realizes, once again, that they just have to use the FPGA, sigh. But if they could design them out, they would. Evil laugh… Muhahahahah

You are probably thinking what a horrible picture of FPGAs. I know, but it is true, and it is going to change, and in the future really change! The game changer is of course the Zynq SoC from Xilinx. Let’s face it, and you know it: you only used that big honking Intel or Freescale CPU to do BIT, status, control or some out-of-band math when it was tied to an FPGA. Then for some reason the software team always quotes a $1,000,000 NRE for a change of one line of code. I found it a bit amusing when FPGAs were called the offload engine, yeah right. Any time an FPGA meets a CPU, the CPU is the offload and the FPGA is doing the real work. Please, no emails or comments, just agree please.

This Zynq is really a Rack on a Chip, ROC. It is also an almost self-contained single board computer. Take some Elmer’s Glue-All and add some Flash and DDR3 and there you go, maybe a few pieces of macaroni like my kids use for art. My point is that the software guys are going to be programming FPGAs very soon. This will open FPGAs up to these nerds the way GPUs did via CUDA and the like. The next step is learning Vivado HLS. The communities that once disliked FPGAs are the ones that will now really fall in love with them. As for the board vendors, they have much to think about, and perhaps relations with CPU vendors may become a bit stressed, as well as with the RTOS vendors, maybe.

If I could talk with the Xilinx CEO I would have one suggestion for him. The Zynq is indeed a step in the right direction and keeps Xilinx as the FPGA player for this node. But they need to do more to address the GPU threat, and it is real. The advantage of the GPU is that GPUs are in virtually every PC. No one has to buy an evaluation board for $2000. Ever hear of a college hacker with 2k? A whole underground world of nerdom supports GPUs for free. If Xilinx could wheedle its way onto motherboards or into a smartphone offering, Xilinx would gain a whole community that will develop apps, IP and more for free, provided that Xilinx opens the tools up for free. I know that sounds self-serving, and I used the word wheedle, but I can think of many areas where the Zynq would fit into the smartphone or motherboard realm. Sounds fun, does it not? FPGAs are changing for the better, and more change is probably coming soon; embrace it or probably lose your job to some new hire.
