
Cadence Enters the RTL Power Estimation Game
by Bernard Murphy on 12-09-2015 at 12:00 pm

At the Cadence front-end summit last week, Jay Roy presented the Cadence Joules solution for RTL (and gate-level) power estimation. Jay is ex-Apache, so he knows his way around RTL power estimation, which should make Joules a product to watch. Joules connects natively to Palladium for power characterization under realistic software loads, which I’ll cover in a separate blog. Here I want to focus on Joules as a characterization competitor to Apache/Ansys, Atrenta/Synopsys and other products.

Jay’s claim, and I think he’s right today, is that Joules has all the pieces to get high accuracy for RTL power estimation. They have Genus for synthesis and Innovus for implementation, so they can do (somewhat) production-quality fast estimations straight from RTL and know they are (somewhat) going to correlate with the real implementation, and therefore they can get power estimates from RTL simulations which will correlate within ~15% of gate-level estimation. Jay showed a comparison table which indeed supports this assertion.

You may notice I am (somewhat) hedging my support for the attainable level of accuracy. I also know a little about this domain and some of the challenges in RTL estimation. Part of the answer is indeed using the same tools for fast physical synthesis as you use for production implementation. But that’s not the whole answer. Fast physical synthesis is fast because it cuts corners, and that can lead to correlation problems between RTL and gate-level estimates, even if you use the same physical synthesis tools you use for production.

It seems obvious that the way to understand this problem should be a detailed analysis of sources of miscorrelation between RTL and gate-level estimates. But I have yet to see such an analysis from any provider and that’s a problem because it leads to unscientific trial and error approaches to improving correlation, with no deep understanding. Scientific approaches (you know, start with a hypothesis, test against data) would provide a credible basis for knowing how to repeatably improve correlation or, just as important, knowing that perhaps 15% is as low as you can go and you cannot repeatably improve on that. This would be a lot of work, but whoever does this first will be able to claim the laurels of true expertise in this domain.

I don’t think it is necessary to test every conceivable design – that would not be a scientific approach. Useful hypotheses are simple – I’ll offer a couple to get the ball rolling. First, I believe the harder you push performance, the worse the correlation will become. The harder you push, the more buffers have to be upsized; also there are implications for routing in the presence of factors not considered in fast estimation (DFT, detailed routing, signal integrity, …), leading to yet more buffer upsizing, further impacting power. A related but not identical hypothesis is that accuracy will negatively correlate with the number of near-critical paths. As you get into implementation, some of these will become critical, requiring (probably) buffer upsizing; the more of these you have, the more implemented circuit power will deviate from initial estimates. Cadence has a running start with a fully integrated solution which should minimize known systematic sources of error from the estimation tool – they could lead the field with a detailed correlation analysis.
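To make the kind of study I am suggesting more concrete, here is a minimal sketch (mine, not anything Cadence showed; the design names, power numbers, and the pearson_r helper are all invented for illustration) of how one might test the near-critical-path hypothesis against real synthesis and STA data:

```python
# Hypothetical sketch: test whether RTL-vs-gate-level power miscorrelation
# grows with the number of near-critical paths. All figures are invented
# for illustration; a real study would pull them from synthesis/STA reports.

from math import sqrt

# (design, RTL power estimate in mW, gate-level power in mW, near-critical paths)
designs = [
    ("block_a",  412.0,  455.0,  120),
    ("block_b",  198.0,  205.0,   15),
    ("block_c",  870.0, 1010.0,  640),
    ("block_d",  305.0,  322.0,   60),
]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Miscorrelation per design, as percent error of RTL estimate vs gate level
errors = [abs(gate - rtl) / gate * 100.0 for _, rtl, gate, _ in designs]
paths  = [float(ncp) for *_, ncp in designs]

for (name, rtl, gate, ncp), err in zip(designs, errors):
    print(f"{name}: RTL {rtl:.0f} mW, gate {gate:.0f} mW, "
          f"error {err:.1f}%, near-critical paths {ncp}")

print(f"correlation(error, near-critical paths) = {pearson_r(paths, errors):.2f}")
```

A consistently positive coefficient across a broad design set would support the hypothesis; a flat one would point to other dominant sources of miscorrelation.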

None of this is intended to diminish the role Joules can play today. As far as I know, they have the only full in-house flow today for estimation based on implementation-class physical synthesis, so they are likely to be best in class for estimation until Synopsys inevitably releases something similar. And then they will both have a significant edge in accuracy over Apache and Calypto for the foreseeable future.

To learn more about Joules, click HERE.

More articles by Bernard…


New Book: Mobile Unleashed!
by Daniel Nenni on 12-08-2015 at 6:00 pm

This is the origin story of technology super heroes: the creators and founders of ARM, the company that is responsible for the processors found inside 95% of the world’s mobile devices today. This is also the evolution story of how three companies – Apple, Samsung, and Qualcomm – put ARM technology in the hands of billions of people through smartphones, tablets, music players, and more.

It was anything but a straight line from idea to success for ARM. The story starts with the triumph of BBC Micro engineers Steve Furber and Sophie Wilson, who make the audacious decision to design their own microprocessor – and it works the first time. The question becomes, how to sell it? Part I follows ARM as its founders launch their own company, select a new leader, a new strategy, and find themselves partnered with Apple, TI, Nokia, and other companies just as digital technology starts to unleash mobile devices. ARM grows rapidly, even as other semiconductor firms struggle in the dot com meltdown, and establishes itself as a standard for embedded RISC processors.

Apple aficionados will find the opening of Part II of interest, as Steve Jobs returns and changes the direction toward fulfilling consumer dreams. Samsung devotees will see how that firm evolved from its earliest days in consumer electronics and semiconductors through a philosophical shift to innovation. Qualcomm followers will learn much of their history as it plays out from satellite communications to development of a mobile phone standard and emergence as a leading fabless semiconductor company.

If ARM could be summarized in one word, it would be “collaboration.” Throughout this story, from Foreword to Epilogue, efforts to develop an ecosystem are highlighted. Familiar names such as Google, Intel, Mediatek, Microsoft, Motorola, TSMC, and others are interwoven throughout. The evolution of ARM’s first 25 years as a company wraps up with a shift to its next strategy: the Internet of Things, the ultimate connector for people and devices.

Research for this story is extensive, simplifying a complex mobile industry timeline and uncovering critical points where ARM and other companies made fateful and sometimes surprising decisions. Rare photos, summary diagrams and tables, and unique perspectives from insiders add insight to this important telling of technology history.

The foreword by Sir Robin Saxby alone is worth the price of admission, not to mention the picture of Simon Segars as a young engineer when he first joined ARM… 😉 There is also a cameo by Wally Rhines from his TI days.

I truly believe you need to fully understand, as a semiconductor professional, how we got to where we are today to better understand where we are going tomorrow and that is what this book is all about. On a personal note, writing books like this is a lot like giving birth (although my wife may disagree). It was nine months of hard work but let me tell you one thing, Don Dingee made this whole process a lot easier. Don is the most dedicated, thorough, and hardworking researcher I have ever worked with, absolutely!


The Twists and Turns of Xilinx vs Altera!
by Daniel Nenni on 12-08-2015 at 12:00 pm

The battle between Xilinx and Altera continues to be one of the more interesting stories to cover. It really is the semiconductor version of a reality TV show. In the beginning it was two fabless companies partnered with rival foundries going head-to-head controlling a single market that touches a variety of industries.

Then things got interesting when Xilinx left UMC to share TSMC with Altera taking the foundry differences out of the equation. Next Altera left TSMC for Intel? Say what!?!?! Then Altera did a head fake back to TSMC and Intel bought them for a 56% premium! Now that process differences are back in the equation let’s take another look.

In the beginning FPGA vendors had very close relationships with the foundries. FPGAs were used during process development and ramping due to their dense design blocks that repeat throughout the chip. It was a very intimate partnership, one that made FPGAs bleeding edge chips at the process level. That all changed of course at 28nm when Xilinx joined Altera at TSMC, which brought a level playing field where design and implementation were key.

It is well documented that the first FPGA vendor to a new process node is awarded majority market share. Even Intel CEO Brian Krzanich recently called it out as one of the three reasons why Intel bought Altera during a fireside chat with John Pitzer of Credit Suisse:

“there is strong data that suggests that the percentage — the company that’s had leadership position, first products on the first node, you have to have the right design, right architectural point and all, but they’ve tended to gain share.”

For the record, Xilinx won the 28nm battle by out-implementing Altera at TSMC 28nm and again at TSMC 20nm. Altera moving to Intel Custom Foundry made the race to FinFET interesting but Xilinx again won that one. In fact, I have yet to see Altera/Intel 14nm silicon while Xilinx started shipping FinFET parts at the end of Q3 2015. Last word on the Altera roadmap had “Cedar” (replacing Cyclone) fabricated using TSMC 16nm (due to “cost/power” reasons) to be delivered 1H 2016. “Oak” is Intel 14nm and is due in 2H 2016. “Sequoia” is Intel 10nm due sometime in 2018.

The good news is that technically Altera will win the 10nm process node race since Xilinx is skipping 10nm in favor of an accelerated 7nm process. The bad news is that TSMC 7nm will be in production at about the same time as Intel 10nm so it will be a hollow win.

It is too soon to tell if the Intel process is favorable to FPGAs; 14nm will tell us that next year. I highly doubt Intel will use Altera FPGAs to ramp their 10nm process, so it really is a coin toss. Even so, it may not be enough of an advantage over Xilinx/TSMC if they are a process node ahead. Remember, I’m not a journalist reporting the news here; this is the opinion, observation, and experience of a 30+ year fabless semiconductor professional who also likes to write.


Syncing Up CDC Signals in Low Power Designs
by Ellie Burns on 12-08-2015 at 7:00 am

So far in my blog series on low power we’ve looked broadly at what’s changing in the low power verification landscape and focused on a new methodology developed by Mentor Graphics and ARM called successive refinement, which is now included in the UPF standard. Power management techniques create their own brand of clock domain crossing (or CDC) problems, so it is important to include CDC verification in the successive refinement, or any, verification flow.

In order to better understand these new CDC challenges and how to deal with them, I’ve asked our resident CDC expert, Ping Yeung, to join me. Ping has studied these issues and helped develop solutions for them for 15 years, and he has presented several papers on the topic.

Hello Ping. How about starting out with what CDC is in general, and what is it trying to address?

Certainly, Ellie. CDC signals are having a greater impact because people are breaking up their designs into more clock and power domains. Partitioning an ASIC or SoC into multiple power domains is a very effective way to reduce power consumption. The power of these domains is then controlled by either switching off power or reducing voltage levels.

The tricky thing is that partitioning a design creates various challenges because all of the signals going to and from these different domains need to be synchronized. If they are not, they will behave unpredictably. When it comes to power specifically, the interdependence of logic between power domains requires designers to add isolation, retention, and voltage shifter components at the power domain interfaces. The addition of power control logic, which is becoming increasingly prevalent in all sorts of designs, introduces new challenges to both the design and verification efforts.

There are two main challenges.

The first problem introduced by splitting the design into multiple power domains is that when clock signals cross power domains, they are not synchronous anymore because of the level shifters or isolation cells that have been inserted at the domain interfaces. The power domains affect the clock tree and the reset tree. And every part of the design requires the clock and reset signals to operate correctly. So when you’re partitioning your design into multiple power domains, you have to be very careful about how those clock and reset trees are distributed.

This is not a big concern with clock gating, but when you start using multi-voltage techniques, that’s where the real problems come in. The same goes for dynamic voltage and frequency scaling (or DVFS), which is useful for making tradeoffs between performance and power by scaling the voltage either up or down. But when you shift the voltages, signals crossing from one power domain to the other must move from one voltage level to another. As a result, clocks in different power domains with significant voltage differences are no longer synchronous even though they may have the same source.

Now let’s talk about the second issue. When you introduce power domains in a design, they need to be controlled by various control signals that are generated by a power controller. This entails introducing new modules and new signals into the RTL code (i.e., the power controller and all the power management architecture that goes with it). These are not part of the original specification or the RTL code, but are captured in the Universal Power Format, the UPF file.

All of those control signals are generated by the power controller, so they belong to the power controller clock domain. And the power controller clock often runs independently from the design clock, resulting in a major clock domain crossing between the power controller and the rest of the design. So you have to make sure all these power control signals, which are introduced by the UPF, are synchronized with the rest of the design before they’re used. If you don’t, you get synchronization problems and unpredictable behaviors again.



Figure 1. Power Aware CDC Flow

So, what is the solution?

In order to verify this type of design, you need to represent the design with all of the domain information in place. The designer needs to know which power domain a signal belongs to, which clock domain it belongs to, and which reset domain it belongs to. That way they can make the decision as to whether there is a CDC problem or not. Because the power domain will impact the clock and reset domains, we want to present this information so that when the designer makes a partitioning decision, they immediately understand the impact on everything else in the design.

Designers now have a new flow available to help them get these tricky domain crossings right. Mentor’s Questa CDC solution supports a new concept in UPF 2.0 and UPF 2.1 called the power supply set (not to be confused with the power supply net). This is a power network grouping option that allows designers to define and test the power distribution network earlier in the project cycle, before the power distribution network has been implemented. So you don’t need to know the physical implementation yet. Using the supply set, I can define the power domain at the RTL level early in the design cycle without knowing which voltage or power supply net it is actually connected to. So you can verify this regardless of how the supply net will be implemented.

The new power distribution network in UPF 2.1 is an example of the successive refinement methodology you’ve been talking about in your blogs, Ellie. In this context, the power network can be incrementally built over the duration of the project cycle by the different design project teams. The block and system designers can begin to verify the power management logic before the power distribution network has been implemented, then the final power management logic verification will occur later in the design flow when the physical designers add the power distribution network.

That’s very useful because when you are integrating multiple IP together, every IP can come with its own supply set. Now when you integrate, you can look at the supply set at the IP level and start finding a common ground to satisfy all the supply sets. And then you can design the supply net.

When we do the verification, we can now actually look at the supply sets and figure out how many power domains you have in your design. Using the supply set information you can make sure the design can support those power domains later on. You use the supply set to verify whether your clock and reset trees were partitioned correctly. Once you have the tree information refined, you can run the CDC analysis using Questa CDC.
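[To make the kind of check Ping is describing more concrete, here is a toy sketch of what a power-aware CDC tool looks for; this is not Questa CDC’s actual algorithm, and all of the signal and domain names below are invented for illustration.]

```python
# Toy illustration (not Questa CDC's actual algorithm): flag signals that
# cross clock or power domains without a synchronizer or isolation cell.
# All signal/domain names are invented for the example.

from dataclasses import dataclass

@dataclass
class Crossing:
    signal: str
    src_clk: str
    dst_clk: str
    src_pd: str          # source power domain
    dst_pd: str          # destination power domain
    synchronized: bool   # 2-flop (or similar) synchronizer present?
    isolated: bool       # isolation cell present at the domain boundary?

crossings = [
    Crossing("pwr_ctrl_iso_en", "clk_pwr_ctrl", "clk_core", "PD_AON",   "PD_CPU", False, True),
    Crossing("dma_req",         "clk_core",     "clk_core", "PD_CPU",   "PD_DMA", True,  False),
    Crossing("uart_rx_sync",    "clk_uart",     "clk_core", "PD_PERIP", "PD_CPU", True,  True),
]

def check(c: Crossing) -> list[str]:
    issues = []
    if c.src_clk != c.dst_clk and not c.synchronized:
        issues.append("unsynchronized clock-domain crossing")
    if c.src_pd != c.dst_pd and not c.isolated:
        issues.append("missing isolation at power-domain boundary")
    return issues

for c in crossings:
    for issue in check(c):
        print(f"{c.signal}: {issue} ({c.src_clk}->{c.dst_clk}, {c.src_pd}->{c.dst_pd})")
```

Note how the power-controller output in the first entry is exactly the kind of unsynchronized control signal Ping warned about.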

Our customers have already had success using our CDC solution in production. For example, AMD presented a Power Aware CDC (PA-CDC) verification paper at DAC in 2014. One of the things they uncovered was that control signals were not synchronized. They reported that complete PA-CDC checking at the RTL allowed them to find issues earlier in the design cycle and make turnaround times faster, and they used the PA-CDC flow to double check their generated UPF files against the RTL design.

Thank you, Ping, for giving us a quick insight into the effects of advanced low power management techniques on CDC design and verification. For those wanting to learn more, please check out the new whitepaper, Power Aware CDC Verification of Dynamic Frequency and Voltage Scaling (DVFS) Artifacts, which was presented at DVCon Europe 2015.

I’ll see you next year when we talk about what to look for in a low power debugging flow.


Intellectual Ventures Patents for Internet of Things (IoT)
by Alex G. Lee on 12-07-2015 at 4:00 pm

More than 15,000 Intellectual Ventures US issued patents and published patent applications were reviewed to find good candidates for an IoT strategically packaged patent portfolio. Even though the Internet of Things (IoT) has attracted huge attention only recently, the concept of interconnected devices and connecting billions of devices to the internet is not new and has been researched for over 10 years. Thus, there may be a large number of patents (not intended for specific IoT applications at the time of invention) that can be exploited to develop new IoT business by forming a strategically packaged patent portfolio that provides new IoT value propositions.

A strategically packaged patent portfolio is a collection of existing patents that can be exploited to develop new products/services (and thus new business) by integrating the value propositions of each patent in the portfolio. Such a portfolio can be exploited for monetization through patent sale, patent licensing, commercialization, spin-off, patent banking, and financing.

To speed up the review process, carefully designed search keywords for specific IoT applications were used with the search tool PatSnap. Through this IoT-application-driven analysis, more than 100 IoT-related patents that can be exploited for the IoT strategically packaged patent portfolio were found. Intellectual Ventures’ IoT-related patents cover IoT Security, IoT Platform Connectivity, Connected Cars, Wireless Sensor Networks, Smart Grid, Connected Health, IoT Intelligence (e.g., machine learning, predictive analytics) and Smart Home.

The following summarizes some examples of key Intellectual Ventures IoT-related patents.

US8195106 illustrates a system for controlling a vehicle remotely via devices in proximity. The devices in proximity include an information system, an audio and/or video system, a heating and/or air conditioning system, a lighting control system, a navigation system, a lock system, an ignition system, a driver settings system, a security system, a camera and a communication system. The devices in proximity transmit a signal to the user device (e.g., a smartphone) to cause the user device to display information related to vehicle operations when the user is near the devices in proximity. The user then uses the user device to control the devices in proximity using a variety of techniques, such as locking a door or adjusting the heat via the heating/AC system.

The measurement of physiological data can be utilized to detect abnormal situations in individuals. In some instances, physiological monitoring methods relate to heart rate, electroencephalography (EEG), electrocardiography (ECG), body temperature, or oxygen density in blood. However, such physiological monitoring methods are not convenient for long-term measurements. For example, ECG measurements need electrodes to be attached to the skin, which may reduce the suitability of ECG for long-term use. US8172777 illustrates a sensor-based health monitoring system that is suitable for long-term use. The health monitoring system includes a sensor unit that can detect body motion of a user. The health monitoring system analyzes body motion data received from the sensor unit to detect abnormalities of the user. The health monitoring system then communicates with a remote site such as a home or a hospital.

US6847995 illustrates a security architecture for providing secure transmissions within distributed processing systems. A server system is coupled to a network that is coupled to the distributed devices. The server system utilizes a security measure that is partitioned and distributed to multiple distributed devices. The distributed device receiving electronic information reconstructs the security measure by obtaining the various partitioned portions from the multiple distributed devices. The security measure can be the generation of a hash value for the electronic information to be transmitted.


More articles from Alex…


Evolution of Non Volatile Memory for Sensitive Data
by Tom Simon on 12-07-2015 at 12:00 pm

When I first got interested in computers, in junior high school in the early 70’s, I remember seeing a core memory board for the first time. It was a seriously large circuit board with a myriad of wires woven across it, going through the tiny metal doughnuts that stored the bit values. The computers it went into only had a total of around 4K bytes of memory. The only other storage option was paper or magnetic tape, or for some extravagantly well-heeled institutions there was drum memory. Some years later I remember buying my first PC and being thrilled that it had 64K of RAM.

Of course back then plain old switches were used to ‘store’ values for interrupt numbers, device addresses, or configuration data. Remember when disk controllers needed to have the dip switches set according to some cryptic data sheet so they had the correct number of heads, sectors and cylinders?

While the technology has changed incredibly since those days, the underlying need for various data storage types and sizes continues and is expanding. On one end of the spectrum there is mass storage in the form of hard disks or, increasingly, solid state drives (SSD). Just as core memory gave way to semiconductor-based storage, we see DIP switches and even hard drives moving to the chip level. Removing mechanical components saves money and improves reliability. It also has the side effect of improving security in many instances. DIP switches could get accidentally, or even maliciously, changed. However, the need for storing small amounts of essential data has, if anything, expanded over the years.

Decades ago it was conceivable that you might add a device package to your system board just to store a couple of registers worth of data. Today it would be a crime. Even making room for traces that can be cut is out of the question with today’s board density. Really the only solution is to bring this data storage on-chip into an SoC. To determine the best medium for any given storage need, we have to step back and look at the data storage requirements for the information we intend to save and use. Today designers can choose from e-fuse, NAND Flash, EEPROM, anti-fuse, mask ROM, or potentially, other types of storage.

At the bottom of the list, if we go by capacity, there is storage for things like MAC addresses, trim info, encryption keys, and of course configuration data that is fairly static. For this we probably do not want to add mask layers or change the fab process at all. This rules out quite a few of the above choices. If security is a concern too, then we are most likely left with anti-fuse OTP, such as what is available from Sidense. They offer one time programmable (OTP) non-volatile memory (NVM) that can be added to just about any chip on any fab, even FinFET, AMS, or legacy CMOS. Their OTP NVM IP is drawn using a standard layer stack-up with no special layers or extra processing.

Of course, mask ROM also uses a standard stack-up, but the data has to be part of the mask set, which rules out using it for unique device trim, hardware addresses and security keys. OTP-NVM can be programmed on the tester or in the field. It also offers simulated re-write, so values can be updated in the field if necessary. Sidense can include an IP module for programming the NVM even if the supply voltages are only for logic. A higher voltage is used for programming, but it can be produced internally using a charge pump.

Next up the food chain is storage for things like boot code or microcode. These are usually unchanging as well, but could possibly be updated over the course of the life of the product. With this category of data, security becomes an even bigger issue. Having it on the SOC helps a lot, because observable circuit board bus traffic is not taking place. However, there are a number of reverse engineering techniques that rely on physical inspection or current monitoring during reads. Here is another place that Sidense OTP NVM excels. There is no physical marker when anti-fuse is programmed. During programming the 1T cell is modified in the gate oxide by overvoltage, but it leaves no detectable artifacts. Sidense also uses a symmetrical storage configuration so the read current does not change based on the bit value retrieved.

Sidense OTP can even simulate re-write up to the limit of bytes available by dynamically changing addressing. So, critical updates can be applied in the field if needed. This is known as few times programmable (FTP).
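As a rough illustration of the idea (this is not Sidense’s implementation; the OTPRegion class and slot scheme are invented for the example), few-times-programmable behavior can be modeled as burning each update into the next unprogrammed slot and returning the most recently programmed value on reads:

```python
# Toy model of "few times programmable" (FTP) behavior on one-time-programmable
# storage (not Sidense's implementation): each update burns a fresh slot,
# and reads return the most recently programmed slot.

class OTPRegion:
    def __init__(self, slots: int, width_bits: int = 32):
        self.slots = [None] * slots        # None = slot still unprogrammed
        self.width = width_bits

    def program(self, value: int) -> None:
        """Burn the value into the next free slot; OTP bits cannot be erased."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = value & ((1 << self.width) - 1)
                return
        raise RuntimeError("FTP limit reached: no unprogrammed slots left")

    def read(self) -> int | None:
        """Return the latest programmed value, or None if never programmed."""
        latest = None
        for slot in self.slots:
            if slot is not None:
                latest = slot
        return latest

region = OTPRegion(slots=4)
region.program(0xA5A5_0001)   # initial field value
region.program(0xA5A5_0002)   # one in-field update
print(hex(region.read()))     # -> 0xa5a50002
```

Once all slots are consumed, no further updates are possible, which is why this is "few times" rather than "many times" programmable.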

Using patented technology delivered as an IP product for one time programmable non-volatile memory has a number of advantages. It comes with the physical implementation for the bit cell array and also with proven soft IP for the support circuitry that embodies all the security. The entire package is rigorously tested using test chips for each target process for yield and reliability.

Things have come a long way from DIP switches and magnetic tape for parameter and boot code storage, but the demands for these vital pieces of data have grown and are central to our lives in many instances. We count on reliability and security in mobile communications, automotive systems, medical devices, and finance, among other things. It’s good that companies like Sidense are providing an essential link in the chain. For more information, I suggest going to the Sidense website.


CEVA Royalty Revenues in 2015 Support Future IoT Design Wins
by Eric Esteve on 12-07-2015 at 7:00 am

DSP IP addressing modems for the mobile phone market is still the flagship product, and CEVA enjoys design wins at major semiconductor accounts (one of them being a vertical OEM that also sells smartphones), but the acquisition of Riviera Waves in 2014 was a strong sign of diversification. CEVA’s portfolio includes signal processing IP and IP supporting wireless connectivity standards like WiFi, Bluetooth and BLE. If you define a generic architecture supporting IoT, you need processing, short-range wireless and sensors.

Historically, CEVA has developed a family of DSP IP cores to support modems for mobile phone applications. In the early 2000s, the top cell phone OEMs, Nokia, Ericsson and Motorola, were using TI baseband solutions, including the TI DSP core. But the TI DSP was not licensable as IP that could be used with another ASIC supplier; OEMs had to buy the complete solution, later branded OMAP, to benefit from the TI DSP. TI’s strategy appeared to be a great opportunity for CEVA, and the company started to penetrate the mobile phone market with the TEAK and TEAK-Lite DSP IP cores.

If we look more closely at CEVA’s business model, we better understand the company’s dynamics. Because the CEVA DSP is unique, as a processor core can be (but a protocol-related function like a USB or PCI Express controller is not), the company can define a business model based on an up-front license fee plus royalties, usually a small percentage of the chip ASP. Such a model provides several benefits compared with an up-front license fee alone. Royalty-linked revenues may come several years after the IP design win, and over a long period the company’s revenue flow can be smoothed, with royalties paid every quarter throughout the chip’s production lifetime. Investors tend to prefer IP vendors who can define a business model based on royalties plus up-front license fees to IP vendors relying on up-front license fees only…
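As a purely hypothetical illustration of why this smoothing matters (the license fee, royalty rate, ASP, and shipment volumes below are invented, not CEVA figures), compare an up-front-only deal with a license-plus-royalty deal over a chip’s production lifetime:

```python
# Hypothetical comparison of IP revenue models. None of these figures are
# CEVA's; they are invented to show how royalties smooth revenue over time.

upfront_only_fee = 2_000_000        # $ one-time license fee, no royalties
license_fee      = 1_000_000        # $ smaller up-front fee in the royalty model
royalty_rate     = 0.02             # 2% of chip ASP per unit shipped
chip_asp         = 10.0             # $ average selling price per chip

# Assumed unit shipments per year over the chip's production life
units_per_year = [0, 5_000_000, 20_000_000, 30_000_000, 15_000_000, 5_000_000]

cumulative_royalty_model = license_fee
for year, units in enumerate(units_per_year):
    royalties = units * chip_asp * royalty_rate
    cumulative_royalty_model += royalties
    print(f"year {year}: royalties ${royalties:,.0f}, "
          f"cumulative ${cumulative_royalty_model:,.0f}")

print(f"up-front-only total:      ${upfront_only_fee:,.0f}")
print(f"license + royalty total:  ${cumulative_royalty_model:,.0f}")
```

Even with a smaller up-front fee, the royalty stream keeps arriving throughout the production ramp, which is exactly the smoothing effect investors value.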

If we consider CEVA, the design wins made during the 2000s have generated a revenue flow strong enough to invest heavily in R&D. The company has developed a family of DSP cores, each of them tailored for a specific market segment: the CEVA-XC core for baseband, CEVA TeakLite-4 to address audio/voice/sensing, and CEVA-XM4 and MM3101 to support imaging and vision applications.

CEVA has enjoyed modem design wins at major semiconductor accounts (including a vertical OEM building chips and selling smartphones). Even though the company doesn’t disclose royalty revenue by market segment, our guess is that mobile communication generates the largest share. But CEVA is rapidly diversifying; the acquisition of Riviera Waves in 2014 was a strong sign of this diversification. CEVA’s portfolio includes the various DSP IP families listed above and specific wireless IP supporting connectivity standards like WiFi and Bluetooth, including BLE.

If you define a generic IoT architecture, you need processing, short-range wireless and sensors. CEVA has no sensor IP, but the signal coming directly from the sensor, once digitized, is sent to a DSP. Moreover, if you want to use a low-cost sensor, you will need strong and efficient signal processing (coming from a CEVA DSP core) to clean and process the sensor output: the better the DSP, the better the result. Once the data has been processed, the system sends the information to an upper level (network, smartphone, base station, etc.) by means of short-range wireless communication like WiFi or Bluetooth Low Energy (BLE), both available as IP in CEVA’s portfolio.

Developing effective solutions for emerging applications and supporting customers long before generating revenue requires R&D resources and funding. Here we come back to the royalty-based business model. Just take a look at CEVA’s revenue for the third quarter of 2015 of $16.2 million. Licensing and related revenue for the third quarter of 2015 was $8.6 million, and royalty revenue was $7.6 million, an increase of 42% compared to the $5.4 million reported for the third quarter of 2014. We can guess that a significant part of this royalty revenue is being reinvested in new project development.

We find an interesting indication in the third quarter 2015 earnings announcement. CEVA concluded eight new license agreements: three of the agreements were for CEVA DSP cores, platforms and software, and five were for CEVA connectivity IP. The Riviera Waves acquisition is only about one year old, but more than 50% of the new licenses are for connectivity IP. In fact, CEVA is not only preparing for the future through R&D; the company is already addressing emerging applications built around a variety of connected devices.

Is CEVA a healthy IP vendor? In 3Q 2015 CEVA registered all-time-high revenue of $16.2 million, up 15% year-over-year, and royalty revenue of $7.6 million, up 42% year-over-year. We think the real question should be: will CEVA be healthy in 2020?

Thanks to its royalty-based business model, CEVA has the opportunity to invest in acquisitions (Riviera Waves) and in R&D efforts to address emerging applications. This strategy is already paying off, as CEVA is enjoying more new licenses this quarter for connectivity IP than for DSP IP cores, even though the Riviera Waves acquisition is only about one year old (September 4th, 2014). If the ultimate question is about CEVA’s success in 2020, yes, we think the IP vendor should be successful by then, with a revenue mix made of royalties linked to DSP IP supporting LTE and license fees generated by connectivity and DSP IP for emerging connected applications.

Eric Esteve from IPNEST

More articles from Eric…


Optimizing power for wearables
by Bernard Murphy on 12-06-2015 at 4:00 pm

I was at the Cadence front-end summit this week; good conference with lots of interesting information. I’ll start with a panel on optimizing power for wearables. Panelists were Anthony Hill from TI, Fred Jen from Qualcomm, Leah Clark from Broadcom and Jay Roy from Cadence. Panels are generally most entertaining when the panelists disagree. This group didn’t disagree on much, but that in itself was revealing. The main messages I heard were:

· A lot of this stuff is being built on older processes, in part for cost reasons I assume
· Analog is a much bigger part of IoT devices than digital
· EDA support for power in AMS design is weak to non-existent

There was a lot of discussion on “Big A, little D” and all the compute being pushed to the fog or the cloud, which doesn’t sound quite right to me. If these are intended to sit in any fashion on the Internet, you have to have at least some local compute, you have to have communication and you have to have (these days) security. That doesn’t add up to little D. Also anything required to provide real-time control has to have more than a little local compute power.

But none of that takes away from the reality that the A part on these devices is significant. And very little we do today in the power flow is any help with that. UPF/CPF still has no real understanding of analog. Verification is also a problem. A significant percentage of reworks are caused by bad connections across the AMS interface (for example, use of incorrect level shifters). Along similar lines, clever power switching or voltage scaling requires regulators, and these burn power too. So you can’t get to realistic power numbers for ultra-low power applications like wearables without considering the total system. For me the big takeaways are that tools and standards need to ramp up AMS-aware modeling and verification at the system level (in fairness, UPF 3.0 should help with system-level modeling).

There was also a discussion on power sources. What you’re really trying to optimize for wearables is time between charges (or alternatively to minimize the inconvenience of charging) rather than the power consumed. Yes, they’re related, but power in these devices is intrinsically low, so thermal is not a problem. However you do have to worry about consumers losing interest because of the charging hassle. Several participants talked about wireless charging, from which I infer that there is probably a lot of activity in this area (I know Broadcom has a group working on this). There’s also something called the AirFuel Alliance, a recent partnership between inductive charging and magnetic resonance charging standards. Then there’s Qi (pronounced "chee"), which is yet another (incompatible) solution. Busy area, still very confusing, but we can hope something will emerge as a workable standard at some point.

Still on power generation for wearables, there was a brief discussion on energy harvesting. Sounds like this is still very much a curiosity rather than a practical solution, except for specialized applications. Power delivery is down in the nA-uA range, nowhere near the mA range you need for anything aiming to communicate with the internet.

Good debate; I wish it had gone further and there had been more fireworks, but still good information. For a review of the messy wireless charging market, read more HERE. To get my take on power in wearables, read HERE.

More articles by Bernard…


IDMs are Far Ahead of Fabless Semicon Companies
by Pawan Fangaria on 12-06-2015 at 7:00 am

In a balancing global economy, it’s a common phenomenon that at certain times a few sectors, or segments within those sectors, grow much faster than others. And a few companies within the growing sectors lead those sectors. Both the growing sectors and the leading companies in those sectors become the centers of attraction. In such a scenario, we tend to overlook the real picture and look only at the largest growth numbers of those centers of attraction.

The situation is similar with semiconductor IDMs and fabless companies. Since the start of the fabless model of semiconductor design, it has seen continuous high growth that outpaced the growth rate of IDMs in percentage terms. However, it’s important to note the base figures as well. The higher the base, the more difficult it is to attain a high percentage growth number; the lower the base, the easier it is to attain high percentage growth, provided of course the company does well. For example, a company growing from $10 billion to $12 billion posts 20% growth, while a company adding the same $2 billion on a $100 billion base posts only 2%. Okay, these are general perceptions; let’s look at the actual numbers.


Reviewing this IC Insights report, one can clearly see continuous high growth in fabless companies compared to IDMs. Only once, in 2010, did IDM semiconductor sales growth outpace fabless semiconductor company sales growth, by 6% (circled in the chart above). According to the IC Insights forecast, 2015 will be the second time when, to put it differently, fabless company sales growth will be lower than IDM semiconductor sales growth. The reason for the twist this time is that fabless sales growth will go negative, while IDM semiconductor sales will remain flat. Let’s review the numbers for top-10 (post-merger) IDM semiconductor sales as well as fabless semiconductor sales. Also consider the base sales figures in both cases.


The 5% negative growth in fabless semiconductor sales in 2015 is due to the 20% decline in Qualcomm/CSR’s sales. That’s no surprise, because Qualcomm lost the Samsung application processor business as Samsung started using its own Exynos processors in its smartphones. An interesting point to note in the fabless semiconductor table is that both China players, HiSilicon and Spreadtrum, are about to register very handsome sales growth. The Apple/TSMC entry represents application processor sales from TSMC to Apple.

It’s expected that going forward fabless semiconductor sales will continue their high percentage growth. So, is there any possibility of their absolute sales going higher than IDM semiconductor sales anytime in the future? I don’t think so.

Look at the absolute total sales of the top-10 IDMs and fabless companies in 2015: IDM semiconductor sales are at $175+ billion compared to fabless semiconductor sales at $60+ billion. IDM semiconductor sales are ~2.9 times those of the fabless companies. The base sales of the top IDM, Intel, are more than 3 times the base sales of the top fabless company, Qualcomm/CSR. Only the top-3 fabless companies have sales above Sony, the lowest ranked of the top-10 IDM companies. Sony is making great progress in the O-S-D segment, a segment almost exclusive to IDMs. Among fabless companies, only Avago/Broadcom has some presence in the O-S-D segment.

Any guesses on how and when fabless semiconductor sales can catch up with IDM semiconductor sales, if they can at all? We should also consider the possibility of IDMs acquiring fabless companies, where it makes sense for them, to fuel innovative designs.

The IC Insights report can be found HERE for your reference.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Double Digit Growth and 10nm for TSMC in 2016!
by Daniel Nenni on 12-05-2015 at 12:00 pm

Exciting times in Taiwan last week… I met with people from the Taiwanese version of Wall Street. They mostly cover the local semiconductor scene but since that includes TSMC and Mediatek they are interested in the global semiconductor market as well. They also have an insider’s view of the China semiconductor industry which is very complicated.

The big news of course is that TSMC is predicting double digit revenue growth and 10nm is on schedule for production in 2016. What that really means is that Apple will use TSMC 16FFC exclusively for the A10 (iPhone 7) and 10nm will be ready for the A10x. Morris Chang of course predicted this last year when he said TSMC would regain FinFET market leadership in 2016. This also means that TSMC will officially have the process lead in 2016 since Intel has pushed out 10nm until 2017. So congratulations to the hard working people at TSMC, absolutely!

The other big news is that 7nm is also on track. It will be déjà vu 20nm to 16nm for TSMC where 10nm will be a very quick transitional node right into 7nm. 20nm and 16nm used the same fabs which is why 16nm ramped very quickly, one year after 20nm. 10nm and 7nm will also share the same fabs so yes we will see 7nm in 2017 and that means TSMC 7nm will again have the process lead over Intel 10nm. Exciting times for the fabless semiconductor ecosystem!

Given the quick transition of 10nm to 7nm, quite a few companies will skip 10nm and go right to 7nm. Xilinx has already publicly stated this; I’m sure there will be more to follow. SoC companies like Apple, QCOM, and Mediatek that do major product releases every year will certainly use 10nm. I would guess AMD will use 10nm as well to get a jump on Intel. That would really be interesting if AMD released 10nm and 7nm CPUs before Intel. The server market would certainly welcome the competition.

The other interesting news is that Chipworks confirmed that the A9x in the iPad Pro is manufactured using TSMC 16FF+. I have read the reviews of the iPad Pro and have found them quite funny. One very young “Senior Editor” from Engadget, who has zero semiconductor experience and doesn’t even own an iPad Pro, made this ridiculous statement:

“It’s often vaunted that ARM-based chips are more power efficient than those based on Intel’s x86. That’s just not true. ARM and x86 are simply instruction sets (RISC and CISC, respectively). There’s nothing about either set that makes one or the other more efficient.”

I brought my iPad Pro with me to Taiwan and must say it is a very nice tablet. When it first arrived I was a little shocked at how big it actually was, but the performance, display, and battery life are absolutely fabulous! I’m comparing it to a Dell Core i7 based laptop and an iPad 2 of course, so the bar is pretty low. But it also runs circles around my iPhone 6. Given the size of the A9x (147mm²) versus the A8x (128mm²) I’m wondering if it will be used for the next iPad Air. If so, that would be the tablet of the year for sure.

And can you believe our own Oakland Warriors are 20-0 to start the season which is an NBA record!?!?!?!? GO WARRIORS!!!!!