
Strategic Analog IP Power Management for SoCs

by Daniel Nenni on 10-27-2013 at 12:00 pm

This tutorial describes how analog IP is becoming more important in any power management strategy and surveys the major analog building blocks used to manage power and temperature in an SoC on leading-edge technology nodes.

The tremendous demand for high-performance computing devices has led to aggressive technology scaling, allowing millions of transistors to be placed on a chip. This scaling, predicted by Moore’s Law in 1965, has led to exponential growth in dynamic power dissipation as transistors switch faster and faster. At nanometer-scale technology nodes, a significant portion of the total power consumption is due to transistor leakage currents. In recent years we have seen explosive growth in battery-operated mobile devices such as notebooks, cellular phones and tablets. In these mobile applications, energy is the most critical resource, and transistor leakage currents have a direct impact on battery life.
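As a back-of-the-envelope illustration (my own sketch, not from the tutorial), dynamic switching power follows the classic relation P = α·C·V²·f, which is why lowering the supply voltage pays off quadratically. The capacitance, activity factor and frequencies below are invented numbers purely for illustration:

```python
def dynamic_power(alpha, c_eff, vdd, freq):
    """Dynamic switching power: P = alpha * C_eff * Vdd^2 * f (watts)."""
    return alpha * c_eff * vdd**2 * freq

# Hypothetical block: 20% activity, 1 nF effective switched capacitance.
p_nominal = dynamic_power(0.2, 1e-9, 1.0, 2e9)    # 1.0 V @ 2.0 GHz -> 0.4 W
p_scaled  = dynamic_power(0.2, 1e-9, 0.8, 1.5e9)  # 0.8 V @ 1.5 GHz -> ~0.19 W
print(p_nominal, p_scaled)
```

Scaling Vdd from 1.0 V to 0.8 V and frequency by 25% cuts dynamic power by more than half in this toy example, which is the lever DVFS (discussed below) exploits.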

There are various design techniques for reducing power dissipation in a digital circuit, such as clock gating and power gating. A more recent technique is dynamic voltage and frequency scaling (DVFS), which reduces the supply voltage or clock frequency in parts of the system that do not require peak performance. To deploy DVFS effectively, we need voltage regulators that can respond quickly to changes in the system workload. In state-of-the-art processors, the workload and transient currents can change 100X faster than external voltage regulators can respond. Hence, there is increasing motivation in system-on-a-chip (SoC) design to implement voltage regulators on the die.
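A minimal sketch of the kind of decision logic a DVFS governor applies, picking the lowest voltage/frequency operating point that still covers the current workload. The operating-point table and threshold scheme here are invented for illustration; real governors are considerably more sophisticated:

```python
# Hypothetical (voltage V, frequency MHz) operating points, lowest first.
OPERATING_POINTS = [(0.7, 600), (0.9, 1200), (1.1, 2000)]

def select_operating_point(utilization):
    """Pick the lowest operating point whose frequency covers the workload.

    utilization: fraction (0..1) of peak performance currently needed.
    """
    peak_mhz = OPERATING_POINTS[-1][1]
    for vdd, freq_mhz in OPERATING_POINTS:
        if freq_mhz >= utilization * peak_mhz:
            return vdd, freq_mhz
    return OPERATING_POINTS[-1]

print(select_operating_point(0.25))  # light load -> (0.7, 600)
print(select_operating_point(0.9))   # near peak  -> (1.1, 2000)
```

The reason on-die regulators matter is visible even in this sketch: every transition between rows is wasted energy until the regulator actually settles at the new voltage.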


Power dissipation is not the only problem the designer needs to contend with in an SoC. System workloads also introduce hot-spots in different regions of the die, and we need to monitor the temperature to ensure that the heat generated by circuits remains below an acceptable thermal limit. Higher temperatures also increase leakage currents, reduce circuit performance and can increase package and cooling costs. Hence, temperature sensors are an integral part of the overall energy management strategy in an SoC design.
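The monitoring loop described above can be sketched in a few lines: watch the hottest on-die sensor and throttle when it crosses the limit, with some hysteresis so the system doesn't oscillate. The limit and hysteresis values are invented for illustration:

```python
THERMAL_LIMIT_C = 95.0   # assumed junction temperature limit (illustrative)
HYSTERESIS_C = 5.0       # cool-down margin before restoring performance

def thermal_action(sensor_readings_c, throttled):
    """Decide whether to throttle based on the hottest on-die sensor."""
    hottest = max(sensor_readings_c)
    if hottest >= THERMAL_LIMIT_C:
        return True                    # hot-spot over limit: throttle
    if throttled and hottest < THERMAL_LIMIT_C - HYSTERESIS_C:
        return False                   # cooled down: restore performance
    return throttled                   # otherwise keep current state

print(thermal_action([72.0, 96.5, 81.0], throttled=False))  # True
print(thermal_action([72.0, 92.0, 81.0], throttled=True))   # True (hysteresis)
print(thermal_action([72.0, 85.0, 81.0], throttled=True))   # False
```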


The design of analog circuits at nanometer-scale process nodes is very sensitive to layout and demands careful attention to detail. Analog circuits also suffer from reduced voltage headroom and require careful isolation from noisy switching digital circuits. In addition, integrating analog blocks requires the foundry to support transistors running at higher voltages, employing thick-oxide transistors with longer channel lengths to help reduce device variability. This usually complicates the validation effort, since analog and digital circuits have to be tested together, thereby increasing design cycle time.

In summary, to effectively manage both power dissipation and temperature on the die, modern SoCs need to integrate an increasing number of analog functions. Digital IP has been readily available from most foundries, but analog IP usually is not. This means analog IP needs to be part of the overall SoC design strategy from the beginning and should not be left as an afterthought.

ARM TechCon

Bal Sandhu | Engineer, ARM Inc.
Session Code: ATC-116
Location: Grand Ballroom F
Date: Tuesday, October 29
Time: 1:30pm-2:20pm
Track: Maximizing Chip Energy Efficiency
Area of Interest: Chip Design, SoC Implementation



An easier way to deal with design rule waivers (video)

by Beth Martin on 10-26-2013 at 11:00 am

At advanced nodes, design rules are necessarily more complex and restrictive. Although most of the time you can find a way to live with them, sometimes it’s necessary to seek a waiver from the foundry for a particular design feature. This involves documenting the feature, the design rules in question and the conditions under which the rules are failing, and can be a real pain in the neck.

Wouldn’t it be nice to have a more automated way to document waiver requests? If you’re a Calibre user there is a way, although you might not yet be aware of it. To rectify that, Saunder Peng, Mentor Graphics’ TME for Calibre Interfaces, has created a short video explaining exactly how to document, review, and share Calibre DRC errors and waivers between designers and foundries using the facilities of Calibre RVE.

Once you have approval from the foundry to waive specific checks on a particular part of your design, how do you make sure those violations no longer pop up in your verification runs and waste your time? That is explained in this video.

More “How To” videos are available on the YouTube IC Nanometer channel. Do you have an idea for a Calibre How-to video? They take requests! Chime in on the Mentor Communities Discussion Board.


An Affair to Remember: EDA’s 50th Anniversary

by Daniel Nenni on 10-26-2013 at 11:00 am

What an amazing night! I celebrated the 50th anniversary of the industry I grew up in! With my beautiful wife at my side and a table full of friends, we all went down memory lane, ate, drank, and then enjoyed the auction.

The tour of the new computer museum was amazing. I was learning so much up until the 1970s, then I was living it! There were minicomputers, microcomputers and the game systems they inspired, right in my sweet spot. Wait, computers and games I owned and loved are now in a museum!? AARP here I come! I did impress my wife by knowing quite a bit of the computer lore. It took us back to the day we met. 32 years ago I was fixing a Data General Nova computer that ran a department store point-of-sale system. She did alterations at the store and our eyes met across the room. I followed her to the lunch room and we have been lost in love ever since.

The dinner party was great. There must have been close to 300 people dressed in their finest: luminaries, co-workers, friends, and foes alike. We didn’t get a chance to talk to everyone but the people we did talk to were very flattering and left my wife with the impression that I’m a Very Important Person. Thanks for that by the way.

The auction was the fun part. The funniest bit was John Cooley paying $200 for a box of wine and the auctioneer responding that it was $150 more than he paid for his suit! The CEO lunch auctions were interesting. Wally Rhines went for the most, $2,500, to none other than ex-Cadence CEO Joe Costello. That would have been my choice too, but I have lunch with Wally for free so there was no need to bid. Here is a quick Wally lunch story: I was having lunch with a friend from Altera and Wally came to our table and chatted a bit. After Wally left my friend said, “Well done Dan, where did you get the Wally impersonator?”

The highest bid auction item went to me: the pit crew package for Flying Lizard Motorsports. eSilicon is one of the team sponsors so Jack Harding donated this one. While being on a Porsche racing pit crew was not on my bucket list it should have been. I have driven Porsches for the past 25 years and dream of driving one 100+ MPH (legally). I also have the utmost respect for Jack Harding. I worked with him at Zycad many years ago and, as the founder of eSilicon, Jack revolutionized the ASIC business model enabling more design starts than any other person I know.

All in all, it was an expensive dinner with the SemiWiki sponsorship, private donation, auction, and a new dress for my wife, but as you can see by the picture above it was well worth it! Getting EDA into the Computer History Museum is now my favorite cause and I can’t wait to take my family to see it when it is complete since I am part of that history. Thank you EDAC!




TSMC ♥ Synopsys (HSPICE)

by Daniel Nenni on 10-24-2013 at 5:05 am

In case you haven’t noticed, Synopsys has been in the press lately talking about their relationship with TSMC. Since I’m an internationally recognized industry expert, they gave me a call for a briefing and I was happy to take it. Staying connected with the #1 EDA company is important and fun, since I get to ask questions that most people don’t dare ask. Did you know that 90% of the 20nm tape-outs used Synopsys tools? Not surprising at all. But first let’s set the stage with what happened at the TSMC OIP conference earlier this month.

TSMC named a simulator company its partner of the year for two reasons. One is modeling accuracy, which is key to silicon correlation; two, simulation is now the center of the EDA universe. Case in point: SemiWiki hosted a webinar on Device Noise Analysis of Switched-Capacitor Circuits and it maxed out at 100 people very quickly. That’s a big number for a 4:30pm PST webinar. Simulation is king, absolutely. Unfortunately, as design cycles shrink and mixed-signal circuits are forced down to leading-edge process nodes, the simulation challenge is increasing, and that spells opportunity for Synopsys.

The most interesting point of the conversation with Synopsys was about the Magma FineSim simulators. Not only are they actively supported, they are still in development. Customers love FineSim, so this was big news to me. We also talked about HSPICE, which is still the golden simulator for all of the foundries, including TSMC, and I don’t see that changing ever. HSPICE was my absolute favorite product when I worked for Avant!; people loved it and still do. Synopsys is also still supporting and developing CustomSim, which includes the simulators that were acquired from Epic and Nassda. Who knew?

The other interesting discussion was about the Laker tools. The Laker layout tool has now been married to the Synopsys circuit design environment. That means much tighter integration with Synopsys simulation, definitely a win-win. Considering Cadence Virtuoso has 80%+ market share, I’m happy to see a viable alternative from the #1 EDA company, absolutely.

We also discussed the IPL Alliance (Interoperable PDK Libraries), an industry standards organization established to develop an interoperable ecosystem for custom design (iPDK). This has been the biggest disappointment for me as a Strategic Foundry Relationship Expert (my day job). The foundries support iPDK and the EDA companies support iPDK (with the exception of Cadence, of course). The problem is that the top semiconductor companies do not support iPDK, and without customer demand it just isn’t going to happen. PDKs are a critical part of the fabless semiconductor ecosystem. Design starts are critical to the fabless semiconductor ecosystem. Open PDKs enable design starts. So what is the problem here?

As I write this I’m at the IEEE Standards Symposium on EDA Interoperability (formerly the EDA Interoperability Forum) with Paul McLellan. Paul ran the Virtuoso group at Cadence after the Ambit acquisition in 1999, so he should have much more to say about this.




Tablets, smartphones & China still driving growth

by Bill Jewell on 10-23-2013 at 10:36 pm

Media tablets and smartphones have been the two most significant drivers of electronics and semiconductor growth for the last few years. Forecasts from two major market research firms indicate these devices will continue to be major drivers for the next few years. For 2013, Gartner and IDC (International Data Corporation) both expect tablet units will grow over 50% from 2012, while PCs should decline 8% to 10%. IDC forecasts 2013 smartphone unit growth of 40%, compared to total mobile phone growth of 7.3%.

For 2014, Gartner projects tablet growth of 43% while PCs should be flat. IDC’s compound annual growth rate (CAGR) from 2013 to 2017 for tablets is 16% and for smartphones is 14%. IDC expects smartphones will account for over 50% of total mobile phone units in 2013 and 75% in 2017. Tablet units should pass PC units by 2015 or 2016.

How important are tablets and smartphones to semiconductor growth? Our estimate at Semiconductor Intelligence for 2013 semiconductor market growth is 6%, resulting in a market of $309 billion, an increase of $17 billion from $292 billion in 2012. IDC estimates 84 million more media tablets will be shipped in 2013 than in 2012, and smartphone shipments will increase by 289 million units. A conservative assumption of $60 in semiconductor content per tablet or smartphone means the increase in semiconductors in 2013 will be about $5 billion from tablets and $17 billion from smartphones. Thus the $22 billion semiconductor market increase from tablets and smartphones exceeds the total market growth of $17 billion. Tablets and smartphones are not the only devices driving semiconductor growth, but they are certainly the most significant.
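The estimate above can be reproduced directly from the unit and content figures quoted (the $60-per-device content is the article's own assumption):

```python
content_per_unit = 60          # assumed $ semiconductor content per device
tablet_increase = 84e6         # additional tablets shipped in 2013 (IDC)
phone_increase = 289e6         # additional smartphones shipped in 2013 (IDC)

tablets_usd = content_per_unit * tablet_increase   # ~$5.0B from tablets
phones_usd = content_per_unit * phone_increase     # ~$17.3B from smartphones
total = tablets_usd + phones_usd                   # ~$22.4B combined
print(f"${tablets_usd/1e9:.1f}B + ${phones_usd/1e9:.1f}B = ${total/1e9:.1f}B")
```

The combined ~$22 billion indeed exceeds the $17 billion total market increase, which is the point: other segments are, on net, roughly flat.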

In terms of geographic markets for semiconductors, China continues to be the key driver. The chart below shows three-month-average change versus a year ago (3/12 change) in electronics production (valued in local currency) for the U.S., Japan and China. Also shown is total industrial production 3/12 change for Europe, Taiwan and South Korea. The data is from government sources in each region. Worldwide semiconductor 3/12 change from WSTS is shown for comparison. The latest available data is from July for China and the U.S., September for Taiwan and August for all other data points.


China continues to show strong growth in electronics production, with growth of 10% or higher for the last several years. Japan is the weakest country, with double-digit declines in electronics. U.S. electronics production dropped 7% in July after being close to flat for most of the year. Industrial production turned slightly positive in August for South Korea and in July for Taiwan, while Europe continues at about a 1% decline. Semiconductors were up 6% in August, the strongest 3/12 growth since March 2011.

While it is somewhat of an oversimplification, the semiconductor market continues to rely on tablets and smartphones for overall market growth and China for growth in electronics production. These trends are likely to continue for at least the next few years.



Webinar on IP Lifecycle Management

by Daniel Payne on 10-23-2013 at 5:44 pm

EDA and semiconductor companies are offering new webinars almost every week of the year, so there’s always something worth learning that only takes an hour of your time. On November 5th there’s an interesting webinar planned on the topic of IP Lifecycle Management, hosted by Methodics. I blogged two weeks ago about Managing All of that IP on Your SoC, based on a recent white paper. I recommend attending the webinar if your SoC projects include dozens to hundreds of IP blocks and you want to move from ad-hoc IP management to something more structured and productive.


IP Lifecycle Management

SoC design has changed radically over the last 10 years, moving from the integration of small amounts of external IP with significant amounts of unique internal design to one in which the majority of the SoC is external IP integrated with a small amount of highly differentiated internal design. With this change, an SoC’s complexity has shifted to the process of how these IPs are integrated, configured and managed.

Given this complex ecosystem of internal and external IP, the critical nature of IP to the design, and its continually evolving nature, SoC design needs to transform to embrace this new IP-centric reality or face unacceptable levels of risk (a single bad IP is all that is needed to break an SoC). This requires an end-to-end view of how IP is created, verified, distributed and integrated.

This webinar will define “IP Lifecycle Management”, its core components, and how it ultimately impacts SoC design. There is a registration process for the webinar.




Kathryn: "Formal Will Dominate Verification"

by Paul McLellan on 10-23-2013 at 4:16 pm

At the Jasper Users’ Group meeting, Kathryn Kranen presented the state of Jasper. The numbers are impressive. The company has grown at a CAGR of over 35% since 2007, which is 6 times faster than EDA as a whole. They have been profitable at 15-20% EBITDA for 14 consecutive quarters.

Jasper is focused on engaging deeply with a small number of customers who are committed to proliferating formal approaches (as opposed to selling a couple of licenses to everyone). This shows up in some other metrics. Since 2010, while revenue has grown at a CAGR of 35%, the number of customer logos has only grown at 11%, the number of users has grown at 79% and the number of licenses has grown 129% (and that is a CAGR, so the number of licenses in use has gone up by something like a factor of 10 since 2010). So basically customers are spending more each year and spreading formal methods across their companies. The price per license is going down with volume, so customers are getting huge value.
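The multipliers implied by those CAGRs are easy to verify (treating 2010-2013 as three compounding years):

```python
def growth_multiple(cagr, years):
    """Total growth factor implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# Licenses at 129% CAGR over the three years 2010-2013:
print(growth_multiple(1.29, 3))   # ~12x, i.e. "something like a factor of 10"
# Revenue at 35% CAGR over the same period:
print(growth_multiple(0.35, 3))   # ~2.5x
```

Licenses growing ~12x while revenue grows ~2.5x is exactly the "price per license goes down with volume" dynamic described above.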

The size of designs that formal approaches can cope with has been growing faster than Moore’s Law, with a CAGR of over 100% compared to Moore’s Law’s roughly 40%. As a result of this, plus having seen something similar before when she was at Verisity, Kathryn concludes that formal will dominate verification.


What makes her so sure?

  • a scalable technical strategy has been reached
  • market acceptance has reached critical mass
  • ROI is understood and clear
  • areas of applicability continue to expand


Jasper themselves have also reached a sort of critical mass, whereby they have enough customers doing enough different types of design that they can leverage requests from one customer into more generally applicable apps that can be used by everyone. This is Jasper’s Open Innovation Framework, a name that comes from a book by Henry Chesbrough. The core principle is:

JasperGold’s underlying architecture enables experimentation, exploration, customization and feedback. Analogous to agile software development, this allows rapid delivery of partial solutions.

JasperGold’s architecture:

  • multiple representations of the design netlist within a single database
  • an operations stack that allows reversible actions
  • a fully programmable API that exposes transformations and traversals at multiple levels
  • direct connection of the waveform view to the engines and the design representation (Visualize)
  • engine collaboration and multi-tasking through ProofGrid
  • an Apps architecture to encapsulate methodologies for specific solutions

We are not there yet, of course. But at some point in the future, formal will be the default choice for every verification task in the way that simulation/emulation is today. The tool will have the capability to selectively apply the right heuristics for each situation. As a result, engineering productivity will skyrocket.




3DIC, the World Goes to…Burlingame

by Paul McLellan on 10-23-2013 at 2:09 pm

For the tenth year, the big 3DIC conference takes place at the Hyatt Regency in Burlingame (just south of San Francisco Airport). Officially it is 3D Architectures for Semiconductor Integration and Packaging, or ASIP. This year there have already been some significant 3D announcements: TSMC’s 3D program and Micron’s Hybrid Memory Cube. If you want to know the current state of the art, not just in design but in the supply chain, then this is the one-stop shop for all things 3D. It takes place all day December 12-13. There is also a pre-conference symposium the afternoon before, December 11, on silicon photonics.

The conference presents a broad perspective on the technical and market opportunities and challenges offered by building devices and systems in the vertical dimension, and provides participants the unique opportunity to gain the latest technology and market insights on 3D integration and packaging efforts, and technology and industry trends impacting this dynamic arena.

3D ASIP targets senior-level technologists, managers, and executives as speakers and attendees from leading companies and organizations from around the world, and strives to serve the needs of the entire 3D supply chain, from technology developers to equipment and materials suppliers to designers, manufacturers, and end users. All speakers are invited.

The format of the conference and its presentations enable speakers to present the most up-to-date and forthright perspectives possible, and gives exceptional opportunities to network with and learn from other senior-level technology and business leaders.

In 2012, speakers, sponsors, and attendees representing 130 companies, organizations, and universities attended 3D ASIP.

The Thursday keynote is by Doug Yu of TSMC: Déjà Vu – Wafer Level System Integration Technology.

The Friday keynote is by Kaivan Karimi of Freescale: The Role of Advanced 3-D Packaging Architectures in Support of the Internet of Things (IoT) Edge/Sensing Node Devices.


Hierarchical Clock Domain Crossing

by Paul McLellan on 10-23-2013 at 1:31 pm

One of the first blogs I wrote on SemiWiki was on clock domain crossing (CDC). I thought it was rather a specialized subject, a sort of minority interest. It turned out to be one of the most-read blogs I’ve written. Modern SoCs have lots of unrelated clocks, maybe hundreds, and so ensuring that signals going from one clock domain to another are correctly handled is not a minority interest at all, it is right in the mainstream. Design and verification teams spend a huge amount of time on verifying the correctness of asynchronous boundaries on the chip. Incorrect asynchronous boundaries can cause multiple design defects not encountered in traditional single-clock-domain designs.


Metastability is one of the major defects. A flip-flop is metastable if its clock and data change very closely in time causing the output to be at an unknown logic value for an unbounded period of time. While metastability cannot be eliminated, it is usually tolerated by adding a multi-flop synchronizer to control asynchronous boundaries and using those synchronizers to block the destination of an asynchronous boundary when its source is changing. FIFOs, 2-phase and 4-phase handshakes are typical structures used for this type of synchronization.
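The benefit of the multi-flop synchronizer mentioned above can be quantified with the standard metastability MTBF model, MTBF = e^(t_r/τ) / (T_w · f_clk · f_data): each added flop stage gives the signal another clock period to resolve, growing MTBF exponentially. The device constants below are invented, roughly plausible numbers for illustration only:

```python
import math

def synchronizer_mtbf(t_resolve, tau, t_window, f_clk, f_data):
    """Mean time between metastability failures (seconds).

    t_resolve: time available for the flop to resolve (s)
    tau:       metastability resolution time constant of the flop (s)
    t_window:  metastability capture window of the flop (s)
    f_clk, f_data: clock rate and data toggle rate (Hz)
    """
    return math.exp(t_resolve / tau) / (t_window * f_clk * f_data)

# Hypothetical constants: 1 GHz clock, 100 MHz data toggling.
one_cycle = synchronizer_mtbf(1e-9, 20e-12, 10e-12, 1e9, 100e6)
two_cycle = synchronizer_mtbf(2e-9, 20e-12, 10e-12, 1e9, 100e6)  # extra flop
print(f"{one_cycle:.2e} s vs {two_cycle:.2e} s")
```

The extra resolution cycle multiplies MTBF by e^(1ns/20ps) in this sketch, which is why two-flop (or deeper) synchronizers are the workhorse structure for asynchronous boundaries.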

Glitches on asynchronous boundaries cause defects as well. A glitch on an asynchronous crossing can cause the capture of an incorrect signal transition. Data coherency issues occur in a design when multiple synchronizers that have settled to their new values in different cycles interact in downstream logic. While the concepts and methodologies for verification of such issues have been extensively researched in the past ten years, little work has been attempted to tackle clock domain crossing (CDC) verification signoff of large system-on-chip (SoC) designs such as the design below.


There are three main methodologies used to verify CDC correctness:

  • flat CDC verification
  • hierarchical bottom-up CDC verification
  • hierarchical top-down CDC verification

In flat CDC verification, the entire SoC is verified in a single run. Flat SoC verification covers all the critical issues I discussed earlier: metastability, glitches and loss of coherency, in addition to functional requirements of the asynchronous interfaces and other critical issues across data, control, clock and reset circuitry. The size and complexity of a design is no excuse for missing a CDC bug. The main advantage of flat SoC verification is setup simplicity. Typically, clocks, modes and other design constraints are available at the chip level, so design setup for CDC verification is straightforward. The big problem, though, is that the whole chip is only put together late in the design cycle, which means that errors that could have been caught early, when there is slack in the schedule to fix them, are instead caught later, when the fix is on the critical path to tapeout.

In hierarchical bottom-up CDC verification, blocks are verified as they are completed. As blocks are assembled to build subsystems and finally the SoC, the verification is scaled to the subsystem or SoC level leveraging the information available from the verification of blocks previously verified. This approach finds problems early and is especially well-suited to distributed development where many of the blocks are developed in different groups from the group putting the whole SoC together, a typical way of working for modern semiconductor companies.

Sometimes, SoCs are developed in a top-down manner where top-level constraints are created very early in the design cycle and blocks are then developed and gradually integrated to complete the SoC design. In such a design flow, the early availability of SoC constraints can be leveraged for effective top-down CDC verification. In other words, CDC verification can be applied to the top-level SoC and the CDC issues can be associated with specific blocks, or with inter-block boundaries. Note that this is different from simple flat SoC verification in the sense that block boundaries and owners are known, and the owners take responsibility for analyzing reported issues and fixing their blocks. In some sense it is more of a requirements-driven design process, pushing CDC issues down into the block design teams along with all the other things like timing and power budgets.

Since a single CDC bug can kill a chip, having a disciplined approach to CDC verification and doing it in a way that fits in with the approach already being used to design and assemble the blocks is important.

An Atrenta white paper CDC Verification of Billion Gate SoCs is here.




Qualcomm start selling DSP IP core?…

by Eric Esteve on 10-23-2013 at 7:41 am

In recent times semiconductor companies have revealed their intentions to license their in-house processor architectures for the first time: IBM wants to license their Power CPU architecture, and Nvidia to license their GPU architecture. Most recently, a rumor has surfaced that Qualcomm will license their DSP architecture. We should note that this rumor has not been confirmed or commented on by the company, and the reason is simple: we don’t think that Qualcomm has any interest in licensing their DSP, in the short term or the long term. We’re going to look at the challenges and requirements faced by a semiconductor company that wants to enter the DSP IP licensing marketplace and compete with established DSP IP licensors such as CEVA and Tensilica.

The DSP market is a fragmented and diverse one that requires a broad set of solutions, with flexibility and scalability being essential. It is not a market where a “one size fits all” DSP approach meets the broad set of requirements of different end markets. Essentially, to license DSP, a company needs to offer a portfolio of application-specific DSPs, each tailored and maintained to most efficiently address the end market that it is designed for. The DSP IP vendors in the market today, including CEVA, offer customized, application-specific platforms consisting of hardware and software offerings for the unique needs of applications such as LTE/baseband, WiFi/connectivity, imaging and vision, voice and audio.

Qualcomm’s products, like the Snapdragon processors, perform extremely well but are tailored for a specific market: application processors for mobile devices. For example, LTE and vision applications require a unique vector instruction set architecture, and must enable special data flow and processing offloading. In the audio/voice domain, the low power requirements of always-on use cases cannot be met with a VLIW / multi-threading processor such as Qualcomm’s QDSP or Analog Devices’ Blackfin DSP; they require a much lower power and smaller footprint DSP. Why would Qualcomm address other market segments to sell a DSP IP, and develop hardware and software offerings specifically tailored for these segments?

Just as important today for a company looking to license a DSP is the availability of value-added software, delivered in source code format with modification rights for licensees. Would a semiconductor leader like Qualcomm be willing to deliver its proprietary software and algorithms to a potential licensee of their QDSP architecture? After all, without this software, customers will hesitate to license just the hardware from a company they may see as a competitor. Will any semiconductor vendor or OEM feel comfortable licensing a DSP from a competitor? Moreover, what could be the benefit for Qualcomm in licensing its proprietary software to a direct competitor, when the company spends a lot of money, through acquisition (AMD’s mobile graphics unit for $65 million in 2009) or internal development (an ARM-architecture-compatible CPU), just to make sure their products differentiate?

Developing a software ecosystem is another challenge wireless semiconductor companies will face if and when they license their DSP. DSP IP licensor CEVA has invested thousands of man-years into developing its ecosystem, and now has a massive developer community around its DSPs. This has been a key reason why CEVA DSPs have shipped in more than 4 billion handsets, smartphones and other types of computing, communications, video, imaging, gaming, entertainment and automotive products. Speaking of a well-known CPU IP vendor, ARM, the company has invested over the long term to develop an ecosystem around their CPU IP, and more than 1,000 companies are now part of this ecosystem. Any company willing to develop a CPU or DSP IP core can do it, provided the right engineering resources are available. But should the company try to address the market as an IP vendor, it may succeed in some niche markets, yet it will fail in the mainstream market. Why? Because a new IP vendor can’t rely on a strong ecosystem.

So to sum up, most customers looking to license processors for CPU, GPU or DSP functionality in their SoC designs are not looking for just a blueprint of a processor architecture. They are looking for a solution to address their design requirements, from tools through to system integration capabilities and even the software that runs on the processor. Semiconductor companies and OEMs looking to license a DSP want a combination of a special-purpose DSP architecture, value-added software and a robust ecosystem that can meet the power, performance and area needs of their targeted applications.

Why would a successful semiconductor company like Qualcomm, strongly investing to build differentiation, step into the processor IP licensing realm, offering direct competitors some of the nuggets (DSP or GPU IP cores) integrated into its most successful products? We can guess that the Snapdragon processor product line generates several billion dollars of revenue each year. Can anybody really think that Qualcomm would put at risk such a large share of their revenue to generate a few tens of millions of dollars (at best, if DSP IP sales are really successful)?

Scanning the web, I found a document written by Qualcomm. Here is an extract:

“Qualcomm’s Hexagon DSP SDK enables developers and manufacturers to create compelling user experiences by improving audio, imaging, computer vision and video performance on devices powered by Snapdragon processors. “

This document is probably the source of the rumor, but if you read it with attention, you will notice that the important wording is: “…devices powered by Snapdragon processors.” My opinion is that Qualcomm wants to sell ICs like Snapdragon, not DSP IP cores, and that licensing their DSP (or GPU) would be strategic nonsense. As far as I know, Qualcomm’s strategy has allowed the company to build a $12 billion IC business from scratch in less than 20 years. There wasn’t any room for nonsense in that strategy; why would the company start now?

Eric Esteve from IPNEST


