
CEVA in More than 6 Billion Chips!

by Daniel Nenni on 08-05-2015 at 8:00 pm

One of the IP companies that I track is CEVA, the largest licensor of DSP cores and the fifth largest IP company overall, behind ARM, Synopsys, Imagination Technologies, and Cadence (Lattice acquired Silicon Image). CEVA is actually a combination of companies, starting with the merger of DSP Group's licensing division and Parthus Technologies in 2002 and adding RivieraWaves (Wi-Fi and Bluetooth) in 2014. Today CEVA has signed 375+ license agreements that have resulted in more than 6 billion chips shipped around the world, serving the mobile, consumer, automotive, and IoT markets. CEVA has been with SemiWiki since 2012, with 60 blogs published to date and viewed 231,626 times, so we know them quite well.

First let’s take a quick look at the Q2 financials:

  • Total revenue was $13.4 million (45% YoY increase)
  • Licensing and related revenue was $7.7 million (76% YoY increase)
  • Royalty revenue was $5.7 million (17% YoY increase)

And the most interesting comment during the investor call to me was:

“From CEVA’s perspective, the smartphone market opportunity is under-exploited. IDC estimates that over 8.5 billion smartphones will be sold from 2015 through 2019. According to GSMA Intelligence, 2G technology still accounts for 58% of the world’s 7 billion mobile connections. This large installed base is a prime candidate for upgrade to 3G and LTE smartphones.”

I agree wholeheartedly, and to highlight the point, three key CEVA customers were also discussed: Samsung, Xiaomi, and Intel. With the advent of chip teardowns (Chipworks and iFixit), the building blocks of SoCs are no longer secret. Xiaomi, the largest smartphone vendor in China, is using CEVA for LTE, as is Samsung, the largest smartphone vendor in the world. Even Intel, the largest semiconductor company in the world, uses CEVA for LTE in its SoFIA SoCs and XMM7360 modems.

The other part of CEVA’s business is the newly acquired RivieraWaves Bluetooth and Wi-Fi IP, which you may have seen advertised on SemiWiki. According to CEVA, RivieraWaves provides the industry’s lowest-power Bluetooth IP, compatible with any MCU/CPU and RF available on the market today. RivieraWaves Wi-Fi, on the other hand, offers a comprehensive suite of platforms for embedding Wi-Fi 802.11a/b/g/n/ac into SoCs and ASSPs. Optimized implementations are available targeting a broad range of connected devices, including smartphones, wearables, consumer electronics, smart home, industrial, and automotive applications.

For a more detailed look at RivieraWaves Wi-Fi, read: Apple Watch Design Revisit with a Wi-Fi Twist. SemiWiki blogger Majeed Ahmad did a really nice job on that one.

About CEVA, Inc.
CEVA is the leading licensor of cellular, multimedia and connectivity technologies to semiconductor companies and OEMs serving the mobile, consumer, automotive and IoT markets. Our DSP IP portfolio includes comprehensive platforms for multimode 2G/3G/LTE/LTE-A baseband processing in terminals and infrastructure, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (Smart and Smart Ready), Wi-Fi (802.11 b/g/n/ac up to 4×4) and serial storage (SATA and SAS). One in every three phones sold worldwide is powered by CEVA, from many of the world’s leading OEMs including Samsung, Huawei, Xiaomi, Lenovo, HTC, LG, Coolpad, ZTE, Micromax and Meizu. Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.


An Open Letter to Qualcomm CEO

by Pawan Fangaria on 08-05-2015 at 12:00 pm


Dear Steve,

Let me first clarify that I am a humble blogger at SemiWiki and admire your company as the #1 fabless semiconductor company and #4 in overall semiconductor sales among the top 10 semiconductor companies in the world, per 2014 data. I must also mention another point of admiration: your company is #2 in R&D expenditure. I neither own a single Qualcomm share nor recommend that anyone buy or sell. However, observing the turmoil your company is going through this year, I would hate to see it drop out of the top 10 list or even fall to a lower rank within it. So I thought of writing this sincere, open letter to you in the spirit of what Qualcomm could best do to stay relevant and even win the war in the near future. Of course, there is a substantial struggle ahead, but even a single wrong step can lead the company into further trouble. Consider it from a completely neutral perspective; set aside the pressure from various sides for a while.

It takes years of effort, energy, resources, and more to build a business of Qualcomm’s scale. It must not be destroyed. Remember the BCG matrix? A large organization generally has all kinds of businesses – Stars, Cash Cows, Dogs, and Question Marks. In Qualcomm, as I see it, there is a great Cash Cow and a Star that has moved into Question Mark territory. Fortunately, I do not see any Dogs. So the situation is not as bad as it is being projected. Truly speaking, these integrated elements of an organization cannot be separated. The Cash Cow has to feed the others and get fed in return; there needs to be unquestionable integrity in that. Yes, there is pressure from investors to split the cow from the star in question. Investors care about short-term returns on their investments. If they can be pacified by other means, so be it; in the long run they would realize that ‘not splitting’ was a blessing in disguise. Also, a split can take the form of actually ‘not a split’. Here are the strategies which can be adopted for a win-win from all sides –

Form a Holding Company – Constitute a new Qualcomm Holding Company (QHC). Provide autonomy to QCT and QTL to run as stand-alone businesses with their own boards and equity capital. QHC will own both QCT and QTL, with board participation from QCT and QTL as well as institutional investors. JANA Partners already has two representatives on the Qualcomm board; they can very well be represented in QHC. By doing this, Qualcomm doesn’t have to actually sell any of its units, and at the same time it can bring more returns to investors. QTL and QCT should keep complementing each other as they have been doing; earnings will differ because of the different business models, but investors will be compensated according to their holdings in each.

RIF is already on the horizon – You have already announced a RIF. That can provide a breather on cost cutting and pacify investors. However, my sincere opinion would be to cut where the fat is. Large organizations often make the mistake of cutting in the wrong places. You have to take the pain of identifying the fatty regions in your organization before cutting. If you do that, the cuts will be right and will provide short-term as well as long-term benefits.

Partner instead of straight M&A – I understand you want to diversify to bring higher returns to your investors. This can be done in more creative ways than acquiring a company. There are rumours about Qualcomm acquiring AMD to get into the server business. This is a good strategy, as it aligns well with your IoT plan, which is gaining strength on the gateway side, and the server business can let you enter the cloud. However, in my view, acquiring AMD and wielding it against Intel to win the server business could be a long haul.

Similarly, other companies are being talked about as acquisition targets of Qualcomm, and QCT itself is being talked about as an acquisition target of Intel. I will not go into those details here. My simple suggestion to you would be along the following lines –

Identify your immediate battleground: it’s not the server business, it’s the China region, which is key to the mobile processor business today. The mobile business is QCT’s backbone, so protect it. What’s the key issue there? A price war. Who are the main players? You, Intel, Spreadtrum, and MediaTek. You know your main rival there. Now identify a player among the others who is strong and may need your help. There you go.

Intel too wants a piece of the mobile market. They are aggressively pursuing the China market, rightly. However, they are losing substantial dollars in the mobile business. Given a favourable deal, in my opinion, they would be willing to pursue one. Qualcomm has the mobile technology, but you need to cut costs. You have a chance to share mobile technology with Intel and enter a manufacturing pact with Intel to cut costs. Both companies can make it a win-win in the mobile business. Eventually, you will have an answer for your high-end turf in the mobile business too. Today, the route to the mobile business starts in China!

Coming to IoT, you stand a great chance to win with your powerful connectivity solution from the Atheros acquisition. The QCA401x is an ideal power- and cost-optimized chip for IoT applications, with the right amount of memory for holding data and interfacing with several devices. It also has security and various communication protocols, including Wi-Fi, IPv6, and HTTP. Similarly, the QCA4531 is a low-cost solution leveraging a Linux environment, good for the hub in IoT systems. Also, with your next-generation 4G LTE and the secure Bluetooth and GPS technology from the CSR acquisition, you can bring new dimensions to IoT connectivity, short-range as well as long-range. You are good at the gateway solution. To enter the cloud, again, get into some kind of agreement with Intel for servers at this point in time; you can expand later. The PC business is a no-no; it will not provide growth. For IoT edge devices, just wait for a while. The market will be flooded with edge devices a few years from now, and you can get better value at a lower price for an edge-device solution at that point.

I am leaving it open for the audience to share their views. Their unbiased views can definitely go a long way in rebuilding the pioneer, innovator, and trend-setter of the fabless world.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Acoustic Resonators for RF: MEMS with No Moving Parts

by Paul McLellan on 08-05-2015 at 7:00 am

There is an annual conference known officially as the Sensors and Actuators Workshop and informally as Hilton Head, since it is held on Hilton Head Island in South Carolina. Coventor talked to some of the top researchers there last year about RF filters and decided to develop a simulation solution that would better serve both researchers and commercial designers. They recently delivered with the CoventorWare 10 release, which includes a new (and unique in the industry) fast analysis capability for acoustic resonators.

Market pressure for RF filters to be compact and inexpensive, yet meet the higher performance requirements of the 4G standards, has spurred great interest in novel filter design concepts that further miniaturize the transmit/receive chain of RF front ends. The first successful filters were surface acoustic wave (SAW) filters, but in the past few years the number of bulk acoustic wave (BAW or FBAR) filters within a handset has grown rapidly to fill the market demand above 1 GHz, where SAW device performance degrades. Most recently, bulk-mode resonators that vibrate in plane to allow multi-frequency filters on one substrate have garnered significant interest within the research community. These in-plane bulk-mode resonators go by various names, such as contour-mode resonators (CMR) and laterally-vibrating resonators (LVR).

These acoustic resonators have become more popular as the number of filters in a typical mobile device has increased to around 30 per phone. The best known success story in the space has been Avago’s FBAR which proved the higher performance of bulk-mode devices compared to conventional SAW filters.

Acoustic resonators are designed using MEMS techniques even though they do not have any moving parts. It is one thing to be able to design them, but there has been a major missing piece of the jigsaw: computationally efficient algorithms to perform frequency-sweep simulations in a reasonable time. Current solutions literally take days, and that is for a 2D simulation, which is not good enough; 3D takes so long as to be impractical. This obviously limits both the amount of exploration that can be done and the accuracy with which performance can be predicted.
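To make concrete what such a frequency sweep actually computes, here is a minimal sketch using the classic Butterworth–Van Dyke (BVD) equivalent circuit for a piezoelectric resonator. This is not Coventor's FastPZE algorithm, and the component values (chosen to put resonance near 2 GHz) are illustrative assumptions; the brute-force point-by-point evaluation below is exactly the kind of work that fast-sweep algorithms accelerate for full 3D models.

```python
# Hedged sketch: frequency sweep of an acoustic resonator via the
# Butterworth-Van Dyke (BVD) equivalent circuit. Component values are
# illustrative, not taken from the article or any real device.
import numpy as np

R_M, L_M, C_M = 1.0, 100e-9, 63.3e-15   # motional branch: ohms, H, F
C_0 = 1.0e-12                            # static plate capacitance, F

def bvd_impedance(f):
    """Complex impedance of the BVD model at frequency f (Hz)."""
    w = 2 * np.pi * f
    z_motional = R_M + 1j * w * L_M + 1 / (1j * w * C_M)
    z_static = 1 / (1j * w * C_0)
    return z_motional * z_static / (z_motional + z_static)

freqs = np.linspace(1.8e9, 2.2e9, 20001)   # sweep around resonance
mag = np.abs(bvd_impedance(freqs))

f_series = freqs[np.argmin(mag)]     # series resonance: |Z| minimum
f_parallel = freqs[np.argmax(mag)]   # parallel resonance: |Z| maximum
print(f"fs ~ {f_series / 1e9:.3f} GHz, fp ~ {f_parallel / 1e9:.3f} GHz")
```

The separation between the series and parallel resonances, set by the ratio C_M/C_0, is what determines a filter's achievable bandwidth; a real 3D simulation evaluates a full piezoelectric finite-element model at each frequency point instead of this four-element circuit, which is why the sweeps take so long.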

Matt Kamon of Coventor told Professor Songbin Gong at the University of Illinois about the proprietary fast frequency-sweep algorithms developed specifically for MEMS piezoelectric resonators, claiming Coventor could simulate his 3D designs in minutes where his current tool took hours, and do in hours what previously took days. Since the Hilton Head 2014 conference, Professor Gong and his group have worked with Coventor to polish this offering specifically for rapid design of cutting-edge acoustic resonators.

Coventor have named the offering “FastPZE” and it is part of CoventorWare 10. The diagram above shows the accuracy: the red line shows conventional simulation (actually the red circles are the only points simulated; the line just joins them up), while the black line, which shows a lot more detail, is the result of the FastPZE algorithm. Both simulations took the same amount of time.

Another point to emphasize is that designs are not necessarily constructed with Manhattan geometries. A design may consist of arbitrary polygons or curves. In some simulation tools, this would require a mesh made of tetrahedrons. Unfortunately, since these are thin-film devices, this can lead to an extremely large number of mesh elements and consequently very long simulation times. CoventorWare, in contrast, provides very efficient meshing for thin-film devices of non-Manhattan shape with straight or curved edges as shown in the above diagram.

Download Matt’s white paper Fast Acoustic Resonator Analysis for the Rapidly Growing Premium RF Filter Market here.


China (and Cupertino) Are Killing Korea in Mobile

by Paul McLellan on 08-04-2015 at 7:00 am

Samsung, #1 in mobile phones by unit shipments, has two big problems in mobile: Apple’s iPhone, and China in general and Huawei in particular in the Android world where they live. They have just announced their fifth straight quarter of decline. Revenue was down 8% year on year, but operating profit declined 38%. They sold 89M handsets (of which about 20% were non-smartphones, so maybe 72M smartphones). Their big problem market is China, where they used to be the market leader but are now behind Apple, Huawei and Xiaomi (and maybe Lenovo/Motorola).

One problem is that the Galaxy 6 is not selling as well as expected. I’ve read that a problem with the Galaxy 6 is that Samsung pretty much cloned an iPhone. Since the iPhone doesn’t have a replaceable battery or a memory card slot and is not waterproof, those things don’t matter, right? The problem is that the Galaxy 5 had all of those things, and probably some of its success was due to people who cared about them picking Galaxy over iPhone. After all, if you want an iPhone then why not just buy one; Apple will be happy to sell you one. But if you really want something different from the iPhone, then how about the curved screen – buy the “edge” version. The problem is that you can’t, since, according to Bloomberg anyway, Samsung failed to produce enough curved screens to satisfy demand. They say they will increase their capacity for curved screens early next year (horses and stable doors spring to mind).

Or if you want a replaceable battery, waterproofing, or a memory slot, then you have lots of choices, and Huawei seems to be one of the beneficiaries. Analysts were expecting them to sell 50M smartphones in the first half, but they just announced that they sold 10M phones in each of May and June, which is clearly a higher run-rate; they must be close to 30M smartphones for the quarter. And, yes, they are profitable. They look to be #3 with 7% market share, ahead of Xiaomi, who are now trying to sell in markets beyond China and Singapore (such as India) where they don’t have existing distribution channels.

Samsung have said that they will further reduce prices on the Galaxy 6 to drive growth, but as Yoo Eui Hyung, an analyst with Dongbu Securities in Seoul, says: “Poor sales of S6 only proved that it can’t beat Apple in brand loyalty among users and just ended up being one of the many Androids. The price cuts may increase sales, but I highly doubt it could promise bigger profit growth.”

But there are two Korean cell-phone manufacturers. If Samsung is struggling, then surely LG should be a beneficiary. But they only sold 14M phones in Q2, down 8% from Q1, and are basically breakeven. Their flagship phone, the G4, has the features that Samsung dropped from the Galaxy 6, so it is odd that they haven’t picked up share. Since they are only breakeven, they have already executed what Samsung plans, namely selling phones at a low price. That can drive volume but not profit. LG’s profits fell 45%, mainly due to smartphone sales being down.

One area where Samsung is doing well is semiconductors. The switch from Qualcomm’s Snapdragon to the internally developed Exynos was widely reported, not least in veiled remarks on Qualcomm’s earnings calls. Between the application processor business and their memory business (plus some other smaller businesses), their semiconductor profits are up 83%. But Samsung Semiconductor needs Samsung Mobile to be successful: every Galaxy 6 sale lost to Huawei is an Exynos chip not sold. That dynamic may change a little going forward, since Apple’s 6S application processor is known (or at the very least strongly rumored) to be fabricated by Samsung Foundry. So Samsung Semiconductor would love the Galaxy 6 to be wildly successful, but when they lose a sale they want it lost to Apple, not Huawei or Xiaomi.

The Bloomberg piece is here.


Why Do Modern SoCs Need a Cache-Coherent NoC?

by Eric Esteve on 08-03-2015 at 4:00 pm

Launching a high-technology product on the semiconductor market after your competitors is not necessarily a weakness. NetSpeed has developed NocStudio, a front-end optimization and design tool that helps architects create SoC architectures while bridging the gap with the back end: floorplanning and place and route. Created about 20 years after Sonics and 8 years after Arteris, NetSpeed has capitalized on the positives (Sonics, and even more so Arteris, evangelized the semiconductor industry on how important NoC integration can be for design optimization and for avoiding routing congestion) and learned from its competitors’ weaknesses, the most crucial being the need for a NoC to support cache coherency in modern multi-core designs.

Approaching design teams working on System-on-Chip (SoC) products such as application processors for smartphones and the numerous SoCs developed for multimedia, network processing, servers, computing and so on, NetSpeed realized that its main competitor is the internal design team! NetSpeed estimates that about 80% of SoCs integrate internally developed solutions such as proprietary buses, crossbars, and fabrics.

NocStudio is a graphical tool that helps automate SoC design, generating a Gemini NoC for chips that have cache-coherent processor cores (CPUs, GPUs, or DSPs), or an Orion NoC for chips that don’t need cache coherence. Gemini supports up to 64 processor clusters and up to 200 other components that may be I/O coherent, enabling massively parallel chip designs with up to 256 CPUs.

Figure 1. NetSpeed’s Gemini network-on-chip.

NocStudio’s final output includes performance statistics, the RTL files required to synthesize the NoC, a C++ functional model, and verification test benches. By speeding the design process and reducing risk, NetSpeed’s tool helps cut costs and shorten time to market. NetSpeed’s On-Chip Network IP topology connects all the IP blocks in a preliminary floor plan that optimizes the design for performance, power efficiency, die area, low latency, and deterministic quality of service (QoS).

In addition, NocStudio is a correct-by-construction design tool that prevents fatal errors such as protocol- and network-level deadlocks (to learn more about why this front-end design tool is “correct by construction” and how the tool has been designed, just read this post). NocStudio provides a software layer that gives the SoC architect configurability in a coherent system at no risk. In fact, customers who want to configure a complex system must either know all of the details needed to configure it correctly, or have some method of ensuring that whatever they specify will behave correctly.
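One classic idea behind such deadlock-freedom checks (due to Dally and Seitz) is that a routing function is deadlock-free if its channel dependency graph is acyclic. The toy detector below is not NocStudio's algorithm, just a minimal sketch of that underlying principle, with hypothetical channel names:

```python
# Hedged sketch: network-level deadlock can be ruled out by showing the
# channel dependency graph (which channel may wait on which) has no cycle.
# This illustrates the Dally-Seitz criterion, not any vendor's tool.

def has_cycle(deps):
    """deps: {channel: set of channels it may wait on}. DFS cycle check."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in deps}

    def dfs(c):
        color[c] = GRAY
        for nxt in deps.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:     # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and dfs(c) for c in deps)

# XY routing on a mesh only ever turns from X channels to Y channels,
# so its dependency graph is acyclic and the network is deadlock-free:
xy_deps = {"x_east": {"y_north"}, "y_north": set()}
assert not has_cycle(xy_deps)

# An unrestricted routing function can create a circular wait:
bad_deps = {"a": {"b"}, "b": {"c"}, "c": {"a"}}
assert has_cycle(bad_deps)
```

A correct-by-construction tool effectively runs this kind of analysis (plus protocol-level checks) at configuration time, so a cycle is flagged when the architect draws it, not discovered in silicon.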


Figure 2. Designing a NoC with NocStudio.

Architects can drag and drop all the desired IP blocks into NocStudio’s main window. With each addition or modification, NocStudio automatically displays a script in the lower window that defines the IP blocks for the synthesis compiler. Alternatively, the script that defines the IP blocks can be written or edited manually using the command-line interface in the lower window.

To optimize traffic among IP blocks that may have different latency, bandwidth, or protocol requirements, NocStudio can vary the data-path widths from 8 to 1,024 bits and create up to 8 heterogeneous physical networks and 32 virtual networks. (Virtual networks appear as separate NoCs but use the same wires.) Because Orion and Gemini are intended mainly for ARM-based SoCs, they connect directly to IP blocks that support AMBA 4 (an AMBA 5 version is in development) and AXI protocols. NocStudio’s final output includes performance, power, and area statistics; the RTL files required to synthesize the NoC; a C++ functional model; and verification test benches.

Figure 3. Optimizing Orion.

Figure 3 shows the chip optimization on a real-life example, step by step: placement, layers, routing and channel optimization allow generating an SoC that dissipates 60% less power than with AMBA AXI interconnects.

NocStudio is a front-end optimization and design tool, and we think such tools will become unavoidable for today’s SoC designs, just as software compilers and RTL synthesis did before: the time-to-market pressure, combined with the incredible race toward ever higher complexity (hundreds of millions of gates, dozens of CPU/GPU/DSP cores) offered by the latest technology nodes, is now pushing a change in design methodology. It is no longer acceptable to discover deadlocks at tape-out (or, if you are even more unlucky, after TO), as a new iteration caused by such an architecture issue is not only costly but may jeopardize the SoC’s success and lead to a miserable ROI, simply because the TTM window has been missed.

Sooner or later, the industry will embrace front-end design tools that inevitably will look very much like NocStudio. Architects who need a scalable, high-performance, correct-by-construction SoC interconnect should evaluate NetSpeed’s technology, especially if the design requires cache coherence.

I encourage you to read the white paper “Automating Front-End SoC Design With NetSpeed’s On-Chip Network IP” by Tom R. Halfhill of the Linley Group.

Eric Esteve, IPNEST


More FPGA-based prototype myths quashed

by Don Dingee on 08-03-2015 at 12:00 pm

Speaking of having the right tools, FPGA-based prototyping has become as much about the synthesis software as about the FPGA hardware, if not more. This is a follow-up to my post earlier this month on FPGA-based prototyping, but with a different perspective from another vendor. Instead of thinking about what else can be done beyond just prototyping, Synopsys has taken on three big myths surrounding the concept in a new white paper.

A very interesting point to me is the first one taken on by product manager Troy Scott: FPGA-based prototype capacity is often perceived as limited to less than 100 million ASIC gates. If we believe other sources, the magic numbers in this equation are 5 million gates or less, and 80 million gates or more. Stunningly, the 2014 Wilson Research verification study shows that people are actually less successful on smaller designs: the overall first- and second-spin success rate, according to that study, is lower for designs at 5M gates than for designs at 80M gates.

That may be because people are spending more time and effort in verification and validation of larger designs. For that job, they are using better tools such as emulation and FPGA-based prototyping platforms – which are about even in terms of industry adoption, both around 33-35%.

However, prevailing wisdom says that as designs get larger, the reluctance to use FPGA-based prototyping increases and the adoption rate drops. We’ve all seen news that the raw capacity of platforms like the HAPS-70 has increased substantially with the introduction of Xilinx UltraScale VU440 parts – Synopsys is now shipping these platforms to early adopters.

So, what’s the holdup? Myth #1 is the 100M-gate barrier. There is no doubt one can now get 100M gates poured into an FPGA-based platform. The first concern is whether one can build and rebuild a 100M-gate design on that platform in a reasonable amount of time. Troy goes through an analysis using Synopsys ProtoCompiler, which leverages parallelism with up to four concurrent synthesis processes. The result is a 10-hour turnaround – 4 hours in synthesis and partitioning, 6 hours in place & route.


In a more advanced situation, ProtoCompiler supports any number of compile points, even allowing nesting. This facilitates incremental builds, where only part of the design is rebuilt. The four concurrent processes can be applied to four compile points, so rebuilds of changed areas can go faster than the full build. Multiple licenses can also be ganged to increase parallelism.

Myth #2 is the partitioning effort. Troy shares some data from the R&D team at Synopsys – recall they are also in the IP business, and they eat their own dog food so to speak – showing benchmarks on partitioning time across 13 programs. (They aren’t triskaidekaphobic, apparently.) These benchmarks include the use of high-speed interconnect TDM schemes, automatically generated by ProtoCompiler. The results may be surprising, showing a realistic view of days, not months, to get to a working design.
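The pin-multiplexing arithmetic that makes those TDM schemes necessary is worth spelling out: when more signals cross a partition boundary than there are physical FPGA-to-FPGA traces, each trace must be time-shared, which in turn caps the prototype's system clock. This is purely illustrative arithmetic with made-up numbers, not ProtoCompiler's actual scheme (real TDM adds framing and synchronization overhead):

```python
# Hedged sketch of the TDM (time-division multiplexing) tradeoff in
# multi-FPGA partitioning. All numbers are illustrative assumptions.
import math

def tdm_plan(crossing_signals, physical_traces, io_clock_hz):
    """Return (tdm_ratio, max system clock in Hz) for a partition cut."""
    ratio = math.ceil(crossing_signals / physical_traces)
    # Each system-clock cycle must fit `ratio` I/O transfers per trace:
    max_system_clock = io_clock_hz / ratio
    return ratio, max_system_clock

ratio, fmax = tdm_plan(crossing_signals=4000, physical_traces=200,
                       io_clock_hz=1.0e9)
print(ratio, fmax)
```

With these assumed numbers, 4,000 crossing signals over 200 traces force 20:1 multiplexing, capping the system clock at 50 MHz; this is why good automatic partitioning (minimizing the cut) matters as much as raw gate capacity.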

Myth #3 is debug. That does get tricky on multi-FPGA prototypes. Troy explains how ProtoCompiler handles instrumentation, coordinates with the deep trace debug capability in HAPS, and deals with external DDR3 memory for debug resources. One critical point Troy makes is that hundreds of signals can be captured for full seconds of clock time. Or, things can be stretched to grab more signals for shorter periods.
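The signals-versus-seconds tradeoff Troy describes follows directly from the trace buffer being a fixed size: fewer probed signals means a longer capture window. A back-of-the-envelope sketch, where the 2 GB buffer, 1-bit-per-signal packing, and 50 MHz sample clock are all illustrative assumptions rather than HAPS specifications:

```python
# Hedged sketch of the debug capture-window tradeoff with a fixed-size
# external DDR3 trace buffer. All parameters are illustrative.

def capture_seconds(buffer_bytes, num_signals, sample_clock_hz):
    """Seconds of trace that fit, packing 1 bit per signal per cycle."""
    bits_per_cycle = num_signals
    cycles = (buffer_bytes * 8) // bits_per_cycle
    return cycles / sample_clock_hz

buf = 2 * 1024**3                      # assume a 2 GB DDR3 trace buffer
print(capture_seconds(buf, num_signals=256, sample_clock_hz=50e6))
print(capture_seconds(buf, num_signals=2048, sample_clock_hz=50e6))
```

With these assumed numbers, 256 signals fit about 1.34 s of trace while 2,048 signals fit about 0.17 s, matching the pattern in the text: hundreds of signals for full seconds, or more signals for a shorter window.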

The full paper is here (registration required):

Busting the 3 Big Common Myths About Physical Prototyping

The upshot of all this is that we’re not just talking about gate capacity any more. Maybe we should be talking more about the low end, which is why Synopsys introduced HAPS-DX for smaller environments. Being able to synthesize designs faster, while inserting effective partitioning and debug, is what makes an FPGA-based prototyping platform really useful. In both the small and large cases, it is the ProtoCompiler technology where Synopsys is making progress.


John Koeter: How To Be #1 in Interface IP

by Paul McLellan on 08-03-2015 at 7:00 am

John Koeter is in charge of marketing Synopsys’ IP and prototyping solutions. I talked to him last week.

He grew up in upstate New York, the son of a Scottish mother and a Dutch father who immigrated to the US, so he is a first-generation American, unlike everyone else I’ve interviewed so far for this series, all of whom were born overseas. He stayed in upstate New York for college and got a BSEE from Cornell.

After graduating, he joined TI in Dallas and did various jobs in TI ASIC, at one point moving to California for a couple of years to run a couple of design centers. One of his primary competitors was VLSI Technology where I was working at the time. We were always frustrated that TI had Nokia, which was the cell-phone market leader, locked up and we never had any success there. After 11 years at TI he decided he preferred Austin to Dallas and joined AMD in the embedded processor division, which was in the process of switching from the 29K bitslice approach to low end x86. But after a couple of years there was the usual semiconductor downturn.

He moved on and in 1998 joined Synopsys where he has been for 17 years now. He started as a program manager and then moved into business development for services doing production turnkey designs (this was the Tality Design Services era). After doing that for a time he went into sales for IP and services as a sort of east-coast overlay. When the position of VP marketing for the IP and prototyping opened up he took it. He has done that job for about 7 years now, covering IP, prototyping and FPGA synthesis. He also runs the pre-sales AE organization for those businesses.

We started by talking about IP. This is a business that Synopsys has been in for 25 years, starting with DesignWare, which was basically datapath, UART, I2C, timers and other basic building blocks of that era. The big transition came when they purchased inSilicon and got into USB and PCI Express on the digital side. A year later Synopsys acquired Accelerant and was in the SERDES business. They grew the business partly organically and partly through other acquisitions such as Cascade. They really got heavily into analog when they acquired the analog business of MIPS (aka Chipidea).

One big change was that they started to package up a complete solution for interface IP: a digital controller, an analog PHY and verification IP (VIP). Over time this completely changed the market, and all their competitors needed to do the same or get out of the business. Customers wanted to buy the complete solution. More recently they upped the ante again by launching their IP Accelerated initiative, adding software development kits, prototyping kits and interface IP subsystems. It turns out that having SoC experts from the customer company along with IP experts from Synopsys is a powerful mixture. Although the IP may be standard, each chip is different in terms of power domains, power management, clocks, which options of the IP are required, and so on.

In 2010 Synopsys bought Virage Logic, bringing them standard cell libraries, memories (with test and repair) and the ARC microprocessor. This meant that they could, as Emeril used to say, kick it up a notch to IP subsystems: pre-integrated suites of IP including software, microprocessor, interfaces and more. The first was an audio subsystem, then a complete sensor and control IP subsystem. At DAC this year they announced they were working with TSMC on a 40nm IoT platform. They also announced that they were pushing their IP portfolio up to automotive grade, with features to address functional safety, reliability and quality management; qualifying for automotive requires a lot more than just slapping an “automotive” label on existing IP. At the same time they are optimizing IP for IoT applications in 45ULP and 55nm at very low voltage.

They are not done acquiring. In just the last couple of weeks they announced the acquisition of Bluetooth IP (for wireless interfaces) and, with Elliptic Technologies, security IP.

The combination of interfaces, Bluetooth, security, the ARC microprocessor, memories, data converters and more gives them the broadest set of IP for IoT of anyone in the market.

There is clearly a major transition from making IP internally to buying it. For a start, it is getting much more difficult to make: going from USB 2.0 to USB 3.0 meant a verification requirement 20 times bigger. Standards also turn over fast, every couple of years: USB 3.0 to USB 3.1, DDR to LPDDR4, PCIe 3.0 to PCIe 4.0, and so on. Not many companies can keep up, and they often don't have the knowledge even if they were willing to spend the time and the money.

Going back to 2010, when Virage was acquired, Synopsys made a couple of other key acquisitions in the system design space: VaST (where I used to be VP of marketing) and CoWare. They had previously acquired Virtio. This gave them a lot of virtual platform technology. As I discovered when I worked at VaST (and subsequently Virtutech), there is a big problem with modeling: it takes too long, costs too much and is hard to keep synchronized with the RTL. But Synopsys has three weapons that we never had: a lot of their own IP, so they can provide TLM models; the ZeBu emulator family; and the HAPS FPGA-based prototyping system. This gives them the capability to build all sorts of hybrid solutions, with processors and perhaps interfaces running as virtual models combined with RTL running in HAPS or ZeBu. That makes it possible to look at functionality and performance, but especially, these days, power. Plus they have Platform Architect (ex-CoWare), which allows architecture to be analyzed and optimized very early using TLMs, processor subsystems, DDR interfaces and more. Synopsys also now provides Virtualizer Development Kits (VDKs), especially for some automotive MCUs and for ARM subsystems. Prepackaging everything makes it easy for software engineers to use the solutions immediately at low cost.

HAPS has also been very successful and leads the market. The HAPS FPGA-based prototyping solution allows high-speed prototyping of SoC designs for software development, hardware/software integration and system validation. Synopsys has an FPGA synthesis solution optimized for HAPS (ProtoCompiler) that understands all the partitioning, delivering the highest performance for a design along with fast prototype bring-up.

At Synopsys, the prototyping business, both virtual and FPGA-based, has started to be quite successful. The market for FPGA and virtual prototyping seems to be about $450M, but only one third of that is commercial; the other two thirds is internally developed tooling. That means there is a potential market of roughly $300M available if people switch from make to buy, the same transition IP went through a decade ago.
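The make-versus-buy arithmetic above is simple enough to sketch. The inputs (a ~$450M total market, of which one third is commercial) are the estimates quoted in the text, not hard data:

```python
# Back-of-the-envelope sketch of the prototyping-market opportunity
# described above. The figures are the article's estimates.
total_market_m = 450          # total FPGA + virtual prototyping spend, in $M
commercial_share = 1 / 3      # fraction bought from commercial vendors today

commercial_m = total_market_m * commercial_share   # commercial sales today
make_to_buy_m = total_market_m - commercial_m      # still built in-house

print(f"Commercial today: ${commercial_m:.0f}M, "
      f"make-to-buy opportunity: ${make_to_buy_m:.0f}M")
# → Commercial today: $150M, make-to-buy opportunity: $300M
```

That $300M only materializes to the extent design teams retire home-grown prototyping flows, which is why the article compares it to the IP make-versus-buy shift.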

IP is already close to 20% of Synopsys’ business and the prototyping segment is a big opportunity. The future’s so bright you’ve got to wear shades.

See also Synopsys’ Andreas Kuehlmann on Software Development
See also Antun Domic, on Synopsys’ Secret Sauce in Design
See also Bijan Kiani Talks Synopsys Custom Layout and More


Good Morning Vietnam

by Paul McLellan on 08-03-2015 at 7:00 am

This morning I went to a presentation in Palo Alto about outsourcing to Vietnam. You have probably heard that Vietnam is the new China for manufacturing: as wages have increased in the Shenzhen area, companies like Foxconn have opened plants in Vietnam. But this meeting was mostly about services, software and design, and in that area Vietnam is perhaps the new India. The meeting was organized by VNITO, the Vietnam Information Technology Outsourcing Organization.

The main presentation was by Hung Nguyen of LogiGear, who do software testing, especially for the videogame industry. Somewhat confusingly, there was a second Hung Nguyen, from Microchip. Everyone seemed to have been at high school with Tom Quan of TSMC! The numbers for Vietnam are impressive: it is often ranked the #1 emerging-market location based on business conditions, risk and cost. Ho Chi Minh City (HCMC, the old Saigon) and Hanoi are in the top 20 outsourcing cities. Da Nang is not even on the radar yet for these types of studies, but it is an up-and-coming hub investing billions in infrastructure. For engineering graduates, Vietnam is in the top 10 countries. Perhaps more to the point, the talent pool is likely to continue to outstrip demand for years, meaning that companies that move there now can pick from the top 10-20% of graduates. The deep pool also means that outsourcing organizations are very scalable: start small and grow.

It is a young country. LogiGear's local manager is about 30, and the team of around 800 people is split roughly 50:50 male and female. It seems that intracompany marriage is pretty common and, unlike in the US, is even encouraged. The speakers all felt that they were building the first layer of a middle class in the country, and it is not insignificant: there are 100K software engineers in Vietnam and about 50K other digital content producers (web designers, graphic designers for video games and so on).

The three big worries people have about outsourcing to Asia are:

  • cultural fit
  • language
  • IP protection

Cultural fit turns out to be surprisingly good. Presenting companies pointed out that it is a proud culture which means that people there are prepared to push back when, for example, a product is not ready for release. EA’s experience in Canada and Argentina was less successful than in Vietnam, where engineers will follow processes but not completely blindly. American companies seem to have no problem working closely with their Vietnam teams.

English proficiency is similar to China's. Hung tried to claim that English in Vietnam is as good as in India, where he said people talk too fast with too strong an accent. But the educated classes in India actually speak English with each other, since the other languages are all to some extent regional, and their studies were conducted in English too.

As to IP protection, the risk is objectively much lower since there is no market for stolen IP in Vietnam in the same way as there is in China. Nobody seemed to have had any problems. They also don't have a culture of pirating the software that they use.

In the semiconductor world, Intel is there, Samsung has just recently picked Vietnam, and Renesas has operations there. But one company I knew about was eSilicon. I talked to Deepak Sabharwal, the VP of Engineering for IP, a function that is mostly based in Vietnam. In fact most of eSilicon is in Vietnam: out of 500 people in the company, 300 are there, at two sites in HCMC and Da Nang.

eSilicon got into Vietnam when they acquired Silicon Design Solutions. They focus on memory design since they want a good selection of differentiated IP, enough to separate them from their competition but not so much that they are competing with their customers. Since memory is regularly half the real estate on a chip, it makes sense to specialize there, offering both standard IP and the deep knowledge to create custom memories too. They also do custom ASIC design work in Vietnam, with around 250 people working on design.


They also do all their software QA there, with another 50 people. The team has done a great job of creating and automating the tests for both the customer-facing STAR suite and the enterprise software they use internally to run the company. Software engineers can be hired straight from school, but for memory design they hire smart graduates and train them internally, and to a high standard: they are doing 14/16nm FinFET memories, for example.

Deepak confirmed what the breakfast meeting had said: cultural fit is good, and the senior people speak good English, though not everyone speaks it so well. And since all their IP development is over there, they don't consider IP theft a major risk. Deepak, who worked for Cadence in India, said he felt Vietnam is now where India was 15+ years ago.


If you are seriously interested in considering operations in Vietnam, there is a 3-day conference, Vietnam, an Emerging Destination for IT Outsourcing, from 14th to 17th October. There will be IT outsourcing companies, multinationals, technical universities and more. The conference will be held in The Reverie Saigon at Times Square in HCMC, Vietnam. Details are here.


Semiconductor Mergers – Innovation or Consolidation?

by Pawan Fangaria on 08-02-2015 at 8:00 pm

About three years ago, I wrote an article about consolidation in the semiconductor landscape in which I articulated four main drivers of consolidation: macroeconomics, business leadership, technology leadership, and IP leadership. Back then, based on the state of affairs in the semiconductor industry, I also predicted that more mergers and acquisitions would happen in the near future. Continue reading “Semiconductor Mergers – Innovation or Consolidation?”


Talking Directly to EDA R&D

by Daniel Payne on 08-02-2015 at 12:00 pm

Many EDA companies keep their R&D engineers focused on product development and bug fixing, shielding them from any direct contact with end users, mostly for fear of what might be revealed if such dialog were allowed. Customer support people are allowed to talk directly with customers, then pass enhancement requests or bug reports along to R&D. Another way is for product marketing and technical marketing folks to talk with EDA users, uncover new product requirements, and report to R&D what they've heard about possible roadmap features. Direct conversations between EDA tool users and EDA R&D engineers are rare; however, they can be quite beneficial to both parties, as there is no intermediate filtering of ideas, design challenges, likes and dislikes.

I was actually kind of surprised to learn that Dassault Systemes is holding a two-day user meeting in September and October, where the second day is fully dedicated to EDA users talking directly with EDA R&D engineers: sharing their user experience, having roundtable discussions, discussing product roadmaps, describing how they actually use the tools, and explaining what they really want to see in the next release of the software. This kind of direct interaction between users and developers is a wonderful approach that keeps a company like Dassault in tune with what's really happening, and able to plan and respond accordingly.

Related – A Systems Company Update from #52DAC

As a quick recap, Dassault has introduced the Silicon Thinking Experience which offers four solution areas for SoC design teams that provide a business platform allowing Design, Product and Manufacturing engineering to work together:

  • Design Collaboration using DesignSync and Pinpoint
  • Requirements driven verification
  • Enterprise-level IP management
  • Manufacturing Collaboration

DesignSync has been used since 1998 to help SoC teams manage both the hardware and software content in their electronic products. Designers can share their hierarchical design with all team members, even across the globe, as they collaborate on design and verification.

The Pinpoint tool came from Tuscany Design Automation, which Dassault acquired in late 2012. It provides a dashboard with information on both the front-end and back-end IC design flows, helping you reach design closure quicker.

Knowing if your system design implementation actually meets the original requirements is important for success, so having a methodology that supports requirements driven verification is essential. Ad-hoc verification just isn’t sufficient for a complex electronic product today.

How you manage the hundreds of IP blocks on a single SoC can be a critical success factor, so using a proven system will help reduce risks, enable IP reuse, eliminate duplication of IP and track issues and defects through multiple projects.
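The bookkeeping an enterprise IP-management system automates can be sketched in a few lines: which version of each block each project uses, and which known issues travel with that version. This is a purely illustrative sketch with hypothetical names; it is not Dassault's API, just the shape of the problem:

```python
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    version: str
    issues: list = field(default_factory=list)  # open defects on this version

class IPCatalog:
    """Toy registry: tracks IP versions and which projects consume them."""
    def __init__(self):
        self.blocks = {}   # (name, version) -> IPBlock
        self.usage = {}    # project -> set of (name, version)

    def register(self, block):
        self.blocks[(block.name, block.version)] = block

    def use(self, project, name, version):
        # Reuse an existing entry instead of duplicating the IP per project.
        if (name, version) not in self.blocks:
            raise KeyError(f"{name} {version} not in catalog")
        self.usage.setdefault(project, set()).add((name, version))

    def projects_affected_by(self, name, version):
        """Which projects inherit defects filed against this IP version?"""
        return [p for p, used in self.usage.items() if (name, version) in used]

catalog = IPCatalog()
catalog.register(IPBlock("usb3-phy", "2.1", issues=["ERR-104: rx eq margin"]))
catalog.use("soc-a", "usb3-phy", "2.1")
catalog.use("soc-b", "usb3-phy", "2.1")
print(catalog.projects_affected_by("usb3-phy", "2.1"))  # → ['soc-a', 'soc-b']
```

The point of the sketch is the last call: once usage is tracked in one place, a defect found on one project automatically flags every other project using that same IP version, which is exactly the cross-project issue tracking the paragraph above describes.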

Related – Design Collaboration, Requirements and IP Management at #52DAC

Also Read

A Systems Company Update from #52DAC

Design Collaboration, Requirements and IP Management at #52DAC

Managing Semiconductor IP