USB 3.x IP Revenue Grew by 31% in 2017 (IPnest)
by Eric Esteve on 09-04-2018 at 12:00 pm

Despite the strong consolidation in the semiconductor industry, the Design IP market is doing well, very well, with YoY growth of more than 12% in 2017, according to the “Design IP Report” from IPnest. If we look at the Interface IP category (20% growth in 2017) and analyze IP revenues by protocol, we can see that USB IP is amazingly healthy, showing 31% YoY growth for USB 3.x IP. It’s amazing because the USB protocol was first released in the 1990s and USB 3.0 in 2008.


If you look at the above picture, you realize that the USB protocol (USB 3.x, USB 2 and before) has generated $1 billion in IP revenue since 2003. IPnest has closely followed the wired interface IP category since 2008, and I can confirm that USB IP yearly revenues have grown every year since 2003, again in 2017, and the five-year forecast (2017-2022) also shows YoY growth. If you consider USB 3.x only, the associated five-year CAGR is similar to that of other protocols like PCIe, memory controller, MIPI or Ethernet, in the mid-teens in percentage terms.

Now, if you integrate all the USB functions, like USB 2 and below, the overall USB IP revenue is higher by 40%, but the growth rate is lower. In fact, IPnest has split the USB 3.x and USB 2 IP businesses since the beginning in order to provide more accurate analysis. These two families behave differently, addressing different types of applications. This split is unique among the various wired interface protocols. Taking PCI Express as an example, when version 2 was released, the chip makers who had adopted PCIe 1.0 moved to PCIe 2.0 without exception, and the same happened again with release 3.

If we consider USB adoption, the behavior with respect to USB 3.0 is completely different. In many applications integrating USB, the need was for a standard interconnect technology allowing plug-and-play with no burden, not for ever more bandwidth capacity (as with PCIe). We have monitored USB 2 and USB 3.x IP revenues separately since 2008, and we can confirm that USB 2 IP revenues grew up to 2014, although USB 3.0 was released in 2008. In 2017, the trend is clear: USB 2 is declining, with revenues 30% lower than in 2013, while USB 3.x is growing, with IP revenue almost reaching $100 million and 31% YoY growth.


For every protocol, IPnest builds a five-year forecast, as you can see in the above picture (except for USB, where we build two distinct forecasts). To build a forecast, the analyst has two options. The first is to be an excellent Excel user, introduce a certain equation, and delegate your intelligence to the tool. That’s why you frequently see this type of forecast result: “$2142.24 million in 2027, with 23.45% CAGR” …

As far as I understand, the author thinks “the more digits after the decimal point, the better the forecast”.

The other option gives less impressive results, but hopefully more accurate ones. In fact, you need to consider all the parameters that make sense in our industry, like the market trends by segment (data center, storage, PC, PC peripherals, mobile, wired networking, wireless networking and so on). You need to evaluate the number of design starts where the protocol will be integrated, and the target technology node (if you look at the above picture, we have evaluated this number to be 500 in 2020 for USB 3.x. They can’t all be on 7nm, right?). You also need to have an accurate idea of the average license pricing, for the digital part (the controller) and for the PHY. Moreover, you need to be accurate when forecasting this license ASP over five years.
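To make the structure of such a bottom-up forecast concrete, here is a minimal Python sketch. Every number in it is a placeholder assumption for illustration only, not IPnest data; the point is simply that revenue is built from design starts, an attach rate and an average license price, rather than from a single curve fit.

# Bottom-up IP revenue forecast sketch; all numbers are placeholder assumptions.
design_starts_2018 = 400      # assumed USB 3.x design starts in 2018
starts_growth = 0.08          # assumed yearly growth in design starts
attach_rate = 0.6             # assumed share of starts licensing external IP
asp_k_usd = 250.0             # assumed average license price (controller + PHY), in $K
asp_growth = 0.05             # license ASP assumed to grow over time, not shrink

for i, year in enumerate(range(2018, 2023)):
    starts = design_starts_2018 * (1 + starts_growth) ** i
    asp = asp_k_usd * (1 + asp_growth) ** i
    revenue_musd = starts * attach_rate * asp / 1000.0
    print(f"{year}: ~${revenue_musd:.0f}M")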

At this stage, forget what you may have learned during your MBA: the wired protocol license ASP is GROWING, not declining with time (as commodity pricing would).

In the end, you may use Excel, in fact you need to, but the best tool, by far, is your market intelligence and your experience in the semiconductor business (I started in 1983, have seen many downturns and bubbles, and have heard enough stupid statements to stay reasonable…).

I must confess that IPnest has an advantage over the competition in terms of forecasting. Because we started in 2009, we can compare a forecast made in 2010 with actual results five years later! And the difference is below 10%, and sometimes even below 5%…

The next picture shows why I am very proud to be part of the DAC IP committee (since 2016), where I really enjoy discussing with other IP experts and comparing our ideas. By the way, the last DAC was great, and we saw that the booths left empty by the lack of EDA start-ups have been filled by IP start-ups!

Later in September I will propose a detailed analysis of another protocol (PCIe? Memory controller? Very high-speed SerDes? You may suggest which one in a comment). If you’re interested in the complete analysis of the wired interface IP market and the five-year forecast, the “Interface IP Report” will be released at the end of September; just contact me: eric.esteve@ip-nest.com .

Eric Esteve from IPnest


The Ever-Changing ASIC Business
by Daniel Nenni on 09-04-2018 at 7:00 am

The cell-based ASIC business that we know today was born in the early 1980s and was pioneered by companies like LSI Logic and VLSI Technology. Some of this history is covered in Chapter 2 of our book, “Fabless: The Transformation of the Semiconductor Industry”. The ASIC business truly changed the world. Prior to this revolution, custom chips were only available to huge, integrated device manufacturers. These behemoth organizations housed massive design teams, mask-making equipment and wafer fabs. They did it all.

Once ASICs began to flourish, all of that changed. The custom chip market became democratized. Suddenly, anyone with a vision and a reasonable budget could build a custom chip. The result was the ubiquitous deployment of semiconductor technology for custom applications of all kinds. Products became smaller, smarter and more sophisticated. We continue to see this trend today. In spite of this dramatic impact, ASIC has become something of a boutique market. In the 1980s and 1990s many analysts tracked market size and growth and weighed competing technologies for implementing custom chips. Today, hardly anyone tracks this business in spite of its revolutionary impact on our world. I did find a Gartner reference that predicts the ASIC market will be about $27B by 2020, which I think is conservative. AI and other heavy-duty applications running on specialized ASICs dominate general-purpose silicon, so be optimistic, absolutely.

So, where does all that ASIC money go? The answer to that question is definitely changing. There are a lot of specialty suppliers for various forms of analog, mixed signal or sensor-based designs. Rather than get lost in all that detail, let’s look at the top-end of the market. Who is doing the most advanced designs? For many years LSI Logic was king of that hill. So was IBM Microelectronics. There were also several strong Japanese suppliers (e.g., NEC). ST Microelectronics in Europe participated in the ASIC business as well. In Taiwan, Global Unichip and Faraday are still in the mix. In China it is Brite Semi and Verisilicon. Back in the US, Open-Silicon and eSilicon dominate. eSilicon pivoted to be a top-end supplier a few years ago, more on them later.

Fast-forwarding to today, things are different. The Japanese suppliers have all but disappeared, with the exception of Socionext, which is the combination of the LSI businesses of Fujitsu and Panasonic. ST Microelectronics is still on the playing field, albeit with a somewhat unclear focus. LSI Logic became part of Avago, which then bought Broadcom. So there is still ASIC inside Broadcom, but it’s dwarfed by the rest of the business, and in theory they compete with their ASIC customers. The once mighty IBM Microelectronics got swallowed up by GLOBALFOUNDRIES a few years ago, and recently there was a new chapter to that story.

Also read: GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies

In a surprising move, GF announced it would put all 7nm-and-below technology development on hold. The company did have active development in this area and some advanced manufacturing in Malta, N.Y. With no technology roadmap at 7nm and below, GF is likely to become a boutique foundry. Certainly, a very large boutique foundry. We’ll see how their technology roadmap unfolds over time.

But the second part of their recent announcement is also headline news. GF will spin out its ASIC business as an independent, wholly-owned subsidiary. The new company will source designs above 7nm from GF and purchase wafers on the open market at 7nm and below. It will be interesting to see how a wholly-owned subsidiary of one foundry gets advanced technology (think PDKs) from another foundry. Or maybe the GF ASIC unit will be put up for sale?

We’ll have to see how this new ASIC supplier fits in the market. One thing is for sure, all these changes could spell significant opportunity for pure-play, focused top-end ASIC suppliers like eSilicon, absolutely.


IP Management Using both Git and Methodics
by Daniel Payne on 09-03-2018 at 12:00 pm

I use Quicken to manage my business and personal finances because it saves me so much time by downloading all of my transactions from Chase and Amazon for credit cards, Wells Fargo for banking and Schwab for my IRA. Likewise, for IP management in SoC design you want an app like Quicken that plays well with the tools you are already familiar with. Such is the case with IP management: many engineers have used Git before to manage their software source code, so they can also use Git to manage their RTL code, since both are text files.

The challenge comes in SoC design when you want to start managing IC design files that are binary like IC layout, AMS designs or even SPICE waveform files. Perforce is a popular version management system that can handle these binary files quite well, so how would you connect Perforce and Git together cohesively?

At Methodics they have created an IP Lifecycle Management platform called Percipient that does tie together Perforce and Git, so that each component of your design can be managed as an IP and each IP can use the data management system of choice: Git, Perforce, Subversion or others. With the Percipient tool you build workspaces for your project that have any combination of IPs using different data management tools. Now that’s giving you choice and flexibility, instead of being locked into a single-vendor approach.

Git IP
Let’s walk through the details and setup of using a Git IP. In Percipient, create an IP with a ‘repo-path’ field that holds the repository URL used by Git. When you do a commit in Git, it creates a 40-character revision identifier using a SHA-1 hash. Percipient will load the Git IP by doing:

  • git clone (to retrieve the repository)
  • git reset --hard (puts the repository at the defined commit that matches the IPV release)

If you just load a Git repository in a workspace at some commit, then that repository is in something called a “detached HEAD” state which has a limitation where any user changes and commits do not belong to any branch. This is overcome in Percipient because it does a git reset --hard when loading a Git IP at a known version. Here’s a diagram to help explain what Percipient is doing:

  • Previous commits become dangling
  • Updates the currently checked-out branch

No need to worry about the dangling commits because Git does garbage collection to remove them.
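As a rough illustration of that load sequence (not Percipient’s actual implementation), the two git steps could be scripted like this in Python, assuming you know the repository URL and the commit SHA of the IPV release; the function name is hypothetical:

import subprocess

def load_git_ip(repo_url, dest_dir, release_sha):
    # Clone the IP's repository, then pin the checked-out branch to the
    # commit recorded for the IPV release (this avoids a detached HEAD).
    subprocess.run(["git", "clone", repo_url, dest_dir], check=True)
    subprocess.run(["git", "-C", dest_dir, "reset", "--hard", release_sha], check=True)

# Example (placeholder URL and SHA):
# load_git_ip("https://git.example.com/ip/usb_ctrl.git", "usb_ctrl", "3f2a...")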

Git Branches in Percipient
Git has a branch and merge flow, while Percipient will map its TRUNK and lines to Git by:

  • TRUNK is the “master” Git branch
  • Other lines are a Git branch.

You don’t have to map each Git branch to a Percipient line; just map the Git branches that are important or that are used to share with others.

Updating a Git IP
The IPs in a particular workspace can easily be updated to a different release or version using Percipient. When you run the pi update command, Percipient first checks the Git IP’s status; if the status is clean, Percipient can apply update modes to the action, but only at the IP level.

Releasing a Git IP
To release your Git IP run the command pi release. Some of the checks when releasing an IP are:

  • Is the workspace status clean?
  • Do the IPV line and Git branch match?
  • Has the commit been pushed?
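For illustration only, here is a minimal Python sketch of how similar checks could be made against a plain Git working copy; the function names are hypothetical and this is not the pi release implementation:

import subprocess

def git(args, cwd="."):
    # Run a git command and return its trimmed stdout
    return subprocess.run(["git"] + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

def pre_release_checks(workspace, expected_branch):
    clean = git(["status", "--porcelain"], workspace) == ""                   # nothing modified or untracked
    on_branch = git(["rev-parse", "--abbrev-ref", "HEAD"], workspace) == expected_branch
    pushed = git(["branch", "-r", "--contains", "HEAD"], workspace) != ""     # HEAD reachable from a remote branch
    return clean and on_branch and pushed

# Example: pre_release_checks("/path/to/ip_workspace", "master")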

Independent Repositories
Each Git IP needs to reside in its own repository, so your IP can’t be just a portion or a sub-directory inside of a repository. Your Git repository needs to be mapped to one IP.

No Submodules or Subtrees
A Git repository cannot reference submodules or subtrees if it is to work with Percipient. Use Percipient to manage your hierarchies instead.

Summary
Mixing DM tools is desirable and possible, so now you can enjoy using both Git and Methodics on the same SoC project. Git can handle text files quite well, and Percipient understands binary files, a perfect fit for modern chip design projects that need to track both file types.

White Paper
Read the complete white paper here online.



The Importance of Daughter Cards in FPGA Prototyping
by Daniel Nenni on 09-03-2018 at 7:00 am

FPGA prototyping started with the advent of FPGAs in the 1980s, and today it is a fast-growing market segment due to increasing chip and IP complexity up against tightening windows of opportunity. Getting your design verified quickly and allowing hardware and software engineers the opportunity to develop, test, and optimize their products in parallel is critical for competitive and emerging markets, absolutely.

Working with S2C Inc. for the past few years has really been an eye-opening experience with regard to FPGA prototyping, emulation, and the emerging Chinese fabless chip market. I will be writing a series of blogs based on my experiences with S2C in hopes of publishing a follow-on to our ebook “Prototypical”, and this one starts with implementation and the importance of daughter cards.

The use of daughter cards for FPGA prototyping is an important concept as it allows flexibility. A common problem is the limitation of current FPGA vendors’ evaluation kits and in-house FPGA boards. Most evaluation boards have fixed interfaces embedded on the board, which often cannot meet SoC/ASIC prototyping requirements and are hard to reuse on new projects. Many home-grown FPGA boards have the same limitation: they easily become outdated when the design specification changes and are hard to reuse for future projects. So more and more people choose general-purpose FPGA prototyping systems, making daughter cards a very important part of the solution.

S2C Interfaces and Accessories

What are the considerations for choosing daughter cards or the FPGA prototyping systems that will hold the daughter cards? Well, the FPGA prototyping system you select should have abundant unused I/O pins on I/O connectors in order to use daughter cards. Naturally, the most important way to evaluate a prototyping system in this regard is to examine the type of I/O connectors the system uses, how the pins are defined, the availability of different types of daughter cards, and what features the FPGA board supports for using the daughter cards, so you do not have to build anything on your own.


The number and types of expansion connectors are important criteria when selecting suitable prototyping hardware. Of course, the more expansion connectors there are on the board, the more flexible it is. But keep in mind that the expansion connectors may also be used to interconnect FPGA boards when a design requires partitioning. The FMC connector is a good example that has many off-the-shelf daughter cards, but because of its large footprint and pin definitions it is not ideal as a general interconnect between FPGA boards. As an example, S2C’s Prodigy Connector is a compact, high-performance, 300-pin connector that can support multi-GHz transceivers. It supports 3 full FPGA I/O banks, and all traces from the same I/O connector have the same length. The Prodigy Connector has a matching Prodigy Cable that can connect 2 Prodigy Connectors with pin 1 matching pin 1. The Prodigy Connector supplies 2 voltages from the FPGA board to the daughter board: 3.3V and VCCIO.

A vast library of pre-tested daughter cards can save precious engineering time and reduce risk. Many of today’s chip interfaces use industry standards, such as USB, HDMI, PCIe, Ethernet, DDR, and others, and commercial daughter cards can meet the requirements. S2C has dozens of daughter cards covering processors, embedded and multimedia, high-speed GT, and memory applications. Take the Prodigy 3-Channel RGMII/GMII PHY Interface Module as an example:

  • At the PCB layout design stage, we have a rigorous design process to ensure the daughter card runs at over 125MHz, so Gigabit Ethernet can work well.
  • We implemented auto-detection technology, so that teams across the globe can identify the presence of specific daughter cards remotely and test on them.
  • An I/O voltage detection function is integrated too; if somebody applies a wrong voltage, the power is automatically shut off to protect the hardware.
  • We also developed a reference design for it, with which a customer can quickly bring up their own designs.

These daughter cards can be reused across multiple FPGA prototyping configurations and among multiple projects and locations. The goal is to speed up and simplify customers’ system prototyping process.

Another key consideration is whether the FPGA prototyping vendor can provide daughter card customization services. Off-the-shelf solutions solve many problems, but sometimes special system interfaces or optimizations for specific SoC and ASIC applications are required, so customization services can be extremely beneficial, especially when you are working on a time-to-market critical project. S2C has delivered numerous customization services over the past 15 years; the typical SOW covers schematic design, PCB layout, SI simulation (optional), manufacturing, initial hardware bring-up, and test design development and delivery. Even if you would prefer to build the daughter card in-house, S2C can provide daughter card design guidelines and support to speed your efforts and reduce the risk of delays.

About S2C
Founded and headquartered in San Jose, California, S2C has been successfully delivering rapid SoC prototyping solutions since 2003.

With over 200 customers and more than 800 systems installed, S2C’s focus is on SoC/ASIC development to reduce the SoC design cycle. Our highly qualified engineering team and customer-centric sales force understand our users’ SoC development needs. S2C systems have been deployed by leaders in consumer electronics, communications, computing, image processing, data storage, research, defense, education, automotive, medical, design services, and silicon IP. S2C is headquartered in San Jose, CA with offices and distributors around the globe including the UK, Israel, China, Taiwan, Korea, and Japan. For more information, visit www.s2cinc.com.


Forget the Saudis: Apple or Google should acquire Tesla
by Vivek Wadhwa on 09-02-2018 at 7:00 am

Steve Jobs wanted to build an electric car as far back as 2008. In 2014, Tim Cook reportedly funded the project. To date, though, Apple has had little to show for it, and the rumors are that its electric vehicle will launch as late as 2025, long after such things become common commodities. Google has already had self-driving electric cars on the road for about four years, though it has decided to focus primarily on the software.

A decade after Tesla announced the Model S, and six years after its delivery, no other company has been able to produce anything comparable. The big automotive manufacturers are claiming that they will soon eat Tesla’s lunch, but even the strongest offerings — those of BMW and Mercedes — are merely souped-up cassette players trying to compete with an iPod.

Tesla learned the hard way the intricacies of combining legacy automotive technologies with modern software — through trial and error and constant delay. It also struggled to automate production. Using advanced robots, however, it has finally figured out how to build an astonishing 6000 cars per week, some in a tent.

Now, as Tesla struggles with its cash balances, extremely negative press, and Elon Musk’s erratic tweets, it is at another crossroads and, in order to reach its potential, needs a strategic partner. It may not make sense for it to continue as a public company.

The best acquirer would not be Saudi Arabia, whose interest Musk tweeted about, because that nation’s interests inherently conflict with Tesla’s. Electric vehicles and solar technologies will cause the price of oil to plummet and decimate the value of Saudi oil reserves, so it would lose heavily if its investment in Tesla paid off. Technology companies, however, share Musk’s goals and ambitions, particularly Apple and Google. They have the money, technology, and marketing strengths to greatly enhance Tesla’s offerings. Apple also has great manufacturing prowess and distribution channels.

Tesla would provide Apple with an entirely new set of technology platforms on which it could build a new line of products. Apple desperately needs these in order to sustain its trillion-dollar market capitalization; since the release of the iPhone in 2007, it has had virtually no world-changing products. It needs to enter new markets, and, with its automotive, energy storage, and solar technologies, Tesla would provide them.

Apple’s existing products would also benefit from the advanced technologies that electric cars have incorporated, such as batteries and in-car electronics. And Apple would gain the second-best self-driving software in the industry.

Tesla could, in turn, integrate the iPad, Apple TV, iTunes, and the App Store into its automobiles, literally turning its vehicles into iCars. And it could replace its clunky operating system with macOS. I am sure that all Tesla owners—such as me—would love to be able to download apps and music onto a console that’s more user friendly than Tesla’s present one.

Apple would bring its world-class manufacturing and inventory-management processes to Tesla and create new types of automobiles, in different sizes and shapes, and at lower prices. This would give it a second chance to wow markets it has largely lost, specifically India and China.

Google’s interests too coincide with Tesla’s. Google doesn’t have Apple’s manufacturing capability, but its maps and self-driving software are one or two notches above any other. Tesla’s mapping software is substandard, and its self-driving software could use a major upgrade. Google’s self-driving-car spinoff, Waymo, could focus on the software and let Google’s Tesla arm deal with the hardware.

Given that Morgan Stanley has just valued Waymo at $175 billion, Tesla’s $70 billion price would be a no-brainer, and the combination would be formidable.

Would Musk even entertain such an offer? Given that he reportedly turned down an offer from Google in 2013 and laughed off the idea of Apple’s buying Tesla in emails I exchanged with him in April 2014, and in an earnings call last year, it would seem very unlikely. Yet, having reached his personal limits and being close to burnout, as Musk has admitted; after seeing the disastrous impact of his tweet about having secured funding; and with Saudi Arabia offering investment in a competing startup, things may have changed.

I’ll bet that Musk would take an offer that solved his financial problems and gave him autonomy. With the headaches of funding and quarterly stock pressure taken away, the world’s greatest innovator would be free to develop world-changing ideas that transform entire industries, including automotive, energy, and space. That would be a win–win for Tesla — and for humanity.

For more, follow me on Twitter: @wadhwa and read my book The Driver in the Driverless Car.


Apps Before there were Apps
by Daniel Nenni on 08-31-2018 at 7:00 am

This is the thirteenth in the series of “20 Questions with Wally Rhines”

My development of a calculator program to determine the Black-Scholes value for an option was not the only application that attracted financial people to programmable calculators. As the SR-52, and later the TI-59, grew in popularity and took market share from the HP-65, we began to discover a vast community of innovative people writing programs for these calculators. Peter Bonfield and Stavros Prodromou drove the formation of PPX-52, the Professional Program Exchange, a forum where a contributor of a useful, well-documented program could receive credits for the purchase of other programs. As these programs accumulated, TI moved to publish booklets of programs on various topics and sold them. Because of the success of my Black-Scholes program, and because we were short of person power, I was appointed to provide management supervision for the PPX exchange.
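For readers who have never seen it, the kind of result such a calculator program produced is the standard closed-form Black-Scholes value for a European call option, which can be sketched in a few lines of Python (this is the textbook formula, not Wally’s original calculator code):

from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, years, rate, volatility):
    # C = S*N(d1) - K*exp(-rT)*N(d2)
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# Example: $50 stock, $55 strike, 6 months to expiry, 5% risk-free rate, 30% volatility
print(round(black_scholes_call(50, 55, 0.5, 0.05, 0.30), 2))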

Each month, we met to review the new programs that had been submitted. It was at that point that I began to comprehend the enormous resource available to us. Thousands, maybe millions, of talented people wanted to demonstrate their expertise through programmable calculator programs. In most cases, they didn’t care if they were compensated. They just wanted to show other people how brilliant they were.

The ultimate example came when I reviewed a program for a “one-armed bandit” that simulated a Las Vegas slot machine. By loading the program and then pressing “enter”, the display showed three single-digit random numbers separated by dashes. Dashes? There were no dashes on the SR-52 or TI-59 calculators. How could they possibly have done this? It took one of our expert engineers to analyze the program and figure out how it worked. The creator of the program had discovered that execution loops could be created that would simultaneously display more than one number; since the segments in the LED displays were strobed at about 14 times per second, the program could create overlap and thus a dash between numbers. The program was so brilliant that we contacted the author to see if he wanted to work for TI. Similarly, the SR-52 had extra, undocumented registers that programmers discovered and used for applications that were not anticipated by the developers of the SR-52.

Over the next year, many communities of people became connected through their common interest in different applications. TI’s published booklets of applications carried contact information for the authors of programs. Although there was no internet for authors to communicate, they found ways to share information. TI then sponsored events to showcase the diverse set of applications available for the programmable calculators. I was asked to demonstrate my Black Scholes program at one of these events in New York City where analysts and others from the financial community were targeted. Ben Rosen, the Morgan Stanley semiconductor analyst who was the most respected in the industry, came to the event. He was fascinated with the Black Scholes program and invited me to tour the trading floor at Morgan Stanley. Later, he visited Lubbock, Texas (a major trip for a New York investment banker) and we showed him what we were doing. I continued to run into Ben at conferences and other events. And then, after I moved to Houston to run the Microprocessor Division, I received a strange phone call. It was Ben. He said that he was leaving Morgan Stanley and that he and L.J. Sevin were starting a venture capital fund. And he said that he would be in Houston the following week and wondered whether I would be able to have dinner with him and L.J. Of course, I was available.

Ben gave his pitch for how Sevin Rosen wanted to set up potential entrepreneurs and fund them while they worked on ideas for new businesses. That way, any conflict of interest with present employers could be avoided. I was flattered that they would think of me. In fact, I was amazed that they would make a trip to Houston just to talk with me. A few months later it became apparent that they had not come to Houston just to talk with me, as evidenced by the announcement that Sevin Rosen would fund Compaq Computer, a Houston startup headed by Rod Canion, another TI employee.

I didn’t take advantage of Ben and L.J.’s offer. My responsibilities were growing too rapidly at the time to consider leaving TI. But Ben had to sell the rights to his semiconductor newsletter, the Rosen Electronics Letter. Esther Dyson bought it, renamed it Release 1.0, and began the reorientation from semiconductors to software. She, along with George Gilder, continued some of the semiconductor theme. When TI announced the TMS 7000 8-bit microcontroller, I made a trip to New York and did a series of one-hour interviews with representatives of various electronics journals. Gordie Campbell, then CEO of SEEQ, gave the presentation with me as our alternate source for the TMS 7000. Gordie highlighted the Ethernet controller that SEEQ had embedded in their version of the TMS 7000. After giving the presentation seven times during the day, Gordie and I became bored and we switched places; I gave his presentation and he gave mine. And that’s how we met Esther, who wrote up the announcement in her newsletter and then proceeded to invite us to speak at the PC Forum each year.

The 20 Questions with Wally Rhines Series


Analog IC design across PVT conditions, something new
by Daniel Payne on 08-30-2018 at 12:00 pm

Transistor-level design for full-custom and analog circuits has long been a way for IC design companies to get the absolute best performance out of silicon and keep ahead of the competition. One challenge for circuit designers is meeting all of the specs across all Process, Voltage and Temperature (PVT) corners, so that silicon yields are high enough to maximize profits. In the early days of my IC design career at Intel in the 1970s we had a simple design methodology:
Continue reading “Analog IC design across PVT conditions, something new”


The Robots are Coming!
by Bernard Murphy on 08-30-2018 at 7:00 am

Moshe Sheier, VP Marketing at CEVA, recently got back from MWC Shanghai and commented that robots are clearly trending. He saw hordes of robots from dozens of companies, begging for someone to brand and offer them in any one of many possible applications: in an airport to guide you to a connecting flight, for elder care, in hospitals for food and drug delivery, in education for learning about robotics and programming but also as assistants in dealing with special needs kids, food delivery in restaurants, the list is endless. Think of this as the next big thing after smart speakers (Amazon already has 100k+ robots working in their warehouses, so obviously they’re working on home robots as a sequel to the Echo).


Moshe said this made him think about what it will take to offer competitive robot solutions. He pointed me to Gartner’s list of the top 10 AI and sensing capabilities they believe will be needed in personal assistant robots by 2020, among which they (Gartner) include computer vision, a conversational user interface, biometric recognition / authentication, acoustic scene analysis, location sensing, autonomous movement and of course local (edge) AI.

Why is it so important for all of this to be available in a robot? Why not let the cloud do the heavy lifting? There may be a scalability problem in that concept, but we’re also starting to get wise to why the cloud isn’t the answer to every need. Latency is an issue – if you want a quick response you can’t wait for a round-trip and possibly a delay in getting a resource to do the work. Privacy/security is a big concern. Do you want your medical symptoms or payment details exposed to eavesdropping hacks? Power is always a concern – robots aren’t much use when they’re parked at a power outlet. Having to go to the cloud and back burns significant power in communication. It often makes sense to do as much compute as possible locally, as counter-intuitive as that may seem.

Take computer vision – move it to the edge. But you have to be careful; dropping the cloud-based solution into a robot probably won’t work. You could handle vision on a leading GPU – positioning, tracking and gesture recognition are examples. Add more intelligence and the robot can find objects and people. But a big GPU used for graphics, intelligent vision and deep learning will be a real power hog. Not a problem in the cloud but a real issue on the edge. Offloading some of these tasks, particularly vision and a lot of recognition onto DSPs is a natural step since DSPs have a well-recognized performance per watt advantage over GPUs.

Autonomous movement requires the ability to recognize and avoid objects, which, unless the robot has to relearn object positions over and over again (slow and power hungry), requires an ability to build a 3D map of a room or floor of a building. Naturally this should be updated as objects move, but that should only need incremental refinement. This again highlights the accelerating trend to move AI to the edge. Learning is typically thought of as a cloud-based activity, where trained networks are downloaded to the edge. But 3D mapping and ongoing refinement can’t depend on cloud support (sorry I knocked the lamp over – I was waiting for a training update from the cloud?).

Acoustic scene analysis is a hot topic these days, extracting significant sounds or speakers from an acoustically busy background. The family is in the living room chatting away, the TV’s on, and you want to ask your robot to answer a question. How does the robot figure out it’s being asked to do something and who asked? Or you’re away from the house and a burglar breaks a window or your dog starts barking. Can the robot understand there’s cause for concern?

This has to start with acoustic scene analysis – it doesn’t make sense to ship an unedited audio stream to the cloud and have that figure out what to do. A lot of intelligent processing can happen before you get into command recognition and even natural language processing (NLP). Separating sources, recognizing sounds like breaking glass and your dog barking, also keyword and some targeted command recognition, these can be processed locally today. General-purpose NLP will likely be a cloud (and continuing research) function for quite a while, but domain-specific NLP shows promise to be supported locally in the not too distant future.

So when you’re thinking about building that robot and you want to differentiate not just on features but also time to market and usability – a lot of the hard work already done for you and much longer uptimes between charges – you might want to check out CEVA’s offerings, in their platform for local AI, in front-end voice processing and in deep learning in the IoT.


ISO 26262: People, Process and Product
by Bernard Murphy on 08-29-2018 at 12:00 pm

Kurt Shuler, VP Marketing at Arteris IP, is pretty passionate that people working in the automotive supply chain should understand not just a minimalist reading of ISO 26262 as it applies to them but rather the broader intent, particularly as it is likely to affect others higher in the supply chain. As an active ISO 26262 working group member, I guess he has better insight than many of us regarding latent problems that might emerge after an IP, chip or system has nominally been signed off. He makes the point that, in compliance, everyone in the supply chain is still learning; a subtle problem at one stage might never become an issue or might eventually emerge only in integration testing in the car.


At each stage in the chain, it is the responsibility of the integrator to determine if vendors of the products they use can validate all the claims they make in asserting they and their product(s) are compliant with the standard. This isn’t just about the product. Kurt summarizes it as being about people, process and product. Claims are required in each of these areas. It can be temptingly easy to go with minimalist check-marks especially on people and process and still pass all requirements on handoff to the next stage. Then later you hear of a problem at a Tier-1 or an Uber or Waymo which might have been flagged as a potential concern in a more robust reading of compliance. Would this have been your fault? Probably not. Did you make money? Almost certainly not. Probably best to accept that participating in the design of a (successful) car these days needs to be a much more collaborative enterprise than it used to be. We all have responsibilities to bound as well as we can how our product may behave throughout the supply chain.

People training is an area where Kurt is clearly concerned about divergence between how he sees compliance being implemented versus the spirit of the standard, especially in IP development. The requirement calls for a trained functional safety manager supported (maybe) by (some) functional safety engineers. But the spirit of the standard calls for a sustainable safety culture, which implies training, demonstrated competences and qualifications much more broadly across the organization. This can extend to executives, marketing personnel, engineering staff, documentation teams, quality assurance managers, application engineers and others. Proof of this training can be and often is required by customers and third parties who verify compliance. Obviously this requires a bigger investment, but supply chain learning will likely tend over time toward those who invest more in organizational training.

Kurt also sees deficiencies in how people interpret Quality Processes. As engineers we naturally gravitate towards technologies and tools (requirements management, change management, verification, etc) to address this area but in his view, while these can play a role, this is the wrong place to start thinking about quality management.

Quality management systems (QMS) have been around for a while. You’ve probably heard of ISO 9001, there’s something called automotive SPICE (nothing to do with circuit simulation, this is for automotive software development) and Capability Maturity Model Integration (CMMI). These aren’t tools, they’re processes defining best practices for development and support, though some provide quite specific guidance for semiconductor and software IP development. What is important in demonstrating adherence to a quality process is choosing one of these QMS systems and demonstrating continual use of that system by all employees. Again, not something you can just delegate to the safety team.

In product, Kurt highlights a couple of points that maybe aren’t at the top of your safety checklist. You are designing an IP and someone else will be designing using that IP. Also, your testing will not be based on testing the IP in the context of the car, or a Tier-1 system, or even in the chip. These could be fairly significant limitations when it comes to confidence in the safety of your component in that car. ISO 26262 gets around this by requiring the component provider to document assumptions of use that detail what is expected from the integrator in reasonable use of that component. But the integrator is likely also going to configure your IP, and you don’t know what configuration they will ultimately use. Messy. So the standard requires IP vendors and chip integrators to agree upon a Development Interface Agreement (DIA) defining the assumptions used by and responsibilities of both parties. That takes a lot of thought and work on both sides.

Finally, Kurt has concerns about how well IP developers understand the intent behind and practice of failure mode analysis. The natural engineering bias is to get as quickly as possible to quantitative analysis to assess and grade mitigation mechanisms for potential failures – the FMEDA step. But he points out that first you have to spend quality time on the FMEA step, otherwise the FMEDA is meaningless. Unfortunately, FMEA doesn’t have a lot of tool support that I’m aware of. This is just hard engineering-judgment work: partitioning the design into manageable pieces, deciding what the possible failure modes are in each piece, what reasonable assumptions can be made about the likelihood of those failures, and what methods you are going to use to mitigate such failures (duplication, parity, …).
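As a toy illustration of the kind of quantitative roll-up the FMEDA step then involves (deliberately simplified relative to the full ISO 26262 metric definitions, with invented failure modes and FIT numbers), consider:

# Toy FMEDA-style roll-up with invented FIT numbers (failures per 1e9 hours).
# Each entry: (failure mode, raw FIT, safety-relevant fraction,
#              diagnostic coverage of the mitigation mechanism)
failure_modes = [
    ("register bit flip",   10.0, 1.0, 0.99),  # e.g. parity-protected
    ("FIFO overflow",        2.0, 1.0, 0.90),  # e.g. timeout/watchdog
    ("config bus stuck-at",  5.0, 0.8, 0.60),  # e.g. partial CRC coverage
]

total_fit    = sum(fit * frac            for _, fit, frac, _  in failure_modes)
residual_fit = sum(fit * frac * (1 - dc) for _, fit, frac, dc in failure_modes)
coverage_metric = 1.0 - residual_fit / total_fit   # simplified single-point-fault style metric
print(f"residual FIT: {residual_fit:.2f}  coverage metric: {coverage_metric:.1%}")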

Altogether this is a lot of work and a bigger commitment than some vendors may have fully understood. But to survive the natural selection that will emerge in automotive supply chains, it may not be avoidable. You can get more detail by downloading Kurt’s technical paper.


An FPGA Industry Veteran’s View of the Future
by Tom Simon on 08-29-2018 at 7:00 am

There are tectonic changes happening in the world of FPGAs. A lot has changed since their introduction in the 1980s. Back then they were mostly used to implement state machines or glue logic. Subsequently they grew more complex with the addition of high-speed I/Os, eRAM, DSPs, processors and other IP. More recently, though, FPGAs have come into the limelight because of their ability to help solve today’s data processing challenges. These include enhancing data center throughput and accelerating machine learning applications.

One person who has witnessed many of these changes is Manoj Roge, Achronix Vice President of Strategic Product Planning. During his career he has worked at both Xilinx and Intel (Altera) in strategic positions. I had the pleasure of talking with him about the changing landscape for FPGAs recently.

Microsoft showed with their Catapult project that FPGAs are extremely useful for accelerating datacenter workloads. This comes at a good time because CPU performance scaling is slowing due to the end of the Moore’s Law gains previously delivered by each new process generation. Even multiprocessing using CPU architectures is not meeting the computational needs of today’s applications.

FPGAs are inherently parallel and offer fewer constraints in many applications than GPUs. One of the big factors that helped GPUs grow in market share for general computing applications was the readily available development tools that allowed programmers to move applications to that platform. Similarly, Manoj pointed out during our conversation, FPGA programming is moving from RTL to coding languages like C++ and Python. This will have the effect of opening up the benefits of FPGA throughput to a much larger pool of application developers. Manoj was adamant that the quality and usability of development tools for FPGAs is crucial, especially now that the audience has expanded to include coders and not just hardware engineers.

According to Manoj, the other factor leading to the increased usage of FPGAs is the enormous mask costs for advanced nodes such as 16nm and beyond. It’s not unusual for a mask set to cost upwards of $10M. These higher costs are pushing system designers to build fewer, but more generally applicable, ASICs. Adding programmability to an ASIC through embedded FPGA is an ideal way to accomplish this.

Achronix is unique in having an embedded FPGA fabric. Concurrent with increasing mask costs, the cost per LUT has gone down, making the use of FPGA fabric for this purpose feasible. The dual advantages of embedded FPGAs are lower power and lower latency. Manoj pointed out that there are a number of papers showing a 10X reduction in power when the need to go off-chip is eliminated. The power overhead of driving I/Os and maintaining signal integrity in board traces is huge. An on-chip embedded FPGA fabric avoids these power sinks. Latency also goes down with an embedded FPGA fabric. Many applications call for microsecond-scale latency, and an embedded FPGA fabric can deliver this.

Manoj told me that eFPGA is ideal for the new compute workloads found in AI/ML, such as image classification and video recognition, real-time video transcoding (4K/HD), 5G backhaul and baseband radio, and smart cities and smart factories. While these systems may initially rely on off-chip FPGAs, as volumes ramp up on-chip FPGA becomes increasingly attractive.

FPGAs have a major role to play in the most advanced computing systems being used for the tide of emerging applications. Manoj sees that Achronix will be playing a major role in these markets. There is more information about Achronix eFPGA on their website.