
RISC-V Business

by Tom Simon on 12-04-2017 at 7:00 am

I was at the 7th RISC-V Workshop for two days this week. It was hosted by Western Digital at their headquarters in Milpitas. If you have not been following RISC-V, it is an open-source Instruction Set Architecture (ISA) for processor design. The initiative started at Berkeley and has been catching on like wildfire. There are a number of RTL implementations that work in FPGAs or SoCs, and there is also production silicon from companies such as SiFive. The RISC-V Workshop was sold out with over 500 attendees, most of whom stayed for the full two days.

The agenda was filled with detailed technical presentations from a wide range of institutions and companies. They covered details of proposed additions to the specification, commercial products using RISC-V, and research projects leveraging the ISA. The presenters talked about everything from server-farm simulation and machine learning to debugging tools and novel applications.

The keynote was given by Western Digital CTO Martin Fink. He had several surprising things to tell us. First off, after talking in depth about Western Digital’s take on big data versus fast data, he mentioned that Western Digital actually ships about 1 billion processors a year. These processors enable USB drives, hard drives, solid state drives and more. They play a crucial and growing role in moving and processing data. We are all familiar with the cache schemes to improve performance and monitoring to maintain data integrity. In the future, filtering and processing might even occur on the storage device directly, aided by more advanced and powerful processors.

The second surprising announcement that Martin made was that Western Digital is committing to transition all of these processors to RISC-V. While unexpected, it probably should not have come as a complete surprise. The slide showing companies supporting RISC-V barely has any white space on it these days. Almost every large semiconductor company is represented.

The two days of talks made clear that the RISC-V ecosystem is being built out at a rapid pace and there is a lot of momentum. Low-end implementations of RISC-V were handed out to some of the guests in a smart name tag, designed by Antmicro, that uses the E310 from SiFive. SiFive has announced a 5-core chip that is suitable for running Linux. At the upper end of the performance spectrum, a new company called Esperanto came out of stealth mode at the Workshop to announce its technology that uses massively parallel RISC-V processor chips to tackle machine learning.

I'll be writing more about RISC-V, but because it is open source, you can go directly to the RISC-V website to view the specs and learn about current implementations, development tools and future extensions planned for the spec. It's worth noting that the core parts of the ISA are already defined and frozen, so they can be relied upon for development.

RISC-V has the potential to be as transformative as Linux, or HTML. It appears to have the ability to scale from MCU to server class. Already people are using it in a wide range of applications. As an analyst, I attend a lot of technology events and I think the turnout and enthusiasm for this event was exceptional.


IP-SoC 2017: IP Innovation, Foundries, Low Power and Security

by Eric Esteve on 12-03-2017 at 12:00 pm

The 20th IP-SoC conference will be held in Grenoble, France, on December 6-7, 2017. IP-SoC is not just a marketing fest; it is a unique IP-centric conference, with presentations reflecting the complete IP ecosystem: IP suppliers, foundries, industry trends and applications, with a focus on automotive. It will also celebrate Design & Reuse's 20th anniversary, and the conference program is very high level, with people like Aart de Geus, chairman and co-CEO of Synopsys, and Sir Robin Saxby, founding CEO of ARM, presenting keynotes to start the conference.

You probably know Charles Janac, CEO of ArterisIP and chairman of the session “The Past and the Next Decade Vision”. If you remember, he was CEO of Arteris when the company was acquired by Qualcomm in 2013 for several hundred million dollars… but, in fact, only the Network-on-Chip (NoC) IP portfolio was acquired, and Arteris became ArterisIP, still developing and selling NoC IP.

In this session, Mark Ma will review China's IP-to-IC industry in 2017, Eklovya Sharma (Sankalp Semiconductor) will discuss the “Changing dynamics in the semiconductor industry”, and Bill Finch from CAST will share his experience in “Reusable IP Revolution and How a Small Company Took Advantage”. Last year, Bill Finch gave a presentation at IP-SoC, “Back to the Future. The End of IoT”; I admit it was provocative, but I loved it! The presentation summary read: “The term Internet of Things is the most over-used, over-hyped, mis-used and mis-understood phrase of the last few years. It now has so many meanings that it has become useless to describe anything worthwhile. As designers of IP and electronic systems we need to refocus on what we want to accomplish going forward. As always, it's about customer needs and long-term benefits.” I will certainly attend Bill's presentation this year.

If any business needs an ecosystem to grow and develop, it is certainly the IP business, and foundries, along with EDA, are a very strategic part of this ecosystem. That's why the “Foundry Vision” session is dedicated to IP friends like Samsung, GlobalFoundries and Soitec. There is a clear focus on FD-SOI technology; as a reminder, Soitec is the #1 SOI wafer provider, and GlobalFoundries will talk about the FDXcelerator program and the 22FDX ecosystem. Don't expect me to complain about this FD-SOI focus, as I wrote numerous blogs, along with others at SemiWiki like Paul McLellan, to introduce FD-SOI technology to our readers in 2012-2013, even before the technology was adopted by Samsung and GlobalFoundries as a mainstream solution, complementary to the more power-hungry FinFET technology. In FD-SOI we trust, especially for battery-powered applications, whether pure digital or RF IC!

There will be other sessions dealing with the IP ecosystem, like “From IP to SoC: What is the Trend” or “Automotive IP and Software”. You will hear about analog IP from Mahesh Tirupatur of Analog Bits, one of the most talented IP vendors dealing with highly complex IP from an engineering standpoint, and about interconnect IP from Charles Janac (ArterisIP). Embedded FPGA will be honored by no less than two vendors, Flex Logix Technologies and Menta, as Imen Baili (Menta) will explain why “eFPGA is the key solution for Automotive embedded systems”.

You should stay through Thursday the 7th, as the second day is very busy, with very interesting topics in these seven sessions:

  • Power Management and IoT vision (Microchip, Synopsys and CSEM)
  • Security (Inside Secure, Dolphin Integration and Secure IC)
  • Design methodology, Innovative IP in FD-SOI Technology, IP SoC design and System design

IP-SoC 2017 is clearly the kind of high-level conference where complex engineering topics are addressed by industry experts, not just a marketing fest!

The IP-SoC conference will take place on December 6-7 at Hôtel EUROPOLE, 29 rue Pierre-Sémard, Grenoble, France; you can register here.

See you on Wednesday, December 6th, in Grenoble.

By Eric Esteve from IPnest


Making Your Next Chip Self-Aware

by Daniel Payne on 12-01-2017 at 12:00 pm

One holy grail of AI software developers is to create a system that is self-aware, or sentient. A less lofty goal than sentient AI is for chip designers to know how each specific chip responds to Process variations, Voltage levels and Temperature changes. If a design engineer knew exactly which process corner each chip was fabricated under, then they could dynamically control each chip to perform in an optimal manner. If I knew how much current my transistors consumed with a simple test measurement, then I could predict which test bin they should fall into instead of running a complete functional test. If I could measure the performance of my transistors over time, then I could predict their performance during the aging process and make adjustments to the operating conditions. The list of benefits is quite long, if only there were a method to measure the PVT characteristics of each chip.

Crafty engineers have figured out how to enable these benefits through something called embedded in-chip monitoring: placing special circuits onto each chip that can dynamically measure and report process variation, voltage levels and temperature values. Moortec, based in Plymouth, UK, is one company that has commercialized on-chip monitoring, offering the SoC industry IP for application, integration and production device test since 2005.


Here are some reasons that you should consider adding embedded in-chip monitoring:

  • Measure process variation per chip
  • Dynamically measure voltage levels for VDD during chip operation
  • Enable Dynamic Voltage and Frequency Scaling (DVFS) to control power consumption
  • Support Adaptive Voltage Scaling (AVS) to manage power
  • Optimize your CMOS designs from 40nm down to 7nm process nodes
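
The DVFS and AVS use cases above boil down to a simple control loop: read the on-chip monitors, pick the voltage/frequency operating point that covers the current load, and back off if the die runs hot. The sketch below is purely illustrative; the operating-point table, thresholds and function names are hypothetical placeholders, not Moortec's actual interface.

```python
# Illustrative sketch of a DVFS decision step driven by on-chip monitors.
# All numbers and names here are made-up placeholders.

# Discrete operating points: (VDD in mV, clock in MHz)
OPERATING_POINTS = [(600, 400), (700, 800), (800, 1200), (900, 1600)]

def choose_operating_point(load_pct, temp_c, temp_limit_c=105):
    """Pick the lowest voltage/frequency point that covers the load,
    backing off one step if the die is near its thermal limit."""
    # Map the load (0-100%) onto the available operating points.
    idx = min(int(load_pct / (100 / len(OPERATING_POINTS))),
              len(OPERATING_POINTS) - 1)
    # Thermal throttling: step down when close to the limit.
    if temp_c > temp_limit_c - 10 and idx > 0:
        idx -= 1
    return OPERATING_POINTS[idx]

# Example: heavy load but a hot die, so we throttle one step down.
print(choose_operating_point(load_pct=95, temp_c=100))  # (800, 1200)
```

In a real AVS system the monitor readings would also feed back into closed-loop voltage trimming per die, compensating for the process corner each chip landed on.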

As device geometries get smaller, variations in switching speed delay increase as a function of supply voltage, so knowing your internal VDD levels is critical to performance.


Switching speed delay variations against supply voltage for a number of different process technologies

Threshold voltage (Vt) values shift over time due to negative bias temperature instability (NBTI) and hot carrier injection (HCI), so being able to measure your Vt values over time is crucial for reliable operation.
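
NBTI-induced Vt shift is commonly modeled empirically as a power law in stress time, dVt = A * t^n, with the exponent n typically in the 0.1-0.25 range. A minimal sketch, with made-up coefficients (real values are fitted per process and stress condition):

```python
# Illustrative only: empirical power-law model for NBTI-induced
# threshold-voltage shift. A and n below are placeholder values.

def nbti_vt_shift_mv(t_hours, A=1.5, n=0.2):
    """Estimated Vt shift (mV) after t_hours of stress."""
    return A * (t_hours ** n)

# A monitor that compares measured Vt against such a model can flag
# parts that are aging faster than expected.
for years in (1, 5, 10):
    hours = years * 8760
    print(f"{years:2d} y: ~{nbti_vt_shift_mv(hours):.1f} mV shift")
```

Note the sub-linear shape: most of the shift happens early, which is why aging-aware designs budget guard-band up front and why in-chip Vt monitoring over the product lifetime is valuable.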


Vt degradation due to NBTI and HCI

To learn more about embedded in-chip monitors from the experts at Moortec, consider attending their upcoming webinar online.

Webinar Details

Who Should Attend?
The webinar is aimed at IC developers and engineers working on advanced-node CMOS technologies from 40nm down to 7nm. It will also highlight the challenges posed by process, voltage and temperature variability and how those challenges relate to the stability of complex SoC designs. Moortec provide complete PVT monitoring subsystem IP solutions on 40nm, 28nm, FinFET and 7nm nodes. As advanced-technology design poses new challenges to the IC design community, Moortec are able to help their customers understand more about the dynamic and static conditions on chip in order to optimize device performance and increase reliability. Being the only dedicated PVT IP vendor, Moortec is now considered a centre-point for such expertise.

About Moortec
Established in 2005, Moortec provide in-chip monitors and sensors, such as embedded Process Monitors (P), Voltage Monitors (V) and Temperature Sensors (T). Moortec’s PVT monitoring IP products enhance the performance and reliability of today’s Integrated Circuit (silicon chip) designs. Having a track record of delivery to tier-1 semiconductor and product companies, Moortec provide a quick and efficient path to market for customer products and innovations. For more information please visit www.moortec.com


Hierarchy Applied to Semiconductor IP Reuse

by Daniel Payne on 11-30-2017 at 12:00 pm

When I first started doing IC design back in 1978 we had hierarchical designs, and that was doing a relatively simple 16Kb DRAM chip with only 32,000 transistors using 6um (aka 6,000 nm) design rules. SoC designs today make massive use of hierarchy at all levels of IC design: IC Layout, transistor netlist, gate level netlist, RTL level, C or SystemC level, embedded software, software drivers, firmware. Our semiconductor IC business is based on both IP and EDA tools being able to handle hierarchy effectively.

One company in the EDA space that knows quite a bit about IP reuse is Methodics, because they've built a company that focuses on IP use and reuse. Their latest-generation IP management system is a software tool called Percipient that enables IP reuse for a wide range of IC design teams: users can manage their IP as a collection of dependent, hierarchical building blocks, then reuse entire hierarchies instead of single IP blocks.

The methodology in using Percipient is that everything in the design hierarchy is treated as IP, starting at the top-level of the SoC, down to a module, a block, even a cell like a PLL. So your project in Percipient can be at any level of hierarchical description, enabling elegant reuse. In the following diagram we can see a USB Controller that has a regular resource of a USB PHY cell along with a private resource of a USB TestBench:

The USB TestBench is a private resource that makes sense to use when the USB Controller is used in a standalone context, but not when the IP is actually used as part of a larger design. At the top level of my SoC I don't really need to carry around this private resource of a USB TestBench. As I place this USB Controller into an IO Subsystem the private resources are noted, and they remain visible only at that level of the IP hierarchy, not to the parent IP.

The IO TestBench is relevant to the IO Subsystem only, but as I place this IO Subsystem into my SoC as another IP block then I don’t need to see or deal with the lower-level private resources any longer. So I can use this IO Subsystem multiple times on different projects and not be concerned with the private resources associated with it.

So, what do we get with a context-dependent resource (aka Private Resource)?

  • Manage peripheral resources (stimulus generators, testbenches, PDKs, etc) as IPs in my system for tracking, releases, or caching.
  • Reuse any IP easily in a variety of contexts while not having to make any context-specific changes.
  • Customize my development environment of IP blocks while not interfering with private resources in their separate context.
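
The visibility rule behind private resources can be sketched as a small tree walk: a private resource is included only when its owning IP is the top of the tree being viewed, and hidden once that IP is instantiated inside a parent. This is a hypothetical data model for illustration, not Methodics' actual schema:

```python
# Minimal sketch (not Percipient's real data model) of context-dependent
# ("private") resources attached to hierarchical IP blocks.

class IP:
    def __init__(self, name, children=(), private=()):
        self.name = name
        self.children = list(children)  # resources that propagate upward
        self.private = list(private)    # visible only at this IP's own level

    def visible_tree(self, is_top=True):
        """Names visible when viewing the hierarchy from this level."""
        names = [self.name]
        kids = self.children + (self.private if is_top else [])
        for child in kids:
            names += child.visible_tree(is_top=False)
        return names

usb_phy = IP("USB_PHY")
usb_tb  = IP("USB_TestBench")
usb     = IP("USB_Controller", children=[usb_phy], private=[usb_tb])
io_sub  = IP("IO_Subsystem", children=[usb], private=[IP("IO_TestBench")])

print(usb.visible_tree())     # standalone: USB_TestBench is visible
print(io_sub.visible_tree())  # inside a parent: USB_TestBench is hidden
```

Viewed from the IO Subsystem, the IO TestBench is visible (it is that level's own private resource) while the USB Controller's testbench has dropped out, matching the behavior described above.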

Hierarchy is a wonderful thing and is commonly used within EDA tool flows, so it's refreshing to know that Methodics has figured out how to support hierarchy as part of the larger IP management task through the concept of private resources, allowing us to manage these private resources along with all of our other IP. At the top level of my SoC I can see just the IP blocks that I need to manage my project, and I don't need to be bothered with lower-level IP blocks that only pertain to a different context. With Percipient I just see what I need to at each level of the hierarchy.

White Paper
For more insight into this topic, read the four-page white paper at the Methodics web site here.

Related Blogs

ISO 26262 for Semiconductors
The attractive automotive market has some unique safety requirements that are defined in the ISO 26262 standard. To learn more about how design data and IP lifecycle management fit into the ISO 26262 standard, consider attending the roundtable discussion led by Methodics at the December 6th event in Munich, Germany.


A Crossover MCU

by Bernard Murphy on 11-30-2017 at 7:00 am

Back in the day we had processors which consolidated computing power onto a chip, and out of these sprang (if you'll excuse the Biblical imagery) microcontrollers (MCUs) in one direction and increasingly complex system-on-chip (SoC) processors in another. SoCs are used everywhere today: in smartphones, many IoT devices, networking switches and many more applications. MCUs played a vital though less glamorous role in machine control, handling our anti-lock braking (ABS) and fuel injection, in implanted medical devices, power tools and many other applications you probably never realized contained a processor.


A key characteristic of these systems was absolute reliability and real-time responsiveness. You probably don’t care about hooking up your pacemaker or your ABS to iTunes, but you very much do care that they will work dependably (no need to reboot constantly) and will continue to do so for many years. So the great majority of development in MCUs went into cost, low power, reliability and support for RTOS, while functional advances moved more slowly.

But IoT pressures have changed this landscape. Now there is a greater need for future-proofing as communication, security and other standards evolve. System builders expect to be able to gather data from these systems to process in the cloud, they expect to be able to support over-the-air (OTA) updates to minimize maintenance costs and keep systems relevant and compliant, they need to support higher levels of performance and they need support for the aggressive types of power management used today in applications processors (APs).

All of this while, of course, still providing the reliability and real-time response for which MCUs are known. You can't just replace all the MCUs with APs. APs don't provide RTOS-grade support since they have (among other things) unpredictable interrupt latencies. But you still want all those goodies you get in an AP. Getting to this is a big deal. I think I'm remembering an ARM-provided statistic correctly when I say that ~80% of the systems built on their platforms are MCUs, so these are huge volumes waiting for these devices if this transition is possible.

Taking a leaf from the automakers' playbook, NXP (who are very well known in this market) have chosen to address this need through what they call a crossover MCU. This is quite a device (actually a small family of devices, the i.MX RT). It's based on a Cortex-M7, supporting RTOS operation and delivering, apparently, 50% higher performance than similar products. It provides 2D graphics support, a camera interface, an LCD controller and high-performance audio support. And it provides interfaces to wireless connectivity for WiFi, Bluetooth, BLE, ZigBee and Thread. Remember: this is an MCU.

It provides a wide range of memory interfaces and on-board support for AES, High-Assurance Boot and on-the-fly flash decryption. And the device includes an integrated PMIC, so you can manage power directly without the need for an extra device on your board. You can naturally leverage the NXP and ARM development ecosystems, there are dev boards, and so on.

NXP told me that this MCU is already available in TSMC 40nm and 28nm FDSOI with Samsung, which they believe provides a good roadmap towards RF and non-volatile additions to the family. And of course they’re working on shrinks and lower node implementations. Sounds to me like these crossover devices may be the wave of the future in advanced MCU applications. And potentially even in less advanced applications if we expect to wire them into the IoT. Maybe not my toothbrush though. You can read more about these processors HERE.


Multi-Channel Multi Rate FEC Engine Webinar with Open Silicon

by Eric Esteve on 11-29-2017 at 12:00 pm

I will be pleased to moderate, on December 7th, the Open-Silicon webinar addressing the benefits of multi-channel multi-rate forward error correction (MCMR FEC) IP and the role it plays in high-bandwidth networking applications, especially those where the raw bit error rate is very high, such as links using SerDes at 30G and above.

Open-Silicon's multi-channel, multi-rate (MCMR) forward error correction (FEC) engine is fully configurable to support 400G/200G/100G/50G/25G rates on multiple channels. The panelists will outline use cases and discuss the key technical advantages that the MCMR FEC IP core offers, such as support for up to 56Gbps SerDes integration, bandwidth of up to 400G, support for KP4 RS(544,514) and KR4 RS(528,514), support for Interlaken, Flex Ethernet and 802.3x protocols, support for configurable alignment markers, and a PRBS test pattern generator and loopback test.

If you are not familiar with Reed-Solomon codes, let's take a quick look at the terminology. The NRZ and PAM-4 PHYs have most of their processing in common, but the biggest difference between the PHY architectures is that the NRZ PHY uses an RS(528, 514, t=7, m=10) code with 10-bit symbols (m=10) that has an error-correction capability of t=7 (seven 10-bit symbols can be corrected), whereas the PAM-4 PHY uses an RS(544, 514, t=15, m=10) code with an error-correction capability of t=15.

The table above shows that the PHY using NRZ signaling, 100GBASE-KR4, only supports high-quality PCB, while the PAM-4 PHY, 100GBASE-KP4, can also support lower-quality, standard PCB. When the quality of the medium is lower, more errors must be corrected, hence the higher capability (t=15) of the RS(544,514) code compared with t=7 for the RS(528,514) code used on high-quality PCB.
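
Both t values follow directly from the Reed-Solomon parameters: an RS(n, k) code carries n − k parity symbols and can correct up to t = (n − k)/2 symbol errors. A quick check against the two codes discussed above:

```python
# The correction capability of a Reed-Solomon RS(n, k) code follows
# directly from its parity overhead: t = (n - k) // 2 symbols.

def rs_correctable_symbols(n, k):
    """Number of symbols an RS(n, k) code can correct."""
    return (n - k) // 2

# KR4 (NRZ PHY) vs KP4 (PAM-4 PHY), both with 10-bit symbols:
print(rs_correctable_symbols(528, 514))  # 7
print(rs_correctable_symbols(544, 514))  # 15
```

The KP4 code buys its extra correction capability with 16 additional parity symbols per codeword, which is the overhead price of running over cheaper PCB material.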

The panelists will also discuss the architectural advantages of the core, such as its flexibility, configurability and scalability, all of which enable the MCMR FEC IP to be uniquely tailored to address customer specific application requirements. The MCMR FEC IP is part of Open-Silicon’s networking IP portfolio that includes the company’s Interlaken IP core, as well as its new Ethernet PCS and Flex Ethernet IPs, which enable high-bandwidth chip-to-chip, Ethernet endpoint and Ethernet transport applications.

The MCMR FEC IP features list:

  • Programmable common and unique alignment markers for each lane
  • Supports both KP4 RS(544, 514) and KR4 RS(528, 514) forward error correction and parity calculation
  • Supports both runtime and static configuration
  • PRBS generator for test patterns
  • Statistics reporting, exception detection and error reporting
  • Runtime-configurable FEC bypass operation
  • Supports lane re-ordering
  • Supports 50G and 25G SerDes rates
  • Supports 400G/200G/100G/50G/25G operation

This webinar is ideal for chip designers and SoC architects of high-speed, high-performance communication and computing applications such as packet processing/NPU, traffic management, switch fabric, switch fabric interface, Framer/Mapper, TCAMs, Serial Memory, FPGA and more.

To register for this webinar, taking place on Thursday, December 7, 2017, 5:00-6:00 PM CET (11:00 AM-12:00 PM EST), please visit:

Open Silicon FEC Webinar

About Open-Silicon
Open-Silicon is a system-optimized ASIC solution provider that innovates at every stage of design to deliver fully tested IP, silicon and platforms. To learn more, please visit www.open-silicon.com.

By Eric Esteve from IPnest


Outsourced Operations: Reduced Risk, Fast Ramp, and Managed Complexity

by Daniel Nenni on 11-29-2017 at 7:00 am

One of the more interesting semiconductor success stories is Apple and how they transformed from a struggling computer company to a dominant chip maker. We covered this story in quite a bit of detail in our book “Mobile Unleashed” in Chapter 7 “From Cupertino” but the short answer to how they did it is: Outsourced Operations.


Apple's outsourced chip design effort started with the brain of the iPod in 2001 and continued with the iPhone in 2007. Apple made the switch to internally developed SoCs nine years later with the A4, which powered the iPhone 4 and the first iPad in 2010. The rest, as they say, is semiconductor history, with Apple now shipping the industry-leading A11 Bionic SoC on a blistering 12-month chip cadence that boggles the semiconductor mind, absolutely.

We are seeing the same type of transformation in the IoT industry where chip start-ups and systems companies alike are wading into chip design through outsourcing. A new whitepaper “Outsourced Operations: Reduce Risk, Accelerate Ramp, Manage Complexity” from Presto Engineering captures this concept quite well.

“Outsourced operations is an economically attractive approach that can reduce risk, improve speed of implementation and mitigate against the potential costs of failure…”

The paper starts out covering current chip trends, competition for fab capacity, compressed volume ramps, and mitigating risks. For IoT specifically, the window of opportunity is fast moving so any type of delay could be fatal, especially a silicon related fault.

The bulk of the paper covers outsourced operations, which is what Presto Engineering does, including the 10 highly specialized skill sets of a successful operations team. The one I have the most experience with is device engineering, which includes analyzing yield and interacting with the foundry. This is no small task, believe me, especially if third-party IP is involved.

Most systems companies already outsource production using off-the-shelf chips as a mere line item in their Bill of Materials. You can see teardowns of just about every leading product online now so we all know what is on the BOMs. The iPhone is my favorite teardown as the transition from off-the-shelf chips to custom silicon is clear over the last ten years. The Amazon Echo is another one to follow. Amazon is definitely following the Apple recipe for silicon and systems success.

For Apple, the SoC was the starting point but now you can see off-the-shelf chips and commercial semiconductor IP magically disappear inside the SoC further increasing competitive advantages. You can also see first hand the software advantages of a systems company having control over their silicon.

Bottom line (summary of the paper):

  • The barriers to entry in semiconductor manufacturing are getting higher as scarce manufacturing capacity is sought by more, larger players.
  • The semiconductor supply chain evolved to support the fabless business model that cannot easily adapt to the needs of the new industrial OEM and IoT customers.
  • Assembling an in-house operations capability requires at least ten specialized skill sets and a significant investment in support infrastructure.
  • Long commitments and slow, costly changes to supply-chain configuration increase the cost of mistakes and the risk of failure.
  • For industrial and IoT projects, outsourced operations is an economically attractive approach that will reduce risk, improve speed of implementation, manage complexity and mitigate against the potential costs of lost opportunity.

Presto Engineering, Inc. provides outsourced operations for semiconductor and IoT device companies, helping its customers minimize overhead, reduce risk and accelerate time-to-market. The company is a recognized expert in the development of industrial solutions for RF, analog, mixed-signal and secured applications, from tape-out to delivery of finished goods. Presto's proprietary, highly secure manufacturing and provisioning solution, coupled with extensive back-end expertise, gives its customers a competitive advantage.

The company offers a global, flexible, dedicated framework, with headquarters in Silicon Valley and operations across Europe and Asia. If you would like to discuss your operations outsourcing needs in more detail, please contact us at 408-372-9500 or info@presto-eng.com, or visit our website at www.presto-eng.com for more information and local contacts worldwide.

CEVA and Local AI Smarts

by Bernard Murphy on 11-28-2017 at 7:00 am

When we first started talking about “smart”, as in smart cars, smart homes, smart cities and the like, our usage of “smart” was arguably over-generous. What we really meant was that these aspects of our daily lives were becoming more computerized and connected. Not to say those directions weren’t useful and exciting, but we weren’t necessarily thinking of smart as in intelligent. For most of us, if we thought about artificial intelligence (AI) at all, we mostly remembered a painful track-record of big promises and little delivery.


The AI part of this changed dramatically for most of us with the application of neural nets for recognition in the big tech companies (Google, Facebook, et al.), particularly in image and speech recognition. For the first time, AI methods not only lived up to the promise but are now beating human experts. (In deference to AI gurus, neural nets have been around for a long time. But their impact on the great majority of us took off much more recently.)

These initial recognition systems ran (and still run) in big data centers, often using specialized hardware (NVIDIA GPUs and Google TPUs, for example) in the training phase, where they learn to recognize objects, sounds and so on based on many thousands of labeled examples (in another nod to experts, some level of self-training is now also becoming popular). Once a system is trained, a similar setup can be used in production in a phase called inference, to classify objects as needed, for example to recognize a traffic sign or a tumor.
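
The training/inference split is easy to see in code: training is the expensive search for good weights, while inference is just a cheap forward pass with those weights frozen, which is why it can move onto the device. A toy example with pre-baked weights standing in for a completed training run:

```python
# Sketch of inference with frozen weights: no learning happens here,
# just a weighted sum and an activation. The tiny 2-input "network"
# and its weights are illustrative, as if produced by training.
import math

W = [3.0, 3.0]   # weights, frozen after training
B = -4.5         # bias, frozen after training

def infer(x1, x2):
    """Forward pass only: weighted sum plus sigmoid activation."""
    z = W[0] * x1 + W[1] * x2 + B
    return 1.0 / (1.0 + math.exp(-z))

# With these weights the network behaves like a soft AND gate:
print(round(infer(1, 1)))  # 1
print(round(infer(1, 0)))  # 0
```

Real networks stack millions of such operations, but the structural point holds: inference is deterministic arithmetic on fixed weights, well suited to an embedded DSP or NPU.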

So real, useful AI running on big iron, check. But it didn’t take long to figure out that where we really wanted to exploit intelligence was in applications, and outsourcing this kind of intelligence to cloud services wasn’t going to work out so well in many cases, particularly thanks to unpredictable response times and power demands. This prompted a lot of investment in R&D and applications, with a goal to move inference to applications, a direction that is already enjoying considerable success.

Obvious examples are any applications in a car where recognition must be both excellent and real-time under all conditions, such as pedestrian detection. Sensor fusion from radar, LIDAR, cameras and other sources, requiring some very sophisticated recognition of complex data in complex environments, can significantly reduce the chance of collision with other objects. Lane departure warnings are another example where recognition is essential. As driver safety systems become more advanced, even before we get to driverless cars we can expect to add road-sign recognition to this list of capabilities.

Drones, beyond their personal entertainment value, are starting to show real value in many areas such as disaster response, real-estate marketing (letting a prospective buyer get more views of a house) and many more applications such as surveying, mapping, remote inspection and monitoring. Requiring experienced remote pilots to guide these drones is impractical; skilled pilots are not widely available and would be too expensive for most of these uses. Adding intelligence to these devices so they can self-fly and navigate is an obvious win (there is even some indication that AI-powered drones may be safer than human-piloted versions). Again, this requires lots of recognition technology.

Smartphones, which launched the revolution in early smarts, were curiously late to the AI party. Sure, they had Siri and similar assistant capabilities, but most of the heavy lifting stayed in the cloud. Only relatively recently did Apple add neural net technology to the iPhone X for Face ID access. Similarly, Google has added a dedicated visual engine to the Pixel 2 series to support advanced camera functions (though it's not clear you can access these yet). Intelligent functions like these may shift the balance of AI inferencing (e.g., in voice recognition) toward the device and away from the cloud.

If you weren't already concerned about big brother, image recognition is now making its way into surveillance systems. We all know the movie/TV set-piece where the cops ask for the tapes from the gas station surveillance system, which they then study for hours to figure out who shot the victim. No more. Now the recognition behind those systems can detect people and vehicles, even identifying known (good or bad) players. I have personal (though less dramatic) experience of this technology. I mistakenly drove in the FasTrak lane (without a pass) for a few hundred yards before recognizing my mistake. A few weeks later I got an automated fine, clearly showing my license plate, which I assume FasTrak automatically read and passed on for license ID. Big brother indeed.

I’ll wrap up with one last example. Security is a big topic these days, especially as we increasingly depend on these smart systems. Part of how we address security is through design, in the hardware and the software. But an ever-present reality for defense is that bad actors will always be one step ahead of us; we will always need ways to detect potential intrusion. The classic signature-based approaches are too expensive for smart applications and, frankly, continue to fall further behind in effectiveness. A more promising approach is behavioral detection, where defense systems look not for classic signatures but for behavioral signatures, which are much more likely to be common across wide families of attack types. This approach is also based on neural nets.
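The behavioral idea can be sketched with a toy example. The sketch below is a deliberately simplified stand-in for the neural-net detectors described above (all the data and feature names are hypothetical): it learns a low-dimensional model of "normal" behavior and flags anything that reconstructs poorly, the same principle an autoencoder-based detector applies at scale.

```python
import numpy as np

# Hypothetical "normal" behavior: feature vectors (e.g. packet rate,
# connection count, payload size) that lie near a 1-D subspace.
rng = np.random.default_rng(0)
t = rng.uniform(1.0, 10.0, size=(200, 1))
normal = t @ np.array([[1.0, 2.0, 3.0]]) + rng.normal(0, 0.05, (200, 3))

# Learn the dominant subspace of normal behavior; PCA via SVD plays the
# role of an optimal linear autoencoder here.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # top principal component

def anomaly_score(x):
    """Reconstruction error: distance from the learned 'normal' subspace."""
    centered = x - mean
    recon = centered @ basis.T @ basis
    return float(np.linalg.norm(centered - recon))

# Calibrate the alarm threshold on known-good behavior.
threshold = max(anomaly_score(v) for v in normal)

print(anomaly_score(np.array([4.0, 8.0, 12.0])) <= threshold)  # normal-looking
print(anomaly_score(np.array([4.0, 0.0, 40.0])) > threshold)   # behaves oddly
```

The point of the calibration step is that nothing attack-specific is encoded anywhere: anything whose behavior strays from the learned normal pattern raises the alarm, which is why one behavioral model can cover wide families of attacks.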

How can these amazing capabilities be deployed on all these platforms? Commonly through neural nets implemented on embedded DSPs, programmed by translating trained networks to that dedicated inference platform. You can read CEVA’s take on this direction in a piece in the Embedded Vision Alliance HERE.


Protecting electronics around the world, SEMI insights
by Daniel Payne on 11-27-2017 at 12:00 pm

SEMI is a worldwide organization with local chapters like the one here in Oregon, where I attended a recent half-day presentation by several industry experts on the topic, “Globalization: How it Shapes the Semiconductor Industry”:

  • Michael Chen, Director, Mentor – A Siemens Business
  • John Brewer, CEO, Amorphyx
  • Ed Pausa, Director, PricewaterhouseCoopers
  • David D’Ascenzo, IP Attorney

Mentor hosted the breakfast event on their serene campus in Wilsonville, and the scent of bacon filled the room as networking quickly started after check-in. Dave Anderson walked us through the agenda for the day, and Michael from Mentor presented first on the topic of, “How do I protect design and value in the global IC manufacturing world?”


Michael Chen, Mentor

On one of his first slides Michael showed the familiar psychology diagram of Maslow’s hierarchy of needs, then cracked a joke that there are two even more important needs lower on the pyramid – battery life and WiFi. The room roared with laughter, approving of his observation that in our mobile-driven society we are so often limited by battery life and WiFi signal when we work or play.

Mr. Chen summarized that the basic technique for protecting our design and value is using mathematics – cryptography – to protect our IP. The Chinese government has announced ambitious plans to invest some $100B in the semiconductor industry so as not to be so reliant on imports, but do we trust the technology and products coming out of China?

The global semiconductor industry is on track to exceed $400B in revenue this year, yet how does it protect against the impact of counterfeit products on the market? Is it just that some distributors are shady about where they get parts? Any electronic part, or even a passive component, can now be counterfeited. Oddly enough, some counterfeit electronics can even outperform the legitimate parts.

In the USA we have a Joint Strike Fighter program (aka F-35) that has parts coming from 14 different countries during assembly, so how does Lockheed Martin know where each component has been, and who has touched it before shipping?

One way to instill trust and slow down counterfeit parts is to use only a trusted fab, like IBM in the old days. Does anyone have security concerns with IBM’s fabs having been sold to GLOBALFOUNDRIES, which is owned by Abu Dhabi?

Another technology solution is for IC design companies to use an IC fingerprinting approach that allows secure field tracking of all devices ever sold. Our industry only uses IDs in about 30% of all electronic components, so we still have a long way to go on this effort.

On the software side Michael introduced the concept of Roots of Trust (RoT), a set of functions in the trusted computing module that is always trusted by the computer’s operating system. Crypto technology is used to uniquely identify each chip, with the security encoded inside the chip itself – a so-called Physically Unclonable Function (PUF).
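Conceptually (this is an illustrative model, not any vendor’s actual PUF design; all parameters here are invented), a PUF works because manufacturing variation fixes a unique but slightly noisy bit pattern into each die. Majority voting over repeated readings stabilizes the bits, and hashing the result yields a device-unique key that never has to be stored anywhere:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical silicon fingerprint: 64 bits set by manufacturing variation.
true_response = rng.integers(0, 2, 64)

def read_puf(noise=0.02):
    """One noisy readout: each bit flips with small probability."""
    flips = rng.random(64) < noise
    return np.bitwise_xor(true_response, flips.astype(int))

def derive_key(num_reads=9):
    """Majority-vote the noisy reads, then hash them into a stable key."""
    votes = sum(read_puf() for _ in range(num_reads))
    stable = (votes * 2 > num_reads).astype(int)
    bits = "".join(map(str, stable))
    return hashlib.sha256(bits.encode()).hexdigest()

# Two independent derivations agree despite readout noise, giving a
# repeatable per-chip identity without storing a key on the device.
print(derive_key() == derive_key())
```

Because the key is regenerated from the physics of the individual die rather than stored in memory, a counterfeiter cannot simply copy it out of one part and into another.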

Another approach for hardening the security of an IC is to camouflage layers of the IC design in order to thwart reverse engineering. Mentor offers software in this area for camouflaging layouts.


Ed Pausa from PwC provided more numbers, charts and graphs than I could digest in a lifetime, all on the topic of the historical and projected growth of the semiconductor industry in China. If this area concerns you, then please visit their site for more details. As an example, he showed a chart of the worldwide semiconductor consumption market by region, showing the growth of China’s share from 2003 to 2016:


John Brewer, Amorphyx

John Brewer’s latest startup, based in Oregon, has transferred its quantum-based display technology to a major Chinese company – not as a black box, but as a fully disclosed, physics-based approach. He predicts that within a few years we will buy a 65″ TV for less than $1,000 from our local retailer or from Amazon.com online, because of the new technology that Amorphyx has discovered and licensed. They limit each technology transfer to a specific screen size and chemistry, so when a TV company wants to go to a different display size it needs to buy into another technology transfer.