
CNBC Qualcomm and SemiWiki

by Daniel Nenni on 01-18-2019 at 7:00 am

Over the holidays I did an interview with CNBC on the subject of Qualcomm. The producer had read the History of Qualcomm chapter in our book Mobile Unleashed and wanted to base a 15-minute report on it. The interview lasted 90 minutes but of course only snippets of what I said were used. You can see the recorded report by clicking on the image below. You can also see the History of Qualcomm chapter HERE for reference.

Before you judge, here is the other 90 minutes of the interview as I remember it:

I started out talking about the fabless semiconductor transformation during the 1980s and 1990s, when Qualcomm was one of the leaders. If not for Qualcomm we would not have the mobile electronics we have today. Qualcomm also perfected the multi-sourcing foundry business model, which is critical since competition is what drives our industry. Competition is what enables low-cost consumer electronics and made it possible for us to have supercomputer-class devices in our pockets, absolutely.

I remember working with QCOM down to 40nm when their chips were being manufactured at TSMC, UMC, SMIC, and Chartered. QCOM would design first on TSMC using special design rules and design practices that enabled multiple manufacturing sources. Most other companies did this as well but QCOM was the best at it for sure. TSMC was not happy about this of course, since they did all of the heavy process technology lifting only to lose high-volume manufacturing business to the other foundries, but that was foundry life back in the 1980s and 1990s.

On the business side, however, QCOM was very anti-competition. QCOM started out as a systems company making navigation and communication systems. QCOM then pivoted into a chip maker and, more importantly, a patent stronghold based on those chips. In fact, QCOM has always made more money from licensing patents than from chips. The reason is that they adopted a “no license – no chip” business model, so you had to license the patents if you wanted a QCOM chip. QCOM was the only game in town for modems and leading-edge SoCs at the time, so they could play the chip game by their own rules.

Buying commercially available chips to launch products, then over time developing your own chips, is a critical part of the semiconductor ecosystem if you are a consumer electronics company. Smartphone giants like Apple, Samsung, and Huawei do this for a couple of reasons. Price, of course: at high volumes you can make your own chips cheaper than buying them. You can also more easily differentiate features from your competitors. Battery life is one example: the fewer chips you have in your device, the longer the battery life. If you look at the iPhone tear-downs over the years you will see supporting chips disappear into the SoC on a regular basis, thus saving power and space. The modem, however, remains a separate chip in Apple phones.

The other big advantage of making your own chips is prototyping and emulation. As you develop your chip you can verify it quickly using prototyping and you can also start software development months before the chip is done. This is a huge advantage for smartphone companies that control their own software ecosystem like Apple.

Bottom line: Controlling your silicon really is required to be a leader in consumer electronics.

The problem is that to buy a QCOM chip you also had to license the patents, which made it much more difficult to develop your own chips without getting letters from QCOM legal. QCOM pricing was also a problem. For the license and chip, QCOM got a percentage of the smartphone price rather than a fixed chip price. That’s like going to the grocery store and having the price of the final item you buy be a percentage of what you already have in the bag. Having been in the IP business myself I was both amazed and impressed that QCOM got away with this. It really was a disruptive business model for fabless semiconductor companies.
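To make the pricing model concrete, here is a quick sketch with entirely hypothetical numbers (actual QCOM royalty rates and chip prices were confidential and disputed in court):

```python
# Hypothetical numbers only: comparing a percentage-of-device royalty
# model against a fixed chip price. None of these figures are QCOM's
# actual rates.

def silicon_cost(device_price, royalty_rate, chip_price):
    """Total cost under a 'no license - no chip' style model:
    fixed chip price plus a royalty on the whole device."""
    return chip_price + device_price * royalty_rate

# Same $20 modem, same 5% royalty, two different phones:
budget = silicon_cost(400, 0.05, 20)     # $20 chip + $20 royalty = $40
flagship = silicon_cost(1000, 0.05, 20)  # $20 chip + $50 royalty = $70
print(budget, flagship)
```

The point of the model is visible in the output: the same chip costs the flagship maker far more, because the royalty scales with what is already "in the bag."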

As it turns out it was a bit too disruptive. The U.S. and other governments have accused Qualcomm of unfairly competing and charging excessive rates for its technology. China, Korea, Taiwan, and the EU have already fined or settled with QCOM. The US case is in progress now. I would be very surprised if the FTC did not get a favorable first ruling but we shall see. Hopefully there will be a quick settlement so we can all get back to the business at hand and that is making semiconductors for the greater good of consumer electronics.

This is just my opinion of course. I don’t have financial ties to either company but I do own an iPhone 10. I don’t particularly care for it. The one nice thing I can say is that the battery lasts MUCH longer than previous iPhones. I can now get through a full day of use without charging which is a nice change. The excessive price however is a bit disruptive and at some point in time Apple may pay dearly for that as well.

Update:
The FTC rested their case against QCOM and in my opinion it was much weaker than expected. QCOM is presenting their defense followed by closing arguments but as of today I don’t think the FTC made their case. Opinions are split of course. In fact a hedge fund shorted QCOM stock expecting a big verdict for the FTC:

Qualcomm Incorporated: Ignoring The Legal Risk Is Patently Ridiculous

“The FTC has brought a powerful legal case against the company, and the trial [conducted entirely before a judge, not a jury] is currently underway,” the note said. “We believe Qualcomm will lose.”

If you want to follow the case Twitter is a good place to start: #FTCQCOM


Needham Growth Conference Notes 2019

by Robert Maire on 01-17-2019 at 12:00 pm

We attended the Needham Growth Conference, which is one of the first conferences of the year and falls in the quiet period before most companies report, so even though there was no “official” comment from most companies on the quarter, the surrounding commentary spoke volumes:

  • The down cycle (and everyone admits it's a cycle, and no one admits to ever saying it wasn't still cyclical…) is ongoing and appears to be bouncing along a bottom or near-bottom level of business.
  • There are no early signs of any sort of upturn or change in the cycle.
  • Hopes of an H2 recovery are currently just that…“hopes”
  • It's unclear whether we could have another leg down, and no one was ruling it out or in….
  • Everyone laid the blame primarily on the memory market, although foundry/logic is no great shakes either

All the companies in the space are small- and mid-cap suppliers and sub-suppliers, not the core big-cap names: AMAT, LRCX, KLAC, ASML or TEL.

Given the tone of comments overall, our takeaway is that the large-cap companies will likely have to take numbers down further and get incrementally more negative when they report, based upon what we heard from the other companies at the conference.

While the stocks seem to have hit some resistance floors, it's not like they are bouncing off a bottom. Each time the stocks start to recover a bit they seem to get pushed down again by another piece of negative news, so we seem to be stuck in a low range until there are some clearer signs of a recovery or at the very least a firm bottom (which we have not yet seen…)

One of the comments we heard at the conference, and concur with, is the concern about a “death of a thousand cuts,” where the industry continues to get negative incremental news rather than just getting the bad news over with.

We can’t imagine that TSMC will increase CAPEX when it reports this week. Apple, which is 20% of their business, is obviously not doing all that well and surely has cut back even further on orders. It seems highly unlikely that TSMC will increase capacity, and we think there is significant equipment reuse between 7nm and 5nm, so technology spending will not make up for weaker capacity spend.

Memory pricing, and therefore capital spending, remains weak. It will take some time for the excess supply to get worked off, especially in light of reduced demand. It would take time for excess supply to be worked off even with good demand, but with falling demand it's difficult to handicap.

ACLS – Happy for non-mainstream business
While Axcelis clearly has exposure to the memory market, they also have a lot of non-core business which is much less impacted than the mainstream, leading-edge market. That “outside the core” business will help soften the weakness but not eliminate it. The recently announced buyback is appropriate and positive. The company remains on plan and delivering on promises.

Cabot – New non-semi business coupled with being a consumables play
Cabot has been one of the more consistent performers as they are a consumables supplier rather than a capital equipment supplier, and as such are driven by wafer starts and layer count, not capital spend. However, wafer starts are not super strong…but it's still better than being an equipment company.

Formfactor – Memory still weak and Intel is still a bit slow to ramp…
Also in the consumables business is FORM. However, FORM is more new-design or different-design driven. They have a better balance between memory and logic than previously. Everything else is OK, we just need a demand recovery.

MKSI – Most experienced management – Upcoming ESI close
MKSI’s CEO, Jerry, has been with the company for 35 years; he has seen it all before and knows how to deal with it and make it right. The team at MKS did a great job with Newport and we are sure they will do the same great job with ESI. Management is not running in fear of the cycle but dealing with it head-on. He contemplated the length of the downturn through his “canoe” analogy of varying shapes.

UCTT – As an AMAT and Lam supplier it's hard to outperform
It's harder to escape the pull of gravity when the customers are weak, as is the case here. Acquisitions have helped in the past but nothing that was exciting or at a great valuation. We are just waiting for a recovery here to get things moving again.

Ichor – Great, realistic, experienced management – weak customers
Much as with Ultraclean, it's hard to escape the customers' weakness. Acquisitions have been very good and jump-started ICHR since its IPO. Management is making the best of the situation and buying back its very cheap stock.

COHU – In early innings of integration, not helped by weakness
So far, so good, but the jury is still out on the combination of the two companies. Obviously it's that much harder to do an integration during tougher times, but hopefully the results will be worth it.

TowerJazz – Towers above the other players
Russell Ellwanger has taken what was an uncertain company and turned it into a great business model. He has done a great job of not only doing great “roll-up” acquisition deals but, more importantly, managing them to perfection after rolling them up. Not “integrating” them into the “borg” of a large company but optimizing the different models and expertise of each individual business. Business may be soft but the model works well, with strong process capability and technology on top of sound business practices. They will outperform in the recovery.

Summary: Good companies, Ugly neighborhood

Waiting at the station
It feels like everyone is waiting at the station for the recovery train to come along and whisk them off to good times again. The problem is we are just waiting, and there is not much within our control to hurry it along. On top of that, the train's timing is unclear at best and in fact could be longer than hoped for. There is a bit of helpless resignation.

The semi equipment industry is at the bottom of a very long trickle-down that starts with Apple and China and flows through TSMC, Samsung, Micron, etc. We need to see signs of a recovery coming from the top down, as any recovery won't start from the bottom up…and that we are still waiting on.


The New Intel CEO

by Daniel Nenni on 01-17-2019 at 7:00 am

Interestingly, in some circles I’m known as an “Intel basher” but nothing could be further from the truth. I grew up with Intel and give them full credit for bringing serious compute power to our desktops. My first Intel-powered computer was an IBM XT and I have had dozens of Intel-based desktops and laptops since then. As a result, I hold Intel to a much higher standard and that includes CEOs. I wrote the blog “The Legacy of Intel CEOs” at the end of 2014 criticizing the last two Intel CEOs and I stand by it. More than 80,000 people read that blog and it is still getting traffic even today.

I have also been critical of the Intel Board of Directors (specifically Andy D. Bryant) and I still am, but that is another blog entirely. Andy has got to go, without a doubt. I am not critical of Intel as a whole as I still believe they are THE greatest semiconductor company of our times. Think about it, where would we be today without the contributions of Intel?

It is my sincere hope that the new CEO will be the start of a technology renaissance at Intel. Unfortunately, there is only one candidate of the rumored five that has a chance in my opinion and that is Johny Srouji. I first learned about Johny while researching our book “Mobile Unleashed” as he was part of the team that brought us the iPhone and iPad SoCs. Prior to Apple (pre 2008) Johny worked for Intel Israel (14 years) and IBM (6 years), so yes he has Intel experience but I would not call him an Intel insider.

The other rumored candidates are Lisa Su (CEO of AMD), Navin Shenoy (Intel), Murthy Renduchintala (QCOM/Intel), and Diane Bryant (ex-Intel). I don't personally see any of them succeeding, but Lisa Su would be my choice out of the four. Bottom line: the next Intel CEO MUST be an outsider! I remember stating some time ago that Intel should buy NVIDIA just to get their CEO, but of course they didn't, which clearly was a mistake. CEOs can make or break a company for sure.

Hopefully the Intel CEO question will be answered on the conference call next week. The Intel CEO search started last June, which to me is a very long time to be without a leader. Believe it or not, I have a lot of Intel friends, many of whom still work there. According to LinkedIn I have more than 500 connections who currently work for Intel and more than 5,000 who have been employed there. When I ask them about the CEO debacle most just shake their heads. Andy Bryant really showed his true backstabbing colors there. One thing that they all agree on is that BK did not deserve the ousting he got, and another thing that most agree on is that if Murthy gets the CEO job, Intel resumes will hit the streets. Just the opposite if Johny is hired: expect an influx of resumes from just about every company in the semiconductor community to hit his desk, absolutely.

Just my opinion of course but I am an “internationally recognized semiconductor industry expert” read by millions of people so there’s that. I still do dishes, fold laundry, and empty the trash at home though so my horse is not very high.


A Sharper Front-End to Intelligent Vision

by Bernard Murphy on 01-16-2019 at 7:00 am

In all the enthusiasm around machine learning (ML) and intelligent vision, we tend to forget the front-end of this process. The image captured on a CCD camera goes through some very sophisticated image processing before ML even gets to work on it. The devices/IPs that do this are called image signal processors (ISPs). You might not be aware (I wasn’t) that Arm is in this game and has been for 15+ years, working with companies like Nikon, Sony, Samsung and HiSilicon, and is now targeting those trillion IoT devices they expect, of which a high percentage are likely to need vision in one form or another.


So what do ISPs do? As Thomas Ensergueix (Sr Dir of Embedded at Arm) explained it to me, this largely comes down to raising the level of visual acuity in “digital eyes” to the level we have in our own eyes. A big factor here is handling the high dynamic range (HDR) that you will often find in raw images. And to get better than human eyes, you want performance in low-light conditions and the ability to handle 4K resolution (professional photography level) at smartphone frame rates or better.

Look at the images of a street scene above, a great example of the dynamic range problem. Everything is (just barely) visible in the standard image on the left, but in attempting to balance between the bright sky and the rest of the image, the street becomes quite dark; you wouldn’t even know there was a pedestrian on the right, about to walk out into the road. You can’t fix this problem by twiddling global controls; you need much more sophisticated processing through a quite complex pipeline.

An ISP pipeline starts with raw processing and raw noise reduction, followed by a step called de-mosaicing to fill out the incomplete color images that result from how imagers manage color (a color filter array overlaying the CCD). Then the image goes into HDR management and color management steps. Arm views noise reduction, HDR management and color management as particular differentiators for their product line.


Thomas said that in particular they use their Iridix technology to manage HDR better than conventional approaches. Above on the left, an image has been optimized using conventional global HDR range compression. You can see the castle walls quite clearly and the sky isn’t a completely white blur, but it doesn’t accurately reflect what you would see yourself. The image on the right is much closer. You can see clouds in the sky, the castle walls are clearer, as are other areas. This is because Iridix uses local tone mapping rather than global balancing to get a better image.
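The local-versus-global distinction is easy to illustrate. The toy sketch below is not Arm's Iridix algorithm (which is proprietary); it just shows the general idea on a made-up 8x8 "image" where one global curve crushes the dark street while per-tile curves preserve it:

```python
# Toy illustration of global vs local tone mapping (NOT Arm's Iridix;
# just the general principle on a fake 8x8 HDR frame).

def global_map(img, peak=255.0):
    # One curve for the whole frame: scale everything by the global max.
    m = max(max(row) for row in img)
    return [[p * peak / m for p in row] for row in img]

def local_map(img, tile=4, peak=255.0):
    # Each tile gets its own curve, so dark regions keep local contrast.
    # Assumes image dimensions are divisible by the tile size.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            m = max(img[y][x] for y in range(ty, ty + tile)
                              for x in range(tx, tx + tile))
            for y in range(ty, ty + tile):
                for x in range(tx, tx + tile):
                    out[y][x] = img[y][x] * peak / m
    return out

# Top half: bright sky (~4000); bottom half: dark street (~40).
hdr = [[4000.0] * 8 for _ in range(4)] + [[40.0] * 8 for _ in range(4)]
g, l = global_map(hdr), local_map(hdr)
print(g[6][0], l[6][0])  # street pixel: ~2.55 globally, 255.0 locally
```

Under the global curve the street pixel is nearly black (2.55 out of 255), exactly the "invisible pedestrian" problem; the local map brings it back to full range.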

Arm recently introduced two new products including this capability: the Mali-C52 for full-range applications requiring near human-eye response, and the Mali-C32 for value-priced applications. In addition to improved HDR management they use their Sinter and Temper technologies to reduce spatial and temporal noise in images. In color management, beyond basic handling they have just introduced a new 3D color enhancer to allow subjective tuning of color. Finally, all of this is built on a new pixel pipeline which can handle 600M pixels/sec, easily enabling DSLR resolution at 60 frames/sec.
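A quick sanity check on that pipeline rate (the 600M pixels/sec figure is from the article; the headroom arithmetic is mine):

```python
# Does 600M pixels/sec cover 4K video at 60 frames/sec?
rate = 600e6              # quoted pixel pipeline rate, pixels/sec
px_4k = 3840 * 2160       # ~8.3M pixels per 4K frame
print(round(rate / px_4k))  # ~72 fps of 4K throughput
```

So the pipeline has margin beyond 4K at 60 fps, consistent with the "smartphone frame rates or better" claim earlier.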

So when you think about smart vision for pedestrian detection, intruder detection or whatever application you want to target, spare a thought for the front end image processing. In vision as in everything else, garbage-in inevitably becomes garbage-out. Even less-than-perfect-in limits the accuracy of what can come out. Object recognition has to start with the best possible input to deliver credible results. A strong ISP plays a big part in meeting that objective.


Applying Generative Design to Automotive Electrical Systems

by Daniel Payne on 01-15-2019 at 12:00 pm

Scanning headlines of technology news every day I was somewhat familiar with the phrase “Generative Design” and even browsed the Wikipedia page to find this informative flow-chart that shows the practice of generative design.


Generative design is an iterative design process that involves a program that will generate a certain number of outputs that meet certain constraints, and a designer that will fine tune the feasible region by changing minimal and maximal values of an interval in which a variable of the program meets the set of constraints, in order to reduce or augment the number of outputs to choose from.
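That generate-evaluate-refine loop can be sketched in a few lines. Everything below is illustrative: the two harness variables, the constraint limits, and the function names are all hypothetical, not from any real tool:

```python
# Minimal sketch of the generative design loop: a program generates
# candidates, constraints filter them, and the designer tightens an
# interval to shrink the set of outputs. All variables and limits here
# are made up for illustration.
import random

def generate(bounds, n=200, seed=0):
    rng = random.Random(seed)
    return [{"wire_gauge": rng.uniform(*bounds["wire_gauge"]),
             "bundle_len": rng.uniform(*bounds["bundle_len"])}
            for _ in range(n)]

def feasible(c):
    # Example constraints: a weight proxy and a maximum-length rule.
    weight = c["wire_gauge"] * c["bundle_len"]
    return weight < 30.0 and c["bundle_len"] < 12.0

bounds = {"wire_gauge": (0.5, 5.0), "bundle_len": (1.0, 20.0)}
survivors = [c for c in generate(bounds) if feasible(c)]

# The designer tightens one interval, reducing the outputs to choose from:
bounds["bundle_len"] = (1.0, 8.0)
tighter = [c for c in generate(bounds) if feasible(c)]
print(len(survivors), len(tighter))
```

The "designer" step is the single line that narrows `bundle_len`; in a real flow that narrowing is informed by reviewing the generated candidates.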

Right away I could see the benefits of using an automated approach to generative design, because the designer can quickly see which design meets all of the requirements in an optimal way. Looking at the challenges of automotive design and the specific quest to develop autonomous vehicles it becomes clear that there is a lot of complexity in such a system:

  • Dozens of sensors: Cameras, radar, LIDAR
  • Decentralized Electronic Control Units (ECUs)
  • Multiple data networks
  • Wiring between sensors, ECUs and battery

There’s a torrent of data being generated by an Autonomous Vehicle (AV), gigabits per second, which feeds into decision and control systems, keeping us moving along the road safely. Experts calculate that it will take billions of miles of testing to verify that an AV is safe enough.

Fortunately there are intermediate steps towards reaching full level 5 autonomy, and the automotive systems have integrated sensors, computers and networks working together to meet the requirements of safe driving.

Features for level two autonomy could include:

  • Active cruise control
  • Lane departure warning
  • Lane keep assist
  • Parking assist

Meeting level two autonomy still requires that your car has 17 or so sensors: ultrasonic, long-range radar, short-range radar, surround cameras. The driver still has to manually respond to a notification about drifting outside of a lane and use steering to maintain a safe path.

The demands of level five AV are much higher than level two, so such an AV would have 40+ sensors:

  • Ultrasonic
  • Surround cameras
  • Long-range radar
  • Short-range radar
  • LiDAR
  • Long range cameras
  • Stereo cameras
  • Dead-reckoning sensors

With each added sensor there is an accompanying increase of wiring in the harness, and the need for increased computation of the massive data stream being generated by all of those sensors.

In America we have the Big 3 in automotive, but for AV there are some 144 companies developing products and services. Semiconductor spending on Advanced Driver Assistance Systems (ADAS) projects is in high-growth mode according to Strategy Analytics, reaching $13B by 2025:

So how do the major automotive OEMs and startup semiconductor companies designing for ADAS make their systems safe and get to market quickly? Generative Design certainly has the promise to leverage the best practices of experienced engineers in a process that can optimize a system while meeting safety requirements.

Let’s take another look at the Generative Design process, and apply it to automotive design where rules-based automation can generate a wide range of ideas for hardware, software, networks and logic combinations:

Your most experienced engineers would create the rules to start with; then less experienced engineers can run the generative design automation to produce many scenarios to choose from. Functional models are the starting foundation of the electrical system to be designed, but they don't need implementation details. Your functional model contains communication networks, power sources and electronic components.

Software tools like Capital from Mentor can read your functional models as part of the electrical systems design environment. All of the generated architectures have system logic, networks, hardware and software. Your past experience has been captured in the rules, which helps assure that safety goals are met, while past mistakes are not repeated.

Pencil-and-paper methods are no longer sufficient to optimize an automotive electrical system where you need to optimize across performance, power consumption, volume, weight and thermal domains. Design automation through generative design is going to make your engineering team more productive, producing an optimal design in less time than previous methods.

Fewer errors can be expected with generative design because there’s less manual entry and effort involved. Engineers across disciplines can share data effectively – Electrical, Mechanical, PCB, Software.

Data continuity means that you can trace every system requirement through to implementation, and know that your system can trace requirements to any domain and that you are in compliance with each requirement:

Design rules will check for flaws like unterminated wire ends, differences between graphical and physical bundle length, maximum wire currents, generated heat, and other best practices that you have developed over the years. Impacts of design changes can be quickly understood: changing an ECU to a different location, or changing a network, may change performance in another location.
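Two of the rule checks mentioned above (unterminated wire ends, maximum wire current) are simple enough to sketch. This is a toy illustration with made-up wire records, not Capital's actual rule engine or API:

```python
# Toy design-rule check over a harness wire list. The record fields,
# names and limits are hypothetical, for illustration only.
wires = [
    {"name": "W1", "ends": ("ECU1", "SensorA"), "current_a": 2.0, "max_a": 5.0},
    {"name": "W2", "ends": ("ECU2", None),      "current_a": 1.0, "max_a": 5.0},
    {"name": "W3", "ends": ("ECU1", "Battery"), "current_a": 7.5, "max_a": 5.0},
]

def check(wires):
    flaws = []
    for w in wires:
        if None in w["ends"]:                 # unterminated wire end
            flaws.append((w["name"], "unterminated end"))
        if w["current_a"] > w["max_a"]:       # exceeds rated current
            flaws.append((w["name"], "exceeds max current"))
    return flaws

print(check(wires))  # flags W2 (unterminated) and W3 (over current)
```

A real rules engine adds bundle-length, thermal and clearance checks on the same principle: encode the best practice once, then apply it automatically to every generated candidate.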

Using a unified data source ensures that engineers on the team know when a change happens and how it affects their domain. If you move an ECU then the impact on timing, signal integrity, physical clearance and collision can be determined. You know the impact of each change to your system.

Summary
Generative design is quickly moving into the automotive design realm because it helps design teams model, analyze and optimize across multiple domains to meet stringent safety requirements of ADAS and the goal of level five autonomous vehicles. Software tools like Capital are available from vendors like Mentor. Read the six page White Paper for more details.



IDT Invests in IoT Security

by Daniel Nenni on 01-15-2019 at 7:00 am

As we are preparing for the “IoT Devices Can Kill and What Chip Makers Need to Do Now” webinar next week, Intrinsic-ID did a nice press release with Integrated Device Technology. IDT is one of the companies I grew up with here in Silicon Valley that pivoted its way to a $6.7B acquisition by Renesas.


IDT is focused on automotive, high-performance computing, mobile and personal electronics, network communications, and wireless infrastructure. The common thread amongst all of those applications is security, absolutely.

SUNNYVALE, Calif., Jan. 14, 2019 – Intrinsic ID, the world’s leading provider of digital authentication technology for Internet of Things security, today announced Integrated Device Technology, Inc. (IDT), has licensed QuiddiKey, based on SRAM PUF technology, for security in its wireless charging products.

“Wireless power implementation is growing rapidly and expanding into multiple markets, so Intrinsic ID’s ability to help us deliver our technology in a secure, scalable manner was key to our choice,” said Dr. Amit Bavisi, senior director of SoC mobile engineering for IDT’s Wireless Power Division. “We chose QuiddiKey primarily for delivering cost-effective and robust foundational security. This strong anchor of trust singularly enables our customers to maximize their revenue and reassure their customers with the ability to hold counterfeits at bay. An additional, and more important, benefit is that the use of strong unclonable authentication for legitimate branded devices keeps consumers safe from charging hazards with counterfeits, which may not comply with industry-standard safety requirements.”

IDT delivers innovative wireless power solutions both for the receivers used in smartphones and other applications, as well as the transmitters used in charging pads and automotive in-car applications.

QuiddiKey is based on Intrinsic ID’s patented SRAM (Static Random Access Memory) PUF (Physical Unclonable Function) technology and allows semiconductor manufacturers to deliver IoT security via a unique fingerprint identity without the need for an additional security chip, such as a secure element. A root key generated by QuiddiKey delivers a high bar of security because it is internally generated, is never stored, and anchors all other keys and security operations to the IoT-connected product, such as a smart home appliance.
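The "internally generated, never stored" property comes from the fuzzy-extractor construction that PUF schemes generally use. The sketch below illustrates that general idea with a simple repetition code; it is emphatically not Intrinsic ID's QuiddiKey implementation, which uses its own (unpublished) error-correction and key-derivation details:

```python
# Bare-bones fuzzy-extractor sketch for an SRAM-PUF-style key (the
# general textbook idea, NOT Intrinsic ID's actual scheme). Enrollment
# XORs the reference SRAM pattern with an encoded random secret;
# reconstruction tolerates noisy power-up bits via majority voting.
import hashlib, random

R = 7  # repetition factor: each secret bit is encoded R times

def encode(bits):   # repetition-code encoder
    return [b for b in bits for _ in range(R)]

def decode(bits):   # majority vote over each group of R bits
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def enroll(sram_ref, rng):
    secret = [rng.randrange(2) for _ in range(len(sram_ref) // R)]
    helper = [s ^ c for s, c in zip(sram_ref, encode(secret))]
    key = hashlib.sha256(bytes(secret)).hexdigest()
    return helper, key   # helper data can be stored publicly; key is not stored

def reconstruct(sram_noisy, helper):
    secret = decode([s ^ h for s, h in zip(sram_noisy, helper)])
    return hashlib.sha256(bytes(secret)).hexdigest()

rng = random.Random(42)
ref = [rng.randrange(2) for _ in range(70)]            # enrollment power-up
noisy = [b ^ (i % 7 == 0) for i, b in enumerate(ref)]  # later power-up, 1 flip per group
helper, key = enroll(ref, rng)
print(reconstruct(noisy, helper) == key)               # prints True
```

The key exists only transiently, recomputed from the chip's own startup pattern; an attacker who reads the stored helper data alone learns nothing useful without the physical SRAM.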

“IDT wireless charging solutions are used for an expanding range of applications, including small-footprint products such as fitness and health monitors, and charging solutions for smartphones. Authentication has become a necessity as providers of charging base offerings require control over their end-to-end charging system,” said Pim Tuyls, Intrinsic ID’s chief executive officer. “QuiddiKey’s ability to create unclonable identities for any IoT-connected product without the need for additional hardware is critical to profitably scale the IoT.”

About Intrinsic ID
Intrinsic ID is the world’s leading digital authentication company, providing the Internet of Things with hardware-based root-of-trust security via unclonable identities for any IoT-connected device. Based on Intrinsic ID’s patented SRAM PUF technology, the company’s security solutions can be implemented in hardware or software. Intrinsic ID security, which can be deployed at any stage of a product’s lifecycle, is used to validate payment systems, secure connectivity, authenticate sensors, and protect sensitive government and military systems. Intrinsic ID technology has been deployed in more than 125 million devices. Award recognition includes the IoT Breakthrough Award, the IoT Security Excellence Award, the Frost & Sullivan Technology Leadership Award and the EU Innovation Radar Prize. Intrinsic ID security has been proven in millions of devices certified by Common Criteria, EMVCo, Visa and multiple governments. Intrinsic ID’s mission: “Authenticate Everything.” Visit Intrinsic ID online at www.Intrinsic-ID.com.


Specialized AI Processor IP Design with HLS

by Alex Tan on 01-14-2019 at 12:00 pm

Intelligence, as in the term artificial intelligence (AI), involves learning or training, depending on which perspective it is viewed from, and it has many nuances. As the basis of most deep learning methods, neural-network-based learning algorithms have gained traction since it was shown that training a deep neural network (DNN) using a combination of unsupervised pre-training and subsequent supervised fine-tuning could yield good performance.

A key component of emerging applications, AI-driven computer vision (CV) has delivered refined human-level visualization, achieved by applying algorithms such as DNNs to convert digital image data to a representation understood by the compute engine, which is increasingly moving towards the network edge. Some of the mainstream CV applications are embedded in smart cameras, digital surveillance units and Advanced Driver Assistance Systems (ADAS).

DNNs have many variations and have delivered remarkable performance for CV-related tasks such as localization, classification and object recognition. Applying data-driven DNN algorithms to image processing is computationally intensive and requires special high-speed accelerators. It also involves performing convolutions. A technique frequently used in the digital signal processing field, convolution is a mathematical way of combining two signals (the input signal and the impulse response of a system, which captures how an impulse decays in that system) to form a third signal, the output of the system. It reflects how the input signal is shaped by that system.
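The "combine two signals" description is concrete in one dimension. A minimal sketch:

```python
# Minimal 1-D convolution: each input sample is spread through the
# impulse response h, and overlapping contributions accumulate into y.
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj   # input sample xi "decays" through h
    return y

# A single impulse through a decaying system returns the impulse
# response itself (followed by zeros):
print(convolve([1, 0, 0], [1.0, 0.5, 0.25]))
```

The 2-D convolutions inside a DNN layer are the same multiply-accumulate pattern, which is why CV accelerators are organized around MAC units, as described below.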

The design target and its challenges
As a leading provider of high-performance video IP, Chips&Media™ has developed and deployed various video codec IPs for a wide range of standards and applications, including fully configurable image signal processing (ISP) and computational photography IP.

The company's most recent product, a computer vision IP called c.WAVE100, is designed for real-time object detection, processing input video at 4K resolution and 30fps. Unlike the programmable, software-based IP approach, the team's goal was to deliver a PPA-optimal hardwired IP with a mostly fixed DNN (with limited runtime extensions). The underlying DNN-based detection algorithm was composed of MobileNets, Single Shot Detection (SSD) and the company's own proprietary optimization techniques.

The selection of MobileNets on top of an optimized accelerator architecture that employs depthwise separable convolutions is intended for a lightweight DNN. The four-layer architecture consists of two layers (LX#0, LX#2) intended for conventional and depthwise convolution, and another pair (LX#1, LX#3) for pointwise convolution, as shown in figure 2. On the other hand, SSD is an object detection technique using a single DNN and multi-scale feature maps.
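The reason depthwise separable convolutions (the MobileNets building block) make for a lightweight DNN is a straightforward multiply count. A quick comparison, using example channel counts of my own choosing:

```python
# Multiplies per output position: a standard KxK convolution vs a
# depthwise KxK + pointwise 1x1 pair. Channel counts here are just
# illustrative examples, not c.WAVE100's actual layer sizes.
def standard_muls(k, cin, cout):
    return k * k * cin * cout

def separable_muls(k, cin, cout):
    return k * k * cin + cin * cout   # depthwise stage + pointwise stage

k, cin, cout = 3, 64, 128
std, sep = standard_muls(k, cin, cout), separable_muls(k, cin, cout)
print(std, sep, round(std / sep, 1))  # 73728 vs 8768, ~8.4x fewer
```

For a 3x3 kernel the savings approach the 8-9x range seen here, which is exactly the kind of reduction a hardwired, PPA-optimal accelerator is after.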

Since DNN-based CV processing is inherently repetitive, revolving around the MAC unit with massive data movement through the network layers and FIFOs, the team's objective was a tool that allows rapid architectural exploration to yield an optimal design and shorten development time for time-to-market. The DNN model was trained on a large dataset using the TensorFlow™ deep learning framework. As illustrated in figure 3, the generated model was then captured in C and synthesized into RTL.

To fairly assess the effectiveness of an HLS-based solution versus the conventional RTL-capture approach, two concurrent c.WAVE100 IP development tracks were assigned to two different teams. This arrangement mitigated risk by not disrupting the existing production approach, which relies on manual Verilog coding. Furthermore, none of the HLS team members had prior exposure to the HLS tool or flow.

The team selected the Catapult® HLS Platform from Mentor, as it gives algorithm designers a way to generate high-quality RTL from C/C++ and/or SystemC descriptions that is targetable to ASIC, FPGA, and embedded FPGA solutions. Big pluses on the feature side include the platform's ability to check the design for errors prior to simulation, its seamless and reusable testing environment, and its support for formal equivalence checking between the generated RTL and the original source. Power-optimized RTL, ready for simulation and synthesis, can be rapidly generated through Catapult using the flow shown in figure 4.

In addition to a shortened time to market at lower development cost, the team pursued three key benefits:
– To enable quick late-stage design changes at the C/C++ algorithm level, regenerate the RTL code and retarget it to a new technology.
– To facilitate what-if hardware and architecture exploration for PPA without changing the source code.
– To accelerate schedules by reducing both design and verification effort.

Flow comparison and results
At the end of the trials, the team made a comparison of the two flows as tabulated below:


The team's takeaways from this concurrent development and evaluation effort, comparing HLS-based design against traditional RTL methods, are as follows:

  • Easy to convert algorithmic C models to synthesizable C code. Unlike RTL, there was no need to write FSMs or to consider timing between registers. The C code was easier to read for team code reviews and the simulation time was orders of magnitude faster.
  • Optionally, the C code could be targeted to free software such as gcc and gdb, making it quick to determine whether the C source matched the generated RTL.
  • Ability to exercise many architectures with little effort using HLS, which otherwise was very difficult to do in the traditional RTL flow.
  • SCVerify is a great feature. There was no need to write a testbench for RTL simulation and the C testbenches were reusable.

For more details on this project, check HERE.


SOC security is not a job for general purpose CPUs

by admin on 01-14-2019 at 7:00 am

Life is full of convenience-security tradeoffs. Sometimes these are explicit, where you get to make an active choice about how secure or insecure you want things to be. Other times we are unaware of the choices we are making, and how risky they are for the convenience provided. If you leave your bike unlocked, you can expect it to be stolen. By contrast, we all know the feeling of learning that our credit card number has been stolen, usually with no clue as to how or why. The other thing we need to be wary of is that hackers and bad actors are always looking for new ways to exploit security flaws, which means that choices we once saw as safe can become risky overnight.

Remember back in the day when you could easily use a debugger to find the code that did a password check and bypass it? Now we have protected address spaces and better encryption. System exploits are often found by hackers wearing white hats and then provided to vendors for fixing, before the public even hears about them.

However, in the last year a serious new security flaw known as Spectre has come to light that should give everyone pause. Like most people, you bought a general-purpose computer to both play games and do your banking. Processor vendors, among them Intel, AMD and sometimes ARM, have spent the last several decades dramatically improving the performance of the general-purpose processors used in these machines.

With processor clocks hitting a ceiling of roughly 4 GHz, CPU designers looked for other ways to improve performance. An area ripe for optimization was wait states on memory reads, which can block processing for hundreds of CPU cycles. The widely adopted solution is speculative execution with branch prediction: the CPU uses prior execution history to determine the likely outcome of a branch decision, saves its state, and proceeds to execute the most likely code path. If the prediction was wrong, the processor state is restored to the saved state and execution resumes with the correct branch. This seems safe enough…

Unfortunately, even when memory and processor registers are restored, there is still a latent trace of the speculatively executed code: the memory cache may have been changed by its memory reads. Hackers can use this in a number of ways to ferret out the contents of memory that was believed to be secure. In one example, the predicted branch is a memory bounds check; the predictor was trained to expect the test to pass, but the attacker then supplies an index that is actually out of bounds. The speculative code pulls protected memory into the cache, from which it can be recovered later by additional attacker code measuring access times. There are numerous other ways to exploit cache modification by malicious code leveraging speculative execution; some even work from the JavaScript runtime in browsers.

Unlike software exploits, this one relies on fundamental behavior of general-purpose processors. So when the exploit became public there was no fix ready, and in fact we will have to live with it, perhaps with some mitigation, for some time. The most secure fix is to block speculation at vulnerable branches using the LFENCE instruction, but applying it broadly leads to huge slowdowns in CPU performance. One security researcher estimated that 24 million LFENCE instructions would need to be added to the Office Suite.

Now look at all the new applications where processors are used that have heightened security requirements. In the face of this, it is time to start using different types of processors for different types of tasks: secure processors for critical jobs, and higher-performance processors for less critical tasks. The push for heterogeneous processors has been underway for some time, largely driven by performance needs. However, there is a growing need for purpose-built secure processor families. These might, for instance, be RISC-based designs that are less vulnerable to speculative execution exploits. They can also have their own directly connected memory, security IP and accelerators that are not accessible to any other part of the system.

Rambus outlines one such solution in their white paper “The CryptoManager Root of Trust”. Starting with a 32-bit RISC-V processor dedicated to security functions, the ensemble includes a number of essential components and the proper architecture for ensuring security. As such, it is specifically designed to securely run sensitive code. It comes with dedicated SRAM and ROM memories, along with an AES core, a secure SHA-2 hash core and an asymmetric public-key engine.

The Rambus CryptoManager Root of Trust (CMRT) also includes a true random number generator and a key derivation core (KDC) for deriving ephemeral keys from root keys. To detect tampering, it has a canary core that can detect glitching and overclocking. The Rambus white paper goes into detail about its comprehensive attack resistance and discusses the techniques used to create silos for sensitive code that needs to run securely. In fact, multiple roots of trust can be created to keep resources, keys and security assets for different applications separate from each other. The CMRT core can be added as a complete security solution to SoCs addressing a number of vertical markets, including IoT, automotive, networking/connectivity, and sensors.

Rambus also describes the development tools and provisioning infrastructure that complete the core's development kit and deliverables. The white paper, which goes into much more detail on the full set of features and capabilities, is available on the Rambus website. It is worth noting that this RISC-V core is not considered to be at risk from the Spectre exploit. I highly recommend reading the white paper, and its notes and references.


CES 2019 The Year of De-Appification

by Roger C. Lanctot on 01-13-2019 at 7:00 am

CES 2018 saw the proliferation of digital assistant applications in cars (and homes, of course) with Harman International, Panasonic and Visteon showing multiple digital assistant implementations with in-dash infotainment systems. Panasonic showed a hybrid Alexa system capable of working off line and Harman showed a system with a dial to allow the user to select their preferred digital assistant: Alexa, Google Voice, Cortana or Bixby.

The leader in hybrid automotive speech recognition systems, Nuance, demonstrated a system capable of automatically selecting the appropriate digital assistant depending on the task. Meanwhile, German Autolabs was demonstrating an aftermarket device with multiple no-look, no-touch human machine interface options – including speech recognition and gesture – for communicating while driving.

For German Autolabs the over-arching message was clear: the age of de-appification had begun. Not everyone got the message in 2018, but CES 2019 is arriving in two weeks with an escalation in digital assistant integration.

No one is obsessing over recognizing accented voices or quibbling over when speech recognizers will be acceptable. Digital assistants have arrived, and car makers and their suppliers are being forced to reckon with the consequences.

It’s not simply a question of accessing vehicle functions, cloud content or service resources. Voice interaction in the car is rapidly turning cars into browsers, and driver and passenger queries into actionable and monetizable inputs.

De-appification, an expression coined by German Autolabs CEO, Holger Weiss, refers to the reality that drivers and passengers will no longer be accessing content, applications and services via on-screen icons. The world of content and service delivery in the car will be an eyes-free and hands-free experience driven by voice.

More importantly, an increasing portion of the recognition, and of the process of responding to and/or anticipating driver and passenger needs, will be supported by on-board artificial intelligence. The car will become more intelligent by coordinating on-board sensor inputs, mobile device content, service information and cloud resources.

CloudMade was one of the first companies to demonstrate this capability. The competition to deliver this value proposition in 2019 and beyond will be fierce.

The entire vehicle will become an intelligent digital assistant anticipating driver needs and enhancing the driving experience. The most advanced systems will integrate with safety systems, creating the opportunity for the vehicle to communicate and converse with the driver like the computer in “2001: A Space Odyssey” or like “KITT” in “Knight Rider.”

The implications for the development of in-vehicle systems in 2019 are significant and include:

  • The launch of OEM-branded systems such as “Hey BMW” and “Hey Mercedes” as front end interfaces to cloud partners including Alexa, Bixby, Google Voice, Cortana and Siri;
  • The integration of smarthome digital assistants – such as Orange’s new Djingo – with vehicle-based systems;
  • The capture of these driver (and passenger) requests to better anticipate driver needs and wants for integration with contextual marketing and payment platforms;
  • The demise of app-and-icon-centric in-vehicle user interfaces in favor of voice-and-gesture-centric systems like “Chris”;
  • The near elimination of human-centric call centers for concierge, roadside assistance and emergency services;
  • The enhancement of emergency response with artificial intelligence systems capable of instantly determining vehicle, driver and passenger status and automatically communicating the appropriate information to first responders and next of kin;
  • The enhancement of customer relationship management systems, integrating with dealers to build stronger customer retention programs.

Multiple suppliers will use CES 2019 to demonstrate platforms designed to collect and interpret vehicle data. The next phase in this evolution, though, will be the integration with artificial intelligence and digital assistance systems intended to bring vehicles to life with a smarter, safer and more productive operating environment.

Will there be hiccups ahead? Definitely. Natural language systems capable of carrying on layered dialogues are still evolving and will take time to reach the market. But it is not going too far to suggest that by the end of 2019, in-vehicle speech systems will be capable of conversing with drivers to either help preserve alertness or establish that a respite from driving is required.

In the end, what started out as a handy tool for the hands-free entry of destinations, the dialing of phone numbers or the selections of songs or radio stations, will begin saving lives with timely alerts and guidance. CES 2019 will usher in this new age of voice-based driver assistance.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will keynote the Consumer Telematics Show on January 7 at Planet Hollywood. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


CES 2019 A New Era

by Bill Jewell on 01-11-2019 at 12:00 pm

CES 2019 was held this week in Las Vegas with over 4,500 exhibiting companies and over 180,000 attendees. Over 6,500 media and industry analysts attended (including yours truly of Semiconductor Intelligence). CES now covers a broader range of industries than just electronics, which led to the show being renamed CES (previously the Consumer Electronics Show) and the sponsoring organization changing its name from the Consumer Electronics Association (CEA) to the Consumer Technology Association (CTA).

The CTA projected the overall U.S. consumer technology market will hit $398 billion in 2019, up 4% from 2018. Much of the total consists of the large dollar but slow growing categories of smartphones, laptop PCs, and televisions. These three categories total $131 billion in 2019 but are growing only 0% to 2%. These categories do contain some high growth products. 5G smartphones will emerge on the market later this year and are expected to account for only 1% of total smartphone units in 2019 but should account for 75% in 2022. 8K UHD televisions are also a new category in 2019 and should only account for about 200 thousand of 42 million TVs.

Most of the growth in consumer technology in 2019 is driven by emerging categories. Streaming services (video and music) are forecast to reach $26 billion in 2019, up 25% from 2018. This category reflects the broader CTA definition of consumer technology to include services as well as electronics. The major Internet of Things (IoT) categories are smart home (home controls, monitoring and security), smart speakers (such as the Amazon Echo and Google Home) and smartwatches. These three categories are projected to total $10.9 billion in 2019, up 15%. Another fast-growing category is in-vehicle technology including entertainment, navigation and driver-assist features. Totaling $17 billion in 2019, this category will grow 9%.

The emphasis on new consumer technologies was evident at the CES Unveiled media event on Sunday, January 6. The event featured diverse applications such as smart plumbing products from Moen and Kohler, numerous smart home products, an automated bread maker, a smart mirror to analyze your facial-product needs and a pillow sleep aid.

Some of the products are of questionable practicality. Helite demonstrated an airbag for cyclists. The airbag resembles a life jacket and inflates when the bike crashes – protecting the torso. Any experienced cyclist knows the most common injuries in a crash are to the head (hopefully protected by a helmet), the arms and the legs. Ovis demonstrated a self-driving suitcase which will follow you through the airport. I guess pulling a wheeled suitcase by the handle is too much work.

An innovative product from French company E-Vone is a set of smart shoes which detect falls and notify caregivers with a precise location. Other solutions require the user to wear a device and push a button when the user needs assistance.

Flexfuel is another French company which develops products to reduce automotive pollution and increase fuel efficiency. Its location next to the bar at CES Unveiled gave it a new meaning.

The press conferences of the major consumer electronics companies focused on emerging markets. Panasonic emphasized artificial intelligence (AI), the internet of things (IoT) and robotics. The company demonstrated two products which use its electric powertrain platform: electric assist mountain bikes from Van Dessel and an electric motorcycle from Harley Davidson (available for pre-order at only $29,799).

Samsung’s CES press conference did feature its core businesses of smartphones and televisions. It has 5G networks up and running in South Korea and will introduce its first 5G smart phone later this year. Samsung displayed its 98-inch QLED 8K UHD TV, currently available for pre-order in the U.S. Samsung spent much of its press conference discussing its AI platform – Bixby – for televisions, cars and appliances. It also displayed a line of robots for health care (Bot Care), air monitoring and conditioning (Bot Air), retail assistance (Bot Retail) and mobility assistance (GEMS). Samsung’s emphasis on AI and robotics is evident from the layout of its huge booth (largest at CES 2019).

Sony’s press conference focused on tie-ins to its movie and music businesses. Its cameras, televisions and audio products were discussed in relation to these businesses. Singer Pharrell Williams made an appearance to discuss his visit to Sony’s R&D center in Japan. The only hardware product emphasized was Sony’s PlayStation 4 video game system.

Qualcomm began its press conference discussing 5G. The company expects over 30 5G devices (mostly smartphones) will be launched in 2019, almost all using Qualcomm’s RF devices. Most of the press conference was devoted to Qualcomm’s automotive products. The company said 30% of new cars are equipped with cellular connectivity and it expects this to grow to 75% in five years. Qualcomm’s booth was one of the largest at CES 2019 and featured 6 models of prototype 5G smartphones for China and Europe. Most of Qualcomm’s booth space was dedicated to smartwatches, noise cancelling headphones, and the smart connected car.

Intel’s CES booth was adjacent to Qualcomm’s and about two-thirds the size. It featured laptops using Intel’s latest 8th generation Core i7 processor. Like Qualcomm, Intel devoted most of its space to emerging technologies such as autonomous driving and immersive cinematic experiences.

What does the new emphasis on IoT, AI, robotics and smart cars mean to the semiconductor industry? It marks a new era. The major drivers of the past (smartphones, PCs and televisions) are showing flat to slow growth overall. Two of the largest semiconductor companies are shifting emphasis to the emerging markets. Intel, which gets most of its revenue from processors for PCs and servers, is pushing autonomous driving and entertainment delivered over 5G networks. Qualcomm, primarily a smartphone IC company, is also emphasizing automotive and IoT applications. The new era also opens up opportunities for other semiconductor companies to provide devices for the new generation of consumer products. Some of these emerging applications such as smart homes, smart speakers, autonomous cars and even personal robots may become as ubiquitous as PCs, TVs and smartphones over the next five to ten years.