TSMC and Apple Aftermath
by Robert Maire on 01-21-2019 at 7:00 am

TSMC reported an in-line quarter, as expected, and guided Q1 down, also as expected. The only thing that may have caught some investors off guard is the magnitude of the expected drop, roughly 22%, from $9.4B to $7.35B. This is the largest quarter-over-quarter drop for TSMC in a very long time. Importantly for TSMC, 7NM was 23% of business, so leading edge remains very solid, and 20NM and below was half of all business.

CAPEX is being cut, as we had projected, by several hundred million dollars, probably at least 5% and the cuts may get deeper as time goes along. We expect most of the cuts to be in H1 2019 with H2 2019 left open to see how business recovers.

We have been very clear about the CAPEX cuts and the "trickle down" impact from Apple. We were interviewed on Bloomberg TV 10 days ago regarding Apple and specifically called out TSMC as the most impacted, along with the overall CAPEX cuts:

Link to Bloomberg TV interview on Apple, TSMC & CAPEX

For anyone who was paying attention over the last year, this slowdown should come as no surprise. We don't expect a large downtick in the stocks as the news should be well expected. It is, nonetheless, another slug of bad news in what we expect to be an earnings season full of negative news bites. We think it will be hard to escape the negative flow and the likely further downward number revisions.

To be clear, we still love TSMC and think they are the greatest foundry ever and, right now, the most advanced chip maker. However, if demand sucks there's not a lot you can do about it, no matter how good you are. Apple is 20% of TSMC's business, and chips for mobile overall are obviously well beyond that, so the impact on TSMC will be significant and will take a while to work out.

Channel Chokes
The other large problem to keep an eye on is how bloated the channel is given the smartphone slowdown. Our checks indicate there is a lot of unsold product in the pipeline that will take time to work out. Much of the inventory is likely held as unpackaged wafers, carried at lower unfinished-goods pricing, but it represents a lot of chips once packaged. This hidden inventory is likely high and will take several quarters or more to work off, and even longer while demand is depressed, so we wouldn't hold our breath for a quick bounce back.

Chip equipment companies are likely to be even more negative given that one of their largest and clearly most advanced customers has put the brakes on. While bleeding-edge business is likely the least impacted, we could still see projects and shipments delayed and pushed from one quarter into the next as TSMC modulates spending to protect its profitability. One thing the industry has become good at is quick adjustments to near-term trends, and it can put the brakes on quickly. This is one of the supporting reasons for our concern about another down leg for the industry.

AMAT most negatively impacted
If Lam is the house that memory built then AMAT is the house that foundry built. AMAT has had a long and deep partnership with TSMC as their main supplier. To a lesser extent, KLAC and ASML could see some further weakness out of TSMC.

Consumable companies not so defensive as believed
The common wisdom is that consumable companies such as ENTG, CCMP and others that are wafer-start driven are more of a "Steady Eddy" type of business compared to capital equipment providers, except when wafer starts experience their sharpest drop in over ten years, as is the case here. It's clear that even the consumable suppliers will get hit as wafer starts slow and the inventory of built wafers gets worked off.

The stocks
We don’t expect that much of a negative reaction as much of the negative news has already been baked in a while ago. In addition the stocks seem to be building up a downside resistance to all the negative news. We could see individual stocks sell off as they adjust their numbers downward on their respective conference calls as the trickle down continues.


Remote Control: Crazy Likes Company
by Roger C. Lanctot on 01-20-2019 at 12:00 pm

Three years ago Chris Valasek and Charlie Miller hacked an FCA Jeep to demonstrate the ability to remotely control a vehicle. The stunt was intended to make the point that the automotive industry had a problem with cybersecurity and the consequences of failing to deal with this vulnerability could be catastrophic.

As 2019 arrives, new startups are poised to turn this weakness into a feature as at least three companies are contending to provide remote control – or teleoperation – capabilities for cars: Phantom Auto, Ottopia and Designated Driver.

Interestingly enough, two car companies already provide remote control capabilities as standard elements of their stolen vehicle tracking and recovery systems: General Motors/OnStar and Hyundai Blue Link.

The systems from Phantom Auto, Ottopia and Designated Driver are intended to support and enable autonomous vehicle systems – providing back-up when on-board systems or human drivers fail. The idea of remote control seemed absurd when Phantom Auto was the only company seriously proposing it at CES 2018.

It turns out, the idea has been kicking around and in development for decades. Even regulators have come to recognize that remote operation may be a requirement for autonomous vehicle systems – given the potential for failure of or malicious intrusion into on-board self-driving systems. Remote control immediately became a possibility in most cars with the advent of self-parking systems combined with wireless connections.

Simple remote functions such as remote start, door lock/unlock or vehicle conditioning have been around for years. Remote vehicle slowdown, requiring the intervention of law enforcement, has been around for a decade.

Full vehicle remote control, though, only became a reasonable option with the onset of LTE connectivity and the proliferation of vehicle sensors – especially cameras. Self-driving cars may appear to be remotely operated but, generally, are not.

The proliferation of new suppliers attacking this challenge will change things in 2019 and beyond. Remote operation of vehicles will more than likely be targeted, initially, at self-driving test mules – to get them out of sticky situations on public roads. More robust and sophisticated solutions will follow quickly enabling a hybrid automated driving proposition allowing vehicles to tap off-board resources to unravel complex driving circumstances.

The concept is not new to commercial applications such as mining nor to professional drivers at race tracks where vehicle settings are routinely altered via remote wireless connections during racing events. Of course, race cars are not controlled remotely and the rules for allowing remote altering of settings vary by racing circuit.

Improvements in contextual awareness via sensors and cameras will be critical to the success of remote operation. The arrival of low-latency 5G technology will make this technology even more reliable and relevant.

The implications of remote operation are important to ponder:

  • It changes the nature and function of the on-board wireless connections from emergency response to crash avoidance and guidance;
  • It enables enhanced hybrid decision-making for complex driving circumstances;
  • It allows for law enforcement and roadside assistance applications not previously considered possible – such as clearing disabled vehicles from roadways;
  • It enables an automated or manually controlled response system in the event of vehicle failures or hacks.

At CES 2018, Phantom Auto seemed a little crazy to the casual observer. The concept of remote human control of vehicles seemed fundamentally unscalable. CES 2019 will show that crazy likes company as the startup is joined by Ottopia and Designated Driver. It should make for an interesting show.


CNBC Qualcomm and SemiWiki
by Daniel Nenni on 01-18-2019 at 7:00 am

Over the holidays I did an interview with CNBC on the subject of Qualcomm. The producer had read the History of Qualcomm chapter in our book Mobile Unleashed and wanted to base a 15 minute report on it. The interview lasted 90 minutes but of course only snippets of what I said were used. You can see the recorded report by clicking on the image below. You can also see the history of Qualcomm chapter HERE for reference.

Before you judge, here is the other 90 minutes of the interview as I remember it:

I started out talking about the fabless semiconductor transformation during the 1980s and 1990s, when Qualcomm was one of the leaders. If not for Qualcomm we would not have the mobile electronics we have today. Qualcomm also perfected the multi-sourcing foundry business model, which is critical since competition is what drives our industry. Competition is what enables low-cost consumer electronics and made it possible for us to have supercomputer-class devices in our pockets, absolutely.

I remember working with QCOM down to 40nm when their chips were being manufactured at TSMC, UMC, SMIC, and Chartered. QCOM would design first on TSMC using special design rules and design practices that enabled multiple manufacturing sources. Most other companies did this as well but QCOM was the best at it for sure. TSMC was not happy about this of course, since they did all of the heavy process technology lifting only to lose high-volume manufacturing business to the other foundries, but that was foundry life back then.

On the business side, however, QCOM was very anti-competition. QCOM started out as a systems company making navigation and communication systems. QCOM then pivoted into a chip maker and, more importantly, a patent stronghold built on those chips. In fact, QCOM has always made more money from licensing patents than from chips. The reason is that they adopted a "no license, no chip" business model, so you had to license the patents if you wanted a QCOM chip. QCOM was the only game in town for modems and leading edge SoCs at the time, so they could play the chip game by their own rules.

Buying commercially available chips to launch products and then, over time, developing your own chips is a critical part of the semiconductor ecosystem if you are a consumer electronics company. Smartphone giants like Apple, Samsung, and Huawei do this for a couple of reasons. Price, of course: at high volumes you can make your own chips cheaper than buying them. You can also more easily differentiate features from your competitors. Battery life is one example: the fewer chips you have in your device, the longer the battery life. If you look at the iPhone tear-downs over the years you will see supporting chips disappear into the SoC on a regular basis, saving power and space. The modem, however, remains a separate chip in Apple phones.

The other big advantage of making your own chips is prototyping and emulation. As you develop your chip you can verify it quickly using prototyping and you can also start software development months before the chip is done. This is a huge advantage for smartphone companies that control their own software ecosystem like Apple.

Bottom line: Controlling your silicon really is required to be a leader in consumer electronics.

The problem is that to buy a QCOM chip you also had to license the patents, which made it much more difficult to develop your own chips without getting letters from QCOM legal. QCOM pricing was also a problem. For the license and chip, QCOM took a percentage of the smartphone's price rather than a fixed chip price. That's like going to the grocery store and having the last item you buy priced as a percentage of everything already in your bag. Having been in the IP business myself, I was both amazed and impressed that QCOM got away with this. It really was a disruptive business model for fabless semiconductor companies.

As it turns out it was a bit too disruptive. The U.S. and other governments have accused Qualcomm of unfairly competing and charging excessive rates for its technology. China, Korea, Taiwan, and the EU have already fined or settled with QCOM. The US case is in progress now. I would be very surprised if the FTC did not get a favorable first ruling but we shall see. Hopefully there will be a quick settlement so we can all get back to the business at hand and that is making semiconductors for the greater good of consumer electronics.

This is just my opinion of course. I don't have financial ties to either company but I do own an iPhone X. I don't particularly care for it. The one nice thing I can say is that the battery lasts MUCH longer than previous iPhones; I can now get through a full day of use without charging, which is a nice change. The excessive price, however, is a bit disruptive, and at some point Apple may pay dearly for that as well.

Update:
The FTC rested their case against QCOM and in my opinion it was much weaker than expected. QCOM is presenting their defense followed by closing arguments but as of today I don’t think the FTC made their case. Opinions are split of course. In fact a hedge fund shorted QCOM stock expecting a big verdict for the FTC:

Qualcomm Incorporated: Ignoring The Legal Risk Is Patently Ridiculous

“The FTC has brought a powerful legal case against the company, and the trial [conducted entirely before a judge, not a jury] is currently underway,” the note said. “We believe Qualcomm will lose.”

If you want to follow the case Twitter is a good place to start: #FTCQCOM


Needham Growth Conference Notes 2019
by Robert Maire on 01-17-2019 at 12:00 pm

We attended the Needham Growth Conference, one of the first conferences of the year, held in the quiet period before most companies report. Even though there was no "official" comment from most companies on the quarter, the surrounding commentary spoke volumes:

  • The down cycle (and everyone admits it's a cycle, and no one admits to ever saying it wasn't still cyclical…) is ongoing and appears to be bouncing along a bottom or near-bottom level of business.
  • There are no early signs of any sort of up turn or change in the cycle.
  • Hopes of a H2 recovery are currently just that….”hopes”
  • It's unclear whether we could have another leg down, and no one was ruling it out or in….
  • Everyone laid the blame primarily on the memory market although foundry/logic is no great shakes either

The companies presenting are small- and mid-cap suppliers and sub-suppliers, not the core big-cap names: AMAT, LRCX, KLAC, ASML or TEL.

Given the tone of comments overall, our takeaway is that the large-cap companies will likely have to take numbers down further and get incrementally more negative when they report, based upon what we heard from the other companies at the conference.

While the stocks seem to have hit some resistance floors, it's not like they are bouncing off a bottom. Each time the stocks start to recover a bit they seem to get pushed down again by another piece of negative news, so we seem to be stuck in a low range until there are some clearer signs of a recovery or at the very least a firm bottom (which we have not yet seen…).

One comment we heard at the conference, and concur with, is the concern about a "death of a thousand cuts," where the industry keeps getting negative incremental news rather than just getting the bad news over with.

We can't imagine that TSMC will increase CAPEX when it reports this week. Apple, which is 20% of their business, is obviously not doing all that well and surely has cut back even further on orders. It seems highly unlikely that TSMC will increase capacity, and we think there is significant equipment reuse between 7NM and 5NM, so technology spending will not make up for weaker capacity spend.

Memory pricing, and therefore capital spending, remains weak. It will take some time for the excess supply to get worked off, especially in light of reduced demand. It would take time to work off excess supply even with good demand, but with falling demand it's difficult to handicap.

ACLS–Happy for non mainstream business
While Axcelis clearly has exposure to the memory market, they also have a lot of non-core business which is much less impacted than the mainstream, leading-edge market. That "outside the core" business will help soften the weakness but not eliminate it. The recently announced buyback is appropriate and positive. The company remains on plan and is delivering on its promises.

Cabot – new non semi biz coupled with being a consumables play
Cabot has been one of the more consistent performers as they are a consumable supplier rather than a capital equipment supplier, and as such are driven by wafer starts and layer count, not capital spend. However, wafer starts are not super strong either… but it's still better than being an equipment company.

Formfactor- Memory still weak and Intel is still a bit slow to ramp…
Also in the consumables business is FORM. However, FORM is driven more by new or changed designs. They have a better balance between memory and logic than previously. Everything else is OK; we just need a demand recovery.

MKSI- Most experienced management-Upcoming ESI close
MKSI’s CEO, Jerry, has been with the company for 35 years and has seen it all before and knows how to deal with it and make it right. The team at MKS did a great job with Newport and we are sure they will do the same great job with ESI. Management is not running in fear of the cycle but dealing with it head on. He contemplated the length of the downturn through his “canoe” analogy of varying shapes.

UCTT – As an AMAT and Lam supplier its hard to outperform
It's harder to escape the pull of gravity when the customers are weak, as is the case here. Acquisitions have helped in the past, but none recently that were exciting or at a great valuation. We are just waiting for a recovery here to get things moving again.

Ichor- Great, realistic, experienced management -weak customers
Much as with Ultra Clean, it's hard to escape the customers' weakness. Acquisitions have been very good and have jump-started ICHR since its IPO. Management is making the best of the situation and buying back its very cheap stock.

COHU- In early innings of integration not helped by weakness
So far, so good, but the jury is still out on the combination of the two companies. Obviously it's that much harder to do an integration during tougher times, but hopefully the results will be worth it.

Tower Jazz- Towers above the other players
Russell Ellwanger has taken what was an uncertain company and turned it into a great business model. He has done a great job of not only doing great "roll up" acquisition deals but, more importantly, managing them to perfection after rolling them up: not "integrating" them into the "borg" of a large company but optimizing the different models and expertise of each individual business. Business may be soft, but the model works well, with strong process capability and technology on top of sound business practices. They will outperform in the recovery.

Summary: Good companies, Ugly neighborhood

Waiting at the station
It feels like everyone is waiting at the station for the recovery train to come along and whisk them off to good times again. The problem is that we are just waiting, with not much within our control to hurry it along. On top of that, the train's timing is unclear at best and could in fact be longer than hoped for. There is a bit of helpless resignation.

The semi equipment industry is at the bottom of a very long trickle-down that starts with Apple and China and runs through TSMC, Samsung, Micron and others. We need to see signs of a recovery coming from the top down, as any recovery won't start from the bottom up, and that is what we are still waiting on.


The New Intel CEO
by Daniel Nenni on 01-17-2019 at 7:00 am

Interestingly, in some circles I’m known as an “Intel basher” but nothing could be further from the truth. I grew up with Intel and give them full credit for bringing serious compute power to our desktops. My first Intel powered computer was an IBM XT and I have had dozens of Intel based desktops and laptops since then. As a result, I hold Intel to a much higher standard and that includes CEOs. I wrote the blog “The Legacy of Intel CEOs” at the end of 2014 criticizing the last two Intel CEOs and I stand by it. More than 80,000 people read that blog and it is still getting traffic even today.

I have also been critical of the Intel Board of Directors (specifically Andy D. Bryant) and I still am, but that is another blog entirely. Andy has got to go, without a doubt. I am not critical of Intel as a whole as I still believe they are THE greatest semiconductor company of our times. Think about it: where would we be today without the contributions of Intel?

It is my sincere hope that the new CEO will be the start of a technology renaissance at Intel. Unfortunately, there is only one candidate of the rumored five that has a chance in my opinion and that is Johny Srouji. I first learned about Johny while researching our book “Mobile Unleashed” as he was part of the team that brought us the iPhone and iPad SoCs. Prior to Apple (pre 2008) Johny worked for Intel Israel (14 years) and IBM (6 years), so yes he has Intel experience but I would not call him an Intel insider.

The other rumored candidates are Lisa Su (CEO of AMD), Navin Shenoy (Intel), Murthy Renduchintala (QCOM/Intel), and Diane Bryant (ex-Intel). I don't personally see any of them succeeding, but Lisa Su would be my choice of the four. Bottom line: the next Intel CEO MUST be an outsider! I remember stating some time ago that Intel should buy NVIDIA just to get their CEO, but of course they didn't, which clearly was a mistake. CEOs can make or break a company for sure.

Hopefully the Intel CEO question will be answered on the conference call next week. The Intel CEO search started last June, which to me is a very long time to be without a leader. Believe it or not, I have a lot of Intel friends, many of whom still work there. According to LinkedIn I have more than 500 connections who currently work for Intel and more than 5,000 who have been employed there. When I ask them about the CEO debacle most just shake their heads. Andy Bryant really showed his true back-stabbing colors there. One thing they all agree on is that BK did not deserve the ousting he got, and another thing most agree on is that if Murthy gets the CEO job, Intel resumes will hit the streets. Just the opposite if Johny is hired: expect an influx of resumes from just about every company in the semiconductor community to hit his desk, absolutely.

Just my opinion of course but I am an “internationally recognized semiconductor industry expert” read by millions of people so there’s that. I still do dishes, fold laundry, and empty the trash at home though so my horse is not very high.


A Sharper Front-End to Intelligent Vision
by Bernard Murphy on 01-16-2019 at 7:00 am

In all the enthusiasm around machine learning (ML) and intelligent vision, we tend to forget the front-end of this process. The image captured on a CCD camera goes through some very sophisticated image processing before ML even gets to work on it. The devices/IPs that do this are called image signal processors (ISPs). You might not be aware (I wasn’t) that Arm is in this game and has been for 15+ years, working with companies like Nikon, Sony, Samsung and HiSilicon and now targeting those trillion IoT devices they expect, of which a high percentage are likely to need vision in one form or another.


So what do ISPs do? As Thomas Ensergueix (Sr Dir of Embedded at Arm) explained it to me, this largely comes down to raising the level of visual acuity in "digital eyes" to the level we have in our own eyes. A big factor here is handling the high dynamic range (HDR) that you will often find in raw images. And to get better than human eyes, you want performance in low-light conditions and the ability to handle 4K resolution (professional photography level) at smartphone frame rates or better.

Look at the images of a street scene above, a great example of the dynamic range problem. Everything is (just barely) visible in the standard image on the left, but in attempting to balance between the bright sky and the rest of the image, the street becomes quite dark; you wouldn't even know there was a pedestrian on the right, about to walk out into the road. You can't fix this problem by twiddling global controls; you need much more sophisticated processing through a quite complex pipeline.

An ISP pipeline starts with raw processing and raw noise reduction, followed by a step called de-mosaicing to fill out the incomplete color images which result from how imagers manage color (a color filter array overlaying the CCD). Then the image goes into HDR management and color management steps. Arm view the noise reduction, HDR management and color management as particular differentiators for their product line.
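To make that stage ordering concrete, here is a minimal, purely illustrative C++ skeleton of such a pipeline. The stage names, types and pass-through bodies are placeholders invented for this article, not Arm's Mali ISP interfaces.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative frame type and trivial pass-through stages; real stages would
// do black-level correction, denoising, demosaicing, tone mapping and so on.
struct Frame { std::vector<uint16_t> pixels; int width; int height; };

Frame raw_process(Frame f)  { return f; }   // black level, defect correction
Frame raw_denoise(Frame f)  { return f; }   // raw-domain noise reduction
Frame demosaic(Frame f)     { return f; }   // reconstruct full color from the CFA
Frame hdr_manage(Frame f)   { return f; }   // dynamic-range compression (tone mapping)
Frame color_manage(Frame f) { return f; }   // white balance, color matrix, gamma

Frame isp_pipeline(Frame sensor_frame) {
    // One frame flows through the stages in the order the article lists them.
    return color_manage(hdr_manage(demosaic(raw_denoise(raw_process(sensor_frame)))));
}

int main() {
    Frame f{std::vector<uint16_t>(64, 0), 8, 8};
    Frame out = isp_pipeline(f);
    std::printf("processed %dx%d frame\n", out.width, out.height);
    return 0;
}
```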


Thomas said that in particular they use their Iridix technology to manage HDR better than conventional approaches. Above on the left, an image has been optimized using conventional global HDR range compression. You can see the castle walls quite clearly and the sky isn’t a completely white blur, but it doesn’t accurately reflect what you would see yourself. The image on the right is much closer. You can see clouds in the sky, the castle walls are clearer, as are other areas. This is because Iridix uses local tone mapping rather than global balancing to get a better image.
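As a rough illustration of the difference, the toy C++ below applies a single global curve versus a neighborhood-steered local curve to a dark "street" pixel and a bright "sky" pixel. The operator is a generic textbook-style one, not Arm's Iridix algorithm; the point is only that local tone mapping lifts dark regions without blowing out bright ones.

```cpp
#include <algorithm>
#include <cstdio>

// Global: one compressive curve applied identically to every pixel.
float global_tone_map(float luma) {                // luma in [0, 1]
    return luma / (1.0f + luma);                   // simple Reinhard-style curve
}

// Local: the curve is steered by the average brightness of the pixel's
// neighborhood, so dark regions are lifted more than bright ones.
float local_tone_map(float luma, float neighborhood_mean) {
    float key  = std::clamp(neighborhood_mean, 0.05f, 0.95f);
    float gain = 0.5f / key;                       // darker surroundings -> larger gain
    float boosted = luma * gain;
    return boosted / (1.0f + boosted);
}

int main() {
    float street = 0.02f, sky = 0.9f;              // dark street pixel, bright sky pixel
    std::printf("global : street %.3f  sky %.3f\n",
                global_tone_map(street), global_tone_map(sky));
    std::printf("local  : street %.3f  sky %.3f\n",
                local_tone_map(street, 0.05f),     // dark neighborhood
                local_tone_map(sky, 0.8f));        // bright neighborhood
    return 0;
}
```

With the global curve the street pixel stays nearly black, while the local version lifts it several times over without saturating the sky, which is the effect the castle images are showing.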

Arm introduced two new products recently including this capability, the Mali-C52 for full-range applications requiring near human-eye response, and the Mali-C32 for value-priced applications. In addition to improved HDR management they use their Sinter and Temper technologies to reduce spatial and temporal noise in images. In color management, beyond basic handling they have just introduced a new 3D color enhancer to allow subjective tuning of color. Finally, all of this is built on a new pixel pipeline which can handle 600M pixels/sec, easily enabling DSLR resolution at 60 frames/sec.

So when you think about smart vision for pedestrian detection, intruder detection or whatever application you want to target, spare a thought for the front end image processing. In vision as in everything else, garbage-in inevitably becomes garbage-out. Even less-than-perfect-in limits the accuracy of what can come out. Object recognition has to start with the best possible input to deliver credible results. A strong ISP plays a big part in meeting that objective.


Applying Generative Design to Automotive Electrical Systems
by Daniel Payne on 01-15-2019 at 12:00 pm

Scanning headlines of technology news every day I was somewhat familiar with the phrase “Generative Design” and even browsed the Wikipedia page to find this informative flow-chart that shows the practice of generative design.


Generative design is an iterative design process that involves a program that will generate a certain number of outputs that meet certain constraints, and a designer that will fine tune the feasible region by changing minimal and maximal values of an interval in which a variable of the program meets the set of constraints, in order to reduce or augment the number of outputs to choose from.
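As a toy illustration of that loop, and not any particular tool's data model, the sketch below generates candidate architectures against invented constraints and shows how tightening a bound shrinks the feasible set the designer chooses from.

```cpp
#include <cstdio>
#include <vector>

// A deliberately trivial "design" with two parameters, just to show the
// generate-filter-tighten iteration; the cost model is made up.
struct Candidate { int ecu_count; double harness_kg; };

std::vector<Candidate> generate(int max_ecus, double max_weight_kg) {
    std::vector<Candidate> feasible;
    for (int ecus = 1; ecus <= 20; ++ecus) {
        double weight = 2.5 * ecus + 4.0;          // toy model: wiring grows with ECU count
        if (ecus <= max_ecus && weight <= max_weight_kg)
            feasible.push_back({ecus, weight});
    }
    return feasible;
}

int main() {
    // The designer iterates: first loose bounds, then tighter ones.
    for (double max_w : {60.0, 30.0}) {
        auto options = generate(15, max_w);
        std::printf("max weight %.0f kg -> %zu feasible candidates\n",
                    max_w, options.size());
    }
    return 0;
}
```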

Right away I could see the benefits of using an automated approach to generative design, because the designer can quickly see which design meets all of the requirements in an optimal way. Looking at the challenges of automotive design and the specific quest to develop autonomous vehicles it becomes clear that there is a lot of complexity in such a system:

  • Dozens of sensors: Cameras, radar, LIDAR
  • Decentralized Electronic Control Units (ECUs)
  • Multiple data networks
  • Wiring between sensors, ECUs and battery

There's a torrent of data being generated by an Autonomous Vehicle (AV), gigabits per second, which feeds into decision and control systems, keeping us moving along the road safely. Experts calculate that it will take billions of miles of testing to verify that an AV is safe enough.

Fortunately there are intermediate steps towards reaching full level 5 autonomy, and the automotive systems have integrated sensors, computers and networks working together to meet the requirements of safe driving.

Features for level two autonomy could include:

  • Active cruise control
  • Lane departure warning
  • Lane keep assist
  • Parking assist

Meeting level two autonomy still requires that your car have 17 or so sensors: ultrasonic, long-range radar, short-range radar, and surround cameras. The driver still has to respond manually to a notification about drifting outside of a lane and use the steering to maintain a safe path.

The demands of level five AV are much higher than level two, so such an AV would have 40+ sensors:

  • Ultrasonic
  • Surround cameras
  • Long-range radar
  • Short-range radar
  • LiDAR
  • Long range cameras
  • Stereo cameras
  • Dead-reckoning sensors

With each added sensor there is an accompanying increase of wiring in the harness, and the need for increased computation of the massive data stream being generated by all of those sensors.

In America we have the Big 3 in automotive, but for AV there are some 144 companies developing products and services. Semiconductor spending on Advanced Driver Assistance Systems (ADAS) projects is in high-growth mode according to Strategy Analytics, reaching $13B by 2025.

So how do the major automotive OEMs and startup semiconductor companies designing for ADAS make their systems safe and get to market quickly? Generative Design certainly has the promise to leverage the best practices of experienced engineers in a process that can optimize a system while meeting safety requirements.

Let’s take another look at the Generative Design process, and apply it to automotive design where rules-based automation can generate a wide range of ideas for hardware, software, networks and logic combinations:

Your most experienced engineers create the rules to start with; then less experienced engineers can run the generative design automation to produce many scenarios to choose from. Functional models are the starting foundation of the electrical system to be designed, but they don't need implementation details. Your functional model contains communication networks, power sources and electronic components.

Software tools like Capital from Mentor can read your functional models as part of the electrical systems design environment. All of the generated architectures have system logic, networks, hardware and software. Your past experience has been captured in the rules, which helps assure that safety goals are met, while past mistakes are not repeated.

Pencil and paper methods are no longer sufficient tools to optimize an automotive electrical system where you need to optimize performance, power consumption, volume, weight and thermal domains. Design automation through generative design is going to make your engineering team more productive, producing an optimal design in less time than previous methods.

Fewer errors can be expected with generative design because there’s less manual entry and effort involved. Engineers across disciplines can share data effectively – Electrical, Mechanical, PCB, Software.

Data continuity means that you can trace every system requirement through to implementation, and know that your system can trace requirements to any domain and that you are in compliance with each requirement:

Design rules will check for flaws like unterminated wire ends, differences between graphical and physical bundle length, maximum wire currents, generated heat, and other best practices that you have developed over the years. Impacts of design changes can be quickly understood: moving an ECU to a different location or changing a network may change performance elsewhere in the system.
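Here is a hedged sketch of what such an automated rule check might look like in code. The wiring data model and the two rules (unterminated ends, over-current wires) are invented for illustration; they are not Capital's actual rule engine.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Minimal invented harness model: each wire knows whether it is terminated
// and how its expected current compares to the gauge's rating.
struct Wire {
    std::string name;
    bool        terminated_both_ends;
    double      expected_current_a;   // amps the circuit will draw
    double      rated_current_a;      // amps the chosen gauge can carry
};

std::vector<std::string> check_rules(const std::vector<Wire>& harness) {
    std::vector<std::string> violations;
    for (const auto& w : harness) {
        if (!w.terminated_both_ends)
            violations.push_back(w.name + ": unterminated wire end");
        if (w.expected_current_a > w.rated_current_a)
            violations.push_back(w.name + ": current exceeds wire rating");
    }
    return violations;
}

int main() {
    std::vector<Wire> harness = {
        {"radar_pwr",  true,  1.2, 3.0},
        {"lidar_pwr",  true,  6.5, 5.0},   // over-current
        {"cam_signal", false, 0.1, 1.0},   // unterminated
    };
    for (const auto& v : check_rules(harness))
        std::printf("RULE VIOLATION: %s\n", v.c_str());
    return 0;
}
```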

Using a unified data source ensures that engineers on the team know when a change happens and how it affects their domain. If you move an ECU then the impact on timing, signal integrity, physical clearance and collision can be determined. You know the impact of each change to your system.

Summary
Generative design is quickly moving into the automotive design realm because it helps design teams model, analyze and optimize across multiple domains to meet the stringent safety requirements of ADAS and the goal of level five autonomous vehicles. Software tools like Capital are available from vendors like Mentor. Read the six-page white paper for more details.


IDT Invests in IoT Security
by Daniel Nenni on 01-15-2019 at 7:00 am

As we are preparing for the “IoT Devices Can Kill and What Chip Makers Need to Do Now” webinar next week, Intrinsic-ID did a nice press release with Integrated Device Technology. IDT is one of the companies I grew up with here in Silicon Valley that pivoted its way to a $6.7B acquisition by Renesas.


IDT is focused on automotive, high-performance computing, mobile and personal electronics, network communications, and wireless infrastructure. The common thread amongst all of those applications is security, absolutely.

SUNNYVALE, Calif., Jan. 14, 2019 – Intrinsic ID, the world’s leading provider of digital authentication technology for Internet of Things security, today announced Integrated Device Technology, Inc. (IDT), has licensed QuiddiKey, based on SRAM PUF technology, for security in its wireless charging products.

“Wireless power implementation is growing rapidly and expanding into multiple markets, so Intrinsic ID’s ability to help us deliver our technology in a secure, scalable manner was key to our choice,” said Dr. Amit Bavisi, senior director of SoC mobile engineering for IDT’s Wireless Power Division. “We chose QuiddiKey primarily for delivering cost-effective and robust foundational security. This strong anchor of trust singularly enables our customers to maximize their revenue and reassure their customers with the ability to hold counterfeits at bay. An additional, and more important, benefit is that the use of strong unclonable authentication for legitimate branded devices keeps consumers safe from charging hazards with counterfeits, which may not comply with industry-standard safety requirements.”

IDT delivers innovative wireless power solutions both for the receivers used in smartphones and other applications, as well as the transmitters used in charging pads and automotive in-car applications.

QuiddiKey is based on Intrinsic ID’s patented SRAM (Static Random Access Memory) PUF (Physical Unclonable Function) technology and allows semiconductor manufacturers to deliver IoT security via a unique fingerprint identity without the need for an additional security chip, such as a secure element. A root key generated by QuiddiKey delivers a high bar of security because it is internally generated, is never stored, and anchors all other keys and security operations to the IoT-connected product, such as a smart home appliance.
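To make the "internally generated, never stored" idea concrete, here is a purely conceptual C++ sketch of PUF-style key derivation. The toy hash, the helper-data handling and all names are invented for illustration and do not represent Intrinsic ID's QuiddiKey implementation.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Toy stand-in for a real cryptographic hash / fuzzy extractor (FNV-style mixing).
uint64_t toy_hash(const Bytes& data, uint64_t seed) {
    uint64_t h = seed;
    for (uint8_t b : data) h = (h ^ b) * 1099511628211ULL;
    return h;
}

// The SRAM start-up pattern acts as the device "fingerprint"; helper data
// (computed once at enrollment) lets the same key be reconstructed even though
// a few SRAM bits flip between power-ups. Real designs error-correct here.
uint64_t derive_root_key(const Bytes& sram_startup, const Bytes& helper_data) {
    Bytes corrected = sram_startup;
    for (size_t i = 0; i < corrected.size() && i < helper_data.size(); ++i)
        corrected[i] ^= helper_data[i];
    return toy_hash(corrected, 0xcbf29ce484222325ULL);
}

// Application keys are derived from the root key plus a label, so the root key
// itself never has to leave the security boundary or be stored.
uint64_t derive_app_key(uint64_t root_key, const std::string& label) {
    return toy_hash(Bytes(label.begin(), label.end()), root_key);
}

int main() {
    Bytes sram   = {0x3a, 0x91, 0x07, 0xee};   // pretend SRAM power-up pattern
    Bytes helper = {0x10, 0x02, 0x40, 0x01};   // pretend enrollment helper data
    uint64_t root = derive_root_key(sram, helper);
    std::printf("charging-auth key: %016llx\n",
                (unsigned long long)derive_app_key(root, "wireless-charging-auth"));
    return 0;
}
```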

“IDT wireless charging solutions are used for an expanding range of applications, including small-footprint products such as fitness and health monitors, and charging solutions for smartphones. Authentication has become a necessity as providers of charging base offerings require control over their end-to-end charging system,” said Pim Tuyls, Intrinsic ID’s chief executive officer. “QuiddiKey’s ability to create unclonable identities for any IoT-connected product without the need for additional hardware is critical to profitably scale the IoT.”

About Intrinsic ID
Intrinsic ID is the world’s leading digital authentication company, providing the Internet of Things with hardware-based root-of-trust security via unclonable identities for any IoT-connected device. Based on Intrinsic ID’s patented SRAM PUF technology, the company’s security solutions can be implemented in hardware or software. Intrinsic ID security, which can be deployed at any stage of a product’s lifecycle, is used to validate payment systems, secure connectivity, authenticate sensors, and protect sensitive government and military systems. Intrinsic ID technology has been deployed in more than 125 million devices. Award recognition includes the IoT Breakthrough Award, the IoT Security Excellence Award, the Frost & Sullivan Technology Leadership Award and the EU Innovation Radar Prize. Intrinsic ID security has been proven in millions of devices certified by Common Criteria, EMVCo, Visa and multiple governments. Intrinsic ID’s mission: “Authenticate Everything.” Visit Intrinsic ID online at www.Intrinsic-ID.com.


Specialized AI Processor IP Design with HLS
by Alex Tan on 01-14-2019 at 12:00 pm

Intelligence, as in the term artificial intelligence (AI), involves learning or training, depending on the perspective from which it is viewed, and it has many nuances. As the basis of most deep learning methods, neural network based learning algorithms gained traction when it was shown that training a deep neural network (DNN) using a combination of unsupervised pre-training and subsequent supervised fine-tuning could yield good performance.

A key component of emerging applications, AI-driven computer vision (CV) has delivered refined, human-level visualization through the application of algorithms such as DNNs to convert digital image data into a representation understood by the compute engine, which is increasingly moving towards the network edge. Some of the mainstream CV applications are embedded in smart cameras, digital surveillance units and Advanced Driver Assistance Systems (ADAS).

DNNs have many variations and have delivered remarkable performance on CV tasks such as localization, classification and object recognition. Applying data-driven DNN algorithms to image processing is computationally intensive and requires special high-speed accelerators. It also involves performing convolutions. A technique frequently used in the digital signal processing field, convolution is a mathematical way of combining two signals (the input signal and the impulse response of a system, which describes how an impulse decays in that system) to form a third signal, the output of the convolved signals. It reflects how the input signal is shaped by that system.
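For readers who want the textbook operation in code, here is a minimal 1D convolution. It is unrelated to the c.WAVE100 design itself; it simply shows the "input combined with impulse response" definition above.

```cpp
#include <cstdio>
#include <vector>

// Direct 1D convolution: each input sample launches a scaled, shifted copy of
// the impulse response, and the copies are summed into the output.
std::vector<double> convolve(const std::vector<double>& signal,
                             const std::vector<double>& impulse_response) {
    std::vector<double> out(signal.size() + impulse_response.size() - 1, 0.0);
    for (size_t n = 0; n < signal.size(); ++n)
        for (size_t k = 0; k < impulse_response.size(); ++k)
            out[n + k] += signal[n] * impulse_response[k];
    return out;
}

int main() {
    std::vector<double> x = {1.0, 2.0, 3.0};   // input signal
    std::vector<double> h = {0.5, 0.5};        // impulse response (2-tap average)
    for (double v : convolve(x, h)) std::printf("%.2f ", v);   // 0.50 1.50 2.50 1.50
    std::printf("\n");
    return 0;
}
```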

The design target and its challenges
As a leading provider of high-performance video IP, Chips&Media™ has developed and deployed video codec IPs for a wide range of standards and applications, including fully configurable image signal processing (ISP) and computational photography IP.

The company's most recent product, a computer vision IP called c.WAVE100, is designed for real-time object detection, processing input video at 4K resolution and 30fps. Unlike a programmable, software-based IP approach, the team's goal was to deliver a PPA-optimal hardwired IP with a mostly fixed DNN (with limited runtime extensions). The underlying DNN-based detection algorithm is comprised of MobileNets, Single Shot Detection (SSD) and the company's own proprietary optimization techniques.

MobileNets, built on an optimized accelerator architecture employing depthwise separable convolutions, was selected to keep the DNN lightweight. The four-layer architecture consists of two layers (LX#0, LX#2) intended for conventional and depthwise convolution, and another pair (LX#1, LX#2) for pointwise convolution, as shown in figure 2. SSD, on the other hand, is an object detection technique using a single DNN and multi-scale feature maps.
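A simplified sketch of the depthwise-plus-pointwise idea is shown below: a per-channel 3x3 filter (no channel mixing) followed by a 1x1 convolution that mixes channels. The shapes and loops are deliberately minimal and do not reflect the c.WAVE100 accelerator datapath.

```cpp
#include <cstdio>
#include <vector>

using Tensor = std::vector<std::vector<std::vector<float>>>;   // [channel][row][col]

// Depthwise: each channel is filtered with its own 3x3 kernel, no channel mixing.
Tensor depthwise3x3(const Tensor& in, const Tensor& k /*[channel][3][3]*/) {
    size_t C = in.size(), H = in[0].size(), W = in[0][0].size();
    Tensor out(C, std::vector<std::vector<float>>(H - 2, std::vector<float>(W - 2, 0.f)));
    for (size_t c = 0; c < C; ++c)
        for (size_t y = 0; y + 2 < H; ++y)
            for (size_t x = 0; x + 2 < W; ++x)
                for (size_t dy = 0; dy < 3; ++dy)
                    for (size_t dx = 0; dx < 3; ++dx)
                        out[c][y][x] += in[c][y + dy][x + dx] * k[c][dy][dx];
    return out;
}

// Pointwise: a 1x1 convolution that mixes channels at each pixel position.
Tensor pointwise(const Tensor& in, const std::vector<std::vector<float>>& w /*[out_c][in_c]*/) {
    size_t C = in.size(), H = in[0].size(), W = in[0][0].size(), OC = w.size();
    Tensor out(OC, std::vector<std::vector<float>>(H, std::vector<float>(W, 0.f)));
    for (size_t oc = 0; oc < OC; ++oc)
        for (size_t ic = 0; ic < C; ++ic)
            for (size_t y = 0; y < H; ++y)
                for (size_t x = 0; x < W; ++x)
                    out[oc][y][x] += w[oc][ic] * in[ic][y][x];
    return out;
}

int main() {
    Tensor img(2, std::vector<std::vector<float>>(5, std::vector<float>(5, 1.f)));
    Tensor dk(2, std::vector<std::vector<float>>(3, std::vector<float>(3, 1.f / 9.f)));
    std::vector<std::vector<float>> pw = {{0.5f, 0.5f}};        // 2 channels -> 1 channel
    Tensor y = pointwise(depthwise3x3(img, dk), pw);
    std::printf("output %zux%zux%zu, y[0][0][0]=%.2f\n",
                y.size(), y[0].size(), y[0][0].size(), y[0][0][0]);  // 1x3x3, 1.00
    return 0;
}
```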

Because DNN-based CV processing is inherently repetitive, revolving around the MAC unit with massive data movement through the NN layers and FIFOs, the team's objective was a tool that allows rapid architectural exploration to yield an optimal design and shorten development time for time-to-market. The DNN-based model was trained on large datasets using the TensorFlow™ deep learning framework. As illustrated in figure 3, the generated model was to be captured in C language and synthesized into RTL.

To fairly assess the effectiveness of an HLS-based solution versus the conventional RTL capture approach, two concurrent c.WAVE100 IP development tracks were assigned to two different teams. This arrangement avoided disrupting the existing production approach, which relies on manual Verilog coding. Furthermore, none of the team members had prior exposure to the HLS tool or flow.

The team selected the Catapult® HLS Platform from Mentor as it provides algorithm designers a solution to generate high-quality RTL from C/C++ and/or SystemC descriptions that is targetable to ASIC, FPGA, and embedded FPGA solutions. Big pluses on the feature side include the platform's ability to check the design for errors prior to simulation, its seamless and reusable testing environment, and its support for formal equivalence checking between the generated RTL and the original source. Power-optimized RTL, ready for simulation and synthesis, can be rapidly generated through Catapult using the flow shown in figure 4.
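As a hedged illustration of the coding style HLS tools consume (statically bounded loops, fixed-size arrays, no dynamic memory), here is a small MAC kernel. It is not Chips&Media's source code, and a production Catapult flow would typically also use bit-accurate types and tool directives to steer pipelining and unrolling.

```cpp
#include <cstdio>

const int TAPS = 8;

// Multiply-accumulate over a fixed number of taps. In an HLS flow, this loop
// could be unrolled or pipelined by directives without touching the source.
int mac8(const short coeff[TAPS], const short sample[TAPS]) {
    int acc = 0;
    for (int i = 0; i < TAPS; ++i)
        acc += coeff[i] * sample[i];
    return acc;
}

int main() {
    short c[TAPS] = {1, 2, 3, 4, 4, 3, 2, 1};
    short s[TAPS] = {1, 1, 1, 1, 1, 1, 1, 1};
    std::printf("mac8 = %d\n", mac8(c, s));   // 20
    return 0;
}
```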

In addition to a shortened time-to-market at lower development cost, the team pursued three key benefits:
– To enable quick late-stage design changes at the C/C++ algorithm level, regenerate the RTL code and retarget to a new technology.
– To facilitate what-if hardware and architecture exploration for PPA without changing the source code.
– To accelerate schedules by reducing both design and verification effort.

Flow comparison and results
At the end of the trials, the team made a comparison of the two flows as tabulated below:


The team's takeaways from this concurrent development and evaluation effort on design with HLS versus the traditional RTL method are as follows:

  • Easy to convert algorithmic C models to synthesizable C code. Unlike RTL, there was no need to write FSMs or to consider timing between registers. The C code was easier to read for team code reviews and the simulation time was orders of magnitude faster.
  • Easy optional targeting of free software like gcc and gdb to quickly determine whether the C code matched the generated RTL.
  • Ability to exercise many architectures with little effort using HLS, which otherwise was very difficult to do in the traditional RTL flow.
  • SCVerify is a great feature. There was no need to write a testbench for RTL simulation and the C testbenches were reusable.

For more details on this project, check HERE.


SOC security is not a job for general purpose CPUs
by Tom Simon on 01-14-2019 at 7:00 am

Life is full of convenience-security tradeoffs. Sometimes these are explicit, where you get to make an active choice about how secure or insecure you want things to be. Other times we are unaware of the choices we are making and how risky they are for the convenience provided. If you leave your bike unlocked, you can expect it to be stolen. However, we all know the feeling of learning that our credit card number has been stolen, usually clueless as to how or why. The other thing we need to be wary of is that hackers and bad actors are always looking for new ways to exploit security flaws. This means that things we saw as safe choices can, overnight, become risky.

Remember back in the day when you could easily use a debugger to find the code that did a password check and bypass it? Now we have protected address spaces and better encryption. System exploits are often found by hackers wearing white hats and then provided to vendors for fixing, before the public even hears about them.

However, in the last year a serious new security flaw known as Spectre has come to light that should give everyone pause. Like most people, you bought a general-purpose computer with a CISC instruction set to both play games and do your banking. Processor vendors have spent the last several decades dramatically improving the performance of the general-purpose processors used in these machines, among them Intel, AMD and sometimes ARM processors.

With a clock ceiling of roughly 4GHz for processors, CPU designers looked for other ways to improve performance. An area ripe for optimization was the wait states for memory reads, which can block processing for hundreds of CPU cycles. The widely adopted solution is branch prediction with speculative execution: the CPU uses prior execution history to determine the likely outcome of a branch decision, saves its state, and proceeds to execute the most likely code path. If the prediction turns out to be wrong, the processor state is rolled back to the saved state and execution resumes with the correct branch. This seems safe enough…

Unfortunately, even if memory and processor registers are restored, there is still a latent trace from the code that was executed speculatively: the memory cache may have been changed by memory reads. Hackers can use this in a number of ways to ferret out the contents of memory that was believed to be secure. One example is where the predicted branch is a memory bounds check: the attacker trains the processor to expect the test to pass, then supplies an input that is in reality an illegal memory access. The speculative code pulls protected memory into the cache, where it can be retrieved later by additional hacker code. In fact, there are numerous other ways to exploit cache modification by malicious code leveraging speculative branch execution. Some even work in the JavaScript runtime in browsers.
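The fragment below sketches the shape of that bounds-check gadget (Spectre variant 1). The arrays and sizes are invented for illustration, and the cache-timing step needed to actually read the leaked data is not shown.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative arrays: array1 is the bounds-checked input, array2 is the
// "probe" array whose cache footprint leaks one byte per access.
size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];

uint8_t victim_read(size_t x) {
    // The attacker first calls this with in-bounds x values so the branch
    // predictor learns "the check passes", then supplies an out-of-bounds x.
    // A common mitigation is to stop speculation right after this check,
    // e.g. with a serializing fence (LFENCE on x86), at a real performance cost.
    if (x < array1_size) {
        // Executed speculatively even for out-of-bounds x: the secret byte at
        // array1[x] selects which line of array2 gets cached, and that cache
        // state survives after the mis-speculation is rolled back.
        return array2[array1[x] * 512];
    }
    return 0;
}

int main() {
    return victim_read(3);   // a legitimate, in-bounds call of the kind used for training
}
```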

Unlike software exploits, this one relies on fundamental behavior of general-purpose processors. So when this exploit became public there was no fix ready, and in fact we will have to live with it, perhaps with some mitigation, for some time. The most secure fix is to block speculative execution around vulnerable code using the LFENCE instruction, but this leads to huge slowdowns in CPU performance. One security researcher estimated that 24 million LFENCE instructions would need to be added to the Office Suite.

Now look at all the new applications where processors are used that have heightened security requirements. In the face of this, it is time to start using different types of processors for different types of tasks: secure processors for critical jobs, and higher-performance processors for less critical tasks. The push for heterogeneous processors has been underway for some time, largely driven by performance needs. However, there is a growing need for specifically designed secure processor families. These might, for instance, be RISC-based and less vulnerable to speculative execution exploits. They can also have their own directly connected memory, security IP and accelerators that are not accessible to any other part of the system.

Rambus outlines one such solution in their white paper "The CryptoManager Root of Trust". Starting with a 32-bit RISC-V processor dedicated to security functions, the ensemble includes a number of essential components and the proper architecture for ensuring security. As such, it is specifically designed to securely run sensitive code. It comes with dedicated SRAM and ROM memories, along with an AES core, a secure SHA-2 hash core and an asymmetric public-key engine.

The Rambus CryptoManager Root of Trust (CMRT) also includes a true random number generator and a key derivation core (KDC) for deriving ephemeral keys from root keys. To detect tampering, it has a canary core that can detect glitching and overclocking. The Rambus white paper goes into detail about its comprehensive attack resistance and discusses the techniques it uses to create silos for sensitive code that needs to run securely. In fact, multiple roots of trust can be created to keep resources, keys and security assets for different applications separate from each other. The CMRT core can be added as a complete security solution to SoCs to address the needs of a number of vertical markets, including IoT, automotive, networking/connectivity, and sensors.

Rambus also describes the development tools and the provisioning infrastructure that complete the core's development kit and deliverables. The white paper, which goes into much more detail on the full set of features and capabilities, is available on the Rambus website. It is worth noting that this RISC-V core is not considered to be at risk from the Spectre exploit. I highly recommend reading the white paper, and its notes and references.