CES has been morphing into an automotive show for several years now. Chipmakers pitched control solutions, then infotainment, then connectivity. Phone makers pitched device integration. Automotive electronics suppliers pitched MEMS sensors and cameras. Now, with a lot of pieces in place, the story in 2016 has turned to system-level solutions.
And it isn’t self-driving vehicles. Every time someone says autonomous, an angel gets its wings – but for regulatory and legal and cultural reasons, large-scale deployment of autonomous vehicles is still generations away. Researchers will research, and that’s good, but the real money for auto companies and chip suppliers is in ADAS: advanced driver assistance systems.
The embedded chip companies – Freescale/NXP, Renesas, TI – were out in front for a while. These firms were all deep in automotive control with safety-qualified parts, so the leap to infotainment and connectivity wasn’t huge. Intel tried to tell an infotainment story, with so-so results. NVIDIA snagged headlines with its Tegra SoCs in a Tesla console win, and made headway in luxury display segments. Mobileye combined MIPS cores with embedded vision processing for dedicated EyeQ ADAS chips, coming from nowhere to 10 million cars. Qualcomm now wants in with Snapdragon 820 Automotive and Snapdragon 602A solutions tuned for cars.
Suddenly, there is a battle royale developing around who can create the algorithms for ADAS integration, from the vehicle to the cloud.
This isn’t exactly a new idea. Several years ago in a restaurant at the Venetian during CES, I met the folks from INRIX, who were using GPS, mobile apps, and cloud algorithms to create real-time traffic mapping. They took technology deployed in commercial fleets and massaged it for consumer smartphone tastes, but they still face difficulty monetizing beyond the commercial space.
What if that capability could be integrated into cars, and not just luxury models but more mainstream offerings? Now this gets very interesting for a lot of chipmakers. An analyst at Wunderlich writes that high-end ADAS content per car may eventually be an order of magnitude above that of a typical mobile device – for example, perhaps 8 to 10 cameras per car with associated vision processing.
But for that kind of broad adoption, ADAS has to integrate fully with the automotive control systems. That mandates a move from garden-variety mobile SoCs, which burp at extremely inconvenient times, to more robust, automotive-qualified parts.
In many ways, this surge in automotive chip interest resembles the COTS push in defense technology years ago. Defense electronics suffered as more and more semiconductor suppliers bailed out of the mil-spec business, unable to sustain product development and manufacturing costs for a minuscule market dwarfed by consumer electronics. The response was the Perry Memo, allowing more commercial-grade technology in select use cases.
Automotive under-hood applications are renowned for being even nastier than many mil-spec applications, with harsh environmental requirements and relatively low volumes that again sent many suppliers heading for the exit. Fortunately, foundries and processes have caught up, so chip fabrication is not as big a barrier as it used to be. Safety-critical design is now a big hurdle.
So are the algorithms, and honestly that comes first – more on the safety-critical angle shortly.
NVIDIA has fired up its DRIVE PX 2, a massive 250 W liquid-cooled supercomputer in a box featuring the latest Tegra technology with 64-bit ‘Denver’ ARMv8 cores and Pascal GPU cores. NVIDIA thinks GPU computing is a fit for a deep neural net (DNN) cloud leveraging a common architecture running CUDA. Deep learning will be crucial to object recognition and motion tracking, combined with mapping elements and other context from the cloud.
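To make that workload concrete, here is a minimal sketch, in plain Python/NumPy rather than NVIDIA’s CUDA stack, of the convolution-plus-activation step a DNN repeats over and over on every camera frame. The frame size and the single edge filter are purely illustrative; the appeal of a GPU is that every output pixel below can be computed by an independent thread.

```python
import numpy as np

def conv2d(frame, kernel):
    """Naive 2D convolution (technically cross-correlation): the core
    operation a GPU parallelizes across thousands of threads per frame."""
    kh, kw = kernel.shape
    oh, ow = frame.shape[0] - kh + 1, frame.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(frame[y:y + kh, x:x + kw] * kernel)
    return out

# Illustrative only: a downscaled grayscale camera frame and one Sobel-style
# edge filter; a real DNN stacks hundreds of such filters in every layer.
frame = np.random.rand(90, 160).astype(np.float32)
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=np.float32)
features = np.maximum(conv2d(frame, edge_kernel), 0.0)  # ReLU activation
print(features.shape)  # (88, 158) feature map from this one filter
```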
Mobileye says that makes for a nice demonstration, but getting to production algorithms takes a lot more doing. They are aiming for a proprietary mapping technology running on EyeQ chips called Road Experience Management (REM), which chews road information and localization down to roughly 10 KB/km, compared with the GB/km kinds of numbers for Google’s current HD mapping technology. In theory, carmakers using EyeQ can flip on new vehicle software and build a “road book”.
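That gap is worth a quick back-of-the-envelope check. Here is a minimal sketch, assuming a hypothetical 100,000 km road network and taking “GB/km kinds of numbers” as roughly 1 GB/km; both assumptions are mine, for illustration only.

```python
# Back-of-the-envelope comparison of map data volume.
road_km = 100_000                 # hypothetical road network to map
rem_kb_per_km = 10                # Mobileye's claimed REM rate
hd_gb_per_km = 1.0                # assumed "GB/km kinds of numbers"

rem_total_gb = road_km * rem_kb_per_km / 1e6   # KB -> GB
hd_total_gb = road_km * hd_gb_per_km

print(f"REM road book: {rem_total_gb:.0f} GB")   # ~1 GB
print(f"HD map: {hd_total_gb:,.0f} GB")          # ~100,000 GB, i.e. 100 TB
```

A road book that fits in a single gigabyte can plausibly be crowdsourced from production vehicles over cellular links; a 100 TB map cannot.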
CEVA is telling carmakers to hold their horses. Just as in mobile, where CEVA used more power-efficient DSP core IP to let chipmakers differentiate 4G LTE solutions, CEVA is creating IP for ADAS solutions. They have coupled their CEVA-XM4 vision engine with the open-source Caffe deep learning framework, creating a licensable solution for chipmakers called the CEVA Deep Neural Network (CDNN).
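To see why the Caffe tie-in matters, here is a minimal pycaffe inference sketch; the file names and the ‘prob’ output layer name are hypothetical, but any classifier trained in Caffe follows this pattern. The idea behind CDNN is that a network trained and validated this way on a workstation can then be retargeted to run on the XM4.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()  # on an XM4 design, CDNN would retarget this to CEVA hardware

# Hypothetical file names; substitute any Caffe-trained network.
net = caffe.Net('deploy.prototxt',     # network architecture definition
                'weights.caffemodel',  # trained parameters
                caffe.TEST)            # inference mode

# Feed one preprocessed camera frame (batch, channels, height, width).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
net.blobs['data'].reshape(*frame.shape)
net.blobs['data'].data[...] = frame

out = net.forward()
scores = out['prob'][0]  # assumes the net ends in a softmax layer named 'prob'
print('top class:', scores.argmax())
```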
Why is CEVA confident enough to step right into the middle of this heavyweight fight? CEVA says it performs deep learning 3x faster, with 30x less power and 15x less bandwidth, compared to the NVIDIA approach. Some of the gain comes from efficient silicon in the XM4, but much of it comes from a floating-point to fixed-point conversion step that cuts bandwidth without sacrificing accuracy.
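CEVA hasn’t published the details of that conversion step, but the general technique, quantizing each weight tensor to narrow integers with a shared scale factor, is well understood. A minimal sketch, assuming simple symmetric 8-bit scaling:

```python
import numpy as np

def to_fixed_point(weights):
    """Quantize float32 weights to signed 8-bit fixed point with one
    shared scale per tensor: 4x less storage and memory traffic."""
    qmax = 127                             # max magnitude storable in int8
    scale = np.abs(weights).max() / qmax   # map the largest weight to 127
    return np.round(weights / scale).astype(np.int8), scale

def from_fixed_point(q, scale):
    """Dequantize back to float32 to measure round-trip error."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)   # stand-in weight tensor
q, s = to_fixed_point(w)
err = np.abs(w - from_fixed_point(q, s)).max()
print(f"max round-trip error: {err:.4f} vs weight range +/-{np.abs(w).max():.2f}")
```

The round-trip error stays small relative to the weight magnitudes, which is why accuracy survives while every weight moves over the bus in a quarter of the bits.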
Compared to the Mobileye REM approach, CEVA says it is more open to customized algorithms in end-to-end solutions with tuned hardware and software. On top of that, the CEVA-XM4 is now certified to ISO 26262, making the XM4 currently the only licensable vision processor IP supporting the ASIL B safety integrity level.
There is also mounting competition. We already mentioned Qualcomm. Samsung has created its own automotive division, and both Huawei and LG are also after automakers. There is the stealth-mode Apple automotive team doing who knows what. And there’s Faraday Future, a new firm with Tesla-like aspirations. This could put more chipmakers, or even automakers looking to self-design chips, in the market for licensable IP.
I’ve said several times that the secret of Qualcomm’s success has been a tight coupling between algorithm and silicon, citing the Viterbi decoder and CDMA chipsets as examples. After this initial phase of basic ADAS chips, we’re likely to see that the long-term winner in ADAS creates the same style of algorithmic coupling, adding cloud-based technology for an end-to-end solution optimized on ultra-low-power silicon at the edge. (Qualcomm is also moving into deep learning with Zeroth.)
Can CEVA create an IP-based ADAS ecosystem quickly enough to compete with the head start Mobileye and NVIDIA enjoy and a new thrust from Qualcomm? Is CEVA’s bet on ISO 26262 certification well placed? For another perspective with a bit more detail on the Mobileye and NVIDIA pressers, Junko Yoshida had some excellent CES 2016 ADAS coverage in EE Times. The team behind Caffe also has a website.