Last week I attended the CEVA webinar "Challenges of Vision Based Autonomous Driving & Facilitation of An Embedded Neural Network Platform", and I loved what I heard and saw. For the first time since I started reading about autonomous driving, I have seen a realistic roadmap rather than a geek's fantasy suggesting you will sit in a completely autonomous car by next year or so! Driving a car can be so boring sometimes that it is legitimate to dream of a way to escape it… but we should never forget that automotive is a life-critical application.
That's why we expect real experts to address the numerous algorithm, architecture and processing challenges. Even if autonomous driving will require a great deal of software engineering, as far as I am concerned, I would prefer such a project to be managed by hardware (IC or IP) experts, as they have a bug-free culture that prevents them from launching a product that is not 100% perfect. If I sit in an autonomous car, I don't want this life-critical application to be managed by a software company that releases the product first and sends patches afterwards…
CEVA's webinar starts with this roadmap from the National Highway Traffic Safety Administration (NHTSA), and it looks like a realistic starting point. Don't expect fully autonomous driving (level 4) to be available before 2024-2025, even if limited self-driving (level 3: highway autopilot and self-parking) could arrive a few years earlier. If you drive an "autonomous" car today, you benefit from function-specific automation (level 1), offering adaptive cruise control or lane centering. The road to autonomous driving is long, and the next step (level 2) only offers a combination of automated functions, like traffic-jam assistance or collision avoidance, but still requires driver control.
CEVA has associated the type of algorithm, traditional or CNN, that can be used at each level. Only traditional algorithms are used for ROI detection and identification at level 1, and only limited deep-learning algorithms can be used at level 2. The reason deep-learning algorithms and CNNs are not really implemented before level 3 is linked to the challenges associated with deep learning today: the very high bandwidth required and the computing bottleneck make it a solution that is not yet cost-effective in production.
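To make the bandwidth argument concrete, here is a back-of-envelope sketch. Every figure below (frame rate, weight footprint, activation traffic, camera count) is my own illustrative assumption, not a CEVA number:

```python
# Back-of-envelope memory-bandwidth estimate for CNN inference on an
# automotive camera stream. All figures are illustrative assumptions,
# not CEVA numbers.

fps = 30                 # assumed camera frame rate
weights_mb = 60          # assumed CNN weight footprint (8-bit weights)
activations_mb = 25      # assumed intermediate feature-map traffic per frame

# If weights and activations spill to external DRAM, every frame pays:
traffic_gbs = (weights_mb + activations_mb) * fps / 1000
print(f"DRAM traffic per camera: ~{traffic_gbs:.1f} GB/s")   # ~2.6 GB/s

# A level-3+ car carries several cameras (assume 6 here):
print(f"Six cameras: ~{6 * traffic_gbs:.1f} GB/s")           # ~15.3 GB/s
```

Numbers of this magnitude are hard to sustain within an embedded automotive power budget, which is exactly the cost-effectiveness problem the webinar points at.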
But CEVA is working hard to develop a complete solution around the CEVA-XM vision DSP core together with CDNN HW accelerators. Because convolutions are the dominant and most cycle-consuming layers, creating a dedicated HW engine for executing the convolution layers of a CNN dramatically decreases power consumption. Compared with the Nvidia TX1 GPU, a CEVA-XM6-based platform offers a 25X better power-efficiency factor, calculated in ROI/sec/Watt. Moreover, this platform provides the flexibility to cope with future neural-network developments, and, if you consider that papers on new deep-learning techniques are published every week, the successful solution will have to be flexible.
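A quick way to see why convolutions deserve their own hardware engine is to tally multiply-accumulates per layer type. This toy profile (the network shapes are my own hypothetical choices, not a CEVA model) shows the convolution layers accounting for roughly 99% of the compute:

```python
# Illustrative (hypothetical) layer profile for a small detection CNN,
# showing why convolutions dominate the cycle count and are the natural
# target for a dedicated HW engine.

def conv_macs(h, w, k, c_in, c_out):
    """Multiply-accumulates for one k x k conv on an h x w feature map."""
    return h * w * k * k * c_in * c_out

layers = [
    ("conv1 3x3", conv_macs(224, 224, 3, 3, 32)),
    ("conv2 3x3", conv_macs(112, 112, 3, 32, 64)),
    ("conv3 3x3", conv_macs(56, 56, 3, 64, 128)),
    ("fully-connected", 7 * 7 * 128 * 1000),   # flatten -> 1000 outputs
]

total = sum(m for _, m in layers)
for name, m in layers:
    print(f"{name:>16}: {m/1e6:8.1f} MMAC ({100*m/total:5.1f}%)")
```

Offloading that dominant share to a fixed-function engine, while keeping the programmable DSP for everything else, is the design trade-off behind the ROI/sec/Watt figure quoted above.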
Which architecture best fits autonomous driving requirements, centralized or distributed? From the above picture, we see that it clearly depends on the target level. A distributed, modular architecture is a good fit for the comfort or convenience applications implemented in vehicles available today. As soon as you want to implement safety applications using radar, lidar or stereo vision to support levels 2-3-4, the architecture has to integrate sensor fusion and needs to be partially centralized. Just a clarification about level 3, or limited self-driving automation, made by CEVA during the webinar: level 3 could be perceived by the driver as a full self-driving feature, even though it requires driver attention. Such confusion could be dangerous, and that's the reason some OEMs have decided to skip it and build level-4 solutions directly.
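As a toy illustration of the fusion argument (the sensor names, fields, readings and weights below are hypothetical, not from the webinar), a centralized node can cross-check radar, lidar and camera readings that no single distributed ECU ever sees together:

```python
# Minimal sketch of why level 2+ pushes toward partial centralization:
# one fusion node must see all sensors to cross-check their detections.
# Sensors, fields and confidence weights are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # "radar", "lidar", or "camera"
    distance_m: float    # estimated range to the object
    confidence: float    # sensor-specific confidence, 0..1

def fuse(detections: list[Detection]) -> float:
    """Confidence-weighted range estimate across all sensors."""
    total_w = sum(d.confidence for d in detections)
    return sum(d.distance_m * d.confidence for d in detections) / total_w

obstacle = [
    Detection("radar", 42.1, 0.9),   # radar: good range, poor classification
    Detection("camera", 40.5, 0.6),  # camera: good classification, noisy range
    Detection("lidar", 41.8, 0.8),
]
print(f"Fused range: {fuse(obstacle):.1f} m")   # ~41.6 m
```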
A centralized architecture seems well suited for level 4, as you would intuitively expect a fully autonomous vehicle to rely on a centralized driver-assistance system. According to CEVA, this centralized architecture will integrate deep-learning technology to support level 3 or level 4. CEVA offers a comprehensive vision platform centered on the CEVA-XM DSP, including imaging & vision SW libraries, the CEVA Deep Neural Network (CDNN), CEVA HW accelerators and imaging & vision applications. Reading the blog "Could Deep Learning be Available for Mass Market" will remind you of the principles of deep learning and the way it is implemented by CEVA…
An expert from AdasWorks described DRIVE 2.0, an artificial-intelligence-based full-stack software suite for level-5 self-driving cars, as well as TOOLKIT 2.0, a framework combining the training and testing tools you need to build the DRIVE 2.0 suite. This picture is quite interesting: starting from the photograph at the upper right, you can see the various simulations generated by Toolkit 2.0:
You can still attend this webinar, available on-demand, and you will be surprised by the number of questions. In fact, it was not possible for CEVA and AdasWorks to take all of them during the webinar!
By Eric Esteve from IPNEST