
NVIDIA’s Deep Learning GPUs Driving Your Car!
by Mitch Heins on 12-09-2016 at 4:00 pm

In a recent SemiWiki article it was noted that 5 of the top 20 semiconductor suppliers are showing double-digit gains for 2016. At the top of the list was NVIDIA with an annual growth rate of 35%. Most of this gain is due to sales of its graphics processing units (GPUs), which one normally associates with high-performance computer gaming engines. The thing that caught my eye, though, was that while gaming had a hefty 65% growth, IoT (Internet of Things) and Automotive accounted for 193% and 61% growth respectively.

OK, IoT is growing everywhere, so a large percentage ramp makes sense. But what is NVIDIA doing in automotive? My first thought went to graphics applications like heads-up displays, but after a little digging I found a very interesting niche that NVIDIA has exploited with its GPUs, and that’s in the area of ADAS (Advanced Driver Assistance Systems).

The idea of a ‘connected car’ has been around for a while, with its roots going back as far as the 1960s, when General Motors (GM) had a project called DAIR (Driver Aid, Information and Routing). The vision was right, but the technology and ecosystem to support it were not up to the task.

It would be 30 years later, in the mid-1990s, before GM would announce OnStar connectivity in cars, and another 10 years after that, in the mid-2000s with the advent of smartphones, before we would see ‘infotainment’ apps added. Add another 10 years and we have reached the present day, where multiple pieces of the ecosystem are now in place to support GM’s original vision and much more.

These ecosystem pieces include 4G cellular ubiquity, high-speed cloud-based data centers, cars with dozens of microcontrollers and processors, and a plethora of sensors, both in the cars and in the environment in which we drive. All of this technology is being pulled together with embedded software and connections to the cloud, in what we now call the Internet of Things, or IoT.

This progression has led to an impressive amount of investment to take ADAS to the next level with the advent of autonomous, or self-driving, cars. Self-driving cars require multiple connected technologies to work together: GPS, radar, lidar, sonar, cameras, and other sensors, to name a few. All of these technologies generate massive amounts of heterogeneous data that must be analyzed together in real time. Suppliers have stepped up with AI (artificial intelligence) technologies that process the information to enable a car to predict how surrounding objects might behave and to decide how the car should respond.

One of the favorite algorithms for this type of AI work is something called a Deep Neural Network (DNN). DNNs mimic the neuron connections of the human brain, and like the brain, they must be trained. Training a DNN takes a long time. As an example, for image recognition alone, a DNN must be shown millions of images, and for each image it must go through a process where it identifies and classifies the objects in the image. Results are graded against known good answers, and corrections are then fed back into the DNN until it can successfully identify and classify all of the objects.
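
To make that show-grade-correct cycle concrete, here is a minimal sketch of a supervised image-classification training loop. It uses PyTorch purely as an illustrative framework; the tiny SimpleNet model and the data loader it expects are hypothetical stand-ins, not NVIDIA’s actual training stack.

import torch
import torch.nn as nn
import torch.optim as optim

class SimpleNet(nn.Module):
    """A tiny image classifier standing in for a real DNN."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def train_epoch(model, loader, device):
    criterion = nn.CrossEntropyLoss()                  # grades predictions against the known good answers
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    model.train()
    for images, labels in loader:                      # "show" the network a batch of labeled images
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)        # how far off were the classifications?
        loss.backward()                                # feed corrections back through the network
        optimizer.step()                               # adjust the weights accordingly

In practice this loop runs over millions of images for many passes, which is exactly the part that takes so long on conventional processors.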

This is where NVIDIA has found a niche. It turns out that you can use the massively parallel nature of NVIDIA’s GPUs to dramatically reduce the time spent training a DNN. And as an added bonus, once the DNN is fully trained, you can use the GPU to execute the DNN on massive amounts of data in real time (milliseconds, not hours), which is exactly what the creators of self-driving cars need to complete the technology portion of their ecosystem.
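
The inference half of that claim is easy to picture with a short, hedged sketch: a network like the one in the training example is moved onto a CUDA device and used for low-latency classification. The stand-in model and the classify helper below are illustrative assumptions, not any vendor’s actual deployment code.

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in network; in a real system this would be the fully trained DNN.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
).to(device)                             # parameters live in GPU memory when a GPU is available
model.eval()

@torch.no_grad()
def classify(frame):
    """Classify one camera frame (a 3xHxW tensor); on a GPU this takes milliseconds."""
    x = frame.unsqueeze(0).to(device)    # add a batch dimension and move the data to the GPU
    logits = model(x)
    return logits.argmax(dim=1).item()   # index of the most likely class

The same .to(device) pattern is what speeds up training too: with the weights and the image batches both in GPU memory, the many multiply-accumulate operations per image run in parallel rather than one after another.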

To make it easier for anyone to use its GPUs in this fashion, NVIDIA has developed an entire suite of software development tools called DIGITS (Deep Learning GPU Training System) that can be used to rapidly train DNNs for image classification, segmentation, and object detection tasks. The nice thing about DNNs, though, is that they are useful for much more than just image processing. DNNs can be used for data aggregation and to convolve heterogeneous data of widely differing types (images, text, numerical data, etc.), which fits perfectly with the problem space of IoT systems in general.
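
As a rough illustration of that last point, the hypothetical sketch below fuses an image with a small vector of numeric sensor readings in a single network. The architecture and the sizes are made up for the example; the point is only that heterogeneous inputs can be combined into one set of learned features feeding one decision.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy network that combines an image branch with a numeric-sensor branch."""
    def __init__(self, num_sensor_features=8, num_classes=5):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # image -> 16 features
        )
        self.sensor_branch = nn.Sequential(
            nn.Linear(num_sensor_features, 16), nn.ReLU(),  # sensor readings -> 16 features
        )
        self.head = nn.Linear(16 + 16, num_classes)         # decision made from both feature sets

    def forward(self, image, sensors):
        fused = torch.cat([self.image_branch(image), self.sensor_branch(sensors)], dim=1)
        return self.head(fused)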

The self-driving car is just one instance of where this kind of data aggregation and convolution will occur in the world of IoT. Morgan Stanley expects that market to grow to $1.3 trillion by 2022 in the U.S. alone. If NVIDIA can win the processor sockets for that market and for other IoT adjacencies where DNNs can be used, its growth for the next several years will be well assured.

See also:
5 of the Top 20 Semiconductor Suppliers to Show Double-Digit Gains in 2016!
IoT and Automotive to Drive IC Market Growth Through 2020
Who owns the road? The IoT-connected car of today – and tomorrow
Which GPU to use for deep learning?
