Perhaps the most interesting thing about Neural Networks is how they can be used for complex recognition tasks that we as people perform easily but would have a hard time explaining. A good example of a problem Neural Networks can tackle is determining when someone is faking a smile. Intuitively we know how to do this, yet we would be hard pressed to describe the process we use.
Neural Networks are being used for facial recognition, medical diagnosis, autonomous vehicles, and more. The list of applications is practically limitless, and the best part is that problems can be thrown at Neural Networks without having to map out a specific solution. Instead of hard-coded programs that can do one specific task and no other, we can build a Neural Network and retrain it to perform whatever tasks we need.
The power and potential of Neural Networks has not gone unnoticed by the major players in software and hardware. At the CDNLive event in Silicon Valley last week, Cadence CEO Lip-Bu Tan's keynote featured Neural Networks. A few months ago Cadence hosted an event specifically targeted at Embedded Neural Networks. While at first glance using Neural Networks in an embedded environment sounds far-fetched, the reality is that with today's technology the training phase can be executed on servers, and the coefficients for the task at hand can then be downloaded to run the recognition process on an embedded platform.
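To make that split concrete, here is a minimal sketch in Python with NumPy; the layer shapes, file name, and recognize helper are all hypothetical, chosen only to illustrate the idea of shipping trained coefficients to a device that runs recognition alone.

```python
import numpy as np

# --- On the server: train, then export only the learned coefficients ---
# Training itself is elided; placeholder values stand in for trained layers.
weights = {"layer1": np.random.randn(128, 64).astype(np.float32),
           "layer2": np.random.randn(64, 43).astype(np.float32)}
np.savez("traffic_sign_weights.npz", **weights)  # hypothetical file name

# --- On the embedded device: load coefficients and run recognition only ---
coeffs = np.load("traffic_sign_weights.npz")

def recognize(x):
    # Forward pass only; no training code ships to the device.
    h = np.maximum(x @ coeffs["layer1"], 0.0)  # ReLU hidden layer
    return np.argmax(h @ coeffs["layer2"])     # predicted class index
```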
It is worth noting that Google and Nvidia were represented among the speakers at the Cadence Embedded Neural Network Summit in February. However, one of the most interesting talks I found was by Sumit Sanyal, founder and CEO of Minds.ai. He emphasized that the 'new' binaries will be the training weights for Neural Networks. The training process is lengthy, but his company and others are working to shorten it. They are also looking to produce the smallest possible training weights so they can be deployed on almost any platform.
Instead of large word sizes and floating point numbers, training weights can be expressed efficiently with 8-bit values, which also leverages the existing compute infrastructure. Going even smaller, to 4-bit values for example, would create extra work for hardware designed around larger word sizes. Parallelism is also hugely valuable in this space: an overlap of only one pixel is needed in the data used for training, allowing large training problems to be broken up and solved in parallel.
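As a rough illustration of what 8-bit weights buy, here is a minimal quantization sketch in Python with NumPy; this simple scale-and-round scheme is one common approach, not necessarily the one Minds.ai uses.

```python
import numpy as np

def quantize_int8(w):
    """Linearly map float32 weights onto the int8 range [-127, 127]."""
    scale = np.abs(w).max() / 127.0          # one scale factor per tensor
    q = np.round(w / scale).astype(np.int8)  # 4x smaller than float32
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for the forward pass."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes, "->", q.nbytes, "bytes")  # 262144 -> 65536
print("max error:", np.abs(w - dequantize(q, scale)).max())
```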
Astoundingly, Neural Networks are significantly more accurate than conventional coding approaches for the recognition problems they have been applied to. Accuracy percentages for facial recognition are in the high 90s. Consider one specific benchmark for Neural Networks, the German Traffic Sign Recognition Benchmark (GTSRB). It consists of 51,840 images of German road signs, divided into 43 classes, with image sizes ranging from 15 pixels on a side up to 222 by 193 pixels. The two main metrics for this benchmark are recognition accuracy and the size of the training weights used for recognition.
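For concreteness, both metrics reduce to simple computations once you have predictions and weights in hand; the label counts and layer shapes below are placeholders, not actual GTSRB results.

```python
import numpy as np

# Placeholder labels: true and predicted classes (43 GTSRB classes)
y_true = np.random.randint(0, 43, size=10000)
y_pred = y_true.copy()
y_pred[:20] = (y_pred[:20] + 1) % 43          # inject a few mistakes

accuracy = np.mean(y_pred == y_true) * 100.0  # metric 1: recognition accuracy
print(f"accuracy: {accuracy:.2f}%")           # 99.80% for this toy case

weights = [np.zeros((128, 64), np.float32), np.zeros((64, 43), np.float32)]
size_kb = sum(w.nbytes for w in weights) / 1024.0  # metric 2: weight size
print(f"weight size: {size_kb:.1f} KB")
```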
Samer Hijazi of Cadence presented some of their work with Neural Networks and discussed their results on the GTSRB. They aggressively reduced the size of the training weights by combining the layers used in processing, reduced the size of each layer using numerical methods, and applied a hierarchical approach to the recognition problem. Using these methods, they achieved an extremely high recognition accuracy of 99.8% while cutting the number of MACs per frame to more than an order of magnitude below the previous best result.
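The talk did not spell out which numerical methods were used to shrink each layer; one standard possibility is a truncated SVD, sketched below purely as an illustration of the idea, not as Cadence's actual technique.

```python
import numpy as np

def compress_layer(w, rank):
    """Approximate a dense weight matrix with two low-rank factors."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]  # shape (m, rank)
    b = vt[:rank, :]            # shape (rank, n)
    return a, b                 # w ~= a @ b, with far fewer values

w = np.random.randn(512, 512).astype(np.float32)
a, b = compress_layer(w, rank=32)
print(f"{w.size} -> {a.size + b.size} values "
      f"({w.size / (a.size + b.size):.1f}x smaller)")
# Real trained layers are far more compressible than this random matrix.
print("relative error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
```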
Given the wide range of applications and the soon-to-be-widespread ability to train Neural Networks and then use them on mobile and embedded platforms, we can expect to see huge advances in almost every computational domain. We are already seeing hints of this in autonomous cars and many other areas. We live in a visual world, and computers are now, for the first time, learning to see the way we do and give us back meaningful information. The same goes for sound, any other sensor input, and big data for that matter. Think of medicine (radiology, tumor detection, etc.), geology with images from space, or physics with data from particle colliders. Manufacturing and quality control are other areas that stand to be revolutionized. For more information on the Cadence Tensilica technology used to build Neural Networks, you can look here.