Neural nets are hot these days. Certainly in this forum you can't swing a cat without hitting multiple articles on the topic; I've written some myself. For me there are two reasons for this interest. First, neural nets are amazingly successful at what they do, for example in image recognition, where they can beat human observers in both accuracy and response time. More subtly, they have changed the way we look at some aspects of artificial intelligence, shifting from mathematical models to biological models.
With the benefit of hindsight this shouldn't be surprising. If we want to mimic the behavior of, say, the visual cortex, starting with a low-level model of how the brain actually works (interconnected neurons with connectivity weights trained through learning) seems like a better bet than a high-level algorithmic abstraction of how we think vision works. You lose the benefit of understanding the process, but the effectiveness of the result is more important in this case than scientific insight.
The way neural nets work is perhaps easiest to understand in the context of image recognition. First an image is broken up into small regions. Pixels within each region are tested against a function to detect a particular feature such as a diagonal edge. The function is simple: a weighted sum of the inputs, checked against a threshold function to determine whether the output should trigger. Other feature tests (e.g. for color) can then be performed, but I'll skip that complication.
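To make the weighted-sum-plus-threshold idea concrete, here is a minimal sketch of a single such detector. The weights and pixel values are made up for illustration (they are not from any real trained net); the weights simply favor a bright top-left-to-bottom-right diagonal in a 3x3 region.

```python
# A single "neuron": a weighted sum of pixel inputs, checked against a threshold.
def neuron(pixels, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    total = sum(p * w for p, w in zip(pixels, weights))
    return 1 if total > threshold else 0

# 3x3 pixel regions flattened row by row; 1.0 = bright pixel, 0.0 = dark.
diagonal_patch = [1.0, 0.0, 0.0,
                  0.0, 1.0, 0.0,
                  0.0, 0.0, 1.0]

flat_patch = [1.0, 1.0, 1.0,
              0.0, 0.0, 0.0,
              0.0, 0.0, 0.0]

# Illustrative weights: positive along the diagonal, negative elsewhere,
# so only a diagonal pattern pushes the sum past the threshold.
diag_weights = [ 1.0, -0.5, -0.5,
                -0.5,  1.0, -0.5,
                -0.5, -0.5,  1.0]

print(neuron(diagonal_patch, diag_weights, threshold=2.0))  # 1: diagonal detected
print(neuron(flat_patch, diag_weights, threshold=2.0))      # 0: no diagonal
```

In a real net these weights are not hand-picked like this; they come out of the training phase discussed below.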
Outputs are fed into a second layer. The same process repeats, this time with a different set of functions which extract slightly higher-level details from the first-level outputs. This continues through multiple layers until the final outputs provide a high-level characterization of the recognized object. The weighted sums at the core of this method can be modeled very nicely on a DSP or GPU, which is convenient because the Snapdragon 820 offers both and can perform this modeling with low power consumption.
Setting the weights requires a training phase. Once a net has been trained it can be used to distinguish between the classes of objects on which it has been trained – road signs for example. Within their training domain, such neural nets have been shown to achieve 99% or better recognition accuracy in real time.
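As a toy illustration of what "setting the weights" means, here is the classic single-unit perceptron rule. Real multi-layer nets are trained with backpropagation, but the core idea is the same: nudge the weights whenever the output disagrees with the label. The data and features below are invented for the example.

```python
# Minimal sketch of the training idea (perceptron rule on one unit).
def predict(weights, inputs, threshold=0.5):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def train(samples, n_inputs, rate=0.1, epochs=50):
    """Repeatedly adjust weights in the direction that reduces the error."""
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - predict(weights, inputs)   # -1, 0, or +1
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Toy labeled data: label 1 only when the first two features are both set.
samples = [([1, 1, 0], 1), ([1, 1, 1], 1),
           ([0, 1, 1], 0), ([1, 0, 0], 0), ([0, 0, 1], 0)]
weights = train(samples, n_inputs=3)
print([predict(weights, x) for x, _ in samples])  # [1, 1, 0, 0, 0], matching the labels
```

Once trained, the weights are fixed and the net runs in pure feed-forward mode, which is what makes the low-power embedded deployment below practical.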
A great place to deploy this capability is in mobile systems, because that removes the need to go to the cloud for complex processing. Qualcomm recognized this and has just announced a software development kit to be used with the Snapdragon 820 (the processor at the heart of the Samsung Galaxy S7 and other phones) to enable neural net processing. This Snapdragon Neural Processing Engine SDK is powered by the Qualcomm® Zeroth™ Machine Intelligence Platform and is optimized for Snapdragon 820.
This capability can be used on smartphones, security cameras, cars and other platforms for scene detection, text recognition, object tracking and avoidance, gesturing and natural language processing. Think about upcoming electric vehicles, game stations and "remoteless" home entertainment centers; all of these will be enabled by this kind of technology.
In many applications, untethering from the cloud is not a nice-to-have; it's a necessity. You don't want collision avoidance to depend on whether you have line of sight to a cell tower (despite Verizon's claims to the contrary, coverage is not universal), or to be at the mercy of heavy loads on cloud servers. And you don't want security checks like facial or iris recognition on your phone farmed out to the cloud for similar reasons. Not to mention that man-in-the-middle attacks are an obvious weakness in cloud-based security.
Thanks to programs like this, we can look forward to much more safety, security and other intelligent usefulness in mobile devices in the near future. You can learn more about the Qualcomm offering HERE.