The Next Big Thing in Deep Learning
by Bernard Murphy on 02-14-2017 at 7:00 am

I mentioned adversarial learning in an earlier blog, where it was used to harden recognition systems against bad actors who could use slightly tweaked images to force significant misidentification of objects. It now looks like methods of this nature aren't just an interesting sidebar on machine learning; they are driving major advances in the field (per Yann LeCun at Facebook).

The class of systems considered in these approaches are called Generative Adversarial Networks (GANs) in which one neural network is played off against another. One network, called the discriminator, performs image recognition with a twist – it reports on whether it believes the image to be real or fake (artificially constructed). The second network, called the generator, reverses the normal function of a recognition system to create artificial images which it feeds to the discriminator. If the discriminator determines an image to be fake, it feeds back information to the generator on what caused it to come to that conclusion.
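
To make the interplay concrete, here is a minimal sketch of that adversarial loop in PyTorch. The tiny fully connected networks, the image size, and the hyperparameters are placeholders of my own, not taken from any particular paper:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # noise size and flattened image size (placeholders)

# Generator: turns random noise into a flattened "fake" image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores an image's probability of being real.
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):  # real_batch: (b, img_dim) flattened real images
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Train the discriminator: real images should score 1, generated ones 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_D = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train the generator: the gradient flowing back through D is exactly
    # the "what made this look fake" feedback described above.
    fake = G(torch.randn(b, latent_dim))
    loss_G = bce(D(fake), ones)  # generator wants D to answer "real"
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

Each call to train_step nudges both networks; over many iterations the generator's images become progressively harder to distinguish from the real ones.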

The beauty of this setup is that the pair of networks, after bootstrapping on a relatively modest set of real images, can self-train to levels of recognition/generation quality that would normally require a much larger database of labeled images. This is a big deal. A standard reference for images, ImageNet, contains over 14 million images (its widely used benchmark subset spans 1,000 categories). And that's for "standard" benchmark images; if you want to train on something outside that set, unless you get lucky you must first build a database of tens of thousands of labeled reference images. With GAN approaches you can reduce the training database to hundreds of images. That's not only more efficient, it can be essential where access to larger databases is limited for privacy reasons, as is often the case for medical data.
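
One concrete way this data efficiency is often realized (a semi-supervised GAN in the style of Salimans et al., 2016; I'm not claiming this is the specific method LeCun has in mind) is to turn the discriminator into a (K+1)-way classifier, so the plentiful unlabeled and generated images carry training signal alongside the few labeled ones:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of real classes; value is a placeholder

# Discriminator doubles as a classifier: K real classes plus one "fake" class.
classifier = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(),
                           nn.Linear(256, K + 1))

def ssgan_losses(labeled_x, labels, unlabeled_x, generated_x):
    # Supervised loss: the small labeled set trains the K real classes.
    sup = F.cross_entropy(classifier(labeled_x)[:, :K], labels)

    # Unsupervised loss, real side: unlabeled images should NOT look fake.
    p_fake = F.softmax(classifier(unlabeled_x), dim=1)[:, K]
    loss_real = -torch.log(1 - p_fake + 1e-8).mean()

    # Unsupervised loss, fake side: generated images SHOULD land in class K.
    tgt = torch.full((generated_x.size(0),), K, dtype=torch.long)
    loss_fake = F.cross_entropy(classifier(generated_x), tgt)

    return sup + loss_real + loss_fake
```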

This raises an interesting question in deep learning: if GAN-enhanced training on a small set of examples can achieve recognition comparable to (non-enhanced) training on a much larger set, doesn't that imply significant redundancy in the larger set? And if so, how do you measure, or better yet eliminate, that redundancy? This is a question we understand quite well in verification, but I'm not aware of work in this area for deep-learning training data. I would think the topic is extremely important. A well-chosen training set, together with GAN methods, could train a system to recognize accurately across a wide range of examples; a poorly chosen training set, even with GAN reinforcement, could do no better than recognize well across a limited range. If anyone knows of work in this area, let me know.
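
For what it's worth, one naive way to at least measure such redundancy would be to embed every image with a pretrained network and count near-duplicates. The cosine-similarity yardstick and the 0.95 threshold below are placeholder assumptions of mine, not an established technique:

```python
import numpy as np

def redundancy_fraction(embeddings: np.ndarray, threshold: float = 0.95) -> float:
    """embeddings: (N, d) array of L2-normalized feature vectors,
    e.g. from a pretrained recognition network."""
    sims = embeddings @ embeddings.T            # (N, N) cosine similarities
    np.fill_diagonal(sims, 0.0)                 # ignore self-similarity
    redundant = sims.max(axis=1) > threshold    # image has a near-duplicate
    return redundant.mean()                     # fraction of redundant images
```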

So one thing you get out of GANs is improved learning on smaller datasets. The other thing you get is improved image generation, because the discriminator is also training the generator. Why would that be useful? I can imagine that movie-makers might find ways to take advantage of it. A more serious application is to support something called inpainting: filling in missing parts of an image. This has obvious applications in criminal investigation, as one example.
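
Here is a sketch of how a GAN-style inpainting objective can be wired up, in the spirit of context-encoder approaches; `G`, `D`, and the loss weighting are assumptions for illustration, not taken from the article:

```python
import torch
import torch.nn.functional as F

def inpaint_generator_loss(G, D, image, mask, adv_weight=0.001):
    """G: image-to-image network; D: image-to-realness-score network (both assumed).
    mask: 1 where pixels are missing, 0 where they are known."""
    masked = image * (1 - mask)            # zero out the missing region
    filled = G(masked)                     # generator proposes hole contents
    composite = masked + filled * mask     # keep known pixels, paste the fill

    # Reconstruction term: the filled hole should match the ground truth.
    recon = F.mse_loss(composite * mask, image * mask)
    # Adversarial term: the completed image should fool the discriminator.
    adv = -torch.log(D(composite) + 1e-8).mean()
    return recon + adv_weight * adv
```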

Another very interesting application is in astronomy, specifically in approaches to mapping dark energy by looking for weak gravitational lensing of galaxies. This is a tricky problem. We don't really know much about dark energy, and we're looking for galaxies whose size and shape we don't know, because they're distorted by that dark energy. This seems like a problem with too many unknowns, but a group at CMU has found a way to attack it through generative creation of galaxy images. They expect to use methods of this nature, together with models of the estimated shearing caused by lensing, to match against the images we actually detect. By tuning for accurate matches they can effectively deduce the characteristics of the dark-energy distribution.
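
As a hedged illustration of that tuning idea (my own sketch, not the CMU group's method): apply a trial weak-lensing shear (g1, g2) to generator-produced galaxy images and adjust it until some summary statistic matches the observations. The shear convention, the statistic, and the fitting loop are all placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def shear(images, g1, g2):
    """Apply a small linearized shear to a batch of (B, C, H, W) images."""
    b = images.size(0)
    theta = torch.stack([
        torch.stack([1 + g1, g2, torch.zeros(())]),
        torch.stack([g2, 1 - g1, torch.zeros(())]),
    ]).unsqueeze(0).expand(b, -1, -1)               # (B, 2, 3) affine matrices
    grid = F.affine_grid(theta, images.shape, align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)

def fit_shear(generated, observed_stat, statistic, steps=200):
    """Gradient-descend on (g1, g2) so sheared fakes match the observed statistic."""
    g = torch.zeros(2, requires_grad=True)
    opt = torch.optim.Adam([g], lr=1e-2)
    for _ in range(steps):
        loss = F.mse_loss(statistic(shear(generated, g[0], g[1])), observed_stat)
        opt.zero_grad(); loss.backward(); opt.step()
    return g.detach()  # inferred shear, a proxy for the lensing signal
```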

Deep learning marches on. It continues to become more interesting, more capable and more widely applicable. The Nature article that started me on this topic is HERE.

More articles by Bernard…
