Matthew Rosenquist
Facial recognition has struggled for some time to be accepted as a mainstream biometric. Apple’s latest generation iPhone is shifting away from fingerprint scanning and may introduce large numbers of consumers to facial recognition as the way to unlock their most personal device. But history has shown that attackers are never far behind, especially when images of a person’s face are used for authentication. Will the new advances outpace the threats and establish a strong beachhead for widespread use, or will facial recognition again fail to deliver long-term, sustainable security for users?
Face as a Password
Facial recognition is one of dozens of possible biometric measures for identifying and authenticating users. By far, the fingerprint has been the preferred choice over the past several years, with many of the most popular smartphones integrating a fingerprint scanner for easier unlocking.
Facial recognition has been tried in the past, but the results were not impressive. The failure rate for legitimate users forced vendors to loosen the matching criteria to the point that it was simply not very secure. Then there is the growing problem of people’s images appearing everywhere. The world has been flooded with cameras and a fondness for sharing pictures. Social media and search engines are fertile grounds for locating high-resolution pictures of just about anyone. Simple face matching for authentication was obsolete before it reached the open market.
Secure at First Glance
Several years ago, I was in a conference room at work as people slowly sauntered in for an upcoming meeting. I sat down next to a senior manager who was playing with a new digital tablet. He gleefully told me it was an unreleased prototype from one of the big tech vendors in Asia and exclaimed that it sported facial recognition login for “total security”. He was keen to show off this latest technology, so he locked the device and held it up to his face so the camera could see him. After a few moments, it automatically unlocked. He was brimming with pride and turned toward me to showcase this revolutionary security capability that he was sure would change the industry.
Just at that moment, I took a quick picture of his face with my phone. I asked him to lock his new tablet and hand it over to me. I pulled up his picture on my phone and placed it in front of the camera. I had to move it a bit closer before it unlocked.
His face went pale with astonishment. It was as if I had robbed him of all joy. I handed it back to him and said, “perhaps not totally secure”.
This was a case of a consumer believing marketing claims and witnessing only the supporting evidence, with nothing to the contrary to round out his opinion. To him, the technology worked as stated, so he assumed the claim of security was true as well. He abruptly learned that technical function alone does not equate to protection, as attackers also use the technology to their advantage.
Facial recognition has had a difficult time fending off researchers and attackers who look to undermine its security. History paints a grim picture. As each new generation of technology was introduced, opponents quickly adapted and found simple ways to undermine trust and confidence once again.
Unlike other types of biometrics, face-based authentication has several unique challenges. With source material readily available, whether pictures in online repositories or the ability to simply photograph someone’s face, the base biometric is effectively known to the attacker. The security therefore tends to rest in the processing and analysis of the data. The problem is that once a hack is discovered, it can likely be reproduced to work on any system built the same way. Break it once and it fails everywhere.
First-generation facial recognition focused on static matching of an image. It was soon defeated by headshot images of the subject presented to the camera. Printouts or digital images on a smartphone were sufficient to fool the algorithms.
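The core of such a static check can be sketched in a few lines. Everything here is illustrative rather than any vendor’s actual pipeline: assume some model has already reduced the enrolled face and the camera frame to embedding vectors, and the system simply compares them against a similarity threshold.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(enrolled: np.ndarray, probe: np.ndarray,
            threshold: float = 0.8) -> bool:
    """Accept if the probe embedding is close enough to the enrolled one.

    A purely static check like this cannot tell a live face from a
    printed photo of the same face: a good photo yields a
    near-identical embedding and passes just as easily.
    """
    return cosine_similarity(enrolled, probe) >= threshold
```

Because a photo of the owner produces an embedding nearly identical to the live face, this check accepts the photo just as readily, which is exactly the first-generation weakness described above.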
The second generation featured ‘liveness’ detection. Essentially, it looked for the minor movements and eye blinks that static pictures lack. The very first hacks were crude and amusing: researchers printed pictures on photo paper, cut out the eyes, and wore them like masks, showing their own eyes through the cutouts. More sophisticated coders created overlays for digital images that painted in slight movements and eye blinks, much like the video chat filters we see today that add makeup, dog noses, and other novelties in real time. The easiest hack was simply to record a few seconds of video of the victim’s face and replay it to the sensor.
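One widely published way to implement the blink side of liveness detection is the eye aspect ratio (EAR) computed over eye landmark points. The sketch below assumes six landmarks per eye and a typical closed-eye threshold of about 0.2; it is a simplified illustration, not any specific product’s algorithm.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) over six landmark points p1..p6.

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|). The ratio
    collapses toward zero when the eyelid closes, so a dip followed
    by recovery in a video stream is read as a blink.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])  # upper/lower lid, inner pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # upper/lower lid, outer pair
    h = np.linalg.norm(eye[0] - eye[3])   # eye corner to eye corner
    return float((v1 + v2) / (2.0 * h))

def saw_blink(ear_series, closed_threshold: float = 0.2) -> bool:
    """Liveness heuristic: did EAR ever dip below the closed threshold?

    A static printout never dips -- but a replayed video of the victim
    blinking passes this check, exactly the weakness described above.
    """
    return any(ear < closed_threshold for ear in ear_series)
```

A static photo produces a flat EAR series and fails the check, while a few seconds of replayed video of the victim blinking sails through, which is why liveness detection alone did not end the arms race.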
The third generation attempted to build a pseudo-3-dimensional model from a single camera, using algorithms to stitch a series of 2-dimensional pictures into a 3-dimensional model that could be measured. As it turns out, directly reverse-engineering this process resulted in successful compromises: again, taking a short video of the person’s face and replaying it to the camera gave the system what it needed. More artistic adversaries created actual 3D masks and models, which also proved successful, although time-consuming and not very practical.
We are now entering the fourth generation, where dual cameras provide true stereo vision. This is a great leap forward, but the physics of looking at light bouncing off images through a lens still apply. It is possible to undermine such dual-camera setups, but it is significantly more difficult, as both cameras must be fooled in coordination. The base technology is already heavily explored. Consider simple 3D goggles, where the binocular vision of humans is fooled into perceiving depth while looking at a 2-dimensional screen. The same fundamentals can be applied to defeat a stereoscopic setup. Even research in photolithography is applicable.
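The principle behind a stereo liveness check can be sketched from the standard pinhole relation Z = f·B/d (depth from focal length, camera baseline, and per-pixel disparity): a real face spans several centimetres of depth from nose to ears, while a photo or screen held up to the cameras is nearly planar. The function names and tolerance below are illustrative assumptions, not any shipping implementation.

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Depth map (metres) from a stereo disparity map: Z = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

def looks_flat(depth: np.ndarray, tolerance_m: float = 0.01) -> bool:
    """Crude anti-spoof test: reject faces with almost no depth range.

    A flat photo facing the cameras yields near-uniform depth. Note
    that a tilted photo produces a smooth depth ramp instead, which a
    real check would also need to catch; this sketch only measures
    total depth range.
    """
    return float(depth.max() - depth.min()) < tolerance_m
```

This is why both cameras must be fooled in coordination: a spoof has to present not just the right face to each lens, but a consistent, face-shaped disparity between them.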
The question will be whether it is too difficult and problematic for attackers to leverage. Then again, if someone figures out a way once, it likely could be packaged into a tool for widespread use. One thing is certain: the battle continues, as advances in technology try to stay one step ahead of potentially exploitable vulnerabilities.