Hacking iris-recognition systems
LAS VEGAS—Researchers have for the first time demonstrated how a synthetic iris image could trick an iris-recognition system into accepting it as the iris of a real person enrolled in its database. The finding raises fears that iris recognition, acknowledged as one of the most accurate biometric security measures available, may not be as accurate as previously thought.
While hacking a biometric system makes for great headlines, should this research make security professionals worried about the use of iris recognition as an identity authentication or access control solution? Security Director News spoke with one subject-matter expert to find out.
Javier Galbally, a researcher at the Universidad Autonoma de Madrid's Biometric Recognition Group-ATVS, along with researchers at West Virginia University, presented the research on using synthetic iris images to fool a commercially available iris-recognition system at the Black Hat security conference, which took place July 21-26 in Las Vegas.
Creating synthetic iris images from scratch that an iris-recognition system would identify as real irises is nothing new. But Galbally and his collaborators demonstrated for the first time how a synthetic image could be reconstructed from a real person's digital iris code (the binary representation of the iris image, which is stored in the database in place of the actual image for security and privacy reasons) and then used to fool an iris-recognition system.
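For readers unfamiliar with the underlying mechanics: an iris code is simply a long binary template, and Daugman-style systems decide a match by checking whether the fraction of disagreeing bits (the Hamming distance) falls below a threshold. The sketch below illustrates that idea only; the code length, noise model and threshold are illustrative assumptions, not details from the research.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fractional Hamming distance: the share of bits that disagree."""
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

rng = np.random.default_rng(0)

# Enrollment: the system stores only this binary code, never the iris photo.
enrolled_code = rng.integers(0, 2, size=2048, dtype=np.uint8)

# A later capture of the same eye differs in some bits (sensor noise,
# eyelid occlusion), so an exact bit-for-bit match is never expected.
probe = enrolled_code.copy()
noisy_bits = rng.choice(probe.size, size=100, replace=False)
probe[noisy_bits] ^= 1

# The matcher accepts when the distance falls below a tolerance threshold.
THRESHOLD = 0.32
is_match = hamming_distance(enrolled_code, probe) < THRESHOLD
print(is_match)  # True: the noisy probe is still accepted
```

That tolerance is what an attacker exploits: a reconstructed image does not have to be perfect, only close enough to land under the threshold.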
"For a long time, it has been assumed that [a binary iris code] did not contain enough information to allow the reconstruction of the original iris," according to the presentation description on the conference's website. "The experimental results show that the reconstructed images are very realistic and that, even though a human expert would not be easily deceived by them, there is a high chance that they can break into an iris recognition system."
Galbally and his collaborators used genetic algorithms to reconstruct synthetic iris images from the binary iris codes, according to Wired. Once the images were generated, they tested them against Lithuania-based Neurotechnology's VeriEye iris-recognition system, which the National Institute of Standards and Technology recently named one of the top iris-recognition systems on the market. The system matched the synthetic images to the binary iris codes in its database more than 80 percent of the time, Wired reported. “The idea is to generate the iris image, and once you have the image you can actually print it and show it to the recognition system, and it will say ‘okay, this is the [right] guy,’" said Galbally, according to Wired.
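The researchers' actual pipeline is more involved (it produces printable iris images, not just bit strings), but the core genetic-algorithm idea can be sketched in a few lines: evolve a population of candidate codes, using only the matcher's similarity score as a fitness function, until a candidate scores above the acceptance threshold. Everything below (code length, population size, thresholds) is an illustrative assumption, not the researchers' code.

```python
import random

random.seed(1)
CODE_LEN = 256
TARGET = [random.randint(0, 1) for _ in range(CODE_LEN)]  # the enrolled code

def matcher_score(candidate):
    """Stand-in for the black-box matcher: similarity to the enrolled code."""
    return sum(a == b for a, b in zip(candidate, TARGET)) / CODE_LEN

def crossover(a, b):
    cut = random.randrange(CODE_LEN)
    return a[:cut] + b[cut:]

def mutate(code):
    flipped = code[:]
    flipped[random.randrange(CODE_LEN)] ^= 1  # flip one random bit
    return flipped

# Evolve: keep the 10 fittest candidates each round, breed and mutate them.
population = [[random.randint(0, 1) for _ in range(CODE_LEN)] for _ in range(50)]
for generation in range(300):
    population.sort(key=matcher_score, reverse=True)
    if matcher_score(population[0]) >= 0.95:  # close enough to be accepted
        break
    elite = population[:10]
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(40)
    ]

best = max(population, key=matcher_score)
```

Because the elite candidates are always carried forward, the best score never decreases, and the search needs nothing from the system except its match score, which is what makes this style of attack plausible against a black-box matcher.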
However, there's "nothing surprising" about creating a synthetic tool to fool biometric detection technology, according to Salvatore D'Agostino, CEO of IDmachines, a company that provides design, integration and consulting services related to identity credentialing and access management. "There's no liveness detection with most biometrics," D'Agostino told Security Director News. "[Galbally] was able to demonstrate something that was kind of black hattish, but to me as a security practitioner, does this give me qualms about the use of iris? None whatsoever."
People have generated synthetic tools to trick fingerprint readers, palm readers and even facial recognition, D'Agostino said. They're all still used. "You could make a dummy of me ... put it into a facial recognition system and it probably would say, 'Hi Sal, how are you doing?'"
Galbally may have exposed a vulnerability, but security best practices would prevent anyone from exploiting it in a real-world scenario, D'Agostino said. Consider the caveats: To reconstruct a synthetic image of a real iris, an attacker would first need to hack into the iris-recognition system's secure database to access the binary iris codes, which are generated by complex algorithms and stored encrypted. Even after accessing the database, breaking the encryption and reconstructing an image from the binary iris code, the attacker would still need to hold that image up to an iris-recognition system, which would not work in any scenario where other security layers exist, D'Agostino said.
What this research does well, D'Agostino said, is demonstrate why biometrics should never be the only layer in an access control system. "This goes right to the heart of the fact that standalone biometrics are vulnerable to this sort of thing," D'Agostino said. "So people who think biometric-only access control is some kind of killer app, this is the kind of cold water they don’t like thrown on their fire."
While the vulnerabilities exposed in this research don't worry or surprise D'Agostino, the whole process is still useful and exemplifies how security is supposed to work, he said. Researchers and hackers are supposed to probe for a way to trick or defeat a security system. "And if you can break it, you tell the world and you get the scalp because you've done a good thing and exposed a vulnerability," he said.
So, while Galbally may have a scalp, his research shouldn't create fear that iris-recognition systems are suddenly more vulnerable than everyone believed.