Will Artificial Intelligence Steal Your Fingerprints?
Some people worry that artificial intelligence will steal their jobs; meanwhile, machine learning algorithms can now generate images of fake fingerprints that match the prints of one in five people on the planet. Other biometric identification systems, such as face and iris recognition, may also be vulnerable. The capability puts the mobile device industry on notice that current biometric authentication systems may not be adequate for securing cell phones and other devices.
A recent study from New York University (NYU) Tandon School of Engineering reveals a surprising level of vulnerability in mobile devices. Using a neural network trained to synthesize human fingerprints, a research team created a fake fingerprint that could potentially fool a touch-based authentication system for much of the population, according to an NYU announcement.
The research team was headed by Julian Togelius, NYU Tandon associate professor of computer science and engineering, and doctoral student Philip Bontrager, the lead author of the research paper. Bontrager presented the paper at the IEEE International Conference on Biometrics: Theory, Applications and Systems, where it won the Best Paper Award.
The fake fingerprints, known as DeepMasterPrints, work in part because the biometric sensors on mobile devices are kept small so they don’t take up too much space, which means they can identify only a portion of a print rather than the whole print. “A small section of your fingerprint isn’t as unique as your entire fingerprint. A lot of the assumptions about fingerprint security are based off of research that has been done on the entire fingerprint,” Bontrager explains.
Rather than matching the entire print, these smaller sensors rely on minutiae points. “When you scan your fingerprint, there are ridges. Sometimes these ridges intersect with other ridges, or they end. The minutia points are finding all of these unique points and where they’re located,” Bontrager elaborates. “The master print is a partial fingerprint that is similar to at least part of a lot of fingerprints. Basically, it’s similar enough that a single master print can match many different people, like the idea of a master key.”
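To see why a partial print is less distinctive, consider a deliberately simplified sketch. Everything here is invented for illustration (the point representation, the distance tolerance, the match threshold, and the names); real matchers also compare ridge angles and minutia types. The idea it shows is the one Bontrager describes: a small set of crafted points can clear a partial-match threshold against more than one person’s template.

```python
import math

# Toy model, not a real matcher: each fingerprint template is a set of
# minutiae, reduced here to bare (x, y) coordinates. Thresholds are
# illustrative assumptions, not values from the NYU study.

def matches(partial, template, tol=2.0, required=3):
    """A partial print 'matches' if at least `required` of its minutiae
    fall within distance `tol` of some minutia in the stored template."""
    hits = 0
    for px, py in partial:
        if any(math.hypot(px - tx, py - ty) <= tol for tx, ty in template):
            hits += 1
    return hits >= required

# Three different "people" whose full templates happen to share a few
# similar minutiae in one corner -- the region a small sensor would see.
alice = {(1, 1), (2, 3), (4, 2), (9, 9), (12, 5)}
bob   = {(1, 2), (2, 3), (5, 2), (8, 1), (11, 7)}
carol = {(7, 7), (8, 2), (3, 9), (10, 10), (13, 4)}

# A crafted partial print: close enough to part of Alice AND part of Bob.
master = [(1, 1.5), (2, 3), (4.5, 2)]

print([matches(master, t) for t in (alice, bob, carol)])
# → [True, True, False]
```

The full templates are all distinct, yet one three-point fragment satisfies two of them, which is the master-key effect in miniature.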
The study builds on earlier research conducted by Nasir Memon, professor of computer science and engineering and associate dean for online learning at NYU Tandon, and Arun Ross, Michigan State University professor of computer science and engineering. They proved the theoretical possibility of master prints, a term they coined. Bontrager’s team, which included Memon and Ross, took the research a step further and created actual images capable of fooling commercial fingerprint identification systems, such as Neurotechnology’s VeriFinger, which has won numerous awards and is widely used.
To generate the fake but convincing images, the researchers used a method known as latent variable evolution, built on a generative adversarial network, which essentially pits two neural networks against each other. In this case, the first network cranked out fake fingerprint images while the second worked to distinguish those fakes from images of authentic fingerprints. The two networks learned from one another. Then they evolved and improved until they created a faux fingerprint that worked.
“The second network gets better at knowing the difference between the fake fingerprints and the real ones, and the first network learns how to create fake prints that the second network isn’t aware of. They compete back and forth until the first one can create convincing images,” Bontrager says.
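The back-and-forth Bontrager describes can be sketched in one dimension. This is a toy stand-in, not the paper’s architecture: “real prints” are just numbers near 0.7, the generator is a single parameter, and the discriminator is a logistic classifier. All values and learning rates are invented.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Toy adversarial loop: generator parameter theta produces "fakes",
# discriminator D(x) = sigmoid(w*x + b) tries to tell them from "real"
# samples drawn near 0.7. Simple gradient steps, alternating players.
theta, w, b = 0.0, 0.1, 0.0
lr = 0.05

for step in range(2000):
    real = 0.7 + random.gauss(0, 0.05)
    fake = theta
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)
    # Generator step: push D(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(w * theta + b)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # should have drifted toward the "real" value 0.7
```

The two updates are exactly the competition in the quote: the discriminator sharpens its boundary, which hands the generator a gradient telling it how to look more real.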
He describes the algorithms as evolutionary because they evolve as they learn. “You start out with a bunch of random samples and then they compete, and it’s survival of the fittest over many generations until you get to the optimal one.”
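The evolutionary half, latent variable evolution, can be sketched as follows. The generator and matcher below are invented stand-ins: in the actual work the evolved latent vectors were fed into a trained GAN generator and scored against commercial matchers, whereas here both are toy functions, with fitness defined as how many stored templates a candidate “print” matches.

```python
import random

random.seed(1)

def generator(latent):
    # Stand-in for a trained GAN generator: maps a latent vector to a "print".
    return [round(v, 1) for v in latent]

# Fifty toy stored templates standing in for an enrolled population.
TEMPLATES = [[random.uniform(0, 1) for _ in range(4)] for _ in range(50)]

def fitness(latent):
    # Score = how many templates the generated print "matches"
    # (toy criterion: every coordinate within 0.25).
    candidate = generator(latent)
    return sum(
        all(abs(p - t) < 0.25 for p, t in zip(candidate, tpl))
        for tpl in TEMPLATES
    )

# Survival of the fittest over many generations, as described above:
# keep the best latent vectors, refill the population with mutated copies.
population = [[random.uniform(0, 1) for _ in range(4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                 # the fittest survive
    population = parents + [
        [v + random.gauss(0, 0.05) for v in random.choice(parents)]
        for _ in range(15)                   # mutated offspring
    ]

best = max(population, key=fitness)
print(fitness(best), "of", len(TEMPLATES), "templates matched")
```

No single latent vector is optimized directly; the population simply drifts toward latent inputs whose outputs overlap with as many enrolled templates as possible, which is the master-print objective.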
While the images were good enough to trick fingerprint matching systems, the team has yet to conduct an actual physical test. “There has been a lot of research showing that it’s possible and people can do it. In practice that’s not so easy to do,” Bontrager suggests. “There will need to be a number of experiments to see how successful it can be in the physical space. That is one of the next steps.”
Actually conducting an attack with fake fingerprints would call for high-level skills. It wouldn’t necessarily require the resources of a nation-state or a well-organized criminal group, but so-called script kiddies working in a basement in the suburbs probably couldn’t pull it off. “At the moment, you would need a somewhat advanced attacker. They would have to be able to go from the image that we have, or their own, and then they would have to create a physical copy,” Bontrager notes. “There are different ways to do this. Some of them involve 3D printing and inverse molds and using some sort of silicone or Play-Doh material that can fool your sensor.”
Also, to achieve an even greater success rate, an attacker would probably need to tailor the technique to a specific manufacturer’s system, which is more challenging. “To really make it have a high likelihood of working for a specific phone, they would want to target the algorithm in that phone. If you can get a system that’s very similar to the type of system you’re attacking and then design for that, you can get a much better success rate, but that requires a lot of understanding,” the lead researcher offers.
The researchers theorize that other types of biometrics, such as face and iris recognition systems, also may be vulnerable, but only if a fraction of an iris or portion of the face is enough for authentication and if those features are less unique than a scan of the entire iris or face.
“For fingerprints, we’re specifically exploiting the fact that we’re only looking at a small section of the fingerprint. The algorithm could be applied the same way in these other modalities, but we need to study how it’s normally done in order to see if there are specific shortcuts or weaknesses that could be targeted,” Bontrager states.
The team considers the research to be a red flag for technology companies. “It’s a wake-up call to this synthetic kind of attack. There are potentially other ways to come up with clever uses of synthetic biometrics,” Bontrager declares.
He suggests that companies consider the possibility of synthetic attacks when designing systems. “There has been a lot of careful thought put into how biometrics occur in the wild, but some thought also needs to be put into what synthetic attacks could mean. The sensors need to be either large enough or have high enough resolution that they are secure even for smaller, partial fingerprints.”
He sees cost and size as the only real barriers to hardening devices against a fake fingerprint attack, but the problem may resolve itself as technology becomes smaller and less expensive. “It would be more expensive from the hardware side. Adding the defenses from the software side wouldn’t be quite as robust, but it wouldn’t add much cost,” Bontrager says.
The technology already is changing, he points out. “We’ll see some of these optical sensors being put behind the screen, where they would have the potential for being larger, so they could catch a larger part of your fingerprint. And even if you’re capturing just a small part of the fingerprint, if it’s at a higher resolution, it will be more unique because you’re capturing more information.”
It is also possible that companies will teach devices to discern actual fingerprints from synthetic versions. The neural network technology Bontrager’s team used learned a probabilistic model of what fingerprints look like and assembled features accordingly. When it detects a ridge, for example, it can surmise where other ridges will be and how they will likely be shaped. But it also makes some mistakes and combines features in the fake fingerprint that would not occur in real life.
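A defense along these lines might look like the following hypothetical plausibility check. Every name and threshold here is invented for illustration; a real countermeasure would be learned from data rather than hand-coded. The idea is simply to flag the kind of unnatural trait a generated image can exhibit, such as ridge features packed implausibly close together.

```python
import math

# Hypothetical check (all thresholds invented): reject a candidate print
# whose minutiae, given as (x, y) points, are too sparse or too clustered
# to have come from a real finger on a real sensor.

def plausible(minutiae, min_count=8, min_spacing=1.0):
    if len(minutiae) < min_count:
        return False            # too few features for a genuine capture
    for i, (ax, ay) in enumerate(minutiae):
        for bx, by in minutiae[i + 1:]:
            if math.hypot(ax - bx, ay - by) < min_spacing:
                return False    # two ridge features impossibly close
    return True

real_like = [(x * 3.0, y * 3.0) for x in range(3) for y in range(3)]
fake_like = [(0.1 * i, 0.1 * i) for i in range(9)]  # unnaturally clustered

print(plausible(real_like), plausible(fake_like))
# → True False
```

As the next quote notes, any fixed rule like this invites counter-adaptation, so such checks are one move in a back-and-forth game rather than a permanent fix.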
“Once you’re aware of these, you can try to program your software to identify an attack, like maybe teaching it to look for unnatural traits in the fingerprint. Once people start looking at the images that are produced, they can design counter systems, but it’s always a back-and-forth game,” the doctoral candidate says.
The most important goal is to ensure the technology industry is aware of the threat and can protect against it, indicates Togelius, Bontrager’s advisor and fellow researcher. “If we didn’t do it, someone else would do it, but they wouldn’t publish it. They would just use it.”