Episode 60 | Biometrics: Privacy, Problems and Possibilities

Melissa Michael

02.11.21 34 min. read

Biometrics have gotten a lot of attention in recent years. Biometric authentication systems have the potential to take the place of passwords, streamlining the user login experience. But there is a lot to consider before putting these systems to use. When should they be used, and how? What are the risks, and when should biometrics be approached with skepticism? Vic Harkness, who gave a presentation on biometrics at DEFCON 29 Rogues Village, and red teamer Tom Van de Wiele joined episode 60 of Cyber Security Sauna to talk about the advantages and disadvantages of biometric authentication systems, some of the wackiest ways our bodies can be measured, and why layered security still works best.

Listen, or read on for the transcript. And don’t forget to subscribe, rate and review!


Janne: Welcome to the show. 

Tom: Thanks for having us.

Vic: Yeah. Thank you.

So, in general, what are the advantages of using biometrics for identification or authentication? I guess you can’t forget your fingerprint at home.

Vic: Yeah. It is an advantage that you can’t forget your fingerprint or your face. It’s something that you’re carrying with you easily. It’s not something that you have to write down, such as writing down your password on a bit of paper and carrying that with you.

That also means that there’s less opportunity for theft potentially, because no one can steal a picture of your face. Well, or can they? That is one of the potential drawbacks of it. If a system could successfully authenticate people using a photo of their face, that’s bad.

But thankfully, a lot of the systems are built to not work like that. Something like that wouldn’t be sufficient.

Vic Harkness, Tom Van de Wiele

So when we say biometrics, what exactly are we talking about? Fingerprints obviously; your face, you mentioned. We’re basically identifying based on something that we are that gets measured somehow.

Vic: Yeah. Biometrics refers to basically quantifying the human body. So any way we can numerically describe the human body, and ideally, describe it uniquely.

And there’s all kinds of different ways we can do this. Most commonly we’ve seen fingerprint recognition, where the computer is looking at the positions of features called minutiae in your fingerprint and encoding these as numbers, which can then be used for matching. It’s similar with facial recognition systems: they’re often looking at various locations around the face and the numerical values associated with those.
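To make the minutiae idea concrete, here is a minimal, hypothetical sketch in Python. The coordinates, tolerances and scoring rule are invented for illustration only; real matchers also handle rotation, translation and skin distortion, which this sketch does not.

```python
import math

# Hypothetical minutiae templates: each point is (x, y, angle_degrees).
enrolled = [(12, 40, 30), (55, 71, 110), (90, 22, 250)]
probe    = [(13, 41, 28), (54, 70, 112), (40, 40, 90)]

def close(a, b, dist_tol=5.0, angle_tol=15.0):
    """True if two minutiae are within distance and angle tolerances."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    da = abs(a[2] - b[2]) % 360
    return d <= dist_tol and min(da, 360 - da) <= angle_tol

# Score = fraction of enrolled minutiae that have a matching probe minutia.
score = sum(any(close(e, p) for p in probe) for e in enrolled) / len(enrolled)
print(f"match score: {score:.2f}")  # accept only if the score clears a chosen threshold
```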

But just as my password is hopefully not stored in the system – only a hash of it is – there’s no picture of my fingerprint on the phone. There’s some sort of a numerical representation of it.

Vic: Right. It depends on the system. For things like your passport, nowadays, those use a facial photo and also fingerprints. Those actually do have a complete picture of your fingerprint, and at borders, quite often they will collect a complete fingerprint.

But for things like unlocking your phone, opening a certain application, they do just contain a numerical encoding of your fingerprint rather than the fingerprint itself, much like how a password should ideally be salted and hashed rather than just being a plain text value that someone could then reuse if they’re able to compromise the storage file.
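As an aside, the salt-and-hash analogy looks roughly like this in Python. The iteration count and example phrases are purely illustrative, not a recommendation tied to any specific product:

```python
import hashlib, hmac, os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these are stored, never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```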

Tom: The problem is that not all technology is created equally, and some security vendors will have to make systems that have to apply or have to satisfy different security criteria, versus the consumer tech that we have in our pockets. So when we talk about fingerprint readers, when we talk about facial recognition, we have to make a very big distinction between consumer tech versus tech that’s being used in other places and other applications.

Yeah, absolutely. And there are other compensating controls as well. I remember a conversation about a specific type of fingerprint reader that was said to be insecure because you can attach clamps to it, run it a certain way and then tamper with the data – and the Marine Corps uses this reader. And a Marine Corps colonel mentioned that, “Yeah, but we also have an armed Marine standing next to the fingerprint reader, so try to attach your clamps to it.”

Tom: And that’s a very good point, in that obviously there’s a reason why companies and governments are using biometrics, but the problem is that like any other security control, if you single out a security control, it will fail at some point.

So you need to have a combination of different security controls, a pin or a password, or like you mentioned an armed guard, or the fact that it’s a certain geographic setting that only a few people can have access to. So you need to have a combination of different security controls, of which biometrics could be one, to be able to protect whatever it is that you’re trying to protect.

Okay. Now, one of the most common arguments against biometric authentication is that you can’t change your fingerprint. So what do you do if your fingerprints get compromised in a breach? But Vic, I’ve heard you say that that’s actually not much of an issue. Why not?

Vic: Yeah, that’s correct. If it is a system where it’s only encoding certain numerical values that describe your fingerprint, then if an attacker were to steal those, they wouldn’t be able to do too much with them; they would need to steal an entire image to actually be able to use it.

Okay. What about if an attacker actually gets access to your face or fingerprint directly, like a kid that uses his sleeping dad’s finger to buy in-app money for a game, or something like that?

Vic: Quite often with biometric systems, there’ll be some sort of secondary modality, such as some sort of liveness detection in the system. In the case of a child who’s using his dad’s fingerprint to buy more in-game currency for a computer game or something, there tend not to be too stringent secondary modalities in there. So they probably could just get their dad’s fingerprint, put it on the phone and go buy stuff.

But if we’re looking at potentially more serious applications, such as taking out a bank loan or performing some other sensitive financial interaction, the banks have moved away from just having, say, a photo of a person. Because could you just grab a photo of them off of social media? Could you take a photo of the person? Could you just flash the camera at them when it’s doing the check so that it thinks they’re there? In these sorts of scenarios, you need the person to actually be giving consent, rather than just anyone being able to pop up and point a phone at them.

Banks have had a lot of issues with fraud taking place between romantic partners, where one person will take out a loan in the other person’s name that obviously, they don’t want them to know about, because they’ve got ready access to things like their date of birth, their address, all that sort of personal information. But they might also have access to the face of their partner. So say their partner is asleep in bed. They could then use their sleeping face to authenticate to a system. So what banks have moved more towards is requiring a video from the person who’s taking out a loan of them saying some set phrase, not the my-voice-is-my-password thing, but it could be, “My name is (full name), and this is-”

And I understand that I’m taking a loan.

Vic: Yeah. As well as some semi-random bit of extra data, like saying the number 12 or doing a certain gesture, because then – The system’s not coercion-proof, but the person who is having the loan taken out in their name is definitely aware of what’s happening. So that helps to reduce the risk from fraud.

Well, that’s the thing, I like the word “coercion” you mentioned, because that’s one of the arguments as well, is that, what if somebody puts a gun to your head and takes your photo by force? And I’m like, if somebody puts a gun to my head, I’m going to give them my password as well. So I don’t see that being an issue.

Vic: Yeah. With that level of coercion, you’re probably going to do what they say regardless.

One of the concerns people have had is with law enforcement. So I believe that the law in the U.S. currently is that you can’t be compelled to hand over your password for your phone, but you can be compelled to put your fingerprint on the reader, or the policeman could just force you to look at your camera for a second so that it unlocks automatically. And then they could look for incriminating evidence.

I believe that on iPhones they’ve actually introduced a feature which temporarily stops the iPhone from being unlocked in that way. Tom was telling me about this earlier.

Tom: Yeah. So most modern iPhones, and I suppose iPads as well, that have either the older Touch ID or Face ID, if you bring up the menu for emergency services and then cancel it again, it will temporarily disable things like Face ID, if indeed you are somewhere where there might be abuse of your phone by whoever it is, or at a protest, or where someone can compel you to unlock your phone.

And it’s good that at least the technology is trying to catch up to those scenarios, knowing that not every place is a safe one for your face to be unlocking things, whether you want it to or not. So let’s hope companies stay agile here and give us enough tools to choose when we want to use this technology.

Well, that’s very interesting. Can you guys think of some of the other risks or disadvantages of biometrics?

Tom: I think, at least for me, one of the biggest concerns I have is – and it’s kind of a semantics discussion, as we already touched upon – that not all biometric systems are created equally. Sometimes they are part of a larger database. And if that database leaks, now, of course, you have the name and certain biometric data of that person, which can be of use to someone.

Again, this falls under the category of identity theft, or fraud, or trying to harass someone the moment they want to, I don’t know, have a position somewhere at a company or a government, or go into politics.

So it’s not just the actual concerns of when to use it and what we’re talking about. It’s also the aftermarket consequences: what do you do when you have a breach? Because I’m not sure how many people – or companies, rather – that have implemented biometric technology are running actual crisis simulations of having a breach and what to do with it.

Because as we mentioned, you can’t change your face and you can’t change your fingerprints. So you need to make sure that it is extremely hard and costly for an attacker to be able to use the data in any way, either directly or for aftermarket sales.

Could you do something like change the way you’re capturing that biometric data and by that, render the old hashes inoperable?

Tom: I think that would mean that you would have to enroll yourself in a way that you can change it.

Or just force everybody to re-enroll.

Tom: I suppose, yes. But again, we’re talking about biometrics, and it depends on the system. Is it really just logging certain coordinates or certain metadata, so to speak, with which it can determine that, yes, this is indeed your fingerprint or your face? And at the same time, of course, all the algorithms and all the thresholds are being updated, because people grow and people grow older, or their faces change, or people get sick or lose weight or gain weight.

So what you want to do as a hacker trying to mess with these systems, is you’re trying to find the lowest threshold for acceptance for these systems and try to see if you can mess with those statistics or those numbers. So it really is deep down a numbers game.

But again, that is, of course, if indeed the system is only logging those things. And that is really where we need more transparency, because the solution cannot just be “we’ll throw biometrics at it” – you might be creating a far larger problem than the one you think you’re solving.

Okay.

Vic: I think that people quite often place too much reliance on biometric systems. They’ve seen all the movies with the eyeball scanners and they think biometrics is quite secure. It’s become a bit of a buzzword, really. People just think it means it’s very secure.

If you look on Amazon for instance, if you search for biometric locks, you’ll find just endless pages of fairly cheap biometric locks, which supposedly unlock with your fingerprint. And you look at the reviews of them and you’ll see people saying, “Bought this, found out that it unlocks with any fingerprint, it’s rubbish. Can’t use it.”

But if someone’s not done that test themself or hasn’t looked at the reviews, they might think, “Oh, biometric lock. It’s going to be super secure, it’s going to be more secure than a key,” because people are aware that you can pick locks, it’s fairly common knowledge.

So you could potentially get people improperly using cheap biometric systems off of Amazon thinking that they’re securing something, and they’re not really able to make the informed decisions about the security of the system they’re using to secure whatever it is they’re securing.

That’s a good point. I’m wondering, aside from the most common fingerprint and face recognition, I know there’s many other aspects of our bodies that can be used to identify us and then authenticate us, both physical and behavioral. What are some of these?

Vic: Yeah, I mean, there’s so many different ways. Because basically any way you can quantify a human could be used for biometrics. In terms of physical biometrics, there’s so many different things, some of which will get used in combination with others.

Like say palm biometrics, where they’re looking at the palm of your hand. You’ve got a palm print that’s unique to you, similar to a fingerprint, but you’ve also got a lot of veins in your palm, which are supposedly unique in how they’re positioned. So you can have a system which scans your palm print, looks at the veins, and combines those two to create an extra layer of security.
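A toy illustration of what combining two such modalities (score-level fusion) might look like – the weights and threshold are made up for the example, not taken from any real palm-vein product:

```python
# Hypothetical score-level fusion of two modalities.
def fused_decision(palm_score: float, vein_score: float,
                   w_palm: float = 0.6, w_vein: float = 0.4,
                   threshold: float = 0.75) -> bool:
    """Each score is a similarity in [0, 1]; accept only if the weighted sum clears the threshold."""
    return w_palm * palm_score + w_vein * vein_score >= threshold

print(fused_decision(0.9, 0.8))   # True: both modalities agree it's the same person
print(fused_decision(0.95, 0.3))  # False: a strong palm match alone is not enough
```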

As camera systems get ever more powerful and higher resolution, some biometric systems, instead of just doing facial recognition as we traditionally think about it, are also looking at your pores. Just by mapping out a person’s pores, that’s an extra layer of security on top of just a facial image.

In terms of behavioral biometrics, there’s so much weird stuff, like phone banking systems use voice biometrics some of the time. They might not even tell you they’re doing it, but it’s automatically taking place to provide an extra layer of security.

You get things like keystroke biometrics, where some systems, when you enter your password, are also looking at the cadence of how you’re pressing the keys on your keyboard and using that as an extra layer of authentication on top of the password. Because in theory, if someone had just stolen your password, they wouldn’t type it quite like how you do. I imagine that one’s falling off in usage a bit, since everyone’s been switching to password managers and we’re all just copy-pasting now.
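Here is a hypothetical sketch of the keystroke-cadence idea. The timings, tolerance and all-or-nothing rule are invented; real systems build statistical models over many sessions rather than this simple per-gap check:

```python
# Hypothetical keystroke-dynamics check: compare inter-key timings (milliseconds)
# of a login attempt against an enrolled profile for the same password.
enrolled_gaps = [120, 95, 200, 110, 130]   # averaged over the owner's past logins
attempt_gaps  = [420, 310, 500, 290, 400]  # someone typing the stolen password

def cadence_matches(profile, attempt, tolerance_ms=60):
    """Accept only if every inter-key gap is within tolerance of the profile."""
    return all(abs(p - a) <= tolerance_ms for p, a in zip(profile, attempt))

print(cadence_matches(enrolled_gaps, enrolled_gaps))  # True: same cadence
print(cadence_matches(enrolled_gaps, attempt_gaps))   # False: right password, wrong rhythm
```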

I don’t think everyone has.

Vic: Yeah, well, they should probably, yeah, it definitely makes life a lot easier.

You mentioned an interesting point there. You mentioned that some of these systems can be used without the user being aware of it. I’ve heard some people talk about just-in-time authentication, where you have varying levels of authentication based on what the user is doing. Like, let’s have a little extra check, and maybe that’s something the user doesn’t have to interact with – you just monitor the cadence of their keystrokes for a minute or whatever. What do you think of that?

Vic: I think it depends on the security of what you’re actually trying to gain access to. So for banks, fraud is a big issue. People calling up and pretending to be someone else and setting up a transfer, or changing a password, that sort of thing. It’s a big risk and it costs them a lot of money when fraud occurs. So they want to do anything they can to reduce the risk of fraud.

So extra little touches – like voice biometrics where the user doesn’t actually have to do anything different, it just sits there working automatically – are quite appealing, because even if it only stops a handful of fraud cases, that’s still a win for the banks.

So do you guys have favorites of these, other than fingerprint and face ID methods that you think could be an effective authentication method in the future?

Tom: I think in the future we’re going to finally see the promise that was wearables a few years ago. To me that seems like the path of least resistance. Most of us already have a Fitbit or an Apple Watch or an Android watch hanging from our wrist. Hopefully you have something hanging from your wrist, not from your ankle.

So I think that will probably be the path of least resistance, to introduce something there when it comes to, like Vic mentioned, some kind of body characteristic – a heartbeat, or any other thing that can be correlated to say, “Okay, as an extra factor, this is most definitely the official wearer of this device.” So then you can authenticate to whatever service it is.

Vic: But then you do run into all the privacy issues associated with that data. So if someone could steal your encoded fingerprint, that’s not that exciting, but if they could steal data surrounding, say, your GPS location at various times, what your emotional state seemed to be based upon your heart rate when you’re at those locations, you could potentially do some nasty things with that sort of data.

Yeah. I guess that would depend on the way you implement that. Because right now, when I’m fingerprint ID-ing myself to all these different services that I use on my phone, I don’t think Apple is sending my fingerprint anywhere. The phone is identifying me and then it’s telling these other services that, “Yeah, I’ve identified this guy and it’s legit.”

Tom: Well, as with any authentication method, you always have to worry about what you’re authenticating to, because it could be that it is an official Apple system that you are authenticating against, or something that integrates with these kinds of technologies.

But then you have to be very sure that you’re authenticating against the real thing and not some kind of phishing version of it, because these kinds of scenarios are always going to be susceptible to phishing.

That’s fair enough. So I know, Vic, you also looked into some of the stranger, wackier biometric modalities as well. Could you tell us about some of these, and especially whether there’s anything that you think might be viable? Or are they all just gimmicks and weirdness?

Vic: I mean, some of them are legitimate in terms of, you can use them to fairly reliably identify humans. But it’s a matter of would you want to?

So for instance, the human tongue. There’s a lot of patterning on your tongue. There’s also a lot of variation in the shape of human tongues, and they actually can fairly reliably be used in human identification. But no one wants to be sticking their tongue on a scanner to authenticate to systems, especially with COVID and everything ongoing. So technically it could be used, but in reality, why would you want that?

In terms of the weirder ones, basically people have been looking into all sorts of stuff. There was a study looking at anus prints to create an intelligent toilet, where it would automatically do analysis of what was left in the toilet to work out your current health and make nutritional suggestions, that sort of thing. And again, you probably could differentiate between the cohort of people in the study using their anus print, but why would you do that?

The bit that I got a bit caught up on, on that study, was that the smart toilet had a fingerprint reader in the flushing handle. So yeah, just use that.

(Laughing) There we go. So do you think consumer biometric authentication systems in their current state are secure enough to be trusted as a part of like a single sign-on system that provides access to sensitive data?

Tom: I think they don’t have to be secure. They just have to be good enough. And as said, layers always work best.

There’s a reason why the industry wants biometrics. It’s to get people hooked on the product or the technology, because it’s so easy to use – you don’t have to do anything extra to be able to use the product or service. That’s kind of it.

The old paradigm is something you are, something you have, something you know. But being in security, and if you’re in security long enough, you know that it actually means something that you forgot and something that you lost. So the way that big companies want to convince you that these services are secure enough is by using biometrics, which of course is supposed to be hassle-free.

But of course, that comes at a price, because you have to make the tradeoff between false negatives and false positives. And false positives are way worse than false negatives, because you’re letting in someone who isn’t necessarily you. So that, versus the cost of what they can put into these phones and tablets before their price starts to double – that’s the tradeoff these vendors have to make.
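That false positive/false negative tradeoff can be pictured as one acceptance threshold sliding over match scores: raising it cuts false accepts but creates more false rejects. The scores below are invented purely to show the effect:

```python
# Invented similarity scores, for illustration only.
genuine  = [0.91, 0.86, 0.78, 0.95, 0.70]   # the real owner's attempts
impostor = [0.40, 0.62, 0.55, 0.81, 0.30]   # other people's attempts

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accept rate
    frr = sum(s <  threshold for s in genuine)  / len(genuine)   # false reject rate
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold {t}: FAR={far:.0%}, FRR={frr:.0%}")
# A stricter threshold lowers FAR (security) at the cost of a higher FRR (convenience).
```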

So they have to be good enough. They have to be used correctly in line with, like Vic said, what they’re trying to protect.

And you need to be able to employ multiple controls, and in the right order. So that, for example, it becomes not really feasible for an attacker to attack a six-digit passcode on your phone. Even if they get past that with whatever technology they’re going to use, maybe they cannot get into the app behind it, where that app now requires a fingerprint or a facial scan.

So you want to be able to use it in the right way and in the right combination to be able to protect what you’re trying to protect.

Vic: I think that it’s important that consumers should actually understand the reliability and security of the system they’re considering using. Systems like Windows Hello, they do release data on the false positive, false negative rate, that sort of thing. But for a lot of consumer products, they just don’t tell you, you’re expected to trust that it works and that it’s secure. And in scenarios where it’s much worse to have a false positive than a false negative, that’s potentially going to be a big risk.

Given our track record in educating consumers, particularly about security things, I don’t anticipate that this is going to be a conversation we’re going to be able to have in any meaningful way – false positives, false negatives. There are quite enough difficult conversations about that within the security field.

Tom: I think it’s just important that the security companies out there keep these kinds of manufacturers of biometric technology on their toes and actually verify their claims. Fingerprint cloning is real. We’ve demonstrated this as well, as have other companies. Same thing for other biometric authentication systems.

So again, the idea is to raise the cost of attack and make it harder and thus more expensive for an attacker so that they cannot just mess with the lowest common denominator and that one threshold that you need. And also, of course, there’s a media frenzy every time there’s even a remote flaw found in these kinds of technologies.

So we have to be able to have transparency on how these things are being used. We need to know whether or not we can opt out if possible, because certain things, if you want to travel internationally, it’s almost impossible. So you need to look at all of those things separately and then together to see, “Okay, does this make sense?”

So I guess the real question is, when are biometrics an appropriate solution and when are they not? Like how and when should you use this and not?

Vic: Yeah. I mean, going back to the scenario earlier of the bank that wants to reduce fraud rates from people’s romantic partners, and so institutes the method of having people record themselves saying, “I’m taking out a loan.” That’s quite a good use of it because it has such an impact on fraud, versus having a system where say, you look at your phone to unlock it. There’s a lot more potential for abuse there.

So I think people will have to personally draw the line based on what they’re trying to protect, if they think biometrics will be the right solution for them, and also consider how it can fit into a wider security system. If you’re trying to reset an account or something, it might be appropriate to use the video of you saying, “I’m resetting my account.”

But you probably also want to have additional bits of information in there, say providing your date of birth – just extra steps to it, really. As annoying as people find it, the more steps that are involved in the authentication process, the more secure your system will potentially be, so long as each of those steps is fairly robust in its own right, of course.

Tom: We are very fortunate that we can even talk about consent or non-consent, because we have that luxury. Not everyone in the world has that luxury.

Thinking about governments trying to stop human trafficking, the mass displacement of groups of people that we’ve seen in the last decade or so because of war, because of economics, because of climate change. There will be a massive pressure on individual governments around the world to come up with some kind of “perfect system” to be able to identify each and everyone coming over the border on the premise of, we need to stop the terrorists and save the children. And everyone wants those two things, but it cannot be at the cost of undermining our constitutional rights and being put in a database with lack of transparency.

So I think when those things come, and when the pressure is going to be mounting on continents, countries, organizations, we need to get the transparency and the legislative nature under control before we start jumping on those kinds of technologies. Which again, can certainly help to do the things that we want to do. Which is again, to stop human trafficking and those kinds of things.

But there will be abuse cases with this technology. Because as we keep saying on this podcast as well, you might like your current government, but you have no clue what government you’ll have 12 years from now. And whatever rights you give away are very, very hard to get back.

So, okay. As a consumer, as a user of all this biometric authentication, when should I start to be a little bit skeptical, leaving the tongue prints and things like that aside, when should I be skeptical when my face or iris or fingerprint is being used?

Vic: A lot of systems will try and take as much data about you as possible. And quite often, that’s just going to end up for advertising purposes. If they can get a photo of your face…

I’ve reached the age where I’m constantly getting advertised baby stuff, just constant baby stuff. Actually, it’s a cycle. I get advertised baby stuff, then fertility treatments, then baby stuff. And that’s because of my age and gender. And if I’m providing facial images to various…like say for a store card, if they wanted to take that sort of image of me, that would give them a lot of interesting data they can use for advertising.

It doesn’t really inconvenience me that much. It’s not that big of a deal, but someone else is going off and making money off of my facial image in that scenario. And I don’t think they should be the ones making money.

And yeah, potentially, as Tom said, there is the risk that whoever you’ve already handed your data over to, you can’t really stop them from doing whatever they want with it in the future. Even if they say they’re not going to do anything bad with it. Who knows?

Well, speaking of companies using this stuff, both of you guys work as consultants. So are you seeing a lot of companies doing things around biometrics? What sort of things are you encountering?

Tom: Well, when it comes to mobile devices and mobile technology, luckily we see the larger developers and companies reusing the functionality and APIs that are offered by platforms like iOS and Android. Which is good, because that way they’re not trying to reinvent the wheel with their own software and hardware.

Although some of them do develop their own scanners and then want to make them work with mobile technology or other computers or platforms. And there, of course, we’re maybe less enthusiastic, because that’s really where they have to redo all the work that companies like Google and Microsoft and Apple have thrown lots of money at to get right and make viable, knowing that there is of course an economic benefit to them.

So we see it on that side. What we also see is, for example, companies that want to have software and hardware tested to see: does this actually answer or align with our requirements and with what we think we need to protect ourselves against, knowing what is under the hood? And if this thing or some of its dependencies do get compromised, what is the risk, what is going to be in the blast radius, and how do we prevent these kinds of things from happening?

So Tom, I know you’re a hacker and you like to break things. What are you doing in the area of biometrics? How are you breaking them? How are you personally attacking this infrastructure?

Tom: We’ve looked at several of these technologies, usually out of pure interest or from customer engagements. And the best way to actually start looking at these is, again, that threshold or tradeoff: how fast can someone use the system, or how long does the actual process of identification and authentication take? And what is the threshold between the false positives and the false negatives?

So you want to be able to try and see how can you fool the system or a certain aspect of the system where your chance of false positives would become greater, which gives you more chance of bypassing the system, so to speak.

So when you implement any form of biometric authentication, there is going to be a threshold. When you install fingerprint scanners on the doors to your data center, for example – which companies do – you don’t want people coming back after lunch and 50 of them standing in a queue trying to get through the fingerprint scanner. So what people do is they hold the door for each other.

Or you have companies where they force you to actually go through the system, but then not everyone washes their hands, or there’s lots of dirt left on these kinds of scanners, which influences the results. And then the scanner has to make a decision, or there needs to be a setting configured by the company saying, “Where do we put the threshold versus false negatives, versus false positives?”

And that really is a good place to start to look at how can we fool this system? Can we fool it by the same pattern? Does it need to have a certain activity going on at the same time? Does it measure for resistance, anything like that?

Things like fingerprint scanners – most of them are usually misconfigured. They’re not configured to be accurate enough, and thus a simple picture or printout of a fingerprint could be enough. Sometimes it could be just – and we’ve done this as well in the past – using wet toilet paper to get past it, because sometimes the scanners just freak out, or they pick up the latent fingerprint that was left on the piece of glass.

So there are different ways of bypassing these, but as we’ve been saying, it shouldn’t just be up to a single control. But when it comes to consumer electronics, people just go with what they have in their hands, what they get for their birthday, or what they find at the mall, and they use it to protect their most personal details. And they’re going to assume that because it says on the box that it’s secure, it’s secure. So that is really where our passion came from of saying, “Okay, let’s test these things and give them a good run for their money.”

So is there any other advice or consideration you’d like to give companies when they’re considering adopting a biometric authentication system? What’s the thought process you’d like them to go through?

Tom: Well, first of all, you’re trying to solve a problem, because it is a security control. So you need to ask yourself the question, what problem are we trying to solve? And is this really the path of least resistance when it comes to management, maintenance, enrollment, resets of accounts, the fraud scenarios that might be possible, or abuse scenarios that might be possible? Does this scale? Will this vendor or this technology be able to be used throughout the world, where there might be challenges with the technology?

So these are all kinds of things that need to come from the single question of why are we doing this? And why do we think that biometrics has an advantage over other combinations of let’s call them more classical security controls?

Vic: I was involved in a project a while back where a client was looking at switching to using Windows Hello for people to log into their laptops in the office. And that was in part being driven by their employees, because they were saying, “It’s annoying that every time I get up, I have to type my password back in. Can we have a system that makes it easier for us to log back in?”

So potentially biometrics can help to decrease the workload on people who repeatedly have to log into systems. But you do have to balance that against the security implications if someone could improperly gain access by abusing the biometric system.

Tom: And I think the main reason why we have the systems that we have today is cost. It’s a matter of, again, that tradeoff of convenience versus cost, because as said, Android phone makers and Apple could make phones that have a better chance of defending you when it comes to biometrics.

I mean, I think for fingerprints right now, they say it’s one in 50,000. They could make that smaller, but of course they would need to increase the price of the new technology they would have to put in the phones and tablets, and no one is really going to pay for that. So they need to make something that is “good enough,” based on their own tests and versus what they’re trying to protect.
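For a sense of scale, here is the back-of-the-envelope arithmetic on a 1-in-50,000 false accept rate per attempt. The attempt limit is an assumption for the example; many phones cap fingerprint tries at around five before falling back to a passcode:

```python
# Back-of-the-envelope reading of a 1-in-50,000 false accept rate.
far = 1 / 50_000   # chance a random finger matches on a single attempt
attempts = 5       # assumed lockout limit before a passcode is required

p_at_least_one_match = 1 - (1 - far) ** attempts
print(f"{p_at_least_one_match:.6%}")  # roughly 0.01%, which is why attempt limits matter
```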

What’s something that everyone should understand about biometrics and you feel that they don’t? What’s your pet peeve about this?

Vic: When I was doing the research for that conference talk, one of the things I saw appear quite a lot in the BuzzFeed-style clickbait lists was salinity as a biometric modality. All of them would just say, “Yeah, you can use body salinity to identify people,” but no one was really citing any sources. They were just putting it in there.

And I ended up doing a lot of digging and went down some very weird rabbit holes. As far as I can tell, it originated from a paper from back in 1995, which was a follow-on piece from a previous project aimed at letting Yo-Yo Ma use his cello bow as an approximated computer mouse. It was a very, very weird rabbit hole to go down. But someone was proposing the use of skin conductivity for data transfers. Like, could you shake hands and transfer some data?

And somewhere, someone wrote a book about biometric modalities and just cited that study in it. They didn’t quantify or say anything about how it could be used for biometrics. They just included it in the list. And then everyone else started including it in their clickbait lists, saying, “Oh, body salinity, it was in that book one time. So, yeah. That’s a biometric modality.”

So I think that within biometrics, the people who are writing these lists quite often aren’t trying to verify the claims they’re making. They’re just going, “Yeah, biometrics. You can do it with this and that.” So I think that is potentially a risk of biometrics – people advertising biometrics without doing their due diligence. And I mean, I appreciate I’m on a podcast just talking about biometrics myself and not providing citations – but there’s the risk of people gaining an inaccurate view of the security of biometrics and also the viability of some modalities, just because they’ve read a list one time, or someone else has read a list one time, versus people actually doing the research themselves, actually looking at the statistics themselves.

And I think that is potentially a threat to businesses when they read a list saying, “Oh, facial recognition, there’s one in a billion chance of it getting it wrong.” They’ll just see that advertising spiel and go, “Oh, wow, that’s amazing. I want some of that.” And potentially they could end up with a system which isn’t really suitable for their needs because there’s too much of a risk of people gaining access to it, or it’s just not secure enough, or it can easily be fooled by just someone showing a photo of a face to the system. I think that people should do more due diligence when making serious security decisions using biometric systems.

Well, I think that’s good advice in general. So with that, I want to thank you guys for being with us today. It’s been very, very informative. Thanks.

Tom: Thank you.

Vic: Yeah. Thanks for having us.

That was the show for today. I hope you enjoyed it. Please get in touch with us through Twitter, with the hashtag #CyberSauna, with your feedback, comments and ideas. Thanks for listening. Be sure to subscribe.
