Podcast transcript: The front line of fraud tech

This automatically-generated transcript is taken from the IT Pro Podcast episode ‘The front line of fraud tech’. We apologise for any errors.

Rory Bathgate

Hi, I’m Rory Bathgate. And you’re listening to the IT Pro Podcast, where this week we’re asking ‘who goes there’ as we discuss identity fraud. A recent report by Onfido, a UK firm specialising in identity tech, states that identity fraud has increased 44% since 2019 – reflective of the rise in such cyber crime as threat actors took advantage of the digital transformation that took place across the pandemic. It’s clear that security teams have their work cut out for them, but the same report states that 75% of businesses are now simply willing to accept small volumes of fraud. This week, we’re speaking to Mike Tuchen, CEO of Onfido, to discuss the current threat landscape and what can be done for the sector. Mike, welcome to the show.

Mike Tuchen

Thanks for having me.

Rory

So I guess my first question is kind of a broad one. How long has this willingness to accept small amounts of fraud been ongoing?

Mike

It's been rising over time, and we think for a couple of reasons. Willingness to accept fraud is going to come from, I think, two different backgrounds. Number one: what's the cost of fraud to my business? And number two: how much flexibility and choice do I have in looking at fraud as, let's say, a diallable decision? Because in the past, when companies were onboarding their new customers – and this is an area where fraud can creep in – they had a choice of fairly blanket policies that were generally kind of all or nothing in nature, right? And so you had a set of compliance dictates, a set of fraud dictates driven by your fraud team if you're a traditional bank, and, you know, a paper-driven process. And I heard just recently from someone who had been in banking back in that era that 75% of the applications for a new bank account were turned down, because of the cumbersomeness of those processes. A couple of things have happened. Number one, banks are still very, very sensitive to fraud; the cost of fraud to them is very high. But they, and all companies now, can take a much more ROI-driven approach and say, "fraud, on average, costs me this much; a lost onboarder – a lower onboarding rate, if you will – costs me this much. What's my trade-off point? How do I balance those two?" And they can make an economic decision to balance the two of them by doing A/B tests and figuring it out. So that's a new capability that simply didn't exist if you were to turn the clock back five years, and certainly 10 years ago it didn't exist at all. So that ability to think about fraud as an economic choice, as opposed to a binary, compliance-driven choice, is a new sort of modern take that I think changes a little bit of that tolerance.
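[Editor's note: the "economic choice" framing Mike describes can be reduced to a simple break-even comparison. The sketch below is purely illustrative – the function, rates, and monetary values are hypothetical assumptions, not Onfido figures.]

```python
def expected_cost(applicants, admitted_fraud_rate, avg_fraud_loss,
                  rejection_rate, lost_signup_value):
    """Total expected cost of an onboarding policy: losses from
    fraudsters who slip through, plus revenue lost by rejecting
    genuine applicants. All inputs are hypothetical."""
    fraud_loss = applicants * admitted_fraud_rate * avg_fraud_loss
    onboarding_loss = applicants * rejection_rate * lost_signup_value
    return fraud_loss + onboarding_loss

# Two hypothetical settings of the "dial" for 10,000 applicants:
strict = expected_cost(10_000, 0.001, 5_000, 0.10, 200)   # tight checks
lenient = expected_cost(10_000, 0.010, 5_000, 0.01, 200)  # loose checks
```

Under these assumed numbers the strict policy is cheaper overall, because the per-case fraud loss dwarfs the value of a lost signup; a platform with a lower cost of fraud would dial the other way, which is exactly the trade-off being described.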

Rory

Do you think that's also behind – in the same report, it's stated that companies younger than 20 years have a higher rate of accepting fraud – do you think that this is reflective of that kind of changing perspective on what's really acceptable?

Mike

I think, yes, that's one of two factors. Number one, yes, they tend to be more agile about, you know, trying out this idea of fraud as an economic choice and finding the balance of the trade-off. I think, as well, a lot of the more traditional companies, the longer-tenured companies, have a higher cost of fraud, right? If you're a bank, the cost of fraud to you is quite a bit higher than for other platforms. One of the main things that banks do is extend loans. Well, that means if they have fraud, they're out the cost of the entire loan. For something like a digital currency trading platform, the customer is bringing their own money in and is trading on their own account; the cost of fraud to them is simply lower than the cost of fraud to the bank. So I think you'll see that part of that is just simply how much fraud is costing me, as well as a potentially different economic choice.

Rory

Right. So it's really to do with the maturity of the company and therefore how severe the attack is.

Mike

Yes, and what industry they're in, right? In some cases there's a maturity difference. But in other cases it's just simply: the industry I'm in, the cost of fraud is very high, therefore, even if I'm very sophisticated about taking the economic trade-off approach, I'm simply going to dial that all the way to the very, very low fraud end of it, because that's how the trade-off works for me. Whereas other platforms can see that a little bit differently.

Rory

So when this calculation is happening, when a company is weighing up what degree of fraud is acceptable for it to allow to happen, do you think that there needs to be more public sector pressure to kind of balance this out? Or is this a conversation that needs to be had throughout the private sector to kind of change this culture?

Mike

I don't think this is an area where public sector pressure is really going to be that helpful. To me, the areas where public sector mandates are helpful is in solving, you know, kind of externalities, where the market pressures don't solve themselves. An externality would be something like privacy, right? Who's protecting my privacy as an individual? There's no market pressure that solves for that; it's a very weak sort of pressure for companies to take a privacy-centric approach. But on the other hand, governments can look out for the privacy of their citizens and mandate certain requirements, like GDPR in the EU, or in California there's the CCPA, which is a very similar regulation. And so those are areas where you're solving an externality that the market doesn't solve for. But I would argue that, for fraud tolerance and the desire to solve for it, the market really does balance that quite well. Companies get hit by fraud, and they're very sensitive to that, right? If you look at the 44% increase in fraud during the pandemic, the rise of sophisticated fraud, the cost of fraud – companies that are sensitive to that care a lot about these things, and they take very, very active steps to counter them. So I see this much more as a game of cat and mouse between the industry and the fraudsters, and a constant game of one-upmanship that's, you know, solving itself, I think, in a fairly disciplined way.

Rory

Right. Okay. And then focusing specifically on the tech sector, how can businesses currently mitigate the risk? Is the tech there, or is it a technique that needs to be developed further? And could you go over some of the inadequacies that we're seeing?

Mike

So I'll talk about some of the hard problems there, and where the trade-offs tend to be. The top-level way to think about it is, it's a constant game of cat and mouse. As the fraudsters get more sophisticated, then the industry players like us get better at catching that. And as we get better and leapfrog, then they look for ways to find, you know, weaknesses in what we've just built, and they go and attack and find the gap. And then we close the gap, and cover other things. So it's a constant game of one-upmanship. Rather than think of it as a sort of static playing field, I think of it as an active adversary that you're constantly looking to outsmart, on both sides. And so what that means is, if you were to turn the clock back 10 years, most identity fraud was people doing forgeries: taking an existing document, a passport or a driver's licence, and with a magnifying glass and a pen literally changing the doc, forging the doc. Those are much more rare now; that's like low single digit percent, maybe 1%, of the fraud that we see. What's much more common now is digital fraud, of two types. The first type of digital fraud, again, starts with a standard identity doc, an existing identity doc, and applies a change to that existing valid doc. And much more common, or increasingly common now, is a pure digital fake. Fraudsters are creating a template – say, a British driver's licence template or a UK passport – and then applying a whole bunch of different synthetic identities. So a pure digital fake, from the ground up, using a template that looks like a valid one. And so we're seeing much more of that. And coupled with that, we're seeing different types of biometrics fraud. So, you know, classically, Mission: Impossible – remember those silicone masks they sort of put on? Those are actually real, right? There are actually people using them now.
Yeah, it's amazing. It used to be that if you wanted to defeat an old-style online identity system, you would literally just hold up a picture of someone else in front of your face, right? And for an old-style thing, that might be good enough; or you'd submit a picture that was taken somewhere else, of someone else, not of the person sitting in front of you. Well, nowadays we can actually detect: is there a live human being there? And is it the same one, indeed, that's on the doc? Neither of those two approaches will work anymore. And so the latest frontier is, just like there's a purely digital fake to create a document, people are creating so-called deepfakes of a different face than yours, and mapping it onto your face. Again, our newest approach can detect that as well. But that's an example of what the old world looked like, and some of the cat and mouse game that's going on right now.

Rory

So in terms of the deepfakes, I guess that's kind of cutting-edge threat actor activity right now. You do hear stories – I don't know how overhyped they are – but you do hear tales of very elaborate scams involving faked video, deepfaked LinkedIn job interviews being chief among them. But you're saying that the kind of protection tech currently is neck and neck, that maybe we're developing an antidote to those threats as quickly as threat actors can develop the threats themselves?

Mike

It's currently neck and neck but, to your point, the quality of the deepfakes has gone way up. And so this is going to be an ongoing game of cat and mouse in the coming years. And it's this combination of synthetic identities and deepfakes that's really the cutting edge. This is the most sophisticated fraud, so we'll see more and more fraudsters starting to do that. And we'll see companies like us doing more and more to prevent that, for sure.

Rory

And on a specific tech question, what kind of solutions are being implemented to detect things like deepfakes?

Mike

One example is a biometric solution that we just brought out over the last couple of months, called Motion. What it does is have the end user turn their head from side to side – similar to if you have an Apple phone and you're enrolling with Face ID, you have to turn your head from side to side. It's a similar concept to that. And what that does is, most of the current deepfakes that are mapping someone else's face onto yours in real time are taking a texture map and mapping it onto your face. And as a result, at the boundaries there's a level of distortion that's detectable when you turn your head. And so that's why, rather than a purely static approach of looking at your face, that approach allows us to catch the current state of the art. As those deepfakes get better and better, that's gonna be harder to catch. But that's the current state of the art; our approach is 10 times more accurate even on a normal static image than before, but now it also catches these advanced kinds of attack.

Rory

So I guess we've talked about one of the most cutting edge threats right now, but on a broader point, what are some of the most common methods through which identity fraud is currently carried out?

Mike

You know, very commonly what people are doing is taking a stolen ID and passing it off as their own. And so, for us, what we want to be able to do, and what we encourage customers to do, is a variety of different checks, because a stolen identity can come in a couple of different flavours. If they have the actual identity card itself – someone actually stole your wallet – then the identity is valid, right? The card is a valid card, and it's truly a genuine identity. Now the question is, is that person the same one that's on the identity card? So we do a bunch of work to confirm that biometric – all the stuff I just talked about – confirming that the actual picture on the card is the same person that's in front of the camera, that there's a live person there, and that the picture on whatever identity document you have hasn't been altered, right? Those are the ways that you can detect that scenario. So all of that is the current state of play. The other thing that we encourage customers to do is to add on secondary checks. So we have a configurable approach, where customers can say: let me do exactly what I just talked about – check the validity of the document, check the biometrics to ensure it's the same person who's on the document, check the document hasn't been altered – but now let's also go back and confirm that that driver's licence, for example, is still valid. Because the first thing that most people do when their driver's licence gets stolen is go and cancel it, and get a new one. And so when you do a lookup, it'll come back and say: aha, this one has been cancelled. And so that'll be another way for customers to detect that. This is a case of a genuine document with a fraudulent use case, and so checking whether the document is actually still valid catches a genuine document that's no longer valid.
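[Editor's note: the layered, configurable checks Mike describes can be sketched as a simple pipeline. This is an illustrative toy only – the check names, application fields, and cancelled-licence set are invented stand-ins for real document, biometric, and registry services.]

```python
def run_checks(application, checks):
    """Run a configurable sequence of onboarding checks in order,
    stopping at the first failure and reporting which check failed."""
    for name, check in checks:
        if not check(application):
            return (False, name)
    return (True, None)

# Hypothetical stand-in for a driving-licence registry lookup.
CANCELLED_LICENCES = {"D123"}

checks = [
    ("document_untampered", lambda app: not app["doc_tampered"]),
    ("biometric_match", lambda app: app["face_matches_doc"]),
    ("document_still_valid",
     lambda app: app["licence_id"] not in CANCELLED_LICENCES),
]

# A stolen-but-genuine licence passes the first two checks and is
# caught only by the registry lookup, as in the scenario above.
stolen = {"doc_tampered": False, "face_matches_doc": True,
          "licence_id": "D123"}
```

Running `run_checks(stolen, checks)` flags the application at the `document_still_valid` step: the document is genuine and unaltered, but no longer valid.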
Other examples are – well, we talked about all the purely synthetic ID examples. The way fraud is working right now, it's a big business. Many billions of dollars a year are being created fraudulently. And the way that works is, fraud is no longer the realm of an individual hacker working by themselves in their garage, right? It's a team of very technical people doing this at scale. So picture a software development team: once they find an exploit, they're trying to exploit it hundreds or thousands of times. And so there's an army of people that they use for different biometrics, there are different variations on the theme of different identities, and subtle variations of those identities that they're plugging in. And so once they find one, you'll have a hundred different flavours of that trying to sneak in the door. An example of a mitigation approach that we've created for that scenario is something we call known faces, and something that we also call repeat attacks. And those two things cover that exact scenario. If we've seen someone before who's been fraudulent, the odds of that person being fraudulent the next time around are very, very high, right? It's almost vanishingly unlikely that someone would do something fraudulent one time, and then be a valid, genuine customer the next time around. They're either a fraudster or not, by and large, with kind of 99-100% probability. And so in that case, this concept of known faces is super powerful. Once you've detected fraud through any mechanism, you say, "aha, that person is fraudulent". Similarly, it detects these variations on a theme. And then what we'll do is say, "aha, we've seen this variation before, and here are a hundred different, other slight variations on that theme. The odds are, these are all fraudulent". And so all of these are mitigation techniques for that type of attack.
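[Editor's note: the "known faces" and "repeat attacks" ideas can be illustrated with a toy embedding store. Everything here is an assumption for illustration – production systems use learned face embeddings and approximate nearest-neighbour search, not short hand-written lists.]

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class KnownFaces:
    """Toy 'known faces' store: flag a new applicant whose face
    embedding closely matches one already marked fraudulent."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.fraud_embeddings = []

    def mark_fraudulent(self, embedding):
        self.fraud_embeddings.append(embedding)

    def is_repeat_attack(self, embedding):
        # A near-duplicate of any known fraudulent face is treated as
        # another "variation on the theme" from the same attacker.
        return any(cosine_similarity(embedding, known) >= self.threshold
                   for known in self.fraud_embeddings)
```

Once one attempt is flagged, every later attempt whose embedding falls within the similarity threshold can be rejected up front – the "hundred flavours of the same fraudster" scenario described above.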

Rory

So with this kind of micro-economy of fraudsters that you're talking about, is it in any way similar to the ransomware-as-a-service model that we're seeing with ransomware threat actors, where they're licensing out their services to kind of wannabe threat actors, who now have access to tools much more powerful than they would be able to come up with on their own? Is there a similar black market for fraud happening currently in tech?

Mike

Absolutely. As a matter of fact, there's a fair amount of overlap between the various communities here. And so in the fraud world, there are groups that sell stolen identities. You can buy identity document templates, you can buy identity document generators, and you can plug one into the other and create a synthetic ID. So for all of these things, the level of sophistication required is going way down. There's even software out there that will create a fake utility bill. And you ask yourself, "why on Earth would someone want to create a fake utility bill? It doesn't seem like a very useful thing". Well, it turns out that many banks use things like utility bills to verify that your address is valid. And so all of these things are little pieces of the toolkit that are very readily available on the so-called dark net. And so, yes, for sure there's a whole industry of tools providers and data providers behind this increasing sophistication of fraud.

Rory

That's, that's very worrying. I mean, I suppose it's very difficult to rip out those systems right now; all you can kind of do is add on the tools, like you were saying, to detect them when they happen. I mean, I know that banks are forever being urged to move away from the paper utility bill system, but I guess it's slow to change.

Mike

It is. In the end, the question you have to ask is: what's the best alternative? And yes, there are approaches where you log in live to your utility account, and confirm that there's actually a live account available. But that has limited geographic scope, and limited utilities. So it's more secure, but it's not as broadly applicable as the paper-based fallback that a lot of companies end up using. And ultimately, to your point, you can't try to go back and say "we'll shut down the dark net"; that's not going to happen, right? We have to accept that that is out there, and it will always be out there. And so all we can really do is become increasingly sophisticated on the detect-and-protect side, as they ramp up their capabilities on their side.

Rory

In terms of blame – aside from fraudsters themselves – within an organisation, at whose feet do we lay the blame for fraud? Is it the C-suite executives that are ignoring the signs? Or is it often a case of employees being lax with personal details, like you're saying – they've lost some personal details, and that's had a knock-on effect down the chain?

Mike

Yeah, I'd say inside of companies I don't think the sort of 'blame concept' is super helpful. I mean, if you haven't created a high-quality, secure, and frictionless signup approach, then you should make sure you get the right skills on your onboarding team, because that's really all about your onboarding team and their capabilities. But by and large, if you've done that and fraud still creeps through – which it will, because no system is perfect and there's this constant game of cat and mouse that I described – then rather than looking for blame, I think a more appropriate approach is to say, "let's get to the root cause". What happened here? How did it break through, and how do we mitigate that and ensure it doesn't happen again? And I'd say, if you were talking to a senior exec: if you're getting good responsiveness to that question, and the team is able to get to the root cause, diagnose it, and say "here's how we prevent that going forward" – great, you probably have a team that's approaching this the right way. If you're not getting clear answers there, and you're getting a defensive kind of reaction, then maybe you should have a look at what's going on. But I'd say the most important thing is: are you using a best practice approach, and best practice technology like our company's? And, you know, do you have the sophistication to be able to measure how you're doing, run A/B tests, and course correct if your fraud rates are looking too high?

Rory

Right, okay. Yeah, it's cultivating an internal culture of "you're never going to hit a 0% fraud rate", and just being able to bounce back from, and learn from, these attacks when you are hit.

Mike

It's very, very similar to the security world, because the security world is exactly the same: there's a group of increasingly sophisticated attacks coming in, and you as a security team are trying your best to create as resilient a posture as you can that is also usable by your internal employees. You don't want to make the system so airtight that your own employees can't use it. And so you're trying to balance that usability and user experience with fraud capabilities, constantly looking to detect and adapt over time. This is exactly the same: you're balancing your end user experience, your onboarding rate, with your ability to detect fraud, and you're just trying to put the best combination of all that in play, but also be able to detect and react if there's something that you missed. So you really want to take a very, very similar approach on both sides.

Rory

So something else I'd like to draw out from the report that I mentioned earlier is that 84% of executives surveyed said that they're less than completely confident in their company's ability to keep up with data legislation or regulation. This seems to be a data sovereignty issue as much as, within a local region, an inability to keep up with the changing nature of privacy regulation. How much does this feed into an inability to prevent fraud?

Mike

I think the two are somewhat orthogonal to each other. It's a real concern, because data sovereignty and privacy regulations are continuing to evolve and change in every country around the world. And so, if you're a company doing business across multiple countries, trying to figure out "how do I stay compliant in this evolving landscape?" is really difficult. And so we see ourselves, in some cases, as being a source of expertise and a trusted expert to help our customers understand some of those changes, and what are some of the things they need to do. For example, in the US, some biometric privacy regulations have emerged that many of our customers weren't aware of. And so we're needing to educate them and say, "are you aware of this? Here's what you need to do to become compliant". So it's something we see ourselves increasingly needing to play a role in, helping our customers navigate, given how complex it's becoming.

Rory

And increasingly complex in America, if I understand correctly, there is a lack of central federal regulation. So it can vary state by state, right?

Mike

It does indeed. As a matter of fact, there are a crazy number of states that have different regulations – so the US is no longer, in this sense, one country; you know, 30 or so states have created different regulations. And even the current draft of a new nationwide privacy law, which hasn't passed Congress yet, still has carve-outs for a couple of states. So even in the best case, where we're getting a national standard, that standard isn't going to be totally uniform. It's gonna be uniform for many states, but there's gonna be a carve-out for at least two that had been in the latest draft. And so the US is going to be, it looks like, even in the best case, somewhat of a patchwork. And then you start taking this to the rest of the world – think about how crazy that gets. That's why 84% of execs are saying they're not 100% confident they can do it. It's not because they're somehow falling asleep on the job; this is a very complex problem to solve. And if you want a topical hook: the so-called blue check, the Twitter verification thing, that was a complete fiasco, and I'd say entirely preventable. We've seen companies like Eli Lilly and Lockheed Martin get attacked and lose like 15 billion of market cap each, because Twitter decided not to do any kind of real verification aside from your willingness to pay eight bucks – just, you know, a simple economic call. I don't know if those were caused by people trying to manipulate the stock. But if they were, the trade-off of making potentially millions of dollars by spending $8 on getting a blue check? It's a no-brainer, right? Because there's no actual verification bar aside from your willingness to pay eight bucks. So that's an example of just a poor trade-off by a very, very influential player in the market. The solution for it is simple.
If Twitter were to use the type of onboarding and verification techniques that we've been talking about – which every digital bank in the world, every traditional bank in the world, every financial company in the world is already using, so this is proven technology at very high scale – the risk of this type of market manipulation would be orders of magnitude less. And so this is just an example of Twitter choosing not to take advantage of readily available technologies, and as a result putting not just their customers and their advertisers, but the broader financial markets, at risk. And so we look at that and say, that's just not a good trade-off, right? For a service that costs $8 a month, $100 a year, they could spend a small, small fraction of that to get strong verification. That seems like a very, very obvious trade-off to us, and we think they were really not making the right choice.

Rory

And like you're saying, increasingly complex. Here in the UK, we've got a constantly changing landscape; the government keeps promising to replace GDPR with something, but they're not saying what that is specifically. So it's definitely a complex landscape. Would you say that this is holding back developments in identity tech, maybe preventing a more unified solution from being developed and agreed upon? Because, of course, you'd have to have it agreed upon in every region.

Mike

Well, I'd say that identity is by its nature, historically, it's been a very sovereign concept. So every country sees themselves as the root of identity trust for their country. And I don't think that's ever going to change. And so, I think that you have to separate that out from this concept of privacy. So even if you were, in some best case world, to have a uniform worldwide privacy standard — this, by the way, doesn't exist and is unlikely to exist, as long as all of us are alive and for quite a bit longer than that — but in some ideal planet, if that were to, you know, actually happen, you still have this concept of each country wanting to certify and be the source of trust for identity. And I don't think that changes, right? You get your national identity card well, by its name, from the country. You get your driver's licence, in the US, we get it from the state, but in most places you get it from the country. You get your passport from the country. So it's the government, the federal government, that ultimately is the source of identity. And so everything else, you know, depends on that and devolves from that in some way. So I don't think that is ever going to change, governments are very, very proprietary about their, let's call it their identity sovereignty.

Rory

That makes sense. It's as much a sociological and cultural problem, maybe, as it is a technological problem. Just on that point of technology, as my final question, I have to ask: when we're faced with kind of leading edge threats like deepfakes, often leading edge solutions such as AI and machine learning are pushed forward. How can these be used in verifying identity? And is there a strong future for these technologies?

Mike

Absolutely. As a matter of fact, it's really the only way to solve this going forward. Everything that we do at Onfido, and everything I've talked about today, has been all about AI and machine learning. Because, think about it – you talked about the LinkedIn job interview. If you can fool a human being in an interview setting, which is a pretty demanding setting, then the odds of you being able to fool a human in an onboarding process, which lasts a fraction of that time, are sort of astronomically high. And so really the only way you can do that is to train a computer to catch a computer. And then it becomes a software, machine learning arms race, which is kind of the current state of play, where we are. But we're at the point now where the best models are better than human judgement. Historically, the highest performing solutions were hybrid solutions that had an automated portion, and then a human in the loop to make trade-offs for corner cases that the computer couldn't figure out. And so as a result, what customers were forced to do was choose between either a hybrid solution, which has the best accuracy but has the trade-offs associated with that – which means slower turnaround time, because there's a human being who's actually looking at it, and worse scalability during peak times like Black Friday, or the Christmas rush, or the Super Bowl if you're in that world, or the World Cup in Europe – or something fully automated but not as accurate. And if you ask customers "well, which one do you want?", they say, "I want both, I don't want to make that trade-off". We're just at the point now where our new fully automated solution is rolling out.
And that's about 25% more accurate than our hybrid solution, which is the most accurate in the market today, and it has all the benefits of a fully automated solution: fast turnaround time, measured in seconds, and, you know, infinite scalability during peak times and so on. So it's a win-win-win. But that's an example of really applying ML and AI super deeply to solve this problem. And that clearly is the future direction. So to answer your question: is it promising? Yes; as a matter of fact, it's the only way.

Rory

Well, Mike, thank you so much for being on the show.

Mike

Rory, thanks for having me.

Rory

As always, you can find links to all of the topics we've spoken about today in the show notes, and even more on our website at itpro.co.uk. You can also follow us on social media, as well as subscribe to our daily newsletter. Don't forget to subscribe to the IT Pro Podcast wherever you find podcasts. And if you're enjoying the show, leave us a rating and a review. We'll be back next week with more insight from the world of IT, but until then, goodbye.

ITPro

ITPro is a global business technology website providing the latest news, analysis, and business insight for IT decision-makers. Whether it's cyber security, cloud computing, IT infrastructure, or business strategy, we aim to equip leaders with the data they need to make informed IT investments.

For regular updates delivered to your inbox and social feeds, be sure to sign up to our daily newsletter and follow us on LinkedIn and Twitter.