Episodes

  • In Episode 8 of Digital Tells we speak with Suzanne Sando, Senior Fraud and Cybersecurity Analyst at Javelin Strategy and Research. BioCatch recently published a report written on our behalf by analysts at Javelin Strategy and Research titled New Account Fraud: A Threat Down Every Avenue.

    In this discussion, Suzanne covers many aspects of new account fraud, including identity theft and synthetic identities, stimulus fraud, the rise of Buy-Now-Pay-Later (BNPL), money laundering and anti-money laundering compliance, and opportunities for financial institutions to address these increasingly pressing challenges.

    Peter Beardmore

    BioCatch recently published a report written on our behalf by analysts at Javelin Strategy and Research. It's titled New Account Fraud: A Threat Down Every Avenue. There's a link to the report in the show notes. And in the run-up to the report's publication, I had an opportunity to talk with its principal author, Javelin's Suzanne Sando. We talked about a range of topics, from scams to mule accounts to buy now, pay later, and identity theft, all relating to the challenges institutions face when it comes to new account fraud and strategies for dealing with those challenges. If you're a regular listener to the podcast, you know that normally I script a narrative infused with clips from conversations I've had in preparation for each episode. But for this episode, we've decided to switch it up a little, mostly because I don't think there's really anything I could add to improve upon Suzanne's commentary on all the topics we discussed. So here it is, my complete discussion with Javelin's Suzanne Sando. Suzanne, thanks for taking the time to talk with us today.

    Suzanne Sando

    Absolutely.

    Peter Beardmore

    So, Suzanne, let's get started with just some introductions. Can you tell us what you do and what your origin story is, how you got to where you are?

    Suzanne Sando

    Sure. I wish it was as interesting as some of these Marvel movies I've been watching, but. So my name is Suzanne Sando. I am a senior fraud and cybersecurity analyst at Javelin Strategy and Research, and I've been there about two years. Prior to getting into the analyst role, I was actually doing a lot of behind-the-scenes coding work. So I worked for a major financial institution in the U.S., and I did a lot of payment systems, back-end coding, worked a lot with personal information, a lot of private data. So I kind of have that technology background that I bring to this new analyst role.

    Peter Beardmore

    So you've been working on a number of reports. Javelin just released a fairly large identity-related study, and you're also soon to release, probably by the time this podcast comes out we will have released, a paper that you've done that's been sponsored specifically by BioCatch. Can you tell us a little bit about both pieces of research?

    Suzanne Sando

    Sure. So the main identity fraud report, the larger report that you referenced, that's something that's in its 19th year that we've been putting out. And, you know, we kind of take a look at all aspects of identity fraud, both traditional identity fraud and identity fraud scams. And we kind of look at the losses to financial institutions, the consumer impact. Where are the pitfalls? What are the things that some of these industry verticals can be doing better? What can consumers do better to try and mitigate some of this loss that's happening in tandem with what's going on in the world? You know, because obviously we've had a lot going on with the pandemic. I mean, that has just changed every single facet of life. And the report that you mentioned, that's kind of an offshoot of that larger report that I wrote for you guys at BioCatch; that's more specifically targeted to new account fraud and how that has sort of taken off between 2020 and 2021.

    Peter Beardmore

    Okay. So let's jump right into new account fraud, which is going to be the focus of our conversation today. What are the overall trends related to new account fraud? What are the highlights?

    Suzanne Sando

    So 2021 was unfortunately just another year of record losses across the board. Overall, consumers lost 52 billion between identity fraud scams and traditional identity fraud, like, you know, account takeover, existing card fraud, and then, of course, new account fraud that we're going to talk about. And of that 52 billion, 7 billion is attributed to new account fraud. So if you compare that with last year's losses of 3.2 billion, that's like a 109% increase in new account fraud losses for consumers. You know, I think new account fraud is so attractive to criminals because, just given the nature of our world and e-commerce and, you know, digital banking activities, it's not going to go away. We continue to get more and more digital-centric as that technology advances. So that means that, you know, the attack surface for new account fraud just keeps growing, especially as our daily activities evolve from both, you know, a necessity and a convenience standpoint. So once you give consumers that convenience, like opening accounts online, applying for loans online, it's so hard to take that convenience away without impairing the customer experience.

    Peter Beardmore

    Or impairing your revenue.

    Suzanne Sando

    Exactly. Exactly.

    Peter Beardmore

    As a lender or as a credit card issuer or what have you, for sure.

    Suzanne Sando

    You know, we've also noticed that, like, it's not just checking and savings accounts and, you know, credit accounts that are driving this growth. Criminals are going to be motivated by any single thing that puts more money in their pocket. So payday loans, mortgages, even car loans are appealing to criminals. They don't need to know every single piece of a legitimate account holder's information. It's just enough to get that application approved and get that fast cash. And of course, like, you know, I mentioned the pandemic, and that's a part of, you know, what we do for these reports. We look at what's going on in the world. The government is not immune to these problems. You have government assistance programs like unemployment assistance and the Paycheck Protection Program. They're all facing huge issues with fraud. I read recently, I think, that the Department of Labor reported 163 billion had been, quote, improperly disbursed, which could mean many things. But one thing it for sure means is fraud. It means that a lot of these funds went to fraudulent sources, and there's a high chance that this money isn't going to be recovered. So I think the thing to take away from this is that criminals are so crafty in their exploitation and the techniques and the lengths that they're going to go to commit fraud.

    Peter Beardmore

    Out of curiosity, you mentioned the Paycheck Protection Program and the unemployment assistance. I just saw a headline recently that said the DOJ had been appropriated some X hundreds of millions of dollars for an investigation related to, I believe, unemployment fraud. Is that good money chasing bad? I mean, is there anything that's going to come of that, given that those programs are effectively over at this point? What's to be gained by even chasing that, do you know?

    Suzanne Sando

    You know, part of me thinks that it's sort of a goodwill type of situation, where we're trying to make good on these funds that were supposed to go to consumers who were really in need, small businesses who really needed that aid. But the fact of the matter is, you know, there's that 163 billion I mentioned; there's a full report from the testimony from the Department of Labor about those missing funds. And I believe they mentioned that 4 billion at this point had been recovered, but 4 billion out of 163 is not great. So, you know, like I said, I think it's a goodwill effort, we're trying, but it's probably going to come to a very not-great ending.

    Peter Beardmore

    Just for clarity, you mentioned those numbers, 52 billion and 7 billion in new account fraud. Are those global numbers, or?

    Suzanne Sando

    That's a good question, thanks for asking. Those are United States, those are U.S. numbers.

    Peter Beardmore

    Okay, good. I just want to be sure.

    Suzanne Sando

    Sure.

    Peter Beardmore

    And so with the new account fraud, the identity-specific new account fraud, do you have any sense for whether these are legitimate stolen IDs or synthetic IDs? You know, what are the sources?

    Suzanne Sando

    Criminals are using all types of identities, between actual stolen identities and synthetic identities, to carry out this new account fraud. Some are legitimate consumers. They, you know, have had their information exposed in a data breach, and then it's sold on the dark web for extremely high prices. And then other identities are pieced together using some real consumer PII, and then fake information is kind of thrown in the mix to create that synthetic identity, which then ideally, ideally for the criminal (chuckles), is untraceable back to a real person. So, you know, when it comes to what it is that they're using, I think it always goes back to what's available.

    Peter Beardmore

    Let's shift focus here a little bit. One of the big trends we hear a lot about in the news lately, and there were mentions in your reports, was related to buy now, pay later, and also, I guess, some other related fintech-type offerings that are out there with respect to making credit more available to consumers in different forms. Can you talk a little bit about what buy now, pay later is and why it is so attractive to fraudsters?

    Suzanne Sando

    That's a great question, because I think that there's a lot of gray area around BNPL for consumers. So the main difference between BNPL and, you know, a traditional credit card like a store credit card is that you're getting these products by signing up for installments, but there's typically no interest on the purchase. And there's also not this huge approval process standing in the way of you actually making that purchase. You don't have to have amazing credit to get approved for a BNPL plan. So, you know, it's good for the consumer because they can get the thing they want, and it's good for the retailer because they can draw in more people to buy their product and make more money, and they're not just having to target, you know, higher incomes. So, you know, if you want to buy, I don't know, a guitar, for example, but you can't afford the full purchase price upfront, the idea behind BNPL is you make one payment upfront, which usually includes a fee for the BNPL platform, because you know they're going to make money too, with the promise then of making the other payments on an agreed-upon schedule. The consumer, you know, is able to take advantage of BNPL with as many merchants as offer that option within the checkout process, whereas, you know, a traditional store credit card is obviously going to be tethered to a specific retailer. And then, interestingly enough, there's now been this explosion of financial institutions and credit issuers who are getting in on the competition. They're offering their own version of, you know, installment plans to kind of keep up with what consumers want and the competition in the market. And as far as fraud goes, you know, it's just like everything else: it's not without its faults. Besides the effects that it can have on a consumer, in terms of, if they miss or make late payments, they rack up debt, I think BNPL providers face a really difficult road with fraud in terms of detecting and preventing it. By stealing a real consumer's identity and opening up a BNPL account, a criminal can then purchase a significant amount of goods and stick the consumer with the bill, and it's one that, you know, they might not find out about until those payments start rolling in. Since the plans kind of depend on the platform you're using and the customer's needs, that repayment can start within days or it could be months after the fact, which leaves that fraud undetected for a significant period, and that can cause real problems, not just in terms of late payments. It could ding your credit. If you've got these missed payments, it can have real consequences. And then, I think even trickier, to kind of go back to the synthetic identity that you had mentioned: after successfully setting up that account using real PII mixed with fake information to fill in those gaps, criminals will start making those big-ticket purchases, and then when it comes time to pay, this time there's nobody left to make those remaining payments. So that creates substantial loss for retailers, who now have no way to make good on these installment payments. And so I think BNPL fraud is really lucrative for criminals because they can make these highly priced purchases for a fraction of the cost. They're just making that first installment payment, and then they can turn around and sell it for full price.
    I feel like I'm giving away secrets of the trade, but yeah, you know, so I think the good thing here is we're kind of getting to a point now where there might be some regulation for BNPL. You know, people are talking about it. The government wants to get in on regulation for this in order to protect consumers. And we kind of have to hope that maybe there's going to be some fraud guidance and protection baked in as well.
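    To make the economics Suzanne describes concrete, here is a minimal sketch of why paying only the first installment and reselling the goods can be so profitable for a fraudster. The purchase price, installment split, platform fee, and resale discount are hypothetical numbers invented for the example, not figures from Javelin's research.

    ```python
    # Illustrative only: hypothetical numbers showing why BNPL fraud is attractive.

    def bnpl_fraud_profit(purchase_price: float, installments: int = 4,
                          platform_fee: float = 25.0, resale_discount: float = 0.0) -> float:
        """Rough profit for a fraudster who pays only the first installment, then resells."""
        first_payment = purchase_price / installments + platform_fee
        resale_value = purchase_price * (1 - resale_discount)
        return resale_value - first_payment

    # A $1,000 purchase split into four payments: the criminal is out about $275
    # up front and can resell the goods for close to full price.
    print(round(bnpl_fraud_profit(1000.0), 2))  # 725.0
    ```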

    Peter Beardmore

    Who's ultimately left with the liability? Is it the BNPL company, or is it the retailer itself, when the payment is not made?

    Suzanne Sando

    That's a good question. And again, I think that's kind of another gray area that we're working with, because you have to imagine that there is an agreement that's built in when a retailer gets set up with a BNPL platform. But it also, I think, kind of depends on the circumstances of the individual fraud event. It may vary between fraud types. It may vary if the consumer can prove that it was a scam or whatnot. It kind of depends on what really happened. And that's another thing: I think it would be good for BNPL platforms, financial institutions, retailers, and the like to have regulation in place that also addresses fraud.

    Peter Beardmore

    Yeah, because also somebody has to go and investigate this.

    Suzanne Sando

    Exactly.

    Peter Beardmore

    There's a cost associated with that as well.

    Suzanne Sando

    Yep. Those operational costs can really add up.

    Peter Beardmore

    I want to shift focus again here and talk a little bit about money laundering. You know, we've seen over the course of the past six months a couple of major fines, one in Europe with HSBC, I believe one recently here in North America with USAA, both sort of relating to not ignoring AML requirements, but sort of neglecting the full force and effect of AML requirements. Is there an identity component here as well? And if so, how does that work?

    Suzanne Sando

    You know, one of the trickiest things about anti-money laundering practices is the balance between observance of AML policies, adhering to these policies, and the competition to gain account holders and revenue. So while they're following regulatory guidelines, financial institutions want to entice new customers and members. They want to have incentives for opening up accounts, like, for example, policies for immediately available funds. So as soon as you open up the account and you make a deposit, those funds are ready and available for you to withdraw and use. And I think that this attracts a certain group that is constantly monitoring what these policies are and looking for vulnerabilities that they can exploit for their own financial gain, and that fits in perfectly, I think, with that identity component.

    Peter Beardmore

    And where in particular, and you may not have an answer to this question, I'm not sure, but what are the weak points? If you look at this in terms of a kill chain of protection, if you will, what are the weak points in the chain where institutions are failing?

    Suzanne Sando

    I think right from the get-go, when you have someone opening up a new account, you think you know who's on the other end of that interaction, but you don't necessarily know. And that's kind of where money mules come into the picture here. It fits in perfectly with that, because, you know, once those organized crime groups find that vulnerability that they want to take advantage of, that's sort of when they recruit their consumers, their money mules, to start laundering that money with the promise, you know, of some financial reward for very little effort. And so how that starts is they start opening these accounts, they start pounding these financial institutions to get these accounts open, and immediately start using their account to get those funds. And I think the difficult part here is the layering and the concealing of funds. When we talk about AML, it's really difficult for law enforcement to trace when there are so many different hands in the pot and so many different ways that that money is being moved. You know, once you get that account open, you might be depositing a counterfeit check, and you might be using a prepaid debit card, opening up traditional bank accounts. And then on top of it, you kind of add this complexity to the mix: consumers who know what they're doing, they know what they signed up for, they're willingly doing the muling. And then there's those who are scammed into it through, like, employment scams, romance scams. And if you look at the guidelines and the guidance that the FBI has put out about money muling, they mention very specifically that even if you don't know what you're doing, it's still a crime. So I think the two main points here to take away for, you know, financial institutions are, number one, it's that new account opening where you don't know who it is that you're working with. So if you're not verifying that identity, if you're not using good ID proofing, you are going to be overwhelmed with new account fraud. And then, on top of it, it's that element of: you need to make sure that your consumers, your customers and members, know what to look for when they are getting scammed into doing the dirty work for the criminal.

    Peter Beardmore

    And it's really difficult to be able to detect when this is happening when you're the bank, because in most cases it's legitimate people with legitimate Social Security numbers and addresses and.

    Suzanne Sando

    Exactly.

    Peter Beardmore

    Credit histories and other accounts. Right. So the traditional KYC approach to validation of a user or an account holder doesn't necessarily apply in this circumstance.

    Suzanne Sando

    And it's interesting that you bring that up, because in Javelin’s research, we noticed that 55% of consumers said that new accounts were opened in their name at their existing primary financial institution. So if I have this history, like you said, they've got this history of: we know who this person is. We know Suzanne Sando. She has an account at ABC Institution, so she's got to be legit. Of course she wants to open up another checking account. Of course she wants to open up a savings account. And so, like you said, that aspect, I think, is what is really tricky. It's knowing at that point, then, what are the important pieces of information to look at? How do we assess this person who's opening this account and make sure we know who this is, for sure?

    Peter Beardmore

    So obviously this will be a self-serving question, and you can answer it however you like, but I would imagine that you get into conversations with financial institutions frequently about how you go about doing this. Obviously BioCatch is in the game of behavioral biometrics, but what are the conversations like when you get into this? You know, okay, there might need to be another technology in the stack here to figure this out, or to identify an indicator of risk in the circumstance where you've got a legitimate applicant applying for a seemingly legitimate account, but for nefarious purposes. How do behavioral biometrics fall into that discussion?

    Suzanne Sando

    So this is another one where I have a lot of thoughts. You know, something that (chuckles) something that we talk about a lot with our clients, specifically, you know, financial institutions, is that balance of the customer experience, making sure it's frictionless but still maintaining that level of security. And I think a lot of institutions don't want to introduce additional friction, or what they perceive to be additional friction, into the process and risk driving consumers away. And in my eyes, the account opening process is kind of make-it-or-break-it for a lot of organizations. If the application is too confusing, if it takes too long, you're risking application abandonment. So you have to make sure that what you're doing works for both the consumer and your organization. So to kind of bring that back to behavioral biometrics, that is one of the things that we really impress upon FIs as being incredibly important for this ID proofing. Consumers want to know that their PII is protected, and they should be able to trust that organizations have solutions in place to ensure that, you know, accounts aren't being fraudulently opened in their name. But equally as important, when balancing that user experience and that friction, consumers want to know that they're not going to have to jump through all these extra hoops to prove that they are who they say they are. So for me, the best solution takes full advantage of behavioral and device-use biometrics. One of the hardest things for a criminal to fake or recreate is the inherent behaviors of a consumer, you know, that thing that distinguishes you from me. People are very hesitant to give up their passwords, because a password is very easy for a consumer to use, but it's equally as easy for a criminal to crack. But as soon as you introduce behaviors and habits, that really adds a layer of complexity to the entire process. And important to note here is that very little friction is going to be introduced for legitimate consumers, because when you use behavioral biometrics, it's a combination of PII that they should already know and behaviors that they already have. So when you pair some of those behavioral biometrics, so, for example, keystrokes, your mouse movements, the way you move around on your phone, the way you hold your device, that really gives a financial institution key data on the identity of the person on the other end of that interaction. And there are even, you know, more nuanced things that can be used during that account opening process. So the way a consumer moves throughout the application, the way they scroll, how they type, not just the speed but the cadence and all of that, it's very telling. The way you type in the PII that you should be familiar with, your birthday, your Social Security number: consumers who are really familiar with that info enter it differently, and they move around a session differently than a criminal is going to do it. A criminal attempting new account fraud is trying to overwhelm the system and open as many accounts as they can in a short period of time. And so they're going to bring the bare minimum to that application. They're going to do everything as fast as they can. They're not paying attention to optional fields. They're not reading disclosures and agreements. Whereas genuine consumers who are opening these accounts are going to take the time to review more than just the bare minimum.
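    As a rough illustration of how a handful of the signals Suzanne describes, typing cadence on fields an applicant should know by heart, paste events, time spent on disclosures, and overall session speed, might be combined, here is a minimal sketch. The field names, weights, and thresholds are assumptions invented for the example; they are not BioCatch's model or any institution's actual rules.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ApplicationSession:
        # Seconds per character while typing fields the applicant should know by heart.
        familiar_field_cadence: dict = field(default_factory=dict)  # e.g. {"ssn": 0.9}
        pasted_fields: set = field(default_factory=set)             # fields filled via copy/paste
        seconds_on_disclosures: float = 0.0
        optional_fields_completed: int = 0
        total_session_seconds: float = 0.0

    def risk_score(s: ApplicationSession) -> float:
        """Toy risk score in [0, 1]; higher means more consistent with fraud."""
        score = 0.0
        # Genuine applicants type their own SSN and birthday fluently; long hesitation
        # or pasting suggests the data was looked up rather than known.
        if any(cadence > 1.5 for cadence in s.familiar_field_cadence.values()):
            score += 0.25
        if {"ssn", "dob"} & s.pasted_fields:
            score += 0.30
        # Criminals racing through applications skip disclosures and optional fields.
        if s.seconds_on_disclosures < 5:
            score += 0.20
        if s.optional_fields_completed == 0:
            score += 0.10
        if s.total_session_seconds < 90:
            score += 0.15
        return min(score, 1.0)

    suspicious = ApplicationSession(
        familiar_field_cadence={"ssn": 2.1, "dob": 1.8},
        pasted_fields={"ssn"},
        seconds_on_disclosures=2,
        optional_fields_completed=0,
        total_session_seconds=60,
    )
    print(risk_score(suspicious))  # 1.0 -> route to review
    ```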

    Peter Beardmore

    Have you had conversations about this? We talked a little bit about scams at the beginning, but also mules, right, where you've got these situations where you've got a legitimate user who may be in a position where they're doing something that is not in their best interest because they're being coached by a scammer, or they've been recruited into mule activity, which again is not in their best interest because they'd be violating AML laws or what have you. Have you had discussions with financial institutions who are using any of that technology, and gotten any insights into how they're actually communicating with those victims in those circumstances?

    Suzanne Sando

    You know, I think that, well, a lot of our research relies on self-reporting. So if the consumer doesn't know that they're being scammed, we won't know either. And there's also sort of this stigma around scam victims, and I would imagine also around unwitting mules who don't know what they're doing because they're also scammed into it. There's this negative connotation around scam victims, who, you know, sometimes might feel ashamed of what they did because they can't believe they got scammed into doing something. And so that data is not readily available to us. But I do think that having some of those contextual clues with a consumer who is legitimate, like, let's say I'm being scammed into opening another account where I'm already an existing cardholder: my financial institution should be able to use some of those contextual clues to say, this is not how she normally acts when she's opening up an account. Maybe it's taking me a little longer to do something than it should, or maybe it's going even faster than it should. So I think that some of those aspects should play into that identity proofing solution.

    Peter Beardmore

    Let me just ask you a little bit about the decision-making process and the factors that go into financial organizations' or even fintechs' decisions around their anti-fraud stack. Could you shed any light on what conversations are like in 2022 when organizations are looking at what should be in that stack, or maybe what should we trim, or is there a high degree of risk with new tech, or anything along those lines?

    Suzanne Sando

    Sure. I think that one of the main things that we hear a lot is, what is our investment? And I don't just mean monetary. You know, am I able to take the solution and almost immediately plug it into what I'm already doing? Am I able to mold it to be what I need it to be? So that's kind of where having a solution that is rules-based is very helpful, because you can take these aspects of the account opening process and say, okay, Suzanne's application has a score that's kind of low because of this rule that we have set up on our end; let's send this for a manual review. So I think one of the important things here is, is it configurable, and is it something I can easily get going and start using almost immediately, with very little integration or deployment on our end? And another important aspect of this, to go back to manual review: I think that going into using a solution like this, you know that (chuckles) it's not going to take away every manual review. It shouldn't. If you're never having to manually review an application that comes through this solution, something might be wrong here. (chuckles) But the point is that if you can help cut down on the time that it takes for an employee to do that manual review, you're cutting back on operational costs, because now that employee can do what they're really supposed to do, which isn't necessarily manually reviewing every new account application that comes through. So I think those are some of the really important things to consider when you're looking to add to your broader technology stack.
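    Here is a minimal sketch of the configurable, rules-based routing Suzanne describes, where an application whose identity-confidence score falls between two thresholds set by the institution is sent for manual review rather than auto-approved or declined. The threshold values and rule names are assumptions for illustration only.

    ```python
    # Hypothetical, institution-configurable thresholds.
    RULES = {
        "auto_approve_at": 0.80,   # scores at or above this pass straight through
        "manual_review_at": 0.40,  # scores between the two thresholds get a human look
    }

    def route_application(identity_confidence: float, rules: dict = RULES) -> str:
        if identity_confidence >= rules["auto_approve_at"]:
            return "auto-approve"
        if identity_confidence >= rules["manual_review_at"]:
            return "manual review"
        return "decline"

    for score in (0.92, 0.55, 0.20):
        print(score, "->", route_application(score))
    # 0.92 -> auto-approve, 0.55 -> manual review, 0.20 -> decline
    ```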

    Peter Beardmore

    So let me just summarize and make sure I've got you correct: there's a technical integration piece here, where obviously we want to minimize the impact or the disruption that goes along with that. There's the workflow that occurs in relation to that; in other words, is it substantially affecting the workflow that we currently have in place, to the point that it would be too disruptive? And then there's the overall impact on operations. Ideally, if it can lower the total operational time it takes per application or what have you, all the better.

    Suzanne Sando

    Exactly. That's exactly it. And really, at the end of the day, are we weeding out these new accounts that are fraudulent? That's another obviously important piece of it. But yeah, I think that about sums it up.

    Peter Beardmore

    And that was my conversation with Javelin’s Suzanne Sando. As I mentioned at the top of this episode, you can find the report Suzanne authored for BioCatch, New Account Fraud: A Threat Down Every Avenue, on the BioCatch website. There's a link to it in the show notes. Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer, Doug Stevens of Creative Audio and Music, and with support and sponsorship from BioCatch. Special thanks to Suzanne Sando from Javelin Strategy and Research. For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast. Until next time, take care.

  • In Episode 7 of Digital Tells we speak with Iain Swaine, Director of Global Advisory for BioCatch in the EMEA region. Iain reviews four different drivers of vulnerability as defined by the Financial Conduct Authority in the UK, how different regions’ authorities are approaching financial institutions’ responsibilities when it comes to vulnerable customers, the challenges that faster payments present, particularly when dealing with scams perpetrated on vulnerable customers, and how institutions can help identify and protect vulnerable customers by leveraging behavioral biometrics.

    Peter Beardmore

    When I was first married in the mid-nineties, my wife at the time was a part-time bank teller while she was finishing college. She'd often come home with stories, often heartwarming stories about interactions with all sorts of people, elderly clientele among them. There was a regular in her branch who was cognitively impaired, who worked at a grocery store across the street. He'd come in every week to deposit his paycheck. There was another regular who was visually impaired. And it was evident from her descriptions that she really liked and cared about these people. And all the tellers there looked out for them, took extra time with them, and cared about their well-being. According to the Financial Conduct Authority definition, a vulnerable customer is someone who, due to their personal circumstances, is especially susceptible to detriment, particularly when a firm is not acting with appropriate levels of care. In the era of digital banking, identifying and caring for vulnerable customers isn't as straightforward as it once was. And arguably, in an age of increasing scams and other financial crimes, these populations are increasingly at risk. The topic of vulnerable customers has continued to gain in frequency and momentum over the past several years. The aforementioned Financial Conduct Authority in the UK, the Consumer Financial Protection Bureau in the US, the Monetary Authority of Singapore, the Royal Banking Commission in Australia, and many other regulatory and standards organizations around the world have been looking at what classifies a banking customer as vulnerable, and what sorts of obligations banking institutions hold with respect to safeguarding vulnerable cohorts within their customer base and the population at large. I recently had a conversation with Iain Swaine, who has served as director of Global Advisory for BioCatch in the European theater for about six years. All the while, the narrative of what makes up a vulnerable customer, and how to take care of them, has evolved. Here's Iain.

    Iain Swaine

    The Financial Conduct Authority, which guides the UK banks, put out an advisory last year about vulnerable customers. And it wasn't explicitly looking at fraud; that was one of the things they were looking at, treating them fairly, making sure that they were not left out in this digitized world. And it looked at everything from potential disability, so hearing, vision, cognitive impairments, people where English wasn't the native language. It looked at age, from the very young, teenagers and then into their twenties, where they might not have the knowledge about the financial world. But of course, the one which is, I think, getting the most attention is the older generation, the non-digital natives, you know, the 65 plus, where, particularly with COVID, they were being forced into a digital world which maybe they were ill prepared for.

    Peter Beardmore

    So I learned from speaking with Iain that the FCA had actually gone into some greater detail defining vulnerable customers by various drivers: health, life events, resilience, and capability. So I asked Iain to offer some insight into each of these areas, starting with health. And he started by making the point that vulnerability is not a binary concern. The degree to which someone is vulnerable may evolve over time.

    Iain Swaine

    I think vulnerability, we need to say, is not just a binary, vulnerable or not vulnerable; it can be a transient thing. On health, you could have, say, a chronic condition, which might be leading toward end of life, that might actually make you more vulnerable. Or it could be a temporary vulnerability with health, where it's a short-term illness rather than a terminal one. They may not be able to make the same level of choices. And again, that could be everything from someone who needs a carer to look after them and do that, to someone who's previously been in good health and then has an accident, where they're no longer capable of making decisions, or the decisions they make can be quite flawed.

    Peter Beardmore

    Okay, so there's health. Let's talk about life events.

    Iain Swaine

    Okay. Well, life events can be something that happens. It could be, similarly, you know, a longer-term or shorter-term thing. So one life event could be that you or I retire. Another life event could be bereavement. So you have a bereavement, and you've just lost a mother, father, child, partner. Where does that leave you in an emotional state? And it covers really things which might cause you not to be capable of dealing with stuff that otherwise would be in your stride. And I think that's more of the transient type of thing, where it's a temporary period of vulnerability.

    Peter Beardmore

    So tell me about the resilience category.

    Iain Swaine

    The resilience, most of it is around financial resilience. So have you got the fact that you're in debt, you can't actually cope with any financial shocks? Have you got low savings? Have you got your outgoings exceeding your income? Is it erratic? I think resilience is one of the vulnerabilities which is going to rear its ugly head globally by Q3, Q4 of this year, unfortunately. In Europe we've seen the awful spike in gas and electricity prices because of events.

    Peter Beardmore

    Iain touched on a topic that I think will be fodder for a lot of future conversation. If you believe global markets are heading into recession as a result of the world's central banks trying to head off inflation, large segments of the population may soon become less financially resilient than they have been in recent years. What does that mean in terms of their susceptibility to financial fraud and scams? Finally, I asked Iain about capability as a category of vulnerability.

    Iain Swaine

    Capability is really looking at how well people can actually look after the financial side of their life, both digitally and just in the real world. So do they actually have the knowledge to manage their finances? Have they actually got the numeracy skills to even go back and look at the money flow, and look at and understand where money is going in and out? We've got people who don't have English language skills; consider things like the Ukrainians coming in as refugees. They don't know English. How are they going to cope with some of this? What does the bank need to do to make sure that they are looked after and they can deal with it?

    Peter Beardmore

    And so clearly, when you look at health, life events, capability, and resilience, there can be some overlapping circumstances. And it may lead one to ask, well, how could a financial institution even tell when a customer may be vulnerable, and what's their obligation to do anything about it? We'll get to the how in a few minutes. But the what is just as important. What are the obligations? I asked Iain about how governments and standards organizations are looking at this, and not surprisingly, this is an area that seems to be evolving, and there are similarities and differences across countries and regions.

    Iain Swaine

    Okay. So I think it depends where in the world you are. As I was saying earlier, the UK has taken the lead; the Financial Conduct Authority put out the guidance, and to quote, firms are obliged to treat vulnerable customers fairly, with a level of care that is appropriate given the characteristics of the customers themselves. The Australian commission put something out there which is more around mis-selling, but I think, from what I've seen, it's not become a regulation yet. They're actually guidances, and they're trying to say to the banks, get your house in order. If you don't, you're going to get regulation.

    Peter Beardmore

    Iain went on to explain that other regions have come at it differently. In Australia, for example, the financial industry has been more proactive, recognizing the effects of hyper-digitization and the dangers of faster payments on vulnerable populations. In the U.S., the Consumer Financial Protection Bureau has been slower to act, partially due to political hurdles, but also because the dangers of faster payments and peer-to-peer payment apps are just coming to the forefront of public awareness. I asked Iain about faster payments. In the U.S., systems like Zelle, Cash App, and Venmo come to mind. But while this is a fairly new phenomenon in the U.S., it's been an ongoing issue in Europe and Asia for some time.

    Iain Swaine

    I think when we look at faster payments, it's literally that instantaneous, real-time payment. So that money comes out of your account. You press the confirm button, and there's a certainty of fate within less than 60 seconds in most of the faster payment systems around the world. And that certainty of fate means that that money is debited from your account in real time. It'll go across a bank-to-bank network, or in the more advanced ones it will be a hub-and-spoke mechanism, which then routes it through, and then it's at the other end for the other person to actually get access to. When we look at that compared to other things, we've got the traditional clearing cycle; it was three days, it was more batch driven. I think one of the things about faster payments is that it closes what would have been a gap for people to realize that they've been a victim of fraud. And this is especially true of a vulnerable person. So if you've got someone in their sixties or seventies, they might have the light go on: why did I do that? In a non-real-time system they can actually ring the bank, using the number on the back of the card, and say, you know, I think I've just been scammed, and the bank can actually put a stop on the payment. And even though it's gone into the payment rails, it's not gone through to the person on the other side. They can actually claw it back. The certainty of fate in a proper, real-time system is that if you click confirm, even if you've got that little thing at the back of your mind saying, do I really want to do this, you click it, and then you realize 5 minutes later, God, what have I done? Or you've got an elderly person who speaks to one of their children. Mum, mum! What have you done? You need to ring the bank. The bank will say, I'm sorry. The money's gone.

    Peter Beardmore

    Finally, I got to the question what can be done? In some cases the customer self-identifies that they are in a vulnerable category. It may come out of a questionnaire during the account opening process, or it may become apparent during an interaction. Let's say a customer updates their account due to the death of a spouse. In that case, the account can be flagged. But what can the bank do in some cases in a purely digital environment to identify a customer who is in a vulnerable category and therefore may need a different level of protection?

    Iain Swaine

    Yeah. So when we're looking at the digital channel, the banks are asking us, and it's a very hot topic, because I was speaking to a number of them yesterday, and five banks came to me and spoke to me about this. They asked, when we're looking at behavioral biometrics, the behavioral insights behind the device, what can we do to actually identify, really through the lens of fraud prevention and customer protection, and make sure that that person, if we see indicators of vulnerability, is adequately protected, in a manner which is transparent and non-intrusive and is following legitimate public interest, because we are protecting them, knowing that they may well be more susceptible to certain types of attacks. An example of what we were already doing is we get year of birth coming through. We don't collect full personal information; we never do that with the behavioral biometrics. But by getting year of birth, we know which cohort they're in. And we've got a bunch of analytical models which are predictive, to say, if someone is over the age of 70, are they behaving in a manner, with the hand-eye coordination, the cognitive choices, that's consistent with someone of that age? If they're not, we can return a flag back which says, you know, we've got an age mismatch; they're younger, this doesn't look like it's actually the stated person behind there. Now, we use that typically in a fraud prevention piece, but we also have the case where we don't see any indications of fraud, we don't see any of the gross population-level changes that a fraudster would make, and it actually looks like genuine behavior. At the moment, the banks are just consuming that. They're saying, well, that's something we'd like to consume. We can get an insight as to who's really operating the accounts.
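    A rough sketch of the cohort check Iain describes might look like the following: compare the stated year of birth against a behavior-derived age estimate and return only a coarse flag the bank can consume, never the underlying PII. The function name, tolerance, and age bands are assumptions for illustration; the real models are far richer than this.

    ```python
    from datetime import date

    def age_mismatch_flag(stated_birth_year: int,
                          behavioral_age_estimate: int,
                          tolerance_years: int = 15) -> dict:
        """Return a coarse flag the bank can consume alongside fraud signals."""
        stated_age = date.today().year - stated_birth_year
        gap = stated_age - behavioral_age_estimate
        return {
            "stated_age_band": "65+" if stated_age >= 65 else "under 65",
            "age_mismatch": abs(gap) > tolerance_years,
            "direction": "younger than stated" if gap > tolerance_years else
                         "older than stated" if gap < -tolerance_years else "consistent",
        }

    # The account says the holder was born in 1950, but navigation, hand-eye
    # coordination, and cognitive-choice models look like a user in their twenties.
    print(age_mismatch_flag(stated_birth_year=1950, behavioral_age_estimate=25))
    ```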

    Peter Beardmore

    Iain went on to explain a number of other indicators of potential vulnerability that BioCatch can identify using behavioral biometrics. But it begs the question: what to do with that information? Is a behavioral biometric indicator worthy of a conclusion that a different banking decision should be made? Should there be, say, more authentication? Or should this lead to a more thoughtful process?

    Iain Swaine

    As you were saying, Peter, when we look at the traditional mechanisms for saying we see something anomalous and we actually want to double-check things, in most cases they are not fit for people who would fall into one or more of the vulnerability categories. So if we see something suspicious going on in an account, traditional information security or IT security, or any form of orchestration, would say, well, let's confirm it really is the right person behind the account, to add another level of security. So we say, well, here's an extra step coming through to you, or we need you to confirm this by clicking a link in your email. These kinds of things, where you're adding a lot of complexity, you're taking it out of the channel where the customer's interacting. That's going to cause drop-off, it's going to cause more confusion. It can lead to them being digitally isolated, because they get scared of it.

    Peter Beardmore

    So taking care of vulnerable customers is not just about identifying vulnerable customers. It's about validating those indicators and then tailoring customer experiences that lead to positive outcomes.

    Iain Swaine

    And the banks have taken the behavioral signals and the behavior of the device, but importantly, the behavior of the human, not just in how they're typing and that sort of thing, but what it means when you combine it together cognitively. Are they confident in what they're doing? Are they distracted? Are they showing signs of stress? Does it actually look as if they're being guided by someone as they're doing this? You sort of feed those insights in; we use them in actual fraud prevention models, but we can also pull some of these pieces out, and the banks are saying, feed them to us, not necessarily going into our fraud engine but actually going into the banking decision piece, some form of orchestration. One conversation we're having with a couple of banks is, can we do behavioral choreography? Can we actually guide the user experience by taking in the cognitive signals, the things they're putting out, the things we know about them internally in the bank, the things that are just slightly off from their standard profile, and tailor the experience for them?

    Peter Beardmore

    There's a lot of non-binary data to draw insights from, to figure out how best to connect the dots, and then to facilitate an experience that is, on one hand, protective, preventing scams, for example, and on the other hand, just helpful. Because while it may not be inherently obvious when an elderly or a cognitively or visually impaired client enters your digital bank, our obligations, while they may not be there from a regulatory sense yet, certainly are there from a moral and ethical sense: to do our best and to take care of them. Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with support and sponsorship from BioCatch. Special thanks to Iain Swaine. For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast. Until next time, take care.

  • The sixth episode of Digital Tells: A BioCatch Podcast examines the market for behavioral biometrics. What are the top challenges in preventing fraud in digital channels, and what technologies are on the radars of fraud practitioners? How do organizations like BioCatch partner and innovate with financial institutions to identify and isolate the Digital Tells that can help detect fraud? And how may behavioral biometrics evolve in the metaverse?

    Digital Tells’ host Peter Beardmore opens with an account of his recent conversation with Billy Beane from The Oakland A’s baseball organization. His discussion draws parallels between the evolution of baseball in the 21st Century and the game-changing digital analytics that have transformed nearly every industry. Tom Field, SVP of Information Security Media Group, discusses the 2021 Fraud Transformation Survey: Detecting and Preventing Emerging Schemes. BioCatch Chairman Howard Edelstein and BioCatch Co-founder Uri Rivner share stories of innovating with customer development partners. And Peter Beardmore reflects on future opportunities for gleaning emotional insights in impassive online transactions.

    Transcript:

    Peter Beardmore

    When first introducing the concept of Behavioral Biometrics back in episode 1 of this podcast, we drew a parallel to the story of Billy Beane - the Oakland Athletics baseball team general manager whose adherence to Sabermetrics, a statistical and analytical approach to the game, revolutionized the sport. A few weeks ago, shortly after recording episode 1, I actually had the opportunity to speak with Beane at an event BioCatch hosted. I blogged about it shortly thereafter; there’s a link in the shownotes.

    Anyway, I took the opportunity to challenge Beane a bit. Because while he became famous following multiple division championships, a best-selling book and a hit movie (starring Brad Pitt) ~ there’s been a lot of criticism about Sabermetrics kind of - well ruining the game of baseball. There’s this pace of play problem, games are running longer, attendance is down, the fan base is older than those of other sports.

    And Beane was - unapologetic. He said look - data analytics is revolutionizing everything. It’s not just baseball. It’s every industry. And the game is different. Everyone has access to the same data he has. So a part of the game is now the analysis of every minute decision he and his in-game managers make / on the internet, on sports talk radio. And while Beane didn’t mention them - there’s also fantasy sports and sports betting. In fact in 2019, the last full season before the pandemic, Major League Baseball earned record revenues ~ despite the complaints of purists that attendance is down 14% from its highs.

    The game of baseball didn’t end in the early 2000’s and start anew. It evolved, and continues to. And so too does human engagement with the digital world continue to evolve.

    In this, the final episode in Season 1 of Digital Tells - we’re taking a look at how the market that BioCatch serves - the business of preventing fraud - is evolving. What are their greatest needs? How are their needs changing? And how is BioCatch innovating to meet some of those evolving needs? And finally, how might behavioral biometrics be applied to future opportunities and requirements that stem from the metaverse?

    That’s a lot to do, so let’s jump right in with a recent discussion I had with Tom Field. Tom is head of editorial operations with Information Security Media Group. Since launching their original property, Bank Info Security, about 15 years ago ~ ISMG has expanded to 34 media properties and an audience of 950 thousand security leaders.

    ISMG recently published a study that BioCatch sponsored, there’s a link in the show notes, about the latest fraud trends, and the top priorities and challenges for fraud practitioners.

    I asked Tom about the challenges fraud practitioners are dealing with ~ particularly around choosing and implementing technology that prevents fraud ~ while ensuring that that same technology isn’t also preventing business.

    Tom Field

    Well, great question, because we asked, what are your top challenges in preventing fraud attacks, particularly in the digital channels? And tied for number one was the lack of resources or budget to be able to adopt new fraud prevention tools. And that's no surprise. Nobody ever has enough financial resources, and there are more tools out there now than anyone can hope to account for. So lack of resources, number one. Tied with that: limited visibility into the risks introduced by new digital technology, for instance, faster payments platforms. And again, so much of this comes back to the digital transformation, where your employees and your customers alike are more remote, more digital than ever before, and you're just challenged to be able to understand which users, which devices, which applications you're dealing with. So no surprise with visibility; that's consistent. But coming right behind that, a percentage point behind, was increased customer friction due to multifactor authentication or other controls. And it tells the story that our respondents are challenged because they do want to add extra controls, but they're extremely concerned about putting off their users to the point where they abandon a transaction or abandon the company altogether. So it's a top-three challenge.

    Peter Beardmore

    So that friction issue is a recurring one, right? We seem to be at this point in the evolution of our digital lives where it’s easy to apply technology to stop the bad stuff, but if you can’t apply it with some degree of surgical precision, you can easily kill the patient / or perhaps to be less dramatic / ruin the relationship.

    Tom Field

    You know what doesn't come out in the survey, but we understand is that the landscape has changed considerably. Customer expectations now are whether I'm dealing with my bank, whether I'm dealing with my grocery store, whether I'm dealing with my favorite Chinese restaurant. I expect the same digital experience I get from Netflix, Hulu and Amazon.

    Peter Beardmore

    So are these problems insurmountable? Are financial institutions confronting the challenge?

    Tom Field

    You know, I think there's a couple of things you can be encouraged about. One is that of all the respondents we had, only three percent reported that they would see a decrease in funding for anti-fraud in 2022. So 97 percent of respondents are expecting at least level funding, if not significant increases. So I celebrate that, first of all. Next, when you look at what they want to add in the next 18 months: transaction analysis and monitoring tools, behavioral biometrics and analytics, device ID and intelligence, cross-channel fraud detection, physical biometrics, voice, facial, fingerprint. And so that tells me that there's a much smarter approach to anti-fraud controls, looking less at what somebody knows and more at who somebody is.

    Peter Beardmore

    Financial institutions are moving forward. They are confronting challenges. They’re looking beyond the limitations of historically binary account data and credentials (what somebody knows - inherently stealable information) - and bridging that data with ‘who somebody is’ - for purposes of preventing fraud - yes - but there’s much more opportunity that comes with understanding your customer. Speaking of customers ~

    How do organizations like BioCatch evolve to meet customer needs? How can behavioral biometrics evolve to identify new opportunities and solve new problems for financial institutions and digital channels?

    You may recall meeting Uri Rivner in our first few episodes. Uri is one of BioCatch’s founders. When I interviewed Uri he told me a story that illustrates how BioCatch and our customers collaborate, share relevant data, and use that information to zero-in on the Digital Tells that indicate out-of-the-ordinary behavior.

    In this case, BioCatch was working with a new customer, a big online payments provider. It was the early stages of implementation, and we were fine-tuning behavioral biometrics on an ecommerce platform servicing small business websites. One of those businesses was a manufacturer of paper straws. Here’s Uri.

    Uri Rivner

    This was an e-commerce company in San Diego selling paper straws. And of course, in California, you have to use paper straws rather than plastic straws. Right? Normally you buy a pack. You know, if you're a restaurant, you can buy a crate. Full crates, OK? Ten thousand straws. There was a huge order that was made using that platform. I'm talking about twenty five thousand dollars worth of straws, all sorts of straws. Sixty two crates. And all of these crates had to be shipped very urgently with a huge shipping bill. Ten thousand dollars of shipping to the island of Tuvalu. Where is Tuvalu? It's an island in the Pacific Ocean. It is eleven thousand people. They don't need that many straws.

    Peter Beardmore

    Ok, so once you know where and what Tuvalu is, any human being can deduce that there may be fraud afoot, right? But for non-intelligent IT and ecommerce systems, not so much. And, well, I should cut to the chase, this wasn’t a success story, initially anyway. The transaction went through.

    Uri Rivner

    They allowed the money to move, and then they actually were curious about that specific case. So they called the merchant, this paper company, and asked, can you tell us about this interesting order to Tuvalu? They said, yeah, the guy told us that they have like a resort on this Pacific Ocean island and they need a lot of straws. And, you know, the shipping bill was like crazy, like ten thousand dollars. But they gave us a corporate credit card and it went through. So once the money was inside their account, the merchant account, they got a phone call from that person. And they were saying that they made a mistake, a terrible mistake. Like, they didn't realize that the shipping is so expensive. Ten thousand dollars for the shipping, this is way too expensive. There's another company that could do it for like two thousand dollars. Can we do them a favor and move the 10,000 to that shipping company, and they'll do that specific shipping and then anything else that we need, and all that. And they already got the money, so they said, OK, fine, we'll do it. So they move ten thousand dollars to that shipping company. Of course, it was all a hoax. There never was a shipping company; it was just a fraudster's bank account in the US. So that's the result of the fraud. So the fraudster has a stolen credit card, OK, they do all of this elaborate scam. And at the end of the day, the merchant moves ten thousand dollars of their own money to the bad guys, because they also have to move the original money back, right, it's a chargeback, they have to return all of that. And this is a perfect example of a scam, right. Scam, social engineering. You know, there's no end to it.

    Peter Beardmore

    So what did we learn? Well, in addition to hearing a well-told story about a classic chargeback scam that small businesses are constantly encountering… when BioCatch went and looked at the session we discovered something interesting. In this case, the user didn't know anything about the credit card they were using. It was all cut and paste. And for credit card numbers that's actually not all that unusual in business transactions - often that number is stored somewhere and just copied. The expiration date was also pasted in. That's actually a bit more unusual. I mean, who needs to copy/paste a 4-digit number? Right? But the Digital Tell in this case was the zip code. It was also pasted in - and in a shipping field too. I mean, who copy/pastes their own zip code? Turns out, pretty much nobody.
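
    To make that concrete, here's a minimal sketch of how a paste-into-zip-code signal might be scored in a checkout flow. This is purely illustrative - the field names, weights, and threshold are assumptions made up for the example, not BioCatch's actual model or feature set.

    ```python
    # Illustrative only: a toy score for paste events in checkout form fields.
    # Field names, weights, and the threshold are assumptions, not a real model.

    # How unusual is a paste into each field among genuine users? (higher = rarer)
    PASTE_RARITY = {
        "card_number": 0.1,   # pasting a card number is common and largely benign
        "expiration": 0.5,    # pasting a 4-digit expiry is less common
        "shipping_zip": 0.9,  # almost nobody pastes their own zip code
    }

    def paste_risk(pasted_fields):
        """Sum the rarity weights of the fields the user pasted into."""
        return sum(PASTE_RARITY.get(field, 0.0) for field in pasted_fields)

    session_pastes = ["card_number", "expiration", "shipping_zip"]
    score = paste_risk(session_pastes)
    print(score, "-> review" if score >= 1.0 else "-> allow")
    ```

    The point isn't the particular numbers; it's that a field almost nobody pastes from memory becomes a disproportionately strong signal when it is pasted.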

    But it's this kind of information sharing and learning that leads to new solutions and use cases. The process of innovation that BioCatch shares with its development partners is continuous. This zip code variable today informs a lot of BioCatch use cases, and is literally one of hundreds now being used for BioCatch's Strong Customer Authentication solution (or SCA). We haven't discussed SCA on the podcast yet ~ maybe more to come on that in season 2 ~ but basically, it's about ensuring that ecommerce credit card transactions are genuine.

    To put a finer point on it, I asked Uri Rivner specifically about how these new ideas and projects come about with BioCatch customers, particularly those customers who eventually become BioCatch development partners.

    Uri Rivner

    So the idea is you respond from time to time to emerging needs. This is the creativity and the more experimental phase of research. New detection capabilities are another good example, or anything to do with scams, because they're also evolving and the existing products simply don't help. So you need to do some research, work with a design partner, create a model, make sure that it's running and that it's repeatable and scalable, then make it a part of your product offering. It is important to put it inside the product offering because then everyone can benefit from this, not just the specific project of course, but that's the way things are being done. You know, typically in this line of work, you don't have smart people sitting in a room designing something and then launching it and asking people to use it. It's more like, you know, is it working fine? And if it's not working fine, you want to know why it's not working fine. What is it that it's not detecting? Let's actually figure it out and let's build some functionality around it. Sometimes it's something that does not detect well, sometimes it's something like a new type of, let's say, insight that the industry is interested in. I think it's a 2-way conversation, by the way. It's not just that it's coming from the market. Sometimes it's the data science team coming up with all sorts of very interesting insights and then talking to the banks and saying, hey, would you be interested in that? So I wouldn't categorize it as a one-sided stream of information. You know, BioCatch has smart people. The banks have smart people. And typically it's the combination of the two to develop these sorts of new offerings.

    Peter Beardmore

    When I first came to BioCatch, this development process, and the way the product roadmap works as a result ~ came as something of a revelation to me. Having spent most of my career in product marketing for security and hardware companies ~ I've been involved in all sorts of customer interactions, thousands of trade show floor conversations, given hundreds of executive and user briefings, been behind the 1-way glass for focus groups ~ and usually found ways to bring valuable insights from these interactions to the product lifecycle management process ~ but at the end of the day ~ product teams build products and hopefully… customers buy them. But when it comes to working with behavioral data, you REALLY need to work with live data, share insights, tweak UIs, and get the 'truth data' - the results of investigations into fraud and customers' intentions - to really understand what these Digital Tells are, where to find them, how to interpret them, and then apply them. The term "development partner" has taken on a whole new, much more literal meaning for me.

    These partnerships and conversations with our customers also reveal other pressing business issues. A few minutes ago Tom Field talked about the importance of customer experience. You also met Howard Edelstein, BioCatch’s chairman in earlier episodes. When he and I spoke he also shared some experiences about working with our customers and development partners to uncover other potential benefits of behavioral biometrics.

    Howard Edelstein

    And I'll give you one example, from a call I was on 10 days ago. They actually brought a bunch of their high-end executives who are interested in innovation and differentiation in the bank to a discussion that we had about what else behavior can do for them. All of a sudden you see the ideas flowing out, because they just didn't have any exposure to the technology that could help them. Because in this particular case, it was, quote unquote, locked up - focused on fraud prevention and risk mitigation, not on customer experience or tuning, you know, products. Like, for example, you know, who do you give more credit to, faster? You don't have to frustrate them. I mean, there are a million things you can do if you have comfort, knowing that the person who walked into the bank is X, Y and Z.

    Peter Beardmore

    Knowing the person, it turns out, may reap benefits far beyond detecting fraud or determining if users are involved in scam or money mule activity. Imagine the potential commercial benefits if digital merchants could read the digital tells of a customer in the same way a salesperson at a jewelry counter or a car dealership can read the body language and tone of a prospective customer. Emotional insights into customer behavior may be the next holy grail of ecommerce. What does it take to build trust with a customer, to put them at ease, to communicate the right message at the right time? I recently wrote a blog on this topic and BioCatch published a paper called Creating Trust and Ease with Emotional Insight in The Digital World. We’ll share a link in the show notes.

    Before we wrap up this season of Digital Tells, I just wanted to share some thoughts about what else may be coming. What forms might behavioral biometrics take in the future? The week before recording this episode, Facebook announced its name change to Meta, which came with all sorts of futuristic messaging and a CGI visual backdrop resembling futuristic dystopian movies like Tron and Ready Player One.

    It got me thinking, will behavioral biometrics have a role to play in a future where AR, VR, and wearables are ubiquitous in our daily lives?

    I asked Uri Rivner if he thought behavioral biometrics would have a place in that world.

    Uri Rivner

    Yeah, definitely. It's only a matter of time. And I think also inside cars. So the way you drive the car, or your behavior inside all sorts of, you know, situations. Yeah, definitely. That's another thing that behavioral biometrics at some point will evolve into. At the end of the day, you track human behavior. Right. And you try to understand the way they normally behave in order to see if there's any kind of anomaly in their behavior, to see if there's any known criminal behavior or other types of behaviors that are likely undesired. So yeah, I would say that today we operate using certain types of channels. But as time goes by, there's going to be new channels, and behavioral biometrics will evolve with those channels as well.

    Peter Beardmore

    So, as we learned at the beginning of this episode, the market is clearly communicating a need for the benefits of behavioral biometrics today - and there will almost inevitably be a need to understand Digital Tells for as long as digital evolves.

    Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with the unwavering support and sponsorship of my employer, BioCatch.

    Special thanks to Tom Field, Uri Rivner, and Howard Edelstein.

    For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast.

    This is the final episode of the first season of Digital Tells, but please subscribe if you haven’t already to Digital Tells on Spotify, iTunes, or wherever you listen to podcasts. You may find us back in your feed with a bonus episode in the not-too-distant future.

    Until then, take care.

  • The fifth episode of Digital Tells: A BioCatch Podcast dives into the phenomenon of online money mules. What is a mule? How does one become a mule? Why do or why should financial institutions care about mule activity? And what can be done to detect mules?

    We open with commentary from Julie Conroy, Head of Risk Insights and Advisory at Aite-Novarica Group. Julie discusses the role of money mules inside criminal organizations and introduces some different mule back-stories. Digital Tells’ host Peter Beardmore digs deeper into 5 mule personas. BioCatch’s Raj Dasgupta explains the regulatory and reputational risks that mules represent to financial institutions. And John Paul Blaho, Senior Director of Product Marketing, shares a look at how behavioral biometrics can enhance mule account detection.

    Transcript

    Peter Beardmore

    Are you a mule? 

    So, the title of this episode is admittedly a bit preposterous. If you’re listening to this podcast, it’s doubtful you're a money mule (or a drug mule, or an actual mule for that matter). But the point I’m attempting to make with this title is that there are different kinds of mules out there ~ ranging from the completely complicit (let’s just call them criminals) to the vulnerable / gullible victims of scam activities that we’ve discussed in previous episodes. 

    The FBI defines a money mule as "someone who transfers or moves illegally acquired money on behalf of someone else. Criminals recruit money mules to help launder proceeds derived from online scams and frauds or crimes like human trafficking and drug trafficking."

    Ok, that makes sense, right… I mean if you’re going to make money scamming people, stealing, selling drugs, human trafficking ~ that money’s got to get someplace where you can eventually retrieve it, right? ~ ideally below the radar of authorities ~ or even better ~ laundered sufficiently so the proceeds can be spent or invested without drawing the attention of law enforcement. 

    In episode 2 Tom O'Malley walked us through all the specialized functions in a cybercriminal syndicate that the U.S. government prosecuted a few years back. Remember there were malware developers, crypters, spammers, bulletproof hosters, account takeover specialists - and then there were these cash-out specialists. These were the people who ran the money mules - they had these networks of accounts where money could be moved quite quickly - beyond the reach of the direct victims, their financial institutions, investigators, and authorities.

    On this episode of Digital Tells - A BioCatch Podcast - we’re taking a look at the mules themselves. What is a mule? How does one become a mule? Why do or why should financial institutions care about mule activity? And what can be done to detect mules?

    Act 1

    Julie Conroy is the head of risk insights and advisory for Aite-Novarica Group. She advises financial institutions, vendors, and merchants about risks relating to fraud and financial crime ~ strategies and tactics to mitigate those risks ~ and what that all means for customer experiences.

    I spoke with Julie recently about how financial institutions are struggling with mule accounts, and I asked her specifically about the spectrum of mule personas that I mentioned earlier.

    Julie Conroy

    You have this organization of functions in financial crime, just as we do on the side of the good. And so you do have some people that are specifically leveraging stolen identities, synthetic identities, to open up new accounts with the express purpose of exiting these funds. You also have folks that have been brought over on visas, and they're using their identities for a period of time to open up accounts. And then after those accounts have been used to serve their muling purpose for a while, they go back to their country and there's no consequences, really. Somewhere in the middle, you have people that respond to the make-money-in-your-sleep signs. They know ultimately that this isn't quite right, but they are still using their own identities in order to help facilitate the muling. And then at the other end of things, you have people that truly have been duped by romance scams or other things into opening up these accounts, facilitating the sending of funds, and truly don't know that what they're doing is facilitating financial crime.

    Peter Beardmore

    At BioCatch, we’ve actually developed personas to help us bring some clarity to the spectrum of mules – from the more complicit to less complicit – we’ll share a link in the shownotes.

    OK, so let’s briefly discuss these personas:

    There's The Deceiver – who opens an account specifically to perpetrate fraud – The Deceiver is obviously the most complicit on the spectrum.

    Then there's The Peddler – who sells their genuine bank account to a criminal.

    There’s The Accomplice – a willing participant who’s chasing an “easy money” opportunity.

    There's The Chump – who executes a transaction believing the money is clean – this is your prototypical scam victim.

    And then finally there's The Victim – a victim of credential theft, unaware that there's even been a break-in – The Victim is obviously the least complicit persona.

    As I started to research these personas, a couple things struck me.

    First – with the exception of The Deceiver – the most complicit – and The Victim – the least complicit – the others – The Peddler, The Accomplice, The Chump – they're all kind of sad, right? I mean, these aren't society's winners… people basking in their own success? 'Living the dream,' so to speak? More likely – they're desperate – they've fallen for easy money or get-rich-quick schemes – or romance schemes – and they're either desperate enough to knowingly sell their own identity and accounts – or desperate enough to live with the ambiguity that they may have involved themselves in a criminal enterprise – or they're just gullible – I mean, if you're gullible enough to execute a transaction for a stranger on the internet, believing nothing is wrong – could that possibly be the only time someone has taken advantage of or exploited you?

    I'm reminded of that Clint Eastwood character in his movie The Mule that came out a few years ago. This very old man, clearly he'd made a slew of bad decisions in his life, and realizing in his final years that he was destitute – his home was foreclosed – and he's alone / his family had lost faith in him and his ex-wife (with plenty of justification) was rubbing his nose in his failures – but with just a glimmer of hope for redemption with his family and friends – he made a naïve decision to earn some money – and then around the time he realizes what he's doing and who he's involved with – the draw of all that money takes over – and the bad decisions just spiral.

    And then there's the second thing that struck me, when I thought about the actual online behavior of money mules.

    Well, there are elements of the account takeover problem we discussed in episode 2 – there are some of the Account Origination and identity theft issues we discussed in episode 3 – and the scams we explored in episode 4 are pretty thick here too. So as complex as the mules themselves are, so too will be connecting the dots through a continuum of online activity.

    What are the Digital Tells of mules?

    Well, before we get to that, there’s another important question we need to ask. 

    Act 2

    You may recall meeting Raj Dasgupta in an earlier episode. He’s director of fraud strategy at BioCatch. I asked Raj – Do Banks even care about mule accounts? And if so, why?

    Raj Dasgupta

    The banks do care, and they're required to care because of regulation. So if there is money laundering activity going on within their account base, they're responsible for it, because by law they're required to look into suspicious activity and report that activity and take action. If they don't, then they're inadvertently playing a role in money laundering, and money laundering can be used to fund a variety of fraudulent activity, criminal activity, including terrorism. So there is a very strong responsibility on the part of the banks to make sure that money laundering is not happening within their account base. It's against the law. And if they are not following the law, there can be heavy penalties and fines levied on them. And then, of course, there's a reputational issue. If you are known to not pay attention to mule activity going on, it's bringing you a lot of negative press, and you wouldn't want that. But that's on the softer side. On the hard side, if you've not followed the law, there will be real dollar-value impact in terms of fines and fees.

    Peter Beardmore

    So there’s regulation and reputation. And those are good reasons – reasons that have been around for a while. But as Julie Conroy explains – there are market dynamics at play that have accelerated the urgency for financial institutions to focus on mules. Here’s Julie:

    Julie Conroy

    It was something that we at Aite Group had been seeing building even prior to 2020, as you have faster payments spreading across countries across the globe. And so, as we've seen faster payments hitting large markets like the US and Canada, even in 2019 we were seeing a greater emphasis on, a desire to invest in, mule detection technology in those countries. Then you bring in 2020 and add a massive global pandemic and the hundreds of billions worth of stimulus that were pumped into ecosystems around the world, and the fraudsters quickly responded. And what we saw was, as you were stealing hundreds of billions of dollars from things like unemployment claims and small business loan programs, you need a very robust mule network to exit those funds from the system. And so we have a couple of bodies of research that showed marked increases in mule activity during the pandemic and carrying into 2021. And with that, you know, that only further emphasized the importance from a financial institution perspective of starting to build more robust controls to detect this activity, not least because many believe that there is a moral obligation to be stopping this financial crime.

    Peter Beardmore

    This stimulus fraud that Julie mentioned is a phenomenon that BioCatch has seen up close. It stands to reason that if you're going to massively defraud the government of program money from the U.S. Paycheck Protection Program or enhanced unemployment benefits – you'll need a place to send that money – a mule account. Back in June, as covid-related unemployment rates were slowly declining, the state of Virginia reported a 58% spike in unemployment applications in a single week! Not coincidentally, BioCatch simultaneously saw a correlation in our data. There was a spike in high-risk applications for new deposit accounts originating from Virginia at the very same time. A spike in fraud – a spike in mule activity.

    You won't need to look too far to find news stories of stimulus fraud from the past year ~ and it's likely that some of that accounting (or reckoning) is only just beginning ~ so aside from the regulatory and reputational reasons for caring about mule activity that Raj cited – there's a wave of activity that has financial institutions searching for answers.

    Act 3

    So what are they to do? What technologies are financial institutions relying on to detect and root out mule activity? Here's Julie Conroy again.

    Julie Conroy

    You know, I've seen institutions just start with some simple rule deployment and looking for, you know, patterns of behavior in the first 90 days of an account – like, we didn't see much activity, there were no direct deposits, and then all of a sudden on day 93 we see $10,000 come in and $9,900 go out. Very basic things like that. It's a good starting point. And then as you progress along that spectrum, there are great indicative behavioral solutions like BioCatch that can examine the behavior of the account and compare it to the way that normal people interact with the account. Also seeing that, you know, typically it's not just the mule that is logging into their account. You also have a mule herder who is managing multiple mules, and they will also periodically be logging into the account to make sure that these mules are doing what they're supposed to be doing. So recognizing not only that second person periodically logging into the account, but then also bringing the intelligence that that second person is logging into 20 other accounts at this institution – that type of behavioral analysis can be really powerful.

    Peter Beardmore

    So there's a lot there from Julie ~ but her key point here is yes, the rather binary rules financial institutions have traditionally used for monitoring account activity are a good place to start. But mules, while they may open and typically use their mule accounts on their own – those same accounts are often controlled, directly or indirectly, by complex and sophisticated organizations. And those differences in behavior may include some of the Digital Tells that behavioral biometrics can use to identify mule activity!
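
    For the sake of illustration, here's roughly what that "starter" rule Julie describes might look like as code. It's a minimal sketch with assumed thresholds and field names – not any institution's actual policy, and not a substitute for the behavioral layer she goes on to describe.

    ```python
    # Illustrative only: a toy "dormant account, then money in and straight out" rule.
    # Thresholds, field names, and the data shape are assumptions for this sketch.
    from datetime import date, timedelta

    def dormant_then_flush(account_opened, transactions, dormant_days=90,
                           min_inflow=9_000, outflow_ratio=0.9):
        """Flag an account that sat idle after opening, then received a large
        deposit and immediately sent most of it back out."""
        window_end = account_opened + timedelta(days=dormant_days)
        early = [t for t in transactions if t["date"] <= window_end]
        later = [t for t in transactions if t["date"] > window_end]
        if early:                      # any activity early on -> not this pattern
            return False
        inflow = sum(t["amount"] for t in later if t["amount"] > 0)
        outflow = -sum(t["amount"] for t in later if t["amount"] < 0)
        return inflow >= min_inflow and outflow >= outflow_ratio * inflow

    txns = [{"date": date(2021, 4, 3), "amount": 10_000},
            {"date": date(2021, 4, 4), "amount": -9_900}]
    print(dormant_then_flush(date(2021, 1, 1), txns))  # True -> worth a closer look
    ```

    A rule like this catches the obvious pattern, but it says nothing about who is actually driving the session – which is exactly where the behavioral tells come in.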

    And well, that’s something BioCatch has been taking a hard look at – working with our customers and data scientists – and the good news is – there’s something there.

    I recently spoke with JP Blaho, he’s senior director of product marketing at BioCatch, and I tried to get him to spill the beans on any upcoming BioCatch announcement. Here’s JP.

    JP Blaho

    So, Peter, we've been talking about mule attacks and mule fraud for well over a year now. Due to the coronavirus we've seen a significant increase in this type of activity, and we've been working with some of our partner banks for this last year to identify where these are happening and how we can identify them using behavioral biometrics, faster. And with these handful of financial institutions, we've refined the solution to a point that we are going to formally launch the general availability of our money mule detection solution in November. So we are already engaging with our customers around the solution, showing them how we can prevent this. But we'll do more of a formal announcement to new customers, net new opportunities, and to the analyst community next month.

    Peter Beardmore

    So, JP ~ let's fast forward a bit. I want to zero in on what mule detection actually means for the bank and for the customer. What happens when those Digital Tells start to alert?

    JP Blaho

    You know, in many of those instances, I think the bank identifies that account before the individual realizes that it's happening. And in many of those instances, the account gets frozen, and it's not until that consumer tries to perform a transaction and realizes that they don't have access to the account that they have to coordinate and work with the bank to have that account reopened or have it moved into a new account. So there's a lot of pain associated with that for the victim, you know, not just for the fact that they're part of a criminal transaction, but now they most likely have to move their money to a different account, with a new account number, et cetera. But also their money is frozen until they can prove that they're not part of the problem, or were not actively part of the problem. So that's a really good point.

    Peter Beardmore

    Back to my point earlier about the topic of mules. This is a game of very few winners. But I think we have an obligation to apply the science of behavioral biometrics to help financial institutions identify mule accounts – to help them manage compliance and liability risk, to interrupt cybercriminal networks when possible - and help those who may have unwittingly found themselves in the middle of something really bad – to at least confront the truth. 

    Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with the unwavering support and sponsorship of my employer, BioCatch.

    Special thanks to Julie Conroy, Raj Dasgupta, and JP Blaho. 

    Earlier in the episode I mentioned those Mule Personas that BioCatch has identified – you can find a link in the shownotes to take a closer look.

    For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast.

    Join us for episode 6, in which we'll take a look behind the scenes at BioCatch – how the sausage gets made, so to speak – and some special insights on transforming fraud management to fuel digital business.

    Until then, take care.

  • The fourth episode of Digital Tells: A BioCatch Podcast focuses on scams and social engineering. Why is there so much scam activity these days? Why are these scams so successful? And what, if anything, can financial institutions do to help protect themselves and their customers?

    We open with a first-hand story of brilliant social engineering, told by Coby Montoya. Tim Dalgleish discusses some of the Digital Tells that may indicate scam activity. And Ayelet Biger-Levin explains the layers of machine learning and analytics that converge to detect scams.

    Transcript

    Coby Montoya 

    Sometime in fall 2019, I was working from home. Middle of the workday, I received this text message – it says it's from Capital One. It says, hey, there's been a charge on your card at this Wal-Mart in California. Is that you? Yes or no? I said no. I responded back. And I immediately took out my Capital One card, looked at the back, and called the phone number on the back to report this – you know, hey, this definitely was not me.

    Peter Beardmore 

    The voice you were just hearing is from a gentleman I met recently. He's a professional, in his late 30s, lives in Arizona… and the scenario he's discussing is not uncommon. It's happened to me. It's likely happened to you… but this story gets interesting….

    Coby Montoya

    So as I'm on hold, I receive additional texts. Says, OK, this wasn't you – type one if you would like someone to call you instead of having to call and sit on hold. So again, middle of my workday, you know, this is much easier for me than sitting on hold. So I said, sure, have someone call me back. A few minutes later, actually probably less than a minute, I receive a call back, and the phone number was, you know, the same phone number as on my Capital One card. And so I let them know, hey, this was not me. You know, I identified myself, I authenticated myself, providing some basic information, and they say, OK, well, we can send you a replacement card. We'll deactivate this one. It's going to take about three to five business days to receive this card. It's actually a card I use very frequently. So I ask them, hey, is there any way you could send this out sooner? They said, well, we can – there's a fee that comes with that, but, you know, you just experienced fraud, so we're going to go ahead and waive that fee for you. Right? All right. Great. Good experience. And I appreciate it. And so they said, before we send this replacement card out, however, we're going to need you to verify the address we're going to send it to. We're also going to send you a one-time passcode just to ensure that it's really you we're speaking with. And so I said, OK, you know, waited for the code. Thirty seconds later, I receive a one-time passcode to my phone number. I read it back to them. They said, great, thanks, we're going to go ahead and send this card out to you. And that was that. So I thought, all right, minor annoyance. You know, no one likes fraud on their card, but it took me about 10 minutes, three songs. So I thought, all right, I'm good to go. Later that evening, at home watching TV with my girlfriend, all of a sudden I received a notification from my Capital One mobile app that says there's been a charge at a Wal-Mart in California. So I'm based in Arizona, right? So this is definitely not me. It's kind of annoying – I was like, hey, we just talked about this. I just resolved this with Capital One. Why are they approving charges on a card that has been reported as compromised? So I called Capital One ready to, you know, just explain to them, hey, guys, you shouldn't be approving these charges, I reported this card. And as I speak to someone, they say, we don't show any sort of interaction – that we contacted you at all today. I go, are you sure about that? Yeah. Let me check a different system. They check a different system. No, no interactions. And so I'm a little skeptical. I just talked to someone hours ago. This, you know, this can't be the case. I know I spoke to you guys. So I ask, OK, I know you're doing the best you can do, but can I please talk to a supervisor? Maybe they have access to a different screen that you don't have access to, you know, respectfully. And so I'm, you know, on hold for another 15-20 minutes, and I speak to a supervisor. Same situation. So I go, OK, is there someone in, like, a security, risk, fraud department? OK, yep, I'm transferred there. Same thing. So I learned that they actually didn't contact me. So I'm really puzzled by this, and I realized sort of in real time that actually a fraudster, you know, a bad actor or criminal, actually contacted me to essentially social engineer me.

    Peter Beardmore

    So, obviously, this was a pretty good scam. And, I mean, stuff like this happens every day right? But here’s the really interesting part… That voice you were just hearing is Coby Montoya and well, let me let him introduce himself…

    Coby Montoya

    My name is Coby Montoya. I work in fraud and security, and I've been doing so for about 15 years now. I've worked on the merchant side, the card issuer side, the payment network side, and the fraud vendor side. So I have a fairly broad lens when it comes to fraud risk management.

    Peter Beardmore

    Coby’s actually being modest. He’s a fraud expert, who’s worked for some of the biggest financial services companies in the world, helping them to manage risk and fight fraud. 

    So the next time it happens to you – or when your elderly mother tells you she got scammed again – go easy on her. It happens to even the best. In fact, social engineering scams are on the rise globally. According to the U.S. Federal Trade Commission, imposter scams were the #1 type of fraud reported by consumers last year. And most of these scams were carried out over the phone – with reported losses of around $30 billion.

    A few weeks ago I registered some domain names in preparation for launching this podcast, and for some reason I didn't get the privacy settings right. About a week later my phone started ringing off the hook – at least a dozen calls every day – about half from would-be web developers looking for work – the other half, dire warnings about fraudulent payments made from my accounts, messages about accounts past due, a lottery win, and a few were for what would apparently be life-changing opportunities.

    Why is this all happening? In some cases it’s pandemic related. But mostly, it’s just because it works. And as evidenced by Coby’s story – scammers are convincing and very clever. And they prey on human nature and emotions.

    And for financial institutions this is a major problem. Because in some cases they’re liable to refund losses to consumers – in other cases they’re not, but might still make refunds anyway… just to protect their brand and their relationship with the customer – and in some cases the guidance from regulators is changing.

    In this episode of Digital Tells we're focusing on scams and social engineering. Why are they so successful? And what, if anything, can financial institutions do to help protect themselves and their customers?

    In previous episodes you met Tim Dalgleish from BioCatch – he's been working with financial institutions throughout his career, and has a deep understanding of what's crucial for banks when dealing with reported scams.

    Tim Dalgleish

    So, yeah, there's a whole remediation process, with a financial impact, an operational impact, and a customer experience impact, because it's the customer life cycle. When you're a victim of fraud, it can go one of two ways. As a bank, if you do a great job in looking after that victim and getting them back to normal or protecting them, then that's the story they tell at every barbecue for the next six months, and they tell their friends. If you do a bad job of it, that's a bad story that they're going to tell to their friends, and they might even change banks. So it's a really critical point, you know, in the life cycle of the relationship with a customer. If you get it right or wrong, it can typically go one of two ways. So it's really a critical, critical point in the relationship.

    Peter Beardmore

    So I want to stipulate for a moment here, particularly with respect to scams and social engineering, that there are hundreds, if not thousands, of angles that scammers can take. And they're constantly iterating and improving. Some are better than others – you just heard one – we could share dozens of others, but this is supposed to be a twenty-minute podcast – but what I'm getting to here is there's a myriad of cash-out schemes from scam to scam. In some cases the scammer may literally lead their mark to transferring money, maybe even coach them how to do it – without ever getting a credential or gaining direct access to an account. In other cases – those credentials or account numbers are exactly what the scammers are seeking – and that leads to the Account Takeover Fraud and Account Origination fraud issues we discussed in previous episodes. And in still other cases… the scammer convinces the victim to give them control over their account, in the middle of the session, after they've already legitimately logged in. Maybe get them to download a Remote Access Tool or malware – giving control of their phone or computer to the scammer ~ essentially giving them free rein ~ but following a perfectly legitimate log-in, from a known device, and probably from a known IP address.

    With all these potential combinations of interactions and outcomes, it might be hard to believe that there's some magical algorithm to throw the brakes on any of it. And you'd be right. But let's think a little more deeply about this. About what the victim actually experiences.

    Let's say you're on the receiving end of a scam. Maybe someone's contacted you, they say they're from your bank, it's about some suspicious payment activity – and they get you to log into your account – and then they ask you to do something you just can't figure out how to do. You get frustrated, you're worried that your money's been stolen, and to be helpful they suggest you download a tool that will help them to help you resolve it.

    OK, so obviously… that's not a good outcome… and maybe the scam doesn't even go down that path… but I want you to think about how you'd behave in that moment. I mean, you're experiencing something that's out of the ordinary… what would you be doing on the screen – with your mouse – or your keyboard – or how would you be handling your phone? What might be some of the Digital Tells that could indicate that scam activity was underway?

    Here’s Tim Dalgleish again.

    Tim Dalgleish

    So if you think about it, you know, when I normally do my banking as a person, I know why I'm logging in. I'm logging in to check a statement, pay my electricity bill, transfer money to my friends – behaving with intent. I know what I'm doing. What we see with scams is that they're being coached. And that manifests itself in what I would call little breadcrumbs of behaviour. If I'm on the phone to a scammer, they say log in, and I log in, and then I'm waiting there on the page while they're trying to social engineer me and give me instructions for the next thing. Now, from a behavioural perspective, I'm no longer behaving with intent. I log in and then my mouse is lost – the scammer's on the phone to me, instructing me what to do or convincing me to do the next thing. So, you know, when you're being coached through a banking session, it looks much different from a behavioural perspective than when you're doing it yourself with intent. So that's really the power of being able to understand the customer's behavior in really granular detail.

    Peter Beardmore

    So there are these subtle indicators… but you may be thinking… as I have… "So what if I pause a little, or doodle with my mouse? I do that all the time. How's anybody going to put that all together and conclude there's a scam in progress?" If you listened to episode 1 you may recall the science fiction conversation we had with Howard Edelstein, and his point about finding the data – using machine learning to identify the subtle patterns – connecting the dots (so to speak) – and artificial intelligence to connect those patterns in real time. No single indicator is particularly strong in and of itself, but it's the collection of indicators, combined with data and analysis from millions of other banking sessions, that can lead to reliable conclusions.

    We met Ayelet Biger-Levin in previous episodes. She and I were discussing that phenomenon of bringing science fiction to reality. I asked her to talk more about BioCatch's AI and machine learning, and how behavioral biometrics actually comes together.

    Ayelet Biger-Levin

    BioCatch leverages supervised machine learning, and we've been asked a lot about kind of the difference between that and deep learning. So deep learning is really throwing a bunch of data at the machine and saying, OK, group it into groups so we can find the differences and anomalies moving forward. So, for example, if you take flowers and you throw at the machine all these different characteristics about flowers – the shape, the size, the color, the smell – then it will be able to group them into flower groups and families. And then when you get a new one, you can say, OK, this belongs to that group, or it's abnormal to this group, etc. But when it comes to fraud, you don't just want to throw data at the machine and group it – you want to find those specific characteristics that will correlate to fraud or genuine. And what we do is we take data, for example the user interaction data, and we say, OK, what in this data can help us correlate with fraud? In order to do that, we need to look at known fraud cases and known genuine cases, and then attribute those cases to the data and say, OK, here's what we've learned from all the fraud data. Here are the patterns that correlate with the fraud data and are different from the genuine data. That's how we can say that if we see signs of low data familiarity, that has high correlation to fraud, because we've seen that in 64 percent of all the fraudulent cases but we see that in none of the genuine cases. So that kind of helps correlate the known incidents for every customer. So we have a general model with all the learnings that we have over the years, and that's why having 10 years of data is very, very powerful, because we've seen how the mode of operation evolves over the years, and every customer has unique MOs as well. So the ability to tune things according to their confirmed fraud cases and their confirmed genuine cases helps us learn over time.

    Peter Beardmore

    Now, the modeling approach that Ayelet describes applies both to the strict definition of fraud – where a cybercriminal is actually impersonating the victim – and to scams – where a social engineer is manipulating or tricking a victim into doing something bad. But from a modeling standpoint – it's effectively the same approach. And that's good, because as we alluded to earlier – a lot of times scam and fraud activity overlap. What starts as a scam can quickly switch to fraud.

    So let's quickly review that BioCatch process – as we discussed briefly in episode 2. The first layer in the model is looking at the user's own behavior profile. How has that user acted in the past, compared to now? The second layer compares the user's behavior to that of the broader population. How does mouse doodling correlate to fraud? How do long sessions and pauses correlate to genuine sessions or known fraud cases? And the third layer – and this is like the holy grail of scam detection – is that we put those first two layers together – so when you have a scenario where you've got a legitimate user – but that legitimate user's behaviors indicate scam or coercion activity – the bank can actually function as a last line of defense – and help protect us all – but especially vulnerable folks – from scams.
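
    If it helps to see those three layers side by side, here's a deliberately tiny sketch in code. The feature names, weights, and 50/50 blend are assumptions invented for the example – the real models are learned from millions of labeled sessions, not hand-set like this.

    ```python
    # Illustrative only: the three layers described above, reduced to a toy score.
    # Feature names and weights are assumptions, not BioCatch's actual model.

    def layer1_self_deviation(session, user_profile):
        """Layer 1: how far does this session drift from the user's own history? (0..1)"""
        diffs = [abs(session[k] - user_profile.get(k, session[k])) for k in session]
        return min(1.0, sum(diffs) / max(len(diffs), 1))

    def layer2_population_risk(session, fraud_weights):
        """Layer 2: how strongly do this session's traits correlate with known fraud? (0..1)"""
        return min(1.0, sum(fraud_weights.get(k, 0.0) * v for k, v in session.items()))

    def scam_risk(session, user_profile, fraud_weights):
        """Layer 3: a genuine-looking user behaving like known coercion cases."""
        return 0.5 * layer1_self_deviation(session, user_profile) \
             + 0.5 * layer2_population_risk(session, fraud_weights)

    session = {"pause_after_login": 0.8, "aimless_mouse": 0.7}   # normalized 0..1
    profile = {"pause_after_login": 0.1, "aimless_mouse": 0.1}   # the user's own history
    weights = {"pause_after_login": 0.6, "aimless_mouse": 0.5}   # population-level fraud correlation
    print(round(scam_risk(session, profile, weights), 2))
    ```

    The interesting case is exactly the one described above: layer one says "this doesn't look like you," layer two says "it does look like known coercion sessions," and it's the combination that raises the flag.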

    One other important point here is that this seemingly hair-splitting difference between scam and fraud activity – it actually does matter. Oftentimes, an investigation's determination on scams or fraud may mean the difference between whether the bank is required to reimburse the customer or not. Historically (and this may be an over-generalization), fraud transactions are typically reimbursed. Scam activity – where the user is tricked into transferring funds on their own, or into knowingly sharing credentials and passcodes – not so much. But those tides are also shifting. In the UK, the Financial Ombudsman Service is finding in favor of fraud and scam consumer complainants more frequently than in previous years. In the U.S., the Consumer Financial Protection Bureau recently amended its guidance on those scams where the fraudster tricks the victim into sharing their OTP code – this is exactly what happened in the scam Coby Montoya fell victim to. Now victims are covered in that scenario in the U.S.

    So this is another potential loss channel for financial institutions, which only increases the incentive to better detect scam activity. 

    Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with the unwavering support and sponsorship of my employer, BioCatch.

    Special thanks to Coby Montoya – Coby wrote a blog about the experience he shared at the beginning of this episode. You can find a link in our shownotes. Also thanks again to Tim Dalgleish and Ayelet Biger-Levin.

    For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast.

    Join us for episode 5, in which we’ll explore Mules. Are you a mule? Truth is, you may not even know. More importantly, why should financial institutions care about mule activity? And what can be done to detect it?

    Until then, take care.

  • The third episode of Digital Tells: A BioCatch Podcast tackles the global epidemic of identity theft, and the resulting fraudulent accounts that ruin personal credit ratings, perpetuate mule activity and money laundering, and drain institutions of billions of dollars annually. Tom O'Malley joins us again to discuss why most account opening fraud occurs online. Raj Dasgupta from BioCatch discusses the peculiar online behaviors exhibited by cybercriminals, versus those of genuine account applicants – the Digital Tells that help behavioral biometrics distinguish between criminal and genuine activity. Ayelet Biger-Levin discusses BioCatch's newly announced Age Analysis capability. And Howard Edelstein shares a story of account opening fraud detection that has become BioCatch lore.

    Tom O’Malley, a retired U.S. Department of Justice financial crimes prosecutor, founded a website, FrozenPII.org, which helps consumers protect their identity. Check it out!

    Transcript

    Peter Beardmore

    Have you ever been the victim of identity theft? Ever applied for a loan or a credit card, only to find out someone else has masqueraded as you and negatively affected your credit standing? Identity theft and new account fraud is a global problem. If you live in the United States, chances are you've been a victim – and if not ~ it's likely someone close to you has been.

    I was chatting with Tom O'Malley, the former federal financial crimes prosecutor you met in Episode 2, and we were discussing identity theft. The U.S. Federal Trade Commission reported recently that $3.3B was lost in 2020 due to identity theft – that's nearly double the $1.8B lost in 2019.

    And where are those stolen identities put to work? Well, online of course – in the form of new accounts – credit card accounts, lines of credit, deposit accounts, you name it. Here's Tom O'Malley.

    Tom O'Malley

    Most often they're being opened remotely, because it presents little risk to the person who's opening an account. I mean, if you show up physically, besides whatever documents you present – which are going to be a fake driver's license, et cetera – you put yourself as a criminal at risk because there are surveillance cameras. Nowadays, there's the ability to match surveillance footage with driver's license facial recognition. So typically criminals are not going to do this physically in a branch bank. They're going to do it remotely, and they can do it remotely from anywhere in the world. And depending on a bank's processes and methods to detect fraud, it can be done from anywhere in the world, even though they're supposed to be a customer in the United States opening up a bank account.

    Peter Beardmore

    This is interesting: unlike the scams and account takeover stories that we discussed in earlier episodes – crimes that disproportionately target older folks – identity fraud victims are more likely to be young… like under 40. In fact, in 2019, of the 1.6 million identity fraud reports in the U.S., 44% were from people between the ages of 20 and 29. According to Equifax Canada, nearly half of all suspected fraud applications are for those between 18 and 24.

    Ok – so – somebody gets ahold of your personal information, enough to open a credit card account in your name. Maybe they obtained your personal info on the dark web – maybe it was originally stolen in some big corporate data breach. And then that info, your data, is applied to an online form to open an account. Oh, by the way – it might not be a credit account – it could be just a bank account, so instead of obtaining false credit in your name – it's used for shuffling money between accounts – for scams – or mule activities – both issues we'll be taking a closer look at in later episodes.

    For this episode of Digital Tells, we're taking a close look at the act of opening fraudulent accounts. Which, for those of us who have been victims, happens silently in the background… before that heart-in-your-throat moment when you realize your credit rating has been ruined… or perhaps even worse, you're contacted by law enforcement about scams or mule activities perpetrated in your name.

    Also – very important note here – your credit rating – or mine for that matter – isn't the only fall-out of identity theft. Financial institutions, credit issuers – they're the ones usually taking the hard financial losses. A study released earlier this year by Javelin Strategy & Research reported that combined fraud losses climbed to $56 billion in 2020. Of that, traditional identity fraud losses totaled $13 billion.

    Well, back to that initial account opening. In episode 2 we got a glimpse into the sophistication and scale of cybercrime syndicates… scale meaning LOTS of accounts and lots of victims. It's seldom just one account; rather, it's usually hundreds or even thousands of accounts opened in each campaign.

    And therein lies an opportunity for institutions to differentiate between legitimate and fraudulent applications. The Digital Tells of fraudulent applications – if you will.

    Act 2

    My colleague Raj Dasgupta and I were recently talking about what typically happens during the act of applying for fraudulent accounts. Raj is the Director of Fraud Strategy at BioCatch, and has two decades of experience in the trenches – dealing with identity fraud issues at organizations like TransUnion, HSBC, and Symantec, among others.

    OK, so before I go to Raj – for just a moment – think about what you do when you open an online account… maybe you're taking advantage of a great credit card deal with lots of hotel rewards points. Then put yourself in the seat of one of these highly specialized cybercriminals we discussed in episode 2 – how would you go about your job of applying for multiple fraudulent accounts – hour after hour – all day long?

    OK – here’s Raj -  

    Raj Dasgupta

    Yeah, sure. I think copy-pasting in an online interaction can happen in two different scenarios. One is account opening, where you are copy-pasting stolen information or made-up information onto a form which is used for a new account opening. And it can be copy-pasting the name, address, or certain parts of the PII, quite likely from an application like an Excel sheet where you have all the stolen data. And within that copy-pasting behavior, one thing is that it's unusual for somebody applying for a new account to be copy-pasting their own data. And the other is there can be copy-paste and then erasing the pasted data and putting it in another field. As I was saying, it could be that the first name and last name are together in the Excel sheet. It's copied over to the first name field, and then you cut the last name and place it in the last name field. Very, very unusual scenarios of online behavior.

    Peter Beardmore

    Let's transition to somebody actually reading this information. Right. So it's like long-term memory versus short-term memory. Can you talk about that a little bit?

    Raj Dasgupta

    So again, imagine in the context of account opening, you're typing in your name and address, Social Security number. You've been doing it for many, many years. It comes very fluently. You can type all the nine digits in at a steady cadence, without stopping or without having to delete any digit and retype it, because you're essentially pulling it out of your long-term memory and typing it in. The fraudster has stolen that information from somewhere else. That information does not belong to them. And they're either copy-pasting the Social Security number or the name or address, or typing it in. But because they're not familiar with that data, they'll make mistakes and they'll correct those mistakes. And then they'll type it again.

    Peter Beardmore

    So that behavior – cutting and pasting – the pace and pauses exhibited when entering personal information – those are just some of the Digital Tells that are the underlying indicators behavioral biometrics uses to distinguish between genuine and fraudulent online account opening.
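
    As a thought experiment, here's roughly how those tells could be boiled down into features for a single form field. It's a minimal sketch with made-up event names and example values – not BioCatch's actual feature set.

    ```python
    # Illustrative only: toy "data familiarity" features for one form field.
    # Event names and values are assumptions for this sketch.

    def data_familiarity_features(field_events):
        """Summarize how a value was entered: pasted, typed fluently, or corrected."""
        return {
            "pasted": any(e["type"] == "paste" for e in field_events),
            "corrections": sum(1 for e in field_events if e["type"] == "backspace"),
            "max_pause_ms": max((e.get("pause_ms", 0) for e in field_events), default=0),
        }

    # A genuine applicant typing their own SSN: steady cadence, no deletions.
    genuine = [{"type": "key", "pause_ms": 120} for _ in range(9)]
    # Someone working from a stolen list: paste, long pause, then corrections.
    suspect = [{"type": "paste"}, {"type": "key", "pause_ms": 1800},
               {"type": "backspace"}, {"type": "backspace"},
               {"type": "key", "pause_ms": 900}]

    print(data_familiarity_features(genuine))
    print(data_familiarity_features(suspect))
    ```

    Each of these features is weak on its own; it's the combination, across many fields and many sessions, that separates the long-term memory of a genuine applicant from the short-term memory of someone working from a list.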

    In episode 2 we met Ayelet Biger-Levin, VP of Market Strategy at BioCatch. Later in that same conversation, she went a little deeper into some of these indicators, and how BioCatch technology can make those distinctions.

    Ayelet Biger-Levin

    Some classic examples of the way that with this type of technology, we can distinguish between cyber criminal activity and genuine activity is by looking, by profiling the population and detecting differences between activities that correlate with fraud or correlate with genuine activity. So, for example, one thing that we observe when we track account opening activities is that there is a big difference between a cyber criminal and a legitimate actor and their familiarity with the process. A cyber criminal will be very, very familiar with the account opening process because they open many, many accounts every day. So they'll be very familiar with what are the mandatory fields. When you have a dropdown, they don't stop to select fields. They just go really quickly. They don't read the Ts and Cs, they won't select a credit card design. They'll just go very, very quickly and fill out the form, whereas the legitimate user will read the terms and conditions, will select their favorite credit card design, will think about their annual income, will select their interest rates and make decisions and selections. The process will be much longer. So that's one example. 

    A second example is familiarity with data. A legitimate actor will be very, very familiar with their personal data. And when someone uses the data that they're familiar with, they will display use of their long term memory. So when they type, they will type continuously without pauses and they will, of course, know the data they might have Autofill, which is legitimate, and they'll enter the data fairly quickly. However, cyber criminals, when they need to enter personal data, they'll either copy or paste it from a list. They might type it because they try to memorize it. But we will see that they're using their short term memory and we'll see segmented typing along the way. They often have errors that they need to fix and they really display low familiarity with the data. It's interesting that some fields are actually not known to legitimate actors like think about part of the application process. You need to fill in a hotel rewards card.

    That number is not something that you have on hand. You probably have to log into your email and look for that number, whereas a cyber criminal who knows the process and wants to fill out that number will potentially have it readily available.

    Peter Beardmore

    Hopefully the idea is pretty clear by now – cybercriminals and legitimate applicants behave differently. Familiarity with the form and the process. Short-term versus long-term memory access. And obviously cut-and-paste and autofill can also make great indicators. BioCatch can leverage these Digital Tells to help organizations that rely on online applications for their business protect themselves from fraud losses. And they also help protect society – people like you and me – from being victimized by identity thieves and cybercrime syndicates.

    But wait, there's more. You may recall in episode 1 when I teased the idea that behavioral biometrics can actually guess your age. Not too long ago a BioCatch customer had an idea – if an application indicates the applicant is, say, 18 or 19 years old – or 75 or 85 years old for that matter – but the data is entered by someone, say, in their 40s… could we detect that? It turns out, to a degree of certainty – we can! Here's Ayelet again.

    Ayelet Biger-Levin

    When analyzing the data and trying to find those correlations between ages and the interaction, we found a shocking truth: for every year over 40, your keystrokes become slower. But specifically, there were nuances in things that we can look at, like shift-to-letter. So when you want to capitalize something, there are a few milliseconds added for every year over 40, and we could see a dramatic difference between someone in their 20s and someone in their 60s or 70s when conducting these activities. Another element is the use of a mobile device and the area in which users interact. So their swipe, or the use of two thumbs versus a finger. A lot of indicators of age – very, very subtle things. But again, looking at the combination of those, we're able to detect, within five years, the age group that the user really belongs to.
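
    To make the shift-to-letter idea tangible, here's a toy version in code. The baseline latency and the milliseconds-per-year slope are invented numbers for the sketch – not BioCatch's published figures – and the real capability combines many such signals rather than one.

    ```python
    # Illustrative only: a toy age estimate from one timing signal.
    # The baseline and per-year slope are assumptions made up for this sketch.

    def estimate_age(shift_to_letter_ms, baseline_ms=180.0, ms_per_year=2.5,
                     baseline_age=40):
        """Map an average shift-to-letter latency to a rough age band,
        assuming latency grows roughly linearly past the baseline age."""
        if shift_to_letter_ms <= baseline_ms:
            return f"under {baseline_age}"
        years_over = (shift_to_letter_ms - baseline_ms) / ms_per_year
        est = baseline_age + years_over
        return f"roughly {est - 5:.0f}-{est + 5:.0f}"  # report a +/- 5 year band

    print(estimate_age(165))   # under 40
    print(estimate_age(255))   # roughly 65-75
    ```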

    Act 3

    Alright, so, with all this technology to help differentiate between real and fraudulent account applications, you’ve got to figure that occasionally – some really interesting results follow. You’re going to want to listen up to this story… it’s a good one. 

    If you're like me, you may have worked for a company or two in your career that has its own folklore. I've actually worked for 3 or 4. You know, those stories that everyone's heard – inside and outside the organization – that make it fun to talk about. I once worked for a company whose founder "allegedly" ran over the car of a pizza delivery driver with his tank while the poor guy was carrying the pizza to the front door. That story still occasionally comes up in conversation – and I still can't confirm or deny it.

    Fortunately, BioCatch has no such infamous lore – but the story you’re about to hear I heard more than a few times. And this one I can not only confirm is TRUE, but it helps to make another really important point about the value of detecting account-opening fraud using behavioral biometrics.

    In episode 1 you met Howard Edelstein, BioCatch’s chairman. In a second here I’m going to drop you into more of the conversation he and I had. In this part he was talking about winning the business of a major financial services company and the early stages of their work with BioCatch. Here’s Howard.

    Howard Edelstein

    And the case in point: we identified, in an analysis while they were becoming a client, a particular case where someone was applying for a credit card. We thought it was perfectly legit. They filled out the entire application, and anyone who filled out the application that way had to be OK. Well, the credit card company turned down the application, and they turned it down because, they told us, it was fraudulent. And we said, OK. And we went back. And you're always trying to figure out, you know, if the model works and the AI is humming along, and the data science team came back and said, listen, we looked at the data, this can't be a fraudulent application, the guy really knew what he was entering. And the credit card company said, you know, we don't want to piss you guys off or anything, but we just want to tell you it really is fraud. And we went back and forth a few times and we said, well, how do you know that? And they said, it's really simple: the guy's dead. Well, that's one of those New York binary kind of answers, right? Dead, not dead, you know? Well, our data science team doesn't exactly take that at face value. They said, I think we'd better call them and tell them the guy's not dead. And everyone kind of looked at each other and said, you've got to be effing kidding. Really? What am I going to do with this gem of a piece of information? Right. Because at the end of the day, it turned out they actually called the phone number reported for the guy, and someone answered the phone purporting to be the dead guy who was applying for a credit card. And one thing led to another, and it turned out that, believe it or not, the guy was far from dead. And this was determined through the use of behavior. So it's a really simple explanation, quite frankly: a legitimate person entering legitimate information for a legitimate credit card application mistyped a digit of his Social Security number, and in the U.S. that social corresponded to the social of someone who was deceased. The byproduct, well, that was actually decreasing false declines and increasing the number of credit cards they could give out, which was also a real revenue opportunity for them. So it's a win-win-win situation, and behavior had never been used this way before.

    Peter Beardmore

    So this is a great story – which raises a few important points – none of which pertain to BioCatch resurrecting the dead.

    But it’s important to understand, as we mentioned previously, that behavioral biometrics isn’t the only fraud detection technology out there. There are others. But none are infallible. And some may introduce friction (like asking life questions or imposing other obstacles) that prospects potentially just don’t want to deal with. And businesses spend lots (and lots) of money on marketing and customer acquisition… for organizations to lose a potential customer at the very point of filling out an account application, only because the anti-fraud tech is too cumbersome or they accidentally mistyped something, well, that’s just heartbreaking for marketers like me.

    In episode 6 we’ll talk about the return on investment (or ROI) of behavioral biometrics. But suffice it to say, it’s not just about stopping fraud. It’s at least equally about winning and retaining good customers, by reducing friction and making for a great customer experience.

    Digital Tells is written and narrated by me, Peter Beardmore, in partnership with my producer Doug Stevens of Creative Audio and Music, and with the unwavering support and sponsorship of my employer, BioCatch.

    Special thanks to Ray Dasjupta, Ayelet Biger-Levin, and Howard Edelstein. We once again opened our episode with Tom O’Malley. Since Tom retired from the US Department of Justice, he’s started a website called FrozenPII.org. The “pie” is spelled PII (as in Personally Identifiable Information). The site helps consumers protect their identity. You can find a link in our show notes. Check it out!

    For more information about this episode, behavioral biometrics, or to share a comment or idea, visit biocatch.com/podcast.

    Join us for episode 4, in which we’ll explore Scams. Did you know your car warranty is about to expire? More importantly, what can be done to help detect when someone is about to be victimized by a scammer?

    Until then, take care.

  • The second episode of Digital Tells: A BioCatch Podcast looks at how sophisticated cybercrime networks perpetrate mass account takeover fraud. Former U.S. federal prosecutor and consumer identity expert Tom O’Malley shares an overview of the GozNym cybercrime network, from which three members were prosecuted in 2019. We also talk with Jonathan Barnes, a retired attorney who discovered while on vacation that his personal bank account was being drained by cybercriminals. Finally, we speak with Tim Dalgleish and Ayelet Biger-Levin, both of BioCatch, to discuss both the tactics of cybercriminals and the ‘Digital Tells’ that may help to identify account takeover fraud.

    Tom O’Malley founded a website, FrozenPII.org, which helps consumers protect their identity. Check it out!

  • The inaugural episode of Digital Tells: A BioCatch Podcast explores the origins of behavioral biometrics with BioCatch founder Uri Rivner and Chairman and CEO Howard Edelstein. The episode begins with host Peter Beardmore’s visits with his elderly mother and aunt, both having recently been targeted by scammers. Their stories illustrate the prevalence of cybercrime and scams throughout society, and the need for innovative solutions to help protect consumers and financial institutions alike.


    The concept of technology that can use the ‘Digital Tells’ of online behavior (mouse movements, typing habits, etc.) to validate users or determine fraudulent intent may seem like the stuff of science fiction. In fact, it initially did to some of the leaders of BioCatch. But today, it’s real, preventing over 6 million fraud incidents per year and protecting hundreds of millions of people.