Generative AI in Identity Verification with Russ Cohn, IDVerse – Podcast Episode 98
Let’s talk about digital identity with Russ Cohn, who leads Go-To-Market for IDVerse.
In episode 98, Russ Cohn, who leads Go-To-Market for IDVerse, joins Oscar to explore Generative AI within Identity Verification – including what generative AI and deepfakes are, why deepfakes are a threat to consumers and businesses, and some of the biggest pain points in the identity industry and how generative AI can address them.
[Transcript below]
“It’s very important that we understand these threats and start to mitigate and create ways of helping to support and stop these practices.”
Russ Cohn leads Go-To-Market for IDVerse, which provides online identity verification technology for businesses in the digital economy. Russ has spent more than 20 years scaling businesses of all sizes by delivering successful growth strategies across the UK, EMEA & US markets within fast-paced and high-growth online media, fraud, identity, SaaS, e-commerce, and data-driven technology solutions.
His strong tech knowledge is coupled with deep operational and commercial experience building teams within SaaS, advertising and marketing technology-driven revenue models. Russ was previously a key early member of the Google UK leadership team, which grew from 25 to 3,000 people and from £10m to £1 billion in revenue during his tenure. He brings deep experience supporting international technology companies and has a passion for marketing development, startup growth and technology solutions.
IDVerse empowers true identity globally. Our Zero Bias AI™ tested technology pioneered the use of generative AI to train deep neural network systems to protect against discrimination. Our fully-automated solution verifies users in seconds with just their face and smartphone—in over 220 countries and territories with any official ID document.
Connect with Russ on LinkedIn.
We’ll be continuing this conversation on Twitter using #LTADI – join us @ubisecure!
Go to @Ubisecure on YouTube to watch the video transcript for episode 98.
Podcast transcript
What is generative AI? This week Russ Cohn from IDVerse joins us to discuss generative AI and deepfakes, and the threat these pose to businesses and consumers and their digital identities. Stay tuned to find out more.
Let’s Talk About Digital Identity, the podcast connecting identity and business. I am your host, Oscar Santolalla.
Oscar Santolalla: Hello and thank you for joining a new episode of Let’s Talk About Digital Identity. Artificial Intelligence, in particular Generative Artificial Intelligence, is a topic that has been on most of our radars in the last 12 months particularly. And there are amazing things going on. But we also know that the bad guys are using those tools too. And one of those uses is deepfakes, which are being used to cheat the identity verification systems that have existed until now.
So, to see how we are going to solve these newer problems in identity verification, we have a special guest today, Russ Cohn. He leads go-to-market for IDVerse, a company which provides online identity verification technology for businesses in the digital economy.
Russ has spent more than 20 years scaling businesses of all sizes by delivering successful growth strategies across the UK, EMEA, and US markets, within fast-paced and high-growth online media, fraud, identity, SaaS, e-commerce, and data-driven technology solutions. His strong tech knowledge is coupled with deep operational and commercial experience building teams within SaaS, advertising and marketing technology-driven revenue models.
Hello, Russ.
Russ Cohn: Hello, Oscar. How are you?
Oscar: Very good. Happy to have you here.
Russ: Thank you. Very glad to be here.
Oscar: Fantastic. It’s great to have you here. And we’ll talk about the deepfakes and how the newest practices in identity verification are solving these problems. So, let’s start, let’s talk about digital identity, Russ.
So first of all, I would like to hear a bit more about yourself, your story. Tell us about yourself and your journey to the world of identity.
Russ: Absolutely. I am fairly new to identity. I only really started in the industry just over three years ago. I was the first international employee of OCR Labs, which we recently rebranded to IDVerse. We’ve since built the international team to over half the company, and we continue to grow in EMEA and the US.
As a background, I’m a marketer, a commercial leader, an investor. I’ve spent probably over 20 years in technology-driven companies of all sizes. And I was lucky enough to join Google very early on, when there were 20 people in the UK and 600 people around the world. And I grew up with them a little bit, and I left when there were 65,000 people. So, I’ve got fairly good experience at scaling companies, and I have invested in and advised companies since then.
I’m now, as I said, at IDVerse, and I’m focused on the go-to-market. So, helping the company globally to take our products and execute in the best possible areas, and helping our customers with the most cutting-edge technology to drive identity verification and make it effortless – obviously through the use of our sophisticated technologies and techniques, including Generative AI.
I’m excited about the opportunity for identity verification, as the need for verified trusted identities has grown exponentially, globally, really, since the pandemic. And with digital growing at such a phenomenal rate as well, we’re now living in a mobile-first world, and we need the right kind of identity verification to support that growth.
Oscar: Indeed. So, let’s go to some basics. For someone who has heard about that term, Generative AI and still is not so clear what it is, particularly. Could you tell us what is that? What is Generative AI?
Russ: Yeah, sure. I think, you know, everybody is talking about ChatGPT and Bard, and it’s brought these AI techniques to the public, and we can’t get enough of them. Everyone is using ChatGPT and Bard, etc. to learn more, do their jobs better, find new facts. It’s pretty addictive and very, very useful, but still at a fairly early stage.
So Generative AI, short for Generative Artificial Intelligence, refers to a class of artificial intelligence systems and techniques that focus on generating new content or data rather than simply recognising patterns or making decisions based on existing data. These systems are designed to create original content that resembles human-created data, such as images, music, text, videos, and more.
I use Spotify extensively. I’m sure most people do. And a couple of months ago an AI feature appeared on there that goes through my music catalogue in the background and chooses the right music based on my tastes. Generative AI models are generally trained on large datasets, and they learn to understand the underlying patterns and structures within the data.
So once trained, they can produce new examples that are similar to the data they were exposed to during their training. These models are capable of generating content that didn’t exist in the original dataset, making them a very powerful tool for creative tasks in content creation. Now at IDVerse, we’ve been doing Generative AI for a long time, probably since the start, seven or eight years ago.
And we use a very familiar technique called Generative Adversarial Networks, or GANs, which I’m sure a lot of your audience will be familiar with. Now GANs, just to go back to basics, consist of two neural networks, a generator and a discriminator. These are trained together in a competitive manner. The generator creates the synthetic data, and the discriminator’s task is to differentiate between the real and the generated data.
So, the competition between the two networks leads to the generation of increasingly realistic content, which we see everywhere in videos, photos, documents, et cetera. Now, we’ve trained on millions of synthetic and real documents and millions and millions of synthetic faces using these techniques. For us, just to be clear, we only use ethically sourced or fair-source data for face biometrics, particularly in training. This refers to facial recognition datasets collected and used in a manner that upholds strict ethical standards and respects individuals’ privacy, consent and fairness.
Such data is obtained transparently, with informed consent, minimal intrusion and efforts to mitigate bias. These measures ensure the responsible and equitable use of biometric technology. In the context of facial identity verification, training data refers to the specialised datasets of facial images used to train the machine learning algorithms, or deep neural networks, that are responsible for recognising and verifying individuals’ identities based on their facial features.
So that’s quite a mouthful. Hopefully, that gives you some context. But this is how we look at Generative AI in identity verification.
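To make the GAN idea above concrete, here is a minimal sketch of a generator and a discriminator trained against each other. It assumes PyTorch and uses toy dimensions with random placeholder data; it illustrates the technique in general, not IDVerse’s actual models or training pipeline.

```python
# Minimal GAN sketch (assumes PyTorch). A generator learns to produce
# synthetic samples while a discriminator learns to tell real data from
# generated data; training them against each other improves both.
# Dimensions and the "real" data below are illustrative placeholders.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random noise fed to the generator
DATA_DIM = 256     # stand-in for a flattened face image / embedding

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),      # outputs a synthetic sample
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                        # logit: real vs. generated
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, DATA_DIM)          # placeholder for a real data batch
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```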
Oscar: Yeah, thank you for that introduction. Of course, one of the products of this type of Generative AI and related tools is deepfakes, which we are seeing more often. At first we saw them only for, say, celebrities or famous people. But now they can be used to attack me or to attack you – actually anybody, right?
So, tell us how the use of deepfakes is a threat, a real threat for both consumers and businesses?
Russ: Yeah, absolutely. I think they are a massive threat with the rise of Gen AI, and you touched on it – fraudsters use the same if not better techniques than we do, or than many companies do. And they are very, very good at surging ahead with these technologies and finding ways to create very realistic synthetic identities, both to impersonate real people and to create brand new identities of people who don’t even exist in real life.
And so, while that’s exciting as we talk about Web3 and avatars and these opportunities and possibilities, I think both consumers and businesses will continue to fall victim to many of the risks out there, unless measures are taken to prevent this.
Now, I just want to highlight a couple of examples of these, like disinformation and fake news, right? So, creating videos of public figures – you can grab footage off Facebook or YouTube, replicate it and make them do things that they never did. That can be exploited to spread false information.
This can incite conflicts and it can really manipulate public opinion. For us – and obviously we’re very close to and care a lot about frauds and scams – businesses and consumers are of course affected. In the UK particularly we have a huge fraud problem. And we see a lot of deepfake-based scams that can impersonate company executives or trusted individuals, deceiving employees or customers into revealing sensitive information or making financial transactions.
We’ve seen some of that just recently with MGM in the US, in this recent breach. We don’t know exactly, but we do know, I think, that an employee was actually targeted. This can also cause reputational damage to politicians, businesses and people – fake videos and audio can be created to endorse a product, or to not support it, and that can create problems. And of course, the thing we care about a lot: identity theft, right?
Deepfakes can be used to impersonate individuals, leading to identity theft. This may result in unauthorised access to personal data or systems. And of course, manipulation in financial markets, personal bank accounts, breaches of banks. So, this can cause big issues like privacy concerns, security threats and erosion of trust through the wide use of this – and internal security problems for businesses, and loss of privacy for people when they are violated and their identities are stolen.
So, it’s very, very important that we understand these threats and start to mitigate and create ways of helping to support and stop these practices.
Oscar: Yeah, indeed, you already explained some cases in which these criminals are targeting the identity verification systems that have existed in recent years. If we focus on these services that exist today and have been protecting us or helping us identify people in recent years – what are the biggest pain points or weaknesses that are being attacked by these criminals?
Russ: Yeah, look, I mean, there’s a lot of weakness in existing systems, which can come from the fact that vendors don’t disclose, for example, that they don’t use their own technology, and they can’t always deliver on their promises. So, a lack of global document coverage, old-style techniques like templating, and exclusion – racial, gender and age bias in these poorly designed systems – can cause huge problems. And so can systems that don’t have the ability to understand where these attacks with synthetic IDs are coming from.
We create all of our own tech in-house, so we don’t use external vendors to drive our fully automated solutions, and we feel pretty confident. But there are, as you mentioned, these legacy systems that we’ve relied on that aren’t necessarily up to speed. What we’ve seen, from a pain point of view, is badly trained human spotters in remote locations, for example. Some people in the industry and some vendors use those, and this can cause slow response times, and they can’t keep up with the standards and the technology that’s being used to identify fraudulent documents.
And also, the biometrics of people that are not real. So, it’s very difficult for them to keep up. And then, we’ve seen an issue around the natural bias that’s in previous ID systems designed, traditionally, by older white male engineers. And that’s a problem, because these biases are built into these systems. And the humans who are evaluating physical documents, depending on where and how they work, can inflict their own biases around age, gender and race as well.
Now, this can slow down experiences for customers, as they take a lot longer. And of course, they aren’t as accurate, you know, humans can’t scale. And so, technology can do a lot of that heavy lifting, and can solve a lot of that. And you can still have humans for critical tasks, but it’s important that you use technology to identify these gaps.
In fact, we ran a study a few months ago with an external testing company called BixeLabs, of 1,500 subjects – male, female and transgender – across eight regions in the world, for our facial biometrics. And it came back with zero bias on either race or gender in the facial biometrics. So, it’s pretty important that businesses start to use, and people start to get comfortable with, one of the strongest – probably the strongest – biometrics there is, for lots of the actions that we take in our everyday lives, whether on a personal or work basis.
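As a rough illustration of the kind of per-group check such a bias study involves, the sketch below computes a false non-match rate for each demographic group from genuine comparison scores. The field names, threshold and sample data are assumptions for illustration only, not BixeLabs’ actual protocol or IDVerse’s results.

```python
# Rough sketch of a per-demographic bias check on a face matcher's scores.
# Field names, threshold and data are illustrative assumptions.
from collections import defaultdict

THRESHOLD = 0.80  # assumed decision threshold: score >= THRESHOLD counts as a match

def false_non_match_rates(trials):
    """trials: list of dicts like
    {"group": "region-3/female", "genuine": True, "score": 0.91}.
    Returns the false non-match rate per demographic group, i.e. how often
    genuine (same-person) comparisons are wrongly rejected."""
    attempts = defaultdict(int)
    rejections = defaultdict(int)
    for t in trials:
        if not t["genuine"]:
            continue  # only genuine comparisons contribute to FNMR
        attempts[t["group"]] += 1
        if t["score"] < THRESHOLD:
            rejections[t["group"]] += 1
    return {g: rejections[g] / attempts[g] for g in attempts}

# A system shows little demographic bias when these rates are close to each
# other (and to the overall rate) across every group tested.
rates = false_non_match_rates([
    {"group": "region-1/male", "genuine": True, "score": 0.93},
    {"group": "region-1/female", "genuine": True, "score": 0.88},
    {"group": "region-2/female", "genuine": True, "score": 0.95},
])
print(rates)
```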
And I think the other thing that is challenging for us in the identity space is that we see a lot of unethically sourced biometrics, right? That can refer to the acquisition, usage or distribution of biometric data in ways that violate privacy, consent or ethics, as I mentioned earlier.
And these practices really can result in privacy infringements, discrimination, social harm and legal issues. And some examples of that are data scraping and profiling, lack of informed consent, data breaches, of course, we’ve seen that recently and frequently, deepfakes as we talked about and manipulation of people, government surveillance, employment discrimination. These are big issues.
And I think the lack of unified government standards around these things is also difficult. And it’s important that people use the latest technologies like computer vision and Generative AI to start, to be able to scale and address some of these issues and keep users and businesses safe going forward. But those are definitely some of the issues that we’ve seen accumulate over the last few years.
Oscar: Yeah, yeah, I can see there are quite a few. And how about this more recent generation of identity verification systems that work together with Generative AI? Can you tell us a bit of the how – how are they different from the previous products, and how are they tackling these problems?
Russ: Yeah, as I expressed in some of the technologies that we use – take training data for Gen AI, for example. If I can frame it like nutritional labels on food, right? You’re feeding a machine, essentially. And so that training data should come with some sort of nutritional label, so you know how the macronutrients will affect performance. So, you know, it’s important that when using Gen AI you understand the nutritional makeup of the training data, and the supply-chain transparency – where did the data come from, for example.
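One way to picture that “nutritional label” idea is a small data card attached to each training set, recording provenance, consent and demographic makeup. The sketch below is illustrative only; the fields are assumptions, not an industry-standard schema or IDVerse’s internal format.

```python
# Illustrative "nutrition label" for a training dataset, as a simple data card.
# The fields sketch the idea and are not a standard schema.
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    name: str
    source: str                      # supply-chain transparency: where the data came from
    consent_obtained: bool           # was informed consent collected?
    synthetic_fraction: float        # share of records that are generated, not captured
    demographic_mix: dict = field(default_factory=dict)   # rough makeup of the data
    known_gaps: list = field(default_factory=list)         # under-represented groups, etc.

card = DatasetCard(
    name="face-training-v1",
    source="licensed vendor data plus in-house synthetic generation",
    consent_obtained=True,
    synthetic_fraction=0.7,
    demographic_mix={"female": 0.5, "male": 0.5},
    known_gaps=["over-70 age band under-represented"],
)
print(card)
```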
But it’s important, these techniques are able to detect the proliferation of these fake documents. I think digital identity is becoming more and more, of course, prolific and governments are starting to bring onboard connectivity into these digital identity databases that are able to verify customers in a much more robust way than potentially documents were.
So, I think we’ll see that constant trend of digitisation of technology, mobile-first, wallets, and of course, documentation that will become digital will make life a little bit easier. But, in order to protect themselves, consumers and businesses really need to think about what they can do to stop and be vigilant, right?
So, I think consumers need to educate themselves. They need to use things like password protection and protect their devices and be aware of things like phishing tactics in social media and email. So, we can do as much as we can for businesses, but I think businesses need to invest in these systems because they are stronger, the security measures are stronger, and will help protect them and their customers ultimately.
I think the differences that we see – we believe facial biometrics is very, very strong, and it has been proven externally through, you know, NIST iBeta certification, for example; we have a 99.998 certification on liveness biometrics. I mentioned the inclusion and lack of racial bias. If you want to capture and work with people of all races, all genders, all colours across the world, it’s important to use systems that are inclusive; otherwise, you’ll end up discriminating and losing customers.
So, it is important to make these investments into these systems to help protect your business and the consumers behind it. But ultimately, consumers have to be educated themselves. They have to think about what they’re doing and be aware of things that are out of the ordinary or suspicious – unsolicited requests, for example. And then lastly, I think, you know, government needs to engage in some sort of public dialogue as well, to help consumers understand what these initiatives are doing.
And government needs to work with business as well to inform the public about things like biometric technology, ethical implications, and why they should be using these. But ultimately, there should be some ethical guidelines and review boards to be able to support the usage of this new technology that’s coming at us at such a pace. It’s really strong, really powerful and really useful.
But there have to be some guardrails around that, and I think it’s going to take a collective effort from consumers, businesses and government to get us there.
Oscar: You mentioned, for instance, liveness detection, which is one of the ways these identity verification tools check that the person is a real person moving in front of the camera. In terms of the end user – when the end user is in front of these identity verification systems that are based on Generative AI – is the user experience similar, how transparent is it, or is it different?
Russ: Yeah, I think, look, with facial recognition, for example, and the techniques we use in identifying people when they’re going through the process of verifying themselves or for account access or re-authentication, no personal data is stored. So, the use of those biometrics is the ability to give people a robust way to prove themselves and their proof of life, if you will, when doing a particular action.
And I think what’s been missing in the past is people have accepted a document which could or could not belong to that person to be the valid form of identity. The reason why identity documents around the world had been the standard is there was always a picture of your face on that document.
So, you had a passport or driver’s license, you could see it was you in a sense. So, with liveness, people are protected the same way as using phones to open up access to your phone and to those systems. But these systems are tested and there is no personal data. People should feel very comfortable that the data that they’re using to generate that action is protected and their own in terms of doing that.
We’re just using technology to verify that that person is live and present, and is not a deepfake or a synthetic ID. Because what we see a lot of is presentation attacks, where people use video footage grabbed from external sources, for example, to try and trick systems into believing they are actually live and present.
But we are able to detect these digital footprints and, using multiple sources and multiple techniques on the mobile phone that we build software for, detect that that person is live and present and is presenting the document they say they are in order to verify themselves.
Oscar: Thank you for explaining better how it works for users. So, it’s simple for users. It’s not more complicated.
Russ: Simple and seamless and quick as well. It’s not more complicated. It’s less complicated, in fact, right? So, when you’re presented with it – there has to be trust, of course, in the environment you’re operating in – you then provide your face to do that.
But ultimately, it’s safer and quicker, and ultimately more secure than any sort of biometric that they might have used previously.
Oscar: Yeah, it’s true. You mentioned it’s also faster. Sometimes, I think, being in front of these systems, you are waiting a little bit in front of the camera, right, until it processes.
Russ: Yeah, look, it depends on the speed and the connectivity in the region you’re in, and it might be the phone and your mobile network, for example. But we account for all of that in the software that we design in helping people to process that. So, we shoot like a live stream video, and we take the best shots out of about 100, 120 frames that we shoot out of that video. It’s a very quick two or three second capture, and we’re able to compare the best quality face to the document that’s presented in this process.
Now, we can account for age, facial degradation, loss of hair, glasses, et cetera because we are looking at the underlying structure of someone’s face when doing that. So, we’re 3D mapping essentially that person’s face, and are able to then tell against the original document that’s presented if that person is the same person.
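A simplified sketch of the capture-and-compare flow described here: score the frames from a short video burst, keep the best few, and compare a face embedding from them against the document photo. The quality and embedding functions are placeholders for real computer-vision models, and the threshold is an assumption, not IDVerse’s implementation.

```python
# Sketch of the capture flow: pick the best-quality frames from a short video
# burst and compare the face embedding to the one from the document photo.
# Quality/embedding functions are placeholders; the threshold is assumed.
import numpy as np

MATCH_THRESHOLD = 0.7  # assumed cosine-similarity threshold

def frame_quality(frame: np.ndarray) -> float:
    # Placeholder: a real system would score sharpness, pose, lighting, etc.
    return float(frame.var())

def face_embedding(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a face-recognition network here.
    vec = image.mean(axis=0)
    return vec / (np.linalg.norm(vec) + 1e-9)

def verify(frames: list, document_photo: np.ndarray) -> bool:
    # Keep a handful of the best frames out of the ~100-120 captured.
    best = sorted(frames, key=frame_quality, reverse=True)[:5]
    doc_vec = face_embedding(document_photo)
    # Accept if any of the best frames matches the document face closely enough.
    return any(float(face_embedding(f) @ doc_vec) >= MATCH_THRESHOLD for f in best)

frames = [np.random.rand(64, 64) for _ in range(120)]   # stand-in for video frames
document_photo = np.random.rand(64, 64)                  # stand-in for the ID photo
print(verify(frames, document_photo))
```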
And that you can’t do – it’s very hard to do – with humans, for example. And that’s why technology can do a lot of this lifting very, very quickly. We can do it in seconds and verify the person against very old, very aged documents, or against changes to their facial structure. And so, we’re very excited about how these techniques can verify people to the grade that I mentioned before.
Oscar: Yeah, indeed, it sounds like there’s a lot of innovation in what you are describing. So, looking at the future – what is the future of Generative AI in identity verification?
Russ: We’re excited about Gen AI’s ability to create these huge datasets of synthetic personas, because it’s going to help prevent fraudsters using the synthetically created people and documents that they create to trick and penetrate low-grade systems.
And the more people we can support, the more businesses we can get our technology into, the more we can stop the synthetic IDs and penetration attacks that are happening. And we’ve seen the velocity of these increase as better and better tools and faster processing times become available.
So, the ability to cover the identities of the world’s population through technology and creating inclusivity for all ethnicities, all genders, means that people can be granted access regardless of where they live, what device they’re using, what colour they are, what gender they are.
So, we’re very excited about how Gen AI can train and help people. And again, this is all ethically sourced data, right? We didn’t go and grab it elsewhere. It’s very hard to get in front of tens of millions of faces with variations of age and, again, colour, ethnicity, gender, et cetera.
So, Gen AI really helps us to do that. Then I think detection tools – developing and using advanced technology like Gen AI to detect this deepfake content – can be crucial to mitigating the potential harmful effects that might come from it. And authentication mechanisms – implementing strong authentication, like facial, can help, again, verify the identity of individuals and reduce the risk of impersonation.
So, trust has to be ensured and put in place there. And of course, eliminating frauds and scams – businesses and consumers fall victim to deepfake-based scams and others every day. For instance, a scammer can impersonate a company executive, as I said, and deceive employees into revealing sensitive information or even making financial transactions.
So, we want to stop fraud at the door. We want to stop fraud internally and externally. And we want to help protect businesses and their customers, whether they’re businesses or consumers, from the rising threat of synthetic identities and the scale of Generative AI in the hands of fraudsters.
Oscar: Sounds good. Final question, for all business leaders that are listening to us right now, what is the one actionable idea that they should write on their agendas today?
Russ: Yeah, look, there is a lot to choose from. I think the one action, in my opinion, is this – you’ve got to recognise that we’re living in a mobile-first world, right? And Gen AI solutions, as we’ve talked about, are surging.
So, the action I would take is take the time to speak to your fellow executives and to the teams and to the people inside your business and understand how identity is currently viewed in your approach to your people, your processes, your security, your products and your customers. Where I sit and where we sit, is we are seeing the velocity increase of identity usage across the world.
Governments are enforcing and implementing more and more identity standards in order to control obviously, governmental services. And so, it’s important that people think about identity for their own businesses. It’s going to become critical to protect them and their customers. They need to think about everything from employee onboarding, how well you know your employee and your customers.
And of course, ultimately, what we’re all achieving, or trying to achieve in digital is improving user experiences, anything from onboarding to account management, to customer services interaction. So, it’s everything that your customer, your employee might touch within your business, potentially has something to do with identity. And the better you know the people in your business and your customers, I think, the better positioned you’re going to be to be able to not only stop these threats but take advantage of beating your competition by staying ahead and knowing your customer much better.
Oscar: All right, thank you very much, Russ, for this very interesting conversation about how Generative AI is going to help us with identity verification now and in the future.
So, for the ones listening to us who would like to know more about you or get in touch with you, what are the best ways for that?
Russ: Yes, thank you again for the time and for letting me talk about something I’m very passionate about – and obviously we’re very passionate about fraud and, particularly, technology.
If they want to get a hold of me, I’m on LinkedIn, you know, Russ Cohn, C-O-H-N. IDVerse.com has a repository of amazing content and information and thought leadership around a lot of these areas, so please take your time to look across the site. And if you want to get in touch with us, there’s lots of ways to do that on the site.
So, look forward to seeing and speaking with anybody who’s interested in learning more about IDVerse and about – chatting about fraud and identity.
Oscar: Perfect. Again, thank you very much, Russ. And all the best.
Russ: Thank you, Oscar. Appreciate the time.
Thanks for listening to this episode of Let’s Talk About Digital Identity, produced by Ubisecure. Stay up to date with episodes at ubisecure.com/podcast or join us on Twitter @ubisecure using the hashtag #LTADI. Until next time.