What Credit Unions Need to Know about Generative AI and Voice Cloning
Credit unions are learning more every day about AI, but there is still a lot of confusion about what it means for the financial sector. Is AI a good thing or a bad thing? What is generative AI, and how will it affect credit unions and community banks? What risks does it pose in the hands of bad actors? In particular, will fraudsters be able to use generative AI or “voice cloning” to defeat security protocols and access member or customer accounts?
These are all valid questions, and the technical answers can be complicated. In this article, we’ll cover what you need to know to make informed decisions about protecting those you serve against the misuse of generative AI and “deep fake” technology. Here are the most common questions banking call center leaders ask about AI and its impact on account security.
#1 How Easy Is It to Clone a Voice Using AI?
Making a short clip of a pre-recorded voice that mimics an individual is pretty easy these days. But making a voice clone that sounds EXACTLY like the real person it is imitating is still surprisingly difficult, and it becomes even harder when you are trying to replicate a target’s voice in a natural, dynamic conversation.
This is why the notorious examples of bank accounts being hacked with voice clones involve a person cloning their own voice to defeat active voice authentication, where a known phrase such as “my voice is my password” is used for verification. In this scenario, the person has unlimited access to samples of their own voice to feed the generative AI model. In addition, they only have to create a short clip and play it back once to fool an automated system.
The ability to create “true to life” voice clones of random individuals depends on the quality and quantity of the available voice samples and on the quality of the software used. Most readily available generative AI software for voice cloning does a poor job of perfectly replicating a target’s voice. Many synthetic replicas sound robotic and can be spotted by a live agent or by a voice authentication system.
We (and our clients) are actively using generative AI and voice cloning to test our own voice verification system for vulnerabilities. So far, even the most sophisticated tools we have tried have produced voice clones that our system effectively blocked.
To understand the current state of generative AI and deep fakes or voice clones, it’s important to understand what the technology is and why it was developed. Deep fakes sound scary, but the original purposes for which generative AI was built are usually good.
Here are examples of positive uses for voice cloning:
- Reducing the cost and time to make films or voice overs for audio productions
- Creating personalized digital assistants that are pleasant to use for daily tasks
- Restoring digital voices for those who have lost the ability to communicate with their voice due to injury or illness
- “Bringing back” loved ones who have passed on so their voices can be remembered and cherished
The negative uses, like creating fake news or engaging in financial crime, get a lot of attention. However, most generative AI tools weren’t built with those use cases in mind, and they aren’t optimized to defeat security protocols. That’s good news for credit unions that are concerned about the threat of voice cloning.
Fraudsters have to work hard to use existing tools to make plausible voice clones at scale that can defeat security protocols like voice biometric technology. They also have to find effective ways to acquire the high quality voice samples on which the clones are based. All of this requires investment, and most thieves are looking for ways to steal that don’t require a lot of time, effort, and money.
#2 Is Biometric Voice Authentication Really Secure in the Era of Deep Fakes and Voice Cloning?
Biometric voice authentication works by comparing a person’s voiceprint to a previously captured sample and scoring how closely the two match. As mentioned before, active voice authentication matches the voiceprint of a predetermined phrase such as “My voice is my password” that a caller repeats during an interaction with an application such as an IVR.
Passive voice authentication, on the other hand, compares voiceprints or AudioPrints™ based on the unique characteristics of a person’s voice in natural conversation. These more sophisticated voice verification systems use algorithms for conversational voice matching rather than relying on a direct match to a specific predefined phrase.
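To make the scoring idea concrete, here is a minimal sketch of score-based voiceprint comparison in Python. This is not any vendor’s production algorithm; it assumes a hypothetical upstream model has already converted audio into fixed-length speaker embeddings, and the 0.75 threshold is an arbitrary placeholder.

```python
# Minimal sketch of score-based voice verification (illustrative only).
# Assumes a hypothetical upstream model has produced fixed-length
# speaker embeddings for both the enrollment and the live call.
import numpy as np

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two speaker embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(call_embedding: np.ndarray,
           enrolled_voiceprint: np.ndarray,
           threshold: float = 0.75) -> bool:
    # The caller passes only if their conversational speech scores
    # above a calibrated threshold against their own enrollment.
    return cosine_score(call_embedding, enrolled_voiceprint) >= threshold
```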
Here are three reasons we still have a very high level of confidence in the security of properly designed and calibrated passive voice authentication.
Reason #1: Current voice cloning tools are not well suited to replicating voices in dynamic conversational interactions like those encountered in a live financial institution (FI) contact center engagement. Synthesizing each new sentence takes several seconds to process, and the resulting lag makes it obvious to a contact center agent that something suspicious is going on.
Reason #2: Voice ID systems are score based. In our lab tests, real people’s voices scored much higher than AI-generated clone voices when compared against the real person’s AudioPrint™ enrollment. Careful calibration ensures that AI voices have an extremely low probability of getting past the Voice ID system while real voices have a much higher probability of passing through.
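As a hedged illustration of what that calibration step might look like, the sketch below picks a score threshold from lab-collected samples. The variable names and the 0.1% target acceptance rate for clones are assumptions for the example, not actual tuning parameters.

```python
# Illustrative threshold calibration from lab score samples.
# genuine_scores: real callers scored against their own enrollments.
# clone_scores: synthetic voices scored against the same enrollments.
import numpy as np

def calibrate_threshold(genuine_scores, clone_scores,
                        target_clone_accept_rate=0.001):
    # Set the threshold so that at most ~0.1% of clone attempts
    # would score above it (hypothetical target rate).
    threshold = float(np.quantile(clone_scores, 1.0 - target_clone_accept_rate))
    # Report how many genuine callers would still pass at that threshold.
    genuine_pass_rate = float(np.mean(np.asarray(genuine_scores) >= threshold))
    return threshold, genuine_pass_rate
```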
Reason #3: The quality of a synthetic voice depends on the duration and quality of the original recordings fed into these tools. Acquiring high quality voice recordings of a typical credit union member or community bank customer requires additional investment on the part of the fraudster, which significantly reduces the ROI of using these tools in an account takeover (ATO) attempt. Voice authentication algorithms that include characteristics of the caller’s device as well as their voice add yet another layer of protection. It is much easier for the fraudster to go after an FI that is not using a voice ID system.
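One simple way to picture that device layer, purely as a sketch: require a plausible voice match first, then let device evidence raise or lower the overall confidence. The thresholds and the source of `device_score` here are assumptions for illustration, not a description of any specific product.

```python
# Hedged sketch of layering a device check on top of the voice score.
# Both thresholds are hypothetical; device_score might come from
# carrier metadata, SIP headers, or phone-print features.
def authenticate(voice_score: float, device_score: float,
                 voice_threshold: float = 0.75,
                 combined_threshold: float = 1.4) -> bool:
    # Never pass a caller on device evidence alone.
    if voice_score < voice_threshold:
        return False
    # A familiar device adds confidence; an unknown one subtracts it.
    return (voice_score + device_score) >= combined_threshold
```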
#3 How Can I Protect My Credit Union Call Center from Voice Cloning and Deep Fake Attacks?
For most credit unions, the alternative to using voice authentication for account security is relying on individual agents to weed out fraud using security Q&A and their own ears to detect deep fakes. Out-of-wallet questions are no longer a valid first line of defense; it is far too easy for fraudsters to buy, steal, or socially engineer their way through traditional security Q&A.
What about generic fake voices that aren’t clones of a specific person? Creating a handful of synthetic voices that match a caller’s stored demographics (e.g., a 40-year-old male), paired with correct answers to security Q&A, would be much simpler than trying to create a replica of each individual’s voice. An agent might be fooled by this type of generic fake voice if it sounds similar to what they expect based on the caller’s profile in their core banking system. However, these attacks will not defeat a passive voice ID system that is looking for a close match to an enrolled caller’s voice.
When it comes to distinguishing a fake voice from the real caller, it’s better to be equipped with finely tuned voice matching technology than to rely on the human ear. Contact center agents and IVR systems assisted by voice authentication technology are better positioned to combat this type of AI mimicry than those fighting the battle without voice ID. For the best security, we recommend working with your voice ID solution provider to make sure your voice verification system is calibrated for strong rejection of synthetic voices.
This doesn’t mean voice biometrics must be the ONLY verification method for account takeover prevention. For high value/high risk transactions, it also makes sense to layer in additional authentication methods to protect against fraud losses.
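As a rough sketch of what that layering could look like in policy code, the example below steps up to a second factor for high-value requests. The dollar cutoff and factor names are hypothetical placeholders, not recommended values.

```python
# Illustrative step-up authentication policy (all values hypothetical).
def required_factors(transaction_amount: float, voice_verified: bool) -> list[str]:
    factors = []
    if not voice_verified:
        # No voice match: fall back to a manual agent review.
        factors.append("agent_security_review")
    if transaction_amount >= 10_000:  # hypothetical high-risk cutoff
        # High-value request: add an out-of-band confirmation,
        # e.g., a push notification or one-time passcode.
        factors.append("out_of_band_confirmation")
    return factors
```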
What’s the final analysis? Financial institutions should be prepared to fight technology with technology. Don’t leave agents to face fraudsters and generative AI threats alone. Instead, equip them with state-of-the-art fraud prevention tools to protect both agents and account holders from evolving threats.
To learn more about the power of voice authentication for fraud prevention, request a demo today.