The Intersection of AI, Voice Biometrics, and Call Center Fraud Risk

Artificial Intelligence (AI) has been getting a great deal of attention lately around deepfakes and their potential use in voice fraud. However, as with any technology, AI can be used for good or evil. Let’s start by taking a look at what AI is and clearing up a common misunderstanding around the topic.

According to Milind Borkar, Founder and CEO of Illuma, the public perception of AI is not always accurate. This is because people tend to think of just the most recent advancements instead of the long history of artificial intelligence. 

“There are certain aspects of AI that get a whole lot of coverage in the media, and then the general public associates AI with just that small section. Today that’s happening with generative AI, where you see things like synthetic voices or ChatGPT, where content is being generated on the fly. People are associating AI with just that niche. In reality, AI is a pretty broad area under which a variety of different subsections fall, and machine learning is one of those core components. Machine learning (ML) is simply machines being able to teach themselves without being directed by humans.”

AI and ML have been part of the technology driving voice biometrics innovation from the start. The algorithms and continuous learning systems that refine the accuracy of voice authentication are made possible by this technology. 

“Overall, AI has been a part of our journey since day one and will continue to be a critical component moving forward. We have leveraged key technology in that space to bring out a product that not only can be trained in what’s called an unsupervised manner, but then also is able to continue learning in a supervised manner in a production environment with our clients. This gives our clients a very easy way to deploy the technology with low effort, but continue creating improvements and enhancements for better performance after it’s been live over repeated call interactions.” 

In simple terms, the more the voice biometrics system practices authenticating callers, the more real-world feedback it gets for continuous refinement. As the Illuma Shield™ software conducts passive voice authentication over thousands of calls, it becomes faster and even more precise. Being able to trust a system that is continuously improving itself is a relief for contact center managers.
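To make that feedback loop concrete, here is a minimal Python sketch of the general pattern. It is not Illuma’s implementation; the class, the threshold, and the update rule are illustrative assumptions. A caller’s voiceprint is enrolled once, each call is scored against it, and outcomes later confirmed as genuine nudge the stored print so matching improves with use.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two voiceprint embeddings; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class VoiceprintMatcher:
    """Toy enrollment-and-verification loop (illustrative only).

    In production systems, embeddings come from a speaker encoder that is
    pretrained without labels (unsupervised) and then refined with labeled
    call outcomes (supervised), the pattern described above.
    """

    def __init__(self, threshold: float = 0.80):
        self.threshold = threshold
        self.enrolled = {}  # caller_id -> stored voiceprint embedding

    def enroll(self, caller_id: str, embedding: np.ndarray) -> None:
        self.enrolled[caller_id] = embedding / np.linalg.norm(embedding)

    def verify(self, caller_id: str, embedding: np.ndarray) -> bool:
        """Passive check: compare the live call to the stored voiceprint."""
        score = cosine_similarity(self.enrolled[caller_id], embedding)
        return score >= self.threshold

    def feedback(self, caller_id: str, embedding: np.ndarray, genuine: bool) -> None:
        """Supervised refinement: nudge the stored print toward calls that
        were later confirmed genuine, so accuracy improves over time."""
        if genuine:
            updated = 0.9 * self.enrolled[caller_id] + 0.1 * embedding
            self.enrolled[caller_id] = updated / np.linalg.norm(updated)
```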

“I feel like it’s amazing that this technology exists. Over my years working in a contact center and knowing this environment, you don’t know who you’re speaking to on the other side of the phone. Now with the amount of fraudsters out there, that is always in your mind. With Illuma Shield™ we KNOW we are talking with our members. It really eases your mind and improves the member experience.” – Ann Wright, Contact Center Manager at Financial Plus Credit Union

This is a great example of how AI and ML are used to protect account holders from criminal threats. But what about the use of generative AI to create deepfakes or voice clones that attempt to defeat voice authentication systems?

Illuma Is Driving AI Innovation to Fight Deepfake Threats

Illuma’s most recent, award-winning Finovate demo was directed at detecting and blocking voice cloning attacks, the latest threat putting financial accounts at risk. Borkar noted the prevalence of technology used for creating deepfakes and Illuma’s response. “It’s becoming easier and easier to replicate people’s voices with very readily available tools on the web. Our most recent demo showed Illuma’s latest enhancements in features and capabilities. By incorporating deepfake detection into our product, we can defend against the newest form of threat built on generative AI.”

You can watch the short Finovate demo below.
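While the inner workings of Illuma’s detector are not public, the general idea behind deepfake detection is to score audio for acoustic signatures that machine-generated speech tends to leave behind. The sketch below illustrates that idea with a single spectral statistic; the feature choice and threshold are assumptions for demonstration only, nowhere near a production-grade detector.

```python
import numpy as np

def spectral_flatness(audio: np.ndarray) -> float:
    """Geometric mean over arithmetic mean of the magnitude spectrum.
    Flatness is one of many acoustic cues explored in anti-spoofing
    research; by itself it is NOT a reliable deepfake detector."""
    spectrum = np.abs(np.fft.rfft(audio)) + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))

def looks_synthetic(audio: np.ndarray, threshold: float = 0.5) -> bool:
    """Toy decision rule: flag clips whose spectrum looks unnaturally
    smooth. Real detectors combine many learned features; this threshold
    is an arbitrary placeholder."""
    return spectral_flatness(audio) >= threshold
```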

Is Voice Cloning the Biggest Threat to Call Center Security?

While it should certainly be on the radar for financial institutions, voice cloning hasn’t yet become a commonly used fraud method against consumer-focused banking institutions. “We talk about the capabilities of AI in terms of what fraudsters can do with it, and clearly there are some really scary things possible with deepfakes and voice cloning. But a lot of the threats that banks and their customers see on a regular basis have nothing to do with artificial intelligence.”

As an example, banks and credit unions use Illuma Shield™ voice authentication to process a high volume of calls every day. According to Borkar, “We haven’t seen a real use of deepfakes in any of our contact center production deployments today. We believe that’s because it still requires more effort for fraudsters than simply going to the Dark Web, buying a bunch of stolen information, going through the call center KBA [knowledge-based authentication] process, and answering questions.”

A well-prepared fraudster can typically answer every security question with ease. If they sense that they will be stopped by one agent, they can hang up and try again, potentially reaching another agent who isn’t as alert that particular day. “That still continues to be the easiest way to penetrate, and that is the most common attack vector we hear our clients talk about.”

The truth is that account holders themselves are often at higher risk than the call center, as direct targets of fraud. “The scary part is that some of the higher value attacks are really happening directly on account holders. So these are credit union members or community bank customers where they’re answering the phone call on their personal cell phone and have very limited capability of being able to authenticate who that person is on the other end of the line.”

One common fraud trick is to call an account holder claiming to be a fraud specialist from their neighborhood financial institution. The caller tells the account holder that fraud was detected on an account and that they are helping to address it. The fraudster then says they are sending a one-time passcode (OTP) to the account holder’s phone and asks to have it read back.

What the fraudster is actually doing is trying to break into the account holder’s digital banking account by initiating an OTP password reset. The unsuspecting account holder has no idea. They simply read the code from the bank back to the fraudster, who enters it into the digital banking portal and then has full access to the account.
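To see why this scam works so well, it helps to look at a generic OTP reset flow. The sketch below is hypothetical; the function names, storage, and five-minute window are assumptions rather than any specific bank’s system. It shows the core weakness: the server verifies possession of the code, not the identity of the person who relays it.

```python
import secrets
import time

# Toy model of a generic digital banking password reset. The point: the
# server only checks that the correct code comes back; it cannot tell WHO
# read it aloud over the phone.

_pending = {}    # username -> (code, expiry timestamp)
_passwords = {}  # username -> current password

def start_password_reset(username: str) -> None:
    """The fraudster triggers this from the login page; the OTP is sent
    to the legitimate account holder's phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[username] = (code, time.time() + 300)  # valid for 5 minutes
    print(f"[SMS to account holder] Your one-time passcode is {code}")

def finish_password_reset(username: str, code: str, new_password: str) -> bool:
    """Whoever submits the correct code controls the account, even if the
    account holder read it aloud to a fake 'fraud specialist'."""
    expected, expires = _pending.pop(username, ("", 0.0))
    if time.time() < expires and secrets.compare_digest(code, expected):
        _passwords[username] = new_password  # fraudster's chosen password
        return True  # full access to the account
    return False
```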

“This is a very low tech approach. It doesn’t require generative AI or really anything besides a phone, a computer, and a login. These types of attacks are the most common ones that we see and they’re very difficult to protect against today outside of educating consumers.” 

What Should Financial Institutions Do About Fraud Threats?

Financial institutions can’t afford to ignore any fraud avenue, but they do need to weigh the probability and impact of the methods currently in use. While deepfake attacks are apparently not very common right now, that could change in the coming months or years, with potentially disastrous consequences. At the same time, known methods such as hacking and social engineering are clear and common attack vectors putting account holders at risk right now.

Borkar’s advice is to think holistically. Rather than looking for a point solution that addresses a single attack vector, banks and credit unions should take a broader approach, strengthening the arsenal to stay ahead of fraud no matter what form it takes.

Illuma’s approach is a three-pronged defense against fraud in general. “First is strong authentication. We have a solid biometric engine that looks for very precise matches of voice prints, which gives a very strong capability to reject the vast majority of threats that are out there today. We supplement that with deepfake detection, which we demonstrated at Finovate. The idea is that now you’re looking for specific signatures of machine-generated voice to add to your authentication decisions and scores. The third piece is adding multi-factor authentication. That’s because if someone was capable and sophisticated enough to get through the first two, getting through the third one as well becomes a nearly impossible task.”
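As a rough illustration of how such a layered decision might be wired together, consider the sketch below. It is a simplification built on assumptions, not Illuma’s actual scoring model; the signal names and thresholds are placeholders.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    voiceprint_score: float  # similarity to the enrolled voiceprint (0 to 1)
    deepfake_score: float    # likelihood the audio is machine-generated (0 to 1)
    mfa_passed: bool         # e.g., app push approval or one-time code

def authenticate(signals: CallSignals,
                 match_threshold: float = 0.85,
                 deepfake_threshold: float = 0.30) -> bool:
    """Layers are conjunctive: any single layer can reject the call."""
    if signals.voiceprint_score < match_threshold:
        return False  # layer 1: biometric match not strong enough
    if signals.deepfake_score >= deepfake_threshold:
        return False  # layer 2: synthetic-voice signature detected
    return signals.mfa_passed  # layer 3: step-up multi-factor check
```

Because the layers are conjunctive, a cloned voice that happens to fool the biometric match must still evade the deepfake detector and then defeat a separate authentication factor entirely.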

At the end of the day, financial institutions that make themselves difficult targets are most likely to discourage fraud attempts. Even when the business is a criminal enterprise like identity fraud, the people running it want the fastest ROI on their investment of time and resources. No system for stopping fraud is 100% foolproof. However, a layered approach to call center security can make it very painful and unattractive for identity thieves.

Contact us to learn how Illuma can help you build a stronger fraud defense.