Earlier this year, Microsoft announced a new artificial intelligence (AI) system that can reproduce a person’s voice after listening to them for just three seconds.
This was a sign that AI could be used to convincingly replicate key parts of someone’s identity.
The original demonstration included audio clips: a person's three-second voice prompt entered into the system, followed by what the AI, known as VALL-E, generated when asked to reproduce that person's voice saying a new phrase.
Journalist Joseph Cox later reported using similar AI technology to access a bank account with an AI-replicated version of his own voice.
In March, Guardian Australia journalist Nick Evershed prompted concern among some security experts when he revealed he had been able to use an AI clone of his own voice to access his Centrelink self-service account.
Voice cloning is already being exploited by scammers, but it’s not the only way criminals can take advantage of AI.
Here’s how the technology is being used, and how best to protect yourself.
AI can reproduce anyone’s voice
The Guardian’s research showed that the “voiceprint” security system used by Centrelink and the Australian Taxation Office (ATO), which relies on the phrase “In Australia, my voice identifies me”, can be fooled by an AI-generated clone of a person’s voice, suggesting the system has a serious security flaw.
It was reminiscent of the scene in the 1992 film Sneakers in which Robert Redford’s character plays back a recording of someone’s voice to get past a security checkpoint.
Services Australia said in its 2021-22 annual report that voice biometric authentication was used to verify more than 56,000 calls per day, accounting for 39% of calls to Centrelink’s main business numbers. It also said that voiceprints are “as secure as fingerprints”.
The ATO says, “It would be very difficult for someone else to imitate your voiceprint and access your personal information.”
Dr Lisa Given, a professor of information science at RMIT University, says AI-generated voices can also trick people into believing they are talking to someone they know.
“If a system can reasonably copy my voice and add a level of empathy, scammers could move from sending a text that says ‘Mom, I lost my phone’ to making a phone call or leaving a voicemail that actually sounds like that person,” she says.
Last month, the US Federal Trade Commission warned consumers about scammers using AI-generated voice clones in fake family emergency calls, and the FBI has issued warnings about virtual kidnapping scams.
These concerns have led experts to suggest some basic tactics people can use to protect themselves from voice cloning:
- Call the friend or family member directly to confirm their identity, or agree on a safe word to say over the phone that confirms a genuine emergency
- Be wary of unexpected calls, even from people you know, as caller ID can be spoofed
- Take care if you are asked to share personally identifiable information such as your address, date of birth or middle name
Mark Gorrie, Asia-Pacific managing director at cybersecurity software company Gen Digital, says AI voice generators are getting better at fooling both humans and security systems.
“For years it was easy to spot ‘robo-scams’ just from the sound of the voice,” he says. “But voice-based AI has improved, and the text it uses has definitely improved too.”
Scammers are tricking people with AI-generated text and fake product reviews
As AI systems improve, large language models (LLMs) such as the one behind OpenAI’s popular chatbot ChatGPT are becoming better at emulating human-like responses. This is what scammers try to replicate in the emails, text messages and chatbots they create themselves.
“The empathy and social cues that we as humans rely on in building relationships are exactly the kinds of tricks fraudsters can use and build into their systems,” says Given.
Scammers are using AI in phishing scams, which typically involve emails or text messages that claim to come from a legitimate source but use social engineering to obtain personal information. Some messages also contain links designed to direct you to malicious websites.