
AI Breakthrough Makes Live Voice Cloning a New Cyber Threat


Cybersecurity experts have raised alarms over a new generation of artificial intelligence systems that can clone voices and speak in real time, creating serious risks for identity security and fraud prevention.

Researchers at NCC Group demonstrated that open-source AI tools combined with standard computer hardware can now generate live, convincing voice deepfakes with very little delay. The method, called “deepfake vishing” (voice phishing), allows attackers to mimic anyone during a live conversation.

The system needs only a few short voice samples to create an imitation. During tests, the AI produced realistic speech with less than half a second of delay, even on a laptop with a mid-range Nvidia RTX A1000 GPU. Because the technology runs on ordinary laptops or smartphones, it is now within easy reach of criminals.

In past years, voice deepfakes required long recordings and were limited to pre-recorded clips, which made them less flexible. The new real-time models can respond instantly, removing the awkward pauses that once exposed fake voices.

Pablo Alobera, Managing Security Consultant at NCC Group, said that in controlled tests with client consent, combining these real-time deepfakes with caller ID spoofing tricked targets in nearly every case. The success rate shows how difficult it is to detect impersonation over the phone.

Experts say video deepfakes have not yet reached the same level of realism. Even advanced systems such as Alibaba’s Wan 2.2 Animate and Google’s Gemini 2.5 Flash Image struggle to match tone, emotion, and facial movement in live settings.

However, the growing accessibility of AI tools means both voice and video deepfakes could soon be used for scams, fraud, or misinformation. Cybersecurity analyst Trevor Wiseman warned that simple phone or video calls can no longer be trusted for identity verification.

Wiseman recommends that companies and individuals adopt unique verification codes or gestures—similar to secret signals in sports—to confirm identities during remote communication. Without such safeguards, he says, people will remain vulnerable to increasingly sophisticated AI-driven deception.
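As a rough illustration of the kind of safeguard Wiseman describes, a pre-agreed secret can be turned into a short, time-limited code that both parties can check during a call. The sketch below is an assumption on our part, not a scheme NCC Group or Wiseman endorses; it uses a TOTP-style approach with Python's standard `hmac` module, and the secret value and window length are purely illustrative.

```python
import hmac
import hashlib
import time

def generate_challenge_code(shared_secret: bytes, window_seconds: int = 60) -> str:
    """Derive a short code from a secret agreed in advance, valid for one time window."""
    window = int(time.time() // window_seconds)
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).hexdigest()
    return digest[:6]  # short enough to read aloud on a call

def verify_challenge_code(shared_secret: bytes, spoken_code: str, window_seconds: int = 60) -> bool:
    """Accept the code for the current or immediately previous window (clock drift)."""
    for offset in (0, 1):
        window = int(time.time() // window_seconds) - offset
        expected = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).hexdigest()[:6]
        if hmac.compare_digest(expected, spoken_code):
            return True
    return False

# Both parties would exchange this secret ahead of time, out of band.
secret = b"pre-shared-secret-exchanged-in-person"
code = generate_challenge_code(secret)
print(verify_challenge_code(secret, code))      # genuine caller
print(verify_challenge_code(secret, "000000"))  # impostor guessing
```

The point of the time window is that a recorded or cloned voice replaying an old code fails verification once the window expires, which is exactly the gap a live deepfake would try to exploit.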

Written by
Sazid Kabir

