Cybersecurity experts have raised alarms over a new generation of artificial intelligence systems that can clone voices and speak in real time, posing serious risks to identity security and undermining fraud prevention.
Researchers at NCC Group demonstrated that open-source AI tools combined with standard computer hardware can now generate live, convincing voice deepfakes with very little delay. The method, called “deepfake vishing” (voice phishing), allows attackers to mimic anyone during a live conversation.
The system needs only a few short voice samples to create an imitation. During tests, the AI produced realistic speech with less than half a second of latency, even on a laptop with a mid-range Nvidia RTX A1000 GPU. This means the technology can run on ordinary laptops or smartphones, lowering the barrier for criminal use.
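To make that latency constraint concrete, here is a minimal sketch of the kind of streaming loop such a system has to sustain. The `clone_tts` synthesizer below is a hypothetical stand-in stub, not NCC Group's actual tooling; the point is simply that audio must be generated in small chunks faster than it plays back, or the cloned voice falls behind the conversation.

```python
import time

CHUNK_SECONDS = 0.2   # synthesize speech in 200 ms chunks
SAMPLE_RATE = 16_000  # typical telephony-quality sample rate

def clone_tts(text_chunk: str) -> bytes:
    """Stand-in for a real voice-cloning model (hypothetical).

    A genuine system would return CHUNK_SECONDS of 16-bit PCM audio
    in the target speaker's voice; here we fabricate silence so the
    sketch runs anywhere with no model or GPU.
    """
    n_samples = int(CHUNK_SECONDS * SAMPLE_RATE)
    return b"\x00\x00" * n_samples  # silent placeholder audio

def stream_reply(reply_text: str) -> None:
    """Synthesize a reply chunk by chunk, checking the real-time budget.

    For a live call, each chunk must be produced in less than its own
    playback duration; otherwise the voice stalls mid-sentence.
    """
    words = reply_text.split()
    # Naive chunking: a few words per synthesis call (illustrative only).
    for i in range(0, len(words), 3):
        chunk_text = " ".join(words[i : i + 3])
        start = time.perf_counter()
        audio = clone_tts(chunk_text)
        elapsed = time.perf_counter() - start
        status = "within" if elapsed < CHUNK_SECONDS else "OVER"
        print(f"{chunk_text!r}: {elapsed * 1000:.1f} ms ({status} real-time budget)")
        # In a live system this audio would be routed to the call's
        # output device instead of being discarded.

if __name__ == "__main__":
    stream_reply("Hi, it is me, can you read me the code you just received")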
Until recently, voice deepfakes required long recordings and were limited to pre-recorded clips, which made them far less flexible. The new real-time models respond instantly, removing the awkward pauses that once gave fake voices away.
Pablo Alobera, Managing Security Consultant at NCC Group, said that in controlled tests with client consent, combining these real-time deepfakes with caller ID spoofing tricked targets in nearly every case. The success rate shows how difficult it is to detect impersonation over the phone.
Experts say video deepfakes have not yet reached the same level of realism. Even advanced systems such as Alibaba's WAN 2.2 Animate and Google's Gemini 2.5 Flash Image struggle to match tone, emotion, and facial movement in live settings.
However, the growing accessibility of AI tools means both voice and video deepfakes could soon be used for scams, fraud, or misinformation. Cybersecurity analyst Trevor Wiseman warned that simple phone or video calls can no longer be trusted for identity verification.
Wiseman recommends that companies and individuals adopt unique verification codes or gestures, similar to secret signals in sports, to confirm identities during remote communication. Without such safeguards, he says, people will remain vulnerable to increasingly sophisticated AI-driven deception.
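One concrete way to implement such a check is a shared time-based code, so both parties can read out and compare a number an impostor cannot know. The sketch below is an illustrative minimal implementation of the standard TOTP scheme (RFC 6238) using only Python's standard library; it is one possible form a pre-arranged verification code could take, not a specific tool Wiseman endorses.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a pre-shared secret (RFC 6238).

    Both parties provision the same secret in advance, ideally in person.
    During a call, each computes the current code and reads it aloud;
    a mismatch means the voice on the line may not be who it claims.
    """
    key = base64.b32decode(shared_secret_b32)
    # Number of `interval`-second windows since the Unix epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation, as specified in RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example secret only; real deployments generate a random one per pair.
    SECRET = "JBSWY3DPEHPK3PXP"
    print("Current verification code:", totp(SECRET))
```

Low-tech variants, such as a code word exchanged in advance or an agreed gesture on video, achieve the same goal without any tooling; the essential property is that the secret is shared out of band, where an AI impersonator cannot observe it.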
