
AI Breakthrough Makes Live Voice Cloning a New Cyber Threat


Cybersecurity experts have raised alarms over a new generation of artificial intelligence systems that can clone voices and speak in real time, creating serious risks for identity security and fraud prevention.

Researchers at NCC Group demonstrated that open-source AI tools combined with standard computer hardware can now generate live, convincing voice deepfakes with very little delay. The method, called “deepfake vishing” (voice phishing), allows attackers to mimic anyone during a live conversation.

The system needs only a few short voice samples to create an imitation. During tests, the AI produced realistic speech with less than half a second of delay, even on a laptop with a mid-range Nvidia RTX A1000 GPU. This means the technology can run on ordinary consumer hardware, making it easier for criminals to use.

In past years, voice deepfakes required long recordings and were limited to pre-recorded clips, which made them less flexible. The new real-time models can respond instantly, removing the awkward pauses that once exposed fake voices.

Pablo Alobera, Managing Security Consultant at NCC Group, said that in controlled tests with client consent, combining these real-time deepfakes with caller ID spoofing tricked targets in nearly every case. The success rate shows how difficult it is to detect impersonation over the phone.

Experts say video deepfakes have not yet reached the same level of realism. Even advanced systems like Alibaba's WAN 2.2 Animate and Google's Gemini 2.5 Flash Image struggle to match tone, emotion, and facial movement in live settings.

However, the growing accessibility of AI tools means both voice and video deepfakes could soon be used for scams, fraud, or misinformation. Cybersecurity analyst Trevor Wiseman warned that simple phone or video calls can no longer be trusted for identity verification.

Wiseman recommends that companies and individuals adopt unique verification codes or gestures—similar to secret signals in sports—to confirm identities during remote communication. Without such safeguards, he says, people will remain vulnerable to increasingly sophisticated AI-driven deception.
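One way to implement such a safeguard in software is a time-based challenge code derived from a pre-shared secret, similar in spirit to TOTP two-factor codes: both parties compute a short code from the secret and the current time window, and speak it aloud to confirm identity. The sketch below is illustrative only and is not from the article; the function names and the six-character code length are assumptions.

```python
import hashlib
import hmac
import time


def verification_code(shared_secret: bytes, window_seconds: int = 60) -> str:
    """Derive a short spoken code from a pre-shared secret and the current time window."""
    window = int(time.time()) // window_seconds
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).hexdigest()
    return digest[:6]  # six hex characters, easy to read aloud on a call


def verify(shared_secret: bytes, spoken_code: str, window_seconds: int = 60) -> bool:
    """Check a spoken code, accepting the current or previous window for clock drift."""
    now = int(time.time()) // window_seconds
    for w in (now, now - 1):
        digest = hmac.new(shared_secret, str(w).encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(digest[:6], spoken_code):
            return True
    return False
```

Because the code depends on a secret that was exchanged out of band, a voice clone cannot produce it no matter how convincing the speech sounds; the caller's voice alone is never treated as proof of identity.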

Written by
Sazid Kabir

