AI Breakthrough Makes Live Voice Cloning a New Cyber Threat

Cybersecurity experts have raised alarms over a new generation of artificial intelligence systems that can clone voices and speak in real time, creating serious risks for identity security and fraud prevention.

Researchers at NCC Group demonstrated that open-source AI tools combined with standard computer hardware can now generate live, convincing voice deepfakes with very little delay. The method, called “deepfake vishing” (voice phishing), allows attackers to mimic anyone during a live conversation.

The system needs only a few short voice samples to create an imitation. During tests, the AI produced realistic speech with less than half a second of delay, even on a laptop with a mid-range Nvidia RTX A1000 GPU. This means the technology can run on ordinary laptops or smartphones, making it easier for criminals to use.

In past years, voice deepfakes required long recordings and were limited to pre-recorded clips, which made them less flexible. The new real-time models can respond instantly, removing the awkward pauses that once exposed fake voices.

Pablo Alobera, Managing Security Consultant at NCC Group, said that in controlled tests with client consent, combining these real-time deepfakes with caller ID spoofing tricked targets in nearly every case. The success rate shows how difficult it is to detect impersonation over the phone.

Experts say video deepfakes have not yet reached the same level of realism. Even advanced systems like Alibaba’s WAN 2.2 Animate and Google’s Gemini Flash 2.5 Image struggle to match tone, emotion, and facial movement in live settings.

However, the growing accessibility of AI tools means both voice and video deepfakes could soon be used for scams, fraud, or misinformation. Cybersecurity analyst Trevor Wiseman warned that simple phone or video calls can no longer be trusted for identity verification.

Wiseman recommends that companies and individuals adopt unique verification codes or gestures—similar to secret signals in sports—to confirm identities during remote communication. Without such safeguards, he says, people will remain vulnerable to increasingly sophisticated AI-driven deception.

Written by
Sazid Kabir

