
OpenAI Cracks Down on Malicious AI Use, Bans Accounts Linked to Scams


OpenAI has recently stepped up efforts to combat the malicious use of its ChatGPT tool, banning multiple user accounts involved in fraudulent activities.

The crackdown targets scammers and other malicious actors who use AI to generate harmful content, write fake resumes, and run scam networks.

AI Use for Scams and Fraudulent Activities

In a new report, OpenAI said scammers have been using ChatGPT for a range of malicious activities, from debugging code to generating content for their schemes.

With ChatGPT surpassing 400 million weekly active users, its accessibility has made it a prime tool for malicious use.

Among the cases OpenAI uncovered, one involved a banned account that generated fake news articles denigrating the U.S., which were published under a Chinese company’s byline in Latin American media outlets.

Another involved accounts reportedly linked to North Korea, which used AI to generate fictitious resumes in an attempt to infiltrate Western companies.

Romance Scams and Social Media Manipulation

OpenAI also reported identifying accounts, likely connected to Cambodia, that used ChatGPT to translate and write comments for a “romance baiting” scam run across social media platforms such as X, Facebook, and Instagram.

These scams often prey on unsuspecting individuals, leading to financial exploitation and emotional harm.

OpenAI’s Commitment to Safety and Integrity

In response to these threats, OpenAI has implemented stricter measures to prevent its tools from being used in fraudulent schemes. The company emphasizes that its policies strictly prohibit the use of its output for scams or fraud.

By sharing insights with partners like Meta, OpenAI is working to enhance its ability to detect and prevent malicious activities across platforms, ensuring the safe use of AI technologies.

While OpenAI’s recent bans reflect its commitment to preventing AI misuse, the company continues to monitor and investigate potential threats to maintain the integrity of its tools.

Written by
Sazid Kabir

