OpenAI removes users in China, North Korea suspected of malicious activities

By Anna Tong

(Reuters) – OpenAI has removed accounts of users from China and North Korea who the artificial intelligence company believes were using its technology for malicious purposes, including surveillance and opinion-influence operations, the ChatGPT maker said on Friday.

The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations. 

The company gave no indication of how many accounts were banned or over what time period the action occurred.

In one instance, users had ChatGPT generate news articles in Spanish that denigrated the United States and were published by mainstream news outlets in Latin America under a Chinese company’s byline.

In a second instance, malicious actors potentially connected to North Korea used AI to generate resumes and online profiles for fictitious job applicants, with the goal of fraudulently getting jobs at Western companies.  

Another set of ChatGPT accounts that appeared to be connected to a financial fraud operation based in Cambodia used OpenAI’s technology to translate and generate comments across social media and communication platforms including X and Facebook.

The U.S. government has expressed concerns about China’s alleged use of artificial intelligence to repress its population, spread misinformation and undermine the security of the United States and its allies. 

OpenAI’s ChatGPT is the most popular AI chatbot, and the company’s weekly active users have surpassed 400 million. It is in talks to raise up to $40 billion at a $300 billion valuation, in what could be a record single funding round for a private company.

(Reporting by Anna Tong in San Francisco; Editing by Cynthia Osterman)