OpenAI: Chinese agent used ChatGPT for smear ops • The Register
#Security

Privacy Reporter

OpenAI reveals Chinese law enforcement-linked user attempted to use ChatGPT for influence operations against Japanese PM and dissidents, including fake smear campaigns and psychological pressure tactics.

OpenAI has uncovered evidence that a ChatGPT user with ties to Chinese law enforcement attempted to use the AI chatbot to orchestrate smear campaigns against critics of the Chinese Communist Party, including Japan's first female prime minister.

According to OpenAI's latest report on malicious uses of its models, the user tried to leverage ChatGPT to plan a coordinated campaign targeting Sanae Takaichi, Japan's prime minister, after she criticized the CCP's human rights record in Inner Mongolia. The user's prompts included requests to post and amplify negative comments about Takaichi on social media and to create fake email accounts posing as foreign residents to send complaints to other Japanese politicians.

When ChatGPT refused these requests, the user reportedly turned to other companies' AI models to continue the effort. The same account later returned to ChatGPT, however, to generate status reports on what it termed "cyber special operations": covert influence campaigns designed to harass and silence critics of the Chinese government both at home and abroad.


The status reports revealed a sophisticated operation structure focused on five key areas: negative comments, immigration, living conditions, far-right links, and tariffs. One campaign included the hashtag #右翼共生者 ("right-wing symbiont"), which OpenAI found circulating in small numbers on X, the Japanese platform Pixiv, and Blogspot starting in late October 2025. Despite these efforts, the campaign failed to gain significant traction: YouTube videos drew single-digit views and social media posts showed minimal engagement.

Beyond social media manipulation, the user's activity documented in ChatGPT included more troubling tactics of psychological pressure and harassment. These operations targeted dissidents' mental health and their families, involved hacking livestreams, and included filing false reports against social media accounts using fabricated evidence.

In one particularly disturbing incident, the user created fake obituary notices and gravestone photos claiming that dissident Jie Lijian had died, then mass-posted these messages online. Another operation targeted activist Hui Bo (@huikezhen) on X, where the user filed thousands of reports against his tweets and created dozens of fake accounts using his likeness. By November 29, 2025, Hui's X account was restricted, with multiple fake accounts appearing in search results instead.

The operations also included coordinated smear campaigns involving fake sex scandal allegations against three dissidents. Online searches for these individuals produced multiple pieces of content across blogs, Reddit, YouTube, Tumblr, and Adobe's Behance platform that mirrored these false claims.

Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, characterized these activities as "modern transnational repression" that goes beyond simple digital trolling. "It's about trying to hit critics of the CCP with everything everywhere, all at once," Nimmo explained, noting that these operations are "well-resourced" and "meticulously planned."

This activity bears similarities to the earlier China-based influence operation known as "Spamouflage," which Meta's August 2023 threat report attributed to individuals connected to Chinese law enforcement, suggesting a pattern of state-linked digital influence operations.

The discovery highlights the evolving nature of influence operations and the challenges AI companies face in preventing their tools from being used for malicious purposes. While OpenAI has banned the accounts involved, the incident demonstrates how determined actors may use multiple platforms and tactics to achieve their objectives, moving from simple social media manipulation to more sophisticated forms of psychological pressure and harassment.

For users of AI tools, this case serves as a stark reminder that these platforms are not private spaces and can be monitored for malicious activity. The incident also underscores the importance of robust content moderation and threat detection systems as AI becomes increasingly integrated into both legitimate and malicious operations worldwide.
