By Mauro Orru
OpenAI said it had disrupted several attempts to use its artificial-intelligence models for cyber threats and covert influence operations that likely originated in China, underscoring the security challenges AI poses as the technology becomes more powerful.
The Microsoft-backed company on Thursday published its latest report on malicious uses of AI, saying its investigative teams continued to uncover and prevent illicit activities in the three months since Feb. 21.
While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin.
In one such case, the company said it banned ChatGPT accounts that it said were using OpenAI's models to generate social-media posts for a covert influence operation. One user stated in a prompt that they worked for China's propaganda department, the company said, though it cautioned that it had no independent proof to verify the claim.
Liu Pengyu, spokesman at the Chinese Embassy in Washington, said in a statement that China always adheres to the principle of developing AI for good and that it opposes theft, tampering, leaking and other illegal collection and use of personal information.
"Cyberspace is highly virtual, difficult to trace, and has diverse actors," he said. "We hope that relevant parties will adopt a professional and responsible attitude and base their characterization of cyber incidents on sufficient evidence rather than groundless speculation and accusations."
OpenAI's policies prohibit use of its popular AI chatbot and models to assist with fraud, scams or cyberattacks. The company regularly suspends ChatGPT accounts it says are in breach of its rules.
The release of ChatGPT to the public in late 2022 ushered in a wave of investments from companies willing to bet on a technology that is changing the way people do research, learn and work. The technology's growing capabilities, however, have also raised concerns about its weaponization for fraud, influence operations and other illicit activities.
OpenAI said in a March letter to the U.S. Office of Science and Technology Policy that AI needs common-sense rules to shield users, and that the company was committed to preventing authoritarian regimes from using its models to amass power, threaten or coerce other states, or carry out covert influence operations and malicious cyber activity.
News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI.
Write to Mauro Orru at mauro.orru@wsj.com
(END) Dow Jones Newswires
June 05, 2025 15:12 ET (19:12 GMT)
Copyright (c) 2025 Dow Jones & Company, Inc.