OpenAI on Wednesday said it has disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year.
This activity encompassed debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.
“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the artificial intelligence (AI) company said.
It also said it disrupted activity that generated social media content related to elections in the U.S., Rwanda, and to a lesser extent India and the European Union, and that none of these networks attracted viral engagement or sustained audiences.
This included efforts undertaken by an Israeli commercial company named STOIC (also dubbed Zero Zeno) that generated social media comments about Indian elections, as previously disclosed by Meta and OpenAI this May.
Some of the cyber operations highlighted by OpenAI are as follows –
- SweetSpecter, a suspected China-based adversary that used its AI models for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. It has also been observed conducting unsuccessful spear-phishing attempts against OpenAI employees to deliver the SugarGh0st RAT.
- Cyber Av3ngers, a group affiliated with the Iranian Islamic Revolutionary Guard Corps (IRGC) that used its AI models to conduct research into programmable logic controllers.
- Storm-0817, an Iranian threat actor that used its AI models to debug Android malware capable of harvesting sensitive information, tooling to scrape Instagram profiles via Selenium, and translating LinkedIn profiles into Persian.
Elsewhere, the company said it took steps to block several clusters of accounts, including two related to influence operations codenamed A2Z and Stop News, that generated English- and French-language content for subsequent posting on a number of websites and social media accounts across various platforms.
“[Stop News] was unusually prolific in its use of imagery,” researchers Ben Nimmo and Michael Flossman said. “Many of its web articles and tweets were accompanied by images generated using DALL·E. These images were often in cartoon style, and used bright color palettes or dramatic tones to attract attention.”
Two other networks identified by OpenAI, Bet Bot and Corrupt Comment, have been found to abuse its API: the former generated conversations with users on X and sent them links to gambling sites, while the latter manufactured comments that were then posted to X.
The disclosure comes nearly two months after OpenAI banned a set of accounts linked to an Iranian covert influence operation called Storm-2035 that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election.
“Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity — after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet via a range of distribution channels,” Nimmo and Flossman wrote.
Beyond the misuse of generative AI for fraud and deepfake operations, cybersecurity company Sophos said in a report published last week that the technology could also be abused to disseminate tailored misinformation by means of microtargeted emails.
This entails abusing AI models to concoct political campaign websites, AI-generated personas across the political spectrum, and email messages tailored to recipients based on specific campaign talking points, thereby allowing for a new level of automation that makes it possible to spread misinformation at scale.
“This means a user could generate anything from benign campaign material to intentional misinformation and malicious threats with minor reconfiguration,” researchers Ben Gelman and Adarsh Kyadige said.
“It is possible to associate any real political movement or candidate with supporting any policy, even if they don’t agree. Intentional misinformation like this can make people align with a candidate they don’t support or disagree with one they thought they liked.”