225K+ ChatGPT Credentials on Dark Web for Sale

A recent report by Group-IB has revealed concerning statistics on compromised ChatGPT credentials. Between January and October 2023, more than 225,000 stealer logs containing such credentials were discovered on underground markets. The credentials were identified in logs produced by information-stealer malware such as LummaC2, Raccoon, and RedLine.

Dynamics of Infected Devices with Access to OpenAI’s ChatGPT

According to Group-IB’s Hi-Tech Crime Trends 2023/2024 report, the number of infected devices fluctuated noticeably throughout the year: it dipped slightly in mid- and late summer, then surged between August and September. During that period, more than 130,000 unique hosts with access to ChatGPT were compromised, a 36% increase over the preceding months. Of the three stealer families, LummaC2 accounted for the most infiltrated hosts (70,484), followed by Raccoon (22,468) and RedLine (15,970).

The surge in compromised ChatGPT credentials is attributed to the overall rise in the number of hosts infected with information stealers. The stolen data is subsequently put up for sale on underground markets or in underground clouds of logs (UCLs), highlighting the growing threat posed by cybercriminals.




Concerns Over the Exploitation of Large Language Models in Cyber Operations

In addition to the compromised ChatGPT credentials, there are broader concerns regarding the use of large language models (LLMs) by malicious actors. Microsoft and OpenAI have warned that nation-state actors from Russia, North Korea, Iran, and China are increasingly utilizing AI and LLMs to enhance their cyber attack operations. These technologies can aid adversaries in various malicious activities, including crafting convincing scams and phishing attacks, improving operational efficiency, and accelerating reconnaissance efforts.

Threat actors are now targeting devices with access to public AI systems, leveraging the communication history between employees and these systems to search for confidential information, details about internal infrastructure, authentication data, and even application source code.
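
For context only (this is not something the report prescribes), a team could let employees audit their own exported ChatGPT history for material that should never have been pasted in. The Python sketch below is a minimal illustration under stated assumptions: it walks files under a hypothetical exports/ directory and flags strings matching a few widely documented secret patterns; the patterns and the folder name are assumptions, not part of any official export format.

import re
from pathlib import Path

# A few widely documented secret patterns; extend to suit your environment.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic bearer token": re.compile(r"\bBearer\s+[A-Za-z0-9\-_\.=]{20,}\b"),
}

def scan_file(path: Path) -> list[tuple[str, str]]:
    """Return (pattern name, matched text) pairs found in one export file."""
    text = path.read_text(encoding="utf-8", errors="ignore")
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    # "exports/" is a hypothetical folder holding a chat-history export.
    for file in Path("exports").rglob("*"):
        if file.is_file():
            for name, snippet in scan_file(file):
                print(f"{file}: {name}: {snippet[:40]}")

The point of the sketch is that the same search an attacker would run against stolen chat logs can be run defensively before the logs ever leave the device.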

The abuse of valid ChatGPT account credentials, made easy by the ready availability of stealer-malware logs, has become a prominent access technique for threat actors. This poses significant challenges for identity and access management, as enterprise credential data can be stolen from compromised devices through credential reuse, browser credential stores, or enterprise accounts accessed directly from personal devices.
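
As a rough illustration of the credential-reuse angle (not taken from Group-IB’s report), a security team could screen candidate passwords against a public breach corpus before they are allowed on enterprise accounts. The sketch below uses the documented Have I Been Pwned Pwned Passwords range API, which only ever receives the first five characters of a password’s SHA-1 hash; the example password is a placeholder.

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus,
    using the k-anonymity range API (only the 5-char hash prefix is sent)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<HASH-SUFFIX>:<COUNT>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")  # placeholder password
    print("found in breaches" if hits else "not found", hits)

A check like this does not stop a stealer from lifting a live session token, but it does blunt the simplest form of reuse: a password that already circulates in stealer logs never becomes a corporate credential in the first place.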

Milena Dimitrova

