Compromised ChatGPT accounts made available on dark web

Over 225,000 compromised ChatGPT accounts were found up for sale on dark web marketplaces between January and October last year, selling for as little as US$1 per account.

The finding follows last summer's discovery of more than 100,000 stolen accounts being traded, which allowed buyers to use OpenAI services for free.

Candid Wüest, VP of product management at cybersecurity specialist Acronis, says the accounts might also provide access to users' query histories, which may reveal personal and sensitive information.

“Those cases highlight the potential attack surface that current generative AI models can provide to attackers,” Wüest told Capacity.

“Most discussions focus on the attack generation side, where attackers use GenAI to create phishing emails, build malware like ransomware, conduct vulnerability search and automate their attack process to be more efficient and scalable.”

Wüest adds that attackers also go after the AI models and their implementations directly.

For example, he says, attackers have started bombarding exposed AI chatbots with automated requests in order to exhaust an implementation's resources and tokens, running up costs for the organisations on the receiving end.
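One common mitigation for this kind of resource-exhaustion abuse is to cap how many tokens any single client can consume before a request ever reaches the paid model. The sketch below is illustrative only: the `TokenBudget` class and the hourly limit are assumptions for this example, not drawn from Acronis or OpenAI tooling.

```python
import time
from collections import defaultdict

# Hypothetical per-client token budget in front of an exposed chatbot endpoint.
# The class name and limit are illustrative, not from any vendor API.
MAX_TOKENS_PER_HOUR = 50_000
WINDOW_SECONDS = 3600


class TokenBudget:
    """Tracks token usage per client and rejects requests once the hourly budget is spent."""

    def __init__(self) -> None:
        self.usage = defaultdict(list)  # client_id -> [(timestamp, tokens), ...]

    def allow(self, client_id: str, requested_tokens: int) -> bool:
        now = time.time()
        # Drop usage records that fall outside the rolling window.
        self.usage[client_id] = [
            (ts, tok) for ts, tok in self.usage[client_id] if now - ts < WINDOW_SECONDS
        ]
        spent = sum(tok for _, tok in self.usage[client_id])
        if spent + requested_tokens > MAX_TOKENS_PER_HOUR:
            return False  # budget exhausted: refuse before a paid LLM call is made
        self.usage[client_id].append((now, requested_tokens))
        return True


budget = TokenBudget()
if budget.allow("client-123", requested_tokens=800):
    pass  # forward the request to the LLM backend
else:
    pass  # return HTTP 429 instead of burning paid tokens
```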

“In addition to the previously mentioned stolen passwords, we also see attackers reverse engineering applications or going through source code repositories to find API keys that can be abused,” he says.
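The same search that attackers run can be run defensively before code is pushed: a simple pattern scan over a repository will surface likely keys, such as OpenAI secrets beginning with "sk-". The sketch below is a minimal illustration; the `KEY_PATTERNS` table and `scan_repo` helper are assumptions for this example, and real projects would normally rely on a dedicated secret scanner.

```python
import re
import sys
from pathlib import Path

# Illustrative scanner for credentials accidentally committed to a source tree.
# The patterns cover common public key formats; both the dict and the helper
# are assumptions for this sketch, not part of any standard tool.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Walk a checkout and report (file, pattern name) pairs for likely leaked keys."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings


if __name__ == "__main__":
    for file, kind in scan_repo(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"possible {kind} key in {file}")
```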

“More advanced attacks like adding deliberate bias to LLMs or poisoning the data sets are not yet that common with cyber criminals, but depending on the use case they may grow in the future.”
