
Sysdig: LLMjacking Victims Can Lose Money, and Fast


Security and monitoring vendor Sysdig earlier this year outlined a new cyber threat in the emerging generative AI space: attackers were using stolen cloud credentials to gain unauthorized access to cloud-hosted large language models (LLMs) and then selling that access to other hackers, leaving the cloud account owner with the bill.

The new scam, dubbed LLMjacking, opened up yet another avenue for cybercriminals targeting LLMs, which are foundational to generative AI workloads. Researchers with Sysdig’s Threat Research Team, in a report in May, noted that there already were numerous attacks on LLMs, from prompt injection to tampering with training data, but selling access to the models was something new.

In a new report released this week, Sysdig’s researchers outline the impact of LLMjacking, part of a larger trend in cloud cyberthreats – cloud attacks this year jumped 154% from 2023, according to Check Point Software – in which attackers increasingly rely on automation, botnets, and open source tools to inflict steep financial costs on victims.

The rapid evolution of these attacks also highlights the need for MSSPs to be at the cutting edge of security practice and research, according to Crystal Morin, cybersecurity strategist at Sysdig.

The vendor’s 2024 Global Threat Year-in-Review report “details real, novel threats and techniques that MSSPs must be able to detect, defend against, and respond to,” Morin told MSSP Alert. “Like an internal security team, an MSSP should be willing to sit down with clients and baseline their environments’ and users’ normal activity. From here, they can identify anomalous activity such as LLMjacking, which may only be obvious through abnormal logins or resource consumption spikes.”
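As a rough illustration of what that baselining might look like, the sketch below counts per-user model-invocation events from exported CloudTrail-style audit logs and flags days that far exceed a user's own history. The log format, event name, field names, and thresholds here are assumptions for illustration, not Sysdig's or any MSSP's actual tooling.

```python
# Illustrative baselining sketch: count LLM invocation events per user per day
# and flag unusual spikes. Field names, the "InvokeModel" event name, and the
# 3-sigma threshold are assumptions, not a vendor implementation.
import json
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(log_lines):
    """Tally model-invocation events per user per day from JSON log lines."""
    daily_counts = defaultdict(lambda: defaultdict(int))
    for line in log_lines:
        event = json.loads(line)
        if event.get("eventName") == "InvokeModel":            # assumed event name
            user = event.get("userIdentity", {}).get("arn", "unknown")
            day = event.get("eventTime", "")[:10]               # YYYY-MM-DD
            daily_counts[user][day] += 1
    return daily_counts

def flag_anomalies(daily_counts, sigma=3.0):
    """Flag days where a user's call count far exceeds their own baseline."""
    alerts = []
    for user, days in daily_counts.items():
        counts = list(days.values())
        if len(counts) < 7:                                     # need some history first
            continue
        mu, sd = mean(counts), pstdev(counts) or 1.0
        for day, count in days.items():
            if count > mu + sigma * sd:
                alerts.append((user, day, count, round(mu, 1)))
    return alerts
```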

The High Costs of an Attack

In May, the researchers detailed how hackers were able to use stolen cloud credentials in LLMjacking attacks against a range of AI models from the likes of OpenAI, Microsoft Azure, Amazon Web Services (AWS) Bedrock, Anthropic, and Google Cloud Platform’s Vertex AI. At the time, they noted an attempt to access a Claude model from Anthropic, an attack that, had it gone undiscovered, could have cost the victim more than $46,000 per day in LLM consumption.

In the new report, they dug deeper into the financial impact, noting that the cost of an LLMjacking incident could be as high as $100,000 per day and that one attack left the victim owing $30,000 after only three hours.

“LLMjacking is similar to proxyjacking and freejacking, where the attackers look to gain access to resources that are often otherwise costly,” the researchers wrote, adding that it differs in an important way from something like cryptojacking – where a system’s compute resources are stolen to mine cryptocurrencies.

Difficult to Detect

“With cryptomining, an increase in CPU resource consumption is easy to identify based on specific behaviors and will trigger an immediate alert,” they wrote. “LLM usage, however, cannot be detected this way since there is only one behavior — a call to the LLM. LLM resource consumption will vary greatly across individual users and, therefore, it is difficult to differentiate between legitimate and malicious use.”

A single LLM user can make 500 to 1,000 calls or more a day. However, in one instance in July, Sysdig researchers saw a burst of 80,000 calls in three hours, generating a bill of $24,000 to $30,000 for the victim. It appeared that access to the AI model had been shared or sold and that automation had been used to generate so many calls.
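For a sense of scale, those reported figures imply a cost of roughly 30 to 38 cents per call. The back-of-the-envelope sketch below is illustrative only; real per-call costs vary widely with the model, prompt length, and output length, but it shows how quickly a sustained burst at that rate overshoots even the $100,000-per-day figure cited above.

```python
# Back-of-the-envelope projection from the incident figures in the report.
# The per-call cost is inferred from those figures and is not a published rate.
calls = 80_000
bill_low, bill_high = 24_000, 30_000            # USD, reported range
per_call_low = bill_low / calls                 # ~$0.30 per call
per_call_high = bill_high / calls               # ~$0.375 per call

def daily_cost(calls_per_hour, cost_per_call):
    """Project a 24-hour bill if the burst rate were sustained."""
    return calls_per_hour * 24 * cost_per_call

burst_rate = calls / 3                          # ~26,667 calls per hour
print(f"Implied cost per call: ${per_call_low:.2f}-${per_call_high:.3f}")
print(f"Projected daily bill at burst rate: "
      f"${daily_cost(burst_rate, per_call_low):,.0f}+")
```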

The motivation behind LLMjacking often is financial, though not always, Sysdig researchers said. Some AI models from vendors such as OpenAI, Anthropic, and AWS are sanctioned and unavailable in countries like Russia, and website access is restricted in China, North Korea, and other countries. People in those countries might use malicious means to access such resources, they wrote.

MSSPs Need to Build AI Skills

The rapidly growing enterprise use of generative AI tools and the ongoing targeting of AI resources by threat actors are putting pressure on MSSPs to have the skills and knowledge to push back against these threats, Sysdig’s Morin said.

“An MSSP needs to understand the environments, applications, and processes their clients use,” she said. “Generative AI – or LLMs – is one example of this. If you know your client is using a particular model, be sure to brush up on the technology and stay up to date on its relevant threat intelligence, such as LLMjacking.”

That preparation also means putting AI and machine learning to work in their own defenses.

“Cloud attacks move from initial system access to data exfiltration within 10 minutes, so MSSPs must detect threats and alert clients in seconds,” Morin said. “They should also consider automating some response actions for high-confidence detections through the use of ML and AI tools and scripts. These high-speed efforts will reduce or remove the impact of large-scale cloud threats.”
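One way to picture that kind of automated response: when a detection is high-confidence, deactivate the suspected-stolen cloud credential so it can no longer invoke hosted models. The sketch below assumes an AWS environment and uses boto3's IAM client; the user name and key ID are placeholders, and a real playbook would add approval gates, audit logging, and client notification rather than acting unconditionally.

```python
# Illustrative containment action for a high-confidence LLMjacking detection:
# deactivate the suspected-stolen AWS access key. Values below are placeholders.
import boto3

def deactivate_access_key(user_name: str, access_key_id: str) -> None:
    iam = boto3.client("iam")
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",        # the key remains on the account but stops working
    )
    print(f"Deactivated {access_key_id} for {user_name}")

# Example usage (placeholder values):
# deactivate_access_key("compromised-service-user", "AKIAEXAMPLEKEYID1234")
```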
