
Hugging Face dodged a cyber-bullet with Lasso Security’s help

Further validating how brittle the security of generative AI models and their platforms is, Lasso Security helped Hugging Face dodge a potentially devastating attack by discovering that 1,681 API tokens were at risk of being compromised. Lasso researchers found the tokens while scanning GitHub and Hugging Face repositories and performing in-depth research across each.

Researchers successfully accessed 723 organizations’ accounts, including those of Meta, Hugging Face, Microsoft, Google, VMware, and many more. Of those accounts, 655 users’ tokens had write permissions, and 77 of those granted full control over the repositories of several prominent companies. Researchers also gained full access to the Bloom, Llama 2, and Pythia repositories, showing how potentially millions of users were exposed to supply chain attacks.

“Notably, our investigation led to the revelation of a significant breach in the supply chain infrastructure, exposing high-profile accounts of Meta,” Lasso’s researchers wrote in response to VentureBeat’s questions. “The gravity of the situation cannot be overstated. With control over an organization boasting millions of downloads, we now possess the capability to manipulate existing models, potentially turning them into malicious entities. This implies a dire threat, as the injection of corrupted models could affect millions of users who rely on these foundational models for their applications,” the Lasso research team continued.

Hugging Face is a high-profile target 

Hugging Face has become indispensable to organizations developing LLMs, with over 50,000 of them relying on the platform today as part of their devops efforts, making it the go-to platform for teams building LLMs and pursuing generative AI devops programs.


Serving as the definitive resource and repository for large language model (LLM) developers, devops teams, and practitioners, the Hugging Face Transformers library hosts over 500,000 AI models and 250,000 datasets.

Another reason Hugging Face is growing so quickly is that its Transformers library is open source. Devops teams tell VentureBeat that the collaboration and knowledge sharing an open-source platform provides accelerates LLM development, increasing the likelihood that models make it into production.

Attackers looking to capitalize on LLM and generative AI supply chain vulnerabilities, poison training data, or exfiltrate models and model training data see Hugging Face as the perfect target. A supply chain attack on Hugging Face would be as difficult to identify and eradicate as Log4j has proven to be.

Lasso Security trusts its intuition

With Hugging Face gaining momentum as one of the leading LLM development platforms and libraries, Lasso’s researchers wanted deeper insight into its registry and how it handles API token security. In November 2023, the researchers investigated Hugging Face’s security methods, exploring different ways to find exposed API tokens. They understood that exposed tokens could lead to the exploitation of three emerging risks from the new OWASP Top 10 for Large Language Models (LLMs):

Supply chain vulnerabilities. Lasso found that LLM application lifecycles can easily be compromised by vulnerable components or services, leading to security attacks. The researchers also found that using third-party datasets, pre-trained models, and plugins adds further vulnerabilities.

Training data poisoning. Researchers discovered that attackers could use compromised API tokens to poison LLM training data, introducing vulnerabilities or biases that compromise a model’s security, effectiveness, or ethical behavior.

The very real threat of model theft. According to Lasso’s research team, compromised API tokens can quickly be used to gain unauthorized access to, copy, or exfiltrate proprietary LLM models. A startup CEO whose business model relies entirely on an AWS-hosted platform told VentureBeat it costs an average of $65,000 to $75,000 a month in compute charges to train models on their AWS ECS instances.

Lasso researchers report they had the opportunity to “steal” over 10,000 private models associated with more than 2,500 datasets. Model theft has its own entry in the new OWASP Top 10 for LLMs, but Lasso’s researchers contend that, based on their Hugging Face experiment, the title needs to change from “Model Theft” to “AI Resource Theft (Models & Datasets).”
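To make that exposure concrete, the sketch below shows how a defender might audit the blast radius of a potentially leaked token using the official huggingface_hub Python client. The token value is a placeholder and the audit workflow is an illustrative assumption, not Lasso Security’s published tooling.

```python
# Minimal sketch: auditing what a potentially leaked Hugging Face API
# token can reach, using the official huggingface_hub client.
# The token below is a placeholder; this is illustrative only, not
# Lasso Security's actual methodology.
from huggingface_hub import HfApi

LEAKED_TOKEN = "hf_xxx..."  # placeholder; never hard-code real tokens

api = HfApi(token=LEAKED_TOKEN)

# whoami() reveals which user and organizations the token belongs to.
identity = api.whoami()
print("Token belongs to:", identity.get("name"))
print("Orgs reachable:", [org["name"] for org in identity.get("orgs", [])])

# List the models this identity owns; with the token supplied above,
# private repos become visible too. A write-scoped token on an org
# account would also allow pushing modified (poisoned) weights.
for model in api.list_models(author=identity.get("name")):
    print(model.id, "(private)" if model.private else "")
```

A write-permission token found this way is exactly the scenario Lasso describes: the same calls that enumerate repositories would let an attacker upload altered model files.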

Takeaway: treat API tokens like identities

That Hugging Face risked a massive breach that could have gone undetected for months or years shows how intricate, and how nascent, the practices for protecting LLM and generative AI development platforms still are.

Bar Lanyado, a security researcher at Lasso Security, told VentureBeat during a recent interview that “we recommend that HuggingFace constantly scan for publicly exposed API tokens and revoke them, or notify users and organizations about the exposed tokens.” 

Lanyado continued, advising that “a similar method has been implemented by GitHub, which revokes an OAuth token, GitHub App token, or personal access token when it is pushed to a public repository or public gist. To fellow developers, we also advise avoiding hard-coded tokens and following best practices. Doing so will help you avoid having to verify on every commit that no tokens or sensitive information is pushed to the repositories.”
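As a rough illustration of Lanyado’s advice, the following sketch scans a working tree for strings that look like Hugging Face user access tokens before a commit lands. The regex is an approximation of the “hf_” token format, and the script is a hypothetical example, not GitHub’s or Hugging Face’s actual scanner.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit check: flag anything that looks like a
# Hugging Face user access token before it reaches a public repo.
# The regex approximates the "hf_..." format; it is not an official
# specification of Hugging Face token structure.
import re
import sys
from pathlib import Path

# Approximate pattern: "hf_" followed by a long alphanumeric run.
TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def scan(root: str = ".") -> int:
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if TOKEN_PATTERN.search(line):
                print(f"possible token: {path}:{lineno}")
                hits += 1
    return hits

if __name__ == "__main__":
    # A non-zero exit blocks the commit when wired into a pre-commit hook.
    sys.exit(1 if scan() else 0)
```

Wired into a pre-commit hook or CI job, a check like this catches hard-coded tokens before they ever reach a public repository, which is cheaper than revoking them after exposure.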

Think zero trust in an API token world

Managing API tokens more effectively starts with how Hugging Face creates them: ensuring each token is unique and authenticated at identity creation. Requiring multi-factor authentication is a given.

Ongoing authentication to enforce least privilege access, along with continual validation that each identity uses only the resources it is entitled to, is also essential. Focusing more on the lifecycle management of each token and automating identity management at scale will also help. All of these practices are core to Hugging Face going all in on a zero-trust vision for its API tokens.
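One small, concrete step toward that vision is keeping tokens out of source code entirely. The following is a minimal sketch, assuming the huggingface_hub client and an HF_TOKEN environment variable, that reads the token at runtime so nothing token-shaped ever lands in a repository.

```python
# Minimal sketch: load a Hugging Face token from the environment
# instead of hard-coding it. huggingface_hub also picks up HF_TOKEN
# on its own; the explicit check here just fails fast with a clear
# message when the token is missing.
import os
import sys

from huggingface_hub import HfApi

token = os.environ.get("HF_TOKEN")
if token is None:
    sys.exit("HF_TOKEN is not set; export it via your shell or CI secret store")

api = HfApi(token=token)
print("Authenticated as:", api.whoami()["name"])
```

Pairing this pattern with short-lived, narrowly scoped tokens keeps any single leak’s blast radius small, which is the practical core of the zero-trust approach described above.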

Greater vigilance isn’t enough in a zero-trust world  

As Lasso Security’s research team shows, greater vigilance alone won’t secure thousands of API tokens, which are the keys to the LLM kingdoms many of the world’s most advanced technology companies are building today.

Hugging Face dodging a cyber incident bullet shows why posture management and a continual doubling down on least privileged access, down to the API token level, are needed. Attackers know there is a gaping disconnect between identities, endpoints, and any form of authentication, including tokens.

The research Lasso released today shows why every organization must verify every commit in GitHub to ensure no tokens or sensitive information is pushed to repositories, and must implement security solutions specifically designed to safeguard transformative models. It all comes down to adopting an already-breached mindset and putting stronger guardrails in place to strengthen devops and the entire organization’s security posture across every potential threat surface and attack vector.
