The open source Llama 2 large language model (LLM) developed by Meta is getting a major enterprise adoption boost, thanks to Dell Technologies.
Dell today announced that it is adding support for Llama 2 models to its lineup of Dell Validated Design for Generative AI hardware, as well as its generative AI solutions for on-premises deployments.
Bringing Llama 2 to the enterprise
Llama 2 was originally released by Meta in July and the models have been supported by multiple cloud providers including Microsoft Azure, Amazon Web Services and Google Cloud.
The Dell partnership is different in that it is bringing the open source LLM to on-premises deployments.
Not only is Dell now supporting Llama 2 for its enterprise users, it is also using Llama 2 for its own internal use cases.
For Meta, the Dell partnership provides more opportunities to learn how enterprises are using Llama, insight that will help the company expand the capabilities of the full Llama stack over time.
For Matt Baker, senior vice president of AI strategy at Dell, adding support for Llama 2 will help his company achieve its vision of bringing AI to enterprise data.
“The vast majority of data lives on premises and we now have this open access model to bring on premises to your data,” Baker told VentureBeat. “With the level of sophistication that the Llama 2 family has all the way up to 70 billion parameters, you can now run that on premises right next to your data and really build some fantastic applications.”
Dell isn’t just supporting Llama 2, it’s using it too
Dell was already providing support for the Nvidia NeMo framework to help organizations build out generative AI applications.
The addition of Llama 2 provides another option for organizations to choose from. Dell will provide guidance to its enterprise customers on the hardware needed to deploy Llama 2, as well as help them build applications that benefit from the open source LLM.
Going a step further, Baker noted that Dell is using Llama 2 for its own internal purposes, in both experimental and production deployments. One of the primary use cases today is supporting retrieval augmented generation (RAG) over Dell's own knowledge base of articles, where Llama 2 provides a chatbot-style interface that makes it easier to get to that information.
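A RAG pipeline like the one described retrieves relevant knowledge-base articles and feeds them to the LLM as grounding context. The sketch below is a minimal illustration of that flow, not Dell's actual pipeline: the sample articles, the word-overlap retriever (a stand-in for a real embedding-based search), and the prompt format are all assumptions, and in a real deployment the assembled prompt would be passed to Llama 2 to generate the answer.

```python
# Minimal sketch of a retrieval augmented generation (RAG) flow over a
# knowledge base of support articles. Articles, scoring, and prompt
# format are illustrative assumptions, not Dell's actual system.

ARTICLES = {
    "kb-101": "Reset the BIOS by holding the power button for 30 seconds.",
    "kb-202": "Update server firmware through the iDRAC web interface.",
    "kb-303": "Battery not charging: reseat the battery and check the adapter.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank articles by word overlap with the query -- a toy stand-in
    for an embedding-based retriever."""
    q_words = set(query.lower().split())
    scored = sorted(
        ARTICLES.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Assemble the retrieved articles into a grounded prompt; an LLM
    such as Llama 2 would consume this to produce the chat answer."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how do I update server firmware"))
```

The key design point is that the model answers from retrieved company documents rather than from its training data alone, which is what makes an open model viable as a front end to a private knowledge base.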
Dell will make money from its hardware and professional services for generative AI, but Baker noted that Dell is not directly monetizing Llama 2 itself, which is freely available as open source technology.
“We’re not monetizing Llama 2 in any way, frankly, it’s just what we believe is a really great capability that’s available to our customers and we want to simplify how our customers consume it,” Baker said.
Overall Llama 2 has been a stellar success with approximately 30 million downloads of the open source technology in the last 30 days, according to Joe Spisak, head of generative AI open source at Meta.
For Meta, Llama 2 isn’t just an LLM, it’s the centerpiece for an entire generative AI stack that also includes the open source PyTorch machine learning framework that Meta created and continues to help develop.
“We basically see here that we are really the center of the developer ecosystem for generative AI,” Spisak told VentureBeat.
Spisak commented that adoption of Llama 2 is coming from a variety of players in the AI ecosystem. He noted that cloud providers like Google Cloud, Amazon Web Services (AWS) and Microsoft Azure are using Llama as a platform for optimizing LLM benchmarks. Hardware vendors are also key partners, according to Spisak, with Meta working with companies like Qualcomm to bring Llama to new devices.
While Llama sees adoption in the cloud, Spisak emphasized the importance of partnerships that can run Llama on-premises, with Meta's partnership with Dell as a prime example. With an open LLM, Spisak said, an organization has options when it comes to deployment, which is important for data privacy considerations.
“Obviously, you can use public cloud of course, but the real value here is being able to run it in these environments where traditionally you don’t want to send data back to the cloud, or you want to run things very kind of locally, depending on the sensitivity of the private data,” Spisak said. “That’s where these open models really shine, and Llama 2 does hit that sweet spot as a really capable model and it can really run anywhere you want it to run.”
Working with Dell will also help the Llama development community better understand and build for enterprise requirements. Spisak said that the more widely Llama is deployed and the more use cases emerge, the better Llama developers can learn where the pitfalls are and how to deploy at scale.
“That’s really the value of working with folks like Dell, it really helps us as a platform and that will hopefully, help us build a better Llama 3 and Llama 4 and overall just a safer and more open ecosystem,” Spisak said.