
Stability AI goes ‘smol’ with StableLM Zephyr 3B

Stability AI is perhaps best known for its suite of Stable Diffusion text-to-image generative AI models, but that’s not all the company does anymore.

Today Stability AI released its latest model, StableLM Zephyr 3B, which is a 3 billion parameter large language model (LLM) for chat use cases, including text generation, summarization and content personalization. The new model is a smaller, optimized iteration of the StableLM text generation model that Stability AI first started talking about in April. 

The promise of StableLM Zephyr 3B is that it is smaller than the 7 billion parameter StableLM models, which brings a series of benefits. Being smaller enables deployment on a wider range of hardware with a lower resource footprint, while still providing rapid responses. The model has been optimized for Q&A and instruction-following tasks.
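For developers who want to try the model, a minimal chat-style inference sketch with the Hugging Face transformers library might look like the following; the model id, data type and generation settings are illustrative assumptions rather than details confirmed in Stability AI’s announcement.

```python
# Minimal sketch: chat-style inference with a small StableLM Zephyr model via transformers.
# The model id, dtype and sampling settings are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-zephyr-3b"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat prompt using the tokenizer's chat template, if the checkpoint ships one.
messages = [{"role": "user", "content": "Summarize why smaller LLMs are easier to deploy."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate a response; sampling parameters are illustrative, not recommended defaults.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```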

“StableLM was trained for longer on better quality data than prior models, for example with twice the number of tokens of LLaMA v2 7b which it matches on base performance despite being 40% of the size,” Emad Mostaque, CEO of Stability AI, told VentureBeat.


What StableLM Zephyr 3B is all about

StableLM Zephyr 3B is not an entirely new model; rather, Stability AI describes it as an extension of the pre-existing StableLM 3B-4e1t model.

StableLM Zephyr’s design approach, Stability AI said, is inspired by the Zephyr 7B model from HuggingFace. The HuggingFace Zephyr models are developed under the open-source MIT license and are designed to act as assistants. Zephyr uses a training approach known as Direct Preference Optimization (DPO), which StableLM now benefits from as well.

Mostaque explained that DPO is an alternative to the reinforcement learning used in prior models to tune them to human preferences. DPO has typically been used with larger 7 billion parameter models, and StableLM Zephyr is among the first to apply the technique at the smaller 3 billion parameter size.
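For context, here is a stripped-down sketch of what DPO fine-tuning can look like using the open-source TRL library; the base model id, toy preference data and hyperparameters are illustrative assumptions, not Stability AI’s actual training setup, and exact TRL keyword arguments vary by library version.

```python
# Illustrative sketch of Direct Preference Optimization (DPO) with Hugging Face TRL.
# DPO optimizes the model directly on (prompt, chosen, rejected) preference pairs,
# avoiding the separate reward model and RL loop used in RLHF-style tuning.
# Base model id, data and hyperparameters are assumptions for illustration only.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "stabilityai/stablelm-3b-4e1t"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token exists for batching

# Toy preference data; a real run would use tens of thousands of such triples.
train_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO tunes a model directly on preference pairs, with no separate reward model."],
    "rejected": ["DPO is a type of image generation model."],
})

config = DPOConfig(output_dir="stablelm-dpo-sketch", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer TRL versions take processing_class= instead
)
trainer.train()
```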

Stability AI used DPO with the UltraFeedback dataset from the OpenBMB research group, which contains more than 64,000 prompts and 256,000 responses. The combination of DPO, the smaller size and the optimized training dataset gives StableLM Zephyr 3B solid performance on the benchmarks Stability AI reported. On the MT Bench evaluation, for example, StableLM Zephyr 3B was able to outperform larger models including Meta’s Llama-2-70b-chat and Anthropic’s Claude-V1.

MT Bench comparison chart. Credit: Stability AI

A growing suite of models from Stability AI

StableLM Zephyr 3B joins a growing list of new model releases from Stability AI in recent months, as the generative AI startup continues to push its capabilities and tools further.

In August, Stability AI released StableCode, a generative AI model for application code development. That release was followed in September by the debut of Stable Audio, a new text-to-audio generation tool. Then in November, the company jumped into the video generation space with a preview of Stable Video Diffusion.

Though it has been busy expanding into new modalities, Stability AI has not forgotten its text-to-image generation foundation. Last week, the company released SDXL Turbo, a faster version of its flagship SDXL text-to-image Stable Diffusion model.

Mostaque is also making it quite clear that there is a lot more innovation yet to come from Stability AI.

“We believe that small, open, performant models tuned to users’ own data will outperform larger general models,” Mostaque said. “With the future full release of our new StableLM models, we look forward to democratizing generative language models further.”
