
Stability AI launches StableLM, an open-source ChatGPT rival

By Ankita

On Wednesday, Stability AI launched its new open-source AI language model, StableLM. With the release, Stability AI hopes to repeat the catalyzing effect of its open-source image synthesis model, Stable Diffusion, and make foundational AI technology available to all.

With further refinement, StableLM could be used to build an open-source alternative to OpenAI's chatbot, ChatGPT.

Key Points: 

  • Stability AI launched an open-source language model, StableLM, as an alternative to OpenAI’s ChatGPT.
  • StableLM can perform a variety of tasks, such as generating code and text, showing that small, efficient models can deliver high performance with appropriate training.
  • StableLM is currently available in alpha form on GitHub and Hugging Face.

StableLM: an Open-Source Alternative to ChatGPT

StableLM is an open-source model developed by Stability AI to perform tasks such as generating content and answering queries. With it, Stability AI has positioned itself as an open-source rival to OpenAI.

According to Stability’s blog post, StableLM was trained on a new experimental dataset built on The Pile.

The dataset is reportedly three times larger than The Pile, containing around 1.5 trillion tokens of content. Stability credits this richness for StableLM’s strong performance in coding and conversational tasks despite the models’ small size of 3 to 7 billion parameters (for comparison, GPT-3 has 175 billion).

Stability stated in its blog, “Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design.” Open-source models like StableLM demonstrate the company’s commitment to AI technology that is transparent, accessible, and supportive.

Like OpenAI’s latest large language model, GPT-4, StableLM generates text by predicting the next token in a sequence.

The sequence begins when a user provides a prompt or query; StableLM then predicts each following token based on it. In this way, StableLM can generate human-like text and write programs for users.
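Under the hood, that generation loop is just repeated next-token scoring. Here is a minimal sketch of a single prediction step, assuming the Hugging Face transformers library and Stability AI’s base alpha checkpoint (the model id "stabilityai/stablelm-base-alpha-3b" and the prompt text are our assumptions, not from the announcement):

```python
# Minimal sketch of next-token prediction with a StableLM alpha checkpoint.
# Assumptions: Hugging Face `transformers` is installed and the model id
# "stabilityai/stablelm-base-alpha-3b" refers to the published base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

prompt = "The future of open-source AI is"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The final position holds the model's score for every candidate next token.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {score:.2f}")
```

Sampling one of these candidates, appending it to the prompt, and repeating is what produces a full response.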

How to try StableLM right now

Currently, StableLM is available in alpha form on GitHub and in a Hugging Face demo named “StableLM-Tuned-Alpha-7b Chat.” The Hugging Face version works like ChatGPT, though it may respond more slowly than other chatbots. Models with 3 billion and 7 billion parameters are available now, with roughly 15-billion and 65-billion-parameter models to follow.
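For readers who would rather run the model locally than use the demo, a hedged sketch follows. It assumes the tuned checkpoint is published on Hugging Face as "stabilityai/stablelm-tuned-alpha-7b" and uses the <|USER|>/<|ASSISTANT|> role tokens described on its model card; neither detail comes from Stability’s announcement itself:

```python
# Hedged sketch: prompting the tuned alpha model locally via `transformers`.
# Assumptions: the checkpoint id "stabilityai/stablelm-tuned-alpha-7b" and the
# <|USER|>/<|ASSISTANT|> role tokens, both taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 halves memory use; device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "<|USER|>Write a short poem about open-source AI.<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 7B model needs a GPU with substantial memory at float16; the 3B variant is a lighter-weight option for the same code.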

Stability stated, “StableLM models can generate text and code and will power a range of downstream applications.” The release shows how small, efficient models can deliver high performance with proper training.

In an informal experiment with StableLM’s fine-tuned 7B model, built for dialogue using the Alpaca method, the model produced better outputs than Meta’s raw 7B-parameter LLaMA model, though still not at the level of OpenAI’s GPT-3.

The larger-parameter versions of StableLM, however, may prove more flexible and capable across a wider range of goals.
