Stable Diffusion 2 is a text-to-image model developed by Stability AI and released on November 24, 2022. The new version of Stable Diffusion introduced a set of features and capabilities that help generate high-quality images from text prompts or descriptions.
Stable Diffusion 2 introduced OpenCLIP as its new text encoder, along with the ability to handle longer and more complex prompts effortlessly.
In this article, we will explain what Stable Diffusion 2 is, what you can do with it, whether Stable Diffusion 1.5 or 2.1 is better, and more.
What is Stable Diffusion 2?
Stable Diffusion 2 is an image-generation model, an upgraded version of Stability AI’s Stable Diffusion 1. With this model, users can generate high-quality images by providing text prompts. Robin Rombach of Stability AI and Katherine Crowson of LAION led the effort to train Stable Diffusion 2.
The new model introduced a new text encoder, OpenCLIP, which was developed by LAION with support from Stability AI (the creator of Stable Diffusion 2).
When did Stable Diffusion 2.0 Release?
Stable Diffusion 2.0 was released on November 24, 2022, by Stability AI.
Stable Diffusion 2 Features
The second version of Stable Diffusion, “Stable Diffusion 2,” introduced several additional features and enhancements:
- Stable Diffusion 2 uses a new text encoder called “OpenCLIP,” developed by LAION, which improves image generation from natural-language descriptions. With OpenCLIP, the new model can generate images at 512×512 or 768×768 pixels; the encoder itself was trained on a subset of the LAION-5B dataset.
- The new version of Stable Diffusion can handle longer and more complex text prompts without losing fidelity or coherence, thanks to OpenCLIP’s attention mechanism and large vocabulary.
- Stable Diffusion 2 adds a “negative prompt” option that lets users specify things they don’t want to appear in the image.
- Stable Diffusion 2 can generate high-quality images with fine-grained control over style and content, thanks to its latent diffusion framework and the OpenCLIP text encoder.
- Thanks to the flexibility and modularity of the framework, it can build on existing pre-trained models and datasets without requiring data augmentation or fine-tuning.
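The negative-prompt feature described above can be illustrated with a small sketch using Hugging Face’s `diffusers` library (our choice of library for illustration; the same feature is also exposed in DreamStudio’s UI). The sketch assumes `diffusers`, `transformers`, and `torch` are installed and a CUDA GPU is available:

```python
def generate_with_negative(prompt: str, negative_prompt: str):
    """Sketch: steer Stable Diffusion 2 away from unwanted content
    with a negative prompt.

    Heavy imports stay inside the function so they only load when
    you actually generate.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # Download (or load from cache) the Stable Diffusion 2.1 weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")

    # The negative_prompt argument tells the model what to avoid,
    # e.g. "blurry, watermark, text, low quality".
    result = pipe(prompt, negative_prompt=negative_prompt)
    return result.images[0]
```

Calling `generate_with_negative("a portrait of an astronaut", "blurry, watermark")` would return a PIL image with the model steered away from blur and watermarks; the first call downloads several gigabytes of model weights.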
How To Access Stable Diffusion 2
You can access Stable Diffusion 2 easily via DreamStudio or Stable Diffusion Online. Below are the steps you can follow to access Stable Diffusion 2 and create beautiful, realistic images:
1. DreamStudio
You can access Stable Diffusion 2.0 using DreamStudio. Here are the steps you need to follow:
- Visit https://dreamstudio.ai/generate and click on the “Login” option
- Click on the “Sign Up” option and create an account on DreamStudio to access Stable Diffusion 2.0 by providing your email
- Once done, log in to your account
- Now scroll down on DreamStudio’s site until you find “Model” and choose Stable Diffusion version 2.0
- You can now enter your prompt in the text box and click “Dream,” and Stable Diffusion 2 will generate your image.
Note that DreamStudio provides a limited number of free credits for requesting image generations.
Once these free credits are used up, you must purchase more to continue using DreamStudio.
2. Stable Diffusion Online
Visit https://stablediffusionweb.com/ in your preferred browser and click “Get Started for Free.” Then scroll down to the Stable Diffusion Playground to use the free trial without signing up or logging in.
Now, you can enter your prompt or short description and Stable Diffusion will begin generating your image.
If you are not satisfied with the image, then you can request another image generation by simply clicking on “Generate Image” again with the same prompt.
You can also change the prompt or give more specific instructions to get the results you want.
What can you do with Stable Diffusion 2?
You can generate detailed images using short text descriptions or prompts with Stable Diffusion 2. This model of Stable Diffusion can help you generate high-quality images with control over the style and content thanks to its OpenCLIP text encoder.
With OpenCLIP, the new model can generate images at 512×512 or 768×768 pixels; the encoder itself was trained on a subset of the LAION-5B dataset.
Also, unlike with the previous model, users don’t need to worry about the length or complexity of their text prompts, since the new model handles complex and lengthy text effortlessly.
The best thing about Stable Diffusion 2 is that anyone can use this model easily through DreamStudio or Hugging Face.
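For developers, the model weights are also published on Hugging Face and can be used programmatically through the `diffusers` library. Here is a minimal sketch, assuming `diffusers`, `transformers`, and `torch` are installed and a CUDA GPU is available; `stabilityai/stable-diffusion-2-1` is the Hugging Face model ID for the 768×768 checkpoint:

```python
def generate_image(prompt: str, out_path: str = "output.png") -> str:
    """Generate an image with Stable Diffusion 2.1 via Hugging Face
    diffusers and save it to out_path.

    Imports are kept inside the function so the heavy dependencies
    load only when an image is actually generated.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # Download (or load from cache) the Stable Diffusion 2.1 weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # move the model to the GPU

    # Run the diffusion process; SD 2.1 natively supports 768x768 output.
    image = pipe(prompt, height=768, width=768).images[0]
    image.save(out_path)
    return out_path
```

A call like `generate_image("a watercolor painting of a lighthouse at dawn")` writes the result to `output.png`. Unlike DreamStudio, this route has no credit system, but you need your own GPU and the first run downloads several gigabytes of weights.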
Is Stable Diffusion 2 better?
Stable Diffusion 2 is an improvement over its predecessor, Stable Diffusion 1, as it offers more features and can generate higher-quality, more detailed images.
It uses a new text encoder called “OpenCLIP,” created by LAION, which helps improve the overall quality of generated images compared with Stable Diffusion 1.
Another factor that makes Stable Diffusion 2 better than its previous model is its capability of producing images in both 512×512 pixels and 768×768 pixels.
While entering prompts, Stable Diffusion 2 users can also use the “Negative Prompt” feature to specify what they don’t want the model to generate, a feature that was not available in the previous model.
Therefore, yes, we would say Stable Diffusion 2 is the better model in terms of features and capabilities.
Is Stable Diffusion 1.5 or 2.1 better?
Both Stable Diffusion 1.5 and 2.1 can generate high-quality images. However, the two versions have certain differences that set them apart and make each better suited to different goals.
Below we have mentioned a table comparing Stable Diffusion Version 1.5 and Stable Diffusion Version 2.1 that can help you determine which model is ideal for you.
| Stable Diffusion Version 1.5 | Stable Diffusion Version 2.1 |
| --- | --- |
| 512 × 512 pixels | 512 × 512 & 768 × 768 pixels |
| Negative and weighted prompts | Diversity and realism of images |
| Low for architecture, design, etc. | High for architecture, interior design, etc. |
| Non-standard resolution and aspect ratio | |
Both Stable Diffusion 1.5 and 2.1 can generate images at 512 × 512 resolution. However, Stable Diffusion 2.1 can also produce larger, more detailed images at 768 × 768 pixels, making it more capable of capturing details from the prompt.
Stable Diffusion Version 2.1 also has the added advantage of having negative prompts that allow users to specify things they want to exclude from their images.
Meanwhile, Stable Diffusion 1.5 doesn’t support negative prompts, so users can’t describe what they want excluded from the generated image.
Stable Diffusion version 1.5 is a suitable model for those users who want to generate people or pop culture images.
Meanwhile, Stable Diffusion Version 2.1 is ideal for users looking for a text-to-image generating program for creating architecture, interior, or other landscape scene images.
Version 2.1 is more capable of supporting a range of art styles and themes which makes it a better option for generating designs and architectural images.
Therefore, if you want to generate more realistic, detailed, and stable images, you should go for Stable Diffusion 2.1. However, if you want to generate images of popular personalities or styles, you should go for Stable Diffusion 1.5.