How to Animate Images with Stable Diffusion Video

Easy and fast image animation is one of the amazing powers of AI video programs, and Stable Diffusion video animation is one of the simplest (and cheapest) ways to make your images come to life!

In this article, I’ll explain how you can animate photos, drawings, AI-generated artwork, and more with Stable Diffusion. I’ll also provide plenty of examples to show you the power of these free tools.

The Many Faces of Stable Diffusion

Stable Diffusion is an open-source image generator based on a latent diffusion model (not a GAN, as it is sometimes mislabeled). Standard versions of Stable Diffusion are downloaded to your local computer and run offline; one example of this is the Fooocus web interface.

However, many people have published web interfaces that provide access to a limited set of Stable Diffusion’s features. Hosted Stable Diffusion XL demos are one example.

Another web-based system is Stability.ai’s Stable Video Diffusion, a web front end for Stable Diffusion’s video model.

It doesn’t have the slick front end that competitors like RunwayML or Pika have, but it’s powerful and free. Stability.ai is a great way to start exploring Stable Diffusion video generation.

How to Animate Images with Stable Video Diffusion

Once you have pulled up Stability.ai, you will see a very basic interface that prompts you to upload an image file to animate.

To generate your video, you need a source image to upload to Stability.ai. You can use a photo, a stock image, an AI-generated image, anything you would like.
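One practical note: Stable Video Diffusion checkpoints are trained at fixed resolutions (1024×576 for the landscape SVD-XT model), so results tend to be best when your source image already matches that shape. Here is a minimal Python sketch, using the Pillow library, that scales and center-crops an image to fit; the file names are just placeholders:

```python
from PIL import Image

def prepare_source_image(path, size=(1024, 576)):
    """Scale and center-crop an image to the resolution SVD expects."""
    img = Image.open(path).convert("RGB")
    # Scale so the image fully covers the target size.
    scale = max(size[0] / img.width, size[1] / img.height)
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    # Center-crop down to the exact target dimensions.
    left = (img.width - size[0]) // 2
    top = (img.height - size[1]) // 2
    return img.crop((left, top, left + size[0], top + size[1]))

# Hypothetical file names, for illustration only.
prepare_source_image("midjourney_render.png").save("svd_source.png")
```

Stability.ai’s sizing strategy option (covered below) can handle this for you, but cropping the image yourself gives you control over which part of it survives.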

Experiment 1

For our first example, I used an AI image generated in Midjourney as the video source, the same one that appears at the top of this article:

Stability.ai generated a 25-frame video clip from this image, which you can see here:

Very cool!

Experiment 2

I then tried using a simple AI-generated Midjourney image in square format. Here’s the image:

And here is the video Stable Diffusion generated, using 14 frames:

There is not a whole lot of movement here, but it nonetheless brings the image to life.

Experiment 3

I then tried a more editorial fashion photo, also generated in Midjourney. Here’s the original image I created:

And here is the Stable Video Diffusion generation, using 14 frames:

Obviously glitchy, but nonetheless impressive for a fairly significant amount of movement!

Experiment 4

Here is yet another image I generated in Midjourney and animated in Stable Video Diffusion, this time using 25 frames:

And here is the 25-frame video output:

Control Options on Stability.ai Video

The downside of using a non-local version of Stable Video Diffusion is that you don’t get as much control as you would by installing it on your local machine and tweaking it. But there are some control options available to you, including the following (the code sketch after this list shows how each one maps onto the underlying model):

Video Length

You can adjust the number of frames of video (14 or 25 in the experiments above), which impacts both the length of the video and the motion quality.

Sizing Strategy

You can change the aspect ratio of your video creation.

Frames Per Second

This will impact the length of the video and the smoothness of the playback.

Motion Bucket

This controls how much motion appears in the resulting video; higher values produce more movement.

Cond_Aug

Short for conditioning augmentation, this adjusts how much noise is added to the source image before generation, which affects how closely the video follows the original.

Seed

If you want to reproduce the characteristics of a previous image or video, reuse the same seed.
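To make these knobs concrete, here is a sketch of how the same controls look if you run Stable Video Diffusion locally through Hugging Face’s diffusers library. It assumes a CUDA GPU with enough VRAM, and the file names are placeholders:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("svd_source.png")  # 1024x576 source image

frames = pipe(
    image,
    num_frames=25,            # Video Length: 14 or 25 frames
    fps=7,                    # Frames Per Second: playback smoothness
    motion_bucket_id=127,     # Motion Bucket: higher = more movement
    noise_aug_strength=0.02,  # Cond_Aug: noise added to the source image
    decode_chunk_size=8,      # decode a few frames at a time to save VRAM
    generator=torch.Generator("cuda").manual_seed(42),  # Seed: fixes the output
).frames[0]

export_to_video(frames, "animated.mp4", fps=7)
```

Rerunning with the same seed and settings should reproduce the same clip, which makes it easy to iterate on one parameter at a time.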

Final Thoughts

Stable Video Diffusion through Stability.ai is a fun tool to play with, and it is free to use for a period of time. Eventually you have to start paying for video generation, though it is priced cheaply, and you only pay for the videos you generate (in other words, there is no recurring monthly fee).

Stability.ai is a great place to start when learning to generate AI video clips, but it is less flexible than paid services like RunwayML and Pika.

