How to Animate Images in RunwayML

RunwayML is one of the new AI tools making waves across the internet. While ChatGPT rules text generation and Midjourney shines for image generation, video hadn't seen a comparable leap until RunwayML came along.

In this article, we will look at how to animate images in RunwayML’s AI system.

Image to video is now super easy!

RunwayML 101

Before we explain how to animate images using RunwayML, you'll need to make sure you've got an account set up. You can create an account directly with RunwayML or use the Sign In With Google option.

Once you have signed into RunwayML, you will have access to (as of writing) 125 free credits per month — that’s 125 seconds of animation. If you fall in love with RunwayML and want to use it for more than that, you’ll need to pay for an account upgrade.

How to Animate Images with RunwayML

Here’s the RunwayML home screen, once you’ve got access to the main system:

1. Go to Gen-2

The first step to animate an image with RunwayML is to click the “Introducing Gen-2” image on the home screen, as shown above.

Yes, the tag says “generate video with text prompts,” but we’ll be using an image as the prompt instead of text.

2. Add your image to the prompt box

The Gen-2 Text to Video screen is simple, similar to the one shown above. Click the image icon on the left and choose an image.

For the first example, I added this image, generated in Midjourney.

3. Leave the Prompt Box Empty

Yes, just leave the prompt box empty and click the “Generate” button.

This uses the source image itself as the prompt, and RunwayML iterates on the image in whatever way makes sense to it. You can certainly add text to the prompt to try to direct the animation, but the results often work quite well with no text in the box at all.

This is the output:

Pretty cool! It’s not perfect, and you never quite know what you’re going to get, but the short clips it produces are often impressive.
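If you’d rather script this step than click through the web UI, Runway also offers a developer API. The sketch below is a rough outline only, assuming the official runwayml Python SDK and its image-to-video endpoint; the model id, the example image URL, and the polling details are all assumptions on my part, so check Runway’s current API docs before relying on it.

```python
# Rough sketch only: assumes the official `runwayml` Python SDK
# (pip install runwayml) and a RUNWAYML_API_SECRET environment variable.
# The model id, image URL, and status fields below are assumptions.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

# Submit an image-to-video task; the source image does the work,
# much like leaving the prompt box empty in the web UI.
task = client.image_to_video.create(
    model="gen3a_turbo",  # assumed model id; Gen-2 itself is web-only
    prompt_image="https://example.com/my-midjourney-image.png",  # hypothetical URL
)

# Poll until the task finishes, then print the result.
while True:
    result = client.tasks.retrieve(task.id)
    if result.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(result.status, getattr(result, "output", None))
```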

AI Image Animation with RunwayML – Other Examples

Here are a couple of other short videos I created using the same technique, with images generated in Midjourney. Of course, you could use any image, including a photo, as your source image.

A flower image I generated in Midjourney.

And here is the RunwayML animation of this image:

Here’s another Midjourney image I generated, used as a video source:

And here’s the RunwayML output using the same technique:

And here’s yet another Midjourney image used as a source image:

And here is the RunwayML output:

Stitched RunwayML Videos

The videos RunwayML generates are short, generally 3-4 seconds long, but you can create longer ones by stitching multiple clips together. I created a stitched video using this image as the source:

I ran that image through RunwayML, and then took a screenshot of the final frame of the resulting video.
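If you’d rather not rely on a screenshot, you can grab that final frame programmatically. Here’s a minimal sketch using OpenCV (just one way to do it, not something RunwayML provides); the file names are placeholders.

```python
# Minimal sketch: pull the last frame out of a downloaded RunwayML clip
# with OpenCV, so it can be uploaded as the source image for the next clip.
# File names are placeholders.
import cv2

def save_last_frame(video_path: str, image_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    last = None
    # Clips are only a few seconds long, so simply read to the end
    # and keep the final frame that decodes successfully.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        last = frame
    cap.release()
    if last is None:
        raise RuntimeError(f"No frames could be read from {video_path}")
    cv2.imwrite(image_path, last)

save_last_frame("runway_clip_1.mp4", "next_source_image.png")
```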

Then I ran that frame through RunwayML to create the next section of video, and so on.
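Once you have a few clips, the stitching itself can be done with any video editor or locally in a few lines of code. Here’s a minimal sketch using MoviePy (one option among many; the file names are placeholders):

```python
# Minimal sketch: join downloaded RunwayML clips into one longer video
# with MoviePy. File names are placeholders. Note: with MoviePy 2.x the
# import is `from moviepy import ...` instead of `moviepy.editor`.
from moviepy.editor import VideoFileClip, concatenate_videoclips

clip_paths = ["runway_clip_1.mp4", "runway_clip_2.mp4", "runway_clip_3.mp4"]
clips = [VideoFileClip(path) for path in clip_paths]

# "compose" handles clips whose sizes don't match exactly.
stitched = concatenate_videoclips(clips, method="compose")
stitched.write_videofile("stitched_runway_video.mp4")

for clip in clips:
    clip.close()
```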

This is the result:

Final Thoughts

This animation technique is extremely fun to play with, and you may find yourself running out of RunwayML credits pretty quickly!

Good luck, and please share your own generations!
