Creating Consistent Characters with Scenario

It feels like every day, there’s a new AI system or tool that offers a different capability. It’s impossible to keep up! But in the area of image generation tools, the ability to generate consistent characters is a bit of a holy grail. It’s also pretty difficult to accomplish with existing tools like Midjourney and DALL-E.

Scenario has burst onto the scene with the ability to make consistent characters.

It’s not exactly an easy process, but it’s easier and more successful than trying to make consistent characters with Midjourney.

In this article, I’ll explain the process I went through to generate a consistent character using Scenario.

Creating Consistent Characters with Scenario — Meet Mira

The above image grid shows a LoRA model I built using Scenario.

I started with a set of actual photographs taken from a public domain photoshoot of a real person, which were fed into Scenario. I then applied a few Scenario public models to those photos to generate the style and character I liked, whom I have dubbed Mira.

From there, as I generated additional images, I created a new model based solely on the generated images.

How to Make Consistent Characters in Scenario – a Step-By-Step Guide

Here’s the process I followed. Special thanks to Scenario lead artist @araminta-k for the inspiration.

1. Choose your source model (you’ll need 5+ images)

For my source model, I went to the free photo website Unsplash. I scrolled through photos, looking for photographers with collections of photos of the same person.

I found the photographer Danielle La Rosa Messina, who had a collection of about 16 photos of the same woman.

I picked the best photos and downloaded them, ending up with about 7 that showed her face and body clearly.

For this you could use real photos (as I did), images of animals (as @araminta-k did in the above link), screenshots from a video game avatar, etc.

The key is that you need multiple images from different angles to use as source images in order to train the model.

2. Train Your Own Model in Scenario

Next, click Create from the left menu bar, and choose Train Your Own Model.

I kept all of the options at the default. Just be sure you use the SDXL LoRA model, which should be selected by default. SDXL is Stable Diffusion XL, and LoRA stands for Low-Rank Adaptation, a lightweight fine-tuning technique originally developed for large language models (LLMs) and now widely used with diffusion models.

From there, upload your training images to the new model, create a name, and Train it.

Training can take 10 minutes or more, depending on your account type and the number of images in the training set.
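For intuition, here’s a rough sketch of the LoRA idea mentioned above, in NumPy. This is a conceptual illustration, not Scenario’s actual training code; the layer size and rank are made-up numbers.

```python
import numpy as np

# Conceptual sketch of a LoRA update (not Scenario's internals):
# instead of fine-tuning a full weight matrix W (d x k), LoRA learns
# two small matrices B (d x r) and A (r x k) with rank r << min(d, k),
# and the adapted weights are W' = W + scale * (B @ A).
d, k, r = 512, 512, 8  # hypothetical layer size and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # frozen base weights
B = np.zeros((d, r))                    # B starts at zero, so W' == W at first
A = rng.standard_normal((r, k)) * 0.01  # A starts small

scale = 1.0
W_adapted = W + scale * (B @ A)

# Only d*r + r*k parameters are trained instead of d*k:
lora_params = d * r + r * k
full_params = d * k
print(lora_params, full_params)  # 8192 vs 262144
```

The small parameter count is why a LoRA can be trained on just a handful of images in minutes rather than hours.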

You’ll end up with something like this:

3. Start Composing By Blending Your Model With Scenario

Once your model has been generated, go back to the Create menu and this time choose Start Composing (in the right-hand menu).

Once you’ve started composing, add your model to the generator. To do this, choose the “Your Models” tab and select the model you’ve generated.

Add your model, and then choose the “Public Models” tab. You’ll see a wide variety of models to choose from to blend with your existing model.

Here is where the magic happens.

4. Add Public Models and Blend Them to Create Your Character

There are many public models to choose from, and you can blend them with your own model to create the character look you’re going for.

Scroll through the list and find a few that look intriguing. You can adjust the weights to create your own custom model and style that is particular to your character.

In my case, I chose the following models and weights:

  • My Generated Model: 0.55 weight
  • Public Model Stylized Fantasy Iconic Imagery: 0.55 weight
  • Public Model Enchanted Realism: 0.20 weight
  • Public Model Psychedelic Bubblegum Pop: 0.55 weight

This will take a bit of trial and error.

Pick some weights, add a basic prompt, and generate an image. If you like the image you get, make more. If not, adjust the weights and try again.

Interestingly, the resulting character Mira doesn’t look exactly like the woman in the source photos; she’s a new character built from those characteristics.
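One way to picture what those blend sliders do: each model contributes a weighted “push” on top of a shared base. This is only a mental model of Scenario’s blending, not its real implementation.

```python
import numpy as np

def blend_deltas(base, deltas, weights):
    """Apply a weighted sum of LoRA-style deltas to frozen base weights
    (a rough mental model for blend sliders, not Scenario's actual math)."""
    blended = base.copy()
    for delta, w in zip(deltas, weights):
        blended += w * delta
    return blended

rng = np.random.default_rng(1)
base = rng.standard_normal((4, 4))                     # stand-in for SDXL weights
deltas = [rng.standard_normal((4, 4)) for _ in range(4)]  # one per blended model

# The weights I used for Mira: my model plus three public styles.
weights = [0.55, 0.55, 0.20, 0.55]
out = blend_deltas(base, deltas, weights)
```

Nudging one weight up or down shifts how strongly that model’s style shows up, which is exactly the trial-and-error loop described above.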

5. Once You Have a Good Image, Use it as a Reference Image

After you’ve got an image you’re happy with, you can use it as a reference to generate more similar images. Scroll down to the “Reference Image” option, and add your image to the model.

Change the mode to “Reference Only” and keep your Influence in the 25-50 range, depending on what you’re looking for.
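A rough intuition for that Influence number (my assumption: it behaves like a 0–100 strength dial, similar to img2img strength in Stable Diffusion tools; Scenario’s exact math may differ):

```python
def mix_toward_reference(generated, reference, influence):
    """Linearly pull a value toward the reference by `influence` percent.
    Illustration only -- an assumed model of the Influence dial, not
    Scenario's documented behavior."""
    t = influence / 100.0
    return (1 - t) * generated + t * reference

# At 25-50, the reference guides the result without overpowering the prompt.
print(mix_toward_reference(0.0, 1.0, 25))  # 0.25
print(mix_toward_reference(0.0, 1.0, 50))  # 0.5
```

In practice: lower Influence gives the prompt more freedom; higher Influence keeps the new image closer to the reference.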

Add some new prompts, and generate a new version of your character in a new scene.

6. Train a New Model With Your Generated Images

Once you’ve got 5+ images generated, you can use them to train a new model that is entirely based on your generated images.

You no longer need the blend of custom and public models you were working with; you can base your new model entirely on the images of the same character you’ve generated.

To do this, simply follow the same instructions as above, but instead of uploading the Unsplash photographs, train the new model on the generated images you’ve created in Scenario.

Fun Additional Step: Animate with Pika.art

As a fun additional step, I’ve animated the images in Pika.art to create a short film based on my character Mira. You can see the short video below. The audio soundtrack was generated with suno.ai.

Conclusion

Scenario has a fairly generous free plan, so go ahead and give it a try! Creating consistent characters that can be used in graphic novels, animations, video games, etc. is fun and powerful.

Good luck!
