Morphing animation using Deforum

What is Deforum Stable Diffusion?

Deforum Stable Diffusion is an AI tool that generates in-between images from a start image and an end image. You can also describe the start and end scenes with text prompts instead. The generated image sequence can then be used to make interesting morphing animations.
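As a conceptual sketch only (Deforum's real pipeline involves diffusion models and prompt conditioning, not simple pixel blending), the idea of "generating in-between frames from a start and an end" can be illustrated with plain interpolation:

```python
# Conceptual sketch: morphing as interpolation between a start and an
# end representation. This is NOT Deforum's actual algorithm, just an
# illustration of producing in-between frames.
import numpy as np

def in_between_frames(start, end, n_frames):
    """Linearly interpolate n_frames images between start and end."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * start + t * end for t in ts]

start_img = np.zeros((4, 4))   # stand-in for the start image
end_img = np.ones((4, 4))      # stand-in for the end image
frames = in_between_frames(start_img, end_img, 5)
# frames[0] equals the start image, frames[-1] equals the end image,
# and frames[2] is the halfway blend.
```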

How to create a morphing animation using Deforum Stable Diffusion from GitHub

1. First download the code from the deforum-stable-diffusion GitHub repository. Open a Command Prompt, change to the directory where you want to install, and run:
>git clone https://github.com/deforum-art/deforum-stable-diffusion.git
2. Download the Protogen v2.2 checkpoint and place it in the “deforum-stable-diffusion\models” directory.
3. Now you need to configure a virtual environment to run the code. If you haven’t installed Anaconda3 yet, install it first.
4. Set up a conda environment for Deforum following the repository’s instructions. If the instructions confuse you, you can download a ready-to-use deforum_env from Gumroad (compatible with CUDA 11.8). Unzip it and place it under the “envs” directory of your Anaconda3 installation.
5. Open “Deforum_Stable_Diffusion.py” and edit it.
6. In line 116, change the Protogen_V2.2 checkpoint name to match the checkpoint file you downloaded.
7. In line 140, change animation_mode from ‘None’ to ‘2D’ if you want to make an animation instead of still images.
8. In line 141, change max_frames to the number of frames you want to render, for example 240.
9. In line 147, change translation_x to “0:(0)” to avoid wobbly camera movement.
10. In line 230, change the first prompt to describe the start scene you want to generate.
11. In line 231, change the second prompt to describe the target scene you want to generate.
12. All other settings can be left at their defaults. Save the file.
13. Open an Anaconda Prompt and run:
>conda activate deforum_env
14. Still in the Anaconda Prompt, change to the “deforum-stable-diffusion” directory and run:
>python Deforum_Stable_Diffusion.py
15. When it finishes, the image sequence is saved in the “outputs” directory.
16. Use After Effects or other video editing software to import the image sequence and render it as a video.
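The edits from steps 6 through 11 can be sketched as the following fragment. The setting names animation_mode, max_frames, and translation_x come from the steps above; the checkpoint variable name, the prompt frame numbers, and the prompt texts are assumptions for illustration, and exact line numbers may differ between versions of Deforum_Stable_Diffusion.py.

```python
# Sketch of the edits from steps 6-11 (names partly assumed; check
# your copy of Deforum_Stable_Diffusion.py for the exact variables).
model_checkpoint = "Protogen_V2.2.ckpt"  # step 6: match your downloaded file (assumed name)
animation_mode = "2D"                    # step 7: '2D' instead of 'None'
max_frames = 240                         # step 8: number of frames to render
translation_x = "0:(0)"                  # step 9: no horizontal camera movement
animation_prompts = {                    # steps 10-11 (frame keys/texts are examples)
    0: "a quiet forest at dawn, photorealistic",      # start scene
    120: "the same forest in deep winter, snowfall",  # target scene
}
```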


Face animation using AI