Meta's Animated Drawings project demonstrates a method to animate children's drawings and provides an open-source toolkit for users to flexibly create their own animations.
How the project works
Animated Drawings uses the following technologies:
Object detection model: identifies the characters in a drawing.
Pose estimation model: detects the positions of each character's joints, which drive motion generation.
Image segmentation: processes the artwork into a digital version ready for animation.

Core features
Switch characters
Swap in different character designs to open up more possibilities for an animation.
Apply various motions
Select from preset motions to bring characters to life, covering a variety of creative needs.
Change output format
Export animations in multiple formats, such as GIF files with transparent backgrounds, to suit different scenarios.
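The repository documents a single Python entry point that renders a scene described by a YAML config, and it ships an example config for GIF export. A minimal sketch, assuming the repository's example layout:

```python
# Minimal sketch using the toolkit's documented entry point; the config
# path assumes the repository's example layout.
from animated_drawings import render

# The example config puts the controller in video-render mode and points
# the output path at a .gif file, yielding a transparent-background GIF.
render.start('./examples/config/mvc/export_gif_example.yaml')
```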
Multi-character scenes
Place multiple characters in a single animation scene and have them interact with each other, enriching the animation.
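As a sketch of what a two-character scene might look like, built from Python: the key names (scene, ANIMATED_CHARACTERS, character_cfg, motion_cfg, retarget_cfg) follow the repository's example configs but should be checked against them; all file paths are illustrative, and a complete config would also carry controller and view sections.

```python
# Sketch of the scene portion of a two-character config. Key names follow
# the repository's example configs; file paths are illustrative.
import yaml

scene_cfg = {
    'scene': {
        'ANIMATED_CHARACTERS': [
            {'character_cfg': 'characters/char1/char_cfg.yaml',
             'motion_cfg': 'config/motion/dab.yaml',
             'retarget_cfg': 'config/retarget/fair1_ppf.yaml'},
            {'character_cfg': 'characters/char2/char_cfg.yaml',
             'motion_cfg': 'config/motion/jumping_jacks.yaml',
             'retarget_cfg': 'config/retarget/fair1_ppf.yaml'},
        ]
    }
}

with open('two_characters.yaml', 'w') as f:
    yaml.safe_dump(scene_cfg, f)
```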
Add a background image
Specify a background image path in the configuration file to give the animation a realistic setting.
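Continuing the sketch above, a background image could be added by extending the same config. The view-level BACKGROUND_IMAGE key is an assumption modeled on the repository's background example, and the path is illustrative:

```python
# Sketch: attach a background image to the scene config written above.
# The 'view' / 'BACKGROUND_IMAGE' keys are assumptions; path is illustrative.
import yaml

with open('two_characters.yaml') as f:
    cfg = yaml.safe_load(f)

cfg.setdefault('view', {})['BACKGROUND_IMAGE'] = 'backgrounds/meadow.png'

with open('scene_with_background.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)
```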
Method
1. Overview of the drawing-to-animation process
Pipeline steps:
Take as input a drawing containing one or more humanoid figures.
Detect the humanoid figures in the drawing and crop each one into its own region.
From each cropped image, generate a segmentation mask for the figure and detect its joint positions.
Build the character's skeleton rig from the segmentation mask and the joint positions.
Retarget motion capture data onto the character's rig to generate the animation (see the sketch after these steps).
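The data flow can be summarized in a few lines. Every helper below is a hypothetical placeholder for the corresponding model or stage, not the project's actual API:

```python
# Structural sketch of the pipeline above. Each helper is a hypothetical
# stand-in for the corresponding model/stage, not the project's API.
import numpy as np

def detect_characters(drawing): raise NotImplementedError   # object detection
def crop(drawing, box): raise NotImplementedError           # cut out one figure
def segment(figure): raise NotImplementedError              # segmentation mask
def estimate_joints(figure): raise NotImplementedError      # pose estimation
def build_rig(mask, joints): raise NotImplementedError      # skeleton rig
def retarget(rig, mocap): raise NotImplementedError         # motion retargeting

def animate(drawing: np.ndarray, mocap) -> list:
    animations = []
    for box in detect_characters(drawing):       # 1. find humanoid figures
        figure = crop(drawing, box)              # 2. crop each one out
        mask = segment(figure)                   # 3a. mask the figure
        joints = estimate_joints(figure)         # 3b. locate its joints
        rig = build_rig(mask, joints)            # 4. rig from mask + joints
        animations.append(retarget(rig, mocap))  # 5. drive rig with mocap
    return animations
```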
2. Image processing for generating segmentation masks
Processing steps:
Convert the image to grayscale (a).
Apply adaptive thresholding (b).
Perform a morphological closing operation (c), then dilation (d).
Flood fill (e), then retain the largest polygon (f).
The resulting segmentation mask closely fits the contour of the drawn figure (g).
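These steps map directly onto standard OpenCV operations. A minimal sketch, with illustrative parameter values (block size, kernel size, seed point) rather than the project's exact settings:

```python
# Sketch of the mask-extraction steps (a)-(g) using standard OpenCV calls.
# Parameter values are illustrative, not the project's exact settings.
import cv2
import numpy as np

def extract_character_mask(image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)           # (a)
    thresh = cv2.adaptiveThreshold(                              # (b)
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, blockSize=115, C=8)
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)   # (c)
    dilated = cv2.dilate(closed, kernel, iterations=1)           # (d)

    # (e) flood fill from a corner (assumed to be background), then
    # invert, so regions enclosed by strokes become foreground
    h, w = dilated.shape
    flood = dilated.copy()
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, mask, seedPoint=(0, 0), newVal=255)
    filled = dilated | cv2.bitwise_not(flood)

    # (f) keep only the largest polygon (outer contour by area)
    contours, _ = cv2.findContours(
        filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    out = np.zeros_like(filled)
    cv2.drawContours(out, [largest], -1, 255, thickness=cv2.FILLED)
    return out  # (g) mask fitting the figure's contour
```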
3. Rigging and motion retargeting for animated characters
Rigging:
Create a skeleton rig (b) for animation from the predicted joint keypoints (a).
Motion retargeting:
Extract the source pose from motion capture data. Project the upper-body joints onto the frontal plane and the lower-body joints onto the sagittal plane (c). Compute the global orientation of each bone, rotate the character's joints to match the source pose, and complete the motion retargeting (d).
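A toy sketch of the per-bone retargeting idea under this scheme: project a 3D motion-capture bone onto the chosen plane, measure its global orientation, and rotate the 2D character bone to match. The axis conventions and function names here are illustrative assumptions:

```python
# Toy per-bone retargeting sketch; axis conventions are assumptions
# (frontal plane keeps x/y, sagittal plane keeps z/y).
import numpy as np

def project(joint_3d: np.ndarray, plane: str) -> np.ndarray:
    return joint_3d[[0, 1]] if plane == 'frontal' else joint_3d[[2, 1]]

def bone_angle(parent: np.ndarray, child: np.ndarray) -> float:
    v = child - parent                 # bone vector in the plane
    return np.arctan2(v[1], v[0])      # global orientation, in radians

def retarget_bone(char_parent, char_child, mocap_parent, mocap_child, plane):
    # Rotate the character bone about its parent joint so its global
    # orientation matches the projected motion-capture bone.
    target = bone_angle(project(mocap_parent, plane),
                        project(mocap_child, plane))
    d = target - bone_angle(char_parent, char_child)
    rot = np.array([[np.cos(d), -np.sin(d)],
                    [np.sin(d),  np.cos(d)]])
    return char_parent + rot @ (char_child - char_parent)

# Example: drive a forearm bone from one motion-capture frame.
char_elbow, char_wrist = np.array([0.0, 0.0]), np.array([1.0, 0.0])
mocap_elbow = np.array([0.0, 0.0, 0.1])
mocap_wrist = np.array([0.5, 0.5, 0.1])
print(retarget_bone(char_elbow, char_wrist,
                    mocap_elbow, mocap_wrist, 'frontal'))
# -> wrist rotated to match the mocap bone's ~45 degree orientation
```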
Try it out
https://sketch.metademolab.com/canvas