DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters


Mingze Sun* 1   Junhao Chen* 1   Junting Dong† 2   Yurun Chen1   Xinyu Jiang1   Shiwei Mao1  
Puhua Jiang1   Jingbo Wang2   Bo Dai2   Ruqi Huang† 1  

1 Tsinghua Shenzhen International Graduate School, China      2 Shanghai AI Laboratory, China
* Indicates Equal Contribution      † Indicates Corresponding Author

🔥 DRiVE has been accepted by CVPR 2025!
See you in Tennessee, USA 🇺🇸

We propose DRiVE, a pipeline that generates 3D Gaussians from a single image, along with the corresponding skeleton (covering hair and clothing) and skinning weights, enabling precise control over the Gaussians to render high-quality, controllable, and 3D-consistent videos.

Anime Character Generation Results

Text Input Results

Explore how simple text descriptions transform into anime-style 3D characters:

Anime Image Input Results

Below are examples where an anime-style image input is transformed into 3D characters:

AnyPose Anime Input Results

Below are examples where AnyPose inputs are transformed into anime-style 3D characters:

Real Human Input Results

Below are examples where real-human inputs are transformed into realistic 3D characters:

Rigging Results

Below are the rigging results, showcasing various characters and their associated animation videos. The first row shows the inputs (image and text). The second row shows the 3D Gaussians (3DGS) our method generates for each character, the third row the generated skeleton, and the fourth row the generated skinning weights.
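Given a skeleton and per-point skinning weights like those shown above, a rigged character is typically deformed with linear blend skinning (LBS). The sketch below is illustrative only, assuming NumPy arrays for the Gaussian centers, weights, and joint transforms; it is not DRiVE's actual data layout or API.

```python
# Minimal LBS sketch: deform Gaussian centers by blending per-joint
# rigid transforms with skinning weights. All names are placeholders.
import numpy as np

def lbs_deform(points, weights, joint_transforms):
    """Deform N points by skinning-weighted joint transforms.

    points:           (N, 3) rest-pose positions (e.g. Gaussian centers)
    weights:          (N, J) skinning weights, each row summing to 1
    joint_transforms: (J, 4, 4) homogeneous per-joint transforms
    """
    n = points.shape[0]
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)       # (N, 4)
    # Apply every joint transform to every point: T[j] @ p[n]
    per_joint = np.einsum("jab,nb->nja", joint_transforms, homo)   # (N, J, 4)
    # Blend the per-joint results with the skinning weights
    blended = np.einsum("nj,nja->na", weights, per_joint)          # (N, 4)
    return blended[:, :3]

# Toy example: two points, two joints; joint 1 translates by +1 in x.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])   # point i fully bound to joint i
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
print(lbs_deform(pts, w, T))  # point 1 moves to [2, 0, 0]
```

In practice, animating 3D Gaussians also requires rotating each Gaussian's covariance (or orientation) by the blended joint rotation, not just moving its center; the sketch covers only the positional part.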

Animation Results

Below are the animation results, comparing character generation and animation between CharacterGen and our method. Each group contains two videos: the first shows the CharacterGen result, and the second shows the result of our method.



BibTeX

@misc{sun2024drivediffusionbasedriggingempowers,
      title={DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters}, 
      author={Mingze Sun and Junhao Chen and Junting Dong and Yurun Chen and Xinyu Jiang and Shiwei Mao and Puhua Jiang and Jingbo Wang and Bo Dai and Ruqi Huang},
      year={2024},
      eprint={2411.17423},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17423}, 
}