MotionGPT: Finetuned LLMs are General-Purpose Motion Generators

Yaqi Zhang1,2, Di Huang4, Bin Liu1,2, Shixiang Tang4, Yan Lu4, Lu Chen5, Lei Bai3, Qi Chu1,2, Nenghai Yu1,2, Wanli Ouyang3
1University of Science and Technology of China 2CAS Key Laboratory of Electromagnetic Space Information 3Shanghai AI Laboratory 4The University of Sydney 5Zhejiang University

MotionGPT supports diverse control conditions for human motion generation by finetuning LLMs.

Abstract

Generating realistic human motion from given action descriptions has seen significant advancements, driven by the emerging needs of digital humans. While recent works have achieved impressive results in generating motion directly from textual action descriptions, they often support only a single modality of control signal, which limits their application in the real digital human industry. This paper presents a Motion General-Purpose generaTor (MotionGPT) that can use multimodal control signals, e.g., text and single-frame poses, to generate consecutive human motions by treating multimodal signals as special input tokens in large language models (LLMs). Specifically, we first quantize multimodal control signals into discrete codes and then formulate them in a unified prompt instruction that asks the LLM to generate the motion answer. Our MotionGPT demonstrates a unified human motion generation model with multimodal control signals by tuning a mere 0.4% of LLM parameters. To the best of our knowledge, MotionGPT is the first method to generate human motion from multimodal control signals, and we hope it can shed light on this new direction.
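As a rough illustration of the quantization step described above, the sketch below (PyTorch, not the authors' released code) maps a pose vector to its nearest codebook entry and renders the resulting indices as special tokens. The codebook size, pose dimensionality, and token format are assumptions made for this example only.

    import torch

    def quantize_pose(pose: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
        """Map each pose vector to the index of its nearest codebook entry (VQ-VAE style)."""
        # pose: (T, D) frames; codebook: (K, D) learned code vectors
        distances = torch.cdist(pose, codebook)   # (T, K) pairwise L2 distances
        return distances.argmin(dim=-1)           # (T,) discrete code indices

    def codes_to_tokens(codes: torch.Tensor) -> str:
        """Render discrete codes as special token strings for the LLM prompt (assumed format)."""
        return " ".join(f"<motion_id_{int(i)}>" for i in codes)

    # Toy example: a single pose frame with 72 joint values and a codebook of 512 codes.
    codebook = torch.randn(512, 72)
    pose = torch.randn(1, 72)
    print(codes_to_tokens(quantize_pose(pose, codebook)))   # e.g. "<motion_id_237>"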

Teaser image.

Compared with previous methods, MotionGPT has the unique ability to accept multiple control conditions and solve various motion generation tasks using a unified model.

Pipeline

Pipeline image.

Our MotionGPT (Motion General-Purpose generaTor) accepts multiple control conditions and solves various motion generation tasks with a single unified model. Given text and poses as an example input, we organize the task description (Instruction) and the control conditions (Input) within a question template. MotionGPT fine-tunes an LLM with LoRA to generate the corresponding motion answer, which is then decoded into human motion by a VQ-VAE decoder, as sketched below.
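The following sketch shows, under stated assumptions, how the question template and LoRA adaptation could be wired together with the Hugging Face transformers and peft libraries. The base model name, prompt wording, and LoRA hyperparameters are illustrative choices, not the paper's exact configuration.

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    def build_prompt(instruction: str, text: str, pose_tokens: str = "") -> str:
        """Place the task description (Instruction) and control conditions (Input)
        into one question template, as in the pipeline above (wording is assumed)."""
        conditions = text if not pose_tokens else f"{text} Initial pose: {pose_tokens}"
        return (
            "Below is an instruction that describes a task, paired with an input.\n"
            f"### Instruction:\n{instruction}\n"
            f"### Input:\n{conditions}\n"
            "### Response:\n"
        )

    base_model = "huggyllama/llama-7b"                     # hypothetical choice of base LLM
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Wrap the frozen LLM with low-rank adapters; only the LoRA matrices are trained,
    # which keeps the trainable-parameter fraction very small.
    lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                             target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()

    prompt = build_prompt("Generate a motion matching the description.",
                          "a person walks forward and waves",
                          "<motion_id_237>")

The generated answer tokens would then be mapped back to motion through the VQ-VAE decoder, which is omitted here.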

Text-to-motion Generation

The generated motion is shown in orange.

(Text, initial pose)-to-motion Generation

The generated motion is shown in orange, and the initial pose is highlighted in blue (held frozen for 0.5 s).

(Text, last pose)-to-motion Generation

The generated motion is shown in orange, and the last pose is highlighted in blue (held frozen for 0.5 s).

(Text, key poses)-to-motion Generation

The generated motion is shown in orange, and the key poses are highlighted in blue (each held frozen for 0.5 s).

BibTeX


    @article{zhang2023motiongpt,
      title={MotionGPT: Finetuned LLMs are General-Purpose Motion Generators},
      author={Zhang, Yaqi and Huang, Di and Liu, Bin and Tang, Shixiang and Lu, Yan and Chen, Lu and Bai, Lei and Chu, Qi and Yu, Nenghai and Ouyang, Wanli},
      journal={arXiv preprint arXiv:2306.10900},
      year={2023}
    }