DragAnything

DragAnything: Motion Control for Anything using Entity Representation


🎶 Updates

  • Jul. 1, 2024. DragAnything has been accepted by ECCV 2024!
  • Mar. 24, 2024. Support interactive demo with Gradio.
  • Mar. 13, 2024. Release the inference code.
  • Mar. 12, 2024. Repo initialization.

🐱 Abstract

We introduce DragAnything, which utilizes an entity representation to achieve motion control for any object in controllable video generation. Compared to existing motion control methods, DragAnything offers several advantages. First, trajectory-based interaction is more user-friendly, since acquiring other guidance signals (e.g., masks, depth maps) is labor-intensive; users only need to draw a line (trajectory) during interaction. Second, our entity representation serves as an open-domain embedding capable of representing any object, enabling motion control for diverse entities, including the background. Lastly, our entity representation allows simultaneous and distinct motion control for multiple objects. Extensive experiments demonstrate that DragAnything achieves state-of-the-art performance on FVD, FID, and user studies, particularly for object motion control, where our method surpasses the previous state of the art (DragNUWA) by 26% in human voting.


User-Trajectory Interaction with SAM

(Demo: input image, drag point with SAM, 2D Gaussian trajectory, and generated video.)

Comparison with DragNUWA

(Three side-by-side comparisons of DragNUWA vs. Ours, each showing the input image and drag, the generated video, and the pixel-motion visualization.)

More Demos

(Each demo shows: drag point with SAM, 2D Gaussian, generated video, and pixel-motion visualization.)

Various Motion Control

(Each row shows: drag point with SAM, 2D Gaussian, generated video, and pixel-motion visualization.)
(a) Motion Control for Foreground
(b) Motion Control for Background
(c) Simultaneous Motion Control for Foreground and Background
(d) Motion Control for Camera Motion

🔧 Dependencies and Dataset Preparation

Dependencies

git clone https://github.com/Showlab/DragAnything.git
cd DragAnything

conda create -n DragAnything python=3.8
conda activate DragAnything
pip install -r requirements.txt

Dataset Preparation

Download VIPSeg and YouTube-VOS to the ./data directory.

Motion Trajectory Annotation Preparation

You can use our preprocessed annotation files, or generate your own motion trajectory annotation files with CoTracker.

If you choose to generate the motion trajectory annotations yourself, follow the processing steps from CoTracker:

cd ./utils/co-tracker
pip install -e .
pip install matplotlib flow_vis tqdm tensorboard

mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/facebook/cotracker/resolve/main/cotracker2.pth
cd ..

Then, modify the corresponding video_path, ann_path, and save_path in the Generate_Trajectory_for_VIPSeg.sh file and run the script. The resulting trajectory annotations will be saved as .json files in the save_path directory.

sh Generate_Trajectory_for_VIPSeg.sh

Trajectory visualization

You can run the following command for visualization.

cd ./utils/
python vis_trajectory.py
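
To peek at a generated annotation file directly, here is a minimal loading sketch; the file path and the JSON layout are assumptions, so adapt them to what Generate_Trajectory_for_VIPSeg.sh actually writes:

# Minimal sketch: inspect a generated trajectory .json (path and layout assumed).
import json

with open("./data/VIPSeg/trajectories/example.json") as f:
    trajectories = json.load(f)

# assuming a mapping from entity id to a list of per-frame [x, y] points
for entity_id, track in trajectories.items():
    print(entity_id, len(track), "tracked points")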

Pretrained Model Preparation

We adopt ChilloutMix as the pretrained model for extracting the entity representation; please download the diffusers version:

mkdir -p utils/pretrained_models
cd utils/pretrained_models

# Diffusers-version ChilloutMix to utils/pretrained_models
git-lfs clone https://huggingface.co/windwhinny/chilloutmix.git
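
As a quick sanity check that the download is usable, the diffusers-format checkpoint should load with the standard Stable Diffusion pipeline. This is only a rough sketch (the actual entity-representation extraction is done later by utils/extract_semantic_point.py); the local path simply mirrors the clone destination above:

# Rough sanity-check sketch, not the repo's extraction code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "utils/pretrained_models/chilloutmix",  # clone destination from the step above
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")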

Then download our pretrained ControlNet weights:

mkdir -p model_out/DragAnything
cd model_out/DragAnything

# Diffusers-version DragAnything to model_out/DragAnything
git-lfs clone https://huggingface.co/weijiawu/DragAnything

🖌️ Train (Awaiting release)

1) Semantic Embedding Extraction

cd ./utils/
python extract_semantic_point.py

2) Train DragAnything

For VIPSeg

sh ./script/train_VIPSeg.sh

For YouTube-VOS

sh ./script/train_youtube_vos.sh

🖌️ Evaluation

Evaluation for FID and FVD

cd utils
sh Evaluation_FID.sh
cd utils/Eval_FVD
sh compute_fvd.sh

Evaluation for ObjMC

cd utils/Eval_ObjMC
python ./ObjMC.py
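
For intuition, ObjMC scores motion control by comparing trajectories extracted from the generated videos against the ground-truth ones; a bare-bones sketch of that idea (not the repo's ObjMC.py) is:

# Bare-bones sketch of the ObjMC idea: mean Euclidean distance between
# generated-video trajectories and ground-truth trajectories.
import numpy as np

def objmc(pred_traj: np.ndarray, gt_traj: np.ndarray) -> float:
    # both arrays shaped (num_frames, num_points, 2), in pixel coordinates
    return float(np.linalg.norm(pred_traj - gt_traj, axis=-1).mean())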

🖌️ Inference for a single video

python demo.py

or run interactive inference with Gradio (install gradio==3.50.2). Download the weight sam_vit_h_4b8939.pth from SAM first, then:

cd ./script
python gradio_run.py
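
If convenient, the SAM checkpoint can also be fetched in Python; the URL below is the official Segment Anything release, while the save location is an assumption (place the file wherever gradio_run.py expects it):

# Optional download helper; the target path is an assumption, adjust as needed.
import urllib.request

SAM_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
urllib.request.urlretrieve(SAM_URL, "sam_vit_h_4b8939.pth")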

🖌️ Visualization of pixel motion for the generated video

cd utils/co-tracker
python demo.py
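
Alternatively, CoTracker can be loaded through torch.hub to obtain point tracks for your own clip; this sketch follows CoTracker's public API and is independent of the repo's demo.py (the example video path is a placeholder):

# Sketch using CoTracker via torch.hub; the video path is a placeholder.
import torch
from cotracker.utils.visualizer import read_video_from_path

frames = read_video_from_path("./assets/example.mp4")  # (T, H, W, C) numpy array
video = torch.from_numpy(frames).permute(0, 3, 1, 2)[None].float()  # (B, T, C, H, W)

cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")
pred_tracks, pred_visibility = cotracker(video, grid_size=10)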

📖 BibTeX

@misc{wu2024draganything,
      title={DragAnything: Motion Control for Anything using Entity Representation}, 
      author={Weijia Wu and Zhuang Li and Yuchao Gu and Rui Zhao and Yefei He and David Junhao Zhang and Mike Zheng Shou and Yan Li and Tingting Gao and Di Zhang},
      year={2024},
      eprint={2403.07420},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

🤗 Acknowledgements
