Abstract

We present Stable Video Diffusion — a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at https://github.com/Stability-AI/generative-models.

Blog: https://stability.ai/news/stable-video-diffusion-open-ai-video-model

Paper: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf

Code: https://github.com/Stability-AI/generative-models

Waitlist: https://stability.ai/contact

Model: https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/tree/main
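For reference, the img2vid-xt checkpoint linked above can be loaded through Hugging Face's `diffusers` library. This is a minimal sketch, assuming a recent `diffusers` release with `StableVideoDiffusionPipeline` and a CUDA GPU; the input image path is a placeholder:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the SVD-XT image-to-video checkpoint in half precision.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

# Conditioning image (placeholder path); the model expects 1024x576 input.
image = load_image("input.png").resize((1024, 576))

# decode_chunk_size limits how many frames the VAE decodes at once,
# capping peak VRAM during decoding.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

Even with CPU offloading, expect to need a GPU with a substantial amount of VRAM, which is what the comments below are reacting to.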

  • Scew
    12 points · 1 year ago

    damn, good looking out. Won’t be using this for awhile til I sell a couple of kidneys.

    • @harry_balzac
      6 points · 1 year ago

      Sell one kidney to get a 3d printer then print up more kidneys! It’s like printing money! (Please don’t sell your kidneys fr.)

    • ffhein
      2 points · 1 year ago

      You can buy 2x second hand RTX3090 for about the same price of one new RTX4090, though you’ll probably need to get a new PSU as well. Or rent the hardware through runpod.io or similar for around $1/hour. Still a lot of money for most people but it’s not completely unachievable… Spend some time in the local LLM community and 48GB VRAM will start to feel like the bare minimum if you want to use any of the better models :S

      • Scew
        2 points · edited · 1 year ago

        I’m just a lowly image generation hobbyist able to run some decent models on my 2060 Super. lol. I had the highest tier of Colab for a while, which was nice, but I didn’t feel like learning how to create Jupyter notebooks, so I was at the mercy of people keeping their dependencies up-to-date and would more often sit down to a broken notebook than anything else. My whole rig is probably achievable for less than the price of one 3090 q.q

        Edit: took 5 seconds to do a search and I was low-balling my rig. Haven’t looked at prices in awhile.

        • ffhein
          2 points · 1 year ago

          Definitely not cheap, but at least not as bad as having to buy an A100 for €7000 to get 40GB VRAM. I’m hoping second hand GPU prices will plummet after Christmas