AI Video THUDM/CogVideoX-5b
CogVideoX is an open-source version of the video generation model originating from QingYing. The table below displays the list of video generation models we currently offer, along with their foundational information.


When testing with the diffusers library, all optimizations provided by diffusers were enabled. Actual VRAM/memory usage has not been tested on devices other than the NVIDIA A100 / H100, but the scheme should generally adapt to all devices with the NVIDIA Ampere architecture and above. If the optimizations are disabled, VRAM usage increases significantly, with peak VRAM about 3 times the values shown in the table, while speed increases by 3-4 times. You can selectively disable some of these optimizations (a full setup sketch follows this list):
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
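
For concreteness, here is a minimal end-to-end sketch with these optimizations enabled (the model ID is the one on this page; the prompt and generation parameters are illustrative, not prescriptive):

import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# VRAM optimizations; drop the ones you want to trade for speed.
pipe.enable_model_cpu_offload()   # offloads whole sub-models to CPU between uses
# pipe.enable_sequential_cpu_offload()  # more aggressive alternative; use one offload mode, not both
pipe.vae.enable_slicing()         # decodes the latent batch one slice at a time
pipe.vae.enable_tiling()          # decodes each frame in tiles

video = pipe(
    prompt="A panda playing a guitar in a bamboo forest",
    num_inference_steps=50,
    guidance_scale=6,
    num_frames=49,
).frames[0]
export_to_video(video, "output.mp4", fps=8)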

When performing multi-GPU inference, the enable_model_cpu_offload() optimization must be disabled.
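
As one possible multi-GPU setup, here is a minimal sketch using diffusers' "balanced" device_map to shard pipeline components across GPUs (an assumption about your diffusers version; the essential point is simply that enable_model_cpu_offload() is never called):

import torch
from diffusers import CogVideoXPipeline

# Assumption: a recent diffusers release with device_map support for pipelines.
# Components are spread across visible GPUs; do not combine this with
# pipe.enable_model_cpu_offload().
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()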
Using INT8 models reduces inference speed significantly; this trade-off exists so that GPUs with less VRAM can still run inference normally while keeping the loss in video quality minimal.
The 2B model is trained with FP16 precision, and the 5B model is trained with BF16 precision. We recommend using the precision the model was trained with for inference.
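
In code this amounts to choosing torch_dtype per model, for example (model IDs as published on Hugging Face):

import torch
from diffusers import CogVideoXPipeline

# 2B was trained in FP16, 5B in BF16; match the inference dtype to the training dtype.
pipe_2b = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe_5b = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)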
TorchAO and Optimum-quanto can be used to quantize the text encoder, transformer, and VAE modules, reducing CogVideoX's memory requirements and making it possible to run the model on a free T4 Colab or on GPUs with less VRAM. Notably, TorchAO quantization is fully compatible with torch.compile, which can significantly improve inference speed. FP8 precision requires an NVIDIA H100 or newer device, and the torch, torchao, diffusers, and accelerate Python packages must be installed from source; CUDA 12.4 is recommended.
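
A hedged sketch of INT8 weight-only quantization with TorchAO follows (function names per recent torchao and diffusers releases; treat the exact APIs as version-dependent):

import torch
from torchao.quantization import quantize_, int8_weight_only
from transformers import T5EncoderModel
from diffusers import AutoencoderKLCogVideoX, CogVideoXPipeline, CogVideoXTransformer3DModel

model_id = "THUDM/CogVideoX-5b"

# Quantize each memory-heavy module to INT8 weights before assembling the pipeline.
text_encoder = T5EncoderModel.from_pretrained(model_id, subfolder="text_encoder", torch_dtype=torch.bfloat16)
quantize_(text_encoder, int8_weight_only())

transformer = CogVideoXTransformer3DModel.from_pretrained(model_id, subfolder="transformer", torch_dtype=torch.bfloat16)
quantize_(transformer, int8_weight_only())

vae = AutoencoderKLCogVideoX.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.bfloat16)
quantize_(vae, int8_weight_only())

pipe = CogVideoXPipeline.from_pretrained(
    model_id,
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()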
The inference speed tests were also run with the VRAM optimization scheme above; without these optimizations, inference is about 10% faster. Only the diffusers version of the model supports quantization.
The model only supports English input; prompts in other languages can be translated into English during prompt refinement with a large language model.
https://huggingface.co/THUDM/CogVideoX-5b