Diffusers

    🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.

    The library has three main components:

    • State-of-the-art diffusion pipelines for inference with just a few lines of code. 🤗 Diffusers offers many pipelines; check out the table in the pipeline overview for a complete list of available pipelines and the tasks they solve.
    • Interchangeable noise schedulers for balancing trade-offs between generation speed and quality.
    • Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
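    The three components above can be combined into a bare-bones denoising loop. The sketch below uses a tiny, randomly initialized `UNet2DModel` so it runs without downloading a checkpoint; the architecture sizes and step count are illustrative only (in practice you would load pretrained weights with `from_pretrained`):

    ```python
    import torch
    from diffusers import DDPMScheduler, UNet2DModel

    # A small, randomly initialized model stands in for a pretrained checkpoint.
    model = UNet2DModel(
        sample_size=32,
        in_channels=3,
        out_channels=3,
        block_out_channels=(32, 64),
        down_block_types=("DownBlock2D", "AttnDownBlock2D"),
        up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    )

    # Schedulers are interchangeable: swapping this class trades speed for quality.
    scheduler = DDPMScheduler(num_train_timesteps=1000)
    scheduler.set_timesteps(10)  # few steps, for illustration

    # Start from pure noise and iteratively denoise it.
    sample = torch.randn(1, 3, 32, 32)
    for t in scheduler.timesteps:
        with torch.no_grad():
            noise_pred = model(sample, t).sample
        sample = scheduler.step(noise_pred, t, sample).prev_sample
    ```

    Because the scheduler is a separate object from the model, you can swap in a different one (for example `DDIMScheduler`) without touching the model — this separation is what the "interchangeable noise schedulers" bullet refers to.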