Stable Diffusion on Google Colab: Free GPU Setup


October 25, 2025
8 min read

Stable Diffusion lets you turn text prompts into images using AI—but normally you need a powerful GPU, which most people don’t have lying around. If you don’t have one, no worries: you can run Stable Diffusion for free with Google Colab using their cloud GPUs.

This guide is different from the basic tutorials you see everywhere. It starts simple, but builds real depth so you actually understand what you’re doing—not just copy/pasting code and hoping it works.

✅ Beginner-friendly ✅ Works with Colab Free Tier ✅ Includes Pro Tweaks + Speed Optimizations ✅ Updated for 2025


📚 Table of Contents

  1. What is Stable Diffusion?
  2. How Stable Diffusion Works (Quick Explanation)
  3. Why Use Google Colab?
  4. Three Ways to Run Stable Diffusion on Colab
  5. Setup – Prerequisites
  6. Step-by-Step: Run Stable Diffusion Free on Colab
  7. Improve Speed + Quality
  8. Save Images to Google Drive
  9. Free vs Paid Colab Tiers
  10. Advanced Usage: Negative Prompts, Resolution, Batching
  11. Use Custom Models from CivitAI / Hugging Face
  12. Troubleshooting
  13. Prompt Engineering Guide
  14. FAQs
  15. Conclusion

❓ What Is Stable Diffusion?

Stable Diffusion is an open-source text-to-image AI model. You give it a description (prompt), and it generates a realistic image — it’s basically like having a digital artist that never sleeps. It’s popular for:

  • Concept art
  • Character design
  • Posters
  • Photography simulation
  • Anime + digital art

It runs on a GPU—so it’s perfect for Colab.


🧠 How Stable Diffusion Works (Simple)

  • Your text prompt is converted into numbers by a text encoder (CLIP).
  • Those numbers guide the U-Net model to denoise random noise step by step.
  • Each step adds details until an image emerges.

✅ Result: The AI doesn’t “draw”—it denoises.
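
If you're curious what that loop looks like in code, here's a minimal sketch built from the same diffusers building blocks the pipeline wraps (tokenizer, CLIP text encoder, U-Net, scheduler, VAE). It skips classifier-free guidance to stay short, so the output will look rough; it's for illustration only, and it assumes you've already done the install and Hugging Face login from the setup steps below.

import torch
from PIL import Image
from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # same checkpoint used later in this guide
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to("cuda")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to("cuda")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to("cuda")
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

# 1) The text encoder (CLIP) turns the prompt into numbers.
tokens = tokenizer(["a castle on a mountain"], padding="max_length",
                   max_length=tokenizer.model_max_length, return_tensors="pt")
with torch.no_grad():
    text_embeddings = text_encoder(tokens.input_ids.to("cuda"))[0]

# 2) Start from pure random noise in latent space.
latents = torch.randn((1, unet.config.in_channels, 64, 64), device="cuda")
scheduler.set_timesteps(30)
latents = latents * scheduler.init_noise_sigma

# 3) Each step: the U-Net predicts the noise, the scheduler removes a bit of it.
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = unet(scheduler.scale_model_input(latents, t), t,
                          encoder_hidden_states=text_embeddings).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample

# 4) The VAE decodes the finished latents into pixels.
with torch.no_grad():
    decoded = vae.decode(latents / vae.config.scaling_factor).sample
arr = ((decoded / 2 + 0.5).clamp(0, 1)[0].permute(1, 2, 0).cpu().numpy() * 255).round().astype("uint8")
Image.fromarray(arr)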


✅ Why Use Google Colab?

Feature | Benefit
Free GPU | Yes (T4 GPUs available)
No setup | Runs in browser
Fast start | Model runs in ~10 minutes
Cross-platform | Works on Mac/Windows/Linux

If you want to run Stable Diffusion on Google Colab free, this is still the best option in 2025 — nothing else really comes close for free GPU access.


⚙️ Methods: 3 Ways to Run Stable Diffusion on Google Colab

Method | Difficulty | GPU demand | Features
✅ Diffusers (this guide) | Easy | Low | API-style generation
✅ AUTOMATIC1111 WebUI | Medium | Medium | Full features + UI
✅ ComfyUI | Advanced | Medium/High | Workflow control

This guide uses Diffusers first—fastest, cleanest setup.


🔧 Prerequisites

You’ll need:

  • A Google account (Colab runs in your browser)
  • A free Hugging Face account, since some models require you to accept their license
  • ✅ A Hugging Face access token: create a Read token (no write permissions needed)


🚀 Step-by-Step Setup

✅ Step 1: Enable GPU

First thing you need to do: enable the GPU (otherwise you’ll be waiting forever).

Runtime → Change runtime type → Hardware accelerator: GPU (T4) → Save
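
To confirm a GPU is actually attached, run a quick check in a cell:

!nvidia-smi   # should list a Tesla T4 (or similar) on the free tier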

✅ Step 2: Install Dependencies

!pip install diffusers transformers accelerate safetensors --quiet

✅ Step 3: Login to Hugging Face

from huggingface_hub import login
login()  # paste your Hugging Face Read token when prompted

✅ Step 4: Load Stable Diffusion

from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"  # if this repo is unavailable, the mirror "stable-diffusion-v1-5/stable-diffusion-v1-5" should work
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)  # half precision to save VRAM
pipeline.to("cuda")  # move the model onto the GPU

✅ Step 5: Generate Your First Image

prompt = "epic cinematic castle on a mountain, dramatic sky, 4k, ultra detailed"
image = pipeline(prompt).images[0]  # .images is a list; grab the first result
image  # referencing the variable displays the image inline in Colab
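
If you want reproducible results, you can pin the random seed; this is a standard diffusers pattern, not anything Colab-specific:

generator = torch.Generator("cuda").manual_seed(42)  # same seed + prompt = same image
image = pipeline(prompt, generator=generator).images[0]
image.save("castle.png")  # quick local save; saving to Google Drive is covered below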

✅ Success: You just ran Stable Diffusion for free on Colab — pretty cool, right?


⚡ Improve Speed + Prevent Crashes (Important)

Colab’s free T4 GPU has roughly 15 GB of VRAM, which disappears quickly at higher resolutions. Start by trimming memory use:

pipeline.enable_attention_slicing()  # compute attention in slices to lower peak VRAM

Reduce steps to speed things up (the default is 50):

image = pipeline(prompt, num_inference_steps=28).images[0]
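
A few more optional tweaks worth trying; these are standard diffusers features, so treat the exact numbers as starting points rather than rules:

from diffusers import DPMSolverMultistepScheduler

pipeline.enable_vae_slicing()          # decode images in slices to lower peak VRAM
# pipeline.enable_model_cpu_offload()  # offload idle parts to CPU RAM if you still hit OOM

# Swapping in the DPM++ scheduler usually keeps quality at ~25 steps.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
image = pipeline(prompt, num_inference_steps=25).images[0]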

💾 Save Images to Google Drive

from google.colab import drive
drive.mount('/content/drive')
image.save('/content/drive/MyDrive/sd_image.png')
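
If you generate a lot of images, it helps to keep them in a dedicated folder with unique names; here's one simple way (the folder name is just an example):

import os
from datetime import datetime

out_dir = "/content/drive/MyDrive/stable_diffusion_outputs"
os.makedirs(out_dir, exist_ok=True)                          # create the folder once
image.save(f"{out_dir}/{datetime.now():%Y%m%d_%H%M%S}.png")  # timestamped filename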

💰 Free vs Paid Colab

Feature | Free Tier | Colab Pro
GPU | T4 | T4 or A100
Runtime | ~1 hour | Up to 12 hours
Priority | Low | High
Best for | Beginners | Daily users

🔧 Advanced Options

🧽 Negative Prompts (Cleaner Images)

image = pipeline(prompt, negative_prompt="blurry, lowres, bad hands, extra limbs").images[0]

🔢 Batch Generate

images = pipeline(prompt, num_images_per_prompt=4).images
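
To preview a batch at a glance, diffusers ships a small helper for arranging images in a grid:

from diffusers.utils import make_image_grid

grid = make_image_grid(images, rows=2, cols=2)  # assumes the 4 images generated above
grid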

🖼️ Higher Resolution

image = pipeline(prompt, height=768, width=512).images[0]

✅ Keep resolution low to avoid GPU crashes.


🎨 Use Custom Models

Swap model_id to load other checkpoints from Hugging Face, such as anime or photorealistic models, or SD Turbo for speed:

model_id = "stabilityai/sd-turbo" # very fast
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
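
Turbo-style models are distilled to work in very few steps with guidance turned off, so the generation call changes too; per the SD Turbo model card, something like this is the intended usage:

image = pipeline(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]  # 1-4 steps, no CFG
image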

🧯 Troubleshooting

Error | Fix
CUDA out of memory | Lower resolution to 512×512
Hugging Face auth error | Accept the model license + re-login
No GPU available | Try incognito or change runtime
Slow results | Use SD Turbo

🎯 Prompt Engineering Cheat Sheet

<subject>, <style>, <lighting>, <detail>, <lens>, <mood>

✅ Example:

"sci-fi temple ruins, cinematic, volumetric lighting, ultra textured, 35mm, dark green fog"

❓ FAQ

Is Google Colab really free? Yes, limited free GPU time.

Do I need a GPU laptop? No.

Is this safe? Yes, as long as you keep your Hugging Face token out of shared notebooks (see the token-safety sections below).


✅ Conclusion

You now know how to run Stable Diffusion for free on Google Colab with GPU acceleration—even with no experience. That’s the beauty of Colab: it levels the playing field.

Next steps: ✅ Try better models — experiment and see what works for you ✅ Add LoRA styles — they can transform your images ✅ Train your own AI model — if you’re feeling ambitious


🔧 Bonus: Best Free Google Colab Notebooks for Stable Diffusion

If you want a ready-to-use notebook instead of typing code manually, here are reliable options:

Notebook | Type | Link
Fast Stable Diffusion (TheLastBen) | AUTOMATIC1111 UI | https://github.com/TheLastBen/fast-stable-diffusion
Lightweight SD (Diffusers) | Script | https://github.com/huggingface/diffusers
SD Turbo Demo | Ultra fast | https://huggingface.co/stabilityai/sd-turbo

✅ Tip: Use TheLastBen if you want a full WebUI experience on Colab.


🛡️ Keep Your Hugging Face Token Safe

Avoid accidentally exposing your token:

from google.colab import userdata
from huggingface_hub import login
login(token=userdata.get("HF_TOKEN"))  # reads the HF_TOKEN Colab secret

Add HF_TOKEN in Colab’s Secrets panel (the 🔑 icon in the left sidebar) and allow the notebook to access it.


⚙️ Quick Settings Cheat Sheet

Goal | Setting
Best quality | num_inference_steps=50, guidance_scale=9
Faster speed | num_inference_steps=25
Prevent bad hands | Use negative prompts
Realistic style | Try the SDXL base model
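
Putting a few of those together in one call (same pipeline and prompt as earlier):

image = pipeline(
    prompt,
    negative_prompt="blurry, lowres, bad hands, extra limbs",
    num_inference_steps=50,   # the "best quality" row from the table
    guidance_scale=9,
).images[0]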

❓ FAQ

Is Google Colab really free for Stable Diffusion? Yes, but sessions are limited to ~1 hour on the free tier.

Can I use AUTOMATIC1111 on Colab? Yes, but it’s heavier; best with Colab Pro.

Can I upload my own LoRA models or checkpoints? Yes—upload to /content or load from Hugging Face.

What about CivitAI models? You can download and use .safetensors checkpoints from CivitAI.
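
For a downloaded .safetensors checkpoint, diffusers can load the single file directly; the path below is just a placeholder for wherever you uploaded it:

pipeline = StableDiffusionPipeline.from_single_file(
    "/content/my_civitai_model.safetensors",  # hypothetical upload path
    torch_dtype=torch.float16,
).to("cuda")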


🛡️ Security: Protect Your Hugging Face Token

When running Stable Diffusion on Google Colab, never paste your Hugging Face token directly in notebooks. If the notebook is shared or saved, your token could be exposed.

Best Practice – Use Colab Secrets:

  1. Open the Secrets panel (the 🔑 icon in Colab’s left sidebar)
  2. Add a new secret named HF_TOKEN and allow notebook access
  3. Use this code safely:
from google.colab import userdata
from huggingface_hub import login
login(token=userdata.get("HF_TOKEN"))

✅ This keeps your token private & secure.


🖥️ Run Stable Diffusion via AUTOMATIC1111 on Google Colab

The AUTOMATIC1111 WebUI is the most popular way to use Stable Diffusion, with sliders, previews, and extensions.

Easy Colab Installer

TheLastBen’s fast-stable-diffusion repo (linked in the Bonus section above) includes a ready-made AUTOMATIC1111 Colab notebook. Open it in Colab, run the cells top to bottom, then open the WebUI (Gradio) link shown in the output of the last cell.

⚡ Features of AUTOMATIC1111 WebUI

  • Live prompt tweaking
  • Negative prompts
  • LoRA support
  • Upscaling
  • Extensions (ControlNet, OpenPose, etc.)

💡 Tip: The WebUI runs best on Colab Pro; Colab has at times restricted Stable Diffusion web UIs on the free tier, so if you hit warnings there, stick with the Diffusers method above.


🧩 Run Stable Diffusion with ComfyUI on Google Colab

ComfyUI is a node-based workflow UI—great for power users.

Install on Colab

!git clone https://github.com/comfyanonymous/ComfyUI.git
%cd ComfyUI
!pip install -r requirements.txt --quiet
!python main.py  # serves the UI on port 8188; you need a tunnel (e.g. cloudflared or ngrok) to open it from Colab

✅ Why use ComfyUI?

Feature | Benefit
Full control | Node workflow design
Training workflows | LoRA + ControlNet graphs
Pro customizations | Ultimate flexibility

❓ FAQ

Can I run Stable Diffusion for free on Google Colab? Yes. Using the Diffusers Python pipeline or AUTOMATIC1111, you can run Stable Diffusion on Colab’s free GPU tier.

Why does my session stop after 1 hour? Colab Free limits runtime duration. Use Google Drive to save images and progress.

Which Stable Diffusion model works best on Colab? Use runwayml/stable-diffusion-v1-5 or stabilityai/sd-turbo for fast results.

How do I fix CUDA out of memory? Lower resolution to 512×512 and enable attention slicing.

Is AUTOMATIC1111 better than Diffusers? Yes for UI control and extensions, no if you want speed and simplicity.

Is ComfyUI hard to use? It’s harder than WebUI but much more powerful for workflows.



✅ Final Thoughts

You now have a complete, working setup to run Stable Diffusion for free on Google Colab. You learned: ✅ How to load models ✅ Generate images ✅ Save output ✅ Prevent GPU crashes ✅ Boost image quality