This guide shows you how to install and run FLUX models in Stable Diffusion Forge. It’s based on a working setup guide from Reddit and verified installation steps I’ve tested myself. The instructions are simple and written for beginners — no technical jargon, just clear steps.
Source reference: Reddit: Getting started with FLUX in Forge
Requirements
Before you start, make sure you have:
| Requirement | Details |
|---|---|
| GPU | NVIDIA recommended (8GB VRAM minimum) |
| Python | Not required (Forge installer includes environment) |
| OS | Windows 10/11, Linux |
| Internet | Required to download models |
FLUX works best on NVIDIA GPUs — that’s the honest truth. AMD support is experimental, so your mileage may vary.
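Not sure how much VRAM your card has? Here is a quick, optional check using PyTorch (any Python environment with torch installed will do; Forge also prints this information in its console at startup):

```python
# Optional VRAM check with PyTorch. Run in any environment that has torch installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected. Check the requirements above.")
```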
Step 1: Install Stable Diffusion Forge
- Download Forge from the official GitHub: lllyasviel/stable-diffusion-webui-forge
- Click Code → Download ZIP or clone the repo.
- Extract it to a folder like:
C:\AI\Forge
- Run run.bat (Windows) or run.sh (Linux).
- Wait until Forge finishes installing dependencies — this might take a few minutes, so grab some coffee.
- Open the interface at:
http://127.0.0.1:7860
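If the page does not load right away, the server may still be starting. A minimal sketch that checks whether the interface is reachable on the default port (adjust the address if you changed it):

```python
# Check that the Forge UI is up on the default address.
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:7860", timeout=5) as resp:
        print(f"Forge is up (HTTP {resp.status})")
except Exception as exc:
    print(f"Forge is not reachable yet: {exc}")
```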
Step 2: Download FLUX Model
Download FLUX from an official source:
- FLUX.1-dev (high quality): FLUX.1-dev
- FLUX.1-schnell (fast): FLUX.1-schnell
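If you prefer the command line to a browser download, here is a minimal sketch using the huggingface_hub library (an extra install: pip install huggingface_hub). The repo and file names below are how the official Black Forest Labs repositories are laid out at the time of writing, and the target path assumes the C:\AI\Forge folder from Step 1. Note that FLUX.1-dev is a gated repo, so you would need to accept its license and log in with a Hugging Face token first; FLUX.1-schnell is not gated.

```python
# Sketch: download FLUX.1-schnell directly into Forge's checkpoint folder.
# Assumes: pip install huggingface_hub, and Forge installed at C:\AI\Forge.
# For FLUX.1-dev, accept the license on Hugging Face and run `huggingface-cli login` first.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-schnell",
    filename="flux1-schnell.safetensors",
    local_dir=r"C:\AI\Forge\models\Stable-diffusion",
)
```

Whatever filename ends up in the folder is the name you will see in Forge's model dropdown.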
Place the .safetensors model file here:
Forge/models/Stable-diffusion/
Step 3: Required Text Encoders (Correct Files & Paths)
FLUX in Forge requires two text encoders:
- CLIP-L (ViT-L/14) – clip_l.safetensors
- T5-XXL (v1.1) – choose one of:
  - t5xxl_fp16.safetensors (~9.8 GB, best quality; needs ~32 GB system RAM)
  - t5xxl_fp8_e4m3fn.safetensors (~4.9 GB, lighter; recommended for most PCs)
Download (trusted source):
- Hugging Face:
comfyanonymous/flux_text_encoders
Put both files here (create folder if missing):
Forge/models/text_encoder/
Depending on your installation, the path may be webui/models/text_encoder/ inside the Forge directory.
You do not need to download OpenAI’s CLIP repository or Google’s full T5 model separately — the .safetensors files above are all you need. Don’t overthink this part.
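The same scripted approach works for the encoders. A minimal sketch, again assuming the C:\AI\Forge location and the huggingface_hub library, pulling both files named above from the repo linked above:

```python
# Sketch: fetch both required text encoders from the comfyanonymous/flux_text_encoders repo.
# Assumes Forge is installed at C:\AI\Forge.
import os
from huggingface_hub import hf_hub_download

target = r"C:\AI\Forge\models\text_encoder"
os.makedirs(target, exist_ok=True)  # Step 3 says to create this folder if it's missing

for filename in ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"]:
    hf_hub_download(
        repo_id="comfyanonymous/flux_text_encoders",
        filename=filename,
        local_dir=target,
    )
```

Swap in t5xxl_fp16.safetensors if you have the system RAM for it.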
Step 4: Enable FLUX in Forge
- Start Forge.
- Go to Settings → Stable Diffusion.
- Enable FLUX compatibility mode (if available).
- Select your model:
flux1-dev-fp16.safetensors or flux1-schnell-fp16.safetensors
- Click Apply settings.
- Reload UI.
Step 5: Recommended Settings
| Option | Value |
|---|---|
| Sampler | Euler or DPM++ 2M |
| Steps | 20–30 |
| CFG Scale | 3–5 |
| Resolution | 1024x1024 |
For low VRAM GPUs, enable:
- Settings → Optimization → Low VRAM Mode
- Use FLUX.1-schnell for faster generation
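If you launch Forge with the --api flag (an assumption about your launch arguments), you can also drive these settings from a script through the A1111-style HTTP API that Forge inherits. A minimal sketch, reusing the recommended values above with a placeholder prompt:

```python
# Sketch: send a txt2img request to Forge's API with the recommended settings.
# Assumes Forge was started with the --api flag and is listening on the default port.
import base64
import json
import urllib.request

payload = {
    "prompt": "epic mountain valley, soft fog, sunrise glow, ultra detailed",  # placeholder prompt
    "negative_prompt": "blurry, lowres, watermark, text",
    "sampler_name": "Euler",
    "steps": 25,
    "cfg_scale": 4,
    "width": 1024,
    "height": 1024,
    "batch_size": 1,
}

req = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

# The API returns base64-encoded images; save the first one.
with open("flux_test.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```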
Troubleshooting
| Issue | Fix |
|---|---|
| Model not loading | Check file path in Forge/models/Stable-diffusion/ |
| Tokenizer error | Make sure clip_l.safetensors and a T5-XXL encoder are in models/text_encoder/ (see Step 3) |
| CUDA out of memory | Reduce resolution to 832x832 |
| Black images | Reduce CFG scale below 6 |
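Most of these issues come down to a file sitting in the wrong folder. A small sketch that lists what Forge can actually see (adjust BASE to your install location):

```python
# Sketch: list the .safetensors files in Forge's model folders. Adjust BASE to your install.
from pathlib import Path

BASE = Path(r"C:\AI\Forge")
for sub in ["models/Stable-diffusion", "models/text_encoder"]:
    folder = BASE / sub
    print(f"\n{folder}:")
    if not folder.exists():
        print("  (folder missing)")
        continue
    for f in sorted(folder.glob("*.safetensors")):
        print(f"  {f.name}  ({f.stat().st_size / 1024**3:.1f} GB)")
```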
🔧 Optional: Enable LoRA Support for FLUX in Forge
Forge supports LoRA for FLUX, but you must enable it properly.
Step 1: Download FLUX LoRA
You can find FLUX-ready LoRAs on model-sharing sites such as Civitai and Hugging Face.
Place LoRA files in:
Forge/models/Lora/
Step 2: Enable LoRA Loader
- Go to Forge → Extensions
- Install Lora Block Weight (optional, improves LoRA control)
- Restart Forge
Step 3: Use LoRA in Prompt
In the prompt box, call LoRA like this:
<lora:your_flux_lora:0.8>
Adjust strength between 0.6 and 1.0.
🧩 Add ControlNet for FLUX in Forge
Forge supports ControlNet for pose, depth, and edge guidance.
Install ControlNet Extension
- Go to Extensions → Available
- Search ControlNet
- Install and restart UI
Download ControlNet Models
Place models in:
Forge/models/ControlNet/
Recommended:
| Model | Purpose |
|---|---|
| controlnet-openpose | Pose control |
| controlnet-canny | Edge outlines |
| controlnet-depth | Depth accuracy |
Enable ControlNet
- Check Enable ControlNet
- Upload reference image
- Generate with guidance
⚙️ VRAM Optimization for FLUX in Forge
If you have a low VRAM GPU (8–12GB), apply these settings:
- Settings → Optimization → Enable low VRAM mode
- Use batch size = 1
- Use FLUX.1-schnell instead of FLUX-dev
- Reduce resolution: 832×832 or 768×1024
- Disable VAE tiling unless needed
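To see whether these changes actually help, you can watch VRAM usage from a second terminal while Forge generates. A minimal sketch using the NVIDIA management library bindings (an extra install: pip install nvidia-ml-py; NVIDIA GPUs only):

```python
# Sketch: poll GPU memory usage once per second. Stop with Ctrl+C.
# Assumes an NVIDIA GPU and `pip install nvidia-ml-py`.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {info.used / 1024**3:.1f} / {info.total / 1024**3:.1f} GB", end="\r")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```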
🔼 Install High-Quality Upscalers (Optional)
To improve final image quality, install a 4x AI upscaler.
Recommended Upscalers
| Model | Purpose |
|---|---|
| 4x-UltraSharp.pth | Photorealistic sharpening |
| RealESRGAN 4x | General enhancement |
| 4x-AnimeSharp.pth | Anime/illustration |
Place upscalers here:
Forge/models/ESRGAN/
Enable in Forge under Extras → Upscaling.
✅ Recommended Generation Settings for FLUX in Forge
These settings provide a good starting point for most users:
| Setting | Recommended |
|---|---|
| Sampler | Euler a / DPM++ 2M |
| Steps | 22–28 |
| CFG Scale | 3.5–5 |
| Hires Fix | Optional |
| Refiner | Off |
💡 Prompt Examples for FLUX
General Quality Prompt
hyperrealistic cinematic portrait, dramatic light, depth of field, masterpiece quality, 85mm lens
Landscape Prompt
epic mountain valley, soft fog, sunrise glow, ultra detailed, atmospheric depth, realistic lighting
Cyberpunk Prompt
futuristic neon city street at night, rain reflections, cinematic film still, volumetric lighting
Use a negative prompt for best results:
blurry, lowres, watermark, text, distorted, bad hands, noisy
🚀 Performance Tips for Forge + FLUX
- Disable VAE Precision High to save VRAM
- Use xformers if available
- Close browser tabs while generating
- Avoid CFG > 6 (FLUX prefers lower values)
- Use schnell model for faster image previews
🛠️ Troubleshooting
| Problem | Solution |
|---|---|
| Forge crashes on launch | Update Python + GPU driver |
| FLUX model not detected | Check that the file keeps its .safetensors extension and sits in models/Stable-diffusion/ |
| Images look washed out | Lower CFG scale, try DPM++ sampler |
| CUDA out of memory | Reduce resolution or enable low VRAM mode |
| Tokenizer error | Place the T5-XXL encoder file in models/text_encoder/ (see Step 3) |
Related Guides
- FLUX in ComfyUI: /blog/flux-comfyui-guide
- SDXL Best Practices: /blog/sdxl-best-practices-guide
- Stable Diffusion Prompting: /blog/stable-diffusion-prompting-guide
- Ostris AI Toolkit (LoRA training): /blog/ai-toolkit-guide