If you’ve ever loaded a single LoRA and thought, “What if I blended this style with that character and one signature prop?” — welcome to multi‑LoRA composition. Stacking LoRAs unlocks wild creative control: one LoRA adds a painterly look, another encodes a character identity, and a third injects an object or lighting signature. The magic happens when they cooperate instead of colliding.
In this long‑form guide, I’ll walk you through the why and how of combining multiple LoRAs. We’ll cover the technical challenges (and how to tame them), build a stable workflow in ComfyUI, explore composition strategies (style+character, object layering, regional control), and finish with advanced tricks like merging LoRAs into a single adapter.
I’ll keep it personal, practical, and repeatable — with the pitfalls I learned to avoid.
What you’ll learn
- The core idea behind LoRA and why stacking them can be powerful
- What current papers and the community say about multi‑concept composition
- A battle‑tested ComfyUI layout for loading multiple LoRAs and balancing strengths
- Composition strategies you can adopt today (with example weights and prompts)
- A troubleshooting playbook for artifacts, conflicts, and drift
- Advanced merging techniques and when to consider them
Background: what LoRA is and why composition is tricky
A LoRA (Low‑Rank Adaptation) is a small adapter that gently modifies a base diffusion model by learning low‑rank matrices that attach to selected layers. Instead of retraining the entire model, you snap on a compact “style/identity/concept module.” They’re tiny, fast to train, and stackable — at least in theory.
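Mechanically, each LoRA stores two small matrices per targeted layer and adds their scaled product to the frozen weight. A minimal PyTorch sketch of the idea, with illustrative names rather than any particular library's API:

```python
import torch

def apply_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, strength: float) -> torch.Tensor:
    """Effective weight W' = W + strength * (alpha / r) * (B @ A).

    W: frozen base weight, shape (out_features, in_features)
    A: low-rank "down" matrix, shape (r, in_features)
    B: low-rank "up" matrix, shape (out_features, r)
    alpha: scaling constant baked in at training time; r is the rank
    strength: the user-facing slider (strength_model in ComfyUI)
    """
    r = A.shape[0]
    return W + strength * (alpha / r) * (B @ A)

# Stacking two LoRAs on the same layer just sums their updates:
#   W'' = W + s1*(a1/r1)*(B1 @ A1) + s2*(a2/r2)*(B2 @ A2)
# which is why strengths need balancing when adapters overlap.
```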
In practice, stacked LoRAs can fight each other. A strong style LoRA can override a character LoRA’s features; two style LoRAs might pull the color science in opposite directions; and object LoRAs sometimes drop their signature prop into places you didn’t intend. That’s the fun — and the challenge — of composition.
Research and community signals
- Mix‑of‑Show (arXiv): proposes decentralized LoRA adaptation for multi‑concept customization, a sign that academic work recognizes multi‑adapter composition as a real need.
- Community threads on r/StableDiffusion repeat similar questions: “Can you use several LoRAs simultaneously? How do weights and conflicts work?” The lived experience is consistent: you can, and it works, but you must tune strengths and prompts.
References:
- Mix‑of‑Show — arXiv: https://arxiv.org/abs/2305.18292
- Reddit — multi‑LoRA questions: https://www.reddit.com/r/StableDiffusion/comments/18dhvep/can_you_use_several_loras_simultaneously_how_do/
The technical challenges (in plain English)
- Shared layers, different biases: Multiple LoRAs can modify the same base layer in different directions. Too much weight and you get artifacts or identity loss.
- CLIP vs UNet pathways: Many LoRA loaders let you independently set `strength_model` (UNet) and `strength_clip` (text encoder). Over‑cranking CLIP can mangle prompt semantics.
- Spatial control: Global LoRA weights apply everywhere. Without regional control, style might bleed into faces or objects you want untouched.
- VRAM/runtime: Each LoRA adds overhead. Stacking too many increases memory use and sometimes slows sampling.
Takeaway: Stacking LoRAs is powerful, but you’ll want a disciplined workflow to keep results reliable.
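If you want to see the “different directions” problem concretely, you can compare two LoRAs’ low‑rank deltas on a layer they both touch. A diagnostic sketch under loose assumptions: both files use kohya‑style key names, and the chosen key targets a linear layer (key naming varies by trainer):

```python
import torch
from safetensors.torch import load_file

def delta_for(sd: dict, prefix: str) -> torch.Tensor:
    """Reconstruct the dense update B @ A for one targeted linear layer."""
    down = sd[f"{prefix}.lora_down.weight"].float()  # (r, in_features)
    up = sd[f"{prefix}.lora_up.weight"].float()      # (out_features, r)
    return up @ down

lora_a = load_file("style.safetensors")       # placeholder filenames
lora_b = load_file("character.safetensors")

# Kohya-style key for a cross-attention projection; naming varies by trainer.
prefix = "lora_unet_mid_block_attentions_0_transformer_blocks_0_attn2_to_k"
da, db = delta_for(lora_a, prefix), delta_for(lora_b, prefix)

cos = torch.nn.functional.cosine_similarity(da.flatten(), db.flatten(), dim=0)
print(f"cosine similarity: {cos.item():.3f}")
# Near 0: updates are roughly orthogonal (usually fine).
# Strongly negative: they pull the layer in opposite directions.
```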
ComfyUI multi‑LoRA setup: a stable base layout
You can apply multiple LoRAs in ComfyUI by chaining multiple “Load LoRA” nodes into your model path or using a combined loader that accepts multiple adapters. The minimal pattern is straightforward and scales well.
Recommended node layout
- Load Checkpoint — choose a base model compatible with your LoRA family (SD 1.5 vs SDXL vs FLUX)
- Load LoRA (Style) — set `strength_model` and `strength_clip` (e.g., 0.6 / 0.6)
- Load LoRA (Character) — e.g., 0.9 / 0.7
- Load LoRA (Object/Prop or Lighting) — e.g., 0.4 / 0.4
- CLIP Text Encode (positive)
- CLIP Text Encode (negative)
- Sampler (Euler a / DPM++ 2M Karras)
- VAE Decode → Save Image
You can swap the order of LoRA nodes; because chained loaders simply sum their patches, order rarely changes the result, but keep a consistent convention (style → character → object) so you can reason about changes.
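If you drive ComfyUI through its HTTP API rather than the graph UI, the same chained pattern looks like this in API‑format JSON, shown here as a Python dict. Node IDs, filenames, and weights are placeholders; export your own graph via “Save (API Format)” to get exact field values:

```python
# A minimal sketch of chained "Load LoRA" nodes in ComfyUI's API format.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "photoreal_base.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "style.safetensors",
                     "strength_model": 0.6, "strength_clip": 0.6,
                     "model": ["1", 0], "clip": ["1", 1]}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "character.safetensors",
                     "strength_model": 0.9, "strength_clip": 0.7,
                     "model": ["2", 0], "clip": ["2", 1]}},
    "4": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "prop.safetensors",
                     "strength_model": 0.4, "strength_clip": 0.4,
                     "model": ["3", 0], "clip": ["3", 1]}},
    # ...text encoders, sampler, and VAE decode hang off node "4"...
}
```

The thing to notice is the chaining: each LoraLoader consumes the previous node’s model and clip outputs, so the adapters accumulate in sequence.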
References and tutorials:
- ComfyUI Docs — Multiple LoRAs Example: https://docs.comfy.org/tutorials/basic/multiple-loras
- MyAIForce — Minimalist ComfyUI LoRA workflow: https://myaiforce.com/comfyui-lora/
About weights: `strength_model` and `strength_clip`
- `strength_model` nudges the UNet (image denoising) pathway — it affects visual appearance and structure tendencies.
- `strength_clip` influences the text encoder — too high and prompts can become brittle; too low and the LoRA’s vocabulary might not activate cleanly.
A good starting band:
- Style LoRA: 0.4–0.8 (model), 0.3–0.7 (clip)
- Character LoRA: 0.7–1.1 (model), 0.5–0.9 (clip)
- Object/Lighting LoRA: 0.2–0.6 (model), 0.2–0.6 (clip)
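For comparison, the same starting bands expressed outside ComfyUI: diffusers’ PEFT integration stacks adapters with one scale each via set_adapters. A sketch with placeholder filenames; note this single scale is closest to `strength_model`, since the UNet/CLIP split isn’t exposed the same way:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Placeholder LoRA files; adapter_name lets you address each one later.
pipe.load_lora_weights("style.safetensors", adapter_name="style")
pipe.load_lora_weights("character.safetensors", adapter_name="character")

# One scale per adapter, mirroring the style 0.6 / character 0.9 starting band.
pipe.set_adapters(["style", "character"], adapter_weights=[0.6, 0.9])

image = pipe("cinematic portrait, 85mm, rim-light").images[0]
```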
Prompt syntax reminders: if you use prompt‑weighting tokens or syntax that refers to LoRA names (depending on your frontend), keep angle brackets in code ticks to avoid MDX parsing issues. Example: write `<lora:my_style:0.6>` in code when documenting.
Keep the base model consistent
Stacking LoRAs is easier when the base checkpoint is close to your target domain. If you’re doing photoreal portrait + cinematic grade, start from a photoreal base; don’t fight your checkpoint.
Save versions
After each configuration change (weights/order), save the graph and include the weights in the filename or in a Markdown note. You’ll thank yourself later.
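A tiny helper for that naming convention, purely illustrative:

```python
def graph_filename(weights: dict[str, tuple[float, float]]) -> str:
    """Encode each LoRA's (strength_model, strength_clip) into a filename."""
    def compact(x: float) -> str:
        return f"{x:.1f}".replace(".", "")  # 0.6 -> "06"
    parts = [f"{name}{compact(m)}-{compact(c)}" for name, (m, c) in weights.items()]
    return "multi-lora_" + "_".join(parts) + ".json"

print(graph_filename({"style": (0.6, 0.6), "char": (0.9, 0.7), "obj": (0.4, 0.4)}))
# -> multi-lora_style06-06_char09-07_obj04-04.json
```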
Composition strategies you can use today
Below are three strategies that cover most multi‑LoRA needs. They’re not mutually exclusive — treat them as patterns you can blend.
Strategy A — Style + Character (the classic)
Use one style LoRA for the “how it looks” and one character LoRA for “who it is.”
- Example weights: style 0.6 / 0.6, character 0.9 / 0.7
- Prompt idea: “cinematic portrait, shallow depth of field, rim‑light, award‑winning photography, 85mm”
- Negative prompt: “overexposed, extra fingers, watermark, text, logo”
- Notes: If style overwhelms identity, lower style to 0.4–0.5, or reduce `strength_clip` first.
Graph outline:
- Load Checkpoint → Load LoRA (style) → Load LoRA (character) → encoders → sampler → output
Strategy B — Object/Detail Layering (style + prop + environment)
Use a light object LoRA for a signature prop (e.g., a mechanical sword), add an environment LoRA (cyberpunk city), and keep a subtle style LoRA for color science.
- Example weights: object 0.4/0.4, environment 0.5/0.5, style 0.3/0.3
- Prompt idea: “hero holding a mechanical sword, neon streets, puddle reflections, rainy night”
- Tip: If the prop appears everywhere, it’s over‑activated — lower object to 0.2–0.3 or adjust prompt frequency.
Strategy C — Regional / Segmented Composition
Sometimes you want the face to carry a character LoRA while the outfit or background carries style/environment LoRAs. You can use region/latent coupling nodes to assign different LoRAs to regions.
- Tools: “Latent Couple” or region nodes (various community nodes implement region composition)
- Pattern:
- Region 1 (face): Character LoRA 0.9/0.7
- Region 2 (outfit): Style LoRA 0.6/0.5
- Region 3 (background): Environment LoRA 0.4/0.3
- Notes:
- Keep initial experiments simple (two regions) and raise complexity gradually
- Keep prompts coherent across regions to avoid seams
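If the regional nodes you need aren’t installed, a crude but honest fallback (not latent coupling, just post‑hoc compositing) is to render the same seed twice with different LoRA mixes and blend through a soft mask. A PIL sketch with placeholder filenames:

```python
from PIL import Image, ImageFilter

# Two renders at the same seed: one with the character LoRA dominant,
# one with the style/environment LoRAs dominant.
face_render = Image.open("render_character.png").convert("RGB")
style_render = Image.open("render_style.png").convert("RGB")

# A white-on-black face mask (hand-painted or from a detector),
# feathered so the seam stays soft.
mask = Image.open("face_mask.png").convert("L").filter(
    ImageFilter.GaussianBlur(radius=12))

# Take face_render where the mask is white, style_render elsewhere.
composite = Image.composite(face_render, style_render, mask)
composite.save("regional_composite.png")
```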
Community mention:
- Reddit discussions on regional application often show how users combine LoRAs with latent‑couple style nodes to keep faces clean while stylizing outfits/backgrounds.
Quick strategy comparison (cheat sheet)
| Strategy | What it’s best for | Typical Weights | Pros | Cons |
|---|---|---|---|---|
| Style + Character | Portraits, IP characters in a visual look | Style 0.4–0.7; Character 0.8–1.1 | Simple, reliable, fast to tune | Style can override identity; careful with CLIP strength |
| Object Layering | Signature props + scenes | Object 0.2–0.6; Env 0.4–0.7; Style 0.2–0.5 | Targeted control; fun for scenes | Props can “leak” into unintended areas |
| Regional Composition | Faces/clothes/background separation | Region‑specific per LoRA | High precision, local control | More setup; seams if prompts contradict |
Building the ComfyUI graph (step‑by‑step)
Here’s a reproducible flow you can adapt. The details will vary by your nodes, but the shape is the same.
- Load Checkpoint
  - Choose the base (SD 1.5 vs SDXL vs FLUX). Keep it consistent while iterating.
- Load LoRA nodes
  - Add one node per LoRA, wiring each into the model/clip chain. Most loaders expose two sliders: `strength_model` and `strength_clip`.
  - Start with style (0.6/0.6), then character (0.9/0.7), then object (0.4/0.4).
- Text encoders
  - Positive prompt: describe content and tone. Keep LoRA trigger words minimal — don’t spam.
  - Negative prompt: list common artifacts and off‑style elements.
- Sampler
  - DPM++ 2M Karras with 20–35 steps is a good baseline. Adjust CFG (5–8) based on style looseness.
- VAE / Output
  - Decode latents to image and save. If you use metadata embedding, include the LoRA names and weights (see the metadata sketch after this list).
- Optional: Regional nodes
  - If using “Latent Couple” (or similar): define regions, attach per‑region LoRA/conditioning, and verify boundaries with a simple test pattern.
- Save the graph
  - Name it with hints: `multi-lora_style06_char09_obj04.json` — future you will appreciate it.
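For the metadata step above, one portable option is writing the recipe into the PNG’s text chunks. A PIL sketch; the chunk key and recipe shape are arbitrary choices, not a standard:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

recipe = {
    "loras": [
        {"name": "style", "strength_model": 0.6, "strength_clip": 0.6},
        {"name": "character", "strength_model": 0.9, "strength_clip": 0.7},
        {"name": "prop", "strength_model": 0.4, "strength_clip": 0.4},
    ],
    "sampler": "dpmpp_2m", "scheduler": "karras", "steps": 28, "cfg": 6.5,
}

img = Image.open("output.png")  # placeholder path
meta = PngInfo()
meta.add_text("lora_recipe", json.dumps(recipe))
img.save("output_tagged.png", pnginfo=meta)
```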
Reference:
- ComfyUI Docs — Multiple LoRAs Example: https://docs.comfy.org/tutorials/basic/multiple-loras
- MyAIForce — Minimalist workflow for ComfyUI LoRAs: https://myaiforce.com/comfyui-lora/
Best practices that save hours
- One at a time first: Validate each LoRA solo at a few weights before stacking. You’ll learn its “character.”
- Lower CLIP first: If a LoRA dominates prompts, reduce `strength_clip` before reducing `strength_model`.
- Keep triggers short: Over‑prompting can create conflicts with CLIP guidance.
- Maintain a LoRA library: Tag LoRAs by type (style/char/object), family (SD 1.5/XL/FLUX), and recommended weights.
- Batch A/B tests: Fix the seed and sampler, then vary one weight at a time. Save 4‑up grids for comparison (see the grid sketch after this list).
- Know when to stop: If three LoRAs don’t cooperate, try merging (below) or drop one.
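For those A/B grids, a small PIL helper that tiles equal‑sized, same‑seed renders for side‑by‑side comparison (illustrative):

```python
from PIL import Image

def grid(paths: list[str], cols: int = 2) -> Image.Image:
    """Tile equal-sized images into a simple comparison grid."""
    images = [Image.open(p).convert("RGB") for p in paths]
    w, h = images[0].size
    rows = -(-len(images) // cols)  # ceiling division
    sheet = Image.new("RGB", (cols * w, rows * h), "black")
    for i, im in enumerate(images):
        sheet.paste(im, ((i % cols) * w, (i // cols) * h))
    return sheet

# e.g., four renders varying only the style weight: 0.3 / 0.5 / 0.7 / 0.9
grid(["s03.png", "s05.png", "s07.png", "s09.png"]).save("ab_style.png")
```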
Internal reading:
- LoRA training & publishing on Civitai (dataset/trigger tips): /blog/civitai-lora-training-guide
- ControlNet + extensions (for structure control alongside LoRAs): /blog/comfyui-controlnet-node-extensions-guide
- SDXL Best Practices (samplers/CFG/res notes): /blog/sdxl-best-practices-guide
Pitfalls and how to fix them (troubleshooting)
| Problem | Probable cause | Fix |
|---|---|---|
| Character identity lost | Style LoRA too strong, high CLIP strength | Lower style to 0.3–0.5; drop `strength_clip` first; strengthen character to 0.9–1.1 |
| Weird textures/halos | Two styles conflict in frequency/color | Disable one style or reduce both to 0.3–0.4; try a neutral base model |
| Object appears everywhere | Object LoRA over‑activated | Lower object to 0.2–0.3; reduce prompt emphasis; move object tokens later in prompt |
| Muddy outputs | Too many LoRAs or weights too high | Use max 2–3 LoRAs; drop weights into the 0.3–0.7 band; simplify prompt |
| Prompts feel ignored | CLIP strengths too high | Reduce `strength_clip` to 0.3–0.5; increase CFG slightly |
| High VRAM usage | Too many LoRAs / high res | Lower resolution, batch=1; disable a LoRA; use mixed precision |
| Inconsistent results across bases | Base mismatch | Use the recommended base family; document bases in your library |
Tip: Always keep a “control” render (no LoRAs) at the same seed for reference.
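If you script renders, the control run is just the same graph with every LoRA strength zeroed. A sketch against ComfyUI’s local /prompt endpoint, assuming you’ve exported your full graph (sampler and output nodes included) via “Save (API Format)”:

```python
import copy
import json
import urllib.request

# Load a graph exported via "Save (API Format)" in ComfyUI (placeholder name).
with open("multi-lora_style06_char09_obj04_api.json") as f:
    graph = json.load(f)

control = copy.deepcopy(graph)
for node in control.values():
    if node.get("class_type") == "LoraLoader":
        node["inputs"]["strength_model"] = 0.0
        node["inputs"]["strength_clip"] = 0.0

# Queue the control render on a locally running ComfyUI (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": control}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```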
Advanced: merging LoRAs
Stacking at inference time is flexible, but sometimes you want a single adapter that captures a blend that works. That’s where merging comes in.
Why merge?
- Simpler inference: one LoRA to load
- Lower conflict risk: inside the merged adapter, weights can be balanced once
- Sharing convenience: easier for others to use in any UI
How to merge in ComfyUI
- Use a LoRA merger node/extension. A popular option:
  - LoRA Power‑Merger for ComfyUI (GitHub): https://github.com/larsupb/LoRA-Merger-ComfyUI
- Typical flow:
  - Load the base model
  - Load LoRA A and LoRA B as inputs to the merger
  - Choose a merge method (add, weighted sum, etc.) and set factors (e.g., 0.6/0.4)
  - Output a new merged LoRA file
  - Test the merged LoRA alone at various weights
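Under the hood, simple “weighted sum” modes blend matching tensors. A naive safetensors sketch with placeholder filenames; it only handles LoRAs that share key names and ranks, and since each layer’s delta is a product of two of these tensors, a plain tensor lerp only approximates blending the deltas (rank mismatches need the SVD‑style handling that dedicated merger nodes provide):

```python
from safetensors.torch import load_file, save_file

a = load_file("style.safetensors")       # placeholder filenames
b = load_file("character.safetensors")
fa, fb = 0.6, 0.4                        # merge factors

merged = {}
for key in a.keys() & b.keys():          # only keys both adapters define
    if a[key].shape == b[key].shape:     # skip rank/shape mismatches
        merged[key] = fa * a[key] + fb * b[key]

save_file(merged, "merged_style_character.safetensors")
# Test the merged file alone, at several strengths, before sharing it.
```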
Research directions (for context, not required to implement)
- Mix‑of‑Show: decentralized LoRA adaptation for multiple concepts
- MultLFG (arXiv): training‑free Multi‑LoRA composition using frequency‑domain guidance — a signpost that better composition algorithms are arriving
References:
- LoRA Power‑Merger for ComfyUI — GitHub: https://github.com/larsupb/LoRA-Merger-ComfyUI
- MultLFG — arXiv: https://arxiv.org/abs/2505.20525
Frequently asked questions
- How many LoRAs can I stack?
  - As many as your VRAM and sanity allow, but 2–3 is where quality remains predictable. More than that requires careful tuning and often yields diminishing returns.
- Should I put LoRAs in the prompt like `<lora:my_style:0.6>` or use nodes?
  - In ComfyUI, use nodes for precision (you get `strength_model` and `strength_clip` separately). Prompt tokens are fine for reference and for other frontends, but nodes are clearer.
- Can I combine LoRAs with ControlNet?
  - Absolutely. Let LoRAs handle identity/style, and use ControlNet (pose/depth/edges) for structure. Start with modest strengths so they don’t fight.
- Does LoRA order matter?
  - Often not, but keep a consistent order for your own sanity and reproducibility.
Related guides on this site
- Civitai LoRA Training Guide
- Using ControlNet & Custom Node Extensions in ComfyUI
- SDXL Best Practices
- ComfyUI Portable vs Desktop
References and further reading
- Reddit — “Can you use several LORAs simultaneously? How do they work …” (r/StableDiffusion): https://www.reddit.com/r/StableDiffusion/comments/18dhvep/can_you_use_several_loras_simultaneously_how_do/
- MyAIForce — “Using (Multiple) LoRA in ComfyUI: A Minimalist Workflow”: https://myaiforce.com/comfyui-lora/
- ComfyUI Docs — “Multiple LoRAs Example”: https://docs.comfy.org/tutorials/basic/multiple-loras
- GitHub — “LoRA Power-Merger for ComfyUI”: https://github.com/larsupb/LoRA-Merger-ComfyUI
- arXiv — “Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models”: https://arxiv.org/abs/2305.18292
- arXiv — “MultLFG: Training-free Multi-LoRA composition using Frequency-domain Guidance”: https://arxiv.org/abs/2505.20525
Conclusion
Combining LoRAs multiplies your creative space, but it’s not just “turn everything to 1.0 and hope.” Treat each LoRA like an instrument: learn its voice solo, then mix at balanced volumes. Start with style + character, experiment with object layering, and reach for regional tools when you need surgical control. Document what works — weights, order, prompts — and save templates you can reuse.
When two or three LoRAs harmonize, you’ll feel it: identity is solid, style sings, props appear where you asked, and prompts still steer. And if you find a blend you love, consider merging to make it easy for future you (and for others) to use.
If you try these strategies, I’d love to see what you build — share your graph and a couple of A/B images, plus the weights that did the trick.