Stable Diffusion Prompt Builder

Build optimized prompts for Stable Diffusion with quality tags, negative prompts, and generation settings

Subject & Details

Style Modifiers

Negative Prompt

Common negative prompts are pre-filled. Customize based on your needs.

Generation Settings

Generated Prompts

Example prompt:
a beautiful landscape, mountains, lake, sunset, masterpiece, best quality, highly detailed

Example negative prompt:
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

What is Stable Diffusion?

Stable Diffusion is an open-source AI image generation model that runs locally or through various hosting services. Unlike closed models such as Midjourney or DALL-E, SD can be run on your own hardware and fine-tuned, giving you complete control over the generation process.

This builder helps you create optimized prompts with quality tags, negative prompts, and generation settings that work well with both SD 1.5 and SDXL models.

How to Use This Tool

1. Describe Your Image

Enter the subject and details. Add quality tags like "masterpiece" and "highly detailed" for better results.

2. Choose Style Modifiers

Select art style, lighting, and optionally an artist style to influence the visual output.

3. Customize Negative Prompt

The negative prompt excludes unwanted elements. Default values handle common issues like bad anatomy and artifacts.

4. Set Generation Parameters

Adjust steps, CFG scale, sampler, and resolution. Copy everything to use in Automatic1111, ComfyUI, or other SD interfaces.
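
If you prefer a scripted workflow instead of a GUI, the same prompt and settings can be passed to the Hugging Face diffusers library. This is a minimal sketch, assuming diffusers and torch are installed and a CUDA GPU is available; the model ID and parameter values are illustrative, not something the tool generates.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load an SD 1.5 checkpoint (illustrative model ID) in half precision on the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a beautiful landscape, mountains, lake, sunset, masterpiece, best quality, highly detailed"
    negative = "lowres, bad anatomy, bad hands, text, error, cropped, worst quality, low quality, jpeg artifacts, watermark, blurry"

    image = pipe(
        prompt,
        negative_prompt=negative,
        num_inference_steps=25,                              # Steps
        guidance_scale=7.5,                                  # CFG scale
        width=512, height=768,                               # SD 1.5-friendly resolution
        generator=torch.Generator("cuda").manual_seed(42),   # Seed (fixed for reproducibility)
    ).images[0]
    image.save("landscape.png")

The same four knobs appear under slightly different names in Automatic1111 and ComfyUI; only the way you enter them changes.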

Key Parameters Explained

Steps

Number of denoising steps. 20-30 is usually enough. Higher values take longer but may add detail.

CFG Scale

How closely the model follows the prompt. 7-12 is typical. Higher values follow the prompt more literally but can oversaturate the image.

Sampler

The algorithm for generation. Euler a and DPM++ 2M are popular. Different samplers produce different aesthetics.

Seed

Random seed for reproducibility. The same seed with the same prompt and settings produces the same image. Use -1 for a random seed.
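
In code, Sampler and Seed map onto a scheduler object and a generator. A small sketch, assuming the diffusers pipeline from the example above; the scheduler classes below are the diffusers counterparts of Automatic1111's "Euler a" and "DPM++ 2M".

    import torch
    from diffusers import EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler

    # Swap the sampler: "Euler a" ...
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
    # ... or "DPM++ 2M"
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

    # A fixed seed makes the run reproducible; omit the generator for a random seed.
    generator = torch.Generator("cuda").manual_seed(123456)
    image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5, generator=generator).images[0]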

Pro Tip: Token Weighting

Use (word:1.3) to increase emphasis or (word:0.7) to decrease it. For example, "(blue eyes:1.4), (red hair:0.8)" emphasizes eyes more than hair. Works in most SD interfaces.

Important: SD 1.5 vs SDXL

SD 1.5 works best at 512×512 or 512×768, while SDXL is designed for 1024×1024. Using the wrong resolution for the model can cause artifacts or duplicated subjects.

Frequently Asked Questions

What's the difference between SD 1.5 and SDXL?

SDXL is a newer, larger model that produces higher quality images at higher resolutions. It requires more VRAM (8GB+ recommended) but has better composition and text rendering.

How do I run Stable Diffusion locally?

Popular options include Automatic1111 WebUI, ComfyUI, and Fooocus. You'll need a GPU with at least 4GB VRAM (8GB+ for SDXL). There are also cloud services like RunPod and Replicate.
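
For a code-first alternative to those GUIs, the diffusers library can also run SDXL directly. A minimal sketch, assuming diffusers, torch, and a GPU with enough VRAM; the model ID is the public SDXL base checkpoint.

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # SDXL is trained around 1024x1024, so stay close to that resolution.
    image = pipe(
        "a beautiful landscape, mountains, lake, sunset, highly detailed",
        negative_prompt="lowres, blurry, watermark",
        num_inference_steps=25,
        guidance_scale=7.0,
        width=1024, height=1024,
    ).images[0]
    image.save("landscape_xl.png")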

Why use negative prompts?

Negative prompts help avoid common issues like bad hands, distorted faces, watermarks, and low quality. They're especially important for photorealistic and portrait images.

Related Tools

Negative Prompts Library

Browse curated negative prompts for different use cases.

Prompt Weights

Learn and test token weighting syntax.

Style Reference

Explore different art styles and artist keywords.

ControlNet Poses

Reference poses for ControlNet OpenPose.