
🖼️ AI Image generation

Tools to run existing models

Sites to get existing workflows and LoRAs

Good models

  • Wan 2.2

    • I2V = Image to Video
    • T2V = Text to Video
  • Flux dev

    • T2I = Text to Image
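
As a minimal sketch of running one of these models, Flux dev can be loaded for text-to-image with Hugging Face diffusers. The model id, dtype, and parameter values below are assumptions based on the public release, so check the actual model card before using them:

```python
import torch
from diffusers import FluxPipeline

# Load Flux dev for text-to-image (T2I). Model id and dtype are assumptions
# based on the public Hugging Face release; adjust to the checkpoint you use.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    prompt="a watercolor painting of a lighthouse at sunset",
    num_inference_steps=28,   # placeholder step count
    guidance_scale=3.5,       # Flux is typically run with lower guidance than SD
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_t2i.png")
```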

Configurations that can really improve generation quality

  • Steps = Number of denoising iterations.

    • The ideal value varies by model; in general it works like this:
      • Low -> fast, but less detail.
      • Medium -> sweet spot.
      • High -> can improve realism, but diminishing returns (and slower). Too high sometimes makes images look "overcooked."
    • The usable range runs from 1 up to a model-specific maximum; check the model's recommended settings (a full example follows at the end of this list).
  • Sampler = The algorithm that controls how noise is removed step by step.

    • Euler a -> fast, creative, less deterministic.
    • DPM++ 2M Karras -> smooth, detailed, highly recommended.
    • DDIM -> older, predictable, fast for testing.
  • Seed = The random number that initializes noise.

    • Same seed + prompt + config = exact same image.
    • Changing seed = different variation.
    • Good for reproducing or exploring variations systematically.
  • CFG = Classifier-Free Guidance.

    • Used to control how strongly the model follows your text prompt versus just producing random plausible images.
    • The ideal value varies by model; in general it works like this:
      • Low CFG -> The model has a lot of freedom.
        • Output may look more "natural" but not match your text closely.
      • Medium CFG -> Balanced. This is the sweet spot for most models.
      • High CFG -> Model is forced to follow the prompt exactly.
        • Can cause overbaked, distorted, or ugly results (extra fingers, harsh edges).
    • The usable range depends on the model; for Stable Diffusion, values around 5-8 are a common starting point.
  • Negative Prompt = As the name says, it defines what you don’t want in the image.

    • Super important for Stable Diffusion!
    • Example prompts: low quality, blurry, bad anatomy, extra fingers, text, watermark
      • It is common to find negative prompts written in Chinese characters; usual explanations include:
        • Training Artifacts
        • Prompt Embeddings
        • Copy-Paste Culture from CivitAI
        • "Noise" Tokens for Prompt Weighting

How generative models work

Next step