
Image generation



Let's dive into the settings of image generation models like Fooocus and Juggernaut. Most of these models are based on Stable Diffusion.

Fooocus and other SD nodes

  • Prompt: Describe what you want the image to look like, including content and style.

  • Priority Queue: Set the order for processing requests.

  • Negative Prompt: Specify what to avoid in the image to reduce mistakes like extra fingers.

  • Style Selections: Choose the look or theme you want. Check GitHub for community examples (see the links at the bottom of this page).

  • Performance Selections: Adjust settings to balance speed and quality.

  • Aspect Ratios: Choose the shape of the image.

  • Image Number: Decide how many images you want to create.

  • Image Seed: Use this for consistent results. The default is -1, but you can use specific numbers.

  • Loras Custom URLs: Add external URLs for specific styles.

  • Sharpness: Adjust how clear and detailed the image is.

  • Guidance Scale: Balance between following the prompt and creative freedom.

  • Refiner Switch: Choose whether to use a refining model to improve quality after generation.

  • Uov Input Image: Select an image for transformations or guidance.

  • Uov Upscale Value: Increase the image resolution.

  • Inpaint Additional Prompt: Give extra details for filling in masked areas.

  • Inpaint Input Image: The base image to be modified.

  • Inpaint Input Mask: The mask defining which areas to change.

  • Inpaint Strength: Control how much the inpainting follows the prompt.

  • Outpaint Selections: Settings for extending the image beyond its original borders.

  • Outpaint Distance: Specify how much to extend the image in each direction (left, top, right, bottom).

  • Cn Prefix Fields: ControlNet-style fields where you upload reference images to mix and match, or use masks to add elements.

You can apply similar concepts to other Stable Diffusion-based models such as Juggernaut XL.
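To see how these fields fit together, here is a minimal sketch of a Fooocus-style settings payload. The key names below simply mirror the list above and are illustrative; check the node's 'View Source' to see the exact names the node expects.

```python
# Illustrative Fooocus-style settings payload.
# Key names mirror the field list above and may differ from the exact
# keys the node uses -- treat this as a sketch, not the node's schema.
fooocus_settings = {
    "prompt": "studio photo of a ceramic coffee mug, soft daylight",
    "negative_prompt": "blurry, extra fingers, watermark",
    "style_selections": ["Fooocus V2", "Fooocus Photograph"],
    "performance_selection": "Quality",     # speed vs. quality trade-off
    "aspect_ratios_selection": "1152*896",  # shape of the output image
    "image_number": 2,                      # how many images to create
    "image_seed": -1,                       # -1 = random; fix a number for repeatable results
    "sharpness": 2.0,
    "guidance_scale": 4.0,                  # higher = follow the prompt more strictly
    "refiner_switch": 0.5,                  # when to hand off to the refiner model
}
```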

DALL·E node

Let’s look at the DALL·E node fields:

  • Prompt: Describe what you want the image to look like, including content and style.

  • Model: Choose the model to create the image.

  • N: Decide how many images to generate for comparison or selection.

  • Quality: Choose the resolution or quality of the image.

  • Response Format: Decide how the generated image will be returned.

  • Size: Set the dimensions of the image.

  • Style: Choose the artistic style or theme.

  • Image: Provide an input image for transformation or as a base for new variations.

  • Mask: Define which parts of the image to change.

  • Operation: Start the image creation process based on your specifications.
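
These fields map directly onto the OpenAI Images API, so a minimal sketch with the official openai Python package (assuming an OPENAI_API_KEY in your environment, and an example prompt of our own) looks like this:

```python
# Minimal sketch of the DALL·E fields above using the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the prompt is just an example.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",        # Model
    prompt="a promo card for a ceramic coffee mug, flat illustration",  # Prompt
    n=1,                     # N (DALL·E 3 accepts only one image per request)
    quality="standard",      # Quality: "standard" or "hd"
    size="1024x1024",        # Size
    style="vivid",           # Style: "vivid" or "natural"
    response_format="url",   # Response Format: "url" or "b64_json"
)

print(result.data[0].url)
```

The Image and Mask fields correspond to the image-edit operation (client.images.edit), which takes a base image plus a mask marking the areas to change; variations of an existing image without a prompt go through client.images.create_variation.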

Community style examples on GitHub:
https://github.com/lllyasviel/Fooocus/discussions/2082
https://github.com/lllyasviel/Fooocus/discussions/143