Hugging Face Text to Image (Prompt)

Detailed Configuration Options
model_endpoint: (Required) The endpoint of the model used for image generation; API requests for image generation are sent here. Providing just a model name resolves it against the default base URL https://api-inference.huggingface.co, while a full URL, such as 'https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-2-1', overrides the default.
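As a minimal sketch, assuming the option is set in a YAML configuration file (the key name comes from this page; the surrounding file structure is illustrative), the two forms might look like this:

```yaml
# Short form: a bare model name is resolved against the default base URL
# https://api-inference.huggingface.co
model_endpoint: 'stabilityai/stable-diffusion-2-1'

# Long form (alternative): a full URL overrides the default base URL
# model_endpoint: 'https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-2-1'
```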

asset_folder: (Required) The folder where generated images are saved, for example '/AI Generated'.

prompt_template: (Optional) A Twig template that builds the prompt sent to the image generation model by combining the input fields into a coherent description. The selected element is passed to the template as "subject". If left empty, the user has to enter the initial prompt manually.
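A rough sketch, again assuming YAML configuration; note that the getName() getter on the selected element is an assumption for illustration, not something this page guarantees:

```yaml
# "subject" is the selected element; getName() is an assumed getter
prompt_template: 'A photorealistic studio photo of {{ subject.getName() }}, soft lighting'
```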

filename_template: (Optional) A Twig template to generate the filename dynamically.
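A minimal sketch; whether variables such as "subject" are also available in this template is not stated here, so the example below relies only on Twig's built-in date filter:

```yaml
# Builds a timestamped filename; file extension handling is assumed
# to be done by the integration
filename_template: "ai-image-{{ 'now'|date('YmdHis') }}"
```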

parameters: (Optional) Contains additional parameters for the image generation process:

height: (Optional) Specifies the height of the generated image in pixels.
width: (Optional) Specifies the width of the generated image in pixels.
negative_prompt: (Optional) A Twig template that specifies descriptions to avoid in the generated images.
guidance_scale: (Optional) Determines how closely the generated image should adhere to the prompt as opposed to the model's own creativity.
num_inference_steps: (Optional) Sets the number of steps the model undergoes to refine the generated image. Higher values can lead to more detailed images.
options: (Optional) Contains additional options for the image generation process:

use_cache: (Optional, default: true) Reuses previously generated images for identical requests to accelerate response times. Setting this to false ensures a new image is generated for each request, enhancing uniqueness but potentially increasing wait times.

These parameters and options correspond to those of the Hugging Face Serverless Inference API.
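Putting it all together, a complete configuration might look like the following minimal sketch (the key names are taken from this page; the values, the nesting, and the getName() getter are illustrative assumptions):

```yaml
model_endpoint: 'stabilityai/stable-diffusion-2-1'
asset_folder: '/AI Generated'
prompt_template: 'A photorealistic image of {{ subject.getName() }}'
filename_template: "ai-image-{{ 'now'|date('YmdHis') }}"
parameters:
    height: 768                # image height in pixels
    width: 768                 # image width in pixels
    negative_prompt: 'blurry, low quality, watermark'
    guidance_scale: 7.5        # higher values follow the prompt more strictly
    num_inference_steps: 50    # more steps can add detail but slow generation
options:
    use_cache: false           # always generate a fresh image
```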