
Terminology

This section provides definitions and details for some terminology and parameters to be aware of when using the Firefly APIs and documentation.

Seed

The purpose of a seed is to give a starting point for image generation. Each API supports an optional array of seed values that provides generation stability across multiple API calls (for example, you can use the same seed to generate a similar image with different styles).

Only generated images can be used as seeds, and you can locate the seed value for any generated image in the outputs array of a successful response, for example:

"outputs": [
{
"seed": 1,
"image": {
"url": "https://image1.com/"
}
},
{
"seed": 1234,
"image": {
"url": "https://image2.com/"
}
}
]
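
To reuse those values, pass them back in a later request. The fragment below is only a sketch: it assumes a seeds array sits alongside the prompt, as described above, and the exact field names may vary by API.

{
  "prompt": "a watercolor painting of a lighthouse at sunset",
  "seeds": [1, 1234]
}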

Prompt

Instructions given to generate an output. Write descriptive prompts to generate specifically what you want — if you don't like the results, reword your prompt to get closer to what you want. See Writing Effective Prompts for more details.

Negative prompt

Instructs the model NOT to include certain elements in its generated image that it might otherwise assume. The model makes a best effort to avoid these words in the generated content. More details here.
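
As an illustration, a request might pair a prompt with a negative prompt like the fragment below. This is a sketch only; the negativePrompt field name is an assumption to verify against the specific API reference.

{
  "prompt": "a cozy living room with a fireplace",
  "negativePrompt": "people, text, clutter"
}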

Mask

With image masking, you can "conceal and reveal": you hide portions of your image and display other portions when editing it. An image mask is like putting a cover over the parts of a picture you want to protect or hide, while exposing the other areas for editing.

When creating an image mask, use black or white depending on the results you're trying to achieve. A tip to remember is that black conceals and white reveals, thus you will want to use black on the parts you want to hide from being edited, and white on those that can be changed. For example, using the image and mask below will preserve the perfume bottle content outlined in black.
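
For context, an editing request typically supplies the source image and its mask together. The shape below is only a sketch; the image, mask, and source field names and the use of URLs are assumptions, and the actual structure depends on the specific API.

{
  "prompt": "a marble countertop background",
  "image": {
    "source": { "url": "https://example.com/perfume.png" }
  },
  "mask": {
    "source": { "url": "https://example.com/perfume-mask.png" }
  }
}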

Inverted mask

Inverting a mask swaps which area of the image is masked (i.e., the black and white areas are reversed). For instance, inverting the perfume image mask above would preserve the background of the image instead.

MD (Multi Diffusion)

With multi diffusion, the model processes an image using its surrounding context. For example, in the case of Generative Fill or Generative Expand, the model needs to know what the rest of the image contains before generating new content.

Reference Image

A sample image provided to be used as a reference while generating image results (such as the image parameter in the Generate Similar API, or the style and structure.imageReference parameters in Generate Images).

Content class

Guides the overall image theme and styles that can be applied on top of each content type (e.g., photo, art). If the parameter is not specified, it will be auto-detected.
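
An illustrative fragment, assuming the contentClass parameter is accepted alongside the prompt:

{
  "prompt": "a fox resting in a snowy forest",
  "contentClass": "photo"
}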

Style

Use the style parameter to generate an image based on a preset value, or on the look and feel of a reference image. Firefly is influenced either by the preset style value when present or by the style it detects in the supplied image, and applies that style to the generated image. A style can be specified with preset styles (such as photo, art, graphic, bw), a reference image, or both.

Parameter Options

  • presets: a list of style presets to be applied to generated content.
  • source: presigned URL of the image to use for style matching.
  • strength: the intensity with which to apply the styles (1..100).
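
Putting those options together, a style object might look like the fragment below. This is a sketch built from the options above; the exact placement and names of these fields can differ between API versions.

"style": {
  "presets": ["bw"],
  "source": "https://example.com/style-reference.png",
  "strength": 60
}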

Structure

Firefly detects the structure of the image supplied in the structure parameter and applies the same structure to the generated image. Structure refers to an image's composition: how the visual elements and subjects are arranged within the frame. Outline and depth are the essential aspects Firefly considers when matching the structure of a reference image. A structure reference image is more specific than a content reference image, in that only the structure is carried over.

Example:

A car driving in the middle of a desert
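
A sketch of how such a request might combine that prompt with a structure reference; the structure.imageReference shape and strength field are assumptions based on the Generate Images parameters mentioned above.

{
  "prompt": "A car driving in the middle of a desert",
  "structure": {
    "imageReference": {
      "source": { "url": "https://example.com/structure-reference.png" }
    },
    "strength": 80
  }
}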

Dimensions

Specifies the dimensions of the generated image via a size parameter in the APIs. Valid dimensions:

Non-upsampled:

Width    Height    Description
1024     1024      Square
1152     896       Landscape
896      1152      Portrait
1344     768       Widescreen

Upsampled (2x):

Width    Height    Description
2048     2048      Square
2304     1792      Landscape
1792     2304      Portrait
2688     1536      Widescreen

Tileable

Generates results that are repeated patterns, like tiles in a single image. An image is tileable if it can be repeated infinitely in any direction without showing visible seams or edges. The default for tileable is false.
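
A sketch of enabling it, assuming the boolean parameter is named tileable as described above:

{
  "prompt": "a seamless pattern of autumn leaves",
  "tileable": true
}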

Locale Based Prompt Bias

Including the promptBiasingLocaleCode parameter, where it's supported, generates content more relevant to the specified region.
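
For example (a sketch; the locale code follows the usual language-REGION convention):

{
  "prompt": "a traditional breakfast spread",
  "promptBiasingLocaleCode": "ja-JP"
}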

Visual Intensity

The visualIntensity parameter adjusts the overall intensity of your photo's existing visual characteristics. Valid values are 2..10.
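
An illustrative fragment, assuming visualIntensity sits alongside the prompt in the request body:

{
  "prompt": "a portrait of a cyclist at golden hour",
  "visualIntensity": 8
}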

Placement

The placement object adjusts how the image is positioned and sized in the final generation. You can specify inset only, alignment only, or neither (omit the object entirely).

"placement": {
"alignment": {
"horizontal": "left",
"vertical": "top"
}
}
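
Alternatively, an inset can be used instead of alignment. The fragment below is only a sketch; the inset edge names (left, top, right, bottom, in pixels) are assumptions to check against the specific API reference.

"placement": {
  "inset": {
    "left": 50,
    "top": 0,
    "right": 50,
    "bottom": 100
  }
}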

Inpainting

The process of filling in a designated region of the visual input.

Outpainting

Expands the borders of an image using generative AI.
