a1111-webui-api
v1.9.30001
TypeScript client for Automatic1111 Stable Diffusion WebUI
Stable Diffusion API
A TypeScript API client for the AUTOMATIC1111/stable-diffusion-webui API that is actively maintained.
Note: The version of this package tracks the version of the A1111-WebUI API it supports, and the last 4 digits represent the patch version of this package.
Format: [MAJOR].[MINOR].[A1111-WebUI-API][PATCH]
If you want to use a different version of the API, you can install a different version of this package.
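To make the versioning scheme concrete, here is a small sketch (a hypothetical helper, not part of the package API) that splits a version like v1.9.30001 into the supported A1111-WebUI API version and the package's own patch digits:

```typescript
// Hypothetical helper illustrating the version format above: everything
// before the last 4 digits is the supported A1111-WebUI API version,
// and the last 4 digits are this package's patch version.
function parseVersion(version: string): { apiVersion: string; packagePatch: string } {
  const [major, minor, rest] = version.replace(/^v/, "").split(".");
  return {
    apiVersion: `${major}.${minor}.${rest.slice(0, -4)}`, // e.g. "1.9.3"
    packagePatch: rest.slice(-4), // e.g. "0001"
  };
}

console.log(parseVersion("v1.9.30001")); // { apiVersion: "1.9.3", packagePatch: "0001" }
```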
Pull requests are welcome!
Installation
# Install using npm
npm install a1111-webui-api
# Install using yarn
yarn add a1111-webui-api
# Install using pnpm
pnpm add a1111-webui-api
Usage
Instantiation
import StableDiffusionApi from "a1111-webui-api";

// Instantiate with default options
const api = new StableDiffusionApi();

// ...or with explicit connection and generation defaults
const api = new StableDiffusionApi({
  host: "localhost",
  port: 7860,
  protocol: "http",
  defaultSampler: "Euler a",
  defaultStepCount: 20,
});

// ...or with a base URL
const api = new StableDiffusionApi({
  baseUrl: "http://localhost:7860",
});
Authentication
Use the --api-auth command line argument with "username:password" on the server to enable API authentication.
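For example, the server might be launched like this (the launcher script name depends on your setup; --api enables the WebUI's API):

```shell
# Start stable-diffusion-webui with the API enabled and protected by
# basic auth; replace the credentials with your own.
./webui.sh --api --api-auth "username:password"
```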
api.setAuth("username", "password");
txt2img
const result = await api.txt2img({
  prompt: "An AI-powered robot that accidentally starts doing everyone's job, causing chaos in the workplace.",
  ...
})
result.image.toFile('result.png')
img2img
import sharp from "sharp";

const image = sharp('image.png')
const result = await api.img2img({
  init_images: [image],
  prompt: "Man, scared of AGI, running away on a burning lava floor.",
  ...
})
result.image.toFile('result.png')
ControlNet Extension API usage
- To use the ControlNet API, you must have the ControlNet extension installed in your stable-diffusion-webui instance.
- You must also have the desired ControlNet models installed in the extension's models directory.
Get models and modules
To get a list of all installed ControlNet models and modules, use the api.ControlNet.getModels() and api.ControlNet.getModules() methods.
const models = await api.ControlNet.getModels();
const modules = await api.ControlNet.getModules();
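Because the installed models vary per setup, it can be useful to check that a model is available before building a request. The helper below is a hypothetical sketch operating on the string list that getModels() returns (model names shown are examples):

```typescript
// Hypothetical helper: find an installed ControlNet model by name prefix,
// since installed names typically carry a hash suffix like "[fef5e48e]".
function findModel(models: string[], prefix: string): string | undefined {
  return models.find((name) => name.startsWith(prefix));
}

// In practice the list would come from: await api.ControlNet.getModels()
const models = ["control_sd15_canny [fef5e48e]", "control_sd15_depth [fef5e48e]"];
console.log(findModel(models, "control_sd15_depth")); // "control_sd15_depth [fef5e48e]"
```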
ControlNetUnit
To make use of the ControlNet API, you must first instantiate a ControlNetUnit object, in which you specify the ControlNet model and preprocessor to use. Then pass the unit as an array in the controlnet_units argument of the txt2img or img2img methods.
It's also possible to use multiple ControlNet units in the same request. For good results, it's recommended to use a lower weight for each unit by setting the weight argument to a lower value.
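As a sketch of the multi-unit setup (the canny model name below is illustrative; check getModels() for what's actually installed), the option objects for two half-weight units might look like this:

```typescript
// Sketch: options for two ControlNet units at reduced weight, so that
// neither unit dominates the combined result. Model names are examples.
const depthOptions = {
  model: "control_sd15_depth [fef5e48e]",
  module: "depth",
  weight: 0.5,
};
const cannyOptions = {
  model: "control_sd15_canny [fef5e48e]", // illustrative model name
  module: "canny",
  weight: 0.5,
};

// Each object would be wrapped as `new ControlNetUnit(options)` and both
// units passed together via: api.txt2img({ prompt, controlnet_units: [...] })
const combinedWeight = depthOptions.weight + cannyOptions.weight;
console.log(combinedWeight); // 1
```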
import sharp from "sharp";

const image = sharp("image.png");

const controlNetUnit = new ControlNetUnit({
  model: "control_sd15_depth [fef5e48e]",
  module: "depth",
  input_images: [image],
  processor_res: 512,
  threshold_a: 64,
  threshold_b: 64,
});

const result = await api.txt2img({
  prompt: "Young lad laughing at all artists putting hard work and effort into their work.",
  controlnet_units: [controlNetUnit],
});

result.image.toFile("result.png");

// To access the preprocessing result, you can use the following:
const depth = result.images[1];
depth.toFile("depth.png");
detect
Uses the selected ControlNet preprocessor module to predict a detection on the input image. To make use of the detection result, use the model of choice in the txt2img or img2img methods without a preprocessor enabled (use "none" as the preprocessor module).
This comes in handy when you just want a detection result without generating a whole new image.
import sharp from "sharp";

const image = sharp("image.png");
const result = await api.ControlNet.detect({
  controlnet_module: "depth",
  controlnet_input_images: [image],
  controlnet_processor_res: 512,
  controlnet_threshold_a: 64,
  controlnet_threshold_b: 64,
});
result.image.toFile("result.png");