# sd-api-ts
API translation for Automatic1111 Stable Diffusion WebUI
It's based on https://github.com/jaschahuisman/sd-api, with some fixes for img2img.
## How to publish

```bash
npm run build
npm publish --access public
```
# Stable Diffusion API

A TypeScript API client for the AUTOMATIC1111/stable-diffusion-webui API, inspired by the Python library webuiapi.
## Requisites

- To use this API client, you have to run `stable-diffusion-webui` with the `--api` command line argument (see the launch example below).
- Optionally, you can add `--nowebui` to disable the web interface.
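For example, on Linux/macOS the server could be launched like this (a sketch assuming the standard `webui.sh` launch script; on Windows the flags usually go into `COMMANDLINE_ARGS` in `webui-user.bat`):

```bash
# Start stable-diffusion-webui with the API enabled and the web UI disabled
./webui.sh --api --nowebui
```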
## Installation

```bash
npm install stable-diffusion-api
```

or

```bash
yarn add stable-diffusion-api
```
## Usage

### Instantiation

```typescript
import StableDiffusionApi from "stable-diffusion-api";
```

Create a client with the default options:

```typescript
const api = new StableDiffusionApi();
```

or with explicit connection options and generation defaults:

```typescript
const api = new StableDiffusionApi({
  host: "localhost",
  port: 7860,
  protocol: "http",
  defaultSampler: "Euler a",
  defaultStepCount: 20,
});
```

or with a base URL:

```typescript
const api = new StableDiffusionApi({
  baseUrl: "http://localhost:7860",
});
```
### Authentication

Use the `--api-auth` command line argument with `"username:password"` on the server to enable API authentication.

```typescript
api.setAuth("username", "password");
```
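For reference, a sketch of starting the server with authentication enabled (the exact launch command depends on your setup):

```bash
./webui.sh --api --api-auth username:password
```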
### txt2img

```typescript
const result = await api.txt2img({
  prompt:
    "An AI-powered robot that accidentally starts doing everyone's job, causing chaos in the workplace.",
  // ...
});

// `result.image` is a sharp instance
result.image.toFile("result.png");
```
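The `txt2img` options mirror the payload of the WebUI's `/sdapi/v1/txt2img` endpoint. A minimal sketch with a few common options (assuming the option names match the endpoint's fields, e.g. `negative_prompt`, `steps`, `cfg_scale`, `width`, `height`, `seed`):

```typescript
const result = await api.txt2img({
  prompt: "A watercolor painting of a lighthouse at dawn",
  negative_prompt: "blurry, low quality", // assumed field name, as in the WebUI payload
  steps: 30,
  cfg_scale: 7,
  width: 768,
  height: 512,
  seed: 42,
});

result.image.toFile("lighthouse.png");
```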
### img2img

```typescript
import sharp from "sharp";

const image = sharp("image.png");

const result = await api.img2img({
  init_images: [image],
  prompt: "Man, scared of AGI, running away on a burning lava floor.",
  // ...
});

result.image.toFile("result.png");
```
*(Example images: input and result.)*
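A common knob for img2img is the denoising strength, which controls how far the result may drift from the input image. A hedged sketch (assuming the option name `denoising_strength`, matching the WebUI's `/sdapi/v1/img2img` payload):

```typescript
const base = sharp("photo.png");

const result = await api.img2img({
  init_images: [base],
  prompt: "The same scene, repainted as an oil painting",
  denoising_strength: 0.55, // 0 keeps the input unchanged, 1 ignores it almost entirely
});

result.image.toFile("oil_painting.png");
```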
## ControlNet Extension API usage

- To use the ControlNet API, you must have the ControlNet extension installed in your `stable-diffusion-webui` instance.
- It's also necessary to have the desired ControlNet models installed in the extension's models directory.
### Get models and modules

To get a list of all installed ControlNet models and modules, use the `api.ControlNet.getModels()` and `api.ControlNet.getModules()` methods.

```typescript
const models = await api.ControlNet.getModels();
const modules = await api.ControlNet.getModules();
```
### ControlNetUnit

To use the ControlNet API, first instantiate a `ControlNetUnit` object in which you specify the ControlNet model and preprocessor module to use. Then pass the unit in an array as the `controlnet_units` argument of the `txt2img` or `img2img` methods.

It's also possible to use multiple ControlNet units in the same request. To get good results, it's recommended to lower the `weight` argument of each unit (a sketch with two units follows the example below).
```typescript
import sharp from "sharp";
import StableDiffusionApi, { ControlNetUnit } from "stable-diffusion-api";

const api = new StableDiffusionApi();
const image = sharp("image.png");

const controlNetUnit = new ControlNetUnit({
  model: "control_sd15_depth [fef5e48e]",
  module: "depth",
  input_images: [image],
  processor_res: 512,
  threshold_a: 64,
  threshold_b: 64,
});

const result = await api.txt2img({
  prompt:
    "Young lad laughing at all artists putting hard work and effort into their work.",
  controlnet_units: [controlNetUnit],
});

result.image.toFile("result.png");

// The preprocessing (detection) result is returned as an additional image:
const depth = result.images[1];
depth.toFile("depth.png");
```
*(Example images: input, result, and depth map.)*
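As mentioned above, multiple units can be combined in a single request with lower `weight` values so they don't overpower each other. A minimal sketch (model names are placeholders; pick real ones from `api.ControlNet.getModels()`):

```typescript
import sharp from "sharp";
import StableDiffusionApi, { ControlNetUnit } from "stable-diffusion-api";

const api = new StableDiffusionApi();
const image = sharp("image.png");

const depthUnit = new ControlNetUnit({
  model: "control_sd15_depth [fef5e48e]", // placeholder model name
  module: "depth",
  input_images: [image],
  weight: 0.6, // lower weights are recommended when stacking units
});

const cannyUnit = new ControlNetUnit({
  model: "control_sd15_canny", // placeholder model name
  module: "canny",
  input_images: [image],
  weight: 0.6,
});

const result = await api.txt2img({
  prompt: "A cozy cabin in a snowy forest at night",
  controlnet_units: [depthUnit, cannyUnit],
});

result.image.toFile("stacked.png");
```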
### detect

Uses the selected ControlNet preprocessor module to run a detection on the input image. To make use of the detection result, pass it to `txt2img` or `img2img` with the model of your choice and the preprocessor disabled (use `"none"` as the preprocessor module); a sketch of this follows the example below.

This comes in handy when you just want a detection result without generating a whole new image.
```typescript
import sharp from "sharp";

const image = sharp("image.png");

const result = await api.ControlNet.detect({
  controlnet_module: "depth",
  controlnet_input_images: [image],
  controlnet_processor_res: 512,
  controlnet_threshold_a: 64,
  controlnet_threshold_b: 64,
});

result.image.toFile("result.png");
```
*(Example images: input and detection result.)*
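A hedged sketch of the follow-up step described above: feeding the detection result back into `txt2img` with the preprocessor disabled (`module: "none"`), so the already-detected map is used directly. The model name is a placeholder.

```typescript
// `result` is the detection result from api.ControlNet.detect() above
const depthMap = result.image;

const unit = new ControlNetUnit({
  model: "control_sd15_depth [fef5e48e]", // placeholder; pick one from api.ControlNet.getModels()
  module: "none",                         // preprocessor disabled: the input is already a depth map
  input_images: [depthMap],
});

const generated = await api.txt2img({
  prompt: "A futuristic city skyline at dusk",
  controlnet_units: [unit],
});

generated.image.toFile("from_detection.png");
```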