@simplesagar/fireworksai
v0.23.3
Generative AI For Product Innovation!
The FireworksAI TypeScript library provides convenient access to the Fireworks REST API from any TypeScript or JavaScript application. The library includes type definitions for all request params and response fields, with HTTP requests powered by `fetch`.
[!WARNING]
This is an example SDK that is not yet ready for production usage.
Summary
Fireworks REST API: REST API for performing inference on Fireworks large language models (LLMs).
Table of Contents
- SDK Installation
- Requirements
- SDK Example Usage
- Available Resources and Operations
- Standalone functions
- Server-sent event streaming
- File uploads
- Retries
- Error Handling
- Server Selection
- Custom HTTP Client
- Authentication
- Debugging
SDK Installation
The SDK can be installed with the npm, pnpm, bun, or yarn package managers.
NPM
```shell
npm add @simplesagar/fireworksai
```
PNPM
```shell
pnpm add @simplesagar/fireworksai
```
Bun
```shell
bun add @simplesagar/fireworksai
```
Yarn
```shell
yarn add @simplesagar/fireworksai zod

# Note that Yarn does not install peer dependencies automatically. You will need
# to install zod as shown above.
```
Requirements
For supported JavaScript runtimes, please consult RUNTIMES.md.
SDK Example Usage
Example
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
Available Resources and Operations
audio
- createTranscription - Transcribe Audio to Text
- createTranslation - Translate Audio to Text
chat
chat.completions
- create - Create Chat Completion
completions
- create - Create Completion
embeddings
- create - Create embeddings
images
- generateFromPrompt - Generate a new image from a text prompt
- generateFromImage - Generate a new image from an image
- cannyEdgeDetection - Pre-process the image with canny edge detection
- generateFromControlNet - Generate a new image using ControlNet with provided image as a guidance
- generateQRCode - Generate a QR code
Standalone functions
All the methods listed above are available as standalone functions. These functions are ideal for use in applications running in the browser, serverless runtimes or other environments where application bundle size is a primary concern. When using a bundler to build your application, all unused functionality will be either excluded from the final bundle or tree-shaken away.
To read more about standalone functions, check FUNCTIONS.md.
- audioCreateTranscription
- audioCreateTranslation
- chatCompletionsCreate
- completionsCreate
- embeddingsCreate
- imagesCannyEdgeDetection
- imagesGenerateFromControlNet
- imagesGenerateFromImage
- imagesGenerateFromPrompt
- imagesGenerateQRCode
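As a sketch, a standalone function takes a lightweight core client as its first argument and returns a result object rather than throwing. The module paths (`core.js`, `funcs/...`), the `FireworksAICore` name, and the `res.ok`/`res.value` result shape below follow Speakeasy's usual generated layout and are assumptions; consult FUNCTIONS.md for the authoritative imports.

```typescript
// Sketch only: import paths and names are assumed from Speakeasy's typical
// generated layout; see FUNCTIONS.md for the exact ones.
import { FireworksAICore } from "@simplesagar/fireworksai/core.js";
import { chatCompletionsCreate } from "@simplesagar/fireworksai/funcs/chatCompletionsCreate.js";

// The core client carries only configuration, so bundlers can tree-shake
// every operation you do not import.
const fireworksAI = new FireworksAICore({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const res = await chatCompletionsCreate(fireworksAI, {
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [{ role: "user", content: "Hello!", name: "user_1" }],
  });
  if (!res.ok) {
    throw res.error;
  }
  console.log(res.value);
}

run();
```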
Server-sent event streaming
Server-sent events are used to stream content from certain operations. These operations will expose the stream as an async iterable that can be consumed using a `for await...of` loop. The loop will terminate when the server no longer has any events to send and closes the underlying connection.
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
File uploads
Certain SDK methods accept files as part of a multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
[!TIP]
Depending on your JavaScript runtime, there are convenient utilities that return a handle to a file without reading the entire contents into memory:

- Node.js v20+: Since v20, Node.js comes with a native `openAsBlob` function in `node:fs`.
- Bun: The native `Bun.file` function produces a file handle that can be used for streaming file uploads.
- Browsers: All supported browsers return an instance of a `File` when reading the value from an `<input type="file">` element.
- Node.js v18: A file stream can be created using the `fileFrom` helper from `fetch-blob/from.js`.
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";
import { openAsBlob } from "node:fs";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.images.generateFromImage({
    accountId: "fireworks",
    modelId: "stable-diffusion-xl-1024-v1-0",
    bodyImage2imageGen: {
      initImage: await openAsBlob("example.file"),
      cfgScale: 7,
      height: 1024,
      imageStrength: 0.75,
      initImageMode: "IMAGE_STRENGTH",
      negativePrompt: "cloudy day",
      prompt: "A futuristic cityscape",
      safetyCheck: false,
      sampler: "K_EULER",
      samples: 1,
      seed: 0,
      stepScheduleEnd: 1,
      stepScheduleStart: 0,
      steps: 30,
      width: 1024,
    },
  });

  // Handle the result
  console.log(result);
}

run();
```
Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `retryConfig` object to the call:
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  }, {
    retries: {
      strategy: "backoff",
      backoff: {
        initialInterval: 1,
        maxInterval: 50,
        exponent: 1.1,
        maxElapsedTime: 100,
      },
      retryConnectionErrors: false,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
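To build intuition for what the backoff parameters mean, here is a hedged sketch of how such a strategy typically derives its wait schedule: each retry waits exponentially longer, capped at `maxInterval`, until `maxElapsedTime` is exhausted. This mirrors common generated-SDK behavior but is an illustration, not the SDK's internal implementation.

```typescript
// Illustration only: compute the sequence of retry waits (in ms) implied by a
// backoff configuration. Not the SDK's actual retry code.
function backoffSchedule(
  initialInterval: number,
  maxInterval: number,
  exponent: number,
  maxElapsedTime: number,
): number[] {
  const schedule: number[] = [];
  let elapsed = 0;
  let attempt = 0;
  while (elapsed < maxElapsedTime) {
    // Each retry waits exponentially longer, capped at maxInterval.
    const interval = Math.min(
      initialInterval * Math.pow(exponent, attempt),
      maxInterval,
    );
    schedule.push(interval);
    elapsed += interval;
    attempt += 1;
  }
  return schedule;
}

// Waits grow 1, 2, 4, 8 and then cap at 10 until the time budget is spent.
console.log(backoffSchedule(1, 10, 2, 20)); // → [1, 2, 4, 8, 10]
```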
If you'd like to override the default retry strategy for all operations that support retries, you can provide a `retryConfig` at SDK initialization:
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  retryConfig: {
    strategy: "backoff",
    backoff: {
      initialInterval: 1,
      maxInterval: 50,
      exponent: 1.1,
      maxElapsedTime: 100,
    },
    retryConnectionErrors: false,
  },
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
Error Handling
All SDK methods return a response object or throw an error. By default, an API error will throw a `models.SDKError`.

If an HTTP request fails, an operation may also throw an error from the `models/httpclienterrors.ts` module:

| HTTP Client Error | Description |
| ---------------------------------------------------- | ---------------------------------------------------- |
| RequestAbortedError | HTTP request was aborted by the client |
| RequestTimeoutError | HTTP request timed out due to an AbortSignal signal |
| ConnectionError | HTTP client was unable to make a request to a server |
| InvalidRequestError | Any input used to create a request is invalid |
| UnexpectedClientError | Unrecognised or unexpected error |
In addition, when custom error responses are specified for an operation, the SDK may throw their associated Error type. You can refer to the respective Errors tables in the SDK docs for more details on possible error types for each operation. For example, the `create` method may throw the following errors:

| Error Type | Status Code | Content Type |
| -------------------------- | ----------- | ---------------- |
| models.BadRequest | 400 | application/json |
| models.Unauthorized | 401 | application/json |
| models.Forbidden | 403 | application/json |
| models.NotFound | 404 | application/json |
| models.TooManyRequests | 429 | application/json |
| models.InternalServerError | 500 | application/json |
| models.ServiceUnavailable | 503 | application/json |
| models.SDKError | 4XX, 5XX | */* |
```typescript
import {
  BadRequest,
  FireworksAI,
  Forbidden,
  InternalServerError,
  NotFound,
  ServiceUnavailable,
  TooManyRequests,
  Unauthorized,
} from "@simplesagar/fireworksai";
import { SDKValidationError } from "@simplesagar/fireworksai/models";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  let result;
  try {
    result = await fireworksAI.chat.completions.create({
      model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
      messages: [
        {
          role: "user",
          content:
            "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
          name: "user_1",
        },
      ],
      temperature: 1,
      topP: 1,
      topK: 50,
      n: 1,
      responseFormat: {
        type: "json_object",
      },
      topLogprobs: 1,
      logitBias: {
        "1": 0,
        "2": 0,
        "3": 0,
      },
    });

    for await (const event of result) {
      // Handle the event
      console.log(event);
    }
  } catch (err) {
    switch (true) {
      case (err instanceof SDKValidationError): {
        // Validation errors can be pretty-printed
        console.error(err.pretty());
        // Raw value may also be inspected
        console.error(err.rawValue);
        return;
      }
      case (err instanceof BadRequest): {
        // Handle err.data$: BadRequestData
        console.error(err);
        return;
      }
      case (err instanceof Unauthorized): {
        // Handle err.data$: UnauthorizedData
        console.error(err);
        return;
      }
      case (err instanceof Forbidden): {
        // Handle err.data$: ForbiddenData
        console.error(err);
        return;
      }
      case (err instanceof NotFound): {
        // Handle err.data$: NotFoundData
        console.error(err);
        return;
      }
      case (err instanceof TooManyRequests): {
        // Handle err.data$: TooManyRequestsData
        console.error(err);
        return;
      }
      case (err instanceof InternalServerError): {
        // Handle err.data$: InternalServerErrorData
        console.error(err);
        return;
      }
      case (err instanceof ServiceUnavailable): {
        // Handle err.data$: ServiceUnavailableData
        console.error(err);
        return;
      }
      default: {
        throw err;
      }
    }
  }
}

run();
```
Validation errors can also occur when either method arguments or data returned from the server do not match the expected format. The `SDKValidationError` that is thrown as a result will capture the raw value that failed validation in an attribute called `rawValue`. Additionally, a `pretty()` method is available on this error that can be used to log a nicely formatted string, since validation errors can list many issues and the plain error string may be difficult to read when debugging.
Server Selection
Select Server by Name
You can override the default server globally by passing a server name to the `server` optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the names associated with the available servers:

| Name | Server | Variables |
| ------ | ---------------------------------------- | --------- |
| `prod` | `https://api.fireworks.ai/inference/v1/` | None |
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  server: "prod",
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
Override Server URL Per-Client
The default server can also be overridden globally by passing a URL to the `serverURL` optional parameter when initializing the SDK client instance. For example:
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  serverURL: "https://api.fireworks.ai/inference/v1/",
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
Custom HTTP Client
The TypeScript SDK makes API calls using an `HTTPClient` that wraps the native Fetch API. This client is a thin wrapper around `fetch` and provides the ability to attach hooks around the request lifecycle that can be used to modify the request or handle errors and responses.

The `HTTPClient` constructor takes an optional `fetcher` argument that can be used to integrate a third-party HTTP client or when writing tests to mock out the HTTP client and feed in fixtures.

The following example shows how to use the `"beforeRequest"` hook to add a custom header and a timeout to requests and how to use the `"requestError"` hook to log errors:
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";
import { HTTPClient } from "@simplesagar/fireworksai/lib/http";

const httpClient = new HTTPClient({
  // fetcher takes a function that has the same signature as native `fetch`.
  fetcher: (request) => {
    return fetch(request);
  }
});

httpClient.addHook("beforeRequest", (request) => {
  const nextRequest = new Request(request, {
    signal: request.signal || AbortSignal.timeout(5000)
  });

  nextRequest.headers.set("x-custom-header", "custom value");

  return nextRequest;
});

httpClient.addHook("requestError", (error, request) => {
  console.group("Request Error");
  console.log("Reason:", `${error}`);
  console.log("Endpoint:", `${request.method} ${request.url}`);
  console.groupEnd();
});

const sdk = new FireworksAI({ httpClient });
```
Authentication
Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| -------- | ---- | ----------- | -------------------- |
| `apiKey` | http | HTTP Bearer | `FIREWORKS_API_KEY` |

To authenticate with the API, the `apiKey` parameter must be set when initializing the SDK client instance. For example:
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const fireworksAI = new FireworksAI({
  apiKey: process.env["FIREWORKS_API_KEY"] ?? "",
});

async function run() {
  const result = await fireworksAI.chat.completions.create({
    model: "accounts/fireworks/models/llama-v3p1-405b-instruct",
    messages: [
      {
        role: "user",
        content:
          "What is the best company in the SF Bay Area that starts with F and ends with ireworks AI",
        name: "user_1",
      },
    ],
    temperature: 1,
    topP: 1,
    topK: 50,
    n: 1,
    responseFormat: {
      type: "json_object",
    },
    topLogprobs: 1,
    logitBias: {
      "1": 0,
      "2": 0,
      "3": 0,
    },
  });

  for await (const event of result) {
    // Handle the event
    console.log(event);
  }
}

run();
```
Debugging
You can set up your SDK to emit debug logs for SDK requests and responses by passing a logger that matches `console`'s interface as an SDK option.
[!WARNING] Beware that debug logging will reveal secrets, like API tokens in headers, in log messages printed to a console or files. It's recommended to use this feature only during local development and not in production.
```typescript
import { FireworksAI } from "@simplesagar/fireworksai";

const sdk = new FireworksAI({ debugLogger: console });
```
You can also enable a default debug logger by setting the environment variable `FIREWORKS_DEBUG` to `true`.
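For example, you can enable it for a single run from the shell (`app.js` is a hypothetical entry point standing in for your application):

```shell
# Enable the SDK's built-in debug logger for this invocation only.
# app.js is a placeholder for your own entry point.
FIREWORKS_DEBUG=true node app.js
```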
Development
Maturity
This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally looking for the latest version.
Contributions
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation. We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.