
Effect LLM

Effect LLM is a library built with Effect for interacting with large language model APIs.

The goal of the library is to make it as easy as possible to switch between various API providers (while also providing a means of using provider-specific functionality where needed).

Usage

Basic Usage

To use this library, you'll initialize a provider (a client for a specific API such as Anthropic or OpenAI) and use that provider to make LLM API calls via the Generation service.

const program = Effect.gen(function* () {
  // Read the API key from the environment as a redacted config value.
  const apiKey = yield* Config.redacted("ANTHROPIC_API_KEY");
  const provider = yield* Providers.Anthropic.make();

  // Stream a completion from the provider via the Generation service.
  const stream = Generation.stream(provider, {
    apiKey,
    model: Providers.Anthropic.Model.Claude35Sonnet,
    maxTokens: 512,
    events: [
      new Thread.UserMessage({
        content: [new TextChunk({ content: "Hello, I'm Jonathan." })],
      }),
    ],
  });

  // Collect the streamed response into a single string.
  const responseText = yield* Generation.getContent(stream);

  yield* Console.log("The model says:", responseText);
});

program.pipe(
  Effect.provide(HttpClient.layer),
  Effect.provide(BunContext.layer),
  BunRuntime.runMain,
);

Note that provider make functions generally also receive a subset of the streaming params as an argument, which can be used to provide some defaults:

Effect.gen(function* () {
  const provider = yield* Providers.Anthropic.make({
    defaultParameters: {
      apiKey: yield* Config.redacted("ANTHROPIC_API_KEY"),
      model: Providers.Anthropic.Model.Claude35Sonnet,
      maxTokens: 512,
      system: "Be cordial.",
      additionalParameters: {
        temperature: 0.5,
      },
    },
  });
});

Note that additionalParameters passed to a generation function will be merged with the additionalParameters given in defaultParameters.
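
For example, under that merge behavior, a call-site parameter combines with the defaults set above. This is a sketch, not library documentation: top_p is a hypothetical provider-specific parameter, and the merged result shown in the comment is an assumption based on the note above.

const stream = Generation.stream(provider, {
  apiKey,
  model: Providers.Anthropic.Model.Claude35Sonnet,
  maxTokens: 512,
  additionalParameters: {
    top_p: 0.9, // hypothetical provider-specific parameter
  },
  // Assumed effective additionalParameters, merged with the defaults
  // configured above: { temperature: 0.5, top_p: 0.9 }
  events: [
    new Thread.UserMessage({
      content: [new TextChunk({ content: "Hello!" })],
    }),
  ],
});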

Generally, it is recommended that you set up the provider as a layer so that it can be swapped out with relative ease[^1].

const apiKey = Config.redacted("ANTHROPIC_API_KEY");

const program = Effect.gen(function* () {
  const provider = yield* Generation.Generation;

  const stream = Generation.stream(provider, {
    apiKey: yield* apiKey,
    model: Providers.Anthropic.Model.Claude35Sonnet,
    maxTokens: 512,
    events: [
      new Thread.UserMessage({
        content: [new TextChunk({ content: "Hello, I'm Jonathan." })],
      }),
    ],
  });

  const responseText = yield* Generation.getContent(stream);

  yield* Console.log("The model says:", responseText);
});

program.pipe(
  Effect.provide(Layer.effect(Generation.Generation, Providers.Anthropic.make())),
  Effect.provide(HttpClient.layer),
  Effect.provide(BunContext.layer),
  BunRuntime.runMain,
);

Using the Google Provider

In order to use the Google provider, you'll need to keep two things in mind.

First, the Google.make function accepts two parameters: the Google-specific configuration options, and the optional default parameters that the other providers accept as well:

Google.make(
  {
    // Required.
    serviceEndpoint: "https://us-central1-aiplatform.googleapis.com",
  },
  {
    system: "Be courteous.",
  },
);

Second, the model parameter must be the full model resource path, in this format:

const params = {
  model: `projects/${projectID}/locations/${locationID}/publishers/${publisher}/models/${modelName}`,
};
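
For example, a Gemini model on Vertex AI would look something like this (the project, location, and model names here are hypothetical; substitute your own):

const params = {
  // Hypothetical values; substitute your own project, location, and model.
  model:
    "projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro",
};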

Tool-Calling

There are two ways of utilizing LLM tools in this library.

Using Generation.stream

The Generation.stream function accepts tools and toolCall as parameters. When using these parameters, you can expect to see the following events emitted from the stream:

| Name          | Payload Type                                                                | Description                                                                        |
| ------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| ToolCallStart | { readonly id: string; readonly name: string; }                             | Emitted when a tool call begins, but before its arguments have been streamed       |
| ToolCall      | { readonly id: string; readonly name: string; readonly arguments: string; } | Emitted when a tool call and its arguments have been fully streamed and collected  |

Note that when using Generation.stream, the tool calls are not validated or executed, nor are their arguments even parsed.
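
If you want to act on those tool calls yourself, you can handle the stream events directly. A minimal sketch, assuming the payload shapes from the table above and assuming the events are tagged classes discriminated by a _tag field (an Effect convention, and an assumption here):

const handleToolCalls = stream.pipe(
  Stream.runForEach((event) =>
    // Assumption: events carry a `_tag` discriminator matching the names
    // in the table above.
    event._tag === "ToolCall"
      ? Effect.try({
          // The arguments arrive as a raw string; parsing them is up to you.
          try: () => JSON.parse(event.arguments) as unknown,
          catch: (err) => new Error(`Unparseable tool arguments: ${String(err)}`),
        }).pipe(
          Effect.flatMap((args) =>
            Console.log(`Tool ${event.name} called with`, args),
          ),
        )
      : Effect.void,
  ),
);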

Using Generation.streamTools

If instead you would like to have effect-llm parse and execute tool calls for you, use Generation.streamTools. This function accepts the same parameters as Generation.stream, with the addition of a maxIterations parameter used to limit the number of tool-calling loops that will be executed. When using Generation.streamTools, the following sequence of events occurs:

  1. Send the completion request to the provider
  2. Parse the response
  3. If there are tool calls in the response:
    1. Parse the arguments
    2. Append the tool call to the events list
    3. Call the tool
    4. Append the tool result to the events list
    5. Go to (1) with the new events list
  4. Or, if there are no tool calls in the response:
    1. End the stream

If the maxIterations limit is exceeded, the stream will emit a MaxIterationsError.
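
If you want to recover from that case rather than fail, you can catch it when draining the stream. A sketch, assuming MaxIterationsError is a tagged error whose _tag is "MaxIterationsError" (an assumption about how the error is modeled; adjust to the library's actual error type):

const drained = stream.pipe(
  Stream.runDrain,
  // Assumption: the error is a tagged error named "MaxIterationsError".
  Effect.catchTag("MaxIterationsError", () =>
    Console.warn("Tool loop exceeded maxIterations; stopping."),
  ),
);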

The stream returned by Generation.streamTools emits the same events as Generation.stream with some additions:

| Name              | Payload Type                                                                | Description                                                                                    |
| ----------------- | --------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| ToolCallStart     | { readonly id: string; readonly name: string; }                             | Emitted when a tool call begins, but before its arguments have been streamed                   |
| ToolCall          | { readonly id: string; readonly name: string; readonly arguments: string; } | Emitted when a tool call and its arguments have been fully streamed and collected              |
| InvalidToolCall   | { readonly id: string; readonly name: string; readonly arguments: string; } | Emitted when a tool call's arguments are invalid or the tool call is not in the defined tools  |
| ToolResultSuccess | { readonly id: string; readonly name: string; readonly result: unknown }    | Emitted when a tool call executes successfully; result is the value returned by the tool       |
| ToolResultError   | { readonly id: string; readonly name: string; readonly result: unknown }    | Emitted when a tool call fails; result is the error payload reported to the model              |

Defining Tools

To define a tool for use with Generation.streamTools[^2], use the Generation.defineTool function:

const apiKey = Config.redacted("ANTHROPIC_API_KEY");

const program = Effect.gen(function* () {
  const provider = yield* Generation.Generation;

  const sayHello = Generation.defineTool("sayHello", {
    description: "Say hello to the user",
    input: Schema.Struct({ name: Schema.String }),
    effect: (toolCallID, toolArgs) =>
      Console.log(`Hello, ${toolArgs.name}`).pipe(Effect.as({ ok: true })),
  });

  const stream = Generation.streamTools(provider, {
    apiKey: yield* apiKey,
    model: Providers.Anthropic.Model.Claude35Sonnet,
    maxTokens: 512,
    tools: [sayHello],
    events: [
      new Thread.UserMessage({
        content: [new TextChunk({ content: "Hello, I'm Jonathan." })],
      }),
    ],
  });

  yield* stream.pipe(Stream.runDrain, Effect.scoped);
});

program.pipe(
  Effect.provide(Layer.effect(Generation.Generation, Providers.Anthropic.make())),
  Effect.provide(HttpClient.layer),
  Effect.provide(BunContext.layer),
  BunRuntime.runMain,
);

Error Handling

Any errors that occur during tool execution will halt the stream and yield a ToolExecutionError. In order to handle an error and report it to the model, you should instead fail the effect with a ToolError using the Generation.toolError function:

const sayHello = Generation.defineTool("sayHello", {
  description: "Say hello to the user",
  input: Schema.Struct({ name: Schema.String }),
  effect: (toolCallID, toolArgs) =>
    Console.log(`Hello, ${toolArgs.name}`).pipe(
      Effect.catchAll((err) =>
        Generation.toolError({
          message: "An error occurred while saying hello",
          error: err,
        }),
      ),
      Effect.as({ ok: true }),
    ),
});

You can also fail mid-effect, since Generation.toolError actually fails the effect:

const sayHello = Generation.defineTool("sayHello", {
  description: "Say hello to the user",
  input: Schema.Struct({ name: Schema.String }),
  effect: (toolCallID, toolArgs) =>
    Effect.gen(function* () {
      return yield* Generation.toolError("Whoops!");
    }),
});

The payload passed to Generation.toolError can be any value, and it is serialized as JSON and sent to the model, which is notified that an error occurred.
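
For example, a tool can fail with a structured payload that gives the model enough context to recover. The field names below are illustrative, not a library type:

const lookupUser = Generation.defineTool("lookupUser", {
  description: "Look up a user by name",
  input: Schema.Struct({ name: Schema.String }),
  effect: (toolCallID, toolArgs) =>
    Effect.gen(function* () {
      // Hypothetical failure: any JSON-serializable value may be passed,
      // and the model is told the tool call failed.
      return yield* Generation.toolError({
        code: "USER_NOT_FOUND",
        message: `No user named ${toolArgs.name}`,
      });
    }),
});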

Halting Early

If you want to halt the iteration loop early, you can use the Generation.haltToolLoop function:

const sayHello = Generation.defineTool("sayHello", {
  description: "Say hello to the user",
  input: Schema.Struct({ name: Schema.String }),
  effect: (toolCallID, toolArgs) =>
    Effect.gen(function* () {
      return yield* Generation.haltToolLoop();
    }),
});

This will immediately halt the loop before executing any other tool calls returned by the model in that same iteration, and will end the stream without an error.

[^1]: There are some caveats to this—for example, the stream API doesn't require the maxTokens parameter, because OpenAI doesn't require it, but the Anthropic API will return a 400 if it's not provided.

[^2]: You can also use tools defined with Generation.defineTool with Generation.stream, since it currently uses the same parameter type; Generation.stream just doesn't actually validate or execute the tool calls.