
llamaindex

v0.8.21

<p align="center">
  <img height="100" width="100" alt="LlamaIndex logo" src="https://ts.llamaindex.ai/square.svg" />
</p>
<h1 align="center">LlamaIndex.TS</h1>
<h3 align="center">Data framework for your LLM application.</h3>


Use your own data with large language models (LLMs such as OpenAI's ChatGPT) in JS runtime environments, with TypeScript support.

Documentation: https://ts.llamaindex.ai/

Try examples online:

Open in Stackblitz

What is LlamaIndex.TS?

LlamaIndex.TS aims to be a lightweight, easy-to-use set of libraries that help you integrate large language models into your applications with your own data.

Compatibility

Multiple JS Environment Support

LlamaIndex.TS supports multiple JS environments, including:

  • Node.js >= 20 ✅
  • Deno ✅
  • Bun ✅
  • Nitro ✅
  • Vercel Edge Runtime ✅ (with some limitations)
  • Cloudflare Workers ✅ (with some limitations)

For now, browser support is limited due to the lack of support for AsyncLocalStorage-like APIs.

Supported LLMs:

  • OpenAI LLMs
  • Anthropic LLMs
  • Groq LLMs
  • Llama2, Llama3, Llama3.1 LLMs
  • MistralAI LLMs
  • Fireworks LLMs
  • DeepSeek LLMs
  • ReplicateAI LLMs
  • TogetherAI LLMs
  • HuggingFace LLMs
  • DeepInfra LLMs
  • Gemini LLMs
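
Most of these can be plugged in by overriding the global Settings. Here is a minimal sketch, assuming the top-level OpenAI and Settings exports in this version and an OPENAI_API_KEY in your environment; the model name is illustrative:

```ts
import { OpenAI, Settings } from "llamaindex";

// Override the default LLM used by all subsequent index/query calls.
Settings.llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
```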

Getting started

```bash
npm install llamaindex
pnpm install llamaindex
yarn add llamaindex
```
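
With the default OpenAI backend (an OPENAI_API_KEY in your environment), a minimal end-to-end sketch looks roughly like this; the document text and question are placeholders:

```ts
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Wrap your own data in a Document (normally loaded from files).
  const document = new Document({
    text: "LlamaIndex.TS is a data framework for LLM applications.",
  });

  // Split into Nodes, embed them, and store them in an in-memory index.
  const index = await VectorStoreIndex.fromDocuments([document]);

  // The query engine retrieves relevant Nodes and sends them to the LLM.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What is LlamaIndex.TS?",
  });
  console.log(response.toString());
}

main().catch(console.error);
```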

Setup in Node.js, Deno, Bun, TypeScript...?

See our official document: https://ts.llamaindex.ai/docs/llamaindex/setup/getting-started

Tips when using in non-Node.js environments

When you import llamaindex in a non-Node.js environment (such as Vercel Edge, Cloudflare Workers, etc.), some classes are not exported from the top-level entry file.

The reason is that some classes are only compatible with the Node.js runtime (e.g. PDFReader), because they use Node.js-specific APIs (like fs, child_process, crypto).

If you need any of those classes, you have to import them directly through their file path in the package. Here's an example for importing the PineconeVectorStore class:

```ts
import { PineconeVectorStore } from "llamaindex/storage/vectorStore/PineconeVectorStore";
```

As the PDFReader does not work with the Edge runtime, here's how to use the SimpleDirectoryReader with LlamaParseReader to load PDFs:

```ts
import { SimpleDirectoryReader } from "llamaindex/readers/SimpleDirectoryReader";
import { LlamaParseReader } from "llamaindex/readers/LlamaParseReader";

export const DATA_DIR = "./data";

export async function getDocuments() {
  const reader = new SimpleDirectoryReader();
  // Load PDFs using LlamaParseReader
  return await reader.loadData({
    directoryPath: DATA_DIR,
    fileExtToReader: {
      pdf: new LlamaParseReader({ resultType: "markdown" }),
    },
  });
}
```

Note: Reader classes have to be added explicitly to the fileExtToReader map in the Edge version of the SimpleDirectoryReader.
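
From there, the loaded documents can be indexed as usual. A short sketch, where the "./documents" module path is hypothetical and stands in for wherever the getDocuments helper above lives:

```ts
import { VectorStoreIndex } from "llamaindex";
// Hypothetical path: wherever the getDocuments helper above is defined.
import { getDocuments } from "./documents";

const documents = await getDocuments();
const index = await VectorStoreIndex.fromDocuments(documents);
```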

You'll find a complete example with LlamaIndexTS here: https://github.com/run-llama/create_llama_projects/tree/main/nextjs-edge-llamaparse

Playground

Check out our NextJS playground at https://llama-playground.vercel.app/. The source is available at https://github.com/run-llama/ts-playground.

Core concepts for getting started:

  • Document: A document represents a text file, PDF file or other contiguous piece of data.

  • Node: The basic data building block. Most commonly, these are parts of the document split into manageable pieces that are small enough to be fed into an embedding model and LLM.

  • Embedding: Embeddings are sets of floating point numbers which represent the data in a Node. By comparing the similarity of embeddings, we can derive an understanding of the similarity of two pieces of data. One use case is to compare the embedding of a question with the embeddings of our Nodes to see which Nodes may contain the data needed to answer that question. Because the default service context is OpenAI, the default embedding is OpenAIEmbedding. If you're using different models, say through Ollama, use the matching embedding class (e.g. OllamaEmbedding).

  • Indices: Indices store the Nodes and the embeddings of those nodes. QueryEngines retrieve Nodes from these Indices using embedding similarity.

  • QueryEngine: Query engines are what process the query you put in and give you back the result. Query engines generally combine a pre-built prompt with selected Nodes from your Index to give the LLM the context it needs to answer your query. To build a query engine from your Index (recommended), use the asQueryEngine method on your Index. See all query engines here.

  • ChatEngine: A ChatEngine helps you build a chatbot that will interact with your Indices. See all chat engines here.

  • SimplePrompt: A simple standardized function call definition that takes in inputs and formats them in a template literal. SimplePrompts can be specialized using currying and combined using other SimplePrompt functions.
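
As an illustration (the names below are hypothetical, not part of the library API), a SimplePrompt is essentially a plain function over a template literal, and currying is how you specialize it:

```ts
// A SimplePrompt-style function: named inputs in, template literal out.
const qaPrompt = ({ context, query }: { context: string; query: string }) =>
  `Context information is below.\n---\n${context}\n---\nAnswer the query: ${query}`;

// Currying specializes the prompt by fixing one input ahead of time.
const promptForContext =
  (context: string) =>
  ({ query }: { query: string }) =>
    qaPrompt({ context, query });

console.log(promptForContext("LlamaIndex.TS docs")({ query: "What is a Node?" }));
```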

Contributing:

Please see our contributing guide for more information. You are highly encouraged to contribute to LlamaIndex.TS!

Community

Please join our Discord! https://discord.com/invite/eN6D2HQ4aX