mem0-ai-provider v0.0.1

Vercel AI Provider for providing memory to LLMs
Mem0 AI SDK Provider

The Mem0 AI SDK Provider is a community-maintained library developed by Mem0 to integrate with the Vercel AI SDK. This library brings enhanced AI interaction capabilities to your applications by introducing persistent memory functionality. With Mem0, language model conversations gain memory, enabling more contextualized and personalized responses based on past interactions.

Discover more of Mem0 on GitHub.

For detailed information on using the Vercel AI SDK, refer to Vercel’s API Reference and Documentation.

Features

  • 🧠 Persistent memory storage for AI conversations
  • 🔄 Seamless integration with Vercel AI SDK
  • 🚀 Support for multiple LLM providers
  • 📝 Rich message format support
  • ⚡ Streaming capabilities
  • 🔍 Context-aware responses

Installation

npm install mem0-ai-provider

Before We Begin

Setting Up Mem0

  1. Obtain your Mem0 API Key from the Mem0 dashboard.

  2. Initialize the Mem0 Client:

    import { createMem0 } from "mem0-ai-provider";
    
    const mem0 = createMem0({
      MEM0_API_KEY: "m0-xxx"  // Alternatively, use the MEM0_API_KEY environment variable for better security
    });
  3. Add Memories to Enhance Context:

    import { LanguageModelV1Prompt } from "ai";
    import { addMemories } from "mem0-ai-provider";
    
    const messages: LanguageModelV1Prompt = [
        {
            role: "user",
            content: [
              { type: "text", text: "I love red cars." },
              { type: "text", text: "I like Toyota Cars." },
              { type: "text", text: "I prefer SUVs." },
            ],
        }
    ];
    
    await addMemories(messages, { user_id: "borat" });

These memories are now stored in your profile. You can view and manage them on the Mem0 Dashboard.
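The multi-part message shape used in step 3 can be built programmatically. The helper below is an illustrative sketch (not part of the package API) that converts plain strings into the `{ role, content }` structure `addMemories` expects:

```typescript
// Illustrative helper (not exported by mem0-ai-provider): wrap plain
// strings in the text-part message format used by LanguageModelV1Prompt.
type TextPart = { type: "text"; text: string };
type UserMessage = { role: "user"; content: TextPart[] };

function toUserMessage(...texts: string[]): UserMessage {
  return {
    role: "user",
    content: texts.map((text) => ({ type: "text", text })),
  };
}

// e.g. await addMemories([toUserMessage("I love red cars.", "I prefer SUVs.")],
//                        { user_id: "borat" });
```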

Usage Examples

1. Basic Text Generation with Memory Context

import { generateText } from "ai";
import { createMem0 } from "mem0-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("openai.gpt-4-turbo", {
    user_id: "borat",
  }),
  prompt: "Suggest me a good car to buy!",
});

2. Combining OpenAI Provider with Memory Utils

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "mem0-ai-provider";

const prompt = "Suggest me a good car to buy.";
const memories = await retrieveMemories(prompt, {user_id: "borat"});

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: prompt,
  system: memories
});

3. Structured Message Format with Memory

import { generateText } from "ai";
import { createMem0 } from "mem0-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("openai.gpt-4-turbo", {
    user_id: "borat",
  }),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Suggest me a good car to buy." },
        { type: "text", text: "Why is it better than the other cars for me?" },
        { type: "text", text: "Give options for every price range." },
      ],
    }
  ],
});

4. Advanced Memory Integration with OpenAI

import { generateText, LanguageModelV1Prompt } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "mem0-ai-provider";

// New format using system parameter for memory context
const messages: LanguageModelV1Prompt = [
  {
    role: "user",
    content: [
      { type: "text", text: "Suggest me a good car to buy." },
      { type: "text", text: "Why is it better than the other cars for me?" },
      { type: "text", text: "Give options for every price range." },
    ],
  },
];

const memories = await retrieveMemories(messages, { user_id: "borat" });

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  messages: messages,
  system: memories,
});

5. Streaming Responses with Memory Context

import { streamText } from "ai";
import { createMem0 } from "mem0-ai-provider";

const mem0 = createMem0();

const { textStream } = await streamText({
    model: mem0("openai.gpt-4-turbo", {
      user_id: "borat",
    }),
    prompt:
      "Suggest me a good car to buy! Why is it better than the other cars for me? Give options for every price range.",
});

for await (const textPart of textStream) {
    process.stdout.write(textPart);
}

Core Functions

  • createMem0(): Initializes a new mem0 provider instance with optional configuration
  • retrieveMemories(): Enriches prompts with relevant memories
  • addMemories(): Adds memories to your profile

Configuration Options

const mem0 = createMem0({
  compatibility: 'strict',    // Enforces strict provider compatibility
});

Best Practices

  1. User Identification: Always provide a unique user_id for consistent memory retrieval
  2. Context Management: Use appropriate context window sizes to balance performance and memory
  3. Error Handling: Implement proper error handling for memory operations
  4. Memory Cleanup: Regularly clean up unused memory contexts to optimize performance

We also support agent_id, app_id, and run_id. Refer to the Docs.
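The error-handling practice above can be sketched as a small wrapper. This helper is an assumption for illustration (not part of the package API); it runs a memory operation and falls back to a default so a failed Mem0 call degrades gracefully instead of breaking the chat flow:

```typescript
// Illustrative wrapper (not part of mem0-ai-provider): run a memory
// operation and return a fallback value if it throws, so a Mem0 outage
// doesn't take down the surrounding request.
async function withMemoryFallback<T>(
  op: () => Promise<T>,
  fallback: T
): Promise<T> {
  try {
    return await op();
  } catch (err) {
    console.error("memory operation failed:", err);
    return fallback;
  }
}

// Hypothetical usage with retrieveMemories:
// const memories = await withMemoryFallback(
//   () => retrieveMemories(prompt, { user_id: "borat" }),
//   "" // fall back to an empty memory context
// );
```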

Notes

  • Requires proper API key configuration for underlying providers (e.g., OpenAI)
  • Memory features depend on proper user identification via user_id
  • Supports both streaming and non-streaming responses
  • Compatible with all Vercel AI SDK features and patterns
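One way to satisfy the API key requirement in the first note is via environment variables. `OPENAI_API_KEY` is the standard variable read by @ai-sdk/openai; `MEM0_API_KEY` is assumed here per the inline comment in the setup section above:

```shell
# Assumed configuration: MEM0_API_KEY per the setup comment above,
# OPENAI_API_KEY as read by @ai-sdk/openai.
export MEM0_API_KEY="m0-xxx"
export OPENAI_API_KEY="sk-..."
```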

For detailed documentation, examples, and contribution guidelines, please visit our GitHub repository.