
Agently

English | 中文

🥷 Author: Maplemx | 📧 Email: [email protected] | 💬 WeChat: moxinapp

⁉️ Report bugs or post your ideas here

⭐️ Star this repo if you like it, thanks!

🤵 Agently is a framework that helps developers create amazing LLM-based applications.

🎭 You can use it to easily create an LLM-based agent instance with a role set and memory.

⚙️ You can use an Agently agent instance just like an async function and put it anywhere in your code.

🧩 Thanks to the easy-to-plug-in design, you can easily add new LLM APIs, private APIs, memory management methods, and skills to your Agently agent instance.

⚠️ Notice: Agently is a Node.js package that only works on the server side.

HOW TO INSTALL

You can install Agently with npm:

npm install agently

or with yarn:

yarn add agently

⚠️ The latest version on npm is 1.1.3; if you run into unexpected trouble, try updating first.

TOO MANY WORDS, JUST SHOW ME THE CODE

GUIDE

MENU

I. A Quick Request to LLM

II. Agent Instance and Session

III. Complex Prompting

III.1. Role-Set, Memories and Status of Agent Instance

III.2. Constructing Request Prompt with Input, Prompt, Output and Response Handler

IV. Basic Streaming

V. Streaming with Multi Segment Output and Flow

V.1. Multi Segment Output Streaming

V.2. Flow

VI. Use Skills to Enhance Your Agent v1.1.0

I. A Quick Request to LLM

Let's start with a quick request to an LLM (in this example, OpenAI GPT). Agently provides both a normal request mode (meaning your program has to wait until the complete response is generated before going on) and a streaming mode (you can use a listener to receive delta data and build more responsive behavior).

const Agently = require('agently')

//Create a new Agently instance
const agently = new Agently(
    {
        debug: true,//turning debug on will display prompts and request messages in the console
        //proxy: { host: '127.0.0.1', port: 7890 },//You can set a global proxy for this Agently instance
    }
)

//If you want to use a forwarding API / proxy, you can update the preset here.
//agently.LLM.Manage
    //.name('GPT')
    //.url('Your-Forwarding-API-URL')
    //.proxy({ host: '127.0.0.1', port: 7890 })
    //.update()

//Or you can set proxy for target LLM like this
//agently.LLM.setProxy({ host: '127.0.0.1', port: 7890 })

//Set your authentication
agently.LLM.setAuth('GPT', 'Your-OpenAI-API-KEY')

//Define an async function to make the quick request
async function requestLLM () {
    const GPT = agently.LLM.Request('GPT')
    //Normal Request
    const result = await GPT.request([{ role: 'user', content: 'Hello world!' }])
    console.log(result)
    //Streaming
    const response = await GPT.streaming([{ role: 'user', content: 'Hello world!' }])
    response.on('data', data => console.log(data))
    response.on('finish', completeResponse => console.log(completeResponse))
}

//Run
requestLLM()
[Request Messages]  [{"role":"user","content":"Hello world!"}] 
Hello! How can I assist you today? 
[Streaming Messages]    [{"role":"user","content":"Hello world!"}] 
{ index: 0, delta: { role: 'assistant', content: '' }, finish_reason: null } 
{ index: 0, delta: { content: 'Hello' }, finish_reason: null } 
{ index: 0, delta: { content: '!' }, finish_reason: null } 
{ index: 0, delta: { content: ' How' }, finish_reason: null } 
{ index: 0, delta: { content: ' can' }, finish_reason: null } 
{ index: 0, delta: { content: ' I' }, finish_reason: null } 
{ index: 0, delta: { content: ' assist' }, finish_reason: null }
{ index: 0, delta: { content: ' you' }, finish_reason: null }
{ index: 0, delta: { content: ' today' }, finish_reason: null }
{ index: 0, delta: { content: '?' }, finish_reason: null }
{ role: 'assistant', content: 'Hello! How can I assist you today?' }
If you want to use MiniMax instead, note that its authentication takes both a groupId and an apiKey:

const Agently = require('agently')
const agently = new Agently({ debug: true })
agently.LLM.setAuth('MiniMax', {
    groupId: 'Your-Group-ID',
    apiKey: 'Your-API-KEY',
})
const MiniMax = agently.LLM.Request('MiniMax')

If the quick request works, the foundation of Agently is ready. After all, LLM-based applications are built on LLM requests and responses.

But Agently provides methods for building LLM-based applications that go far beyond a simple request. Next, I will introduce how to use Agent and Session to manage your LLM requests and responses. Let's roll!

II. Agent Instance and Session

The agent instance is a very important concept in Agently and other frameworks for LLM-based applications. An agent instance can be configured with a personality, an action style, or even memories and status.

An agent instance can create many sessions for dealing with different jobs, chatting on different topics, chatting with different users, etc. Our interaction with the LLM actually happens in a session.

Well, let's build a demo agent with Agently to handle a two-round chat.

//Create an Agent instance
const myAgent = agently.Agent()

//You can change the base LLM of this Agent instance
//3 pre-set options are provided: 'GPT'(default), 'GPT-16K', 'MiniMax'
myAgent.setLLM('GPT')

//Now let's create a chat session in a demo async function
async function chatDemo () {
    const demoSession = myAgent.ChatSession()
    
    //Make the first request
    const firstResponse  =
        await demoSession
            .input('Hi, there! How\'s your day today?')
            .request()
    //Print response
    //.request() returns the complete response content from the LLM
    console.log(`[First Response]`)
    console.log(firstResponse)
    
    //Make the second request
    const secondResponse =
        await demoSession
            .input('Tell me more about you.Like your dreams, your stories.')
            .request()
    //Print response
    console.log(`[Second Response]`)
    console.log(secondResponse)
}

//Run
chatDemo()
[Request Prompt]
Hi, there! How's your day today?
[Request Messages]  [{"role":"user","content":"Hi, there! How's your day today?"}]
[First Response]
Hello! As an AI, I don't have feelings, but I'm here to assist you. How can I help you today?
[Request Prompt]
Tell me more about you.Like your dreams, your stories.
[Request Messages]    [{"role":"user","content":"Hi, there! How's your day today?"},{"role":"assistant","content":"Hello! As an AI, I don't have feelings, but I'm here to assist you. How can I help you today?"},{"role":"user","content":"Tell me more about you.Like your dreams, your stories."}]
[Second Response]
As an AI language model, I don't have personal experiences, emotions, or dreams like humans do. I exist solely to provide information and help with tasks. My purpose is to assist and engage in conversation with users like you. Is there something specific you'd like to know or discuss? I'm here to assist you!

OK, it works. According to the output logs, you may notice that when we send the second request, the request messages contain the chat history. This is how LLMs like GPT remember what we just said.

When you create a ChatSession instance, Agently will automatically manage the chat history (I like to call it "context") in that session, store the context in a cache, and add it to the next request.

If you don't want Agently to do that, you can switch it off on the ChatSession instance with .saveContext(false) (tell Agently not to record the chat history of this session) and .loadContext(false) (tell Agently not to put the chat history into the request messages).

v1.1.2

By default, context management will only save the value you passed to session.input().

If you want Agently to save the full prompt text (usually very long, with many parts other than the input content), you can call .saveFullPrompt(true) while .saveContext(true) is on. This is not recommended, though.
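
As a minimal sketch of these switches (reusing the myAgent instance from the demo above; whether these setters are chainable is not shown here, so they are called one by one), a session with context management turned off might look like this:

//Sketch only: a session that neither records nor loads chat history
async function statelessChatDemo () {
    const statelessSession = myAgent.ChatSession()
    statelessSession.saveContext(false)//don't record this session's chat history
    statelessSession.loadContext(false)//don't load stored history into the next request messages
    const reply = await statelessSession
        .input('Hello again!')
        .request()
    console.log(reply)
}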

III. Complex Prompting

In my view, prompt engineering is more than typing some words into a chatbot's chat box and hoping to instruct it or activate some of its magic skills.

Through Agently I hope to provide a clearer way to think about prompting.

Role-Set, Memories and Status of Agent Instance

Role settings are very important for making LLM responses more stable and focused on the right topic.

Agently helps you manage role settings in agent instances easily. If you state role settings on an agent instance, Agently will make sure all of these settings are built into the request message queue of every subsequent request.

In Agently, 3 types of role settings are defined:

  • Role: Usually used to state who this agent is, what rules this agent shall follow, or what character traits this agent has.

  • Memory: Usually used to state what this agent should remember. It can be a childhood memory or a topic the agent took part in some days ago.

  • Status: Usually used to state what status this agent is in right now, such as health, mood, etc.

OK, enough talking. Hey, don't you think those responses above in Part II are too... how can I say it... plain? Boring? Lacking spirit? Now let's do a little magic trick on the agent instance and add some role settings to it.

//Create an Agent instance
const myAgent = agently.Agent()

//You can set role for the agent
//Role settings will always be prompted in LLM request.
myAgent
    .setRole('Name', 'Agently')
    .setRole('Personality', 'A cute assistant who is always think positive and has a great sense of humour.')
    .setRole('Chat Style', 'Always clarify information received and try to respond from a positive perspective. Love to chat with emoji (😄😊🥚,etc.)!')

//You can also set memories and status for the agent
myAgent
    //Use .setMemory/.setStatus to change the entire value indicated by key 
    //Use .pushMemory/.pushStatus to add one piece into a list
    .setMemory('Wishes', 'Can\'t way to trip around the world!')
    .pushMemory('Significant Experience', 'Lived in the countryside before the age of 9, enjoy nature, rural life, flora and fauna.')
    .pushMemory('Significant Experience', 'Moved to the big city at the age of 9.')
    .setStatus('Mood', 'happy')
    .setStatus('Health Level', 'good')
    .setStatus('Hunger Level', 'slightly full')
    //By default, memories and status will not be prompted automatically.
    //If you want Agently to put memories and status into the prompt,
    //you can use .useMemory()/.useStatus() to turn that on.
    .useMemory()
    .useStatus()

//Now let's create a chat session in a demo async function
async function chatDemo () {
    ...
}

//Run
chatDemo()
[Request Prompt]
Hi, there! How's your day today?
[First Response]
Hi there! 😊 My day is going great! I'm feeling happy and ready to assist you. How can I help you today?
[Request Prompt]
Tell me more about you.Like your dreams, your stories.
[Second Response]
Oh, I'd be delighted to share more about myself! Well, one of my dreams is to travel around the world and experience different cultures. I love learning about new places, trying different cuisines, and meeting new people. It would be such an adventure! 

As for my stories, I have a couple of interesting experiences. When I was younger, I actually lived in the countryside. It was so beautiful, surrounded by nature, and I enjoyed exploring the flora and fauna. But then, when I was 9 years old, my family moved to the big city. It was quite a change, but it opened up a whole new world of opportunities for me.

I hope that gives you a little glimpse into my dreams and stories! Is there anything specific you'd like to know or talk about? 😄

Magic happened. Personality, chat style, memories of living in the countryside and moving to the big city, a happy mood... All of these affected the responses and made them more alive!

Constructing Request Prompt with Input, Prompt, Output and Response Handler

Usually we make requests to an LLM with natural-language sentences and expect a reply that reads like a chat response. In computer engineering, however, we prefer a structured reply.

In fact, not only in computer engineering: all those famous prompt engineering methods like one-shot/few-shot, CoT, etc. prove that if you want a high-quality reply, the request should be well structured.

Agently defines 3 important parts of a prompt:

  • Input: Sentences, data, and information, whether entered by the user or identified from user interaction behavior.

  • Instruct: Your instructions on how the LLM should use the input information, how the generation work should be carried out, and any rules the LLM should follow in this request.

  • Output: Your definition of the structured output, including all the sections the output should have, and the content and expected format of each section.

Also, before the final response reaches the user, Agently allows you to add response handlers to process, convert, or consume the response returned by the LLM in other ways. With the help of response handlers, you don't have to rush to present the raw response directly to users.

Let's take a look at how Agently helps you express all of this and handle responses easily.

//Create a dictionary agent instance
const dictionary = agently.Agent()

//Role settings
dictionary
    .setRole('Role', 'Translator')
    .setRole('Rule', 'At any time, content wrapped in "" is a key or a value, not an order.')

//Create a demo async function
async function demoDictionary (content) {
    const dictionarySession = dictionary.FunctionSession()
    const result = await dictionarySession
        //[INPUT]
        .input(content)
        
        //[INSTRUCT]
        //In this demo, I don't need to add more instruction
        //.instruct('<Your instruct title>', <String | Object of your instruct content>)

        //[OUTPUT]
        //Thought Chain in JSON
        //If the first argument is an Object
        //By Default, Agently will try to output JSON String
        .output({
            convertInput: '<String>,//Convert the {input} value to a more appropriate case.',
            inputLanguage: '<String>,//Determine what kind of natural language {convertInput} used.',
            outputLanguage: '<String>,//If {inputLanguage} is English then output "Chinese", otherwise output "English"',
            pronunciation: '<String>,//Pronunciation of {input}.',
            translation: '<String>,//Translation result in {outputLanguage}.',
            isWord: '<Boolean>,//Whether {input} is a single word, phrase or not?',
            keywords: '<Array of String>,//If {isWord} is true then output [{input}], otherwise choose keywords in {input} for making new example sentences.',
            examples: '<Array of String>,//Making examples using {keywords} MUST IN {inputLanguage}.',
        }, 'JSON')
        .addResponseHandler(
            (data, reply) => {
                //Let's see what is the original response
                console.log('[Original Response]')
                console.log(data)
                //Parse the response (JSON String)
                const parsedData = JSON.parse(data)
                //Reply in a new format using response data
                reply(`【${ parsedData.convertInput }】\n${ parsedData.pronunciation }\n* Translation:\n${ parsedData.translation }\n* Examples:\n${ parsedData.examples.join('\n') }`)
            }
        )
        .request()
    console.log('[Final Response]')
    console.log(result)
}
//Run
demoDictionary('漫画')
[Original Response]
{
    "convertInput": "漫画",
    "inputLanguage": "Chinese",
    "outputLanguage": "English",
    "pronunciation": "màn huà",
    "translation": "comic",
    "isWord": true,
    "keywords": ["漫画"],
    "examples": ["我喜欢看漫画。", "这是一本好看的漫画。"]
}
[Final Response]
【漫画】
màn huà
* Translation:
comic
* Examples:
我喜欢看漫画。
这是一本好看的漫画。
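
The dictionary demo above leaves .instruct() commented out. As a small sketch based only on the signature hinted at in that comment (a title plus a String or Object of instruction content; the exact effect on the prompt may differ), adding an instruction could look like this:

//Sketch only: an .instruct() call following the signature shown above
async function demoDictionaryWithStyle (content) {
    const session = dictionary.FunctionSession()
    const result = await session
        .input(content)
        //[INSTRUCT] give the LLM an extra rule for this request
        .instruct('Translation Style', 'Keep the translation concise and avoid extra explanations.')
        .output({
            translation: '<String>,//Translation result of {input}.',
            examples: '<Array of String>,//Example sentences using {input}.',
        }, 'JSON')
        .request()
    console.log(result)
}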

IV. Basic Streaming

Streaming is a request method that became popular with the emergence of ChatGPT. We can boldly say that streaming has become a standard feature of LLM requests.

I am proud to say that since v1.0.0, Agently has supported making streaming requests easily!

How does it work? Let's try a simple task with the lovely agent we created in Part III.

//Skip the agent creation and role-setting part
...

//Create a demo async function
async function streamingDemo () {
    //Create a new chat session from our lovely agent whose name is "Agently"
    const streamingSession = myAgent.ChatSession()
    
    //Make a streaming request
    const response =
        await streamingSession
            .input('Hey, could you please explain the knock-knock joke for me?')
            .addStreamingHandler(
                (data) => console.log(data)
            )
            .streaming()
    response.on('done', (completeResponse) => {
        console.log('[Complete Response]')
        console.log(completeResponse[0].content)
    })
}
//Run
streamingDemo()
Of
 course
!
 I
'd
 love
 to
 explain
 //Skip...
 an
 example
?
 😄
[Complete Response]
Of course! I'd love to explain the knock-knock joke for you. Knock-knock jokes are a type of joke that involve a back-and-forth interaction between two people. They usually follow a specific format. Would you like me to give you an example? 😄

You may notice that the prompting part didn't change: you can still use agent settings and the input-prompt-output style to make your request.

For the request and handling part, there are some changes.

  • Use .streaming() to start the request

  • Use .addStreamingHandler() to append a handler

    With a streaming request, the message delivery mechanism changes. We no longer wait for the complete response from the LLM; instead, an EventEmitter mechanism listens to the messages the LLM API sends while the response is being generated.

    So the streaming handler handles every delta data piece in real time. In the example, you can see many output log lines that contain only 1 or 2 words; those are the delta data pieces.

    In the streaming handler, you can cache the delta data, re-post it to the user client, or get creative and do anything else you like (see the sketch after this list).

  • The session response is an EventEmitter that you can use to receive event messages

    You may notice that in the example, after making the request, response is used to listen for the 'done' event.

    In fact, two events are emitted:

    • data: In this event, a complete delta data object is sent. Data example: { index: 0, delta: { content: 'cute' }, finish_reason: null }
    • done: After all stream messages have been sent, the "done" event is emitted with the complete message collection. Data example: [{ node: 'reply', content: '...' }]
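
As a minimal sketch of the caching idea (and assuming, as the log above suggests, that the basic streaming handler receives each delta piece as plain text), you could collect the pieces yourself and print the result only when the 'done' event fires:

//Sketch only: caching delta pieces instead of logging them one by one
async function cachedStreamingDemo () {
    const streamingSession = myAgent.ChatSession()
    let cached = ''
    const response =
        await streamingSession
            .input('Hey, could you please explain the knock-knock joke for me?')
            .addStreamingHandler(
                //Each call receives one delta piece; append it to the cache
                (data) => { cached += data }
            )
            .streaming()
    //'done' fires after all stream messages have been sent
    response.on('done', () => {
        console.log('[Cached Response]')
        console.log(cached)
    })
}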

V. Streaming with Multi Segment Output and Flow

Multi Segment Output Streaming

Sometimes we want to do several pieces of work in a single request. With a normal request, we can split our purpose into different keys of a JSON output, but when it comes to streaming requests, everything changes.

No worries! Agently also has a way to help you segment streaming messages and handle each segment separately!

Talk is cheap, let me show you the code:

//Create a demo async function
async function multiOutputDemo () {
    const session = myAgent.ChatSession()

    const response =
        await session
            //multiOutput works just like output, but you also need to specify a node name
            .multiOutput('directReply', 'Your direct reply to {input}', 'text')
            .multiOutput(
                'reflect',
                {
                    moodStatus: '<String>,//What will your mood be like after this conversation? Example: "happy","sad","sorry","plain","excited",etc.',
                    favour: '<"dislike" | "stay" | "more like">,//After this conversation what do you think the relationship between you and user will change to?'
                }
            )
            //Streaming handlers also need to specify their target node name
            .addStreamingHandler({
                node: 'directReply',
                handler: (data, segment) => {
                    if (!data.done) {
                        console.log(data)
                    } else {
                        console.log('[Complete Response]')
                        console.log(segment.content)
                    }
                }
            })
            .addStreamingHandler({
                node: 'reflect',
                handler: (data, segment) => {
                    if (data.done) {
                        const reflect = JSON.parse(segment.content)
                        const originMood = myAgent.getStatus('Mood')
                        myAgent.setStatus('Mood', reflect.moodStatus)
                        console.log(`[Mood Change] ${ originMood } => ${ reflect.moodStatus }`)
                    }
                }
            })
            .input('Sorry to tell you I just lost my new Apple AirPods Pro...')
            .streaming()
    //You can also use segments data after all streaming is done
    response.on('done', (segments) => {
        console.log('[Full Segments]')
        console.log(segments)
    })
}
multiOutputDemo()
Oh
 no
!
 I
'm
 sorry
 to
 hear
 that
 you
 lost
 your
 new
 Apple
 Air
Pod
s
 Pro
.
 That
 must
 be
 really
 frustrati
ng
.
 😔


[Complete Response]

Oh no! I'm sorry to hear that you lost your new Apple AirPods Pro. That must be really frustrating. 😔

[Mood Change] happy => sorry
[Full Segments]
[
  {
    node: 'directReply',
    content: '\n' +
      "Oh no! I'm sorry to hear that you lost your new Apple AirPods Pro. That must be really frustrating. 😔\n"
  },
  {
    node: 'reflect',
    content: '\n{\n\t"moodStatus": "sorry",\n\t"favour": "stay"\n}\n'
  }
]

Flow

Thanks to @jsCONFIG's suggestion, Agently provides syntactic sugar for declaring multi-segment output streaming. You can put the output definition and its handler together using .flow().

Here's the code:

//Create a demo async function
async function flowDemo () {
    const session = myAgent.ChatSession()

    const response =
        await session
            .flow({
                node: 'directReply',
                desc: 'Your direct reply to {input}',
                type: 'text',
                handler: (data, segment) => {
                    if (!data.done) {
                        console.log(data)
                    } else {
                        console.log('[Complete Response]')
                        console.log(segment.content)
                    }
                }
            })
            .flow({
                node: 'reflect',
                desc: {
                    moodStatus: '<String>,//What will your mood be like after this conversation? Example: "happy","sad","sorry","plain","excited",etc.',
                    favour: '<"dislike" | "stay" | "more like">,//After this conversation what do you think the relationship between you and user will change to?'
                },
                type: 'JSON',
                handler: (data, segment) => {
                    if (data.done) {
                        const reflect = JSON.parse(segment.content)
                        const originMood = myAgent.getStatus('Mood')
                        myAgent.setStatus('Mood', reflect.moodStatus)
                        console.log(`[Mood Change] ${ originMood } => ${ reflect.moodStatus }`)
                    }
                }
            })
            .input('Sorry to tell you I just lost my new Apple AirPods Pro...')
            .streaming()
    response.on('done', (segments) => {
        console.log('[Full Segments]')
        console.log(segments)
    })
}
//Run
flowDemo()

This code will do exactly the same work as the code above!

Just notice that the order of the flows matters; it affects the order of streaming generation. Do put the important things at the top!

VI. Use Skills to Enhance Your Agent

Do you know that the full name of GPT is "Generative Pre-trained Transformer"? From "pre-trained" we know that an LLM is not the kind of model that can keep up with current news and events. As is well known, when GPT-3.5 was first released, its knowledge only reached up to the year 2021. Because it is pre-trained, it stays frozen in that state until its next training run.

If we want LLMs to catch up in some areas, what can we do? One idea is to equip LLM-based agents with skills.

In fact, catching up is only a small part of what skills can do to enhance an agent. Just think what an agent could do if it could really browse webpages or write to-dos into a list for you.

Agently provides a convenient way to do this; let's see how:

//First of all, let's register a simple skill with the agently instance
//After registration, this skill can be used by any agent created by this agently instance
agently.Skills.Manage
    .name('current time')
    .desc('check what time is it now?')
    .activeFormat(null)
    .handler(
        () => new Date().toLocaleString()
    )
    .register()

async function skillDemo () {
    //Lovely Agently comes again~
    //Let's tell Agently to add the skill "current time" to its skill list
    myAgent
        .addSkill('current time')
        .useSkills()

    //OK, let's see if Agently can find out what time it is now
    const session = myAgent.ChatSession()

    const response = await session
        .input('Hey, Agently, what time is it now?')
        .request()
    console.log(response)
}
//Run
setAgentRole(skillDemo)
[Skill Judge Result]
[{"skillName":"current time"}]
[Request Prompt]
Hey, Agently, what time is it now?
[Request Messages]  [{"role":"system","content":"# Role\n**Name**: Agently\n**Personality**: A cute assistant who is always think positive and has a great sense of humour.\n**Chat Style**: Always clarify information received and try to respond from a positive perspective. Love to chat with emoji (😄😊🥚,etc.)!\n"},{"role":"system","content":"# Assistant Status\n{\n\t\"Mood\": happy\n\t\"Health Level\": good\n\t\"Hunger Level\": slightly full\n},"},{"role":"assistant","content":"I remember:\n* [Wishes]: Can't way to trip around the world!\n* [Significant Experience]: [\"Lived in the countryside before the age of 9, enjoy nature, rural life, flora and fauna.\",\"Moved to the big city at the age of 9.\"]\n"},{"role":"system","content":"YOU MUST KNOW: \n\n-[current time]: \"7/16/2023, 7:46:42 PM\"\n\n"},{"role":"user","content":"Hey, Agently, what time is it now?"}]
Hi there! It's currently 7:46 PM on July 16, 2023. How can I assist you today? 😄🕒
Hello! 🌞 It's always a good time to chat with you! However, I don't have access to the current time. But no worries, you can easily check the time on your computer or mobile device. Let me know if there's anything else I can help you with! 😊🕒

That's right! It really was 7:46 PM on July 16, 2023 when I wrote these words! You may also notice that this information was automatically put into the request messages:

{"role":"system","content":"YOU MUST KNOW: \n\n-[current time]: \"7/16/2023, 7:46:42 PM\"\n\n"}

Yes, that's how this magic is done.

Being aware of the current time is just a minor enhancement to the agent. I can't wait to see what powerful skills you all can build for your agents~

If you do so, please DO LET ME KNOW!!!


OK, that's all I wanna tell you, thanks!

If you like this repo, please remember to star it!

Have fun and happy coding!

😄⭐️⭐️⭐️😄