rag-eval
v1.2.0
Perform various evaluations on input RAG data
RAG e2e evaluation
Evaluation
Usage: rag-eval [options] <string>

Perform various evaluations on input RAG data

Arguments:
  string                 path to input file

Options:
  -V, --version          output the version number
  -l, --llm [string]     LLM model name (choices: "gpt-3.5-turbo", "gpt-4-turbo", "claude1", "claude2", "llama2_7b", "phi", default: "gpt-4-turbo")
  -m, --max [number]     max number of items to evaluate (default: evaluate all items)
  -s, --skip [number]    skip number of items (default: no skip)
  -o, --output [string]  path to output folder
  -c, --noCache          disable cache
  -h, --help             display help for command
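For example, combining the options above, a typical invocation might look like this (the input file and output folder names are placeholders, not files shipped with the package):

```shell
# Evaluate 10 items of data.json, skipping the first 5,
# using a local llama2_7b model and writing results to ./results
# (data.json and ./results are example paths)
rag-eval --llm llama2_7b --max 10 --skip 5 --output ./results data.json
```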
OpenAI & Anthropic LLMs
If you use an OpenAI (default) or Anthropic LLM, make sure you have set the corresponding API key as an environment variable:
export OPENAI_API_KEY=<YOUR OPENAI API KEY>
export ANTHROPIC_API_KEY=<YOUR ANTHROPIC API KEY>
Local LLMs
We use Ollama to run LLMs locally. Please install Ollama first, then pull the desired LLM model.
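For example, to prepare the "llama2_7b" choice from the options above, the corresponding Ollama model might be pulled like this (the exact model tag is an assumption; check Ollama's model library for the current name):

```shell
# Pull the model locally (tag "llama2:7b" is an example mapping
# for the CLI's "llama2_7b" choice)
ollama pull llama2:7b

# Start the Ollama server if it is not already running
ollama serve
```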