
openai-code

v2.2.1

Published

An unofficial proxy layer that lets you use Anthropic Claude Code with any OpenAI API backend.

Downloads

1,443

Readme

OpenAI Code

An unofficial proxy layer that lets you use Anthropic Claude Code with OpenAI backends.

This repository provides a proxy server that allows Claude Code to work with OpenAI models instead of Anthropic's Claude models. The proxy translates requests in the Anthropic API format to OpenAI API calls and converts the responses back to Anthropic's format.

Features & Performance (TL;DR)

Smarter, Faster and Cheaper than Claude Code.

  • 100% working solution, even a little smarter at times
  • ~2-3x faster, thanks to better OpenAI performance and fully reworked prompts on my side
  • ~2x cheaper due to lower token prices (it would be even more, but the extra tools prompt limits it; OpenAI: please work on this issue)
  • Maintains full compatibility with the Claude Code CLI (pinned to Claude Code 0.2.32)

Model Comparison

Technically:

  • Proxies Anthropic API requests to the OpenAI API (streaming, non-streaming)
  • Handles tool/function call translations, as well as Task dispatching
  • Converts between Anthropic and OpenAI message formats
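Conceptually, the translation works like a pair of pure functions. Below is a minimal sketch of the idea (field names simplified, streaming and tool calls omitted; this is an illustration, not the proxy's actual code):

```javascript
// Anthropic-style request -> OpenAI chat completion request.
// The real proxy also handles tool definitions, streaming, and Task dispatch.
function toOpenAIRequest(anthropicReq) {
  const messages = [];
  if (anthropicReq.system) {
    messages.push({ role: "system", content: anthropicReq.system });
  }
  for (const msg of anthropicReq.messages) {
    // Anthropic content may be an array of blocks; flatten the text blocks
    const content = Array.isArray(msg.content)
      ? msg.content.filter((b) => b.type === "text").map((b) => b.text).join("\n")
      : msg.content;
    messages.push({ role: msg.role, content });
  }
  // Model selection is dynamic in the real proxy; "o3-mini" is the default
  return { model: "o3-mini", messages, max_tokens: anthropicReq.max_tokens };
}

// OpenAI chat completion response -> Anthropic-style response.
function toAnthropicResponse(openaiRes) {
  const choice = openaiRes.choices[0];
  return {
    type: "message",
    role: "assistant",
    content: [{ type: "text", text: choice.message.content }],
    stop_reason: choice.finish_reason === "stop" ? "end_turn" : choice.finish_reason,
  };
}
```

Claude Code only ever sees Anthropic-shaped responses, so it keeps working unmodified.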

What's New in v2.2?

New Agents & Reasoning Modes

  • StackOverflow Agent (:so) - searches StackOverflow and includes up to 3 answers in the conversation
  • Perplexity Agent (:p) - add OPENAI_CODE_PERPLEXITY_API_KEY to the .env of the project where you use it
  • The default reasoning mode is now a simple thought mode
  • Deep graph reasoning can be activated via the :d command
  • The :v command activates the vector-database-based code similarity search
  • :v3 includes the top 3 (topK) matches in reasoning

Directory Listing Ignore Files

Instead of relying on Claude Code's default directoryStructure context, OpenAI Code now discovers project files recursively, using an advanced algorithm with automatic updates.

To shrink the directory listing in the context, you can create a .claudeignore file in your project root folder. Every glob pattern in this file will be ignored. This is helpful for files that are tracked in Git but should not be discoverable by the LLM.
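For example, a .claudeignore with the following (hypothetical) patterns would hide dependency and build output from the listing:

```
# .claudeignore - glob patterns to exclude from the directory listing
node_modules/**
dist/**
coverage/**
*.log
```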

What's New in v2.1?

  • Dynamic Tools Integration: The vector database is now implemented as a dynamic tool. Dynamic tools are resolved through an agentic workflow before Claude Code sees them, enabling more flexible and context-aware tool usage.
  • Deep Reasoning: Whenever an issue comes up, the system recovers by re-reasoning. Only after three consecutive failed attempts does the system give up.
  • Adaptive Goal Tracking: The goal is now dynamically determined by the LLM itself (reasoning about what the user actually wants).
  • Improved Goal Tracking: In version 2.1, the goal tracking mechanism has been refined to better align the proxy’s objectives with the user's input, ensuring more efficient and precise task execution.
  • Large Context Window: Instead of carrying a growing conversation, a single large system prompt is maintained. This allows better control over reasoning across the whole message history, including capturing and reconciling tool use over many turns (multiple edits, etc.)
  • Drastic Performance Enhancements: A new recursive reasoning algorithm and an improved tool use protocol have resulted in significant performance improvements and a massive reduction in token usage.
  • Vector Database Integration: The system now auto-discovers, adds, and updates code files using OpenAI embeddings in real-time. The vector database is configurable via the environment variable OPENAI_CODE_STORE_FILE_NAME and stores data in a JSON file (default: CLAUDE_VECTORDB.json). See ARCH.md for detailed architecture.
  • Enhanced System Prompt Management: The proxy dynamically adjusts system prompts based on project configuration, optimizing the interaction workflow.
  • Other improvements from v2.0 remain, including better dynamic model selection, adaptive reasoning strength settings, and more robust error handling.

Prerequisites

  • Node.js (v16 or later)
  • An OpenAI API key
  • Claude Code globally installed: npm install -g @anthropic-ai/claude-code@0.2.32

A short Note on Security and Privacy

In light of recent actions by Anthropic's legal department against open-source projects related to Claude Code, the author of this proxy has chosen to remain anonymous. This project is perfectly legal and does not violate any EU or US laws. However, the author cannot engage in legal disputes. This decision is not meant to raise any concerns about privacy or safety for developers using this proxy. On the contrary, I have a strong commitment to privacy and safety. Developers are encouraged to review the code and suggest improvements via email. Unfortunately, due to legal risks, it is not feasible to host a public repository at this time.

Usage: OpenAI Code Proxy

No specific setup is needed; just run: npx openai-code@2.2.1. It will download this repo's code and execute it, binding to port 6543.

Noob Warning: If npx is not found, you need to install Node.js first.

Customization

Whenever openai-code receives a request, it analyzes the system prompt prepared by Claude Code. From it, the proxy determines the working directory and reads the .env and CLAUDE_RULES.md from the requesting project's working directory. This way, OpenAI Code can offer awesome, project- and request-based customizations!

OpenAI endpoint, HTTP Proxy, AI Model names

To customize, create a .env file in the project directory you want OpenAI Code to work with, and set the following variables:

OPENAI_CODE_API_KEY="your-openai-api-key"

# Optional settings:
#OPENAI_CODE_BASE_URL="https://api.openai.com/v1"    # Base URL for the OpenAI API.
#PROXY_URL="http://your-proxy-server:port"            # HTTP proxy URL if needed.
#REASONING_MODEL="o3-mini"                             # Reasoning model to use (default: o3-mini).
#OPENAI_CODE_EMBEDDING_MODEL="text-embedding-3-small"  # Embedding model (default if not specified).

# Optional identification settings:
#OPENAI_CODE_ORGANIZATION_ID="your-organization-id"
#OPENAI_CODE_PROJECT_ID="your-project-id"

Custom OpenAI Code Proxy Port

To start the proxy on a port other than 6543, set the OPENAI_CODE_PORT environment variable (globally, NOT in the project directory).

You can also set it inline: OPENAI_CODE_PORT=7654 npx openai-code@2.2.1

Important: Restart your shell or source your configuration file to register the new environment variable.

How It Works

Vector Database Enhancements

  • The vector database now includes functions for indexing relevant code from the context, allowing for more efficient retrieval of embeddings.
  • Dynamic Tool as Vector Database: In v2.0, the vector database is treated as a dynamic tool. It operates within an agentic workflow, ensuring that dynamic tools are executed and resolved before results are presented to Claude Code.
  • Recursive Reasoning and Efficiency: A new recursive reasoning algorithm combined with an improved tool use protocol has drastically improved performance and reduced token usage, making the system both faster and more cost-efficient.

Differences to Claude Code and Individual Prompts

Functional differences only occur where the reasoning model in use behaves differently, or where my prompts instruct the model differently. For example, I explicitly PROHIBIT the reading of any .env files. It's not perfect, but better than doing nothing about it...

Anthropic Legal

It is important to note that this project does not infringe upon any EU or US laws, nor does it violate the DMCA, as it does not utilize any Anthropic prompts or code. Each of my prompts has been meticulously designed by me, an experienced AI engineer.

Rant: This project is developed as free software, single-handedly, in just a few hours. Meanwhile, companies like OpenAI and Anthropic employ entire development teams with million-dollar budgets to achieve similar outcomes. Let's eat the rich, or so they say :)

By reducing the number of tokens used to an absolute minimum, I not only decrease the cost but also significantly enhance the speed of all operations.

Rant 2: One might praise the competency of Anthropic's business department. Well, let's just say that the verbosity in Anthropic's original system prompts results in tremendous waste of tokens, increased cost and decreased speed.

Anthropic's original prompts also point to wrong tool names in their own prompts... thanks to your behavior Anthropic, I leave it to you to find out what I mean by this. Have fun!

My streamlined approach ensures that a typical refactoring task, including writing tests and documentation, can be completed in a few seconds for ~2 cents.

Architecture and Internals

The system is built around an Express-based proxy server (implemented in src/index.mjs) that handles HTTP requests by translating Anthropic-formatted messages into OpenAI API calls and efficiently coordinating tool execution, including dynamic goal tracking and indexing workflows.
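The request flow can be pictured as a single handler behind an Anthropic-compatible endpoint. A minimal skeleton of that flow (an illustration, not the project's code; the real src/index.mjs is an Express app with SSE streaming):

```javascript
// Skeleton of the proxy's request handling. Handle one Anthropic-format
// request body and produce an Anthropic-format response; the numbered
// steps are where the actual translation and coordination work happens.
async function handleMessages(anthropicReq) {
  // 1. Translate anthropicReq into an OpenAI chat completion request
  // 2. Call the OpenAI API (streaming or non-streaming), coordinating tool
  //    calls, goal tracking, and vector index lookups along the way
  // 3. Translate the OpenAI result back into Anthropic's response format
  return { type: "message", role: "assistant", content: [] };
}

// In an Express server this would be mounted roughly like:
//   app.post("/v1/messages", async (req, res) => res.json(await handleMessages(req.body)));
```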

The vector indexing is managed through dedicated modules (src/vectorindex.mjs and src/vectordb.mjs), which automatically scan code files, generate embeddings using OpenAI's models, and store them persistently (default JSON file: CLAUDE_VECTORDB.json). This setup ensures that the instance is always up-to-date with the latest code changes.

Key performance optimizations include an optimized matrix multiplication routine that employs loop unrolling. This accelerates the computation of dot products, which are essential for calculating cosine similarities between query embeddings and document embeddings, delivering fast and accurate semantic search results.
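A loop-unrolled dot product of this kind can be sketched as follows (an illustration of the technique, not the project's exact routine):

```javascript
// Dot product with 4-way loop unrolling: four independent accumulators
// reduce loop overhead and expose instruction-level parallelism.
function dot(a, b) {
  const n = a.length;
  let s0 = 0, s1 = 0, s2 = 0, s3 = 0;
  let i = 0;
  for (; i + 3 < n; i += 4) {
    s0 += a[i] * b[i];
    s1 += a[i + 1] * b[i + 1];
    s2 += a[i + 2] * b[i + 2];
    s3 += a[i + 3] * b[i + 3];
  }
  for (; i < n; i++) s0 += a[i] * b[i]; // remaining tail elements
  return s0 + s1 + s2 + s3;
}

// Cosine similarity between a query embedding and a document embedding.
function cosineSimilarity(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}
```

Ranking documents is then just computing cosineSimilarity of the query embedding against each stored embedding and sorting by the result.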

Automatic Model Selection

This project automatically selects the appropriate model and reasoning strength for prompt execution.

According to my research:

  • o3-mini is the optimal OpenAI reasoning model right now (obviously). It is the base model for all reasoning. The reasoning strength, however, is selected according to the actual demands: whenever an error occurs, the reasoning strength is increased, striking a balance between speed, cost, and quality.
  • Deficiencies in how OpenAI's o-series models select tools are mitigated using custom tool-selection prompting.
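The escalation could be pictured as a simple step-up over effort levels (the level names and cap here are assumptions, not the proxy's exact settings):

```javascript
// Reasoning strength escalation on failure. OpenAI's reasoning models accept
// effort settings along these lines; the exact policy used by the proxy may differ.
const EFFORT_LEVELS = ["low", "medium", "high"];

// Return the next-stronger reasoning effort, capped at the strongest level.
function escalateEffort(current) {
  const i = EFFORT_LEVELS.indexOf(current);
  return EFFORT_LEVELS[Math.min(i + 1, EFFORT_LEVELS.length - 1)];
}

// On each failed attempt the proxy would re-run with escalateEffort(effort);
// after repeated failures at the strongest level, it gives up.
```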

More Developer Notes

Do you plan to contribute and email me suggestions? Do you plan to review my code and check if I might keylog every keyboard entry or send all your secret credentials to my evil server?

Here's an outline of this project's codebase for you to start:

  • Code Structure: The main server logic is contained in src/index.mjs (right, I gave zero F's about architecture for a few-hundred-line codebase). All prompts are located in src/prompts.mjs.
  • Third-Party Dependencies: Express provides the low-level HTTP server implementation, handling API requests, responses, and SSE. OpenAI's official library is used for calling OpenAI APIs. https-proxy-agent is used when a PROXY_URL is set (useful for enterprise environments or when behind a "great" firewall).
  • Error Handling, Logging: Errors are logged using the logToFile function, and server errors are handled gracefully with appropriate HTTP responses. All default logging happens in the console (stdout).
  • Configuration Initialization: The server initializes a configuration file (.claude.json) in your home user directory, if it doesn't exist yet, setting default Claude Code values for user settings.

Original Author's Verification Key

I'll leave this here, shall I ever want or need to verify that I'm the original author of this codebase.

AAAAC3NzaC1lZDI1NTE5AAAAIMpneofHS0ciT1pVEgZhbqqzbmUgPz0z/VjU91daL5uB

Contact

openai-code-npm@proton.me