Roastify
Roastify is a Node.js middleware that returns sarcastic and roast-style responses for API failures using OpenAI. Instead of boring error messages like "Invalid username or password," Roastify delivers witty burns to keep your API responses entertaining.
Features
- 🔥 Generates dynamic roast responses using OpenAI.
- 😆 Customizable failure types (authentication, validation, general errors, etc.).
- 🚀 Middleware for Express.js to make integration seamless.
- 🔑 Caller provides their own OpenAI API key (no key stored in the library).
Installation
npm install roastify
Usage
Basic Express Setup
const express = require("express");
const { roastMiddleware } = require("roastify");

const app = express();

// Register the middleware with your own OpenAI API key.
// This makes res.sendRoast() available on every response object.
app.use(roastMiddleware("your-openai-api-key"));

// Hitting this route returns a freshly generated roast.
app.get("/fail", async (req, res) => {
  await res.sendRoast();
});

app.listen(3000, () => console.log("Server running on port 3000"));
Custom Error Types
You can pass a failure type to res.sendRoast() to get different styles of roast message; a sketch of calling it from a real error handler follows these examples.
app.get("/auth-fail", async (req, res) => {
  await res.sendRoast("authentication");
});

app.get("/validation-fail", async (req, res) => {
  await res.sendRoast("validation");
});
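In a real route you will usually want the roast to fire when something actually goes wrong. A minimal sketch, assuming a hypothetical findUser() data-access helper and the "general" failure type from the table below:

// Sketch: send a roast from a catch block when a real failure occurs.
// findUser() is a hypothetical helper, not part of Roastify.
app.get("/users/:id", async (req, res) => {
  try {
    const user = await findUser(req.params.id);
    res.json(user);
  } catch (err) {
    // Any unexpected error gets roasted instead of a stock 500 message.
    await res.sendRoast("general");
  }
});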
How It Works
- Roastify intercepts failed requests and sends a sarcastic response instead of a standard error message.
- Responses are dynamically generated using OpenAI’s API to keep them fresh and unpredictable.
- The caller must provide an OpenAI API key when initializing the middleware (one way to supply it is sketched below).
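One way to supply the key without hard-coding it is to read it from an environment variable at startup. A minimal sketch, assuming the variable is named OPENAI_API_KEY (the name is an assumption, not something Roastify requires):

// Sketch: pass the key from an environment variable instead of hard-coding it.
// OPENAI_API_KEY is an assumed name; use whatever your deployment provides.
const express = require("express");
const { roastMiddleware } = require("roastify");

const app = express();
app.use(roastMiddleware(process.env.OPENAI_API_KEY));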
Example Responses
| Failure Type | Example Roast Response |
|---------------|----------------------|
| Authentication | "Oh wow, another victim of their own terrible memory. Try again, genius." |
| Validation | "You had ONE job: enter valid data. And you still messed it up." |
| General | "Well, that didn't go as planned. But hey, at least you're consistent at failing!" |
Configuration
- API Key: You must pass an OpenAI API key when initializing the middleware.
- Max Tokens: Limited to 30 for concise responses.
- Temperature: Set to 0.8 for creative but controlled roasts (both settings are illustrated in the sketch below).
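To show what those two settings do, here is roughly what an equivalent request looks like with the official openai Node client. This is only an illustration of the values, not Roastify's actual source; the model name and prompt are assumptions.

// Sketch only: an equivalent OpenAI call showing how max_tokens and temperature
// shape the output. Roastify's internal prompt and model choice may differ.
const OpenAI = require("openai");

async function exampleRoast(apiKey) {
  const client = new OpenAI({ apiKey });
  const completion = await client.chat.completions.create({
    model: "gpt-3.5-turbo", // assumed model
    messages: [
      { role: "user", content: "Roast a failed login attempt in one sentence." },
    ],
    max_tokens: 30,   // keeps the roast concise
    temperature: 0.8, // creative but controlled
  });
  return completion.choices[0].message.content;
}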
Error Handling
If OpenAI fails to generate a response, Roastify falls back to a default roast:
{
  "message": "Even the AI couldn't handle this request. That's how bad it is."
}
License
MIT License
😈 Turn your boring API errors into brutal roasts with Roastify! 🔥