glin-profanity
A multilingual profanity detection and filtering engine for modern applications — by GLINCKER
✨ Overview
Glin-Profanity is a high-performance JavaScript/TypeScript library built to detect, filter, and sanitize profane or harmful language in user-generated content. With support for 23 languages, configurable severity levels, obfuscation detection, and a framework-agnostic design, it's perfect for developers who care about building safe, inclusive platforms.
Whether you're moderating chat messages, community forums, or content input forms, Glin-Profanity empowers you to:
- 🧼 Filter text with real-time or batch processing
- 🗣️ Detect offensive terms in 20+ human languages
- 💬 Catch obfuscated profanity like
sh1t,f*ck,a$$hole - 🎚️ Adjust severity thresholds (
Exact,Fuzzy,Merged) - 🔁 Replace bad words with symbols or emojis
- 🧩 Works in any JavaScript environment -
- 🛡️ Add custom word lists or ignore specific terms
🚀 Key Features
💡 Why glin-profanity?
| Feature | Benefit |
|---|---|
| 🔒 Privacy First | Runs entirely on-device. No API calls, no data leaves your app. GDPR/CCPA friendly. |
| ⚡ Blazing Fast | 23K-115K ops/sec rule-based, 21M+ ops/sec with caching. Sub-millisecond latency. |
| 🌍 Truly Multilingual | 23 languages with unified dictionary. Consistent detection across locales. |
| 🛡️ Evasion Resistant | Catches leetspeak (f4ck), Unicode tricks (fυck), zero-width chars, and homoglyphs. |
| 🤖 AI-Ready | Optional ML integration for context-aware toxicity detection beyond keywords. |
| 🧩 Zero Config | Works out of the box. No API keys, no server, no setup required. |
| 📦 Lightweight | ~90KB core bundle. Tree-shakeable. No heavy dependencies for basic usage. |
✨ What's New in v3.0
- Leetspeak Detection — Catch `f4ck`, `@ss`, `$h!t` with 3 intensity levels
- Unicode Normalization — Detect Cyrillic/Greek lookalikes, full-width chars, zero-width spaces
- Result Caching — 800x speedup for repeated checks
- ML Integration — Optional TensorFlow.js toxicity model for nuanced detection
- Performance — Optimized for high-throughput production workloads
📚 Table of Contents
- 🚀 Key Features
- 📦 Installation
- 🌍 Supported Languages
- ⚙️ Usage
- 🧠 API
- ⚠️ Note
- 🛠 Use Cases
- 🔬 Advanced Features
- 📊 Benchmarks
- 📄 License
Installation
To install Glin-Profanity, use npm:
```bash
npm install glin-profanity
```

OR

```bash
yarn add glin-profanity
```

OR

```bash
pnpm add glin-profanity
```

Supported Languages
Glin-Profanity includes comprehensive profanity dictionaries for 23 languages:
🇸🇦 Arabic • 🇨🇳 Chinese • 🇨🇿 Czech • 🇩🇰 Danish • 🇬🇧 English • 🌍 Esperanto • 🇫🇮 Finnish • 🇫🇷 French • 🇩🇪 German • 🇮🇳 Hindi • 🇭🇺 Hungarian • 🇮🇹 Italian • 🇯🇵 Japanese • 🇰🇷 Korean • 🇳🇴 Norwegian • 🇮🇷 Persian • 🇵🇱 Polish • 🇵🇹 Portuguese • 🇷🇺 Russian • 🇪🇸 Spanish • 🇸🇪 Swedish • 🇹🇭 Thai • 🇹🇷 Turkish
Note: The JavaScript and Python packages maintain cross-language parity, ensuring consistent profanity detection across both ecosystems.
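For example, detection can be scoped to a few locales or run against every bundled dictionary. The sketch below is a minimal illustration using `checkProfanity`; it assumes the core function accepts the same `languages` and `allLanguages` options listed in the FilterConfig table further down.

```ts
import { checkProfanity } from 'glin-profanity';

// Scope detection to specific locales
const scoped = checkProfanity('some user input', {
  languages: ['english', 'german'],
});

// Or scan against every bundled dictionary
// (assumes allLanguages is accepted here, as in the FilterConfig options below)
const everywhere = checkProfanity('some user input', { allLanguages: true });

console.log(scoped.containsProfanity, everywhere.containsProfanity);
```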
Usage
Basic Usage
Glin-Profanity now provides framework-agnostic core functions alongside React-specific hooks:
🟢 Node.js / Vanilla JavaScript
```js
const { checkProfanity } = require('glin-profanity');

const text = "This is some bad text with damn words";
const result = checkProfanity(text, {
  languages: ['english', 'spanish'],
  replaceWith: '***'
});

console.log(result.containsProfanity); // true
console.log(result.profaneWords);      // ['damn']
console.log(result.processedText);     // "This is some bad text with *** words"
```

🔷 TypeScript
```ts
import { checkProfanity, ProfanityCheckerConfig } from 'glin-profanity';

const config: ProfanityCheckerConfig = {
  languages: ['english', 'spanish'],
  severityLevels: true,
  autoReplace: true,
  replaceWith: '🤬'
};

const result = checkProfanity("inappropriate text", config);
```

Framework Examples
⚛️ React
```jsx
import React, { useState } from 'react';
import { useProfanityChecker, SeverityLevel } from 'glin-profanity';

const App = () => {
  const [text, setText] = useState('');
  const { result, checkText } = useProfanityChecker({
    languages: ['english', 'spanish'],
    severityLevels: true,
    autoReplace: true,
    replaceWith: '***',
    minSeverity: SeverityLevel.EXACT
  });

  return (
    <div>
      <input value={text} onChange={(e) => setText(e.target.value)} />
      <button onClick={() => checkText(text)}>Scan</button>
      {result && result.containsProfanity && (
        <p>Cleaned: {result.processedText}</p>
      )}
    </div>
  );
};
```

💚 Vue 3
```vue
<template>
  <div>
    <input v-model="text" @input="checkContent" />
    <p v-if="hasProfanity">{{ cleanedText }}</p>
  </div>
</template>

<script setup>
import { ref } from 'vue';
import { checkProfanity } from 'glin-profanity';

const text = ref('');
const hasProfanity = ref(false);
const cleanedText = ref('');

const checkContent = () => {
  const result = checkProfanity(text.value, {
    languages: ['english'],
    autoReplace: true,
    replaceWith: '***'
  });
  hasProfanity.value = result.containsProfanity;
  cleanedText.value = result.autoReplaced;
};
</script>
```

🔴 Angular
```ts
import { Component } from '@angular/core';
import { checkProfanity, ProfanityCheckResult } from 'glin-profanity';

@Component({
  selector: 'app-comment',
  template: `
    <textarea [(ngModel)]="comment" (ngModelChange)="validateComment()"></textarea>
    <div *ngIf="profanityResult?.containsProfanity" class="error">
      Please remove inappropriate language
    </div>
  `
})
export class CommentComponent {
  comment = '';
  profanityResult: ProfanityCheckResult | null = null;

  validateComment() {
    this.profanityResult = checkProfanity(this.comment, {
      languages: ['english', 'spanish'],
      severityLevels: true
    });
  }
}
```

🚂 Express.js Middleware
```js
const express = require('express');
const { checkProfanity } = require('glin-profanity');

const app = express();
app.use(express.json()); // needed so req.body.message is populated

const profanityMiddleware = (req, res, next) => {
  const result = checkProfanity(req.body.message || '', {
    languages: ['english'],
    autoReplace: true,
    replaceWith: '[censored]'
  });
  if (result.containsProfanity) {
    req.body.message = result.autoReplaced;
  }
  next();
};

app.post('/comment', profanityMiddleware, (req, res) => {
  // Message is now sanitized
  res.json({ message: req.body.message });
});
```

API
🎯 Core Functions
checkProfanity
Framework-agnostic function for profanity detection.
```ts
checkProfanity(text: string, config?: ProfanityCheckerConfig): ProfanityCheckResult
```

checkProfanityAsync
Async version of checkProfanity.
```ts
checkProfanityAsync(text: string, config?: ProfanityCheckerConfig): Promise<ProfanityCheckResult>
```

isWordProfane
Quick check if a single word is profane.
```ts
isWordProfane(word: string, config?: ProfanityCheckerConfig): boolean
```
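A short usage sketch of the three core functions (hedged: the result fields follow the `CheckProfanityResult` shape documented below, and the sample words are illustrative):

```ts
import { checkProfanity, checkProfanityAsync, isWordProfane } from 'glin-profanity';

// Synchronous check
const result = checkProfanity('some damn comment', { languages: ['english'] });
console.log(result.containsProfanity); // true
console.log(result.profaneWords);      // ['damn']

// Async variant (handy in request handlers or workers)
checkProfanityAsync('another message', { languages: ['english'] })
  .then((r) => console.log(r.containsProfanity));

// Quick single-word check
console.log(isWordProfane('damn'));  // true
console.log(isWordProfane('hello')); // false
```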
🔧 Filter Class

Constructor

```ts
new Filter(config?: FilterConfig);
```

FilterConfig Options:
| Option | Type | Description |
|-------------------------|--------------------|-------------|
| languages | Language[] | Languages to include (e.g., ['english', 'spanish']) |
| allLanguages | boolean | If true, scan all available languages |
| caseSensitive | boolean | Match case exactly |
| wordBoundaries | boolean | Only match full words (turn off for substring matching) |
| customWords | string[] | Add your own words |
| replaceWith | string | Replace matched words with this string |
| severityLevels | boolean | Enable severity mapping (Exact, Fuzzy, Merged) |
| ignoreWords | string[] | Words to skip even if found |
| logProfanity | boolean | Log results via console |
| allowObfuscatedMatch | boolean | Enable fuzzy pattern matching like f*ck |
| fuzzyToleranceLevel | number (0–1) | Adjust how tolerant fuzzy matching is |
| autoReplace | boolean | Whether to auto-replace flagged words |
| minSeverity | SeverityLevel | Minimum severity to include in final list |
| customActions | (result) => void | Custom logging/callback support |
| detectLeetspeak | boolean | Enable leetspeak detection (e.g., f4ck → fuck) |
| leetspeakLevel | 'basic' \| 'moderate' \| 'aggressive' | Leetspeak detection intensity |
| normalizeUnicode | boolean | Enable Unicode normalization for homoglyphs |
| cacheResults | boolean | Cache results for repeated checks |
| maxCacheSize | number | Maximum cache size (default: 1000) |
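As an illustration, several of these options can be combined on a single Filter instance. This is a hedged sketch; the option names come from the table above, while `badterm` and `scunthorpe` are hypothetical example words.

```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english', 'spanish'],
  customWords: ['badterm'],    // extra words to flag (hypothetical example word)
  ignoreWords: ['scunthorpe'], // never flag these, even if matched
  wordBoundaries: true,        // only match whole words
  allowObfuscatedMatch: true,  // catch patterns like f*ck
  fuzzyToleranceLevel: 0.8,
  autoReplace: true,
  replaceWith: '***',
  logProfanity: false,
});

console.log(filter.isProfane('badterm')); // true: the custom word is flagged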
Methods
isProfane
Checks if a given text contains profanities.
```ts
isProfane(value: string): boolean;
```

- `value`: The text to check.
- Returns: `boolean`. `true` if the text contains profanities, `false` otherwise.
checkProfanity
Returns details about profanities found in the text.
```ts
checkProfanity(text: string): CheckProfanityResult;
```

- `text`: The text to check.
- Returns: `CheckProfanityResult`
  - `containsProfanity`: `boolean`. `true` if the text contains profanities, `false` otherwise.
  - `profaneWords`: `string[]`. An array of profane words found in the text.
  - `processedText`: `string`. The text with profane words replaced (if `replaceWith` is specified).
  - `severityMap`: `{ [word: string]: number }`. A map of profane words to their severity levels (if `severityLevels` is specified).
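A brief sketch combining both methods (hedged: the numeric severity values shown in the comment are illustrative, not guaranteed outputs):

```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english'],
  severityLevels: true,
  replaceWith: '***',
});

// Fast boolean check
if (filter.isProfane('what the damn')) {
  // Detailed report with replacement and severity info
  const report = filter.checkProfanity('what the damn');
  console.log(report.profaneWords);  // ['damn']
  console.log(report.processedText); // 'what the ***'
  console.log(report.severityMap);   // e.g. { damn: 1 } (illustrative value)
}
```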
⚛️ useProfanityChecker Hook
A custom React hook for using the profanity checker.
Parameters
- `config`: An optional configuration object (same as `ProfanityCheckerConfig`).
Return Value
- `result`: The result of the profanity check.
- `checkText`: A function to check a given text for profanities.
- `checkTextAsync`: A function to check a given text for profanities asynchronously.
- `reset`: A function to reset the result state.
- `isDirty`: Boolean indicating if profanity was found.
- `isWordProfane`: Function to check if a single word is profane.
```ts
const { result, checkText, checkTextAsync, reset, isDirty, isWordProfane } = useProfanityChecker(config);
```

Note
⚠️ Glin-Profanity is a best-effort tool. Language evolves, and no filter is perfect. Always supplement with human moderation for high-risk platforms.
🛠 Use Cases
- 🔐 Chat moderation in messaging apps
- 🧼 Comment sanitization for blogs or forums
- 🕹️ Game lobbies & multiplayer chats
- 🤖 AI content filters before processing input
🔬 Advanced Features
Leetspeak Detection
Detect and normalize leetspeak variations like f4ck, @ss, $h!t:
```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english'],
  detectLeetspeak: true,
  leetspeakLevel: 'moderate', // 'basic' | 'moderate' | 'aggressive'
});

filter.isProfane('f4ck');    // true
filter.isProfane('@ss');     // true
filter.isProfane('$h!t');    // true
filter.isProfane('f u c k'); // true (spaced characters)
```

Leetspeak Levels:
- `basic`: Numbers only (0→o, 1→i, 3→e, 4→a, 5→s)
- `moderate`: Basic + common symbols (@→a, $→s, !→i)
- `aggressive`: All known substitutions including rare ones
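For instance, the level controls which substitutions are normalized. The sketch below is hedged: per the descriptions above, `basic` only maps digits, so symbol-based spellings may slip through at that level.

```ts
import { Filter } from 'glin-profanity';

const basic = new Filter({ languages: ['english'], detectLeetspeak: true, leetspeakLevel: 'basic' });
const aggressive = new Filter({ languages: ['english'], detectLeetspeak: true, leetspeakLevel: 'aggressive' });

basic.isProfane('sh1t');      // true: digit substitution (1 → i) is handled at every level
basic.isProfane('$h!t');      // likely false: symbol substitutions need 'moderate' or higher
aggressive.isProfane('$h!t'); // true: all known substitutions are applied
```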
Unicode Normalization
Detect homoglyphs and Unicode obfuscation:
```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english'],
  normalizeUnicode: true, // enabled by default
});

// Detects various Unicode tricks:
filter.isProfane('fυck'); // true (Greek upsilon υ → u)
filter.isProfane('fᴜck'); // true (Small caps ᴜ → u)
filter.isProfane('fuck'); // true (Zero-width spaces removed)
filter.isProfane('ｆｕｃｋ'); // true (Full-width characters)
```

Result Caching
Enable caching for high-performance repeated checks:
```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english'],
  cacheResults: true,
  maxCacheSize: 1000, // LRU eviction when full
});

// First call computes result
filter.checkProfanity('hello world'); // ~0.04ms

// Subsequent calls return cached result
filter.checkProfanity('hello world'); // ~0.00005ms (800x faster!)

// Cache management
console.log(filter.getCacheSize()); // 1
filter.clearCache();
```

Configuration Management
Export and import filter configurations for sharing between environments:
```ts
import { Filter } from 'glin-profanity';

const filter = new Filter({
  languages: ['english', 'spanish'],
  detectLeetspeak: true,
  leetspeakLevel: 'aggressive',
  cacheResults: true,
});

// Export configuration
const config = filter.getConfig();
// Save to file: fs.writeFileSync('filter.config.json', JSON.stringify(config));

// Later, restore configuration
// const savedConfig = JSON.parse(fs.readFileSync('filter.config.json'));
// const restoredFilter = new Filter(savedConfig);

// Get dictionary size for monitoring
console.log(filter.getWordCount()); // 406
```

ML-Based Detection
Optional TensorFlow.js-powered toxicity detection for context-aware filtering:
```bash
# Install optional dependencies
npm install @tensorflow/tfjs @tensorflow-models/toxicity
```

```ts
import { HybridFilter } from 'glin-profanity/ml';

const filter = new HybridFilter({
  languages: ['english'],
  detectLeetspeak: true,
  enableML: true,
  mlThreshold: 0.85,
  combinationMode: 'or', // 'or' | 'and' | 'ml-override' | 'rules-first'
});

// Initialize ML model (async)
await filter.initialize();

// Hybrid check (rules + ML)
const result = await filter.checkProfanityAsync('you are terrible');
console.log(result.isToxic); // true
console.log(result.mlResult?.matchedCategories); // ['insult', 'toxicity']
console.log(result.confidence); // 0.92

// Sync rule-based check (fast, no ML)
filter.isProfane('badword'); // true
```

ML Categories Detected:
- `toxicity` - General toxic content
- `insult` - Insults and personal attacks
- `threat` - Threatening language
- `obscene` - Obscene/vulgar content
- `identity_attack` - Identity-based hate
- `sexual_explicit` - Sexually explicit content
- `severe_toxicity` - Highly toxic content
📊 Benchmarks
Performance benchmarks on a MacBook Pro (M1):
| Operation | Throughput | Average Time |
|-----------|------------|--------------|
| isProfane (clean text) | 23,524 ops/sec | 0.04ms |
| isProfane (profane text) | 114,666 ops/sec | 0.009ms |
| With leetspeak detection | 22,904 ops/sec | 0.04ms |
| With Unicode normalization | 24,058 ops/sec | 0.04ms |
| With caching (cached hit) | 21,396,095 ops/sec | 0.00005ms |
| checkProfanity (detailed) | 3,677 ops/sec | 0.27ms |
| Multi-language (4 langs) | 24,855 ops/sec | 0.04ms |
| All languages (23 langs) | 14,114 ops/sec | 0.07ms |
Key Findings:
- Leetspeak and Unicode normalization add minimal overhead
- Caching provides 800x speedup for repeated checks
- Multi-language support scales well
Run benchmarks yourself:
```bash
npm run benchmark
```

License
This software is also available under the GLINCKER LLC proprietary license. The proprietary license allows for use, modification, and distribution of the software with certain restrictions and conditions as set forth by GLINCKER LLC.
You are free to use this software for reference and educational purposes. However, any commercial use, distribution, or modification outside the terms of the MIT License requires explicit permission from GLINCKER LLC.
By using the software in any form, you agree to adhere to the terms of both the MIT License and the GLINCKER LLC proprietary license, where applicable. If there is any conflict between the terms of the MIT License and the GLINCKER LLC proprietary license, the terms of the GLINCKER LLC proprietary license shall prevail.
MIT License
GLIN PROFANITY is MIT licensed.

