# toxicity-detection

Here are 39 public repositories matching this topic...

The Toxic Comment Detector is a tool powered by Hugging Face’s unitary/toxic-bert model, designed to identify harmful, offensive, or abusive language in real time. Built with a ReactJS frontend and a Flask backend, it provides detailed insights into toxicity levels, enabling safer online environments.

  • Updated Dec 12, 2024
  • JavaScript
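A minimal sketch of how a backend like the one described above might score text, assuming the `transformers` library and the `unitary/toxic-bert` checkpoint named in the entry. The `/api/check` route is hypothetical, added here only for illustration:

```python
# Sketch: Flask endpoint scoring text with unitary/toxic-bert.
# The route name and response shape are assumptions, not the repo's API.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)

# Loads the fine-tuned BERT toxicity classifier from the Hugging Face Hub.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

@app.route("/api/check", methods=["POST"])
def check():
    text = request.get_json().get("text", "")
    # top_k=None returns a score for every toxicity label,
    # giving the "detailed insights into toxicity levels" the entry mentions.
    scores = classifier(text, top_k=None)
    return jsonify(scores)

if __name__ == "__main__":
    app.run(debug=True)
```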

An AI-powered content moderation system using Python and Hugging Face Transformers. It combines rule-based filtering with machine learning to detect and block toxic, profane, and politically sensitive content, helping developers and communities create safer, more positive online spaces.

  • Updated May 23, 2025
  • Python
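One way to combine rule-based filtering with a learned classifier, as this entry describes, is to run a cheap word-list check first and only escalate to the model when it passes. A minimal sketch, assuming the `transformers` library; the blocklist, model choice, and threshold are illustrative, not taken from the repository:

```python
# Sketch: two-stage moderation, rules first, then an ML classifier.
from transformers import pipeline

BLOCKLIST = {"badword1", "badword2"}  # hypothetical rule-based terms

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def moderate(text: str) -> bool:
    """Return True if the text should be blocked."""
    # Stage 1: cheap rule-based check catches known terms instantly.
    if any(word in text.lower().split() for word in BLOCKLIST):
        return True
    # Stage 2: the ML model catches toxicity the word list misses.
    result = classifier(text)[0]
    return result["score"] > 0.5  # all toxic-bert labels are toxicity types
```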

🤖 An intelligent AI agent for real-time content moderation. Reports 97.5% accuracy; a multi-stage ML pipeline combining zero-tier filtering, embeddings, a fine-tuned BERT classifier, and RAG; described as production-ready.

  • Updated Jul 29, 2025
  • Python
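The staged design this entry describes typically escalates from cheap checks to expensive models. A minimal sketch of that idea, with a rule filter first, an embedding-similarity gate second, and a fine-tuned classifier as the final arbiter (the RAG stage is omitted); all model names, thresholds, and reference examples here are assumptions for illustration:

```python
# Sketch: staged moderation pipeline, cheapest checks first.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

BLOCKLIST = {"spamlink.example"}                    # stage 0: trivial rules
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # stage 1: embeddings
classifier = pipeline("text-classification",
                      model="unitary/toxic-bert")   # stage 2: fine-tuned BERT

# Reference examples of content to flag, compared by cosine similarity.
toxic_refs = embedder.encode(["I hate you", "go away idiot"])

def moderate(text: str) -> str:
    if any(term in text.lower() for term in BLOCKLIST):
        return "blocked:rule"
    sim = util.cos_sim(embedder.encode(text), toxic_refs).max().item()
    if sim < 0.3:                    # clearly benign: skip the heavy model
        return "allowed:embedding"
    result = classifier(text)[0]     # escalate borderline cases to BERT
    return "blocked:bert" if result["score"] > 0.5 else "allowed:bert"
```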
