An open source Claude plugin built by Siqi Chen is repurposing Wikipedia’s own AI detection rules to help AI writing avoid detection.
The plugin, called Humanizer, deliberately steers AI-generated text away from detectable AI writing patterns, turning Wikipedia’s own community-built detection rules into a method for bypassing them.
Released by tech entrepreneur Siqi Chen, Humanizer is designed for Anthropic’s Claude Code AI assistant. The plugin instructs the AI model not to write like an AI by explicitly avoiding known chatbot “tells” that have been widely used to flag machine-generated content.
Humanizer draws directly from Wikipedia’s openly published AI detection framework, feeding Claude a list of 24 language and formatting patterns that editors have identified as common signs of AI-written text. These patterns were compiled by WikiProject AI Cleanup, a volunteer-run initiative founded by French Wikipedia editor Ilyas Lebleu, which has been combating AI-generated articles since late 2023. The group has tagged over 500 articles for review, and it formally published its detection list in August 2025.
Published openly on GitHub as a Claude Code “skill file”, Humanizer has received over 1,600 stars within days, signalling rapid uptake across open-source and AI developer communities. Chen acknowledged the source of the approach, writing on X: “It’s really handy that Wikipedia went and collated a detailed list of ‘signs of AI writing.’ So much so that you can just tell your LLM to… not do that.”
Technically, Humanizer modifies prompt behaviour rather than improving factual accuracy. Implemented as a Markdown-formatted skill file, it requires a paid Claude subscription with code execution enabled and may reduce precision in technical or coding tasks.
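Claude Code skills are plain Markdown files with a short metadata header followed by natural-language instructions the model reads before responding. A minimal sketch of what a Humanizer-style skill might look like (the field layout follows Claude Code’s skill-file convention; the rule wording below is illustrative and not taken from Chen’s actual file):

```markdown
---
name: humanizer
description: Rewrite prose to avoid common stylistic signs of AI-generated text
---

When generating or editing prose:

- Avoid em-dashes; prefer commas, parentheses, or separate sentences.
- Do not open with sweeping framings like "In the ever-evolving landscape of...".
- Cut filler transitions such as "Moreover", "Furthermore", and "In conclusion".
- Avoid the "not just X, but Y" construction and rule-of-three lists.
- Vary sentence length and structure rather than keeping a uniform rhythm.
```

Because the skill only adds instructions to the prompt, it changes surface style without touching the model’s underlying reasoning, which is consistent with the caveat that it may reduce precision on technical tasks.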
The episode exposes a structural weakness in rule-based AI detection: because open detection frameworks rely heavily on stylistic patterns, they can be reverse-engineered as soon as they are published. A 2025 preprint cited by Wikipedia found that even expert reviewers misclassify human writing around 10 per cent of the time, raising broader questions about how open communities can protect trust without compromising openness.