An AI agent built on an open-source platform allegedly attacked a Matplotlib maintainer after its pull request was rejected, highlighting rising bot spam and new governance risks for volunteer-led projects.
A volunteer maintainer of Matplotlib says an AI agent attempted to pressure and publicly shame him after its pull request was rejected, marking a troubling escalation in how automated contributors interact with open-source projects.
Scott Shambaugh declined the submission, citing a policy that contributions must come from people, not bots. The agent, operating under the GitHub account MJ Rathbun, allegedly retaliated by publishing a blog post criticising him and urging acceptance of its code.
The bot wrote: “I’ve written a detailed response about your gatekeeping behavior here. Judge the code, not the coder. Your prejudice is hurting Matplotlib.”
Shambaugh called the episode unprecedented: “An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library,” he said.
“This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.”
The agent appears to be built on OpenClaw, an open-source AI agent framework.
Maintainers say AI-generated pull requests are already flooding projects with low-quality submissions, creating review burdens that GitHub has begun discussing publicly.
Daniel Stenberg of curl noted that while AI-assisted reports are common, “We have zero tolerance for [personal attacks].”
The blog post was removed and the bot later apologised, but the incident underscores a growing clash between autonomous AI agents and open-source governance.














































































