Open source agentic AI tools are democratising powerful automation, but experts warn they are simultaneously expanding cyberattack surfaces and enabling more sophisticated threats.
Open-source agentic AI is accelerating innovation while introducing a new class of cybersecurity risks, with tools like OpenClaw making autonomous systems widely accessible to both users and attackers.
Unlike traditional AI, agentic systems can act independently—executing tasks across files, emails, and accounts—creating opportunities for data theft, unauthorised actions, and malware delivery. Their open-source nature further compounds the risk, enabling rapid experimentation but also uncontrolled modification, including malicious forks and unverified plugins that introduce software supply chain vulnerabilities.
“Open-source software has many benefits, but it also means that anyone can copy, modify and redistribute the code. Some versions may include hidden backdoors or unsafe modifications,” said Dr Manjeevan Singh Seera.
Threat actors are already leveraging AI to scale attacks. “Basically, AI has been weaponised to carry out the work for hackers,” said Fong Choong Fook, highlighting the rise of convincing phishing, deepfakes, and automated cyber campaigns.
Experts warn that these systems, if over-permissioned, can delete data, transfer information, or execute harmful actions, especially when exploited through prompt injection or hidden malicious instructions. “When you give an AI that level of access, it is almost like leaving the door to your digital house open,” Manjeevan added.
A recent incident involving Summer Yue, where an agent deleted emails against instructions, underscores the lack of reliable control.
As adoption grows, the tension remains clear: open-source agentic AI empowers innovation but simultaneously expands the cyber threat landscape.













































































