Linux Kernel Greenlights AI-Assisted Code With Human Liability Rules


The Linux kernel has formally allowed AI-assisted code submissions, introducing a mandatory ‘Assisted-by’ disclosure tag while keeping full legal and technical accountability with human developers. The move sets a major open-source governance precedent for AI-era contributions.

The Linux kernel project has formally established a project-wide policy allowing AI-assisted code contributions, marking a major shift in how open-source governance is adapting to the AI era.

At the heart of the new framework is a mandatory transparency rule: an AI tool cannot itself provide the legally binding Signed-off-by tag. The human contributor must still sign off on the patch, and must additionally disclose AI involvement with an “Assisted-by” tag, creating a clear audit trail for AI-assisted submissions.

Crucially, the policy preserves full human accountability. Any bugs, regressions, security flaws, licence issues, or provenance risks tied to AI-generated code remain the legal and technical responsibility of the human contributor submitting the patch, safeguarding DCO integrity and maintainership discipline.

The policy follows months of internal debate within the Linux ecosystem over whether AI-generated patches should be restricted. Linux creator and kernel maintainer Linus Torvalds dismissed calls for outright bans as “pointless posturing,” reinforcing the project’s pragmatic stance that AI should be treated as just another developer tool.

The decision also responds to growing legal concerns around licence provenance, as LLMs trained on GPL and other restrictive open-source codebases may generate potentially copyright-tainted outputs.

With projects such as Gentoo and NetBSD opting for bans, Linux’s disclosure-first, accountability-led framework could now emerge as a template for open-source communities navigating AI-assisted development at scale.
