Official Linux Kernel Rules for AI-Assisted Contributions
The Linux kernel project has formally set rules for AI-assisted contributions. Official documentation now allows developers to use AI coding tools across all parts of kernel development, but the human submitter carries full legal and technical responsibility for anything submitted.
The guidelines are now part of the kernel’s official documentation. They were authored by stable kernel maintainer Sasha Levin of Nvidia and backed by Linus Torvalds. The policy grew out of consensus reached at the 2025 kernel maintainers summit in Tokyo and was merged into the kernel repository this past week.
Human Responsibility for AI-Generated Kernel Code
Signed-off-by Tags Must Come From a Human
Under the new framework, AI agents are explicitly prohibited from adding Signed-off-by tags. Only a human can certify the Developer Certificate of Origin, which is the legal mechanism contributors use to confirm that their code complies with the kernel’s GPL-2.0 license.
A developer can use AI to help produce code, but the certification still belongs to a person. The human submitter must stand behind the work, not the tool.
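In practice, that certification is the Signed-off-by trailer that git appends when a human commits with the -s flag. A minimal sketch of the mechanism, using a throwaway repository and a hypothetical developer identity:

```shell
# Demo: the DCO sign-off is added by the human committer via `git commit -s`,
# which appends a Signed-off-by trailer from the committer's configured identity.
# The repo, name, and email here are illustrative.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.name "Jane Developer"
git config user.email "jane@example.com"
echo "fix" > patch.txt
git add patch.txt
git commit -q -s -m "example: demonstrate DCO sign-off"
# Print the trailer that was appended to the commit message.
git log -1 --format=%B | grep "Signed-off-by:"
```

Under the new policy, an AI agent must never be the one to emit this trailer; the tool may draft the code, but the sign-off is the human's act of certification.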
Developers Must Review and Own the Submission
Anyone submitting AI-generated code must review that code, verify licensing compliance, and accept complete responsibility for the contribution. The rule is direct: if you submit it, you own it.
That means the project is not treating AI as an independent contributor. It is treated as assistance inside a workflow where the human developer remains accountable for the final result.
The New Assisted-by Tag Requirement
Contributors are also required to include a new Assisted-by attribution tag. This tag must specify the AI tool name, the model version, and any specialized analysis tools used.
An example format is:
Assisted-by: Claude:claude-3-opus coccinelle sparse
This requirement pushes disclosure into the contribution process itself. Instead of leaving AI use vague or assumed, the policy asks contributors to be explicit about which tools were involved.
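Putting the two requirements together, a complete trailer block on an AI-assisted patch might look like the following sketch (the subject line, change description, and developer identity are illustrative, not taken from the kernel documentation):

```
docs: fix typo in memory-barriers.txt

Correct a misspelled function name in the examples section.

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.com>
```

Note the division of labor: the Assisted-by line discloses the tools involved, while the Signed-off-by line comes from the human who reviewed the result and accepts responsibility for it.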
Linux Kernel AI Policy Focuses on Accountability Over Enforcement
Torvalds and Levin have made clear that this policy is built for good actors. They also acknowledge the limit of any formal rule here: bad actors who want to hide AI usage could simply choose not to disclose it.
So the policy is not trying to solve dishonesty through strict enforcement. It is centered on accountability. The practical idea is simple: a human signs off on AI-assisted code and therefore takes legal and technical ownership of it. In that sense, the responsibility was already there. The difference now is that it has been formally written down.
Why the Linux Kernel Is Addressing AI Coding Tools Now
AI tools have become deeply embedded in developer workflows. A recent survey found that 84 percent of developers now use AI coding tools. Kernel maintainers have already brought an AI code review system called Sashiko into their workflow.
Greg Kroah-Hartman, a senior kernel maintainer, noted in March that AI-driven activity around Linux security and code review had increased sharply in recent weeks.
Taken together, those shifts explain why the project moved from informal practice to documented policy. AI assistance is no longer peripheral. It is now part of the working reality around kernel development.
Broader Open-Source Implications of the Linux Kernel AI Policy
A Possible Template for Other Open-Source Projects
The Linux kernel’s approach may become a model for other open-source projects facing the same questions. Rather than trying to settle every argument around AI-generated code, the policy concentrates on the issue that affects maintainers most directly: who is responsible when something breaks or creates legal risk.
That makes the framework practical. It does not try to remove uncertainty around AI. It establishes a clear owner for the output.
Copyright Questions Remain Unresolved
The policy does not attempt to resolve the broader debate over whether AI-generated code trained on copyrighted material can be properly licensed under GPL. That issue remains contested, and community discussions have already highlighted unresolved copyright concerns.
The kernel team’s answer, at least for now, is to prioritize transparency and human responsibility. Instead of building the policy around unresolved theory, it is built around disclosure and accountability.
What the Linux Kernel AI Contribution Rules Actually Change
The biggest change is not that AI can now be used. It is that the project has formally documented how that use must be handled.
Developers can use AI coding tools across kernel development. But they cannot pass off certification to an AI system. They cannot let an AI add the legal sign-off. And they must disclose AI assistance through the new Assisted-by tag, including the tool name, model version, and any specialized analysis tools involved.
In practice, the policy makes one principle unmistakable: AI can assist, but only a human can submit, certify, and take the blame if something goes wrong.

