
AI News

17 Apr 2026


How to follow Linux kernel AI contribution policy safely

Linux kernel AI contribution policy requires disclosure and human signoff so commits stay safe and legal.

Want to use AI when sending patches? The Linux kernel AI contribution policy requires clear disclosure and a human owner for every change. Add an Assisted-by line when tools helped, and sign off as responsible. Keep patches small, tested, and explained. This guide shows safe steps that maintainers expect. The kernel now treats AI like any other tool: helpful, but only if a person stays in charge. You must say when you used a model and you must accept full responsibility. That means you read the code, you understand it, and you can fix it. The bar is not higher than before. It is now written down and enforced.

What the Linux kernel AI contribution policy requires

Disclosure with Assisted-by

  • Add an “Assisted-by:” line to your commit when an AI tool influenced the change in any way.
  • Name the tool and, if you can, the version (for example: Assisted-by: ChatGPT, Assisted-by: GitHub Copilot 1.x).
  • When unsure, disclose. Openness builds trust with maintainers.

Human sign-off and ownership

  • Every patch still needs a human “Signed-off-by:” line.
  • You, the signer, own code quality, security, and legal risk.
  • If the code breaks things, you help fix it. If there is a legal issue, you are on the hook.

Understand what you submit

  • Be able to explain what the code does, why it is correct, and how you tested it.
  • Remove made-up comments, invented APIs, or copied text the tool might have inserted.
  • Keep changes small and focused so review stays practical.

How to comply with the Linux kernel AI contribution policy in practice

Before you write code

  • Check the MAINTAINERS file to see who will review your patch and any subsystem rules.
  • Search the mailing list for past discussions. Avoid repeat or rejected ideas.
  • Decide the smallest useful change you can make first.
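These checks can be scripted from inside a kernel source tree. The `get_maintainer.pl` script is real kernel tooling; the file path and patch name below are illustrative only:

```shell
# Inside a kernel tree: list the maintainers and mailing lists
# for the file you plan to touch (example path).
./scripts/get_maintainer.pl drivers/net/ethernet/example.c

# Or run it against a finished patch to build your Cc list.
./scripts/get_maintainer.pl 0001-example.patch

# Skim recent history of the file for context and past discussions.
git log --oneline -10 -- drivers/net/ethernet/example.c
```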

While using an AI tool

  • Use it for drafts, outlines, or test ideas, not as an auto-merge machine.
  • Never paste proprietary or secret code into prompts.
  • Keep a local note of prompts, tool versions, and edits you made. Do not ship this note unless asked.

Code and test

  • Review every line yourself. Rewrite unclear parts in your own words.
  • Build the kernel or the changed subtree. Fix all compile warnings.
  • Run basic tests that touch your change. Note what you ran and the results.
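A typical build-and-check sequence inside a kernel tree might look like this; the target path is an example, and the right configs depend on your subsystem:

```shell
# Rebuild just the changed object with extra warnings enabled.
make W=1 drivers/net/ethernet/example.o

# Build everything under the directory you touched.
make drivers/net/ethernet/

# Run the kernel's style and patch checker on your latest commit.
./scripts/checkpatch.pl --strict -g HEAD
```

Keeping a copy of this output with your local notes makes it easy to answer a reviewer who asks what you ran.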

Prepare a solid commit

  • Write a clear commit message: problem, why, how, and impact.
  • Add tags at the end, for example:
    Assisted-by: ToolName vX.Y
    Signed-off-by: Your Name <email>
  • Split unrelated changes into separate patches.
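The steps above can be sketched end to end in a throwaway repository; the subject line, file, identity, and tool name are placeholders standing in for a real kernel change:

```shell
# Throwaway repo standing in for a kernel checkout.
repo=$(mktemp -d)
cd "$repo"
echo 'int demo;' > demo.c
git init -q
git add demo.c

# One focused commit: problem, why, how, and impact in the body,
# tags in a clean block at the end.
git -c user.name="Alex Example" -c user.email="alex@example.com" \
    commit -q -m "subsys: fix example issue

Describe the problem, why this fix is correct, and how it was tested.

Assisted-by: ExampleTool v1.0
Signed-off-by: Alex Example <alex@example.com>"

# Generate the patch email you would send to the list
# (writes 0001-subsys-fix-example-issue.patch).
git format-patch -1
```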

When to add “Assisted-by”

Always disclose for direct code help

  • You asked a model to write or transform code you used, even if you edited it.
  • You relied on AI to suggest fixes, tests, or refactors you kept.

Also disclose for indirect help

  • You used AI to explain an API and that shaped your patch.
  • You used AI to draft parts of the commit message or comments.

Disclosure likely not needed

  • You only read official docs, kernel code, or human-written threads.
  • You used common tools like grep, compilers, or linters with no AI features.

If you are on the fence, include the tag. The cost is low. The trust gain is high.

License and legal hygiene

Reduce copyright risk

  • Do not copy large outputs from a model without heavy review and editing.
  • Do not ask a model to “port code from project X” or to “reproduce file Y.”
  • If the tool inserts text that looks copied, remove or rewrite it.

Respect kernel licensing

  • The kernel is GPL-2.0. Your change must be compatible.
  • Do not import code of unknown origin. Be able to vouch for authorship.
  • Remember: your Signed-off-by asserts you have the right to submit the code.

Quality checks maintainers expect

  • Build succeeds on relevant configs (for example, the driver’s target arch).
  • No obvious undefined behavior or API misuse.
  • Commit message references affected files and functions.
  • Change is minimal, logically grouped, and easy to revert if needed.
  • Style issues are fixed only when touching that code for a real reason.

Common pitfalls that trigger NAKs

  • Huge, multi-purpose patches that bury the real change.
  • Invented functions, constants, or comments from the model.
  • “Vibe-coding” without tests or proof of need.
  • Silence about tool use when it is obvious from the diff.
  • Letting an agent open patches with no human review.

Example tag block (at the end of the commit message)

Assisted-by: GitHub Copilot 1.25
Signed-off-by: Alex Example <alex@example.com>

You can add more context in the message body if it helps reviewers, but keep the tags clean and standard.

Why this policy helps everyone

  • It keeps review workloads sane by setting clear duties.
  • It protects the project from sneaky license risks.
  • It rewards honest contributors who do the work and explain it.

Other projects have chosen bans or strict gates. The kernel chose disclosure and human responsibility. That matches its long history of traceable authorship and careful review.

Strong tools make it easy to produce lots of code fast. Good patches still need human care. If you disclose help, own the result, and ship tested, focused changes, you will follow the Linux kernel AI contribution policy and keep maintainers on your side.

(Source: https://hackaday.com/2026/04/14/new-linux-kernel-rules-put-the-onus-on-humans-for-ai-tool-usage)


FAQ

Q: What does the Linux kernel AI contribution policy require when using AI tools for a patch?
A: The policy requires clear disclosure of any AI assistance with an Assisted-by line and a human Signed-off-by who accepts responsibility for code quality, security, and legal risk. Patches should be small, tested, and explained so maintainers can review and track AI-influenced changes.

Q: When should I add an “Assisted-by:” line to my commit?
A: Add an Assisted-by line whenever a model wrote or transformed code you used, whenever you kept AI-suggested fixes, tests, or refactors, and also when AI explanations or drafted commit text shaped your patch. If you only read official docs or used non-AI tools, you likely don’t need it, but when in doubt include the tag: the cost is low and the trust gain is high.

Q: What does human sign-off mean under this policy, and who is responsible?
A: Human sign-off means adding a Signed-off-by line and personally owning the submitted change; the signer bears responsibility for code quality, fixes, and any legal issues. You must read, understand, and be able to explain and fix the code you submit.

Q: How should I prepare a commit message when submitting AI-assisted changes?
A: Write a clear commit message that states the problem, why the change is correct, how you tested it, and the impact, then append tags such as Assisted-by: ToolName vX.Y and Signed-off-by: Your Name <email>. Keep the tag block clean, split unrelated work into separate patches, and add more context in the message body if it helps reviewers.

Q: What legal and licensing precautions does the policy recommend for AI-generated code?
A: Avoid copying large model outputs, do not ask a model to port or reproduce files from other projects, and remove or rewrite any inserted text that looks copied. Remember the kernel is GPL-2.0, do not import code of unknown origin, and your Signed-off-by asserts you have the right to submit the code.

Q: What quality checks do maintainers expect before accepting AI-assisted patches?
A: Maintainers expect the build to succeed on relevant configurations, no obvious undefined behavior or API misuse, and a change that is minimal and logically grouped so it is easy to review and revert if needed. Review every line, fix compile warnings, run basic tests that touch your change, and reference affected files and functions in the commit message.

Q: What common pitfalls can trigger a NAK under the new rules?
A: Common NAK triggers include huge multi-purpose patches, invented functions, constants, or comments inserted by the model, and “vibe-coding” without tests or proof of need. Silence about obvious AI use, or letting an agent open patches without human review, can also lead to rejection.

Q: How can I practically comply with the policy while using AI tools?
A: Check the MAINTAINERS file and mailing list before coding, choose the smallest useful change, and use AI for drafts or ideas rather than as an auto-merge machine. Never paste proprietary code into prompts, keep a local note of prompts and edits (do not ship the note unless asked), review and rewrite every line, build and run basic tests, then submit a clear commit with Assisted-by and Signed-off-by tags.
