
Update AI policy, per NLnet document. #1312

Merged
stevepiercy merged 6 commits into main from ai-policy-rev-2026-04-07 on Apr 14, 2026

Conversation

@stevepiercy
Member

stevepiercy commented Apr 8, 2026

Description

Revise AI policy per:

Checklist

  • I've added a change log entry to CHANGES.rst.
  • I've added or updated tests if applicable.
  • I've run and ensured all tests pass locally by following Run tests.
  • I've added or edited documentation, both as docstrings to be rendered in the API documentation and narrative documentation, as necessary.

📚 Documentation preview 📚:

https://icalendar--1312.org.readthedocs.build/en/1312/contribute/index.html#artificial-intelligence-policy

@stevepiercy stevepiercy requested review from SashankBhamidi, abe-101 and niccokunzmann and removed request for abe-101 and niccokunzmann April 8, 2026 02:14
@stevepiercy
Member Author

@tobixen would you please review this as well?

@stevepiercy stevepiercy mentioned this pull request Apr 8, 2026
1 task
@coveralls

coveralls commented Apr 8, 2026

Coverage Report for CI Build 24400947946

Coverage decreased (-0.008%) to 97.914%

Details

  • Coverage decreased (-0.008%) from the base build.
  • Patch coverage: No coverable lines changed in this PR.
  • 1 coverage regression across 1 file.

Uncovered Changes

No uncovered changes found.

Coverage Regressions

1 previously-covered line in 1 file lost coverage.

File | Lines Losing Coverage | Coverage
src/icalendar/cli.py | 1 | 94.74%

Coverage Stats

Coverage Status
Relevant Lines: 12558
Covered Lines: 12301
Line Coverage: 97.95%
Relevant Branches: 769
Covered Branches: 748
Branch Coverage: 97.27%
Branches in Coverage %: Yes
Coverage Strength: 2.92 hits per line

💛 - Coveralls

@read-the-docs-community

read-the-docs-community Bot commented Apr 8, 2026

Documentation build overview

📚 icalendar | 🛠️ Build #32256536 | 📁 Comparing ee1966c against latest (bd6609b)

  🔍 Preview build  

4 files changed
± 404.html
± contribute/index.html
± reference/changelog.html
± contribute/documentation/style-guide.html

@tobixen
Contributor

tobixen commented Apr 8, 2026

Looks good to me, but if I'm not mistaken, the NLnet policy also states that prompts given to the AI should be included in the commit message. (That's something I find a bit hard: quite often I end up chatting with the AI or rejecting code changes with more detailed instructions. I guess that, by the definition of "prompts", all my feedback to the AI should be included, but my words alone are useless without the context, and all the context may be a much bigger data blob than the changeset itself.)

Post-edit: I see now that "include the prompts" is already stated in the changeset. Sorry for missing that.

@stevepiercy
Member Author

@tobixen no worries, I gots the prompts in there. Thanks for your review.

BTW, when I tried to add you to the list of reviewers, your username didn't appear in the autocomplete input. I assume that means you aren't a member of the collective GitHub organization? If you want that, see https://collective.github.io/ and "To join the Plone Collective Organization, please open a ticket."

niccokunzmann previously approved these changes Apr 8, 2026
Member

niccokunzmann left a comment


Thanks! Nice to know that it complies.

@niccokunzmann
Member

Feel free to merge from my point of view - except of course if you wait for others to review.

@niccokunzmann
Member

Adding prompts is quite hard to do. Where should they be added? The changelog is not the place for that. Also, NLnet has an anti-AI stance, or is at least quite strict. Yes, we want good code. Sometimes the AI is really good at creating a first draft of small side projects, so maybe that changes things. I would be alright with just disclosing AI use in the PR: changelog -> issue -> PR.

@tobixen
Contributor

tobixen commented Apr 8, 2026

As I read the NLnet policy, prompts must be logged and made available to NLnet, and it's suggested to have them in the commit message. (Note that the AI policy is from late 2025, so it is not enforceable for MoUs that were signed prior to that. However, our latest update to the MoU was done in February, meaning it probably already applies to us now?)

For the case where some "viber" writes one or a few sentences about what he wants to achieve and then accepts whatever the "GenAI" throws up, it makes perfect sense to include the prompt in the commit message. The way I typically use Claude Code, this doesn't make much sense: a typical prompt may start like "have a look into github issue #54321 and come up with a suggestion on how to solve it", followed by chats, sometimes critical questions like "have you considered XXX". Claude Code frequently stops and asks for permission to do stuff, including edits, so halfway through the code generation I may protest and prompt it to do things differently. Including my prompts in the commit message would be possible, but silly, as they are highly dependent on all the context. I don't think it's an option to include the whole chat in the commit message.

While working with the CalDAV library, some of the output from Claude has been stored in a docs/design folder. It is certainly possible to dump the full chat log into such a directory under the project. It will be included in the git repository, but neither made available on pypi nor readthedocs.

A concern then may be that the git repository becomes "bloated". This may be solved by having a "sidecar repository" containing only AI chats.

@stevepiercy
Member Author

If we want to risk not getting paid by NLnet, then we can make git logging of AI prompts optional. I'm not willing to accept that risk.

Non-Compliance

Failure to comply with the above policy may result in rejection of the proposal or ultimately in the termination of the running grant. [emphasis added, --steve]

NLnet asks that the results be reproducible. Examples are available at https://nlnet.nl/foundation/policies/generativeAI/#note2. That seems perfectly reasonable to me and far less detailed than @tobixen describes.

Member

SashankBhamidi left a comment

I like this.

@tobixen
Contributor

tobixen commented Apr 8, 2026

NLnet asks that the results be reproducible. Examples are available at https://nlnet.nl/foundation/policies/generativeAI/#note2. That seems perfectly reasonable to me and far less detailed than @tobixen describes.

It's doable in many cases, like python-caldav/caldav@42ea7fe

What if the actual initial prompt was Give me some ideas on how to solve github issue #123, followed up with long chats, including rejections of generated code with detailed instructions on how things should be done? I don't think it would suffice to add Prompt: Give me some ideas on how to solve github issue #123 to the github issue. Given their stance on service providers, I'm not even sure prompt: make tests for github issue #123 would suffice.

@stevepiercy
Member Author

My simple answer: include all prompts.

The alternative is to omit prompts that make the result not reproducible and risk not receiving funding from NLnet.

I think NLnet is clear on their policy, and icalendar ought to be consistent with it, eliminating risk.

Of course, you could reach out to NLnet and clarify your question. I won't speak on their behalf.

@tobixen
Contributor

tobixen commented Apr 9, 2026

Better to be safe than sorry. For the prompt-problem, and when I'm using Claude Code on my projects, I'm now considering this approach:

  • Include prompts and follow-up-prompts (but without context or with a very brief summary of the context) in the commit messages. It should in general be manageable without bloating the commit messages too much.
  • When/if it makes sense, longer chats (or design documents with my comments) may be included under docs/design or some relevant subdirectory, with references in the commit message.
  • Claude Code saves everything in ndjson format. For the sake of transparency, I will also dump the relevant *.jsonl files under docs/design/claude-logs/<caldav-release>/...

There is also the other problem: for the main branch of mature projects, I've considered a linear history with self-sustainable commits - meaning one commit for every significant feature/fix - to be the way to go. However, NLnet wants the AI-generated and human-generated code in separate commits. I think the way to go is to stick to separate commits in feature/development branches, squash stuff before merging into the main branch, and accept cluttering the git repository with old/stale branches.

If tests are AI-generated and the code is human-generated, it should be OK to combine them in one commit and be explicit in the commit message about what is AI-generated and what is human-generated. It does take some discipline to get that right; even fixing up some simple comments in the generated tests should be done through prompts rather than manual edits, and the code itself should consequently be hand-typed in an editor (even if asking the AI for advice).
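The squash-before-merge workflow described above can be sketched with plain git commands. This is a minimal sketch in a throwaway repository; the issue number, file names, and commit messages are made up for illustration, not taken from the actual icalendar workflow:

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.org && git config user.name Dev
echo base > base.txt && git add base.txt && git commit -qm "Initial commit"

# Feature branch: AI-generated and human-written changes in separate commits,
# each commit message labelling its origin (and prompts, where applicable).
git checkout -qb fix-1234
echo "tests" > test_fix.py && git add test_fix.py
git commit -qm "Test for #1234 (AI-generated; prompt: 'write tests for #1234')"
echo "fix" > fix.py && git add fix.py
git commit -qm "Fix for #1234 (human-written)"

# Back on the default branch: squash into one self-contained commit,
# carrying the combined provenance notes in its message.
git checkout -q -
git merge --squash fix-1234 >/dev/null
git commit -qm "Fix #1234 (tests AI-generated, fix human-written; prompts in branch history)"
git log --oneline   # the default branch now holds the initial commit plus one squash commit
```

Keeping (rather than deleting) the feature branch preserves the per-origin commits that NLnet's policy asks for, at the cost of stale branches in the repository.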

@stevepiercy
Member Author

I'd suggest that you contact NLnet and discuss your proposal and ask them if that would be sufficient.

@tobixen
Contributor

tobixen commented Apr 9, 2026

I'd suggest that you contact NLnet and discuss your proposal and ask them if that would be sufficient.

Gerben just replied to my email, so we're in touch and exchanging viewpoints.

@niccokunzmann
Member

What is required from your perspective @stevepiercy to move this forward?

@SashankBhamidi
Member

I think this looks good.

When I submitted a proposal to NLnet, they required disclosure of any AI usage, including the prompts, if there was any usage.

So I’d say if we, as contributors to a proposal or grant, use AI, then we should disclose the prompts ourselves. It shouldn’t fall on the broader community, especially since we wouldn’t be submitting those kinds of PRs to an RfP anyway.

SashankBhamidi previously approved these changes Apr 12, 2026
@SashankBhamidi
Member

Gerben just replied to my email, so we're in touch and exchanging viewpoints.

@tobixen, is there something you'd like to add to this thread after this?

@stevepiercy
Member Author

I'm waiting for @tobixen to provide feedback.

@SashankBhamidi if a random contributor submits a pull request that uses AI and addresses one of the issues funded by NLnet, then the contributor must disclose their model and prompts, because we get paid to review it. For simplicity, I'd prefer not to have first-time contributors go through a decision tree of whether they must or may include this information, but instead state that it's required to be included in the commit message. We can decide whether the prompts are actually required before merging the pull request. Either we'll already have them, or we can point to the AI policy.

@niccokunzmann
Member

Once this is merged, #1314 will need to adapt the changelog in the given way. Let's see if they manage :)

@tobixen
Contributor

tobixen commented Apr 12, 2026

Nicco has been included as Cc in the conversation. This is the essence of the last email:

We have received similar feedback from various grantees. The policy was updated in january, it now allows to instead of the full prompt log provide ‘a summary thereof’, and finding alternative ways to achieve the goals of transparency, quality, and freely usable results. (...)

I trust you can find an approach that works for you. (...)

@niccokunzmann
Member

I have created https://github.com/pycalendar/ai-prompt-auto-commit
Installing the pre-commit hook should take care of appending all of Claude's prompts to the commit messages.
I used Claude to do it. We could use that in our repos for convenience. I will try it out.
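A git hook along these lines might look as follows. This is a hypothetical minimal sketch, not the actual ai-prompt-auto-commit code; it uses a prepare-commit-msg hook and assumes prompts are collected in a local .ai-prompts.txt file (an invented name for illustration):

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.org && git config user.name Dev

# Install a prepare-commit-msg hook that appends the collected prompt
# log (if any) to the commit message before the commit is recorded.
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
if [ -s .ai-prompts.txt ]; then
    { echo; echo "AI prompts used:"; cat .ai-prompts.txt; } >> "$1"
fi
EOF
chmod +x .git/hooks/prepare-commit-msg

printf 'Look into issue #123 and suggest a fix\n' > .ai-prompts.txt
echo change > file.txt && git add file.txt .ai-prompts.txt
git commit -qm "Fix #123"
git log -1 --format=%B   # the recorded message now includes the prompt log
```

prepare-commit-msg runs even for `git commit -m`, so the appended prompt log ends up in the recorded message without any manual step.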

@stevepiercy
Member Author

I have created https://github.com/pycalendar/ai-prompt-auto-commit Installing the pre-commit hook should take care of appending all of Claude's prompts to the commit messages. I used Claude to do it. We could use that in our repos for convenience. I will try it out.

After y'all are satisfied with it, we can suggest it as a tool to satisfy the one requirement, but it shouldn't hold up this PR. @niccokunzmann do you want to add an issue to revisit?

@niccokunzmann niccokunzmann self-requested a review April 12, 2026 22:40
niccokunzmann previously approved these changes Apr 12, 2026
Member

niccokunzmann left a comment


Yes, I am happy with it.

Member Author

stevepiercy left a comment

@niccokunzmann @tobixen please check this suggested change per your discussion with NLnet.

Comment thread docs/contribute/index.rst Outdated
@tobixen
Contributor

tobixen commented Apr 14, 2026

A strict AI policy has the (intentional?) side effect of discouraging "modern" developers from contributing with pull requests. I do not (yet) have a problem with too many shitty pull requests in my CalDAV library, hence I don't want to discourage people from delivering pull requests.

To reduce the added friction, I have a brief summary at the top of the CONTRIBUTING.md policy document:

Contributions are mostly welcome (but do inform about it if you've used AI or other tools). If the length of this text scares you, then I'd rather want you to skip reading and just produce a pull-request in GitHub.

And further down:

If you want to use AI and you're too lazy to read the AI Policy, then at least ask the AI to read it and chat with it to work out if your contribution is within the policy or not.

The AI policy itself also has some similar things: a "read me first" section at the top summarizing the most important points of the policy, and a section emphasizing that bugfixes are usually appreciated (even when they are vibed up).

The wish to encourage vs discourage people from dropping pull requests may vary from project to project - and I for sure will rephrase things a bit when/if "too many shitty pull requests" becomes a problem. I think it may be an idea to have a local AI policy document referencing the team document plus whatever local changes apply to the project.

(Edit: this pull-request is about the icalendar library, not the pycal org ... sorry, I was a bit too quick here. Anyway, I leave my thoughts above as they are, even if it may be irrelevant for the icalendar library)

Comment thread docs/contribute/index.rst
Comment thread docs/contribute/index.rst Outdated
Comment thread docs/contribute/index.rst Outdated
Comment thread docs/contribute/index.rst Outdated
Comment thread docs/contribute/index.rst
@stevepiercy stevepiercy enabled auto-merge April 14, 2026 13:15
@stevepiercy stevepiercy disabled auto-merge April 14, 2026 13:16
@stevepiercy
Member Author

Per conversation with @tobixen in Signal, and approval from @niccokunzmann and @SashankBhamidi, merging, as the changes are in alignment with the NLnet policy.

If any changes are needed after this merge, then please open a pull request with your suggestions so they can be debated.

@stevepiercy stevepiercy merged commit f0ba179 into main Apr 14, 2026
20 checks passed
@stevepiercy stevepiercy deleted the ai-policy-rev-2026-04-07 branch April 14, 2026 13:20
