This seems sensible to me. Some people are going to use AI tools like LLMs, and making them fully responsible keeps things transparent and flags to AI sceptics that they may want to peruse the code in more detail.
The biggest problems I can see are that you don't know whether the generated code has a compatible licence, and that upstream projects may not accept AI-generated code.
True, but you can ask the code submitter, as they have to flag that they have used some form of AI. Maybe it will be one of the learning points, and the policy will be updated later given Fedora's stance on licences and patents.
It will be interesting to watch at any rate.
I totally share your concern. But at the same time, I’m very close to giving up on that front.
I hate to be a pessimist, but the way I see it, open licenses (to be fair, not even just the open ones) are on their deathbed… LLMs are by definition plagiarism machines, and by the time we have petabytes (or whatever the order of magnitude is) of slop, licenses must be practically unenforceable, no? Unless there is some HUGE f*-up, where a commercial LLM spits out clear-cut evidence of the copyright holder's source… and even then it would be a complicated court case.