According to Linus Torvalds himself, he isn’t opposed to AI-based automation; automation isn’t a new invention. The whole kernel release process, from unit testing to release management, is highly developed and has been refined over time, including automated tasks such as build management and test runs, just to name a few. It’s a well-oiled machine, and it’s essentially a construct in which Linus himself isn’t actively coding anymore. He maintains, supervises the other maintainers, and manages the contributions of the whole crowd of developers who actively contribute code. As “the” kernel maintainer, he essentially acts as a product owner.
Someone said this is effectively comparable to what they call vibe coding in terms of AI usage: specifying goals and steering a crowd of contributors. But in this case, the conversation on the kernel mailing list takes place between human contributors and their work, not between a single human and an AI instance. In short, crowd-sourced intelligence.
And I don’t see any reason why some pretty tedious and time-consuming tasks shouldn’t be automated.
No mathematician nowadays calculates the thousandth decimal place of pi by bisecting polygons. They did so for centuries until far faster methods arrived; the Gauss–Legendre algorithm, built on Gauss’s work on the arithmetic–geometric mean, reduced the effort of computing pi’s decimal places immensely, from years of manual calculation to very efficient iteration.
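To make the speed-up concrete, here is a minimal sketch of the Gauss–Legendre iteration in Python. Each pass roughly doubles the number of correct digits, which is exactly why it outclasses polygon bisection; the function name and iteration count are my own choices for illustration.

```python
import math

def gauss_legendre_pi(iterations: int = 3) -> float:
    """Approximate pi via the Gauss-Legendre (AGM) iteration.

    Convergence is quadratic: each iteration roughly doubles
    the number of correct decimal digits.
    """
    a, b = 1.0, 1.0 / math.sqrt(2.0)  # initial arithmetic/geometric means
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0        # arithmetic mean
        b = math.sqrt(a * b)          # geometric mean
        t -= p * (a - a_next) ** 2    # correction term
        p *= 2.0
        a = a_next
    return (a + b) ** 2 / (4.0 * t)
```

With ordinary double-precision floats, three iterations already exhaust the available precision; arbitrary-precision arithmetic is needed to go further.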
In short, the question at hand is essentially how you interpret Anthropic’s press release. Project Glasswing is an initiative to refine cyber-security auditing, and the Linux Foundation is among the contributors. I don’t read this as “our model autonomously found these”; I interpret it as a model that is still being worked on by humans, in this case their Frontier Red Team. In their model development effort, they most likely trained the model on CVEs while withholding the very latest known ones, then ran automated evaluations to check whether it would find the CVEs they had purposely hidden from the available dataset. In this process, the model scored some hits. But in the end, their model developers are actively steering this model towards a useful security-audit automation tool.
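The holdout process speculated about above can be sketched in a few lines. This is purely illustrative: the class, function names, and dates are hypothetical, and nothing here reflects Anthropic’s actual pipeline — it only shows the general idea of a temporal split with a recall check.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CVERecord:
    cve_id: str      # e.g. "CVE-2024-0002" (illustrative)
    published: date  # advisory publication date

def temporal_holdout(records, cutoff):
    """Split CVEs into a training set (published before the cutoff)
    and a hidden evaluation set (published on/after it)."""
    train = [r for r in records if r.published < cutoff]
    held_out = [r for r in records if r.published >= cutoff]
    return train, held_out

def recall_on_holdout(model_findings, held_out):
    """Fraction of held-out CVEs the audit model rediscovered."""
    found = set(model_findings)
    hits = sum(1 for r in held_out if r.cve_id in found)
    return hits / len(held_out) if held_out else 0.0
```

A “hit” in this framing is simply a held-out CVE that the model rediscovers without having seen its advisory during training.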
In the end, I wouldn’t really care who discovered a CVE, or whether the audit was performed by automation, manually… or with an AI model, as long as the CVE is reported directly through the official channels.
My concern is more the fact that there are most certainly entities actively searching for zero-day exploits for a less noble cause, and who are holding back their findings for their own benefit.
As the Linux Foundation is actively involved in that project, and if the Linux security experts gain access to that model and its findings, steered in collaboration with Anthropic’s Frontier Red Team, it is a good thing. The goal, after all, is to hopefully stay ahead of those with malicious motivations.