I guess this is the end of human-dominated coding, and English is turning into the new programming language. We've seen Claude code a whole OS for embedded devices, which is a huge red flag. It's called VibeOS and it runs on a Raspberry Pi: https://github.com/kaansenol5/VibeOS
Torvalds has also been praising AI's code, saying it writes better code than he does, and I guess he may even use it on Linux after a proper trial, which he's running right now. He likes Google Antigravity and is vibe coding a visualizer tool with it: https://github.com/torvalds/AudioNoise
And I myself am training an AI model for plant health detection. With it I've entered three competitions: first place in two of them, and second place in the third.
Not the words I'd frame it with, as that leaves a lot of room for misunderstanding.
By Torvalds' own admission, his AudioNoise project is quite intentionally not his regular job. It's a hobby where he can test his skills at things he knows next to nothing about, relishing the experience of failing and learning.
I don’t believe his AudioNoise project should be conflated with the Linux kernel. It seems clear that he treats these as two very different projects, with entirely different reasons for being.
Currently, LLMs are a useful tool that, like any tool, requires a skilled person to use. And I don't mean skilled as in "I can AI me!", I mean skilled the way a team leader is relative to a member of their team. The leader is ultimately answerable for the code the team develops, and needs to ensure it meets spec.
Think of it like this. A junior developer is (was?) likely to copy/paste some code they found on StackOverflow that they didn't fully grasp, and hope it works. Maybe it does, but they don't understand the implications. With LLMs, it's the exact same issue: it's very much like dealing with a talented but junior developer. Perhaps worse, many people using LLMs are reverting to that copy/paste approach, under the name vibe coding. So it's like a junior developer managing a junior developer.
AI is best understood as “the greatest, most abrupt increase in productivity in human history.”
As such, it is best to look at how increases in productivity have changed the world in the past. Do humans get "replaced"? On a per-output basis, yes. And the overall trend does seem to be that the more productive the world gets, the less valuable the median labor-hour becomes, relative to the most valuable labor-hour.
And yet, people still do labor. I expect that people will still be working in 20 years. The real question is: will the abruptness of this change cause emergent social effects not seen in previous productivity revolutions? Will we get to the point where the value of an hour of median labor is less than the expense of keeping a human being alive for an hour? I don’t know.