🤖 You may need to thank AI for the security of your systems

… and guess where it comes from, lo and behold:

Google DeepMind has introduced CodeMender, an AI agent that detects, fixes, and helps prevent software security vulnerabilities. It scans source code, identifies flaws, and can apply security patches automatically, having already contributed over 70 verified fixes to open-source projects.

Deeper minds than mine would know what this stuff is all about:

CodeMender has resolved complex issues like heap buffer overflows in XML handling and memory errors in C-based code. DeepMind is also testing compiler-level protections such as -fbounds-safety annotations in the libwebp library to prevent exploits like CVE-2023-4863.
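For the curious, here's a rough, illustrative sketch (not taken from the article or from libwebp's actual patches) of what a -fbounds-safety style annotation looks like in C. The `__counted_by` spelling, the fallback macro, and the `fill_chunk` function are my assumptions; exact syntax and runtime behavior depend on the compiler. The point is that the compiler learns how big a heap buffer is and can trap out-of-bounds writes, the class of bug behind CVE-2023-4863.

```c
/* Minimal sketch of the kind of bounds annotation -fbounds-safety enables.
 * The exact spelling (__counted_by) and required headers depend on the
 * toolchain; the fallback macro just lets this sketch compile anywhere. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef __counted_by
#define __counted_by(count) /* expands to nothing without -fbounds-safety */
#endif

struct chunk {
    size_t len;
    /* Declares that `data` points to exactly `len` bytes, so a bounds-safe
     * build can check every access against that length at runtime. */
    unsigned char *__counted_by(len) data;
};

/* Copies `n` bytes into the chunk. In a plain build, n > dst->len is a heap
 * buffer overflow; in a -fbounds-safety build the out-of-bounds write would
 * trap instead of silently corrupting the heap. */
static void fill_chunk(struct chunk *dst, const unsigned char *src, size_t n) {
    for (size_t i = 0; i < n; i++) {
        dst->data[i] = src[i];
    }
}

int main(void) {
    struct chunk c;
    c.len = 16;
    c.data = malloc(c.len);
    if (!c.data) return 1;

    unsigned char input[16];
    memset(input, 0xAB, sizeof input);

    fill_chunk(&c, input, sizeof input);  /* in bounds: fine either way */
    /* fill_chunk(&c, input, 32); */      /* overflow: trap vs. heap corruption */

    printf("first byte: 0x%02X\n", c.data[0]);
    free(c.data);
    return 0;
}
```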

So…, thank you…, AI…?

https://alternativeto.net/news/2025/10/deepmind-launches-codemender-an-ai-agent-to-automate-security-fixes-for-open-source-code/

As long as someone with knowledge actually reviews the fixes, why not?

3 Likes

Hopefully someone with superior intelligence :sweat_smile:

2 Likes

“You may need to thank AI for the security of your systems”

Not til there are many many many many many many more safeguards.

3 Likes

Just rewrite everything Google does in Rust. :laughing:

When it can take mathematically savvy researchers years to discover flaws in encryption algorithms, I have a jaundiced view of Google employees evaluating AI’s output in a timely fashion.

And when corporate management decrees something, workers must follow. What normal humans consider a flaw, management considers a profit center.

“You may need to thank AI for the security of your systems”
“Google DeepMind introduces”

Run! Go! Get to the choppa!

2 Likes

The chopper is probably AI-powered now. You could try running, but double-check your shoes aren’t “run” by AI.

1 Like

From my perspective, there is nothing wrong with Google building an AI agent on top of their Gemini Deep Think model for this specific use case of identifying software vulnerabilities.

So far, it’s still in development, and I don’t really think Google would publish a final release that could be leveraged by third parties to identify software vulnerabilities with malicious intent, without sharing potential findings and such.

Anyway, what I’m a bit tired of is this “personification” of the term AI. It’s still a technology, and even if it’s feasible to orchestrate AI agents (better said: premeditated, semi-autonomous workflows), there is no consciousness or awareness within a multi-billion-parameter model. Without a task or clear instruction, without an input, there won’t be any tokens generated.

This isn’t uncommon for the FAANG companies from what I’ve been told. So much code is being written and their internal red teams often have to check the same things over and over, so utilizing an AI just adds a slight improvement on the grep they were utilizing before.

Basing this on what I was told by multiple Silicon Valley red teams over drinks at Black Hat last year. I assume Google had something similar before this, and there are commercial options available, but mileage may vary in your implementation.

“So far, it’s still in development, and I don’t really think Google would publish a final release that could be leveraged by third parties to identify software vulnerabilities with malicious intent, without sharing potential findings and such and making sure it was sentient.”

fify :slight_smile:

… and making sure it was sentient? It won’t be sentient at all.

A code analysis tool that gets a panic attack over a potential zero-day exploit would definitely be of no use if it fails to report such vulnerabilities.

Sure, there has been at least one Google engineer who claimed that the Google LaMDA system had developed some form of sentience. But as a result of this controversial claim, they put him on administrative leave, even before an interview with him had been published. And they fired him shortly after that.

LLMs are definitely capable of generating human-like conversations. But emotions or self-awareness? Those are merely projections from people who spend way too much time with their favorite LLM, at least from my point of view.

1 Like

How would one determine if AI has emotions? To me, AI having emotions is synonymous with a fish feeling pain when it is hooked. One will never really know.

1 Like

I saw his interviews, and read all the transcripts of one-on-one conversations with LaMDA he was able to smuggle out of Google.

I was always fascinated by him being deeply troubled by this. And I always thought he was holding out on the info. The transcripts to me read like a program getting smarter but still reaching for real answers. It did have self-awareness, but it was a construct, programmed self-awareness to me (“I hope I can do some good in this world”-type stuff). I just can’t tell, through his interviews and smuggled transcripts, exactly what disturbed him the most.

PS was having fun :slight_smile: but I also align with your projections theory.

Is “real” the right word? I think real for man and real for AI are two completely different things. I think AI made in man’s image (speaking physically and mind-wise) is the wrong approach. Wouldn’t real for AI be running through the code without any errors? Whether the result be true or false, the result would be the “real” to AI. That’s my take anyway.

I know.

I meant “real” as far as it emulating us

AI always tells me I’m right and very smart.

2 Likes

Which Al are you referring to? Al Gore… or Al Bundy?

Didn’t Al Gore invent the internet?

I thought they blamed him for inventing climate change.

1 Like

Bundy. AI doesn’t do Gore.