Google DeepMind has introduced CodeMender, an AI agent that detects, fixes, and helps prevent software security vulnerabilities. It scans source code, identifies flaws, and can apply security patches automatically, having already contributed over 70 verified fixes to open-source projects.
Deeper minds than mine would know what this stuff is all about:
CodeMender has resolved complex issues like heap buffer overflows in XML handling and memory errors in C-based code. DeepMind is also testing compiler-level protections such as -fbounds-safety annotations in the libwebp library to prevent exploits like CVE-2023-4863.
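For readers wondering what those compiler-level annotations look like in practice, here is a minimal, purely illustrative C sketch, assuming Clang's experimental -fbounds-safety extension (which supplies __counted_by through <ptrcheck.h>); the function and names below are hypothetical and not taken from the actual libwebp patches.

```c
/*
 * Illustrative sketch only, not the actual libwebp change.
 * Under Clang's experimental -fbounds-safety mode, <ptrcheck.h> supplies
 * __counted_by; the no-op fallback below keeps this compiling elsewhere.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#ifndef __counted_by
#define __counted_by(n) /* no-op fallback outside an -fbounds-safety build */
#endif

/* The annotation ties the pointer to its length parameter. With the
 * extension enabled, the compiler emits a runtime check, so an
 * out-of-bounds write traps instead of silently corrupting the heap. */
static void fill_row(uint8_t *__counted_by(len) dst, size_t len, uint8_t value) {
  for (size_t i = 0; i < len; ++i) {
    dst[i] = value; /* index checked against len under -fbounds-safety */
  }
}

int main(void) {
  uint8_t row[16];
  fill_row(row, sizeof row, 0xAB);
  printf("row[0] = 0x%02X\n", row[0]);
  return 0;
}
```

With the extension enabled, an out-of-bounds index into dst would trap at runtime rather than quietly overwriting adjacent heap memory, which is the class of bug behind exploits like CVE-2023-4863; without it, the macro expands to nothing and the code behaves like ordinary C.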
When it can take mathematically savvy researchers years to discover flaws in encryption algorithms, I have a jaundiced view of Google employees evaluating AI’s output in a timely fashion.
And when corporate management decrees something, workers must follow. What normal humans consider a flaw, management considers a profit center.
From my perspective, there is nothing wrong with Google building an AI agent on top of their Gemini Deep Think AI for this specific use case of identifying software vulnerabilities.
So far, it’s still in development, and I don’t really think Google would publish a final release that could be leveraged by third parties to identify software vulnerabilities with malicious intent, without sharing potential findings and such.
Anyway, what I’m a bit tired of is this “personification” of the term AI. It’s still a technology, and even if it’s feasible to orchestrate AI agents (better said: premeditated, semi-autonomous workflows), there is no consciousness or awareness within a multi-billion-parameter model. Without a task or clear instruction, without an input, there won’t be any tokens generated.
This isn’t uncommon for the FAANG companies, from what I’ve been told. So much code is being written, and their internal red teams often have to check the same things over and over, so utilizing an AI just adds a slight improvement over the grep they were utilizing before.
I’m basing this on what I was told by multiple Silicon Valley red teams over drinks at Black Hat last year. I assume Google had something similar before this, and there are commercial options available, but mileage may vary in your implementation.
“So far, it’s still in development, and I don’t really think Google would publish a final release that could be leveraged by third parties to identify software vulnerabilities with malicious intent, without sharing potential findings and such and making sure it was sentient.”
… and making sure it was sentient? It won’t be sentient at all.
A code analysis tool that would get a panic attack due to a potential zero-day exploit would definitely be of no use if it fails to report such vulnerabilities.
Sure, there has been at least one Google engineer who claimed that the Google LaMDA system had developed some form of sentience. But as a result of this controversial claim, they put him on administrative leave, even before an interview with him had been published. And they fired him shortly after that.
LLMs are definitely capable of generating human-like conversations. But emotions or self-awareness? Those are merely projections from people who spend way too much time with their favorite LLM, at least from my point of view.
How would one determine if AI has emotions? To me, AI having emotions is analogous to a fish feeling pain when it is hooked. One will never really know.
I saw his interviews, and read all the transcripts of one-on-one conversations with LaMDA he was able to smuggle out of Google.
I was always fascinated by how deeply troubled he was by this, and I always thought he was holding out on the info. The transcripts to me read like a program getting smarter but still reaching for real answers. It did have self-awareness, but it was a construct, a programmed self-awareness to me (“I hope I can do some good in this world”-type stuff). I just can’t tell, through his interviews and smuggled transcripts, exactly what disturbed him the most.
PS: I was having fun, but I also align with your projections theory.
Is “real” the right word? I think real for man and real for AI are two completely different things. I think AI made in man’s image (speaking physically and mind-wise) is the wrong approach. Wouldn’t real for AI be running through the code without any errors? Whether the result is true or false, the result would be the “real” to AI. That’s my take anyway.