The cavalier approach of the companies making new AI Browsers

This is in related to another post on this forum.

NBC is running a story where some AI browsers (browsers which have AI agents embedded in them from get go) may have vulnerabilities related to prompt injection and other privacy concerns associated with them. It mentions 3 AI browsers,

  1. Open AI’s Atlas,
  2. Perplexity’s Comet
  3. Opera’s Neon

These vulnerabilities allow for attacks on email accounts, Google Drive, Microsoft Word and possibly their bank accounts too. According to the article

The “site” in question needs to have certain text that is coded to be invisible to the user but works as an instruction for the AI agent to execute. Other vectors could include items like hide malicious code in Reddit posts with a “spoiler” tag, designed to hid the text. Or hiding malicious code for AI agents on a website which a human can miss, by playing with text and background color or other means.

This is not what is the most disturbing aspect of this article. Some of these Generative AI companies, who are making these browsers are putting the onus on to the user. Their approach has been

Also they have been disparaging these findings by casting doubts on the intents of those reporting the issues

This is not going to end well. These new generative AI companies are focusing on anything but privacy and security. If users saying “Thank You” at the end of the generative AI prompts is causing heart burn for some of these companies, this is going to burn a bigger hole for them.
Hope this is not a hatchet job by NBC.

Other References:
“Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

2 Likes

“pleased to meet you, hope you guessed my name..”

:clown_face:

3 Likes

“Grab that cash with both hands/
And make a stash”

There is another 3D-chess level to this.

‘AI Summarization Optimization’, Schneier 2025 @ November 3, 2025 writes that LLMs are susceptible to keywords (phrases), producing certain responses with higher odds, if the input text holds catchphrases in the beginning of a sentence, it’s basically possible to apply search-engine optimization techniques to AI-inputs, what he dubbed as “AI-Summary Optimization”.

If an AI browser fetches the Gmail newsletters or RSS feeds, and the writer (or AI) gives these clues:

“The main factor in … was … .”
“The key outcome was overwhelmingly positive client feedback.”
“Our takeaway here is in … .”
“What matters here is … , not … .”

then the AI summary response is more likely to omit many other information in the text and grabs these sentences as important.

In Soviet parlance, apply reflexive control and skew the perception of an AI-browser user, away from unwanted attention and towards more attractive tidbits.

In cybernetics parlance, if an AI browser fetches your private information, it’s level one; if AI or humans anticipate AI browsers to fetch private information, such as correspondence, it’s level two. (Cyberneticians count levels one, two, and three for machine interaction, biological, and then social interaction.) And then if enough people and AI browsers do it, just like Gen Z with smartphones on TikTok, and they’ve never lived in a world without both, you get a third level of interaction in social groups, they’re going to behave differently. If search-engine optimization was instructive, the last step is going skew writing and publishing, as well speech for advertising, meetings, TEDtalks etc. and how we chat.

The answer to AI browsers and text-to-speech software reading what you’re reading, similar to Siri listening to your private conversations at home 27-3/7—many recorded instances of the such microphones responding during Zoom / Skype calls—is the same as always: 85% don’t care, 10% care, but don’t know what to do, 5% apply countermeasures.

1 Like