Firefox is Getting a New AI Browsing Mode

Now I want to read a sci-fi book about the last human in a world run by talking birds. I can imagine it. Caveat: we homo sapiens are like roaches – it will be hard to kill us all the way the Matrix tried to. There won’t be any Zion or anything, just negotiating with powerful talking birds. Oh, and AI will be gone too. I just got home and haven’t drunk anything, so just wait :slight_smile: .

I.e., one can always imagine the scenario. We weren’t here for a large chunk of time anyway.

I can’t dissociate AI from Doom right now. But I’m willing to learn.

Isn’t that story wholly artificial to begin with? And not that intelligent. :winking_face_with_tongue:

For now.

An AI personal assistant is not a web browser, and Firefox doesn’t use AI by default, so I don’t see your point.
Besides, you have to access a shady URL:

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

@vazicebon, I’m a firm believer in the Law of Unintended Consequences. Look at what we have so far:

  • A nascent technology whose use is so broad it cannot be defined.
  • A technology with no moral compass, i.e. one that can be used for ill as easily as for good.
  • Unregulated and unrestricted use of that technology.
  • Investors throwing obscene amounts of money at it.
  • Implementation by anyone looking to get rich quick.
  • Nothing to prevent actors from changing their AI implementation without notice.

So, purely as a thought experiment, let’s run some AI tests against Mozilla’s source code, find exploitable loopholes, and then insert an AI with knowledge of those loopholes and directives to exploit them without the user’s knowledge. You can bet someone out there is already trying this attack vector.

We’re at the point where technology growth is on an exponential curve, while our knowledge of how to responsibly use that technology hasn’t left the starting gate. In the immortal words of Mr. Horse, “No sir. I don’t like it.”

For another analogy, consider Marie Curie’s work on radioactivity, which led to her discovery of the elements radium and polonium. This marvelous technology was rapidly deployed – some might call it “exploited” – and radium found its way into everything from glow-in-the-dark watch dials to toothpaste. It took decades to regulate these materials. If you’d like, we can also discuss nuclear weapons, nuclear power, fallout, and radiation-induced diseases.

I’m seeing similar patterns between the discovery of radiation and radioactive materials, and the wide-open throttle of current AI development and implementation. The analogies are apt due to the potential for massive impact, good and bad, and the lack of thoughtful progress during the early stages of discovery.

I’m inherently pragmatic and want to see testing and exploration done in a sensible manner. Nothing I’ve seen in AI development demonstrates anything other than heedless deployment. That is neither sensible nor pragmatic.

Thus my concerns over AI in everything. The people with massive financial investments in the technology say it’s great; the engineers who develop the technology are either quiet on the matter (their paychecks depend on it) or issue warnings about a headlong plunge into uncharted waters.

I’m actually thankful there are folks out there such as yourself who are willing to test AI’s use cases and can see the good in the technology. We need all the eyes on the code we can get. I do ask you to keep an eye out, think of ways the technology might be used for ill, and be a canary in the coal mine.

A technology cannot possibly have a moral compass. It is not a moral agent.

The ones who need to have moral compasses are the humans building and using that technology.

I mean, an atomic bomb didn’t wake up one day and say to itself: “today I will fly over to the city of Hiroshima and drop myself on the civilians living there.”

When it comes to the moral compasses of the humans, it seems that not all of them point to the same North :wink:

Spot on, @cactux, and I typed in haste. What I should have said is that, for now, humans are the moral compass – or immoral, depending on viewpoints.

But that’s part of the promise or threat of AI: depending on its underlying tech (quantum computing, perhaps?), the information sources and content, and its ability to modify its own code, it has the potential to develop its own rules of engagement. I suggest that at that point it becomes indistinguishable from what we perceive as a moral compass: actions are taken according to an internal code of conduct.**

As we as a species continue to discover how guardrails, or the lack of them, affect AI decision-making and output, I think we’re going to be surprised.

**We could get into undergrad philosophy discussions about morality et al., but it needs to be late at night, with some Pink Floyd playing, and we all need to be a few beers into the evening. :laughing:

1 Like

Hold that thought! I’ll be over at your place in a minute :grinning_face_with_smiling_eyes:

PS.

Forgot to say that I agree with what you said in the post :+1:
(I got distracted by the prospect of beers, Pink Floyd and late night philosophizing)

1 Like

I know this is personal choice, but I really wish Mozilla would put anything outside the bounds of a “normal” browser into add-ons. There are times when default options are applied after updates, and this might be changed to opt-out in the future.

I think I might give Thunderbird another shot for email.

1 Like

Looking only at Firefox in this topic: it isn’t affiliated with any AI service right now, and unless you click at least three times to select one and then use it, you’re not in any way forced to use AI without consent.

As for the way AI is used in general, I think it’s too late if you expect “moral or ethical” behavior from the people behind AI.

The Firefox Backrooms …

Joey Sneddon over at OMG Ubuntu has a cynical take…

3 Likes

If user opinion is so important, which percentage of the user base requested this integration?

If user choice is so important, why enable AI by default?

Why are the settings scattered through about:config, rather than a top-level switch with configuration options below it?

I used Firefox for the longest time as it was flexible, extendable, and open. Now I’m on Librewolf, which turns off privacy-eroding defaults that were built, enabled, and extended by Mozilla. Even so, I still had to go into about:config and turn off various browser.ml options. To be fair, Librewolf is a version or so behind Firefox, so I don’t blame Librewolf for this. How can you turn off options you don’t know exist?

This page is a decent start. I’m sure we’ll need to update the list of things to disable, probably in a month or two.
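For anyone maintaining their own profile, those browser.ml switches can be pinned in a user.js so updates don’t quietly flip them back on. The pref names below are ones currently visible in about:config; Mozilla may add or rename prefs between versions, so treat this as a sketch rather than an exhaustive list:

```javascript
// user.js – disable Firefox's built-in AI/ML features.
// Pref names observed in recent releases; verify each one in
// about:config for your version before relying on it.
user_pref("browser.ml.enable", false);         // master switch for on-device ML features
user_pref("browser.ml.chat.enabled", false);   // AI chatbot sidebar
user_pref("browser.ml.chat.shortcuts", false); // "Ask an AI" selection shortcut
```

Drop the file into the profile directory and restart; prefs set via user.js are reapplied at every launch, which is exactly what you want when a release might re-enable something.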

2 Likes

Things aren’t going so well for Librewolf though, are they? I’ve been using them for ages now, but after this last update delay I’ve had to give Arkenfox a try. It’s aimed at a different audience, I know.

I much prefer Librewolf’s way of removing the junk altogether, but we can’t wait this long for important security updates on a browser. They don’t have enough people working on it – they admit this themselves.

1 Like

Nothing wrong with using a .js script to turn off features. It’s certainly faster than waiting for an updated Librewolf. For my needs, Librewolf is a decent balance between usability and security.

I hope they can secure enough funding for additional devs. I heartily approve of their sensible, pragmatic approach to privacy.

1 Like

I think Librewolf are doing a bit of a disservice to security by dragging its demise out. They should stop, regroup, and come back if and when they’re ready. Lots of people use it and trust them. I can understand why the Arkenfox devs put a bit of distance between them.

Even Arkenfox say never to wait for them to catch up with a Firefox release – always update the browser. I’d rather use plain Firefox than any outdated Gecko engine. I’m not preaching to you at all, pal; it’s your system. I’m just saying I’d trust them more if they were as honest on their actual webpage as they are in the forums.

3 Likes