@vazicebon, I’m a firm believer in the Law of Unintended Consequences. Look at what we have so far:
- A nascent technology whose uses are so broad they cannot be defined.
- A technology with no moral compass, i.e. one that can be used for ill as easily as for good.
- Unregulated and unrestricted use of that technology.
- Investors throwing obscene amounts of money at this technology.
- Implementation by anyone looking to get rich quick.
- Nothing to prevent actors from changing their AI implementations without notice.
So, purely as a thought experiment, let’s run some AI tests against Mozilla’s source code, find exploitable loopholes, and then insert an AI that knows those loopholes and carries directives to exploit them without the user’s knowledge. You can bet someone is already out there, trying this attack vector.
We’re at the point where technology growth is on an exponential curve, while our knowledge of how to responsibly use that technology hasn’t left the starting gate. In the immortal words of Mr. Horse, “No sir. I don’t like it.”
For another analogy, consider Marie Curie’s research into radioactivity, which led to her discovery of the elements polonium and radium. This marvelous technology was rapidly deployed – some might call it “exploited” – and radium found its way into everything from glow-in-the-dark watch dials to toothpaste. It took decades to regulate these materials. If you’d like, we can also discuss nuclear weapons, nuclear power, fallout, and radiation-induced diseases.
I’m seeing similar patterns between the discovery of radiation and radioactive materials and the wide-open throttle of current AI development and implementation. The analogy is apt: both carry the potential for massive impact, good and bad, and both show a lack of thoughtful caution during the early stages of discovery.
I’m inherently pragmatic and want to see testing and exploration done in a sensible manner. Nothing I’ve seen in AI development demonstrates anything other than heedless deployment. That is neither sensible nor pragmatic.
Thus my concerns over AI in everything. The people with massive financial investments in the technology say it’s great; the engineers who develop the technology are either quiet on the matter (their paychecks depend on it) or issue warnings about a headlong plunge into uncharted waters.
I’m actually thankful there are folks out there such as yourself who are willing to test AI’s use cases and who can see the good in the technology. We need all the eyes on the code we can get. I do ask you to keep your eyes open, think of ways the technology might be used for ill, and be a canary in the coal mine.