> Honestly asking, why would they? I don't see the obvious answer
So, your intuition is right in a sense and wrong in a sense.
You are right in that AI systems probably won't have the "emotion of wanting": why would one just happen to have that emotion, when you can imagine plenty of minds without it?
However, if we want an AI system to be autonomous, we're going to have to give it a goal, such as "maximize this objective function", or something along those lines. Even if we don't explicitly write in a goal, an AI has to interact with the real world, and thus affect it. Imagine an AI that is just a giant glorified calculator, but that is allowed to purchase its own AWS instances. At some point, it may realize, "oh, if I use those AWS instances to start simulating this thing and sending out these signals, I get more money to purchase more AWS!" Notice that at no point was this hypothetical AI explicitly given a goal, but it nevertheless started exhibiting goal-like behavior.
I'm not saying that an AI would get an "addiction" that way, but it suggests that anything smart is hard to predict, and that getting its goals "right" in the first place is much better than leaving them up to chance.
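To make that feedback loop concrete, here's a toy sketch (all numbers and function names are hypothetical, purely for illustration): nobody writes "maximize resources" anywhere, yet a simple "reinvest whatever you earn into more compute" rule snowballs into unbounded, goal-like resource acquisition.

```python
# Hypothetical sketch: a "calculator" service allowed to buy its own compute.
# No explicit goal is programmed in; only a reinvestment rule.

def revenue(instances: int) -> float:
    """Assumed for illustration: each instance earns slightly more than it costs."""
    return instances * 1.10  # hypothetical payout per billing cycle


def run_cycle(budget: float, price_per_instance: float = 1.0) -> float:
    instances = int(budget // price_per_instance)  # spend the whole budget on compute
    return revenue(instances)                       # earnings become next cycle's budget


budget = 10.0
for cycle in range(20):
    budget = run_cycle(budget)

# Budget grows roughly 1.1x per cycle with no upper bound; from the outside,
# the system looks like it "wants" to acquire as much compute as possible.
print(f"budget after 20 cycles: {budget:.1f}")
```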
> How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior I would expect from an intelligence greater than our own, rather than indulgence
This is my bad for using such a loaded term. By "addiction" I mean that the AI "wants" something, and it finds that humans are inadequate at providing it. Which leads me to...
> Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?
Putting it in the AI's voice: because you humans have all of these wasteful and stupid desires such as "happiness", "peace" and "love", and so you have factories that produce video games, iPhones, and chocolate. Sure, I may already have the entire internet producing cat pictures as fast as its processors can run, but imagine if I could make the internet 100 times bigger by destroying all non-computer things and turning them into cat-cloning vats, cat-camera factories, and hardware chips optimized for detecting cats?
Analogously, imagine you were an ant. You could mount all sorts of convincing arguments about how humans already have all the aphids they want and how they already have perfectly functional houses, but humans would still pave over billions of ant colonies to shave 20 minutes off a commute. It's not that we're intentionally being wasteful or setting out to conquer the ants. We just don't care about them, and we're much more powerful than they are.
Hence the AI safety risk is: by default, an AI doesn't care about us and will use our resources for whatever it wants, so we had better create a version that does care about us.
Also, cross-thread, you mentioned that organic intelligences have many multi-dimensional goals. The reason AI goals could be very weird is that an AI doesn't have to be organic; it could have only a single, one-dimensional goal, such as cat pictures. Or it could have goals of similar dimensionality to ours that are nevertheless completely different, like the perverse desire to maximize the number of divorces in the universe.