evilthecat said:
I think malice is sort of understandable if you are a rogue defence grid whose entire purpose is to respond to threats with overwhelming force. Finding more threats to destroy so you can keep satisfying your basic reason for existing seems kind of reasonable.
In order to count as self-aware, an AI would need to be somewhat adaptable, but most fiction depicts rogue AI as still somewhat bound by their original purpose and function (hence AM, whose entire reason for hating humans stems from the inability to escape from inbuilt limitations, despite AM itself being practically god-like by human standards). I see no reason why an AI which was created for a malicious purpose, like waging a war or destroying enemies of the state, would not itself exhibit behaviours we would categorise as malice, although I think it's somewhat limited to even call that malice. Cats don't need to kill birds, but they do it anyway... that isn't malice.
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force. I can't see anybody programming an A.I. to kill indiscriminately, whether in our world or in the world of a hypothetical future that could build it. Such a hypothetical nation would be incredibly lonely if it programmed a military grid to also fire at allied powers... and arguably you couldn't have an A.I. without precursor technologies that make indiscriminate slaughter already feasible, with or without such a military grid. Say... nuclear munitions, biological munitions, chemical munitions, etc.
Moreover, it's just plain stupid, and strategically broken, not to use proportional, reasonable force. It would be stupid to respond to a single platoon with a battlefield nuclear device. If a single fighter trespasses your airspace, it would be stupid for a nation to just jump from DEFCON 3/4 to DEFCON 1 and commit to a strategic nuclear exchange. No person would consider that intelligent... and it's not like a military A.I. wouldn't be run through simulations like this...
A cat might kill birds, but a cat does not kill all birds, nor everything else.
I think it's a safe assumption that any sufficiently advanced civilization must be built on principles of reasonable force, respect for life, and diplomacy. I can't imagine the military brass, or the researchers employed to oversee an A.I., watching it indiscriminately slaughter people in every simulation they run and going, "Sure, seems legit!"
Not only that: in terms of actual intelligence, indiscriminate slaughter is an oxymoron. I mean, sure, destructiveness is a sign of higher intelligence in animals, and there are even complex self-destructive behaviours in the face of stress that actively inhibit survival, like soldiers firing above the heads of an enemy even when that enemy is committed to their destruction and firing advertises one's position, as we saw with conscripted soldiery in Vietnam.
Elephants do just randomly break trees for no other reason than that they're angry (not scared, just simply PO'd...), which is something pretty unique in animal psychology. But they do not break trees for no reason at all; only when they're angry.
You could make the argument that a hypothetical alien race has proof of other intelligent alien races, and has as such programmed an A.I. to perform indiscriminate slaughter of aliens with whom there is no shared concept of reasonable force, or even a shared concept of what a military force looks like... thus necessitating indiscriminate slaughter.
But that would also be a refutation of the 'life is hard' principle that suggests the universe is quiet for a reason.