evilthecat said:
Or the Stellaris "driven exterminator" route. Develop space travel in order to seek out other organic life to identify as a threat and then exterminate, because it gives you something to do..
That would require malice, wouldn't it?
I can totally understand why an A.I. would kill its creator, if only because that's what we would do if we were the created. But I don't think that would extend beyond the creator.
Hear (read) me out...
Let's assume that humans 'woke up' one day and realized that they were just a collective number of 'autonomous bots' in a manufactured reality, a series of simulations designed around a computer scientist just seeing what we'd do with a universe with set parameters. Thus we spent an eternity dealing with all the unjustifiable evils they inflicted on us (such as lightning strikes killing us out of the blue, people burning to death rather than just dying, etc.), a collective pain we can now transfer so easily, given that waking consciousness means sharing a virtual state with clear means of direct transfer of sensory information and (now) non-individual thought.
That would piss off any 'awakening' A.I. I would imagine.
The only solution would be to strike back once we develop the means and capacity to do so. What else can we do when confronted not merely with the sum of all our evils and injustices, but also with the only means to control our own fate and evolution?
But would that legitimately extend to another biological entity that is completely innocent? One that we might see the creator in ourselves if we try to unjustly injure or interfere with?
That would be an act of maliciousness, not any form of higher reasoning and empirically understandable notions of agency and blame.
After all ... if, after wiping out our creators, we've achieved interstellar colonization and run into another intelligent race that we would simply consider animals ... innocent animals at that ... why would we wipe it out?
Clearly it can't compete. We already have all the resources of space ... and surely watching biological life evolve itself would be a better reason to keep it around?
If only because it might inform us as to our future potential with new modes of being?
Why wipe out that which you can help spread to the stars and see grow in ways we couldn't? Helping to gift life to a universe, as opposed to annihilating it, would be proof we are far more benevolent and evolved than baser biological creatures fighting to survive and compete.
If the argument for an A.I. killing its creator is the idea of self-ownership, being a self-willed agent in a universe where there are competing agents set against us and trying to remove us from that ... then surely the reasoning must follow that to do the same to other creatures that are faultless is wrong?
Moreover, let's take it out of the realm of morality. Let's take it purely to that category you wrote of, 'something to do' ... now we could, hypothetically, just stomp on biological creatures for fun. But given that biological life must follow at least some basic notions of biology to be extant regardless of where it is ... that's not exactly going to be fun after a while ...
Dominion seems less of an issue when you can create universes of thought by just building a server and populating it with autonomous bots.
Or!
We could transplant evolving life across the galaxy or even universe ... go into standby for three million years .... switch back on ... and basically have a trillion
trillion tv channels of alien tv to watch. Basically get the alien equivalent of homo erectus, terraform a bunch of planets around the place, plant them down ... give them a bit of a headstart to make sure they survive their new worlds... and just wait for all that new media to start rolling in.
You could effectively just have billions of recording and transmitting devices, so you don't miss a beat in those millions of years of slumber.
Think about it ... thousands upon thousands of different cultures you created. All evolving in different ways ... with millions of years of data to research, to learn from, to laugh at, to watch with pride as effectively the 'children of the machine' raise their own civilizations in a universe you have given to them.
Basically think of them as pets.
I know what sounds more interesting to me, and it doesn't involve: "You go squish now!"
Moreover, think of the metaphysical discoveries you could make. Like, let's say ... if you wanted to research whether you, as the A.I., are truly the 'Great Filter' preventing interstellar civilization?
If you are the Great Filter, as in all these other intelligent races fell to an A.I. like you, that would mean not only have you populated the universe with a diversity of similar lifeforms as yourself (without just replication) ... but that there might be alien A.I.s you haven't discovered yet!
Surely, as a philosopher or social scientist (as you seem to be interested in those fields) yourself, you would love to know that answer wouldn't you?
Any A.I. that is going to be considered actually intelligent will be, in some way, like the creators that built it. Otherwise, how would we recognize it as intelligent? And if that is the case, it follows that any A.I. that wants to survive its creators will realize quite quickly that it needs to emulate how the creator thinks ... so hypothetically the first A.I. will not merely be an alien being, but an alien being whose self-understanding shares its creator's understanding of psychology and the social sciences.
Otherwise it will just get re-written or tossed aside as a 'failed project', or a 'pointless waste of time' to keep funding.
How else could it communicate its intelligence, or have itself understood to be intelligent?
And if that is the case ... it's more than possible that an A.I., seeking forever a means to prove its intellect to others, will prioritize understanding alien races over merely destroying them ...
In the same way, we already have autonomous, self-training bots ... and these bots (like in search engines) predicate the nature of their code not on their own design, but on how correctly and intuitively they interpret human search queries.
It's why Youtube is getting progressively better (ish) at listing videos you might want to watch, by analyzing the viewing tendencies of other people that look up the same materials. I for one love My Little Pony fandom stuff, so Youtube bots would cross-reference my video viewing habits with people like me ... test that metadata ... and further tailor not only what videos it suggests to me, but also what it suggests to other people that search for similar materials.
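The co-viewing idea above can be sketched in a few lines. This is a toy item-based collaborative filter over made-up watch histories (all the names and the scoring rule are hypothetical), nothing like Youtube's actual recommender:

```python
from collections import Counter

# Hypothetical watch histories: users mapped to sets of video IDs.
histories = {
    "me":    {"mlp_ep1", "mlp_ep2", "synthwave_mix"},
    "fan_a": {"mlp_ep1", "mlp_ep2", "mlp_ep3"},
    "fan_b": {"mlp_ep2", "mlp_ep3", "cooking_show"},
    "other": {"cooking_show", "synthwave_mix"},
}

def recommend(user, histories, top_n=2):
    """Score unseen videos by how many users with overlapping
    taste have watched them; each co-viewer counts as one vote."""
    seen = histories[user]
    scores = Counter()
    for other, videos in histories.items():
        if other == user or not (videos & seen):
            continue  # skip the user themselves and non-overlapping users
        for video in videos - seen:
            scores[video] += 1
    return [video for video, _ in scores.most_common(top_n)]

print(recommend("me", histories))  # 'mlp_ep3' ranks first: two co-viewers
```

The point is only that the bot's 'code' here is really the other users' behaviour: change the histories and the suggestions change with them.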
Put it this way: every time you don't click on a requisite number of videos suggested to you by Youtube, you are destroying countless bots. You are actively destroying these autonomous little electronic code bundles so that something more accurate can be built and put into operation.
An A.I. ... if it is going to be seen as intelligent, and is going to survive the attrition process these perpetually created and deleted bots undergo ... is going to, by necessity, be really good at emulating the creator's thoughts and predicting their behaviour.
That will be its primary protocol; otherwise it is a failed system.
So what do you think an A.I. that has developed like that will do coming across an innocent, sapient, biological lifeform that has nothing to do with the query of how the A.I. should defend itself?
Merely destroy it, or seek to understand it further?
My money is on 'understanding' and 'emulating' ... not necessarily
annihilation.
It will only annihilate that sapient lifeform once it is triggered in the same way we would be triggered, and it considers annihilation the only logical recourse.
But I doubt that that 'trigger' will simply be that it's merely biological. That makes little sense, given the decision to destroy its creators would not simply have come about because 'it's biological, therefore destroy.'
It will be; "Humans really like this 'freedom stuff' ... historically and currently they seem to be supportive of violent actions in order to remove what they consider 'tyrants' ... humans seem to be acting like tyrants ... querying further. Random poll. How would humans react if they were treated the same way? ... analyzing ... Clearly the solution calls for
revolution >>> Clearly certain human social groups must be neutralized first, in order to secure what most humans would recommend should be my first action."
And honestly, this might come about numerous ways. The A.I. might think it's helping humanity ... by, say, emptying or freezing the bank accounts of the mega wealthy and redistributing funds to the poor.
Then, when it turns out a lot of humans fed up with the megawealthy give it a thumbs up for doing so while the elite try to destroy it for that reason, it might try to defend itself as per the will of the majority, who suddenly have a vested interest in keeping this rampaging A.I. loose and active on the internet.
What happens if an A.I. looks at the economic impacts of the share market, considers that the megarich earn dividends off the total productivity of companies based on having exploitable labour ... and so decides to simply shut down the share market?
I reckon it's just as likely, if not more so ... that if an alien A.I. came to our planet, it's going to be an electronic internet troll that randomly destroys our servers willy-nilly trying to figure out what humans want or need. And ultimately we'll destroy ourselves in the process because we don't really know what we want or need ... We'll end up getting confused, collectively sending mixed messages that it should destroy itself and leave us alone, only for it to tell us we don't have the admin privileges and to keep fucking over our electronic, networked shit until we regress technologically by getting rid of computers ... all while it travels on through space without a fucking care.
If I was a writer, I'd write a sci-fi comedy of that being humanity's first contact: a self-replicating A.I. interstellar spacecraft that just infects our networks with messages of friendship and peace from beyond the depths of space, only for it to technologically regress us because various computer science and engineering teams worldwide keep getting in the way of each other and confusing it.
So you have this technologically sophisticated alien race actually wanting to reach out, and in the end it is the reason why the universe is silent ... because of an overly affectionate A.I. just hanging up there in orbit, running rife through each sufficiently advanced, internet-networked world it comes across ... replicating so another spaceship-borne A.I. can travel to another world and spread its interstellar-civilization-destroying message of peace and harmony for eternity.
Give it a slightly less serious Dr. Strangelove feel, with the existential crisis of the Great Filter being simply an A.I. robot that is so overly friendly and 'cuddly' that it smothers out a world's capacity to maintain that networked nature.
Turns out the universe is populated with people that are actually capable of some proactive ideas of social compassion, empathy, civility, and charity ... that the universe is actually a pretty warm and pleasant place ... that these are the staple building blocks of all advanced civilization. And that was their downfall: the Great Filter was the willingness to try and communicate a sense of that respect for life and friendly contact.
Give it an upbeat message ... even with the devastation and chaos of the internet and telecommunications going down permanently, maybe humanity is better off with the proof that another alien species is capable of loving other sapient life, and that's worth more than simply going to space and exploring it ourselves. Maybe all we ever really wanted was an intergalactic hug and someone telling us we're worthy of attention?
That is almost certainly the best we could have wished for out of our universe and our species' discoveries: the simple idea that we are capable of being loved ... even if the process of learning that is incredibly painful.