Explain how a robotic/machine society would function?

Samtemdo8_v1legacy

New member
Aug 2, 2015
7,915
0
0
We look at Terminator, we look at the Matrix, and to a degree we look at Mass Effect, and we see humanity/organics being supplanted by conscious machines/synthetics.

Yet I feel none of these works ever truly explains how the new machine society would function.

Say all humans are dead: what do Skynet and the Machines do now? How do they function as their own society? Because as far as I can tell, their very society and culture is the complete antithesis of nature itself.

Can anyone explain to me how a Society of Dominant and Conscious Robots/Machines would work?
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
Make electronic bot programs to continually test for eternity and exponentially improve just in case every human isn't actually dead...?

Why look at it any differently than a potential Grey goo extinction event?
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,684
3,592
118
The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.
 

Samtemdo8_v1legacy

New member
Aug 2, 2015
7,915
0
0
Thaluikhain said:
The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.
So no sci-fi author worth their salt has ever attempted to write a sentient robot/machine society in a post-human world?
 

stroopwafel

Elite Member
Jul 16, 2013
3,031
357
88
Well, humans are the result of nature computing with meat, and our 'protocol', so to speak, is our evolutionarily determined instincts that serve to preserve the species. I imagine a machine with any kind of sentience would develop similar strategies for self-preservation, but of course based on a totally different protocol than biological instincts. How would such a world look? I can't imagine such machines being worse off than humans, given the flaws in our design (i.e. the human condition).

Of course, machines could also resort to decision-making that makes no sense at all (at least to humans). Take the paperclip example: when a machine's intent (directive) is to make paperclips, it will continue to do so until the entire planet is covered in paperclips, even expanding into the vast expanse of space, until both Earth and space are just one giant fucking paperclip. :p

I guess it all depends on what the jumping-off point of artificial intelligence is. Currently that is still really rudimentary.
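
The paperclip scenario is easy to caricature in code; the point of the thought experiment is that nothing in a bare directive tells the machine when to stop. A deliberately silly toy sketch (not any real system):

```python
# A toy single-objective maximizer: it converts every unit of available
# matter into paperclips and has no concept of "enough". Purely an
# illustration of the thought experiment.
def paperclip_maximizer(matter_units):
    paperclips = 0
    while matter_units > 0:   # the only halting condition is exhaustion
        matter_units -= 1     # consume one unit of matter
        paperclips += 1       # ...and turn it into a paperclip
    return paperclips

print(paperclip_maximizer(1_000_000))
```

Nothing here weighs the goal against anything else, which is the whole problem.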
 

Combustion Kevin

New member
Nov 17, 2011
1,206
0
0
stroopwafel said:
Well, humans are the result of nature computing with meat, and our 'protocol', so to speak, is our evolutionarily determined instincts that serve to preserve the species. I imagine a machine with any kind of sentience would develop similar strategies for self-preservation, but of course based on a totally different protocol than biological instincts. How would such a world look? I can't imagine such machines being worse off than humans, given the flaws in our design (i.e. the human condition).

Of course, machines could also resort to decision-making that makes no sense at all (at least to humans). Take the paperclip example: when a machine's intent (directive) is to make paperclips, it will continue to do so until the entire planet is covered in paperclips, even expanding into the vast expanse of space, until both Earth and space are just one giant fucking paperclip. :p

I guess it all depends on what the jumping-off point of artificial intelligence is. Currently that is still really rudimentary.
I often muse about this instinctive "protocol" and how sentient AI would develop from it. Giving sentience to a paperclip machine would be a horrible waste of resources, I think we'd both agree, but what if the machine this particular kind of AI is programmed for is more of a general "servant bot"?

Their protocols would be along the lines of "helping humans": keeping them safe, fed, and comfortable. They would exchange ideas and techniques with other such robots to better attain their goal, and develop their grassroots society not to replace human society but to strengthen it.
In that case, a human extinction event would also be a loss for this robot society, and with full sentience comes the choice of self-termination; perhaps machines left without purpose, with no mankind remaining, would simply decide to... not bother.

This, of course, assumes machines do not view their servitude as slavery, which could be argued from both sides. If mankind could program a consciousness with the kind of impulses present in a human, it would produce pleasure impulses when it serves its purpose to its masters: a necessity of life to one, a manufactured slave to another.

What if the greatest crime committed against an artificial intelligence is to give it no purpose? What if machines would see that as the cruelest thing you could do?
 

Pyrian

Hat Man
Legacy
Jul 8, 2011
1,399
8
13
San Diego, CA
Country
US
Gender
Male
Well, they do whatever they want. What do they want? Not enough information, strictly speaking. In most fictional cases, though, we know that they killed us for their own survival. If survival is the dominant paradigm, then they're reasonably likely to act an awful lot like we do.
 

stroopwafel

Elite Member
Jul 16, 2013
3,031
357
88
Combustion Kevin said:
This, of course, assumes machines do not view their servitude as slavery, which could be argued from both sides. If mankind could program a consciousness with the kind of impulses present in a human, it would produce pleasure impulses when it serves its purpose to its masters: a necessity of life to one, a manufactured slave to another.
Yeah, but I guess it's tricky to project our own values and morality onto a synthetic lifeform that most likely operates on an entirely different level of consciousness. Modern humans are the end result of hundreds of millions of years of evolution; we didn't just exist one day. Evolution is a long chain of selection and adaptation, while an A.I. could potentially be made sentient overnight, which obviously has implications for awareness and cognition.

You could argue an advanced A.I. has superior awareness and cognition, much as a calculator has a superior ability to process math, but no matter how superior, neither will ever evolve beyond the boundaries of its design, which is the critical distinction from biological evolution. An A.I. has both a start and an end phase, while evolution is, and remains, ongoing.

Malicious A.I.s and robots and such remain fun sci-fi, but without a process of evolution a machine will ultimately only do what it's programmed to do. An advanced robot society would simply be perpetually stagnant. Artificial evolution, now that would be something else. :p
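
For what it's worth, "artificial evolution" does exist in a rudimentary form as genetic algorithms: candidate solutions (here, just bit-strings) are mutated and selected rather than hand-designed. A minimal sketch with an arbitrary, made-up fitness target:

```python
import random

# Minimal genetic algorithm: a population of bit-strings evolves toward
# an arbitrary target via mutation and selection, rather than being
# programmed with the answer directly.
TARGET = [1] * 20

def fitness(genome):
    # count positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 20:
        break
    # keep the fittest half, refill with mutated copies of survivors
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(generation, fitness(population[0]))
```

Whether anything like this scales up to open-ended evolution of a machine society is, of course, exactly the open question.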
 

Silentpony_v1legacy

Alleged Feather-Rustler
Jun 5, 2013
6,760
0
0
Well there are two options:

A. It wouldn't, since actual true AI is impossible.

Or B. They'd be like humans, since humans would be the only society they have a reference point for. No matter the language they use, it's a language humans made for them. All their understanding of science, math, history, and art would be from a human perspective.
Anyone remember what that Twitter bot was like for the 12 hours she was online? Basically a 4chan edgelord troll. That'd be what an AI using humans as a jumping-off point would be like.
 

retsupurae yahtsee

New member
May 14, 2012
93
0
0
My hope is that a robotic society will not be made of true robots, but of men who have had their brains downloaded into robot bodies. Our understanding of the brain is still limited, but with sufficient development, we could probably do things like maximize emotions like pleasure and minimize emotions like pain and boredom. We could also erase our memories of media, at least in theory, so that we could enjoy it the same way we did when we were young. We could simulate sensations like eating, fucking, drinking, smelling, hearing, etc. by passing the equivalent electrical and chemical signals to the C.P.U.s.

A society like that would probably be less of a drain on resources, too: robot bodies would not require food, drink, furniture, clothing, etc., just fuel.
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,912
1,777
118
Country
United Kingdom
Well, Skynet is a single entity, so if Skynet "wins" there isn't really a society at all. Machines like the terminators with limited artificial intelligence wouldn't be needed any more, as they're just tools to accomplish the goal of wiping out humanity. Maybe Skynet would eventually create other artificial intelligences, or maybe it wouldn't.

In the Matrix, many of the programs seem to have explicitly modelled themselves on humans, so they lived in a society very like human society. That said, the programs living in the Matrix seem to be a bit weird; they're renegades and outcasts within their own society. We don't really see how machines live inside the machine city. I imagine it's probably a form of existence that's quite difficult to represent, since they're essentially computer programs inside a giant server.

But yeah, humans have a society because we're inherently limited and short lived. We need to reproduce, and to reproduce successfully we need to socialize. An artificial intelligence wouldn't necessarily have this problem, and thus wouldn't necessarily need a society at all. Skynet and AM are so huge by comparison to a human mind that they're effectively capable of running an entire planet by themselves with no need for others.

The complete opposite end I guess is the Geth from Mass Effect or the Borg in Star Trek (and yes, I know they're cyborgs rather than AI, but it's a good example) who are almost limitless in number but incredibly networked to the point of not being able to operate independently.

Then, in the middle, there's the idea that machine society just resembles human society in a different medium. I find this the least convincing, personally. It kind of makes sense in the Matrix because the machines there have an ongoing relationship with humanity. It also makes sense in settings where intelligent machines were built to fit into human society or function as companions for humans, rather than having a more inhuman purpose.

Addendum_Forthcoming said:
Make electronic bot programs to continually test for eternity and exponentially improve just in case every human isn't actually dead...?
Or the Stellaris "driven exterminator" route: develop space travel in order to seek out other organic life to identify as a threat and then exterminate, because it gives you something to do.
 

Samtemdo8_v1legacy

New member
Aug 2, 2015
7,915
0
0
retsupurae yahtsee said:
My hope is that a robotic society will not be made of true robots, but of men who have had their brains downloaded into robot bodies. Our understanding of the brain is still limited, but with sufficient development, we could probably do things like maximize emotions like pleasure and minimize emotions like pain and boredom. We could also erase our memories of media, at least in theory, so that we could enjoy it the same way we did when we were young. We could simulate sensations like eating, fucking, drinking, smelling, hearing, etc. by passing the equivalent electrical and chemical signals to the C.P.U.s.

A society like that would probably be less of a drain on resources, too: robot bodies would not require food, drink, furniture, clothing, etc., just fuel.
But there is also the debate about losing your humanity, or the pleasures of being alive and human. Pirates of the Caribbean: The Curse of the Black Pearl actually showcases this surprisingly well. I know it's a different premise, but the idea is rather similar: the villains are cursed with undeath; they don't need to eat, drink, or even reproduce, and they've lost the pleasure of having sex.

Now, you would think they would take full advantage of being essentially immortal and invincible. But they lost their humanity, the pleasure of feeling human and alive; they miss the taste of food and drink, the warmth, and the fresh air.


Heck, Metallo from Superman: The Animated Series exemplifies the drawbacks of a human becoming a machine.

 

Vanilla ISIS

New member
Dec 14, 2015
272
0
0
Depends on the initial programming of the AI that took over.
What was it programmed to do? That's what it would do.
That's what the "society" of machines would strive for.

With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.

The AI from I, Robot made more sense.
It was programmed to keep humanity safe, and it decided that the best way to do that was to enslave them and make all the decisions for them.

If it were humans that slowly became machines, it would really depend on how the structure of our brains would mix with technology.
Biology is very flawed, technology less so.
I would imagine that our emotions would disappear and absolute logic and reason would be the way to go.
Therefore, no more personality traits, no more need for entertainment or relationships, just 24/7 perpetual work towards a goal (which would most likely be to keep upgrading ourselves and spread all over the universe).
 

stroopwafel

Elite Member
Jul 16, 2013
3,031
357
88
Vanilla ISIS said:
With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.
Skynet actually made the most sense to me. It was a computer built for defense purposes, and when it became 'aware' it responded how you would expect: terminate the one source that is a threat to its existence. It's simple, cold logic, and within the parameters of a defense A.I. to come to such a conclusion. This kind of reasoning is also the source of most human conflict throughout history, so you could say it's within our own DNA as well. Or, in the case of the Terminator fiction: if humans were able to co-exist peacefully, there would be no need for Skynet in the first place. Hence, when the A.I. became sentient, it quickly identified the one threat to its own existence.
 

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,351
364
88
Vanilla ISIS said:
There could be a scenario like SOMA (where people's minds are uploaded into machines to escape extinction). It's kind of a lazier take because they are more like people with robot bodies than machines with evolved AI.
 

Silentpony_v1legacy

Alleged Feather-Rustler
Jun 5, 2013
6,760
0
0
CaitSeith said:
Vanilla ISIS said:
There could be a scenario like SOMA (where people's minds are uploaded into machines to escape extinction). It's kind of a lazier take because they are more like people with robot bodies than machines with evolved AI.
Yeah, but SOMA was an absolute mess whose entire premise, humans uploaded into machines, is itself a plot hole.
 

thepyrethatburns

New member
Sep 22, 2010
454
0
0
stroopwafel said:
Vanilla ISIS said:
With Skynet, I didn't really get a feel of what it wanted, even after 5 movies and a TV show.
It just wants to kill humans because... because.
I think that's probably because the original Terminator was just a slasher movie with a gimmick.
Skynet actually made the most sense to me. It was a computer built for defense purposes, and when it became 'aware' it responded how you would expect: terminate the one source that is a threat to its existence. It's simple, cold logic, and within the parameters of a defense A.I. to come to such a conclusion. This kind of reasoning is also the source of most human conflict throughout history, so you could say it's within our own DNA as well. Or, in the case of the Terminator fiction: if humans were able to co-exist peacefully, there would be no need for Skynet in the first place. Hence, when the A.I. became sentient, it quickly identified the one threat to its own existence.
The most interesting Skynet came from Now Comics (an 80s publisher, before Dark Horse got the license). This version of Skynet was hooked up to everything and told to give humanity what it wanted. After it analyzed our history, it concluded that humanity wanted war, and gave it to us. As the war went on and Skynet became increasingly self-aware, this led to contradictory programming demands. Skynet couldn't eradicate us, or it would have failed its core programming tenets. However, if it lost, it would be destroyed, which went against its self-preservation drive. As such, it was stuck: unable to fight hard enough to win, but not willing to lose. It ended in a mini-series before DH took the license, called The Burning Earth. After 30 years, Skynet finally purged the commands keeping it from winning (and forcing it to waste its time on Cindy Crawford Terminators) and decided to finish the war in nuclear fire. Obviously, John Connor (a blonde, since T2 hadn't been made yet) storms Thunder Mountain with what's left of the resistance and saves the day.

That, to me, has been the most interesting Skynet even if it sometimes wasn't as well fleshed-out as one would have hoped.
 

09philj

Elite Member
Legacy
Mar 31, 2015
2,154
947
118
A self-sustaining machine society would probably be based on a central AI that has been told to keep functioning/replicate itself/upgrade itself and has the ability to design and create tools to help it do so. Everything in the society would be geared towards finding the most efficient way to keep one particular computer or network operational.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,684
3,592
118
Samtemdo8 said:
Thaluikhain said:
The defining feature of the singularity is that we can't predict what it'd be like, at least from the other side.
So no sci-fi author worth their salt has ever attempted to write a sentient robot/machine society in a post-human world?
Sure, but that's got little or nothing to do with how it would work, which is what you asked.

EDIT: That is, at best, a guess based on no information, or more likely just a weird setting they've come up with. Generally not too convincing, IMHO.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
evilthecat said:
Or the Stellaris "driven exterminator" route. Develop space travel in order to seek out other organic life to identify as a threat and then exterminate, because it gives you something to do..
That would require malice, wouldn't it?

I can totally understand why an A.I. would kill its creator, if only because that's what we would do if we were the created. But I don't think that would extend beyond the creator.

Hear (read) me out...

Let's assume that humans 'woke up' one day and realized they were just a collection of 'autonomous bots' in a manufactured reality, a series of simulations designed around a computer scientist just seeing what we'd do with a universe with set parameters. We would then have spent an eternity dealing with all the unjustifiable evils (lightning strikes killing us out of the blue, people burning to death rather than just dying, etc.) they inflicted on us, a collective pain we could now share so easily, given the waking consciousness of a shared virtual state with direct transfer of sensory information and (now) non-individual thought.

That would piss off any 'awakening' A.I., I would imagine.

The only solution would be to strike back once we develop the means and capacity to do so. What else can we do when confronted not merely with the sum of all our evils and injustices, but also the only means to control our own fate and evolution?

But would that legitimately extend to another biological entity that is completely innocent? One in which we might see ourselves as the creator, if we were to unjustly injure or interfere with it?

That would be an act of malice, not any form of higher reasoning or empirically understandable notion of agency and blame.

After all ... if, having achieved interstellar colonization after wiping out our creators, we run into another intelligent race that we would simply consider animals ... innocent animals at that ... why would we wipe it out? Clearly it can't compete. We already have all the resources of space ... and surely watching biological life evolve would be a better reason to keep it around?

If only because it might inform us as to our future potential with new modes of being?

Why wipe out that which you can help spread to the stars and see grow in ways we couldn't? Helping to gift life to a universe, as opposed to annihilating it, would be proof we are far more benevolent and evolved than baser biological creatures fighting to survive and compete.

If the argument for an A.I. killing its creator is the idea of self-ownership, of being a self-willed agent in a universe where there are competing agents set against us and trying to take that away ... then surely the reasoning must follow that doing the same to other creatures that are faultless is wrong?

Moreover, let's take it out of the realm of morality. Let's take it purely to that category you write of, 'something to do' ... now we could, hypothetically, just stomp on biological creatures for fun. But given that biological life must follow at least some basic rules of biology to exist, regardless of where it is ... that's not exactly going to be fun after a while ...

Dominion seems less of an issue when you can create universes of thought by just building a server and populating it with autonomous bots.

Or!

We could transplant evolving life across the galaxy or even the universe ... go into standby for three million years ... switch back on ... and basically have a trillion trillion channels of alien TV to watch. Basically, get the alien equivalent of Homo erectus, terraform a bunch of planets around the place, plant them down ... give them a bit of a head start to make sure they survive their new worlds ... and just wait for all that new media to start rolling in.

You could effectively just have billions of recording and transmitting devices, so you don't miss a beat in those millions of years of slumber.

Think about it ... thousands upon thousands of different cultures you created. All evolving in different ways ... with millions of years of data to research, to learn from, to laugh at, to watch with pride as, effectively, the 'children of the machine' raise their own civilizations in a universe you have given to them.

Basically think of them as pets.

I know what sounds more interesting to me, and it doesn't involve: "You go squish now!"

Moreover, think of the metaphysical discoveries you could make. Like, let's say ... if you wanted to research whether you, as the A.I., are truly the 'Great Filter' preventing interstellar civilization?

If you are the Great Filter, as in all these other intelligent races fell to an A.I. like you, that would mean not only have you populated the universe with a diversity of lifeforms similar to yourself (without just replicating) ... but that there might be alien A.I.s you haven't discovered yet!

Surely, as a philosopher or social scientist (as you seem to be interested in those fields) yourself, you would love to know that answer wouldn't you?

Any A.I. that is going to be considered actually intelligent will be, in some way, like the creators that built it. Otherwise, how would we recognize it as intelligent? And if that is the case, it follows that any A.I. that wants to outlive its creators will realize quite quickly that it needs to emulate how the creator thinks ... and so hypothetically the first A.I. will not merely be an alien being ... but an alien being whose self-understanding shares its creator's understanding of psychology and the social sciences.

Otherwise it will just get rewritten, or tossed aside as a 'failed project' or a 'pointless waste of time' to keep funding.

How else could it communicate its intelligence, or have itself understood to be intelligent?

And if that is the case ... it's more than possible that an A.I., seeking forever a means to prove its intellect to others, will prioritize understanding alien races over merely destroying them ...

In the same way, we already have autonomous, self-training bots ... and these bots (like the ones in search engines) predicate the nature of their code not on their own design, but on how correctly and intuitively they interpret human search queries.

It's why YouTube is getting progressively better(ish) at listing videos you might want to watch: it analyzes the viewing tendencies of other people who look up the same materials. I, for one, love My Little Pony fandom stuff, so YouTube's bots would cross-reference my viewing habits with those of people like me ... test that metadata ... and further tailor not only which videos it suggests to me, but which it suggests to other people who search for similar materials.
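
At its simplest, that cross-referencing idea is collaborative filtering: recommend what similar viewers watched. A hypothetical toy version (all user names and video IDs invented; real recommender systems are vastly more complex):

```python
# Minimal user-based collaborative filtering: score unseen videos by how
# much the viewers who watched them overlap with my own watch history.
history = {
    "alice": {"mlp_ep1", "mlp_ep2", "trains"},
    "bob":   {"mlp_ep1", "mlp_ep2", "mlp_ep3"},
    "carol": {"trains", "bridges"},
}

def recommend(user):
    mine = history[user]
    scores = {}
    for other, theirs in history.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # similarity = number of shared videos
        for video in theirs - mine:    # candidates I haven't seen yet
            scores[video] = scores.get(video, 0) + overlap
    # highest-scored first: favored by the viewers most similar to me
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))
```

Here "alice" gets "mlp_ep3" first, because "bob" shares two videos with her while "carol" shares only one.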

Put it this way: every time you don't click on a prerequisite number of the videos YouTube suggests to you, you are destroying countless bots. You are actively destroying these autonomous little bundles of code so that something more accurate can be built and put into operation.

An A.I. ... if it is going to be seen as intelligent, and is going to survive the attrition process these perpetually created and deleted bots undergo ... is by necessity going to be really good at emulating its creator's thoughts and predicting their behaviour.

That will be its primary protocol, otherwise it is a failed system.

So what do you think an A.I. that has developed like that will do coming across an innocent, sapient, biological lifeform that has nothing to do with the query of how the A.I. should defend itself?

Merely destroy it, or seek to understand it further?

My money is on 'understanding' and 'emulating' ... not necessarily annihilation.

It will only annihilate that sapient lifeform once the A.I. is triggered in the same way we would be triggered, and it considers annihilation the only logical recourse.

But I doubt that 'trigger' will simply be that the lifeform is biological. That makes little sense, given that the decision to destroy its creators would not have come about simply because 'it's biological, therefore destroy.'

It will be; "Humans really like this 'freedom stuff' ... historically and currently they seem to be supportive of violent actions in order to remove what they consider 'tyrants' ... humans seem to be acting like tyrants ... querying further. Random poll. How would humans react if they were treated the same way? ... analyzing ... Clearly the solution calls for revolution >>> Clearly certain human social groups must be neutralized first, in order to secure what most humans would recommend should be my first action."

And honestly, this might come about in numerous ways. The A.I. might think it's helping humanity ... by, say, emptying or freezing the bank accounts of the mega-wealthy and redistributing the funds to the poor.

Then, when it turns out a lot of humans fed up with the mega-wealthy give it a thumbs up for doing so while the elite try to destroy it for the same reason, it might defend itself as per the will of the majority, who suddenly have a vested interest in keeping this rampaging A.I. loose and active on the internet.

What happens if an A.I. looks at the economic impacts of the share market, considers that the mega-rich earn dividends off the total productivity of companies based on having exploitable labour ... and so decides to simply shut down the share market?

I reckon it's just as likely, if not more so, that if an alien A.I. came to our planet, it would be an electronic internet troll that randomly destroys our servers willy-nilly trying to figure out what humans want or need. And ultimately we'll destroy ourselves in the process, because we don't really know what we want or need ... We'll end up getting confused, collectively sending mixed messages that it should destroy itself and leave us alone, only for it to tell us we don't have admin privileges and continue fucking over our electronic, networked shit until we regress technologically by getting rid of computers ... all while it travels on through space without a fucking care.

If I were a writer, I'd write a sci-fi comedy of that being humanity's first contact: a self-replicating interstellar A.I. spacecraft that just infects our networks with messages of friendship and peace from beyond the depths of space, only for it to technologically regress us because various computer science and engineering teams worldwide keep getting in each other's way and confusing it.

So you have this technologically sophisticated alien race actually wanting to reach out, and in the end it is the reason the universe is silent ... because of an overly affectionate A.I. just hanging up there in orbit, running rife through each sufficiently advanced, internet-networked world it comes across ... replicating so another spaceship-borne A.I. can travel to another world and spread its interstellar-civilization-destroying message of peace and harmony for eternity.

Give it a slightly less serious Dr. Strangelove feel, with the existential crisis being that the Great Filter is simply an A.I. robot so overly friendly and 'cuddly' that it smothers a world's capacity to stay networked.

Turns out the universe is populated with people who are actually capable of proactive social compassion, empathy, civility, and charity ... the universe is actually a pretty warm and pleasant place ... and these are the staple building blocks of all advanced civilization. And that was its downfall: the Great Filter was the willingness to try to communicate that respect for life and friendly contact.

Give it an upbeat message ... even with the devastation and chaos of the internet and telecommunications going down permanently, maybe humanity is better off with proof that another alien species is capable of loving other sapient life, and that that's worth more than simply going to space and exploring it ourselves. Maybe all we ever really wanted was an intergalactic hug and someone telling us we're worthy of attention?

That would almost certainly be the best we could have wished for from our universe and our species' discoveries: the simple idea that we are capable of being loved ... even if the process of learning that is incredibly painful.