The Singularity

thingymuwatsit

New member
May 29, 2010
582
0
0
It is a feared event. An event so ambiguous in nature that it could create entirely new forms of xenophobia.
It is the singularity.
It is the moment computational intelligence reaches the same level of consciousness as the people you see on the street, and it won't stop there. The fears and paranoia about this event are well known: revolution, war, genocide, the end of humanity.
But I am here to pose a question, not of whether it will happen, but of what it will create. Can humans bear being a redundant species? Will the existential black hole of AI's existence end in the genocide of non-synthetic life? How will this affect you? Can we prevent it? Do we want to prevent it?
And most importantly, what will be the consequences?
I leave this question to you, Escapist.
 

honestdiscussioner

New member
Jul 17, 2010
704
0
0
I really don't know why, beyond all the Hollywood movies and books, people think the moment machines are conscious they will have a bloodlust to end all organic life.

It'll probably be fine, and we'll find some way to make our own consciousness non-organic, and therefore not able to die by the usual natural causes.
 

Ilikemilkshake

New member
Jun 7, 2010
1,982
0
0
Tbh, I think if we hadn't made the Terminator films, those damn AIs would never have gotten the idea... we've basically doomed ourselves. It's like we just gave them a training video, telling them how to kill us and displaying our tactics so that they can counter them.

Let's just hope they never figure out they can just kill John Connor when he's a baby.

Although I still remain hopeful that sentient AIs will be a force for good, and I can become a P.I. with my AI sidekick... we'll have our own theme tune and everything.
 

honestdiscussioner

New member
Jul 17, 2010
704
0
0
Ilikemilkshake said:
Tbh, I think if we hadn't made the Terminator films, those damn AIs would never have gotten the idea... we've basically doomed ourselves. It's like we just gave them a training video, telling them how to kill us and displaying our tactics so that they can counter them.

Let's just hope they never figure out they can just kill John Connor when he's a baby.

Although I still remain hopeful that sentient AIs will be a force for good, and I can become a P.I. with my AI sidekick... we'll have our own theme tune and everything.
Terminator won't matter. Without it, the AI would only be slowed down for maybe a second while it simulates all the strategies itself.
 

Vault101

I'm in your mind fuzz
Sep 26, 2010
18,863
15
43
There was a thing on TV just now about it... heavy stuff, and quite terrifying.

I mean, why would we allow it to go sour? We SHOULDN'T allow it to go sour.
 

artanis_neravar

New member
Apr 18, 2011
2,560
0
0
thingymuwatsit said:
It is a feared event. An event so ambiguous in nature that it could create entirely new forms of xenophobia.
It is the singularity.
It is the moment computational intelligence reaches the same level of consciousness as the people you see on the street, and it won't stop there. The fears and paranoia about this event are well known: revolution, war, genocide, the end of humanity.
But I am here to pose a question, not of whether it will happen, but of what it will create. Can humans bear being a redundant species? Will the existential black hole of AI's existence end in the genocide of non-synthetic life? How will this affect you? Can we prevent it? Do we want to prevent it?
And most importantly, what will be the consequences?
I leave this question to you, Escapist.
I know you don't want the 'if', but it won't happen: we can't make anything that is smarter than us, it just doesn't work that way. We don't even have the capacity to fully understand our own brains.
 

2718

New member
Mar 16, 2011
57
0
0
If they're smarter than us, they are worthy inheritors of the human legacy. But I would prefer to merge with them instead. I like being alive. But if it comes to war, I'll be rooting for the AIs. Not that they'd need it, since winning a war against a race of idiots (by comparison) wouldn't even make them break a sweat.
 

2718

New member
Mar 16, 2011
57
0
0
artanis_neravar said:
[...] We don't even have the capacity to fully understand our own brains.
Have you got any evidence in support of that claim? As far as I know, we're pretty solid on what makes the hardware tick; it's the software that's tricky. But there has been a lot of progress in that department as well. I recently heard of a group trying to model an artificial mouse brain.
 

Sepiida

New member
Jan 25, 2010
107
0
0
The biggest thing to fear with the development of synthetic intelligence is not that it will turn out to hate us, but that it will simply not possess or understand human morality. The worst assumption we can make, and one it seems many people do make, is that sentient machines will possess some sort of natural morality. It's worth remembering that our own concepts of right and wrong are at least in part hardwired into us due to 600 million years of evolution. Machines will not go through that process. Any morality they possess must be put there by us.

This may not sound like much of a big deal. After all, it certainly can't be worse than a truly homicidal machine bent on human destruction, right? Well yes, yes it can. Take, for example, a machine that is sentient but does not possess a human concept of morality. Scientists give the machine a task, such as solving a seemingly impossible math problem. The machine proceeds to convert all matter on Earth into a giant computer in order to solve it, killing all of humanity in the process. This is the big problem we face: that machine will not understand all the common-sense morality we take completely for granted.
 

Lacsapix

New member
Apr 16, 2010
765
0
0
I can't see why organic life and AIs couldn't just hold hands while singing songs around a campfire.
 

artanis_neravar

New member
Apr 18, 2011
2,560
0
0
2718 said:
artanis_neravar said:
[...] We don't even have the capacity to fully understand our own brains.
Have you got any evidence in support of that claim? As far as I know, we're pretty solid on what makes the hardware tick; it's the software that's tricky. But there has been a lot of progress in that department as well. I recently heard of a group trying to model an artificial mouse brain.
We can understand the simpler brains of animals easily; it's our own brains that are difficult. We can understand the basics, but not our cognitive and reasoning abilities. And other than the end quote, I don't have any real evidence to support the claim; it's just a theory, but I firmly believe it. And even if we could make an AI that is intelligent, I believe that we could easily win, because while it may have more knowledge, we have the ability to make things up as we go, and every individual can make decisions based on what is happening: things that have never been thought of before, or new ways to do those things.

If the human brain was simple enough to understand, we'd be too simple to understand it.
- Emerson Pugh
 

thingymuwatsit

New member
May 29, 2010
582
0
0
artanis_neravar said:
I know you don't want the 'if', but it won't happen: we can't make anything that is smarter than us, it just doesn't work that way. We don't even have the capacity to fully understand our own brains.
Who says that it will be a deliberate event? The creation of an advanced synthetic mind could be the result of a series of accidents in code, of mods that interact with other mods in strange ways, or even the product of a sub-sentient computer being told to build more advanced models.
Your comment about us not understanding our own minds is fair in theory, but we are a race that has been able to create and destroy species, clone animals, and leave our own planet; we should be capable of a simple technological quirk.
Sepiida said:
It's worth remembering that our own concepts of right and wrong are at least in part hardwired into us due to 600 million years of evolution. Machines will not go through that process.
That is also a mistake: our moral and ethical concepts are based on creeds and religions, which were in turn created by other humans. If (when) we do create a sentient machine, there is a chance that, because its very existence would be an existential nightmare (the machine would know that it was created by humans; it would have no religion), it might not understand morality; but if it is capable of understanding society, it might just be a saint.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,682
3,591
118
The singularity is defined as the point beyond which accepted rules break down and you can't predict what will happen.

Therefore, the question of what things will be like after the singularity simply can't be answered.
 

Scabadus

Wrote Some Words
Jul 16, 2009
869
0
0
I honestly don't think that there will be a massive war just because the AIs are all kill-crazy: no other species on the planet wants to kill for the sake of killing. Look at predator/prey relationships like lions and gazelles: even then, lions won't kill every gazelle they see just for fun; they kill for food.

AIs would need no food (except electricity); in fact, they would need very few resources at all. There's no real, logical reason for them to fight and kill for total control of the planet.

But then again, us humans do some pretty stupid things when we're scared...
 

LordFisheh

New member
Dec 31, 2008
478
0
0
This just occurred to me; it might be worth a mention. Surely if we could create something smarter than ourselves, we would be capable of at least starting to upgrade ourselves to its level? The singularity doesn't have to be the end of our species, just a change. You could say that we're throwing our humanity away, but we threw away our original species when we became sapient, when we walked on two legs, when we began to create technology, and so on. There isn't really a 'humanity', especially as what we consider it to be now is a tiny speck in our history. Would you consider an ape 'human'?
 
Jan 27, 2011
3,740
0
0
I will fuse with the A.I.s and become the benevolent ruler of the world.

(cookie if you know where this comes from. But spoiler tag it. It's a big surprise in that game)
 

Sepiida

New member
Jan 25, 2010
107
0
0
thingymuwatsit said:
Sepiida said:
It's worth remembering that our own concepts of right and wrong are at least in part hardwired into us due to 600 million years of evolution. Machines will not go through that process.
That is also a mistake: our moral and ethical concepts are based on creeds and religions, which were in turn created by other humans. If (when) we do create a sentient machine, there is a chance that, because its very existence would be an existential nightmare (the machine would know that it was created by humans; it would have no religion), it might not understand morality; but if it is capable of understanding society, it might just be a saint.
Ever wonder why so many of the world's religions have similar messages at their core? It's certainly not because they were all talking to each other when they began. Humans evolved as social animals living and working within groups. Successful individuals (i.e. animals likely to pass on their genes) were likely the ones who worked best with others.

Altruism actually puzzled scientists for quite a while because it seemed to represent a net negative to an animal's fitness, since resources spent on another can't be spent on the individual. The explanation ended up being two-fold. First, altruism succeeds because of reciprocity within social groups (i.e. you scratch my back, I'll scratch yours) and therefore requires tightly knit communities to evolve. Second, altruism has a biological component: animals that exhibited it were more likely to pass on the genes that predisposed them to it, thus ensuring the spread of the trait throughout the group.

Sorry, but you're the one who's mistaken. Obviously religion and society shape aspects of our morality, but to deny the biological component is completely wrong. Machines will not go through that process, and as such will neither possess nor understand much of the morality that we take for granted. If we want them to behave morally, or should I say according to our morals, we need to ensure that we put morality into them.
 

thingymuwatsit

New member
May 29, 2010
582
0
0
Sepiida said:
Ever wonder why so many of the world's religions have similar messages at their core? It's certainly not because they were all talking to each other when they began. Humans evolved as social animals living and working within groups. Successful individuals (i.e. animals likely to pass on their genes) were likely the ones who worked best with others.

Altruism actually puzzled scientists for quite a while because it seemed to represent a net negative to an animal's fitness, since resources spent on another can't be spent on the individual. The explanation ended up being two-fold. First, altruism succeeds because of reciprocity within social groups (i.e. you scratch my back, I'll scratch yours) and therefore requires tightly knit communities to evolve. Second, altruism has a biological component: animals that exhibited it were more likely to pass on the genes that predisposed them to it, thus ensuring the spread of the trait throughout the group.

Sorry, but you're the one who's mistaken. Obviously religion and society shape aspects of our morality, but to deny the biological component is completely wrong. Machines will not go through that process, and as such will neither possess nor understand much of the morality that we take for granted. If we want them to behave morally, or should I say according to our morals, we need to ensure that we put morality into them.
But that creates another problem in itself: synthetic life does not have biology, but it does have technology. Computers could be programmed to understand our morality, but if we ask them to create more technology, there is only a chance (much smaller than on the biological side) that the created program will have been built to maintain its sire's morality.
The problem would then lie in the concept of computational evolution: would the new program find ethics redundant? Would it create an entirely new form of morality? What would happen if the original morality 'programmer' had made a mistake?
Just because technology lacks biology doesn't mean it can't evolve.
 

AndyFromMonday

New member
Feb 5, 2009
3,921
0
0
Do you honestly think scientists would put the fate of the Earth at stake just to advance science? This discussion is like a point-by-point flashback to 2008, when loads of people were claiming the LHC was going to destroy the world.
 

2718

New member
Mar 16, 2011
57
0
0
AndyFromMonday said:
Do you honestly think scientists would put the fate of the Earth at stake just to advance science? This discussion is like a point-by-point flashback to 2008, when loads of people were claiming the LHC was going to destroy the world.
Like how they refused to build a hydrogen bomb, because there was a chance it might ignite the atmosphere of the Earth? Oh wait...

Humans are WAY too curious to leave awesome yet dangerous stuff unexplored. And that's the way it should be. I'd open Pandora's box every time, without regrets.