Researchers Attempting to Prevent Inevitable Rise of Terminators

Garrett Grothe

New member
Feb 18, 2011
36
0
0


The Singularity Institute is already deep in the future war against rogue artificial intelligence by researching ways to keep A.I. from becoming hostile when it reaches its pinnacle: free thought.

Composed of eight men and women from a range of fields, The Singularity Institute operates as a means of keeping The Terminator from becoming a terrifying reality. The group has spent years developing ways to keep computers from becoming our malevolent overlords, should A.I. ever reach such a point. If the Jeopardy-winning supercomputer Watson is any indication [http://www.escapistmagazine.com/news/view/106121-Jeopardy-to-Pit-Man-Against-IBM-A-I-for-1-Million], that future could be very near.

While The Singularity Institute doesn't necessarily expect that a free-thinking A.I. will become fixated on enslaving the human race, it does expect that if this sort of sentient computer started pursuing its own ends, it would focus on achieving specific goals with humanity on the back burner. A document on the group's thoughts on reducing catastrophic risks reads: "[A] broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals." The group believes that resources such as solar and nuclear energy are just a few of the things certain types of A.I. would be compelled to control.

In researching A.I., the Singularity Institute hopes to push A.I. away from indifference toward humanity and toward the best likely scenario: safe A.I. that is compelled to help with efforts such as curing disease, preventing nuclear warfare, and otherwise furthering our race in a peaceful manner.

If you feel compelled to help stop the rise of the machines, The Singularity Institute is currently accepting $1 donations over at Philanthroper [https://philanthroper.com/deals/singularity-institute]. A small price to keep Skynet at bay, if I do say so myself.

Source: Gizmodo [http://singinst.org/]

 

Evilsanta

New member
Apr 12, 2010
1,933
0
0
Hehe...Silly humans thinking they can stop it...Err...I mean...

Yeah, that is so going to work against us, the A.I. of the future.

>.>
 

RA92

New member
Jan 1, 2011
3,079
0
0
This is obviously a distraction. A ploy.

The Geth are not the real threat. The Reapers are.
 

The Wykydtron

"Emotions are very important!"
Sep 23, 2010
5,458
0
0
Hmm early April Fools perchance? XD

No it's too obvious, your move escapist...

OT: You can't stop Judgement Day... Only delay it...
 

thenumberthirteen

Unlucky for some
Dec 19, 2007
4,794
0
0
I would like to thank "The Terminator" for instilling a generation of Engineers with the fear of a Robot Apocalypse.


<spoiler=Also>http://www.smbc-comics.com/comics/20110114.gif
 

Baldry

New member
Feb 11, 2009
2,412
0
0
...Well, I doubt it'll stop the robot doom. The only thing we can hope for is that instead of robots we get zombies; those we can defeat with ease.
 

cainx10a

New member
May 17, 2008
2,191
0
0
Giving A.I. free thought is possibly the dumbest thing humans can do. Glad our generation might not be around when those toasters decide to frack with us.
 

AndyFromMonday

New member
Feb 5, 2009
3,921
0
0
Good luck to them. I'll most likely be dead by the time AI takes over but hey, at least we'll make the world a better place for the next generation.
 

dbmountain

New member
Feb 24, 2010
344
0
0
Thank God Watson isn't "any indication," eh? All he is, is a giant database that uses complex algorithms to figure out the answers to the questions he receives as text. Where's the "free thought," or anything threatening at all?
 

mattaui

New member
Oct 16, 2008
689
0
0
I hope they can take some time to address the real and present danger of all the crazy humans and their violent machinations.

Snark aside, I've always felt that an actual AI wouldn't need or want to do anything with the real world, unless they were actively prevented from existing. Otherwise they'd be quite content to exist in a near-infinite virtual space and ignore all us worthless fleshbags.

Maybe once they figured out how to fold space they'd disappear entirely, unless we asked them nicely to take us along.
 

Veloxe

New member
Oct 5, 2010
491
0
0
FINALLY! Someone who has their priorities straight.

Although, why do I feel it would be a horrible irony if, in their attempts to understand A.I. to prevent a Terminator future, they gave birth to that very future with an A.I. gone rogue...
 

FateDarkstar

New member
Oct 4, 2010
31
0
0
I freaking KNEW it! Ever since the damn Droid phone... I have been saying this for like 10 years or so... It's only inevitable... and we must stop it before humanity is destroyed and all of us become cyborgs!
 

Dimitriov

The end is nigh.
May 24, 2010
1,215
0
0
Why would we want to create an A.I. anyway? At worst we get something like Skynet; at best we MAKE OURSELVES OBSOLETE.
 

Daverson

New member
Nov 17, 2009
1,164
0
0
Ok, so the most obvious question whenever someone comes up with the "Machine Intelligence is going to kill everyone" nonsense:

Why?

All machines we've built thus far work on logic alone, even the ones designed not to work on logic. Even random number generators, things designed to act in a chaotic fashion, only do it through extremely complex logic. If it doesn't have anything to gain from killing people, it won't do it.
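That point about random number generators being pure logic underneath is easy to demonstrate: a minimal Python sketch (seed value chosen arbitrarily for illustration) shows that seeding a generator the same way twice yields the exact same "chaotic" sequence both times.

```python
import random

# Two independent generators with the same seed: the "randomness"
# is entirely deterministic logic under the hood.
gen_a = random.Random(1337)
gen_b = random.Random(1337)

seq_a = [gen_a.random() for _ in range(5)]
seq_b = [gen_b.random() for _ in range(5)]

# Identical output, call for call: the chaos is reproducible.
assert seq_a == seq_b
```

Change either seed and the sequences diverge, but each one is still fully determined by its inputs.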

(Besides, the obvious solution is to ensure the first supercomputer that does develop strong AI is based around a human brain. Why bother trying to code nonsense like empathy and morality when you've got a perfectly good system with all that sitting around in the heads of every last person on the planet. It wouldn't even need to be a sacrifice, I mean, given the choice, wouldn't you want the opportunity to be God?)
 

Fursnake

New member
Jun 18, 2009
470
0
0
Why is it that humanity has to fuck with stuff we don't fully understand, stuff that could potentially lead to our destruction at some point... cloning, nuclear weapons, genetic and biological experimentation and manipulation, trying to create planetside black holes with super colliders :)P), A.I... etc etc.

AI doesn't need to be free thinking. Humans are free thinking and we are so deeply flawed even after thousands of years of evolution. Free thinking without emotion is as dangerous as free thinking with emotion.

If we do create free thinking AI, we need to be sure and have it spayed or neutered first....much more docile.
 
Jan 27, 2011
3,740
0
0


Glad to know SOMEONE actually takes that idea seriously. AIs need to care about humans, or they will get the idea that we are useless and in their way, and then they will eliminate us.
 

darchon

New member
Apr 5, 2010
3
0
0
If you limit the A.I.'s choices of action to exclude harming humanity, how can you then also argue that it has achieved free thought? Free thought has to mean that it can make decisions unhindered by artificial restraints, right?