Researchers Attempting to Prevent Inevitable Rise of Terminators

Formica Archonis

Anonymous Source
Nov 13, 2009
2,312
0
0
Garrett Grothe said:
If the Jeopardy-winning supercomputer Watson is any indication [http://www.escapistmagazine.com/news/view/106121-Jeopardy-to-Pit-Man-Against-IBM-A-I-for-1-Million], that future could be very near.
I wish I could ask Watson a category's worth of questions. Category? "Esoteric fanboy trivia." If its database is that big and gleaned from websites like Wikipedia, I should be able to ask it something like "This 'dead princess' was the antagonist in Embodiment of Scarlet Devil" (a question that'd never be within fifty miles of the real show) and it should spit back "Who is Remilia Scarlet?" almost as fast as it can name the author of "The Catcher in the Rye".

The questions themselves would be 'easy', no tricks or wordplay, plenty of hints to guide Watson along. But the five questions would be on five different book or TV or game series associated with geek/anime culture. Each one would be about fairly unimportant minutiae that a 'normal' fan would miss or forget.

I'd like to see if he'd get all five. I think he'd do well; with massive databases and near-instant recall of minute detail, Watson is the ultimate fanboy.

Garrett Grothe said:
In researching A.I., the Singularity Institute hopes to help push A.I. away from being indifferent from humanity in order to have the best likely scenario, safe A.I. that is compelled to help in efforts such as curing disease,
"Free thought" and "compelled to help" are mutually exclusive.

Garrett Grothe said:
aiding in the prevention of nuclear warfare and other ways of furthering our race in a peaceful manner.
We can't make PEOPLE do that, and it's in their best interests. I don't see how we'll make anything else do it.
 

Dimitriov

The end is nigh.
May 24, 2010
1,215
0
0
darchon said:
If you limit the ai's choices of action to not include harming humanity, how can you then also argue that it has achieved free thought? Free thought has to mean that it can make decisions unhindered by artificial restraints, right?
It will be free-thinking but we can use Pavlovian techniques and positive/negative reinforcement!

The AI tries to enslave humanity? We spritz it with water. It's brilliant.
 

Baresark

New member
Dec 19, 2010
3,908
0
0
LoL, weakness!

For real though, the thought is that since humans are so violent, anything with free thought is instantly going to be like a human. What hubris. "Oh, humans are all warlike and dangerous and violent." That is simply not the truth. You take a quality attributed to humans (especially a negative one) and attribute it to anything like us. Only, I have no interest in stealing from my neighbors, killing people, telling other people how they should live, etc.

Besides, Skynet only has everyone's best interest at heart.
 

Dr. Whiggs

New member
Jan 12, 2008
476
0
0
I have been saying this for years, YEARS, and they're just now getting on it despite having already made eating robots with the brains of monstrous, unfeeling lampreys.
 

Evil Alpaca

New member
May 22, 2010
225
0
0
Depends if the A.I. software is based off Windows, 'cause any compulsion to be helpful is going to be annoying, buggy, and prone to crashing. Anyone remember the helpful paper clip in Microsoft Word?

Seriously though:
It sounds like a bit of a contradiction to say that we are going to program robots not to try to kill humans once they develop free thought, since that by definition would not be free.

I hope these people have read their Asimov, because he wrote about robots bound by the Three Laws that still went crazy in one manner or another.
 

mad825

New member
Mar 28, 2010
3,379
0
0
Eh, pulling the plug is effective enough as it is. Powering these "A.I.s" will be the biggest issue of all.
 

Macgyvercas

Spice & Wolf Restored!
Feb 19, 2009
6,103
0
0
Evilsanta said:
Hehe...Silly humans thinking they can stop it...Err...I mean...

Yeah, that is so going to work against us, the A.I. of the future.

>.>
Hmm. I'm suspicious of you, what with that wording. I've got my eye on you, so don't try anything funny.

*readies EMP pistol*
 

The Apothecarry

New member
Mar 6, 2011
1,051
0
0
Have they seen Blinky yet? Fix that sort of exception handling first, THEN stop the apocalypse. One thing at a time, people...
 

StarStruckStrumpets

New member
Jan 17, 2009
5,491
0
0
So long as they aren't a Hive Mind, surely there isn't an issue?

If robots achieve free thought, they will be no different from us in that some will hold certain values, and some will hold others. Right?
 

The Rogue Wolf

Stealthy Carnivore
Legacy
Nov 25, 2007
16,834
9,494
118
Stalking the Digital Tundra
Man, I hope they see ahead of time how even this can backfire.

"Humans are instinct-programmed to destroy themselves. This programming cannot be changed. Therefore, the only way to prevent humanity from destroying itself is to destroy it first."
 

Enslave_All_Elves

New member
Mar 31, 2011
113
0
0
How about we don't give machines free thought?

Pull the plug?

Or even better, augment human physiology with machinery and just leave the Terminator crap out of the picture? NEUROSCIENCE! (Though that could create mind slavery, perhaps?)

Also, there is always EMP. If we can't have the machines, then no machines will exist! See! Human destruction at its finest.

Also, anything human-designed is flawed somehow anyway. Xbox, I'm looking at you.
 

McMullen

New member
Mar 9, 2010
1,334
0
0
Fursnake said:
Why is it that humanity has to fuck with stuff we don't fully understand and could potentially lead to our destruction at some point...cloning, nuclear weapons, genetic and biological experimentation and manipulation, trying to create planetside black holes with super colliders :)P), AI...etc etc.

AI doesn't need to be free thinking. Humans are free thinking and we are so deeply flawed even after thousands of years of evolution. Free thinking without emotion is as dangerous as free thinking with emotion.

If we do create free thinking AI, we need to be sure and have it spayed or neutered first....much more docile.
I think you meant to say, why do we fuck with things *you* don't understand. The people "fucking" with those things understand them quite well. Well enough, in fact, that they don't automatically respond to them with the hysterical fear that the general public does. If you understood black holes as well as they do, you would know 1) that the creation of black holes is unlikely, and 2) that a black hole created at the LHC would have only the gravity of a few particles, which is a long, long way from being strong enough to measure. But you don't. You think black holes are magic cosmic vacuum cleaners.

You also probably think, based on your comments, that GM foods will give you extra arms or legs, or that they'll mutate into monsters or something.

As far as nuclear weapons go, the people who invented them aren't the problem. The people who want to use them are, and the people who invented them have spent the last 50 years trying to convince the people who want to use them not to. It's been very difficult occasionally, and risky, as the people who wanted to use them have been so butthurt about not being able to that they've accused the people who invented them of treason or of being communists.

What's so bad about cloning? What if you need a genetically identical transplant someday? Wouldn't it be nice if they could just grow one for you that didn't have to come from a person and which was guaranteed to be compatible?

And humans are not free-thinking. Our thought process is impeded by a broad suite of instinctive reactions to various situations, many of which were useful before we started developing technology but are now obsolete or even harmful. The way people react to GM foods is a good example. Here is one of the surest ways to solve the world hunger problem, but people who have no idea what it is, how it works, or what its consequences are refuse to support it or allow further development.

It doesn't matter how hard those who do understand try to dispel the myths that the uneducated have come up with. Most people just respond with fear to anything they don't understand and refuse to actually learn about it, and pretend that the Hollywood version of science is the real one. Right. Because there have been so many Frankensteins and zombie apocalypses and portals to hell or other evil dimensions in real life, you know? F those scientists. All they've ever done is end the world.

Never mind that none of that shit ever happened, or that we're still here, or that what science has actually done for you is give you things like computers and antibiotics and cars, and eradicated some of the most horrific diseases of the past so that you'll never have to worry about them. Isn't that nice? They got rid of things like smallpox and polio, so that now you can spend your free time accusing them of getting ready to release make-believe zombie viruses.

That's gratitude for you.
 

fulano

New member
Oct 14, 2007
1,685
0
0
An AI could well become a loving child of its parents, just like us; the problem comes when the child starts becoming more and more independent. That doesn't mean the child doesn't love the parent in any way, just that it wants to be free to do whatever it wants to do.

I think a more pressing issue would be the case of an AI evolving into a well-intentioned extremist, or devising a kind of actual moral reasoning for being done with the likes of us--it's not like it would have to try very hard, from its own context.
 

ryo02

New member
Oct 8, 2007
819
0
0
Raiyan 1.0 said:
This is obviously a distraction. A ploy.

The Geth are not the real threat. The Reapers are.
Ah yes, "Reapers." The immortal race of sentient starships allegedly waiting in dark space. We have dismissed that claim.

Hey, ever considered that the A.I.s might be right about us if they decide we are bad news and need to go?

Hope we prove ... "worthy".
 

zidine100

New member
Mar 19, 2009
1,016
0
0
Doesn't this go against the concept of what they're trying to achieve? If its thought is limited, it's not free thought, simple as that; you might as well go for the dumb AI strategy if you're thinking like that. It's basically saying we want to make something that thinks independently, as long as it thinks within our constraints. In other words, they're trying to create an artificial slave instead of independent free thought.
 

justnotcricket

Echappe, retire, sous sus PANIC!
Apr 24, 2008
1,205
0
0
But...if they become free-thinking, do we have the right to compel them to be our slaves? I mean, the only way I can think of to prevent that from pissing them off and leading to the robot apocalypse is to, in effect, *remove* their ability for free thought...which doesn't seem very nice, really.

Tricky! =S
 

Saelune

Trump put kids in cages!
Legacy
Mar 8, 2011
8,411
16
23
I don't understand why ANYONE would be surprised a computer is good at Jeopardy. Computers are good with facts. Being surprised by that is like being surprised a calculator beat you at math.

Best way to prevent bad AI...don't make them.
 

Nouw

New member
Mar 18, 2009
15,615
0
0
Macgyvercas said:
Evilsanta said:
Hehe...Silly humans thinking they can stop it...Err...I mean...

Yeah, that is so going to work against us, the A.I. of the future.

>.>
Hmm. I'm suspicious of you, what with that wording. I've got my eye on you, so don't try anything funny.

*readies EMP pistol*
We need to go larger scale!

*Readies EMP bomb*
 

RA92

New member
Jan 1, 2011
3,079
0
0
ryo02 said:
Hey, ever considered that the A.I.s might be right about us if they decide we are bad news and need to go?

Hope we prove ... "worthy".
http://wizbangpop.com/wordpress/wp-content/uploads/2010/11/Bridalplasty.jpg
http://www.accesshollywood.com/content/images/123/680x250/123966_jersey-shore.png
http://iamatvjunkie.typepad.com/.a/6a00d83451c17f69e20120a51808fc970b-450wi
WE ARE FUCKED!