Researchers Drive Computers Crazy With Frightening Results

Pinstar

They better not put this programming in elevators. While it will give them the ability to predict the very near future, most will end up disgruntled and sulking in basements.
 

kjh242

You know what my fav part is? I live 30 minutes from UT. This means that when the computer becomes self-aware, I will be one of the first in the sights of the Rail Gun UT has in their basement somewhere. Right next to the prototype fusion reactor. (lying about that last part... I hope. But they do have a rail gun.)

Quick edit: UT-A, not just any UT.
 

Spartan448

Well, it's official: sometime in the future, some IDIOT will give control of an entire research facility to a schizophrenic robot, which will go crazy and kill everyone... for science.

What next, researchers experimenting with resonance cascades?
 

Asehujiko

That's a very pretentious way of saying "turning cross-referencing off makes responses random".
 

Riobux

Yes, because the human brain operates in the exact same way a computer does...
 

McMullen

Kopikatsu said:
McMullen said:
Kopikatsu said:
I really, REALLY think someone needs to introduce these researchers to a few sci-fi movies. HAVE WE LEARNED NOTHING?!
I've just got to ask. When people post things like that, are you being serious or just having fun? Poe's Law makes it difficult to know.
A little of both. Obviously it isn't reasonable to take any movie as a predictor of the future. (For example, I doubt anyone could possibly have imagined television three hundred years ago, but lo and behold. And I mean predictions from novels and other various things, since obviously they didn't have movies three hundred years ago.)

I do think that there is a point where you say, "Wow. In an age where virtually everything of importance is stored on computers, nothing could possibly go wrong from creating an AI and literally driving it insane. Oh, hey, let's call it SHODAN! That's an awesome name that I totally didn't pull from anything."

I'm not saying that I think the network mentioned in the article will hook itself up and take over the world, but one advancement leads to another to another...

But again, I don't think the future can be predicted with much accuracy. What happens will happen, I suppose.

Edit: OT: You have to question at what point a computer gains sentience. I remember reading about an animal (I forget which) that was taught to recognize words, and many, MANY people claimed that the animal couldn't actually 'read' human languages because all it was doing was associating the symbols (letters) with the object. (Seeing the word 'Apple', for instance, then being taught that those five symbols, 'A' 'p' 'p' 'l' and 'e', put together equal a certain kind of food.) But...isn't that how humans read? We take the symbols that we've decided should be our alphabet and associate them with certain ideas or objects.

Like the poster above me mentioned, the human brain is basically just a complex group of neurons. To say that sentience could be replicated...seems feasible.
Personally, I'm reasonably sure (being one myself) that scientists are a subset of geeks, and many of us do read science fiction. I'm pretty sure we're genre-savvy enough to not make AIs or things that might become AIs and hook them up to the internet or some military weapons network.

Besides, you and many others assume that a created intelligence will have the same motivations we do, or for that matter any motivation at all. We are motivated to pursue or avoid things because there is a strong statistical relationship between the resulting behaviors and survival. A created intelligence wouldn't have that. It wouldn't want anything, fear anything, or seek anything, not even to preserve its own existence. It would simply exist. Anything more than that would require us to figure out how to give it motivation and then to actually set it up.

Sentience is an issue, but right now we don't even know what sentience is or how it arises in animals. Again, sentience without motivation is kind of meaningless. If a thing literally doesn't care about anything, it's pretty much impossible to be nice or mean to it.

Besides, most people who sound the alarms about AI fail to learn from science fiction themselves, as it is often the fear of AI that causes the conflict, not the AI itself, whether we're talking about Skynet, the machines of the Matrix universe, or the Geth. Human intelligence, with its flaws, is the problem in those stories, not artificial intelligence. And that is backed up by more than fiction; it's backed up by 10,000 years of history too.
 

gunner1905

Pinstar said:
They better not put this programming in elevators. While it will give them the ability to predict the very near future, most will end up disgruntled and sulking in basements.
Hah, a Hitchhiker's Guide to the Galaxy reference, nice
 

GonzoGamer

Three questions:
Was it running off Vista?
Did it mention cake?
Has anyone put it out of its misery yet?
 

Oskamunda

Why would anyone endeavor to make a machine that operates at higher parameters than a human mind and then deliberately drive it insane? Surely the probative value from such an experiment is far outweighed by the negative implications of Crazy Computers.

When will scientists realize that just because they "can" doesn't mean they "should?"
 

ZombieGenesis

Philip K. Dick is dead?

I honestly did not know that. Just as I said the last time this was posted here (a few days ago): it's an impressive feat, but sadly, since this is a machine, it proves one thing. The machine had to have been TOLD, in some way or form, what to do. Machines have no volition and do not make their own logical processes; they follow code. The code told it to mingle the stories and produce this outcome.

To say otherwise would be a claim of true, sentient AI. And we all know that technology just isn't there yet.
 

raankh

ZombieGenesis said:
Philip K. Dick is dead?

I honestly did not know that. Just as I said the last time this was posted here (a few days ago): it's an impressive feat, but sadly, since this is a machine, it proves one thing. The machine had to have been TOLD, in some way or form, what to do. Machines have no volition and do not make their own logical processes; they follow code. The code told it to mingle the stories and produce this outcome.

To say otherwise would be a claim of true, sentient AI. And we all know that technology just isn't there yet.
Well, that's disregarding the possibility of emergent phenomena in the system. I don't know the details of the system they used, but the purpose of simulation is to find more or less reasonable models to continue investigations.

I doubt they claim any kind of direct correlation between the processes in their existing system and the human brain, but at least they have found a model that enables further investigation of the ideas they have.

Creativity as such is difficult to define, and there are machines that can be creative, for example by automatically creating patentable inventions.

If "true, sentient AI" means "human-like intelligence", then we are a long way from it. By some definitions we already have sentient, sapient, and autonomous AI. They aren't artificial persons, however, since most machines aren't built that way.
 

Kiriona

Just as long as it makes no mention of a cake... or testing...

Looks like we'll be heading down the path of rogue technology ruling the world, despite all the movies and books warning us about it... But when it happens, let's all hold up copies of I, Robot (the book) or something and say we called it.
 

FieryTrainwreck

immovablemover said:
The computer isn't sentient; it is only mimicking the symptoms of schizophrenia. The scientists didn't "drive it mad" or any such sensationalist nonsense; they just reprogrammed it to have the same processing errors that the mind of a schizophrenic would have.

I mean, if I turn off Microsoft Word's ability to recognize spelling mistakes, have I "destroyed my computer's ability to be literate"? Have I performed a digital lobotomy on my word processor? No, I've turned off the spell-check function.

People need to stop watching science FICTION movies and treating them as prophetic warnings.
But that's exactly what makes this stuff interesting/frightening: the idea that we might create something that mimics a human being without having the appropriate emotional switches behind its behavior. I think those switches help us "draw lines".