Are we heading down the path to a robot apocalypse?

Gitsnik

Playbahnosh said:
Again, I beg to differ. Granted, it's not a grandiose scheme to make a program play 20 questions; hell, even I could program a rudimentary one, but not like 20Q. That thing is thinking for itself, and it's terrifying. Maybe not in a "robotic apocalypse" sense, but still. Playing 20 questions is not just about extensive knowledge, it's about asking the right questions in the right order and then using deductive logic to arrive at a possible answer, all in 20 questions. And that damned program is doing it! D: I played 20Q a few times, starting with easy ones and then more and more obscure ones. It got it right within 20 questions almost 80% of the time; it was uncanny. And it didn't just guess right, it guessed using seemingly totally unrelated (sometimes borderline ridiculous) questions. Freaky.
Differing is great, it keeps us thinking about how things are going. But Artificial Intelligence is not answering questions in a game (despite what they say on the site) - if the system randomly started talking about something obscure like the weather, it would be closer to AI than it is now. There is no "order" to the questions that can't be mathematically deduced - each question (which really has four possible answers) halves, or rather quarters, the possibilities as we go down the line. Combine that with weighting of each answer, and storing the scores of each individual answer (remember that the questions are not really free-form), and you have a 20Q bot. These guys have used a neural network; I could make one in roughly 24 hours out of a web page and a database - no neural netting necessary.
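That halving idea is easy to make concrete. Here's a toy sketch of a 20Q-style guesser - the objects and questions below are invented for illustration, and the real 20Q weights fuzzy answers from millions of players instead of trusting one honest oracle:

```python
# Toy sketch of "each question halves the possibilities": a tiny
# 20Q-style guesser over a hand-made knowledge base.
# All objects and questions here are invented for illustration.

KNOWLEDGE = {
    "cat":   {"alive": True,  "bigger_than_a_person": False, "man_made": False},
    "oak":   {"alive": True,  "bigger_than_a_person": True,  "man_made": False},
    "car":   {"alive": False, "bigger_than_a_person": True,  "man_made": True},
    "spoon": {"alive": False, "bigger_than_a_person": False, "man_made": True},
}

def best_question(candidates, asked):
    """Pick the unasked question whose yes/no split is closest to 50/50."""
    questions = {q for props in candidates.values() for q in props} - asked
    def balance(q):
        yes = sum(1 for props in candidates.values() if props.get(q))
        return abs(yes - (len(candidates) - yes))
    return min(questions, key=balance) if questions else None

def play(secret):
    candidates, asked = dict(KNOWLEDGE), set()
    while len(candidates) > 1:
        q = best_question(candidates, asked)
        if q is None:
            break
        asked.add(q)
        answer = KNOWLEDGE[secret][q]          # simulate an honest player
        candidates = {name: props for name, props in candidates.items()
                      if props.get(q) == answer}
    return next(iter(candidates))

print(play("spoon"))  # "spoon", found in at most 2 questions for 4 objects
```

With a perfectly balanced question set, 20 yes/no questions distinguish 2^20 (roughly a million) objects, which is a big part of why the game feels uncanny.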
Here, an excerpt from this page:

The 20Q A.I. has the ability to learn beyond what it is taught. We call this "Spontaneous Knowledge": 20Q is thinking about the game you just finished and drawing conclusions from similar objects. Some conclusions seem obvious, others can be quite insightful and spooky. Younger neural networks and newer objects produce the best spontaneous knowledge.

It cannot even use its programming to its full extent of deductive logic and virtual neural networks, but it can exceed its original programming. That is a fucking Skynet embryo right there! When it gathers enough knowledge and creates an extensive enough neural network, we will have a sentient AI on our hands. If that doesn't terrify the shit outta you, I don't know what will...
nc (netcat) exceeds its original programming by being used for more things than the original author intended. 20Q, programmed with a basic learning function (my network defence system has one of those, by the way) and given enough storage space, can keep track of everything that people put into it. The ingenuity of the programmer behind it makes it look intelligent, but it's just differentiating between pre-defined answers based on a large enough input set. "Spontaneous knowledge" can also be looked at another way: wrong. Note how there are more spontaneous knowledge counts for younger and newer networks. That's the gibberish I mentioned from my own foray into AI.
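That "basic learning function plus storage" pattern is as old as the classic animal-guessing game: a binary tree of yes/no questions that grafts in a new question whenever it guesses wrong. A minimal sketch - all names invented, no neural netting necessary:

```python
# Learn-by-losing guessing game: a binary tree of yes/no questions with
# objects at the leaves. A wrong guess adds one distinguishing question
# to the tree -- "learning" with nothing but storage and IFs.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text, self.yes, self.no = text, yes, no  # leaf if yes is None

def guess(node, oracle):
    """Walk the tree; `oracle(question)` answers True/False."""
    while node.yes is not None:
        node = node.yes if oracle(node.text) else node.no
    return node.text

def learn(root, oracle, actual, new_question, actual_is_yes):
    """Graft a new question above the leaf the walk ended on."""
    parent, node = None, root
    while node.yes is not None:
        parent, branch = node, ("yes" if oracle(node.text) else "no")
        node = getattr(node, branch)
    old_leaf, new_leaf = Node(node.text), Node(actual)
    replacement = Node(new_question,
                       yes=new_leaf if actual_is_yes else old_leaf,
                       no=old_leaf if actual_is_yes else new_leaf)
    if parent is None:   # the whole tree was a single leaf
        root.text, root.yes, root.no = replacement.text, replacement.yes, replacement.no
    else:
        setattr(parent, branch, replacement)

tree = Node("Is it alive?", yes=Node("cat"), no=Node("rock"))
facts = {"Is it alive?": True, "Does it bark?": True}
oracle = lambda q: facts.get(q, False)
print(guess(tree, oracle))   # "cat" -- wrong, the player thought of a dog
learn(tree, oracle, "dog", "Does it bark?", actual_is_yes=True)
print(guess(tree, oracle))   # "dog"
```

Every wrong guess makes the tree one node smarter, which from the outside looks a lot like "spontaneous knowledge".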

What I can't stress enough is the image of AI compared to the actuality of it. If 20Q could go and read a Doctor Who fansite (for example) to learn the answers to the questions as a baseline (this involves a couple of things, primarily natural language interpretation) and then use those as the basis for a game of 20Q with the audience, then it would be closer to AI than not, as opposed to the other way around right now - but still far from it. AI involves interaction with the environment as much as it does "thinking" - this is the definition of our intelligence, and the measure we have for all beings.

Nothing we've seen yet has passed the Turing test (the only true measure we have of AI) - my own software was only ever lucid when copying my own journal notes, or for maybe three lines in 90.
Ah, the Loebner Prize. I was expecting that to pop up in this thread. The test: a natural observer communicating through a terminal, using natural language, can't decide with absolute certainty whether it's a program or a human at the other end of the prompt. Well, 20Q certainly wouldn't pass, but there are many conversational AIs that came freakishly close. A.L.I.C.E., for example, or Elbot, which deceived three judges in the human-AI comparison test. I think we are not far from an AI actually getting the silver Loebner Prize (text-only communication). If that happens, the gold might not be far off, just a matter of adding fancy graphics and a speech engine.
Pretty much exactly right. Remember, though, as a counter-comparison, that some of the humans were misjudged as robots.

Asimov's laws are flawed. Maybe flawed is the wrong word, but they need to be enhanced somehow. (Perfect example: I, Robot: "Save the girl! Save the girl!")
Asimov actually upgraded and rephrased his laws of robotics many times; the most noteworthy addition was the "zeroth" law, which says a robot may break the three laws if it serves the betterment of mankind. Other authors added a fourth and fifth law as well, stating that a robot must establish its identity as a robot in all cases, and that a robot must know it is a robot. Of course, you can't take these laws as absolute and guaranteed; they're more like directives. But it's a start.
Agreed.
 

Playbahnosh

Gitsnik said:
The ingenuity of the programmer behind it makes it look intelligent, but it's just differentiating between pre-defined answers based on a large enough input set. "Spontaneous knowledge" can also be looked at another way: wrong. Note how there are more spontaneous knowledge counts for younger and newer networks. That's the gibberish I mentioned from my own foray into AI.
Well, yes. If you dissect the programming of 20Q, or any of the other AI bots around, you'll only find a bunch of loops and IFs stacked on top of each other. These programs don't think per se, they're only faking it. But...

If you think about it, everything thinking-related could be broken down into base code. If we had enough time and resources, we could make an insanely huge program that can mimic human thinking down to the letter using only base logic. Some ANDs here, a few XORs there, and *poof*, a digitized human brain. Granted, with today's technology, it would take an enormous amount of resources and a few hundred years to build a brain-bot that can think on the level of a toddler, but it would work. It would think.
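For what it's worth, gate-stacking is exactly how real hardware is built. A toy illustration - a half adder made from one XOR and one AND; stack enough of these and you get an ALU, though whether stacking ever gets you a mind is the whole debate:

```python
# The "ANDs here, XORs there" idea in miniature: a half adder built
# from two primitive gates. Composing simple logic yields arithmetic;
# the (enormous) leap of faith is that composing far enough yields thought.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two bits: XOR gives the sum bit, AND gives the carry."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```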

Remember, though, as a counter-comparison, that some of the humans were misjudged as robots.
I had a funny run-in with Elbot the other day. I went to the website to check it out and started a conversation. I asked Elbot about the Loebner Prize and the Turing test, and it said there are even humans who could not pass the Turing test. The irony, it was too much... :)
 

Gitsnik

Playbahnosh said:
Gitsnik said:
The ingenuity of the programmer behind it makes it look intelligent, but it's just differentiating between pre-defined answers based on a large enough input set. "Spontaneous knowledge" can also be looked at another way: wrong. Note how there are more spontaneous knowledge counts for younger and newer networks. That's the gibberish I mentioned from my own foray into AI.
Well, yes. If you dissect the programming of 20Q, or any of the other AI bots around, you'll only find a bunch of loops and IFs stacked on top of each other. These programs don't think per se, they're only faking it. But...

If you think about it, everything thinking-related could be broken down into base code. If we had enough time and resources, we could make an insanely huge program that can mimic human thinking down to the letter using only base logic. Some ANDs here, a few XORs there, and *poof*, a digitized human brain. Granted, with today's technology, it would take an enormous amount of resources and a few hundred years to build a brain-bot that can think on the level of a toddler, but it would work. It would think.

Remember, though, as a counter-comparison, that some of the humans were misjudged as robots.
I had a funny run-in with Elbot the other day. I went to the website to check it out and started a conversation. I asked Elbot about the Loebner Prize and the Turing test, and it said there are even humans who could not pass the Turing test. The irony, it was too much... :)
If you're ever bored, have a good solid think about what it is that grants humans a "soul" on top of their "programming" - then go have a chat with a priest about it.

Something else to think about: we do create AI already. When a mother and a father love each other very much... magic happens... then they spend the next few years training a mimic-capable learning-difference engine in their basic knowledge. There are some inherent programs - I myself have violent tendencies and strong physical genes that leave me almost identical to my birth father at this age - but the basics are there. Writing a robot to do the same thing isn't as hard as it seems*; it's just that everyone I have ever seen expects the AI to be operational within a short time frame, rather than the years of investment we put into AI version 1 - The Human Baby.

If you spend enough time working with these things, your head will explode. There is also a joke about it that I always remember (in paraphrase):

"Anyone who works a significant amount of time in the field of AI research will come to believe, irrefutably, that there is a God."

Edit: * I mean theoretically. I had real trouble doing it even when I invested a couple of months in nothing else (including forgetting to eat and sleep).
 

StarStruckStrumpets

I have a simple idea that will indicate when the robot apocalypse is coming. Give a robot orders that it will follow, and we are fine; when it asks "Why?", then we are doomed. The moment robots start to question their purpose... they will become like us, aside from the fleshiness, obviously.
 

Playbahnosh

Gitsnik said:
If you're ever bored, have a good solid think about what it is that grants humans a "soul" on top of their "programming" - then go have a chat with a priest about it.
It would be pointless. I have my own idea about the "soul" thing, the priest has his own, and trying to change each other's views would be impossible, since I don't believe in God, and the priest has his beliefs and dogma to stick to. I believe the "soul" is just a collection of things we can't explain about the human psyche yet. From demons invading people, we arrived at modern medicine in just under a millennium. I think our preconceptions about "souls" will change as well as time passes and technology advances.

I believe more in quantum physics than in some incomprehensible deity who, for some utterly ridiculous reason, has first-century human properties. I think we created God in our own image, to represent natural phenomena we couldn't explain back then (that is, mostly everything). I think, on the level of the building blocks of the universe - atoms, quarks, hadrons... whatever - we are all the same, we are made of the same stuff. There is no difference between the hydrogen atoms in the air, the hydrogen atoms in the sea, and the hydrogen atoms most of your body consists of. But if that's true, it means our brains consist of the same matter. And if that's true, and we build a robot identical to us in every sense, with the same thinking capacity, it will be the same as us. Has your mind exploded yet?! ;)

Something else to think about: we do create AI already. When a mother and a father love each other very much... magic happens... then they spend the next few years training a mimic-capable learning-difference engine in their basic knowledge. There are some inherent programs - I myself have violent tendencies and strong physical genes that leave me almost identical to my birth father at this age - but the basics are there. Writing a robot to do the same thing isn't as hard as it seems*; it's just that everyone I have ever seen expects the AI to be operational within a short time frame, rather than the years of investment we put into AI version 1 - The Human Baby.
That's exactly what I was thinking about. The brain is just a very sophisticated biochemical machine. It uses electricity and certain chemicals to relay information, and uses a form of decentralized computing with certain distinguishable centers for different functions. It has memory, it has data storage capability, it has the ability to sort said data... etc. We are still far from completely understanding how our brain works, but if and when we do, that will be the day God dies, because people will realize God never existed at all. Just a figment of our imagination: a compilation of erratic data our built-in supercomputer (the brain) created in a desperate attempt to explain something that was unexplainable at the time because of insufficient information. So we created gods, deities with unlimited power, and somehow every unexplained thing suddenly made sense: the gods did it.

It was the easy way to avoid infinite loops in our programming, a failsafe if you will. The creation of God was an accident. Such imaginary beings and concepts, gods, spirits, ghosts, paranormal things, miracles...these are just ways to make the implausible plausible again.

If you spend enough time working with these things, your head will explode. There is also a joke about it that I always remember (in paraphrase):

"Anyone who works a significant amount of time in the field of AI research will come to believe, irrefutably, that there is a God."

Edit: * I mean theoretically. I had real trouble doing it even when I invested a couple of months in nothing else (including forgetting to eat and sleep).
Exactly. As I said, the God concept is just an escape route for when we can't explain something, no matter how hard we try. "Oh, he survived the accident without a scratch, it's a miracle!" And after some time of running around in the loop trying to come up with a plausible answer, our brain will break the loop to prevent it from "exploding", and the failsafe kicks in: "Yep, it's God alright."
 

Overlord Moo

matnatz said:
I think we're still pretty much safe from robots as long as we are on the other side of a cluttered room.

OP, I like the way you have such a fitting character in your avatar - one who is really paranoid but is actually right even when everyone else is in doubt :p.
Hadn't even thought of that. Genius!
 

Vrex360

Wow, I just walked into something big...

I reckon it is possible that we could be heading down that path, what with all the dependency on machines and how THEY are evolving, not US.

That said, the idea of 'oh my god, the world is ending, the machines are taking over' sounds a little too Harry Throssel for my taste.
 

guardian001

Not even close. We haven't even created a robot that can think for itself. They can't learn, and they can't think. They know exactly what we tell them, nothing more, nothing less.
 

Gitsnik

Playbahnosh said:
It would be pointless. I have my own idea about the "soul" thing, the priest has his own, and trying to change each other's views would be impossible, since I don't believe in God, and the priest has his beliefs and dogma to stick to. I believe the "soul" is just a collection of things we can't explain about the human psyche yet. From demons invading people, we arrived at modern medicine in just under a millennium. I think our preconceptions about "souls" will change as well as time passes and technology advances.
Which is pretty much how our discussion is going to go down. I've spent too much time in the field to be comfortable with someone saying 20Q is AI, and you're convinced it is. Neither of us is likely to budge.

I believe more in quantum physics than in some incomprehensible deity who, for some utterly ridiculous reason, has first-century human properties. I think we created God in our own image, to represent natural phenomena we couldn't explain back then (that is, mostly everything). I think, on the level of the building blocks of the universe - atoms, quarks, hadrons... whatever - we are all the same, we are made of the same stuff. There is no difference between the hydrogen atoms in the air, the hydrogen atoms in the sea, and the hydrogen atoms most of your body consists of. But if that's true, it means our brains consist of the same matter. And if that's true, and we build a robot identical to us in every sense, with the same thinking capacity, it will be the same as us. Has your mind exploded yet?! ;)
Similarities are not what I was getting at. I was driving towards what encompasses a soul: how is it that a human goes to heaven but an animal can't, when the animal doesn't practice religion - BUT the animal obviously loves, judging by its reactions to a good master, etc., and love is the main point every priest, rabbi, leader and pastor I have ever spoken with has used. Then we get into things like why it is the human that has the soul, why a machine can't have one, and so on and so forth. Which was the exercise I was trying to get you to perform above.

Something else to think about: we do create AI already. When a mother and a father love each other very much... magic happens... then they spend the next few years training a mimic-capable learning-difference engine in their basic knowledge. There are some inherent programs - I myself have violent tendencies and strong physical genes that leave me almost identical to my birth father at this age - but the basics are there. Writing a robot to do the same thing isn't as hard as it seems*; it's just that everyone I have ever seen expects the AI to be operational within a short time frame, rather than the years of investment we put into AI version 1 - The Human Baby.
That's exactly what I was thinking about. The brain is just a very sophisticated biochemical machine. It uses electricity and certain chemicals to relay information, and uses a form of decentralized computing with certain distinguishable centers for different functions. It has memory, it has data storage capability, it has the ability to sort said data... etc. We are still far from completely understanding how our brain works, but if and when we do, that will be the day God dies, because people will realize God never existed at all. Just a figment of our imagination: a compilation of erratic data our built-in supercomputer (the brain) created in a desperate attempt to explain something that was unexplainable at the time because of insufficient information. So we created gods, deities with unlimited power, and somehow every unexplained thing suddenly made sense: the gods did it.
Where did the universe come from? ;) What was around to create the Big Bang? What was it all in when the items collided to cause said Big Bang? Einstein used physics to prove the existence of a God - why should we question him so?

It was the easy way to avoid infinite loops in our programming, a failsafe if you will. The creation of God was an accident. Such imaginary beings and concepts, gods, spirits, ghosts, paranormal things, miracles...these are just ways to make the implausible plausible again.
How strange. I can't see the air, yet I feel its effects and can study it with the right technology. Just because I can't yet detect, say, ghosts doesn't mean they don't exist. There is a reason normal people are afraid of the dark. It might just be a throwback to having to stave off lions in the night, but then again it might not be.

If you spend enough time working with these things, your head will explode. There is also a joke about it that I always remember (in paraphrase):

"Anyone who works a significant amount of time in the field of AI research will come to believe, irrefutably, that there is a God."

Edit: * I mean theoretically. I had real trouble doing it even when I invested a couple of months in nothing else (including forgetting to eat and sleep).
Exactly. As I said, the God concept is just an escape route for when we can't explain something, no matter how hard we try. "Oh, he survived the accident without a scratch, it's a miracle!" And after some time of running around in the loop trying to come up with a plausible answer, our brain will break the loop to prevent it from "exploding", and the failsafe kicks in: "Yep, it's God alright."
Not so much an escape route here. The joke is a reference to the top minds in the field being unable to produce anything even close to AI - and these are some of the smartest people available. Thus, "There must be a God!"
 

Playbahnosh

Gitsnik said:
Which is pretty much how our discussion is going to go down. I've spent too much time in the field to be comfortable with someone saying 20Q is AI, and you're convinced it is. Neither of us is likely to budge.
Okay, for the sake of the conversation, I'm willing to consider it. By my definition, an AI is a program that can gather data, process that data using logic, and arrive at a conclusion and make a decision that is not pre-programmed (a hardcoded "switch") on its own, based on experience. Okay, there are gaps in this definition the size of the Moon, but that's what I consider AI. You might be some master of AIs, I don't know, and I'm not gonna argue with you on this one; I'm just saying that differing definitions are not the end of the world, just a base for conversation :) There is really no empirical right or wrong in a topic like this, just different perspectives.
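One toy reading of that definition: a program whose choice emerges from accumulated experience rather than a hardcoded switch. The actions and rewards below are invented for illustration:

```python
# A bare-bones "decide from experience" agent: it tries each action once
# to gather data, then picks whichever action has paid off best so far.
# No branch of this code hardcodes the final answer.

class ExperienceAgent:
    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}   # summed rewards
        self.counts = {a: 0 for a in actions}     # times tried

    def decide(self):
        untried = [a for a in self.counts if self.counts[a] == 0]
        if untried:                               # gather data first
            return untried[0]
        return max(self.totals,                   # then decide from experience
                   key=lambda a: self.totals[a] / self.counts[a])

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

agent = ExperienceAgent(["left", "right"])
for _ in range(10):
    a = agent.decide()
    agent.learn(a, 1.0 if a == "right" else 0.0)  # the world rewards "right"
print(agent.decide())  # "right": preferred purely from accumulated experience
```

It's trivially simple, but note that nobody wrote `if ...: return "right"` anywhere; the preference is computed from stored experience, which is the non-hardcoded-switch part of the definition.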

Similarities are not what I was getting at. I was driving towards what encompasses a soul, how is it that a human goes to heaven but an animal can't when the animal doesn't take religion - BUT the animal obviously loves because of its reactions to a good master etc and love is the main point every priest, rabbi, leader and pastor I have ever spoken with has used. Then we go into things like why is it a human with the soul, why can't a machine have one and so on and so forth. Which was the exercise I was trying to get you to perform above.
Religious debates are ultimately futile: either you are a non-believer who doesn't believe there is an all-powerful supernatural being (with very human traits, nonetheless) reigning over us, or you are a believer who considers me a disgusting pagan devilspawn who will burn in the tormenting fires of Hell for all eternity for my sins. That's why I hate churches and religion with a passion. Not the people in them, just the system. Because instead of uniting people, it segregates and divides them even further, and creates gaps the size of the Grand Canyon between the different religious groups and non-believers. More people have died for and because of religion than from any other cause in human history. For some reason, the different gods of the religions all hate each other, and every religion and church promises the only way to salvation and solace. Don't get me wrong, I'm not against spiritualism and philosophy, and maybe there is a God; I don't know, nobody knows. But churches and religions cause more suffering than good.

Where did the universe come from? ;) What was around to create the Big Bang? What was it all in when the items collided to cause said Big Bang? Einstein used physics to prove the existence of a God - why should we question him so?
Because it's in our nature to question things; it is the primary advancing force of science. No knowledge is eternal or guaranteed, just a stage. If we didn't question so-called "facts", we would still believe the Earth is flat and the Sun revolves around it. Nobody knows how our universe came into existence, not even Einstein. He was a very smart man, but even he couldn't know everything. We can formulate hypotheses and theorize about what happened and how, but since none of us was there, we can't know for sure. Some people believe there was a big bang, some believe there were multiple big bangs, some believe the universe is actually contracting and not expanding, some believe God created it. Who knows... :) Maybe none of them are right, maybe all of them are.

But one thing is certain, questioning things is a good thing. To be proven wrong shouldn't be seen as a failure, it should be celebrated, because it takes us to a new level of understanding.

Just because I can't yet detect, say, ghosts doesn't mean they don't exist. There is a reason normal people are afraid of the dark. It might just be a throwback to having to stave off lions in the night, but then again it might not be.
This is exactly what I said; maybe I wasn't clear enough. The God concept and the supernatural, IMHO, are not a factual substitution, just a way to fill the void - a wildcard, if you will. A placeholder for future knowledge. Our ancient ancestors thought lightning was the wrath of gods, they thought the aurora borealis was where Nirvana is, they thought the weather could be influenced by dancing, singing or sacrifices. Sure, now we know better, but for lack of knowledge, they substituted gods and the supernatural. Just take the Bible, for example, and its interpretations through the ages. As centuries passed, the interpretations changed from unified and literal to factual, and from that to philosophical, and it got divided into different versions. Now we know that no deities speak to us from flaming bushes, that epilepsy is not demonic possession, that sacrificing a lamb won't make our lives better, and that while Jesus said many very true and good things, he probably never existed. The philosophical teachings of the Bible are very good things - that we shouldn't kill and steal from each other, that we should treat other people as we want to be treated... etc. But some people sadly can't see these things, and instead they take it literally. The churches and organized religion fucked up the whole picture with pre-determined interpretations of the book that all believers should follow to the letter. It's bad, very bad, because knowledge is emergent, not fixed: as we gather new information, as we start to understand more and more about the world around us, interpretations of old "facts" have to change. That's how we move forward, that's how we evolve.

Not so much an escape route here. The joke is in reference to the top minds in the field being unable to produce anything even close to AI - these are some of the smartest people available. Thus, "There must be a God!".
Not yet anyway. It will change in time, I'm sure :)
 

Nmil-ek

No. Before we worry about advanced AI and war machines running amok, we would A. need to build them, and B. sort out the problem of infinite energy.
 

Gitsnik

Playbahnosh said:
Gitsnik said:
Which is pretty much how our discussion is going to go down. I've spent too much time in the field to be comfortable with someone saying 20Q is AI, and you're convinced it is. Neither of us is likely to budge.
Okay, for the sake of the conversation, I'm willing to consider it. By my definition, an AI is a program that can gather data, process that data using logic, and arrive at a conclusion and make a decision that is not pre-programmed (a hardcoded "switch") on its own, based on experience. Okay, there are gaps in this definition the size of the Moon, but that's what I consider AI. You might be some master of AIs, I don't know, and I'm not gonna argue with you on this one; I'm just saying that differing definitions are not the end of the world, just a base for conversation :) There is really no empirical right or wrong in a topic like this, just different perspectives.
The rest I agree with, now that we've nutted out that we're basically on the same page :) Just wanted to say that your definition of an AI is already available in the world. Look at weapons physics in video games, or the way the soldiers interact with their environment in something like F.E.A.R. Still, I like your definition for the sake of a computational program. It's a good starting point.

I'm no master; I've just spent a few years in the field (I started off writing Nematodes for security purposes - these things learn network traffic and responses and so forth; they took a lot of input from me initially, but haven't needed any for over a year now), studying every paper published and pulling apart every bot I've been able to get my hands on. Self-taught; I occasionally program them for companies, but I'm no expert. Ta for the passing reference though.
 

Playbahnosh

Gitsnik said:
The rest I agree with, now that we've nutted out that we're basically on the same page :) Just wanted to say that your definition of an AI is already available in the world. Look at weapons physics in video games, or the way the soldiers interact with their environment in something like F.E.A.R. Still, I like your definition for the sake of a computational program. It's a good starting point.
Well, my only encounters with (so-called) AIs have mostly been in video games, so my definition mirrors that. This semester I took a course on Artificial Intelligence at the university. It looked interesting at first, but it was ultimately disappointing. The professor focused on the dry side of the subject: logic calculations, binary search trees and searching algorithms, game theory... etc. That wasn't very interesting. I had to research the subject on my own to find the interesting stuff. We didn't even touch the subject of conversational or video game AIs, for example. I asked the prof why we didn't talk about those things, and she said that these programs are not AIs, just imposters trying to look and behave like AIs, but only pretending. She said we have yet to make a program that we can truly call AI.

Based on what you said in your previous posts, I guess she has a point.
 

sagacious

OMG the sky is falling! :p

But seriously, I may be really out of the loop, but I haven't heard about any perfected neural-interface implants (have you been watching The Matrix?), and for the record, those battlefield robots are largely still remote-piloted. Plus, a navigation AI is a far cry from sentience.

I liked Terminator Salvation, but that doesn't mean it's going to happen.
 

Sanity For Sale

Well, robots are immediately screwed by a few simple facts.

1. Earth is mainly water
2. They require human upkeep
3. They need electricity, i.e. they will run out of juice.
4. My house is safe; we have enough magnets to tow a car.

And even if we're dealing with some sort of cognizant hive mind of machinery, all we'd have to do is hit power plants.

Most current technology is augmentation of human capabilities with machines, meaning that the chain of command is kept the same. So, to directly answer: no, I do not think so. But I still have a magnetized knife, just in case.