Differentiation is great; it keeps us thinking about how things are going. But artificial intelligence is not answering questions in a game (despite what the site claims) - if the system randomly started talking about something obscure like the weather, it would be closer to AI than it is now. There is no "order" to the questions that can't be mathematically deduced. And each question here has not two but four possible answers, so every question doesn't just halve the possibilities, it quarters them. Combine that with a weighting on each answer, and with storing scores for each individual object (remember that the questions are not really free-form), and you have a 20Q bot. These guys have used a neural network; I could make one in roughly 24 hours out of a web page and a database - no neural netting necessary. (A sketch of what I mean follows the quote below.)

Playbahnosh said:
Again, I beg to differ. Granted, it's not a grandiose scheme to make a program play 20 questions - hell, even I could program a rudimentary one, but not like 20Q. That thing is thinking for itself, and it's terrifying. Maybe not in a "robotic apocalypse" sense, but still. Playing 20 questions is not just about extensive knowledge, it's about asking the right questions in the right order and then using deductive logic to arrive at a possible answer, all within 20 questions. And that damned program is doing it! D: I played 20Q a few times, starting with easy ones and then more and more obscure ones. It got it right within 20 questions almost 80% of the time; it was uncanny. And not just that it guessed right, but it guessed it using seemingly totally unrelated (sometimes borderline ridiculous) questions. Freaky.
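Here's the sketch: a minimal score-table guesser, simplified to plain yes/no answers. Every object name, question, and weight is invented for illustration; this is a guess at the general mechanism I'm describing, not 20Q's actual code.

```python
# Minimal 20Q-style guesser: a score table plus a question picker.
# All data here is made up for illustration.

# Stored "knowledge": for each object, how strongly each yes/no
# question correlates with it (+1 = usually yes, -1 = usually no).
KNOWLEDGE = {
    "cat":     {"is it alive?": 1.0, "is it bigger than a car?": -1.0, "does it fly?": -0.9},
    "plane":   {"is it alive?": -1.0, "is it bigger than a car?": 1.0, "does it fly?": 1.0},
    "sparrow": {"is it alive?": 1.0, "is it bigger than a car?": -1.0, "does it fly?": 1.0},
}

def best_question(scores, asked):
    """Pick the unasked question that best splits the remaining candidates."""
    questions = {q for obj in KNOWLEDGE for q in KNOWLEDGE[obj]} - asked
    def split_quality(q):
        votes = [KNOWLEDGE[obj].get(q, 0.0) for obj in scores]
        yes = sum(1 for v in votes if v > 0)
        no = sum(1 for v in votes if v < 0)
        return min(yes, no)          # the ideal question halves the field
    return max(questions, key=split_quality, default=None)

def play(oracle, max_questions=20):
    """oracle(question) -> True/False stands in for the human player."""
    scores = {obj: 0.0 for obj in KNOWLEDGE}
    asked = set()
    for _ in range(max_questions):
        question = best_question(scores, asked)
        if question is None:
            break
        asked.add(question)
        answer = 1.0 if oracle(question) else -1.0
        for obj in scores:           # credit each candidate by agreement
            scores[obj] += answer * KNOWLEDGE[obj].get(question, 0.0)
    return max(scores, key=scores.get)

# A player thinking of "sparrow":
print(play(lambda q: {"is it alive?": True,
                      "is it bigger than a car?": False,
                      "does it fly?": True}[q]))
```

That's the whole trick: no ordering magic, just pick whichever stored question splits the surviving candidates closest to evenly, and weight the scores by the answers.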
It exceeds its original programming only in the sense that it gets used for more things than the original author intended. 20Q, programmed with a basic learning function (my network defence system has one of those, by the way) and enough storage space, can keep track of everything that people put into it. The ingenuity of the programmer behind it makes it look intelligent, but it's just differentiating between pre-defined answers based on a large enough input set. "Spontaneous knowledge" can also be looked at another way: wrong. Note how there are more spontaneous knowledge counts for younger and newer networks. That is the gibberish I mentioned from my own foray into AI. (A sketch of that mechanism follows the excerpt below.)

Playbahnosh said:
Here, an excerpt from this page:
The 20Q A.I. has the ability to learn beyond what it is taught. We call this "Spontaneous Knowledge": 20Q thinks about the game you just finished and draws conclusions from similar objects. Some conclusions seem obvious; others can be quite insightful and spooky. Younger neural networks and newer objects produce the best spontaneous knowledge.
It cannot even use its programming to the full extent of its deductive logic and virtual neural networks, yet it can already exceed its original programming. That is a fucking Skynet embryo right there! When it gathers enough knowledge and grows a neural network extensive enough, we will have a sentient AI on our hands. If that doesn't terrify the shit outta you, I don't know what will...
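And here's the sketch I promised: both the "learning" and the "spontaneous knowledge" can be had with two small table updates on the score-table bot above. This is my hypothetical reconstruction of that kind of mechanism, not 20Q's actual algorithm; every name in it is invented.

```python
# Hypothetical learning step for a score-table guesser. My guess at the
# kind of mechanism behind "spontaneous knowledge"; not 20Q's real code.

def learn(knowledge, target, answers, rate=0.1):
    """After a game, nudge the target object's weights toward the answers given."""
    row = knowledge.setdefault(target, {})
    for question, said_yes in answers.items():
        signal = 1.0 if said_yes else -1.0
        old = row.get(question, 0.0)
        row[question] = old + rate * (signal - old)

def speculate(knowledge, target, neighbour):
    """'Spontaneous knowledge': copy weak guesses from a similar object.
    With few games played the copied weights are mostly noise, which is
    why younger networks produce more of this, and more gibberish."""
    row = knowledge.setdefault(target, {})
    for question, weight in knowledge.get(neighbour, {}).items():
        row.setdefault(question, 0.5 * weight)

KNOWLEDGE = {"sparrow": {"does it fly?": 1.0, "does it sing?": 0.8}}
learn(KNOWLEDGE, "swallow", {"does it fly?": True})
speculate(KNOWLEDGE, "swallow", "sparrow")
print(KNOWLEDGE["swallow"])   # it "knows" swallows sing, though nobody asked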
What I can't stress enough is the gap between the image of AI and the actuality of it. If 20Q could go and read a Doctor Who fansite (for example) to learn the answers to its questions as a baseline - which involves a couple of hard things, primarily natural-language interpretation - and then use that as the basis for a game of 20Q with the audience, it would be closer to AI than not, rather than the other way around as it is now, though still far from it. AI involves interaction with the environment as much as it does "thinking" - that is the definition of our own intelligence, and the measure we have for all beings. (The sketch below shows how little a script's "reading" actually amounts to today.)
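As a gut check on how far off that is, this is roughly what a script pointed at a page can actually do today: fetch the HTML and count keywords. The URL and attribute list are invented for the example; the point is that this is keyword matching, nowhere near natural-language interpretation.

```python
# Fetch a page and "learn" from it by raw keyword frequency.
# URL and attribute list are stand-ins made up for this example.
import re
import urllib.request

ATTRIBUTES = ["time travel", "alien", "police box", "regeneration"]

def naive_baseline(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html).lower()   # crude tag stripping
    # The "knowledge" gained is nothing more than word counts.
    return {attr: text.count(attr) for attr in ATTRIBUTES}

print(naive_baseline("https://example.com/"))   # stand-in for the fansite
```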
Pretty much exactly right. Remember, for a counter-comparison, that some of the humans were misjudged as robots.

Playbahnosh said:
Ah, the Loebner Prize. I was expecting that to pop up in this thread. The test: a natural observer communicating through a terminal, using natural language, can't decide with absolute certainty whether it's a program or a human at the other end of the prompt. Well, 20Q certainly wouldn't pass, but there are many conversational AIs that came freakishly close. A.L.I.C.E., for example, or Elbot, which deceived three judges in the human-AI comparison test. I think we are not far from an AI actually getting the silver Loebner Prize (text-only communication). If that happens, the gold might not be far off - just a matter of adding fancy graphics and a speech engine.

Nothing we've seen yet has passed the Turing test (the only true measure we have of AI) - my own software was only ever lucid when copying my own journal notes, or for maybe three lines in ninety.
Agreed. Asimov actually upgraded and rephrased his laws of robotics many times; the most noteworthy addition was the "zeroth" law, which says a robot may break the other three laws if doing so serves the betterment of mankind. Other authors added a fourth and fifth law as well, stating that a robot must establish its identity as a robot in all cases, and that a robot must know it is a robot. Of course, you can't take these laws as absolute guarantees; they're more like directives. But it's a start.

Asimov's laws are flawed. Maybe "flawed" is the wrong word, but they need to be enhanced somehow. (Perfect example: I, Robot - "Save the girl! Save the girl!")
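To show why a strict hierarchy reads as directives rather than guarantees, here's a toy encoding of the extended laws as a priority order. It's entirely illustrative: the I, Robot dilemma is exactly the case where every available action violates the same law, so the ordering alone can't choose and some unstated tie-breaker takes over.

```python
# Toy priority-order encoding of the extended laws. Purely illustrative:
# each law is a ranked constraint, and an action is preferred when the
# most important law it violates is less important than the most
# important law any alternative violates.

LAWS = [
    "zeroth: do not harm humanity",
    "first: do not harm a human",
    "second: obey human orders",
    "third: protect own existence",
    "fourth: identify yourself as a robot",
    "fifth: know you are a robot",
]

def choose(actions):
    """actions maps an action name to the set of law indices it violates.
    Prefer the action whose highest-priority violation is least severe."""
    def worst_violation(name):
        violated = actions[name]
        return min(violated) if violated else len(LAWS)  # no violation beats all
    return max(actions, key=worst_violation)

# The I, Robot dilemma: both actions violate the First Law (index 1),
# so the hierarchy is silent and the "choice" is arbitrary.
print(choose({"save the man": {1}, "save the girl": {1}}))
```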