Gitsnik said:
20Q is based on the game of twenty questions. Anyone with a broad enough knowledge can solve any input of that game within the 20 question balance - 20Q itself is not AI, nor is it anywhere near close.
Again, I beg to differ. Granted, it's not a grandiose scheme to make a program play 20 questions; hell, even I could program a rudimentary one, but not like 20Q. That thing is thinking for itself, and it's terrifying. Maybe not in a "robotic apocalypse" sense, but still. Playing 20 questions is not just about extensive knowledge; it's about asking the right questions in the right order and then using deductive logic to arrive at a possible answer, all within 20 questions. And that damned program does it! D: I played 20Q a few times, starting with easy objects and then more and more obscure ones. It got the answer right within 20 questions almost 80% of the time, which was uncanny. And not just that it guessed right, but that it got there through seemingly unrelated (sometimes borderline ridiculous) questions. Freaky. Here's an excerpt from this page:
The 20Q A.I. has the ability to learn beyond what it is taught. We call this "Spontaneous Knowledge": 20Q is thinking about the game you just finished and drawing conclusions from similar objects. Some conclusions seem obvious, others can be quite insightful and spooky. Younger neural networks and newer objects produce the best spontaneous knowledge.
It can't even use its programming, its deductive logic and virtual neural networks, to their full extent, yet it can already exceed its original programming. That is a fucking Skynet embryo right there! When it gathers enough knowledge and builds a neural network extensive enough, we will have a sentient AI on our hands. If that doesn't terrify the shit outta you, I don't know what will...
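For the curious: the question-picking part on its own isn't black magic. Here's a quick-and-dirty Python sketch of the core loop, with a made-up four-object knowledge base standing in for the huge learned object/question matrix the real 20Q reportedly builds from millions of games. Strategy: always ask the unasked question that splits the remaining candidates most evenly, then prune by the answer.

# Toy sketch of a 20-questions engine. The knowledge base here is
# hand-written for illustration; the real 20Q learns this matrix.

# object -> {question: yes/no}
KNOWLEDGE = {
    "cat":      {"Is it alive?": True,  "Is it bigger than a car?": False, "Can it fly?": False},
    "sparrow":  {"Is it alive?": True,  "Is it bigger than a car?": False, "Can it fly?": True},
    "airplane": {"Is it alive?": False, "Is it bigger than a car?": True,  "Can it fly?": True},
    "house":    {"Is it alive?": False, "Is it bigger than a car?": True,  "Can it fly?": False},
}

def best_question(candidates, asked):
    """Pick the unasked question whose yes/no split over the remaining
    candidates is closest to 50/50 (a crude stand-in for information gain)."""
    questions = {q for obj in candidates for q in KNOWLEDGE[obj]} - asked
    if not questions:
        return None
    def imbalance(q):
        yes = sum(KNOWLEDGE[obj][q] for obj in candidates)
        return abs(2 * yes - len(candidates))  # 0 means a perfect split
    return min(questions, key=imbalance)

def play():
    candidates = set(KNOWLEDGE)
    asked = set()
    for _ in range(20):
        if len(candidates) <= 1:
            break
        question = best_question(candidates, asked)
        if question is None:
            break
        asked.add(question)
        answer = input(question + " (y/n) ").strip().lower().startswith("y")
        # keep only the objects consistent with the answer
        candidates = {o for o in candidates if KNOWLEDGE[o][question] == answer}
    print("Is it a", candidates.pop() if candidates else "...no idea!")

play()

The spooky part of 20Q isn't this loop; it's the learned matrix (plus the "spontaneous knowledge" described above) feeding it.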
Gitsnik said:
Nothing we've seen yet has passed the Turing test (the only true measure we have of AI) - my own software was only ever lucid when copying my own journal notes or for maybe three lines in 90.
Ah, the Loebner Prize. I was expecting that to pop up in this thread. The test: a human judge communicates with both a program and a human through a terminal, using natural language, and has to decide which is which; if the judge can't tell with any certainty, the program passes. Well, 20Q certainly wouldn't pass, but there are many conversational AIs that came freakishly close. A.L.I.C.E. for example, or Elbot, which deceived three judges in the human-AI comparison test. I think we are not far from an AI actually winning the silver Loebner Prize (text-only communication). If that happens, the gold might not be far behind; just a matter of adding fancy graphics and a speech engine.
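For anyone who's never seen the setup spelled out, here's a toy Python version of the protocol. The one-line "bot" is obviously just a stand-in, nothing like A.L.I.C.E. or Elbot; the point is the blind text-only pairing and the judge's guess.

import random

def bot_reply(msg):
    # classic chatterbot deflection: turn the input back into a question
    return "Interesting. Why do you say '" + msg + "'?"

def run_round(exchanges=3):
    # randomly hide which terminal is the program
    roles = {"A": "bot", "B": "human"}
    if random.random() < 0.5:
        roles = {"A": "human", "B": "bot"}
    for _ in range(exchanges):
        for t in ("A", "B"):
            msg = input("[judge -> %s] " % t)
            if roles[t] == "bot":
                print("[%s] %s" % (t, bot_reply(msg)))
            else:
                print("[%s] %s" % (t, input("[confederate at %s] " % t)))
    guess = input("Which terminal is the program, A or B? ").strip().upper()
    print("Correct!" if roles.get(guess) == "bot" else "Wrong - the program fooled you.")

run_round()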
Gitsnik said:
Asimov's laws are flawed. Maybe flawed is the wrong word, but they need to be enhanced somehow. (Perfect example: I, Robot: Save the girl! Save the girl!).
Asimov actually revised and rephrased his laws of robotics many times; the most noteworthy addition was the "zeroth" law, which says a robot may break the three laws if it serves the betterment of mankind. Other authors added a fourth and a fifth law as well, stating that a robot must establish its identity as a robot in all cases, and that a robot must know it is a robot. Of course, you can't take these laws as absolute; they're more like directives. But it's a start.
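And to show how thin they are as directives, here's the zeroth-law override boiled down to a few lines of Python. The two boolean inputs are stand-ins for judgments nobody actually knows how to compute, which is rather the point:

# The zeroth-law override in miniature: a First Law violation (harming a
# human) can be excused when the action serves humanity as a whole.
def permitted(harms_human: bool, serves_humanity: bool) -> bool:
    if harms_human and not serves_humanity:
        return False   # First Law forbids it
    return True        # Zeroth Law overrides, or no law applies

print(permitted(harms_human=True, serves_humanity=False))  # False
print(permitted(harms_human=True, serves_humanity=True))   # True: the override

The hard part was never the priority ordering; it's computing those two booleans.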