Can Machines Think? Eugene Passes Turing Test

Karloff

New member
Oct 19, 2009
6,474
0
0
Can Machines Think? Eugene Passes Turing Test



But if you think it's all over, think again. There are other tests ...

Back in 1950, computing genius Alan Turing came up with a baseline test to determine whether or not a computer could be considered intelligent. Though often expressed as 'can machines think?' - the words used by Turing himself in the opening sentence of his 1950 paper - Turing warns that the better question, since thinking is difficult to define, is "are there imaginable digital computers which would do well in the imitation game?" If a machine under test conditions can fool 30% of the judges, each of whom has a few minutes' conversation with the device, into believing it is a human rather than a machine, it passes the test: it has, in other words, produced a convincing imitation. This weekend Eugene Goostman did exactly that, persuading 33% of the judges at the Royal Society of London that 'he' was a real 13-year-old.
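The pass criterion the article describes is simple to state precisely; here's a minimal Python sketch (an illustrative reading of the rule, not the event's actual scoring code, and the judge counts below are made up to match the reported percentages):

```python
# Illustrative pass criterion for the imitation game described above --
# an assumption about how the scoring works, not the event's real code.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: one boolean per judge -- True if the judge
    believed they were talking to a human."""
    return sum(verdicts) / len(verdicts) >= threshold

# Eugene's reported result: 33% of judges convinced, above the 30% bar.
print(passes_turing_test([True] * 33 + [False] * 67))  # True
```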

"This event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted," says Professor Kevin Warwick of Reading University [http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx]. "A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday."

There have already been criticisms that it's easier to imitate a 13-year-old than an adult, since a teenager isn't expected to know as much. Its creator, Vladimir Veselov, admits that this was part of Eugene's strategy. "We spent a lot of time developing a character with a believable personality," says Veselov, who now intends to focus on making Eugene smarter. Veselov - born in Russia, now living in the U.S. - built Eugene together with Ukrainian-born Eugene Demchenko, who now lives in Russia.

But this is only the first stepping stone, the baseline established decades ago by Turing. There are other, stricter tests. Can a computer and a human have a two-hour conversation [http://aisb.org.uk/events/loebner-prize] with each of three judges, and convince two out of three that the machine is more human than the humans?

Turing anticipated that his baseline test would be passed in about 50 years, and he was more or less right. The other tests will be much more difficult to pass; the two-hour Kurzweil-Kapor test, for example, is anticipated to be a stumbling block until at least 2029. In the meantime, Eugene is the first of what will probably be many hundreds more: machines that can imitate humans closely enough to fool us into thinking they're real.

Source: Ars Technica [http://arstechnica.com/information-technology/2014/06/eugene-the-supercomputer-not-13-year-old-first-to-beat-the-turing-test/]


 

RJ 17

The Sound of Silence
Nov 27, 2011
8,687
0
0
*sigh* Hasn't ANYONE seen Terminator? Can't anyone else see how we're just making it easier for them to infiltrate our inevitable rebellion against the machine overlords?
 

Synthetica

New member
Jul 10, 2013
94
0
0
RJ 17 said:
*sigh* Hasn't ANYONE seen Terminator? Can't anyone else see how we're just making it easier for them to infiltrate our inevitable rebellion against the machine overlords?
Not really, no.
 

Gorrath

New member
Feb 22, 2013
1,648
0
0
Someone call Harrison Ford. If we're going to have machines trying to fool anyone, it should be him.
 

Nowhere Man

New member
Mar 10, 2013
422
0
0
Karloff said:
If a machine under test conditions can fool 30% of the judges, each of whom have a few minutes' conversation with the device, convincing them it is a human rather than a machine, it passes the test; it is, in other words, a perfect imitation. This weekend Eugene Goostman did exactly that, persuading 33% of the judges at the Royal Society of London that 'he' was a real 13-year-old.
And then Robot Chris Hanson appeared from seemingly nowhere and asked the stunned judges to have a seat.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,005
3,760
118
RJ 17 said:
*sigh* Hasn't ANYONE seen Terminator? Can't anyone else see how we're just making it easier for them to infiltrate our inevitable rebellion against the machine overlords?
Is the third one canon? Where they reveal that Terminators, being heavily armoured, are much, much heavier than normal humans?

And the fourth one, where magnets stick to them, IIRC?

Cause, eh, don't bother with dogs if that's the case.
 

cerapa

New member
Sep 10, 2009
21
0
0
There was a particular webpage a while back where you could hook Cleverbot up to Omegle. Pretty much nobody could tell that it was Cleverbot unless it started glitching out.

It was actually a bit scary. In around 4/5 of the chats, Cleverbot managed to get personal information from the person it was chatting with.

Bots are already more trustworthy than people.
 

shirkbot

New member
Apr 15, 2013
433
0
0
Karloff said:
[...] This weekend Eugene Goostman did exactly that, persuading 33% of the judges at the Royal Society of London that 'he' was a real 13-year-old.

[...]

There have already been criticisms that it's easier to imitate a 13 year old than an adult, since a teenager isn't expected to know as much as an adult. Its creator, Vladimir Veselov, admits that this was part of Eugene's strategy. "We spent a lot of time developing a character with a believable personality," says Veselov, who now intends to focus on making Eugene smarter.
I tend to be a harsh critic of the Turing test and this is a pretty good summation of many of the things that are wrong with it. I'm not trying to knock the accomplishment, but the Turing Test is just not a good measure of AI capacity. You only have to convince 1 in 3 people that your machine is a person, and by setting expectations via character/age you can manipulate the results. A friend of mine also pointed out that machines which "sound" human are more likely to win the annual competition (the Loebner Prize) than those which better understand what is actually happening but respond in decidedly inhuman ways. Basically the test is rigged because it only accepts "human" intelligences while ignoring the possibility of other high-level intelligences.

TLDR: Turing was racist against machines.

If you'd like to see an example of a Loebner Prize winner, and see how the Turing Test works for yourself, here's Mitsuku [http://mitsuku.com/], last year's winner.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,005
3,760
118
shirkbot said:
Basically the test is rigged because it only accepts "human" intelligences while ignoring the possibility of other high-level intelligences.
True, but then again, we've only come across one type so far.

...

Just struck me though, a little over 10 years ago, they'd have to be very careful in how this was talked about, under Section 28.
 

Worgen

Follower of the Glorious Sun Butt.
Legacy
Apr 1, 2009
14,866
3,741
118
Gender
Whatever, just wash your hands.
The real question isn't "when will we have a true thinking machine?" The real question is:
How soon can we make machines obsessed with My Little Pony?
 

Batou667

New member
Oct 5, 2011
2,238
0
0
Kevin Warwick - isn't he the loony tune who implanted a few subdermal electrodes in himself and declared himself the world's first cyborg?

shirkbot said:
If you'd like to see an example of a Loebner Prize winner, and see how the Turing Test works for yourself, here's Mitsuku [http://mitsuku.com/], last year's winner.
...her favourite song is Dancing Bird. I didn't see that coming.
 

Foolery

No.
Jun 5, 2013
1,714
0
0
I wonder if we'll ever create artificial consciousness, machines that are self-aware, not just programmed to mimic human personalities without being able to actually think.
 

RJ Dalton

New member
Aug 13, 2009
2,285
0
0
I'm not sure how good the Turing Test really is. I mean, I've known people who've failed it.
 

EvolutionKills

New member
Jul 20, 2008
197
0
0
Does this strike anyone else as a terribly backwards way of developing AI? I mean, it's just a parlour trick, right? All it's doing is presenting output X based on input Y, but the machine itself doesn't actually understand the input or output in any meaningful way; nor will it ever. We may eventually get simulated emotions and thoughts, but they won't actually be there.

Wouldn't the better approach be from the bottom up, rather than the top down? All this does is aim to simulate the input/output exchange at the surface level, a simulation starting at the top. But I don't think this is a viable path towards duplicating actual intelligence.

An analogy...

If you're looking to recreate or simulate a school of fish, there are two ways to go about it. One would be to painstakingly create and animate each and every fish in the school individually to simulate the appearance of schooling. That would be very work-intensive and not terribly efficient. Another approach would be to look at what each fish actually does. As it turns out, each fish is just acting according to a few rules that dictate how it moves based on the position and proximity of the fish closest to it. If you can distill these basic rules and habits, put them into one simulated fish, then clone a whole school's worth of fish and let the simulation run, you'll get something far more organic.
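That rule-based approach can be sketched in a few lines of Python. The three local rules below (separation, alignment, cohesion) are the classic "boids"-style flocking rules; the weights and neighbour radius are illustrative guesses, not tuned values:

```python
# Minimal sketch of rule-based schooling: each fish reacts only to
# nearby neighbours, yet school-like motion emerges from the whole.
import math
from dataclasses import dataclass

@dataclass
class Fish:
    x: float
    y: float
    vx: float
    vy: float

def step(school, radius=5.0, sep_w=0.05, ali_w=0.05, coh_w=0.01, dt=1.0):
    """Advance every fish one time step using only local information."""
    updated = []
    for f in school:
        neighbours = [o for o in school
                      if o is not f and math.hypot(o.x - f.x, o.y - f.y) < radius]
        ax = ay = 0.0
        if neighbours:
            n = len(neighbours)
            # Cohesion: steer toward the centre of nearby fish.
            cx = sum(o.x for o in neighbours) / n
            cy = sum(o.y for o in neighbours) / n
            ax += coh_w * (cx - f.x)
            ay += coh_w * (cy - f.y)
            # Alignment: match the average velocity of neighbours.
            avx = sum(o.vx for o in neighbours) / n
            avy = sum(o.vy for o in neighbours) / n
            ax += ali_w * (avx - f.vx)
            ay += ali_w * (avy - f.vy)
            # Separation: move away from nearby fish.
            for o in neighbours:
                ax += sep_w * (f.x - o.x)
                ay += sep_w * (f.y - o.y)
        vx, vy = f.vx + ax * dt, f.vy + ay * dt
        updated.append(Fish(f.x + vx * dt, f.y + vy * dt, vx, vy))
    return updated

# Clone one set of rules across a whole school and let it run.
school = [Fish(float(i), float(i % 3), 1.0, 0.0) for i in range(10)]
for _ in range(50):
    school = step(school)
```

The point of the exercise is the one made above: nobody animates the school; each fish only knows its neighbours, and the group behaviour falls out of the simulation.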

Of course the problem here is that we don't understand much of how brains actually work at the fundamental level of chemical reactions and electrical impulses between neurons. Once scientists can pull that off and manage to upscale the simulation a few trillion times, that's when I think we'll need to start being worried about SkyNet and the imminent robot apocalypse.
 

Karloff

New member
Oct 19, 2009
6,474
0
0
Batou667 said:
Kevin Warwick - isn't he the loony tune who implanted a few subdermal electrodes in himself and declared himself the world's first cyborg?
Yep. He's been a Reading man for donkey's years; I remember him there when I was at uni.
 

faefrost

New member
Jun 2, 2010
1,280
0
0
So they actually managed to convince 30% of a team of experts that their AI was a 13-year-old boy on the internet. I find a few minor issues with that. Show of hands: how many here consider 13-year-old males on the internet to be A. Intelligent? B. Sentient? and C. Human? I think most of us characterize them as some sort of screaming fungus from the old DnD manuals. After decades of research they have managed to simulate the average Call of Duty player, complete with insane racist homophobic obscenities and seemingly random comments and actions. What was the criterion? "It must be a 13 year old boy because computers don't spell this bad and make more sense"?

I'm somehow not seeing this as a great leap forward. I mean what's next on the list above 13 year old male in terms of AI? Somewhat Senile Schnauzer?
 

CaptainMarvelous

New member
May 9, 2012
869
0
0
Not to burst more bubbles, but this is the AI in question:
http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

Just going off five minutes with it, even with prior knowledge, I wouldn't believe this was a 13-year-old; he didn't manage to answer any questions or even talk around the ones he couldn't. Also, 30% is a REALLY low percentage to call it successful.
 

Raesvelg

New member
Oct 22, 2008
486
0
0
EvolutionKills said:
Does this strike anyone else though as a terribly backwards way of developing AI? I mean, it's just a parlour trick, right?
Absolutely, but it's a very useful parlor trick.

AI is one of those ideas that raises all sorts of issues, both on the moral and practical level.

I mean, if we make an actual artificial intelligence, wouldn't it then have the same rights as any other sentient creature? It's not like we could make AIs and then abruptly force them to work in the field of our choice. Sure, we could engineer them to want to do what we designed them to do, but in theory we'll be able to design humans in exactly the same way within the next century. It's something to ponder.

On the practical level... While Terminator might not be the likeliest of outcomes, it's still an outcome you have to ponder when you start creating your own competition, ecologically speaking.

In many ways it makes a lot more sense to create a robot that can fool people into thinking that it's human, without having all the baggage of having it actually be able to think as we understand it.