Strong AI Thesis - The Question of Computerized Minds

Jayemsal

New member
Dec 28, 2012
209
0
0
I recently had to write a paper for my current philosophy class, and the last time I made a thread like this it went rather well, so here we go.

The following essay is my own:

The Strong AI Thesis
With the escalation of modern technology, we have come to a point where digital computers have become exceptionally powerful and complex. We have even begun to develop technology that could be considered artificial intelligence, or "AI". There are hypothetically two different kinds of AI: the "Weak AI", that which may only be considered to be simulating a mind, and the "Strong AI", that which may be considered to actually be a mind. The development of this technology has sparked a discussion about what these kinds of AI are capable of, and whether or not they can truly be considered "minds". The debate has been driven by modern philosophers who question whether such a computer can actually think, or can only simulate thought or intelligence. This essay will discuss the differences of opinion on this subject. Alan Turing developed a test, now called the Turing Test, which may be used to determine whether or not a computer can truly be considered a "Strong AI". The test involves a text conversation between a computer and a person, with a panel of judges observing the conversation and determining whether a computer or a human being is on the other end. As of the time this essay was written, no computer has ever passed the Turing Test. The philosopher John Searle has deep doubts as to whether a computer is even capable of passing this test. Searle has devised an argument known as the "Chinese Room" thought experiment, which challenges the very idea that a computer may be capable of truly being a mind.
John Searle's "Chinese Room" thought experiment is meant to demonstrate that an AI developed on a digital computer can only follow specific programming protocols, and will follow strict guidelines when presented with a problem or question. The manner in which such a computer develops a solution or response cannot be considered "Strong AI", because it does not comprehend the question or problem in the way a human mind does, but instead uses a kind of "search engine" to determine an appropriate response to the presented problem. John Searle does not accept the Turing Test as a valid model for determining whether a computer can truly have a mind, for two reasons: the judges, being part of the test, are subject to personal bias; and even if they judge a digital computer to have a mind, their decision is irrelevant, because the computer is still only simulating thought via programmed protocols and cannot, by definition, actually generate what may be considered a "thought" in and of itself.
I believe that Searle has a reasonable argument, but I find it rather flimsy, in that it is determined entirely by our current understanding of computer science and technology. It may well happen in the future that a digital computer is developed which can genuinely be considered to have its own "mind". The problem itself lies in the way a digital computer processes information, and I fear that this method of processing will always be under heavy suspicion from those who doubt the legitimacy of the "Strong AI" thesis.
The concept has been presented that a "Strong AI" might be developed that does not fall to the "Chinese Room" thought experiment: an AI developed by replicating the human brain on a computer. This is, of course, still strictly conceptual, but if it is possible for us to develop a technology that processes information in the same way as a human brain, we may be equally capable of developing a technology that replicates the way that brain responds to such information.
So for now, we may say that "Strong AI" cannot exist, though this conclusion is entirely dependent on our current technological limitations and is very much subject to change in the future.




--Discussion: What do you think about the possibilities of AI programs passing these tests?
 

Axolotl

New member
Feb 17, 2008
2,401
0
0
Any test you can come up with that isn't impossible for a human to pass will be possible for a computer to pass.

Unfortunately we cannot actually test whether other humans have minds, so the chances of finding out whether an AI is strong or weak are slim to nonexistent. I mean, if something is just an inorganic replica of a human brain, then it seems reasonable to say that it has a mind, since it will likely operate just as a human brain does. But presumably things other than human brains can have minds too, and I am highly doubtful that we would ever be able to really know, largely due to the explanatory gap.
 

Olas

Hello!
Dec 24, 2011
3,226
0
0
All I know is that the more we advance towards human-level thinking, the more we realize just how far we have yet to go. Watson may have beaten the top Jeopardy! players, but it's doubtful that Watson is even truly aware that it won.

Perhaps an AI is strong when we no longer have to actively try to make it seem strong. In other words when a computer can pass the Turing test without us specifically programming it to do so. When humanlike responses come to it naturally instead of through guided instruction.
 

Flatfrog

New member
Dec 29, 2010
885
0
0
Searle's thought experiment is pure nonsense and has been debunked a thousand times. It proves nothing about computer minds, any more than realising that our brains obey the laws of physics proves that we don't have minds.

Having said that, I agree that the further AI develops, the further we seem to be from true thought. The only person I think is anywhere near the right track is Hofstadter, and because he concentrates on small toy worlds, trying to tease out complex ideas about human creativity, silly-if-impressive projects like Watson get all the limelight despite saying very little about human thinking.

My problem with the Turing Test has always been that it's too hard on the computer. Even a thinking machine will still experience the world very differently to us, and will necessarily have a different outlook on life. Expecting it to pass the Turing Test is essentially asking it to lie convincingly.

I always think Iain M Banks is the only one who really gets it. His thinking machines are wonderfully alien.
 

Epic Fail 1977

New member
Dec 14, 2010
686
0
0
Flatfrog said:
Searle's thought experiment is pure nonsense and has been debunked a thousand times. It proves nothing about computer minds, any more than realising that our brains obey the laws of physics proves that we don't have minds.
Glad I'm not the only one who read the OP and thought that Searle was talking out of his bum.

Flatfrog said:
My problem with the Turing Test has always been that it's too hard on the computer. Even a thinking machine will still experience the world very differently to us, and will necessarily have a different outlook on life. Expecting it to pass the Turing Test is essentially asking it to lie convincingly.
Exactly! People have this extremely annoying tendency to think of mind and body as separate things, especially when discussing AI. If you want to simulate a human brain you're going to need to make it run on a fuel called "blood" and make sure it reacts to different life stages the way a real brain does and then put it in a human foetus and connect it to every single nerve and then get some parents to raise the little bugger (without knowing that it's not actually human) and, well, even then it's probably not really going to work right.

Flatfrog said:
I always think Iain M Banks is the only one who really gets it. His thinking machines are wonderfully alien.
Hey, are you me? Did someone use one of my backups without permission?
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,695
3,594
118
Firstly, we haven't got a great idea of what "mind" or "intelligence" is, and how to measure it. Therefore making something as intelligent as a human seems unlikely for the time being.

Secondly, the Turing test is generally simplified as being a computer pretending to be human, but was originally about a computer being as good at pretending to be a woman as a man was.
 

Flatfrog

New member
Dec 29, 2010
885
0
0
thaluikhain said:
Firstly, we haven't got a great idea of what "mind" or "intelligence" is, and how to measure it. Therefore making something as intelligent as a human seems unlikely for the time being.

Secondly, the Turing test is generally simplified as being a computer pretending to be human, but was originally about a computer being as good at pretending to be a woman as a man was.
Er... no it wasn't. It was just based on a parlour game where a man pretends to be a woman.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,695
3,594
118
Flatfrog said:
thaluikhain said:
Firstly, we haven't got a great idea of what "mind" or "intelligence" is, and how to measure it. Therefore making something as intelligent as a human seems unlikely for the time being.

Secondly, the Turing test is generally simplified as being a computer pretending to be human, but was originally about a computer being as good at pretending to be a woman as a man was.
Er... no it wasn't. It was just based on a parlour game where a man pretends to be a woman.
Yeah, and so the idea is to see if the machine is as good at that as a man.
 

Flatfrog

New member
Dec 29, 2010
885
0
0
Guy Jackson said:
Flatfrog said:
Searle's thought experiment is pure nonsense and has been debunked a thousand times. It proves nothing about computer minds, any more than realising that our brains obey the laws of physics proves that we don't have minds.
Glad I'm not the only one who read the OP and thought that Searle was talking out of his bum.
I can't believe that people are still quoting it after all this time when it's so patently stupid. That's philosophers for you.

Guy Jackson said:
Exactly! People have this extremely annoying tendency to think of mind and body as separate things, especially when discussing AI. If you want to simulate a human brain you're going to need to make it run on a fuel called "blood" and make sure it reacts to different life stages the way a real brain does and then put it in a human foetus and connect it to every single nerve and then get some parents to raise the little bugger (without knowing that it's not actually human) and, well, even then it's probably not really going to work right.
For me it's about senses. I experience the world through eyes and ears and touch. A computer can use cameras and microphones but can also experience things through direct data transfer. What would it 'feel like' to have people communicate with you through keyboard text appearing directly in your brain? Or to experience music as a high-quality recording that bypasses sound waves altogether?
 

Flatfrog

New member
Dec 29, 2010
885
0
0
thaluikhain said:
Flatfrog said:
thaluikhain said:
Firstly, we haven't got a great idea of what "mind" or "intelligence" is, and how to measure it. Therefore making something as intelligent as a human seems unlikely for the time being.

Secondly, the Turing test is generally simplified as being a computer pretending to be human, but was originally about a computer being as good at pretending to be a woman as a man was.
Er... no it wasn't. It was just based on a parlour game where a man pretends to be a woman.
Yeah, and so the idea is to see if the machine is as good at that as a man.
Seriously, no. The male/female part was just meant as an introduction to the idea. Nothing in Turing's paper says anything about the computer 'pretending to be a woman'. Look at the example questions:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A : Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at
R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
Compare this to the sample questions he gives for the original parlour game:

C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the
game to try and cause C to make the wrong identification. His answer might
therefore be:
"My hair is shingled, and the longest strands are about nine inches long."
 

Epic Fail 1977

New member
Dec 14, 2010
686
0
0
Flatfrog said:
Guy Jackson said:
Exactly! People have this extremely annoying tendency to think of mind and body as separate things, especially when discussing AI. If you want to simulate a human brain you're going to need to make it run on a fuel called "blood" and make sure it reacts to different life stages the way a real brain does and then put it in a human foetus and connect it to every single nerve and then get some parents to raise the little bugger (without knowing that it's not actually human) and, well, even then it's probably not really going to work right.
For me it's about senses. I experience the world through eyes and ears and touch. A computer can use cameras and microphones but can also experience things through direct data transfer. What would it 'feel like' to have people communicate with you through keyboard text appearing directly in your brain? Or to experience music as a high-quality recording that bypasses sound waves altogether?
You say "senses" I say "bodies".

But what I was trying to get at is that I think it goes further than just the quality of the signal. Your "senses" are nerve signals from your body interpreted by your brain, but the interpretation itself is learned. For example, did you know that optical illusions don't work on children under the age of 4? They don't see the illusion part; they just see what is actually there. This is because optical illusions don't trick our eyes, they trick our brains, which have learned over many years to interpret the data from our eyes in certain ways. Studies have shown that children go through many stages of visual learning, and I think it's obvious that many other types of learning are going on as well, even into adulthood and beyond. I think this learning is absolutely fundamental to who and what we are. That's why I think you can't just build a model of a fully developed human brain, stick a camera on it, ask it "what do you see?", and expect the answer to be 'human'.
 

micahrp

New member
Nov 5, 2011
46
0
0
Jayemsal said:
So for now, we may say that "Strong AI" cannot exist, though this conclusion is entirely dependent on our current technological limitations and is very much subject to change in the future.

--Discussion: What do you think about the possibilities of AI programs passing these tests?
There is a new book I gave away to other programmers I know for Christmas, titled "How to Create a Mind: The Secret of Human Thought Revealed" by Ray Kurzweil, author of "The Singularity is Near" and a person many of today's working AIs can be traced back to (like out-of-the-box working voice recognition systems).

We absolutely know what a mind is and he describes it very well in his book!

All brains are statistical forecast engines.

They collect and store data, correlate rules and use the rules and current input from sensors to forecast future (immediate and distant) states.

Simple example: someone tosses you a ball. Your sensors send the visual data to your brain. Your brain recognizes the arc from previous experience and forecasts the path. It then sends output to your muscles to move your hand into the predicted path (that output is also a learned forecast). It waits for more visual or touch sensor input to instruct the hand to close. This completes a successful (or, if missed, unsuccessful) forecast cycle.
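The ball example can be sketched as a tiny program. This is only a toy illustration of the collect/correlate/forecast loop (the "rule" learned here is just the constant second difference of uniformly sampled parabolic motion), not a claim about how a brain actually implements it:

```python
def observe(t):
    # The "world": true ball height under gravity. The forecaster never
    # sees this formula, only the samples it produces.
    return -4.9 * t**2 + 10.0 * t + 1.5

dt = 0.1
samples = [observe(i * dt) for i in range(3)]  # collect sensor data

# Correlate a rule: for uniformly sampled parabolic motion, the second
# difference y[n] - 2*y[n-1] + y[n-2] is the same at every step.
second_diff = samples[2] - 2 * samples[1] + samples[0]

def forecast(history, rule):
    """Forecast the next height from the last two samples plus the rule."""
    return 2 * history[-1] - history[-2] + rule

predicted = forecast(samples, second_diff)
actual = observe(3 * dt)
print(predicted, actual)  # for exactly parabolic motion these agree
```

Real sensor data is noisy, so a brain (or a Kalman filter) would fit the rule statistically rather than read it off three clean samples, but the loop is the same: observe, update the model, predict, act, compare.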

Everything we do is this. We have been collecting and correlating sensor data since the womb. I often state the main problem with most humans is that they don't recognize they are doing this and start thinking the correlated rules in their head ARE reality instead of just a model of reality.

Our current AI programs, like the US Post Office's handwriting recognition or out-of-the-box voice recognition like Siri, are specific applications of this: collecting data, correlating rules, and using those rules to accurately forecast results. The rules are so fine-grained and complex that they are no longer human-readable, only human-usable (the rules can be moved from program to program). It's the same with our brains: we can't yet see exactly how we store the rules (so we can't make backups of our brains), but we can use them. The only step left to create a true AI is to create a GENERAL program that does this same exact thing.

In the book Ray points out that hardware has finally reached the point where a serial computer's clock cycles can match the human brain's parallel processing, so all we are waiting for is someone to write the program, and we will have a walking, talking Mr. Data (yes, he uses that exact example, and states it will be reality by 2030).

Personal note: if you didn't find a reference to Ray Kurzweil in your research, you need to step up the quality of your sources. He is not an obscure figure. He has been in this field for over 30 years, using these principles to create real-world working objects such as the first omni-font optical character recognizer, the first print-to-speech reading machine, the first commercially marketed large-vocabulary speech recognizer, and the first CCD flatbed scanner (I had to copy from the book jacket since I didn't even know all the things he made). Other personal note: I took many psychology and philosophy classes in college because I didn't know what I wanted to do. Then it hit me: why be in a field that either has to deal with broken minds that mostly refuse to be fixed, or will never create a mind, when I could be in a field that will produce a mind? So I switched to CS and love it. I now work creating specific examples of the above: automatic data collection systems that feed into statistical forecast engines.
 
Jun 16, 2010
1,153
0
0
Another interesting thought experiment in the philosophy of mind is the "china brain [http://en.wikipedia.org/wiki/China_brain]."

Basically, you can build a computer out of anything that can switch between two states. In the China brain thought experiment, it's a computer made of the whole nation of China, with people acting as individual neurons. But you could build a computer out of running water if you really wanted to. Or cheese.

Which means, if you can build a computer that thinks, then you could technically arrange cheese in such a way that it has a mind of its own. Kind of bizarre, no?
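Here's the substrate-independence point in code form. The only primitive below is a two-state switch (a NAND gate); everything else is wiring. The same truth table could just as well be realised by people passing notes, water valves, or suitably arranged cheese, because the computation only depends on the switching behaviour:

```python
def nand(a, b):
    # The one primitive: any physical thing with two states that flips
    # according to this table will do. Here it happens to be Python.
    return 0 if (a and b) else 1

# Every other gate, and hence any digital computer, can be built from
# NAND alone.
def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Stack enough of these and you get arithmetic, memory, and (if computationalism is right) possibly a mind, regardless of what the switches are made of.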

Captcha: how about that!

...
Was that sarcasm?
I think Captcha is an AI...
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
Flatfrog said:
Searle's thought experiment is pure nonsense and has been debunked a thousand times. It proves nothing about computer minds, any more than realising that our brains obey the laws of physics proves that we don't have minds.
Yeah, I don't mean to discredit the guy, as he's clearly smart and influential in the AI field but I don't think that scenario of his is worth being brought up as much as it is.

Flatfrog said:
My problem with the Turing Test has always been that it's too hard on the computer. Even a thinking machine will still experience the world very differently to us, and will necessarily have a different outlook on life. Expecting it to pass the Turing Test is essentially asking it to lie convincingly.
And that's my other minor annoyance with people talking about AI: the Turing test is terribly inconclusive, based on pretty much a dream one guy had before AI was actually a thing. Yes, Turing was also a smart guy... heck, calling him "a smart guy" is probably underselling him, since he was way more than that, but the test named after him is not the be-all and end-all of what AI is or is likely to be. There are humans who cannot pass it, after all; it's hardly a measure of what real intelligence is. Especially considering it tries only one possible direction: conversation.
 

LeeArac

New member
Aug 16, 2011
26
0
0
Res Plus said:
I love Banks' thinking machines: so mind-bogglingly more intelligent than a human that they take a paternalistic stance on humanity, and so intelligent that they become moral, as that is the only logical course. A benevolent Skynet; quality stuff.
Morality is logical now? What? Pretty sure that's... not the case. For social animals like us, maybe, who actually DO need the presence and abilities of others of our species as a result of enlightened self-interest, sure... but for a hyper-intelligent machine mind? I really doubt it.
 

Esotera

New member
May 5, 2011
3,400
0
0
The Turing test isn't exactly amazing, as it's possible to create a program with the express purpose of being good at conversation but having very poor general intelligence. As AI advances I'm sure we'll consistently beat the test, but general intelligence will take a lot longer than expected.

I don't see anything holding back strong AI except possibly computational resources and the lack of memristors in computing. Any intelligence we do create will be so different from our own that it will be very hard to recognise... you could argue that Google as a system is a strong AI.
 

CrystalShadow

don't upset the insane catgirl
Apr 11, 2009
3,829
0
0
This question is unfortunately, tied directly to solipsism philosophically speaking.

The difference between Strong AI and Weak AI given those definitions seems to rely on whether the external appearance of having a mind implies actually possessing one.

The very nature of the Chinese Room argument implicitly assumes that it's possible to behave as though you have a mind without actually having one.

But how then would you test for this? What is the distinction between being able to appear to have a functioning mind, but not actually having one, and having an actual, genuine mind?

Here's a thought I've given in several arguments, which seems to annoy certain groups of scientifically inclined people. But it does seem relevant here:

My only ability to interact with and understand the world is through my own subjective experiences.
These experiences are the only thing I know for certain to be real, yet they cannot be measured in any way that would be considered objective.

I can describe them using language, but there are many aspects of these experiences that cannot be described in any way that does not already presume your experiences are the same.

Certain experiences can be correlated to objectively measurable physical phenomena. (Well, objective in the sense that if we assume other people are real, they get the exact same results.)
However, many of the experiences related to these physical phenomena have no apparent functional reason for being the way they are. The most easily discussed of these would be what are usually referred to as 'Qualia'.

I tend to use the example of colour here, because it illustrates the problem quite well. The physical nature of colour is that it is related to the relative frequency response of certain cells in the eye to photons with different wavelengths.
From the point of view of processing this information to do some kind of useful task with it, the key thing that needs to happen is discrimination (being able to tell two different colours apart from each other).

Now, we know from experiments that we can fool the human mind into seeing most colours using a mixture of just 3 wavelengths.
Sticking to the primary colours for a moment, functionally, it's useful to be able to tell the difference between blue and red, for instance.
But we experience blue and red in a specific way mentally. Yet a quick glance at the kind of calculations involved will show you that there is no apparent objective reason for this. I could swap the experience of red with that of blue, and it would have no practical consequence (that is, I could swap what red seems to look like with blue and vice versa, and as long as it's a complete swap, it changes nothing of practical effect).
Furthermore, if we take two people, and assume one sees red and blue opposite to how the other sees it, we cannot devise any means to verify if this is actually the case.
No means seems to exist to directly verify what each person experiences (not even in theory do I know of a method). Nothing about the physical phenomena changes anything about what they experience (except that it's different for each of them), and there's no language for either of them to describe such a difference, because the only words describe the common elements. If we ask them to point at a patch of red, they would both agree that it's red, irrespective of whether to one it seemed to look like what the other experienced as blue. Because, after all, even though what they experience as a result is different, what they are pointing at as a point of reference is the same for both of them.
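The swap argument can be made concrete with a toy model (arbitrary tokens stand in for "experiences"; this is obviously not a model of real perception): two observers attach inverted internal tokens to the same stimulus classes, yet every public report comes out identical, because reports are anchored to shared stimuli rather than to the tokens.

```python
STIMULI = {"ripe_tomato": 650, "clear_sky": 470}  # wavelength in nm

def classify(nm):
    # Shared discrimination ability: long vs short wavelength.
    return "long" if nm > 550 else "short"

# Each observer privately attaches an arbitrary internal token to each
# class; Bob's assignment is the inverse of Alice's.
alice_qualia = {"long": "EXPERIENCE_A", "short": "EXPERIENCE_B"}
bob_qualia   = {"long": "EXPERIENCE_B", "short": "EXPERIENCE_A"}

def report(nm):
    # Public language names the stimulus class, and is therefore the
    # same for every observer regardless of internal tokens.
    return {"long": "red", "short": "blue"}[classify(nm)]

for name, nm in STIMULI.items():
    # The internal tokens differ for every stimulus...
    assert alice_qualia[classify(nm)] != bob_qualia[classify(nm)]
    # ...yet the outward behaviour is identical.
    print(name, "->", report(nm))
```

No experiment run on the reports alone can distinguish the two observers, which is exactly the verification problem described above.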

Now there are many arguments made about this particular perspective. (One I find particularly bizarre being that subjective experience/qualia are an illusion. - which has the ironic implication that the only thing that is truly verifiable without making extra assumptions about how the world works is something that apparently doesn't exist.)

But if you go with this, and drop the assumption that appearing to have a mind means you actually do have one (in other words, if you allow the possibility of a human being who shows all the outward signs of being intelligent and experiencing things, without actually experiencing anything the way you know from experience that you do), then things get pretty messy.

Such internal experiences cannot be measured directly. (How would you do so? A brain scan? At best you can establish the correlation between your personal experiences and the neural state of your brain, but then you have a reliable sample size of exactly 1; because you cannot measure the experiences of others, you are left to infer that they are the same solely from that one correlation.)
What you're left with is inferring from their behaviour what their experiences are.
But, while this seems a reasonable inference to make in an intuitive sense, it has the undesirable trait of being completely unverifiable.

That means you are left with the conclusion that there is no objective way to determine whether a human being, other than yourself, actually experiences anything at all.
You cannot establish by any known experiment, the difference between a mindless automaton that nonetheless functions as though they were a thinking, feeling human being with actual experiences, and a person that does actually have feelings, thoughts and experiences of their own.

Can you see where this relates to the question?
The Chinese Room thought experiment demonstrates the idea that it's possible for a seemingly intelligent thing to actually have no idea what it's doing. It can perform a seemingly intelligent task, one that would seem to require a mind, without actually having one.
But, if you cannot test anything other than the external functionality, how can you possibly know the difference between something with a mind and something without one?
Without a way of testing for the existence of a mind, the two perform identically, therefore we cannot establish which has a mind and which does not.

The 'mind' being argued about in this sense is either ill-defined, or something which cannot be measured.
That's the reason the Turing test exists in the first place: there seems to be no criterion other than function which can reliably be used to decide what is intelligent and what isn't.

Trying to make a distinction then becomes a moot point, because we cannot perform any test that would give a meaningful answer.
 

mateushac

New member
Apr 4, 2010
343
0
0
CrystalShadow said:
Amazingly long yet pretty interesting wall of text
And that's why you don't philosophy. Philosophy ruins it for everyone.

OT: Searle's argument demands a test that'll prove whether a computer does or does not have a mind. How the hell are we supposed to do that if we haven't even settled on what a mind actually is yet?