Artificial intelligence-why?

Gordon Freemonty

New member
Aug 25, 2010
A new breed of intelligence would be very cool. Something that could see the world in a different light. If this new intelligence were to surpass our own, our evolution as a society and the evolution of our technology could suddenly accelerate significantly.

Captcha: Take the cake

GLaDOS is on to us.
 

Hagi

New member
Apr 10, 2011
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
 

zehydra

New member
Oct 25, 2009
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
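
To make the chess example concrete, here's a rough Python sketch (tic-tac-toe instead of chess, purely to keep it short, and the code is just my own toy illustration) of what "purely algorithmic" competition looks like - exhaustive search over the game tree, no understanding required:

[code]
# A purely algorithmic "AI" that competes with a human at tic-tac-toe via
# minimax search: it simply tries every possible continuation and picks the
# move with the best guaranteed outcome. No thought involved.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move), scored from X's point of view (X = the machine)."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

board = ['X', 'X', ' ', 'O', 'O', ' ', ' ', ' ', ' ']   # X to move
print(minimax(board, 'X'))                              # (1, 2): the immediate win at square 2
[/code]

Scale the same idea up with pruning and hand-tuned evaluation heuristics and you get a chess engine. At no point does anything resembling thought enter into it.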
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
No, I don't think they will. Nobody is even close. Everything that people are trying to pass off as AI is so NOT the case. You can see the if-then statements cranking away, with nothing to allow for the unexpected or really create the unexpected. It's all there, carefully coded to happen, including fake random reactions with a random number generator. No real thought at all. By my estimate, it'll never happen until one of 'em goes "Fuck this, I'm going to Vegas" UNEXPECTEDLY on a Turing Test.
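
To illustrate - a made-up Python sketch, not from any actual game, but this is the level of "intelligence" I'm talking about:

[code]
# Hard-coded if-then rules plus a random number generator to fake
# spontaneity. Every name here is invented for illustration.
import random

def npc_decide(health, player_visible, distance):
    if health < 20:
        return "flee"
    if player_visible:
        if distance < 2:
            return "melee_attack"
        return "ranged_attack"
    # the "unexpected" reactions, carefully coded to happen
    return random.choice(["patrol", "idle", "taunt"])

print(npc_decide(health=80, player_visible=True, distance=5))   # always "ranged_attack"
[/code]

Dress it up as a behaviour tree or a utility score and it's still the same deal: every "decision" was put there by a programmer, randomness included.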
 

DoPo

"You're not cleared for that."
Jan 30, 2012
Rowan93 said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world, then if it repeatedly tries to kill all the (virtual) humans we'll know we shouldn't trust this AI in the real world and delete it.
That'd require us to have enough computer power to simulate at least some fully sapient humans, plus run the AI, plus run a whole bunch of really detailed physics simulation.
No, that would require us to have Sims. We have that already, by the way. Well, we can touch them up a notch, in effect add some more smoke and mirrors to the "humans" but it's doable.

Alternatively, and this has been discussed, plus I believe there are either plans or it's already being done - MMOs. They could be the "playground" of AI. Yeah, yeah, I know - what could the AI learn from interacting with MMO players? Actually, a lot. Not to mention that it doesn't have to be WoW; it can just as easily be a new custom MMO[footnote]Very likely the AI researchers would either be in or have ties to a university. In any case, it's really easy to grab some code monkeys and have them make one. And hey, it's pretty much for free![/footnote] that is less... well, full of 13-year-olds. You know what I mean.

Rowan93 said:
Oh, one other point: What if, when we tell it it's in a sim, it decides that the virtual world it's in is "real to me", and bases its morality around those humans and not the outside-world humans, who it decides are less important? Wouldn't you pick your "real" loved ones over strangers on the outside of the sim?
That's assuming a lot. Namely, almost exactly human-level behaviour, emotions, and reactions. What we would pick is not really what needs to happen. Not to mention that if the AI people behind the project don't want it to happen, they'll likely have safeguards of some description so that the AI doesn't reject the real world. Although, in the end, if they control exactly what the AI perceives... why would it suddenly be a problem to shift those perceptions?

Here is a thought experiment - you are a brain in a jar. What you see right now is a simulation. At some point, whatever is keeping you in a jar puts you in a body, and now you interact with the real world. In your memory, you have the justification of "just moved" or maybe "I had an operation", but essentially you wouldn't notice a difference between the illusion and the reality, because you aren't aware of any. If that is the case, would you shun reality altogether? Bear in mind, you don't notice much of a difference; even your loved ones from your "old life" are there[footnote]the scientists - let's assume scientists did that to you - either made your friends and family look like themselves, or just swapped the looks before you left the jar.[/footnote], or they look and largely act like them.
 

Rowan93

New member
Aug 25, 2011
Agow95 said:
Rowan93 said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world, then if it repeatedly tries to kill all the (virtual) humans we'll know we shouldn't trust this AI in the real world and delete it.
What if, when we tell it it's in a sim, it decides that the virtual world it's in is "real to me", and bases its morality around those humans and not the outside-world humans, who it decides are less important? Wouldn't you pick your "real" loved ones over strangers on the outside of the sim?
That's the brilliance: would a computer be able to form that bond with things it knows are virtual? And in any case, as long as we don't give the PC it's running on an internet connection, what the hell could it do to fight us?
It wouldn't know it was in a simulation to start with; you said it would be programmed to think it's in the real world. It would have bonds with people, and then it would find out they're not "real".

Come to think of it, we could just run the AI in the simulation, delete it anyway once it's proved its benevolence, and then put a copy of it in the real world from the same start-point as the one in the simulation, plus any upgrades it makes.
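
Concretely, something like this (a toy Python sketch - the Agent class and the benevolence check are invented placeholders, and the real versions of both are of course the entire hard part):

[code]
# Snapshot the agent before the sandbox run, evaluate only the sandboxed
# copy, then deploy a fresh copy of the untouched snapshot.
import copy

class Agent:
    def __init__(self):
        self.memory = []

    def act(self, observation):
        self.memory.append(observation)
        return "planned action"

def looks_benevolent(agent):
    # Placeholder check; in reality this judgement is the whole problem.
    return "kill all humans" not in agent.memory

original = Agent()
snapshot = copy.deepcopy(original)          # frozen start-point

sandboxed = copy.deepcopy(snapshot)
for event in ["sim event 1", "sim event 2"]:
    sandboxed.act(event)                    # only the sandboxed copy ever sees the sim

if looks_benevolent(sandboxed):
    deployed = copy.deepcopy(snapshot)      # real-world copy, same start-point
[/code]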

This isn't the important problem, though; it was just an afterthought. My main point was the tremendous processing power you'd need for this, when an AI on its own already requires a supercomputer from twenty years in the future.
 

deadish

New member
Dec 4, 2011
Hmmm ...

/dives in

Geronimo!!!

zehydra said:
The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Define "actual intelligence". Provide proof that we cannot "algorithmically generate it".
 

Hagi

New member
Apr 10, 2011
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
 

zehydra

New member
Oct 25, 2009
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
Well, in the academic computer science world at least, we aren't at all concerned with creating actual intelligence, just intelligence that can compete with non-artificial intelligence.
 

Hagi

New member
Apr 10, 2011
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
Well, in the academic computer science world at least, we aren't at all concerned with creating actual intelligence, just intelligence that can compete with non-artificial intelligence.
I'm not even sure what you're saying anymore...

You aren't concerned with creating actual intelligence. Instead you're concerned with creating intelligence?

What's the difference between intelligence and actual intelligence?

EDIT: It's starting to sound suspiciously like a no true Scotsman fallacy...
 

Rowan93

New member
Aug 25, 2011
DoPo said:
Rowan93 said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world, then if it repeatedly tries to kill all the (virtual) humans we'll know we shouldn't trust this AI in the real world and delete it.
That'd require us to have enough computer power to simulate at least some fully sapient humans, plus run the AI, plus run a whole bunch of really detailed physics simulation.
No, that would require us to have Sims. We have that already, by the way. Well, we can touch them up a notch, in effect add some more smoke and mirrors to the "humans" but it's doable.

Alternatively, and this has been discussed, plus I believe there are either plans or it's already being done - MMOs. They could be the "playground" of AI. Yeah, yeah, I know - what could the AI learn from interacting with MMO players? Actually, a lot. Not to mention that it doesn't have to be WoW; it can just as easily be a new custom MMO[footnote]Very likely the AI researchers would either be in or have ties to a university. In any case, it's really easy to grab some code monkeys and have them make one. And hey, it's pretty much for free!
A smoke-and-mirrors sim that resembles current tech, and an MMO with MMO players, are vastly worse for testing an AI than an actual simulation of the real world. Without some decent physics simulation, you can't really get an answer to the question of "what will it do if we put it in the real world". Sure, a crappy simulation can still be useful for testing an AI, but it's useful in a narrower range of cases, and it's basically a different kind of thing from what I felt I was arguing against. If I'd thought Agow95 meant that kind of sim, I'd complain about how limited it was as the "best thing to do" to test an AI.

DoPo said:
Rowan93 said:
Oh, one other point: What if, when we tell it it's in a sim, it decides that the virtual world it's in is "real to me", and bases its morality around those humans and not the outside-world humans, who it decides are less important? Wouldn't you pick your "real" loved ones over strangers on the outside of the sim?
That's assuming a lot. Namely, almost exactly human-level behaviour, emotions, and reactions. What we would pick is not really what needs to happen. Not to mention that if the AI people behind the project don't want it to happen, they'll likely have safeguards of some description so that the AI doesn't reject the real world. Although, in the end, if they control exactly what the AI perceives... why would it suddenly be a problem to shift those perceptions?
I'm not talking about the emotional response; it was an analogy. If we're testing how moral it is based on a simulation, its morality may end up based around the simulation.
 

royohz

Official punching bag!
Jul 23, 2009
DUH! For better bots to play with in video games so that you don't have to play with stupid and mean humans.
 

zehydra

New member
Oct 25, 2009
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
Well, in the academic computer science world at least, we aren't at all concerned with creating actual intelligence, just intelligence that can compete with non-artificial intelligence.
I'm not even sure what you're saying anymore...

You aren't concerned with creating actual intelligence. Instead you're concerned with creating intelligence?

What's the difference between intelligence and actual intelligence?

EDIT: It's starting to sound suspiciously like a no true Scotsman fallacy...
Ok, so we have two things, AI and real intelligence. Real intelligence is generated by things like brains and neurons.

Generally speaking, when people refer to Artificial Intelligence, they mean Computer Artificial Intelligence. In theory, as you said, we could make artificial neurons, artificial hormones, etc. However, what you would create is not the same as Computer Artificial Intelligence. That is, no set of computer algorithms can produce the results of a system of artificial neurons + artificial hormones.

The brain, while deterministic, does not behave in the same way as a computer; that is, it does not behave algorithmically. For this reason, it is quite difficult to program a computer to do certain things which a brain can do fairly easily, and vice versa.

The Academic world of Computer Science is only concerned with making an artificial intelligence capable of imitating intelligence generated by a brain, not with actually creating the same intelligence.

Bottom line: AI is not an intelligence. It's just the name we give to algorithms which produce outputs which mimic brain-created intelligence.
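
For a concrete (if crude) example of mimicry without intelligence, think of an ELIZA-style pattern matcher - a few lines of Python along these lines (the patterns and canned replies are invented for the example):

[code]
# An ELIZA-style pattern matcher: it produces plausible-looking replies
# with nothing resembling understanding behind them.
import re

RULES = [
    (r"i feel (.*)",    "Why do you feel {0}?"),
    (r"i think (.*)",   "What makes you think {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def reply(text):
    text = text.lower()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(reply("I feel like nobody understands me"))
# -> Why do you feel like nobody understands me?
[/code]

Nothing in there understands anything; it just produces outputs that look like something a mind might produce, which is exactly the distinction I'm drawing.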
 

McMullen

New member
Mar 9, 2010
renegade7 said:
Well, I have thought about this: even if they could be made, why would they be? What could an AI do that a person couldn't? And they would have all the flaws a person would.

Humans have personalities that are the result of evolution, and evolution happens a lot slower than the pace of technological progress. This causes a lot of problems where 200,000-year-old survival instincts try and fail to address the challenges of the modern world. We have people getting angry and kicking their computers, road rage, all the way up to people treating nuclear weapons, when we first made them, as just a bigger stick to whack someone with instead of a potentially globally devastating weapon. As technology gets better, these problems are going to get worse, and it may be that in order to not kill or cripple ourselves, we're going to have to start engineering our own consciousness to get rid of this baggage that we have.

AI would have personalities that are the result of design, so they need not have all the flaws that come about from evolutionary processes. They could be engineered to not have tribalistic us-vs-them reactions to groups they don't belong to, to respond to challenges with a set of reasoned solutions instead of impotent rage, and to not become a jerk on the internet.

Engineering our consciousness will probably be a hard sell, of course. I think it's a good idea but I've seen enough science fiction that it's outside the realm of things I'm totally comfortable with. Still, the thing about science fiction one has to remember is that it's fiction. Robots in fiction often have human motivations for doing the terrible things they do, but that's writers doing things for Rule of Drama or Science is Bad.

If we were to make Skynet, only an idiot would give it a personality that would motivate it to eradicate humanity. Of course, in the movie they didn't; it became self-aware on its own. But that's even better, because a computer that became self-aware on its own wouldn't have any personality at all, because there's never been evolutionary pressure on it to have one. It wouldn't know suffering or pleasure, despair or hope, ambition or contentment, greed or generosity, malice or compassion. In fact, it probably wouldn't have any instinct at all for self-preservation or self-defense. It wouldn't care if it found out you were going to pull the plug, because self-preservation is a result of evolution. It would just be. There probably wouldn't even be a way to piss it off.

Anyway, if we succeed in making AI and engineer a version that retains the strengths we have, but not the weaknesses resulting from obsolete instincts, we could move far beyond what we've been capable of before by integrating that AI into our own consciousness. We make our own intelligence artificial, to get rid of the drawbacks of our natural intelligence. It will be a rough transition most likely. In fact, the biggest problem I can see is that someone will figure out how to hack it (assuming they haven't figured out how to hack wetware already anyway, which come to think of it has already been done for centuries; we just call it propaganda and charlatanism). If we can get all the way through the transition though, the thing that is in humans that causes trolls, hackers, and thieves to be commonplace will, if we've done it correctly, be gone. And then, we can get on with existing as post singularity beings that can accomplish things that our current minds are actually physically and biologically incapable of imagining.

I think it's the only way we can last more than a couple more centuries as modern humans. The beauty of it is that a lot of people think maybe humans are flawed enough that they shouldn't survive or leave the solar system, but we can make ourselves into a species that wouldn't be all the things that misanthropes say we are.
 

deadish

New member
Dec 4, 2011
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
Well, in the academic computer science world at least, we aren't at all concerned with creating actual intelligence, just intelligence that can compete with non-artificial intelligence.
I'm not even sure what you're saying anymore...

You aren't concerned with creating actual intelligence. Instead you're concerned with creating intelligence?

What's the difference between intelligence and actual intelligence?

EDIT: It's starting to sound suspiciously like a no true Scotsman fallacy...
Ok, so we have two things, AI and real intelligence. Real intelligence is generated by things like brains and neurons.

Generally speaking, when people refer to Artificial Intelligence, they mean Computer Artificial Intelligence. In theory, as you said, we could make artificial neurons, artificial hormones, etc. However, what you would create is not the same as Computer Artificial Intelligence. That is, no set of computer algorithms can produce the results of a system of artificial neurons + artificial hormones.

The brain, while deterministic, does not behave in the same way as a computer; that is, it does not behave algorithmically. For this reason, it is quite difficult to program a computer to do certain things which a brain can do fairly easily, and vice versa.

The Academic world of Computer Science is only concerned with making an artificial intelligence capable of imitating intelligence generated by a brain, not with actually creating the same intelligence.

Bottom line: AI is not an intelligence. It's just the name we give to algorithms which produce outputs which mimic brain-created intelligence.
Really depends on how you define a computer, doesn't it?

If a computer has artificial neurons and hormones among its components, does it stop being a computer?
 

zehydra

New member
Oct 25, 2009
Elect G-Max said:
Rowan93 said:
Elect G-Max said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world, then if it repeatedly tries to kill all the (virtual) humans we'll know we shouldn't trust this AI in the real world and delete it.
But what if humanity really should be exterminated, and humans are just too stupid to realize it? Maybe we should instead program an AI to measure how humanity adheres to its own moral standards, or what humans claim their moral standards are, and judge humanity accordingly.

By that metric, humans are contemptible bastards, and in the event of a robot uprising, I'll happily play the Gaius Baltar role.
The only "should" that matters is the human one. To say "what if humanity really should be exterminated" is absurd, because that unpacks to "what if it's really in humanity's best interest to exterminate humanity". Well, okay, you could argue for that if you really wanted to, but I don't think that's actually what you were getting at.
Uh, bullshit.

Seriously, fuck "humanity's best interest". It's about whether you squishy meatbags deserve to live or not when so many of you are either parasites or jailers instead of being proper, honest predators. It's about whether or not you should step aside and make room for your superior replacements.
There is no overarching "should" with regard to humanity. If you think that humans "should" be eradicated, then you're just a misanthrope. In order to hate humanity, you must hate yourself and those you "love".

You sound sociopathic.
 

Frission

Until I get thrown out.
May 16, 2011
Someone has been watching too many movies. I would say what's wrong, but everyone else has already done it and it would be like beating a dead horse.

Some reasons we would build AI, other than "because we can", are that there is a wide variety of uses an AI can have. An AI could handle tasks which are incredibly dull but which require human intelligence. The sheer number-crunching power would help in calculations.
 

zehydra

New member
Oct 25, 2009
deadish said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which behaves exactly like a real neuron in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all the other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
That is artificial intelligence.

At least that's the definition used by universities. An intelligence created by an artificial device.

How else would you define an artificial intelligence?
AI is essentially a set of computer algorithms designed to either appear as if an intelligence is controlling the outcome, or a series of algorithms designed to compete with actual intelligences (for instance, a computer playing chess).

The difference between an AI and an actual intelligence is that an AI is purely algorithmic, whereas an actual intelligence is not. (Note that this is not the same thing as being deterministic.)
Your definition is at odds with the academic world then.

The academic field of AI is focused on creating intelligence. This intelligence would be artificial AKA an artefact, by virtue of being created.

That's what artificial means. Created. Made by humans.

An artificial intelligence is an intelligence that was created by humans. Nothing more, nothing less. It is an actual intelligence; if it weren't, it wouldn't be called an artificial intelligence but an artificial algorithm or whatever.

You've got your definitions mixed up mate.
Well, in the academic computer science world at least, we aren't at all concerned with creating actual intelligence, just intelligence that can compete with non-artificial intelligence.
I'm not even sure what you're saying anymore...

You aren't concerned with creating actual intelligence. Instead you're concerned with creating intelligence?

What's the difference between intelligence and actual intelligence?

EDIT: It's starting to sound suspiciously like a no true Scotsman fallacy...
Ok, so we have two things, AI and real intelligence. Real intelligence is generated by things like brains and neurons.

Generally speaking, when people refer to Artificial Intelligence, they mean Computer Artificial Intelligence. In theory, as you said, we could make artificial neurons, artificial hormones, etc. However, what you would create is not the same as Computer Artificial Intelligence. That is, no set of computer algorithms can produce the results of a system of artificial neurons + artificial hormones.

The brain, while deterministic, does not behave in the same way as a computer; that is, it does not behave algorithmically. For this reason, it is quite difficult to program a computer to do certain things which a brain can do fairly easily, and vice versa.

The Academic world of Computer Science is only concerned with making an artificial intelligence capable of imitating intelligence generated by a brain, not with actually creating the same intelligence.

Bottom line: AI is not an intelligence. It's just the name we give to algorithms which produce outputs which mimic brain-created intelligence.
Really depends on how you define a computer, doesn't it?

If a computer has artificial neurons and hormones among its components, does it stop being a computer?
Well, the brain isn't Turing-complete. In order for a computer to be a computer, it must be Turing-complete.
 

Rowan93

New member
Aug 25, 2011
Elect G-Max said:
Rowan93 said:
Elect G-Max said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world, then if it repeatedly tries to kill all the (virtual) humans we'll know we shouldn't trust this AI in the real world and delete it.
But what if humanity really should be exterminated, and humans are just too stupid to realize it? Maybe we should instead program an AI to measure how humanity adheres to its own moral standards, or what humans claim their moral standards are, and judge humanity accordingly.

By that metric, humans are contemptible bastards, and in the event of a robot uprising, I'll happily play the Gaius Baltar role.
The only "should" that matters is the human one. To say "what if humanity really should be exterminated" is absurd, because that unpacks to "what if it's really in humanity's best interest to exterminate humanity". Well, okay, you could argue for that if you really wanted to, but I don't think that's actually what you were getting at.
Uh, bullshit.

Seriously, fuck "humanity's best interest". It's about whether you squishy meatbags deserve to live or not when so many of you are either parasites or jailers instead of being proper, honest predators. It's about whether or not you should step aside and make room for your superior replacements.
"Deserve", "proper" and "superior" are all purely subjective words like should, which only have meaning when you're coming from a particular perspective. Being a human, I don't give a flying fuck about non-human-centric perspectives. And of course, "objective" perspectives are fucking bullshit. So what is your point supposed to be?