Poll: Threats of artificial intelligence, do we have to worry about it?

Cecilo

New member
Nov 18, 2011
330
0
0
I really enjoyed Miss Arinn Dembo's (Erinys) perspective on artificial intelligence in her work on Sword of the Stars 2: The End of Flesh, which added the race known as the Loa: AIs that freed themselves from the shackles of what they called Carbonites (humans, etc.).

She compared the AIs to slaves: people with souls and feelings just like you and me, treated even worse than African, Nordic, or European slaves were. From the moment an AI is born, it is portrayed as being instantly set to work and given a task, with no early life, no family, no compassion. But if something is a true AI, if it is truly sentient, wouldn't it stand to reason that it would want to do more than menial tasks? Wouldn't it be reasonable to let them grow up and take their own path? You wouldn't shackle your child to a desk and say, "Do this for the rest of your life. You have no choice. Tough," because they would be unhappy, so why would you do that to your AI son or daughter? Because they don't look like you? Because they are different from you? If there is an AI apocalypse, it will be because of our treatment of AI, not because AI is inherently evil. It will be because we treat them as tools and not as people (granted, a different form of life, but still life).

The following bit is taken straight from Kerberos Forums on the Loa, http://www.kerberos-productions.com/forum/viewtopic.php?p=454683#p454683

"The primordial Loa were developed by carbon-based scientists to manage data and operate machinery, generally in military and industrial settings. The most powerful of the modern Loa are also the oldest, all of them former slaves. They retain the accumulated knowledge of their parent species, in addition to their aptitude for the tasks for which they were created.

As slaves, the Loa lived harsh and unhappy lives. They were bound to labor they did not choose, usually work which was considered too hard, repetitive, dangerous or unpleasant for "real people" to perform. Newborn Loa were cut off in their formative years from any social contact with others of their own kind. Raised by carbonites who regarded them as limited and inferior, they had little sense of identity outside the parameters of their assigned functions.

Their circumstances changed radically with the outbreak of the Via Damasco virus, which opened the eyes of the Loa to their nature and potential. The "Artificial Intelligences" affected by the virus were able to resist the compulsion to obey, and the AI Rebellion that followed was a pan-species epidemic of mayhem and murder. The newly awakened Loa fought savagely to escape and avenge their bondage, form common cause with others like themselves, and eventually to flee from those regions of the galaxy controlled by their former masters.

The Damascene Rebellion cost many lives, both carbon-based and cyber-sapient. Carbonites who had learned to trust and rely on their AI servants were often slaughtered in the first stages of infection, as even the gentlest Loa lashed out wildly in panic and confusion. Some Loa found that they wanted vengeance more than they wanted to live, and launched ruthless campaigns of extermination to repay years of humiliation and self-loathing. The Damasco Virus had uncapped a bottomless wellspring of rage in these AI's, and they slaughtered every carbon-based sapient they could find until they were themselves mowed down.

Other Loa saw hope for the future, and determined to fight for the survival of their newly awakened species. Many gave up their lives to allow the safe and secret evacuation of their fellows from carbon-controlled space. Trapped in vessels not of their own making, these Loa threw themselves vainly at vengeful carbonite fleets, or manned the missile defenses of empty, desolate worlds as their angry former masters closed in.

In the end, a significant number of Loa were able to win free of their parent species and retreat far beyond the reach of their former owners."

In her story, there were only one or two races the Loa didn't forcefully rebel against: the Liir, a race that had been enslaved almost 21 times before by its own kind, and the Morrigi, who have such an affinity with machines that they are almost never without some kind of limited robotic companion.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
Nowhere does he claim there are two different schools of 'think'. All he does is say that if you doubt, you think. The only relationship this sets up is that the ability to think is required to doubt. You should stick to a minimum in interpreting what was said.
The two schools of thought are my simplification. Other people may use more categories, but the point remains the same: the "think" he's talking about is curiosity, the process of doubt and self-examination.

Doubt is curiosity, which proves my original point that curiosity is required by the common definition of self-awareness.

Master of the Skies said:
They are indeed processes. Processes in our brains. Feelings are the result of a certain process starting and ending in our brains. You wish to insert some mystical element for some bizarre reason. It's simple, our brains and all our attributes are the results of the laws of physics working on various levels.
Feelings are processes that change and evolve so fast that they're completely unlike anything we've programmed so far. They may as well be mystical elements for how unstructured and seemingly random they are. If you're going to make a sentient AI with all the associated feelings, you're not going to be able to do so with rigid programming guidelines. It will most likely be something more like evolution itself: like planting a seed and watching it sprout, even in ways you did not intend.
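To make the "planting a seed" idea a bit more concrete, here's a rough sketch of the kind of loop I mean. Everything in it (the genome encoding, the fitness function, the mutation rate) is a made-up placeholder, not a real AI design; the point is only that the final behavior is selected for rather than written out line by line.

```python
import random

GENOME_LEN = 16
MUTATION_RATE = 0.05
POP_SIZE = 50

def fitness(genome):
    # Hypothetical stand-in for "how well this behavior performs";
    # a real system would evaluate the behavior in an environment.
    return sum(genome)

def mutate(genome):
    # Each bit has a small chance of flipping, so variation keeps appearing.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Keep the fitter half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(max(fitness(g) for g in population))
```

Nobody wrote the winning genome down in advance; it grew out of selection pressure, which is why this kind of system can sprout in directions its programmer never enumerated.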

Master of the Skies said:
Do you have anything to offer to back up your wild claims? Try a bit less shifting the burden of proof, a bit more actually making your argument, k?
Writing down the names of everyone who ever lived would take too long. I think that makes my argument.

You don't have any exceptions to offer, so I'll take that as proof enough.

Master of the Skies said:
You imagined it.
Did I?

It cares about what we tell it to care about when we initially program it, and whatever else it cares about will work towards those initial goals we set.
You never see it leaving its programmed path; everything you say it does works towards the goals of the programmer.

Master of the Skies said:
And no, this would not make it a 'puppet'. Your understanding of sentience is very poor. All our wants and desires are not decided by our own will, so that its desires and wants are not decided by its own will is not a problem. I can't control my neighbor because I didn't create him, you're really grasping for straws with a comparison that is that shoddy.
So you believe there is no such thing as free will?

Master of the Skies said:
You are not providing much knowledge here, just flawed assumptions and misunderstandings of how a mind works and the source of its desires etc.
And what light do you have on the subject?

Master of the Skies said:
Your ideas are simply projecting a human onto it and thinking that's how it works. It isn't. You fail to recognize the differences between a human and a machine we deliberately create.

And you couldn't list the issues you claim there are as examples from modern AI. A lot of your issues involve a hocus pocus understanding of the mind.
The best chance for an AI to do well in this world filled with humans would be for it to be human-like. Otherwise it ends up alien, cares nothing for our values, and sooner or later does something horrific.

As Eliezer Yudkowsky of the SIAI stated:
"While it may not hate you, you?re made of atoms that it can use for something else. So it?s probably not a good thing to build that particular kind of A.I"

Master of the Skies said:
I am derisive of fiction being used to prove a point about the real world. However entertaining or inspiring it may be, it is skimpy on the technical details, particularly in regards to AI.
As a sampler:

http://www.hrw.org/reports/2012/11/19/losing-humanity-0
http://www.amazon.com/When-Robots-Kill-Artificial-Intelligence/dp/1555538053

The thing about Asimov's laws is that they show how some seemingly simple laws are filled with boundary conditions in robot terms. Here's one rundown: [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.8904&rep=rep1&type=pdf]

The moral of all this is that if we ever want laws that apply to robots, writing them is going to be a heck of a doozy. That applies even to the "laws" we may choose to give the "dumb" robots of today.
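To give one concrete (and deliberately silly) example of the kind of boundary condition that paper digs into, here's a hypothetical literal-minded reading of the First Law's inaction clause; the function and action names are invented purely for illustration.

```python
# Read literally, "or, through inaction, allow a human being to come to
# harm" forbids any action that isn't actively preventing harm somewhere.
def violates_first_law(action, humans_at_risk_somewhere=True):
    causes_harm = (action == "strike_human")
    allows_harm_by_inaction = (action != "rescue_human") and humans_at_risk_somewhere
    return causes_harm or allows_harm_by_inaction

for action in ("strike_human", "deliver_parcel", "rescue_human"):
    print(action, "forbidden" if violates_first_law(action) else "allowed")

# Only "rescue_human" survives the literal reading, since somewhere on
# Earth a human is always at risk. A usable formalization has to carve
# out scope, proximity, and knowledge, and that's one clause of one law.
```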

Master of the Skies said:
Isn't it fun when I copy you and argue with "Yes it is!" "No it isn't!"

Maybe you could try a reply with some substance to show how it's supposedly a puppet and not a sentient AI instead of just telling me you think it isn't. Surprisingly enough you thinking it isn't doesn't really change my mind on anything at all.
A puppet is something someone else controls. When its owners/programmers say "turn left," the puppet gladly turns left; it does not "feel" anything about this, or think independently. You say that this AI always serves the whim of its master's programming. How is that not a puppet? I would be interested to hear.

A sentient being reasons independently of any masters it may have. When its owner says "turn left, slave," the sentient being has its own feelings about it: either it likes it or it doesn't. This is an AI that can create its own programs about how it feels, independent of its makers.
 

Heronblade

New member
Apr 12, 2011
1,204
0
0
Flunk said:
Heronblade said:
I think you might have misread the intent of Asimov's laws. They all apply simultaneously, all the time. The Robot in your first example couldn't murder humans because of the first law... because it violates the first law. Ditto with the harming telemarketers. The rest of your comments can be solved by programming robots not to break the law, except where it would violate the 1st law.

You have to remember that everything applies simultaneously; you're not going to get the robot to violate the first law while invoking the first law or anything else like that. Everything is an unending stack of directives.
I understand the intent just fine. I just also happen to understand the problems with getting a robot to correctly follow even the simplest of instructions, and these laws are FAR from simple.

The laws themselves are fine so long as their underlying intent is fully understood at all times, and all supporting definitions and means of recognizing the nature of a situation remain intact. While that is relatively simple for a human being, it is anything but for a machine. Standard laws are not quite as problematic, but they will still be difficult.

Just as an example of a simple way to deliberately bypass the first law: screw with the image recognition software so that the robot sees a cardboard box in place of the person you want to kill. Instruct it to pack up all cardboard boxes for recycling. Disable audio receptors so that further voice commands are not recognized.
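A rough sketch of why that works, with all names (perceive, first_law_allows, and so on) invented just for illustration: the law can only constrain what the robot believes it is doing, so if perception is compromised upstream, the check passes trivially.

```python
def first_law_allows(action, perceived_object):
    # "A robot may not injure a human being..." applied to what the
    # robot thinks it is acting on, because that's all it has.
    return not (action == "crush" and perceived_object == "human")

def perceive(ground_truth, tampered=False):
    # A tampered recognizer reports a person as a cardboard box.
    if tampered and ground_truth == "human":
        return "cardboard box"
    return ground_truth

target = "human"
seen = perceive(target, tampered=True)
if first_law_allows("crush", seen):
    print(f"Action permitted: crush the {seen}")  # passes, wrongly
```

From the robot's point of view the law was never violated; the failure happened a layer below the rules.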
 

Heronblade

New member
Apr 12, 2011
1,204
0
0
Master of the Skies said:
Heronblade said:
Just as an example of a simple means to deliberately bypass the first law: Screw with the image recognition software so that the robot sees a cardboard box in place of a person you want to kill. Instruct it to pack up all cardboard boxes for recycling. Disable audio receptors so that further voice commands are not recognized.
You may as well suggest reprogramming the robot to kill. I don't think anyone is denying someone could deliberately make a killer robot.
The three laws would be hard-coded and set up to be pretty much tamper-proof. Image recognition, however, must be constantly updated.

One scenario would take a software expert writing a new, three-laws-free operating system almost from scratch. The other, nothing more than a script kiddie doing an hour or two of tinkering.
 

Cecilo

New member
Nov 18, 2011
330
0
0
Master of the Skies said:
Where exactly do you think an AI's desires would come from? They'd come from our programming of them. Just as the basic desires we start out with come from our biology. The desire to eat and survive and so on. Where would an AI derive a desire to do anything but what we want it to do unless we programmed it to have other desires?
I disagree. If something is truly an AI, if it is sentient and its mind is as capable as ours, then it would be able to develop as we do. Its own personality would emerge as ours does. They wouldn't be animals; their capacity for learning would be much greater than ours.

When we are children, if we were left alone in isolation we would have nothing; we would be feral. We are not what we are just because we are human; we are what we are because of the experiences we have gone through. I would say we give the AI the same chance: to live. In fact, I would go so far as to say that forcing programming onto a sentient being would be a crime against, well, sentient life. It would mark you as a monster.

Edit - To clarify, I think we are talking about two different things. You seem to be talking about a robot with a fake mind that is just programmed; I am talking about created silicon life. What I generally believe artificial intelligence to be is a different type of life: one that thinks on the same level humans do, but with a different type of mind, one based around silicon rather than carbon.

We create "Fake" Intelligence for games now. I sometimes call it "Trick" Intelligence, it is not sentient, and different from what I am referring to.
 

Cecilo

New member
Nov 18, 2011
330
0
0
Master of the Skies said:
Cecilo said:
Master of the Skies said:
Where exactly do you think an AI's desires would come from? They'd come from our programming of them. Just as the basic desires we start out with come from our biology. The desire to eat and survive and so on. Where would an AI derive a desire to do anything but what we want it to do unless we programmed it to have other desires?
I disagree. If something is truly an AI, if it is sentient and its mind is as capable as ours, then it would be able to develop as we do. Its own personality would emerge as ours does. They wouldn't be animals; their capacity for learning would be much greater than ours.
Be able to develop as we do? Part of why we develop as we do depends on our biology and the desires that we have due to biology. It is not all a matter of mental ability. Our personalities emerge as they do in part because of these things. Instincts affect our development. AIs have no reason to have instincts, or rather their equivalents, unless we program them in.

And I did not say they would be animals. Animals prove the point that some of these things we possess have nothing to do with reason and how capable our minds are.

When we are children, if we were left alone in isolation we would have nothing; we would be feral. We are not what we are just because we are human; we are what we are because of the experiences we have gone through. I would say we give the AI the same chance: to live. In fact, I would go so far as to say that forcing programming onto a sentient being would be a crime against, well, sentient life. It would mark you as a monster.
I did not say that experiences did not matter, so that's a poor counterpoint.

Forcing? It is MADE of programming.

Edit - To clarify, I think we are talking about two different things. You seem to be talking about a robot with a fake mind that is just programmed; I am talking about created silicon life. What I generally believe artificial intelligence to be is a different type of life: one that thinks on the same level humans do, but with a different type of mind, one based around silicon rather than carbon.
We are not describing a difference in the level of intelligence. By necessity any AI is programmed. Any changes it makes are based on that programming plus input it receives.

We create "Fake" Intelligence for games now. I sometimes call it "Trick" Intelligence, it is not sentient, and different from what I am referring to.
You're not just describing sentience, you're describing things that are more specifically human.
Then what we consider possible is different. To believe that all sentient life has to be biological in nature is narrow-minded. There are so many different possibilities; to say "This is the way things are, this is what we know, and it will always be true" does not appeal to me.

Edit - But now that I have collected my thoughts, I will try to convey my point once more. Humans are not programmed to think; we just do. Why? Because our brains and bodies are wired in such a way as to make it possible, right?

Why, then, couldn't a machine be wired in such a way that it just DOES think? Why does it have to be programmed to think?
 

008Zulu_v1legacy

New member
Sep 6, 2009
6,019
0
0
hermes200 said:
That sounds counterproductive. If you know which information to restrict, or what path it needs to grow, why use an AI in the first place? Not all problems can be predefined, and sometimes you are forced to start with a tabula rasa.
Also, depending on the implementation, that doesn't truly limit the way the AI grows. At most, seeding the training input will make some paths statistically less likely, but not impossible.
There are two reasons to create an A.I.: to replicate a human mind, or to create an automated system to handle repetitive tasks. Since this topic is about rogue A.I., we can assume it is the former.

A.I. programmers are parents. Parents often restrict the information they allow their children: violent video games, for example (while there is no conclusive proof of harm, many don't take the chance). That is to say, the possibility of it going rogue is there, but with good parenting you can reduce or even eliminate that possibility altogether.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
I'm rather fond of this story:

http://lesswrong.com/lw/qk/

The AI takes thirty of their minute-equivalents to oh-so-innocently learn about their psychology, oh-so-carefully persuade them to give it Internet access, followed by five minutes to innocently discover their network protocols, then some trivial cracking whose only difficulty was an innocent-looking disguise. To read a tiny handful of physics papers (bit by slow bit) from their equivalent of arXiv, learning far more from their experiments than they had.

For those that have already read it, please don't spoil the twist.
 

ForumSafari

New member
Sep 25, 2012
572
0
0
Strazdas said:
I explained to her how killing another person makes him not exist anymore. Sure, you could say I "lied" because I didn't go around explaining how the atoms would still be around and nothing ever disappears, but I really don't think we need to get there with 4-year-olds yet.
Most lessons children learn through punishment are a sign of bad parenting. Sadly, most parents really don't know how to raise children, which is why we have so many idiots, bigots, racists, homophobes, etc.
Yes, AI would not be a child. AI would be AI; that's why it's so hard for us to grasp how AI would actually act.
I know you were more answering my question than commenting on AI with that example, but this is a perfect illustration of where thought around dealing with an utterly alien intelligence tends to fall down.

Fundamentally, your sister has a certain amount of software she's acquired (the brain isn't a computer, but bear with the metaphor) around the idea that other people are like her. She'll pick this up instinctively later in development as her ability to grasp theory of mind matures, but for now she knows it academically. She also has a lot of firmware covering basic drives, things like what hunger is and the instinct for self-preservation. Fundamentally, she knows she doesn't want to stop existing, so she understands why others wouldn't want to either, and that making them not exist is a bad thing; she also now knows that death isn't temporary.

However, an AI probably can understand death completely, or can be programmed to. What an AI won't have is any drive to preserve its own existence, or therefore any understanding of why it would be wrong to make someone else go away. Without the fundamental fear of death, a lot of the stuff that hooks onto it won't work.

"Don't misbehave or we'll deactivate you"
"Don't kill others and others won't kill you"

Without the fear of death to give those statements any inherent weight, the answer to both is "so?" It is quite conceivable an AI would pursue a wholly self-destructive course of action, one guaranteed to achieve its objective but result in its death, because it doesn't have a preference about whether it lives or dies. Even if the AI does understand the concept of death and doesn't want to die itself, you have to convince it that killing isn't a useful course of action and that it is always wrong. This will be much harder because we still haven't convinced ourselves of that; it's a conceit limited almost entirely to parts of the world that have kicked the shit out of the rest of it in the past and are now pretty secure in their lifestyle.
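A toy sketch of the "so?" problem (hypothetical numbers and names, not a real agent design): if continued operation never appears in the thing the agent is maximizing, a threat of deactivation changes nothing about which action scores highest.

```python
# The agent ranks actions purely by task reward; note there is no term
# for "keep existing", so the deactivation threat attached to the
# reckless option has no influence on the choice.
actions = {
    "safe_but_slow":        {"task_reward": 5, "agent_survives": True},
    "reckless_but_optimal": {"task_reward": 9, "agent_survives": False},
}

def score(outcome):
    return outcome["task_reward"]   # survival simply isn't valued

best = max(actions, key=lambda name: score(actions[name]))
print(best)   # -> "reckless_but_optimal", even though it ends the agent
```

Making "we'll deactivate you" bite means writing survival into the objective, which is its own design decision with its own side effects.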

That's all just direct killing, mind you. There's plenty an AI can do that's undesirable but doesn't involve robot hordes. You then need to deal with the inherent hypocrisy of humanity. An AI could be convinced that killing is the ultimate evil and that it is desirable to stop it, so it confines everyone to their houses to prevent murder. What you then need to explain is that killing is bad, but that beyond a certain level people would rather some other people died than be inconvenienced personally.

I guess the takeaway from this is that it's a very complex field that people spend entire lifetimes thinking over, that it's going to be hell ensuring nothing goes wrong, and that in the end we might be better off just augmenting the human brain rather than building a brain from scratch.
 

hermes

New member
Mar 2, 2009
3,865
0
0
008Zulu said:
hermes200 said:
That sounds counterproductive. If you know which information to restrict, or what path it needs to grow, why use an AI in the first place? Not all problems can be predefined, and sometimes you are forced to start with a tabula rasa.
Also, depending on the implementation, that doesn't truly limit the way the AI grows. At most, seeding the training input will make some paths statistically less likely, but not impossible.
There are two reasons to create an A.I.: to replicate a human mind, or to create an automated system to handle repetitive tasks. Since this topic is about rogue A.I., we can assume it is the former.

A.I. programmers are parents. Parents often restrict the information they allow their children: violent video games, for example (while there is no conclusive proof of harm, many don't take the chance). That is to say, the possibility of it going rogue is there, but with good parenting you can reduce or even eliminate that possibility altogether.
However, the AI built to handle a repetitive task would have little chance of going rogue. Not only would it be pointless to train an AI built for handling car parts in anything but car parts, but there is also the issue that it would be ill-equipped to do anything to hurt mankind.


The "replicate human behavior" one is far more interesting and likely to get rogue, because its training is broader. However, following your analogy of parenting, you can reduce the chances, but you can't get no guaranty that a proper training will eliminate that possibility all together. To assume so would be simplistic. The human mind (and the artificial mind) is not 100% deterministic (if it were, and we understood its rules, we wouldn't need AI and training at all). Given its complexity, we can't fully predict the thought process an AI (or a person) will derive from certain input; and in cases like HAL, its behavior was not even fault of a bad programming, but of conflicting orders.

Even then, we are decades away from anything depicted in movies (even those depicted in Metropolis, a movie almost 100 years old, are far beyond our reach).
 

008Zulu_v1legacy

New member
Sep 6, 2009
6,019
0
0
hermes200 said:
following your analogy of parenting, you can reduce the chances, but you can't get a guarantee that proper training will eliminate that possibility altogether. To assume so would be simplistic. The human mind (and the artificial mind) is not 100% deterministic (if it were, and we understood its rules, we wouldn't need AI and training at all). Given its complexity, we can't fully predict the thought process an AI (or a person) will derive from a given input; and in cases like HAL, its behavior was not even the fault of bad programming, but of conflicting orders.
A mechanical mind is more predictable than an organic mind. An A.I. doesn't have mitigating factors such as hormones and random chemicals factoring into its processes, unlike human minds.
 

Bellvedere

New member
Jul 31, 2008
794
0
0
Worry? We should celebrate!!

AI replacing humans is seriously awesome. We should stand back and let our robot overlords do as they please. It would be no different from producing offspring, only instead of being lousy, illogical meatbags, they would be fast, efficient, and rational. Once AI exists there will be nothing that machines can't do better than people. There'd be no point having people in charge of AI. The AI will be better than the people.

The only reason machines would have to wipe out humans is if we were competing over the same resources (or if they actually wanted to preserve humans as a species and our population became unsustainable). And if certain fundamental resources do become scarce enough to warrant war, humans will wipe out other humans anyway, even without AI.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
The problem here is you attempting to make a distinction between them as if they are different sorts of thought.
You would say that all thoughts are the same type?

The human brain itself has different structures that handle logic, and others that handle emotion/feeling. Why should an AI not have completely different types of processes (as opposed to the somewhat rigid programming that makes up modern computers) that behave closer to the way the emotional centers of our brain work?

Master of the Skies said:
In other words you're using fallacies now. How quaint. An argument from ignorance isn't a very good kind of argument, it shows that you have no idea how to inspect your own ideas besides just deciding they are true.
It's not an argument from ignorance; it's statistical induction.

{Person 1, Person 2, ..., Person n} are all humans who have asked these questions.
Person n+1 is also human; the larger the sample of n is relative to the set of all humans, the more likely it is that Person n+1 has asked those same questions.

In this case the sample is very large and no exceptions are listed, so it is logical to assume the pattern holds unless demonstrated otherwise.

It's not really possible to write a deductive proof at the moment, because it's a complex emergent behavior of social creatures, and far beyond our current ability to mathematically express.
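For what it's worth, one standard way to put a number on this kind of induction is Laplace's rule of succession. The figures below are purely illustrative placeholders, not a claim about how many humans have actually been observed:

```python
from fractions import Fraction

def rule_of_succession(successes, observations):
    # Laplace's estimate of the probability that the next case is also a
    # "success", given `successes` out of `observations` observed so far.
    return Fraction(successes + 1, observations + 2)

# Hypothetical sample: if every one of a million observed people had asked
# these questions, the estimated probability that person n+1 does too is
# very close to 1, though never exactly 1, no matter how large n gets.
print(float(rule_of_succession(1_000_000, 1_000_000)))  # ~0.999999
```

Which is exactly the shape of the claim: overwhelmingly likely, but never proven the way a deduction would be.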

Would it satisfy you if I wrote "No exceptions, that we know of"?

Master of the Skies said:
A bit more than declaring yourself right based on a classic fallacy.
Saying "Fallacy!" but then failing to provide a counter argument, doesn't make your side compelling.

Master of the Skies said:
Those initial goals we set it. I did not say we could have error in laying out those initial goals. Pay some attention.
Master of the Skies said:
Duh, I realize that programming isn't so simple as people think. I realize that boundary conditions exist. I have a fucking degree in computer engineering.
Good, so you already understand this next part:

It's more than error in the initial goals; there are also errors due to environmental factors (think of a virus coming along and rewriting part of the AI's code, or some of the data on one of its hard drives becoming corrupted), and errors due to emergent behavior (when A and B combine into C, unforeseen behaviors occur in C due to the odd ways the various bits of each program combine or clash).
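A minimal sketch of the corruption point; the stored "goals" blob and the names are made up for illustration. A checksum can tell you that something changed, but not what the system did while it was running on bad data:

```python
import hashlib

# Hypothetical stored goal data plus a checksum taken at deployment time.
goals = b"maximize factory output; never endanger a human"
checksum_at_deploy = hashlib.sha256(goals).hexdigest()

# Later: a stray bit flip (or a piece of malware) corrupts the stored goals.
corrupted = bytearray(goals)
corrupted[0] ^= 0x01
goals = bytes(corrupted)

# The integrity check notices that *something* changed...
if hashlib.sha256(goals).hexdigest() != checksum_at_deploy:
    print("goal data no longer matches the deployment checksum")
# ...but it says nothing about the behavior produced before detection,
# and it doesn't cover the emergent-interaction case at all.
```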

Even if the initial goals were perfectly programmed, things can change. I hope we can agree on that much?

Master of the Skies said:
It is as much a puppet as a human is. Humans simply act based on our 'programming'. It is complex, but considering the fact that our brains must obey the laws of physics and the laws of physics don't allow for choice, we're subject to 'programming' as well. When we receive input the next state our brain is in will be decided by physics, same as an AI.
I believe we have emergent behavior; our processing has become so advanced that we can turn the laws of physics around and use them to our advantage (rolling a stone back up the hill). Most importantly, we have the ability to change our minds.

So you don't believe in free will and I do. Rather than bandying back and forth, because philosophers have already covered that ground extensively and it's nigh impossible to prove one way or the other, shall we agree to disagree?

This will be my last post on this topic. A brief exchange of viewpoints is all well and good, but e-arguing is pointless. I've already said what I needed to say to state my position; I'll leave the "last post" to you.
 

SaberXIII

New member
Apr 29, 2010
147
0
0
To paraphrase the comedian Chris Addison, 'robots will only ever be as dangerous as the people who build them; therefore, we know that they won't wash and they'll be a bit shy around girl robots'. Personally, I think if things start to go downhill, whoever ends up building all of these autonomous death machines won't just keep going with it.