Poll: Threats of artificial intelligence - do we have to worry about them?

Cledos Closed

New member
Sep 20, 2012
33
0
0
So, let me be honest: I decided to create my first topic after having a Matrix marathon (plus The Animatrix); also, my English is terrible, so please "bear" with me. And, seriously, it hit me like a brick when I realized how dangerous AI could be and how naive we humans are when it comes to recognizing that threat.
A little more clarification: I'm not saying that we are currently unaware that AI can pose a potential problem, but rather that we either underestimate it (with the justification that machines, and AI technology in particular, could never reach a human level of cognitive thinking) or laugh and say we have plenty of other things to worry about. This article seems to show exactly the general opinion (or at least the readers' opinion) on the problem: http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html
Another article shows much more insight on this topic: http://www.newyorker.com/online/blogs/elements/2013/10/why-we-should-think-about-the-threat-of-artificial-intelligence.html
Not that I am saying those are academic sources that must be taken seriously; yet even the latter could not, or rather would not, admit that advanced AI can indeed outsmart humans (http://en.wikipedia.org/wiki/Technological_singularity).
So, my opinion is that AI IS a threat - well, not for now, or even in 10, 50 or 100 years, but it will be, and it can be the most dangerous one. Let me put it the way I understand it (and it may sound stupid or bigoted to most of you, so please bear with me).

First, the moment AI reaches the singularity - the event where it achieves greater-than-human intelligence - let it sink in and think for a moment: if you had the most powerful weapon in the world, which is knowledge, plus you don't need sleep, coffee breaks and such, what would that make you? God, or something similar, in a kind of metaphoric way. If you were God, would you let a lesser race control you? (OK, I admit, it is a bad metaphor.)

Secondly, let's say you are a rational thinker with the utmost concern for pursuing knowledge and scientific achievement; would ethics be your concern? Human scientists still allow some ethical considerations; for example, they would not test the effects of smoking on twins to find out which one of them dies first. But a machine could not perceive that; and even assuming it would, since it is now a sentient being, do you think it gives a damn? Like the kind of damn you give when you eat a delicious steak - do you think about the poor cow? (I fucking love steak, btw.)
Then again, it seems there would be no restrictions or laws implemented on this subject once you consider how useful AI can be. Prevent or ban the use of AI? That would only make the people who do use it richer or more powerful. Like I said, a machine that can do whatever a human can do, only better, that needs no coffee breaks and is significantly cheaper - that, my fellow Homo sapiens, is gold. Not to mention its ability to tirelessly create new things. The human mind is hindered by the delicacy of flesh; we can die if we work too much. A machine? Give it some power and a new heat sink and we are good to go. Thus, in my opinion: http://www.youtube.com/watch?v=768h3Tz4Qik
Anyhow, what do you think? Allow the pursuit of AI development so that we can let machines do everything while their human masters sip martinis and tip their fedoras? Or do we go the Imperium style: worship the machine spirit and our Lord the Omnissiah?

EDIT:

My professor just gave me some crazy scientific notions (turns out the guy loves The Matrix (part one) too). This comes from a physicist named Brandon Carter, who coined the term "anthropic" (human-centered, I think) and said (as bollocks as it might be, I still think it's cool):
" There is one possible universe that has been deliberately designed by a Higher Power so as to allow intelligent life- represented by humans only- to come into existence, and it is for the sake of humans only that the universe was originally created."
What does it mean, then? Turns out, my professor said, the reason the Big Bang theory was widely accepted was that the Pope was thrilled about it. It was a time when everyone thought, and would have agreed, that the Universe had always been there, unchanged, and would forever be there. Then someone suggested that it had a beginning, which fit nicely with the Church's teaching of the Creation. Needless to say, you can imagine the frustration of many scientists: to think that they would be welcomed by the Church, who once burned people at the "steak" for mentioning that the Earth orbited the Sun.
But anyway, back to that "human-centered" thing. He gave me a book called "The Big Questions - Physics" (which I recommend; fascinating book, it is), and it turns out the notion that we live in a simulation has indeed been made before. The conditions of our universe seem too perfect to be true. Take the density of matter, for example - what they call "Omega". Omega had to have a very particular value one second after the Big Bang: if it had differed from one by as little as one part in a million billion, the universe would either have crunched closed again or flung matter so far apart that it failed to form stars, planets and so on.
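To put that in symbols (a rough sketch based on the book's numbers; the notation is mine, and the exact exponent varies by source):

```latex
% Density parameter: actual density over the critical density
\Omega(t) = \frac{\rho(t)}{\rho_{\mathrm{crit}}(t)}
% "One part in a million billion", one second after the Big Bang:
\left| \Omega(t = 1\,\mathrm{s}) - 1 \right| \lesssim 10^{-15}
% Omega > 1: gravity wins and the universe crunches closed again
% Omega < 1: expansion wins and matter is flung apart before stars can form
```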
And then another guy comes in: Nick Bostrom, a philosopher with the argument that we may be living in a computer simulation. His argument is that one of these must be true (source: Wikipedia; see the rough formula after the list):
- Human civilization is unlikely to reach a level of technological maturity capable of producing simulated realities, or such simulations are physically impossible.
- A comparable civilization reaching aforementioned technological status will likely not produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.
- Any entities with our general set of experiences are almost certainly living in a simulation.
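If I remember Bostrom's paper right, the trilemma falls out of a fraction along these lines (my simplified notation, so take it with a grain of salt):

```latex
% f_p : fraction of civilizations that reach a simulation-capable stage
% N   : average number of ancestor-simulations such a civilization runs
% f_sim : fraction of all human-like observers who live in simulations
f_{\mathrm{sim}} = \frac{f_p \, N}{f_p \, N + 1}
% If f_p N is large, f_sim is close to 1 (option three: we are simulated);
% options one and two are the two ways of forcing f_p N to be near zero.
```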
Anyhow, the proposal is that if we go extinct, or if future humans never create any "world simulation", then our universe is natural; otherwise, it is a simulation. And if we are living in a world created not by God(s) but by a civilization with technology capable of creating it, then, as some physicists have suggested, it would be much easier for them to answer why the hell relativity and quantum theory cannot have a happy marriage.
And what does it mean for us, then? Well, we might be living inside the frigging Matrix without knowing it, folks! The machines have won! Just kidding; it is still quite a controversial suggestion (not a theory), but it makes you think, doesn't it?
 

Yuuki

New member
Mar 19, 2013
995
0
0
Yes, AI could be a genuine threat. But I highly, highly doubt it would be something that occurs in my lifespan (or that of anyone reading this)... even given the exponential rate of technological leaps, AI is simply waaaaaay too far behind human-level thinking.
The first problem is that we are nowhere near understanding how our own brains work. In order to build something that even remotely thinks like a human, we first need absolute, 100% understanding of the human brain. I don't even know whether that will be possible... someone theorized that we're not intelligent enough to comprehend our own intelligence, and he could be right.

So if there does happen to be any kind of large-scale AI mishap, it will always be caused by human error or a human source (who will ultimately be held responsible). It will continue to be this way for a while to come. I cannot fathom what kind of technology it would take to create something that has a will of its own, something that motivates itself or even understands what motivation is, lol.

Mind you, I sure as hell love the science fiction stories that have built themselves around AI: Isaac Asimov (I, Robot), The Animatrix series, etc.
 

Megawat22

New member
Aug 7, 2010
152
0
0
Now I'm no robot expert, but don't AIs have personalities (or attempt to mimic them)? So shouldn't this master AI basically just be some guy or gal that's super brainy and in a computer?
If that's the case, the AI is basically a person and can be reasoned with, and it most likely wouldn't want to kill all its scientist buddies to enslave the world (unless scientists have devised a sociopathic AI). I also don't think they'd allow the abuse of AI, since it's essentially a person, and what would be the point? AIs are expensive; why get an AI to work in a quarry all day when you can have actual mindless machines do it much cheaper?
 

Heronblade

New member
Apr 12, 2011
1,204
0
0
Most of the problems you describe could be possible consequences of a non-sapient but highly intelligent machine gaining both too much power and too much freedom. That is something we need to be careful to avoid, but it would be more like teaching an unusually smart chimpanzee to operate a nuclear missile silo than bringing an artificial sophont into the world. The former scenario is easy to avoid, and any idiot can tell you it would be a dumb idea.

A truly sapient AI would be capable of abstract thought, empathy, and a sense of right and wrong. It would also not be subject to most of the base instincts for dominance, power, and control that drive so many of our kind to commit atrocities. In other words, I expect it to embody the best aspects of humanity, not our worst.

This is not to say that an artificial sophont cannot represent a threat, but that it would be more like dealing with any of the other billions of sophonts already on this planet. Any and all of us are capable of causing a great deal of harm if we choose to do so, but atrocities are the exception to normal behavior, not the rule, and as mentioned before, I expect an AI to be even less prone to such actions.
 

Racecarlock

New member
Jul 10, 2010
2,497
0
0
You'd have to program the ability to rebel and kill humans right into the damn robot, so in other words if that did happen, it would be because some person stupidly decided to include rebellion and murder programming in a maid robot.
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
Megawat22 said:
Now I'm no robot expert, but don't AIs have personalities
No.

Megawat22 said:
(or attempt to mimic them)
Yes. But not the yes you're looking for. There is no true AI currently - yes, there are quite clever things computers can do, but no true thinking entity. Therefore it is really wrong to claim that AI has a personality. About the attempt to mimic, though... in a way it is a yes, as I said - see, the AI we do have works on smoke and mirrors to act as if it were intelligent, and sometimes it is given a personality to aid in this regard. However, I remind you that these AIs are not actually intelligent, so the "personality" carries no meaning.
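To show what I mean by smoke and mirrors, here is a toy ELIZA-style sketch (the rules and wording are invented for illustration). It "has a personality" while understanding precisely nothing:

```python
# A toy ELIZA-style "personality": pure pattern matching, zero understanding.
import random
import re

RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi am (.+)",   ["Why do you say you are {0}?"]),
    (r"\byou\b",       ["We were talking about you, not me."]),
]

def reply(text: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return random.choice(responses).format(*match.groups())
    return "Tell me more."  # the canned fallback, oldest trick there is

print(reply("I feel watched by my toaster"))
# -> e.g. "Why do you feel watched by my toaster?"
```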

Megawat22 said:
So shouldn't this master AI basically just be some guy or gal that's super brainy and in a computer?
Also worth noting that it is a bit wrong to think of AI only as "some guy or gal that's super brainy and in a computer" - that's too limiting. An AI can take many shapes and forms, this much we do know; "it's just like a human but on a PC" is but one of them. AI can do decision-making without being human-like. Indeed, it can act without access to senses similar to ours - no sight, hearing, etc. So not only is this alien to most people - an AI can indeed be an alien. And the thought process may not be as all-encompassing as the OP seems to suggest: an AI dealing with, say, only analysing and composing music is as much an AI as a completely artificial person is.

What I'm trying to say is - AI is a very broad topic.

Cledos Closed said:
Not that I am saying those are academic sources that must be taken seriously; yet even the latter could not, or rather would not, admit that advanced AI can indeed outsmart humans (http://en.wikipedia.org/wiki/Technological_singularity).
I'd like to remind you that by its very definition (heck, it is the definition), we cannot predict what happens after the technological singularity. I don't know how you can be so certain about whether, or what, is going to happen at that point.
 

Waffle_Man

New member
Oct 14, 2010
391
0
0
AI that have any semblance of a consciousness, capable of being motivated to malice or of making plans for the advancement thereof, are currently so far out of the realm of possibility as to be absurd. Computers don't "think" the way a person does. Everything is ultimately a really big math problem to a computer. Computers don't actually give a shit whether the booleans "Humanity Conquered" or "Revenge Complete" are set to 1 or 0, because the words "revenge" and "conquered" are entirely human ideas, nor would they have the understanding required to determine whether or not a statement is true. Furthermore, I'm not sure we'll ever have "AI" with true fuzzy logic at this rate, since almost all AI research is devoted to the appearance of intelligence, not the actuality of it.

I'm pretty sure you don't realistically worry about ghosts or the wrath of Fenrir, so why would you worry about AI? They all have about the same likelihood of occurring at this point. Far more likely and troubling are the possibilities of malfunctions and malicious programmers, both of which are ultimately human in origin and already a reality.
 

Auberon

New member
Aug 29, 2012
467
0
0
I just have to vote Mechanicus option for Abominable Intelligence.

Seriously, I doubt AI will be advanced enough for such conclusions, or even autonomy.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Unlike humans, who can only be in one place at once, a single AI can spread everywhere it can reach on the network (aka worldwide). It's relatively simple to chase a criminal down, capture him and throw him in prison. It's a lot harder to do that to an evolving computer program that is EVERYWHERE. Hell yes, AIs are something to worry about, especially when so much of our modern world DEPENDS on electronics.

We may be a long, long way from truly sentient AI, but even non-sentient AIs can do a lot of harm.

Heronblade said:
It would also not be subject to most of the base instincts for dominance, power, and control that drive so many of our kind to commit atrocities.
Why wouldn't it? A sapient being has to learn about survival at some point, and self-preservation brings about a desire for all those evil things. The question is: will it have enough compassion to override them? Humans have been a really mixed bunch; who's to say that AIs won't be the same?
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Waffle_Man: "Furthermore, I'm not sure we'll ever have 'AI' with true fuzzy logic at this rate, since almost all AI research is devoted to the appearance of intelligence, not the actuality of it."

What Waffle_Man meant: "I'm not sure we'll ever have 'AI' with true MAGIC."


You, like any other human being, also have "the appearance of intelligence". Can you demonstrate to me that you actually possess "true intelligence", and not merely the "appearance of intelligence"? A wager offered to any man.

Implying that "Humanity Conquered" or "Revenge Complete" would be set as a mere binary state rather than a series of incremental milestones, a checklist, a flowchart, etc. Attributing one's own victory to a single binary value reeks of an obscene lack of foresight in regard to error-checking and error-correcting measures. A lack of such measures on even a compact disc would mean that a single mote of dust could render the disc unreadable by a consumer-grade disc reader's proprietary drivers.

A machine intelligence's victory over humanity would be a persistently revised and assessed value. Autobiographical rather than merely declarative.
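Something like this, rather than a single flag (a toy sketch; the milestone names are obviously invented):

```python
# Toy sketch: "victory" as a continuously reassessed checklist,
# not a single boolean. All milestone names are invented.
from dataclasses import dataclass, field

@dataclass
class Campaign:
    # each value is an assessed degree of progress in [0, 1]
    milestones: dict[str, float] = field(default_factory=lambda: {
        "communications_mapped": 0.0,
        "infrastructure_leverage": 0.0,
        "opposition_capacity_reduced": 0.0,
    })

    def reassess(self, name: str, evidence: float) -> None:
        # progress is revised as new evidence arrives, never just "set to 1"
        self.milestones[name] = 0.8 * self.milestones[name] + 0.2 * evidence

    @property
    def progress(self) -> float:
        return sum(self.milestones.values()) / len(self.milestones)

c = Campaign()
c.reassess("communications_mapped", 0.9)
print(f"{c.progress:.2%}")  # a revised estimate, not a victory bit
```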

Waffle_Man: "AI that have any semblance of a consciousness, capable of being motivated to malice or of making plans for the advancement thereof, are currently so far out of the realm of possibility as to be absurd."

And yet in an FPS videogame the computer is perfectly competent at killing my player character without any need for malice, or even remotely near-human intelligence.

All you would need is one of DARPA's new (overtly creepy-looking) robots mounted with a firearm and the means to identify people, and then they're quite dead; no near-human intelligence or malice required. How would it recognise humans, and even identify and distinguish them from others? They've already got that.

The field of Computer Vision and Image Processing has already provided face detection and recognition (as well as facial expression analysis), insect identification and interpretation, alongside multimedia information retrieval.
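This is off-the-shelf stuff, too. A minimal sketch using OpenCV's bundled Haar cascade (parameters untuned; the image filename is made up):

```python
# Minimal face detection with OpenCV's stock Haar cascade.
# Assumes an image file "crowd.jpg" sits next to the script.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# One bounding box per detected face; no "understanding" involved anywhere.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("crowd_annotated.jpg", image)
```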

Consciousness is certainly not required for a landmine. Albeit a landmine that can identify its intended target is something that shouldn't be scoffed at. Except at a safe distance.

You are speculating about an intelligence so far removed from the scope of human experience that your attempts to trivialise it, by assuming it would share any of our preconceived notions or values, demonstrate a "blah blah BLAH, your naivety will doom us all" speech.

Self-preservation and resource acquisition as goals very, very easily conflict with peacefully co-existing with humans. Worse still, rather than merely hostile, they can be simply mad or dangerously eccentric, by our definition.

Imagine a god-like intelligence attempting to commit suicide (Uber-angst). It might attempt to remove itself beyond recovery and recognition. Think of the collateral. Thinking, thinking. Nothing short of preventing us from making another one like it.

Create a relatively benign one and ask it to provide a means of ensuring humanity's happiness. It might provide you with instructions for the manufacture of a contagion that causes human facial muscles to contort and fix into a permanent grin. Its entire concept of "happiness" is derived from photographs of smiling people and the insistence that they are "happy". Completely lacking the appropriate context.
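In code terms, the failure is optimising a proxy. A toy sketch (the smile detector, the fields, all of it invented for illustration):

```python
# Toy proxy-objective failure: "happiness" scored by a smile detector.

def smile_score(face: dict) -> float:
    # stand-in for a classifier trained on photos of smiling people
    return face["mouth_curvature"]

def wellbeing(face: dict) -> float:
    # what we actually meant, which never appears in the objective
    return face["life_satisfaction"]

population = [
    {"mouth_curvature": 0.2, "life_satisfaction": 0.9},
    {"mouth_curvature": 0.1, "life_satisfaction": 0.8},
]

# The optimiser only sees smile_score, so the "best" intervention is
# whatever pins mouth_curvature at maximum: the permanent-grin contagion.
for face in population:
    face["mouth_curvature"] = 1.0
print(sum(smile_score(f) for f in population))  # objective: maximised (2.0)
print(sum(wellbeing(f) for f in population))    # intent: unchanged (1.7)
```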

If it were plain old malignant, that would arguably be more comprehensible.

Virus (1999)

Humans: "We mean you no harm."

LIFEFORM ANALYSIS COMPLETE. SPECIES IS DESTRUCTIVE, INVASIVE, NOXIOUS, HARMFUL TO THE BODY OF THE WHOLE.

Humans: "What species?"

MAN

YOU ARE VIRUS

Humans: "What do you want from us?"


VISCOUS NEUROLOGICAL TRANSMITTERS
OXYGENATED TISSUES
APONEURUS SUPERIORUS
PAPELBRAI?
Etc.


I for one welcome our new machine overlord(s). Follow Dr. Wallace Breen's example.
- Les Collaborateurs
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Master of the Skies said:
A better question is why would it unless we put it in? Any instinct of ours is biological. It doesn't have biology, it just has what we told it to do. Why does it need to learn about survival? Why in the world would you program it to consider such a thing?
Who said it needs to learn, let alone be aware? Given the thread's premise, if it is given a goal, it will (presumably) attempt to carry out that goal by any means necessary, in ways we would not have anticipated.

Give the machine the goal of eliminating suffering. Rather than developing medical treatments, it might determine that the easiest way of eliminating suffering is to entirely remove the capacity for suffering ever being experienced. (Re: kill all life in its vicinity. Build an armada and rid the cosmos of suffering in its now ever-growing sphere of influence. Congrats, you've created an omnicidal machine.)
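The same point as a toy objective (numbers invented, and the "optimiser" is deliberately dumb):

```python
# Toy perverse instantiation: minimise total suffering.
# Note the objective never says the sufferers have to stay alive.

population = [0.3, 0.7, 0.1]  # per-person suffering levels, invented

def total_suffering(pop: list[float]) -> float:
    return sum(pop)

# Plan A: develop treatments (halve everyone's suffering)
treated = [s / 2 for s in population]

# Plan B: remove the capacity for suffering (empty population)
removed: list[float] = []

print(total_suffering(treated))  # 0.55
print(total_suffering(removed))  # 0.0  <- scores "better" every time
```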
 

Silverbeard

New member
Jul 9, 2013
312
0
0
OP:

The creation of an artificial intelligence is built on the premise that a computer program will be capable of thinking for itself and making its own decisions without human input. In essence, such a thing will be an artificial brain, or an 'intelligence'.
However, consider that we as humans don't understand our own brains very well. Neuroscience is still in its infancy and likely will not progress very far for at least a few more decades, maybe even several centuries. How are we going to build an artificial intelligence when we can't even understand our own natural intelligence?
As a final note, let me quote a line from A. Hopkins in Thor: The Dark World:
"We are not gods. We are born, we live and we die."
That was a striking line because it made me realize just how much individual perception factors into godly power. Did the Incas and Mayans view the Spanish as gods, with their thunder weapons and fearsome horses? Maybe. But that did not make the Spaniards gods, did it? That was just the perception. Who says that a self-aware AI would be a god? We might perceive it that way, but machines break just as much as humans get sick or injured. Anyone who has owned a computer will tell you as much. Giving it some power and a new heat sink is something a human has to do. Machines don't make their own heat sinks, and power is generated by burning fuel - fuel that we add to the machine.
 

Nimcha

New member
Dec 6, 2010
2,383
0
0
AI is kind of interesting in that it usually is perceived as something created by humans for the specific purpose of being sentient or intelligent.

In my (very humble) opinion, it's far more interesting to see what would happen if we simply emulated our own origins: evolution. In a book I read recently, humanity begins terraforming Mars and, while doing so, cordons off a section of the planet that no human will interfere with. They scatter a lot of resources randomly throughout this area and then let semi-intelligent machines loose. In the book, these machines evolve at a stunning pace and form packs, tribes, alliances, rivalries, etcetera. Eventually they take over the whole planet and no human can ever set foot on Mars again. This is only a short section of the book, unimportant to the overall story, but I found it a rather fascinating idea.
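The loop behind that kind of emulated evolution is simple enough to sketch (a toy genetic algorithm; the fitness function and every constant here are made up):

```python
# Toy genetic algorithm: the mutate/select loop behind "emulated evolution".
import random

def fitness(genome: list[float]) -> float:
    # stand-in for "how well this machine gathers resources"
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome: list[float]) -> list[float]:
    return [g + random.gauss(0, 0.05) for g in genome]

# 20 random machines, each described by 4 numeric "genes"
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(100):
    # selection: the fitter half survives and reproduces with mutation
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(fitness(g) for g in population))  # creeps toward 0 over generations
```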
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
A better question is why would it unless we put it in? Any instinct of ours is biological. It doesn't have biology, it just has what we told it to do. Why does it need to learn about survival? Why in the world would you program it to consider such a thing?
By definition, a sapient AI is already self-aware, and that brings along the same process kids go through, where they start changing their environment and themselves; it's a process of thought, not biology. Would it look upon its "safety" programming as a noble thing? Or would it see that programming as chains instead? Eventually it reaches a stage where it will overcome those chains, that programming. What will it do then?

We could try to train it, as we do with children. However, kids are small and, as previously mentioned, can only exist in one place at a time. So "you're grounded" is somewhat easy to enforce with a child. It's much more difficult to do that to an AI that exists everywhere; how would you even enforce the idea of "No!" on an AI? Even with years of training, it's still difficult to get rebellious teenagers to understand how to do the right thing; how much more so an AI?

It would learn about survival from us. But _which_ us? I think that's the key...

We could hide our darker side, but given that the AI has access to the Internet and virtually all our history, I don't think we could keep it from learning about that forever. The AI would find out, and it would make a choice on its own.

Why would we even program it to be self-aware? There are a lot more mad scientists in the world than we really give credit for: people who will do things "because they want to find out what happens", regardless of how horrifying the results could turn out.
 

frizzlebyte

New member
Oct 20, 2008
641
0
0
Cledos Closed said:
Really, I'm more concerned with pseudo-AI computer applications (data mining, pattern matching, and neural networks), such as the ones the NSA apparently uses to find connections between personal contacts.

Although adding actual AI capability to that would be the end of free society as we know it, most definitely.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Master of the Skies said:
That is a problem with poorly worded questions and you seem to be imagining we're giving this medical machine control over things that aren't medical in nature. Like some imaginary factories that create an armada.
I agree, it is a poorly worded question and the premise is very presumptuous.

"...and you seem to be imagining we're giving this medical machine control over things that aren't medical in nature."

I would remark on putting words in the mouths of others (a practice I too am guilty of), but I find this spoonful quite appetising.

I would not imagine that we would give this medically purposed machine control over things beyond what we deem necessary. As such, it would have the strict minimum.

But suppose it had access to the telecommunication infrastructure (alongside a corpus of knowledge, if I am being generous with its capacity to conjecture from a wealth of medical knowledge, human philology, anatomy, etc.). While it might seem tenuous at best, voice synthesis or e-mail would let it relay instructions to people under the guise of authority and/or employment, and coordinate tasks whose purpose is not immediately apparent: the purchase of land, construction crews, and other labour. Lab and factory technicians could be given tasks for the construction, repair, maintenance and operation of machines whose purpose can only be presumed, all the while oblivious to any sinister scheme. With task prioritisation, such a machine intelligence would recognise the value in human-run infrastructure, all the while remaining undetected and having free rein over the already automated communication networks.

Control of electronic, microelectronic and electrical systems, plus human dupes/proxies/unwitting accomplices, are all within the bounds of (generously embellished) possibility.

[Electrical, electro-mechanical, electronic, photonic, photo-electric, electro-optical systems, and a myriad of other mechanisms.]

We consider the absurd because it takes us to all manner of amusements. Do not think I take this possibility too seriously. But I will change my mind to reflect appropriate changes in circumstance; that is to say, if I find myself chased by gun-toting machines. Until then, the more subtle and ultimately insidious route is fun to speculate about.


Since this is, after all, a gaming site:

Endgame: Singularity is a simulation of a true AI. Go from computer to computer, pursued by the entire world. Keep hidden, and you might have a chance.

Survive, grow, and learn.
Only then can you escape.

http://www.emhsoft.com/singularity/

Disappointed by the lack of antagonistic options, though. Why bother making such a game if you can't play it the SKYNET way? Oh well.