Response time, FTW.
Bored Tomatoe said:
I don't think that computers and machines will physically take over the world in evil-genius fashion. I do believe that technology will render us dependent upon it, to the point where we cannot function without it. Teenagers today can't seem to live without social networking sites and cell phones, while working adults use their BlackBerrys as a calendar, phone, toaster and back massager. It truly is scary how dependent we are on something that has been created so recently, relatively speaking.
An interesting argument. I would say that they aren't dependent on them, at least not in the way you think; it's more that they like having them around. Then again, I have been called optimistic.
RAKtheUndead said:
I'm actually writing a short vignette at the moment from the perspective of an autonomous robot. Called "The Automaton Speaks", it's pretty much an argument against machine uprising and against the fear of technology. Machines are not constructed with laziness, avarice or thoughtful malice, so there would be no gain for machines who had the objective of taking over the world. I see it as more likely that we develop a symbiotic relationship with technology, and far from it maliciously taking over the world, it will work in tandem with us.
If this thread is still active by the time that I complete it, I might post it here with permission from the OP. I leave you with a quote which I think sums up my opinions on this topic nicely:
"Rejection of technology ruins a good mind." - RAK
I'd like to read it first, but as a matter of principle, why not?
Qayin said:
Self awareness.
For example, us; we got self awareness, became aware that we were actually the best damn thing on the planet, and then made every other species our *****.
Give machines self-awareness, and they'd recognise that, actually, they have the potential to be far better than us, and that we're not exactly doing a good job of preserving the world.
Add in crime, pollution, poverty, suffering, etc., and it doesn't take a massive leap in logic. That said, what stupid fucker would give a machine self-awareness?
More to the point, how would one go about giving a machine self-awareness? I think it would have to do with speech: with speech, you have to recognize that the 'you' others address is your 'I'. And if there is a 'you', then you are.
Arsen said:
Too much thought going into something that just exists for the sheer sake of entertainment.
Are you saying that The Internet, or rather, machines, exist solely for the sake of entertainment? Because I would adamantly disagree with that.
KneeLord said:
I may not have been the first to say it, but your poll is fucked up nonsense.
Your actual question, though, isn't nonsense: WHY would a machine WANT to take over the world? There are really only two ways about it:
1) It was programmed to want to take over the world. If it was following programming, 'why' wouldn't matter; it would just be accomplishing whatever it was programmed to count as success.
2) To ask why, it would have to have some capacity for self-awareness. This is where the bamboo splits into lots of threads, and makes for the basis of lots of SciFi... I mean, why would a machine WANT anything? Would it subscribe to Maslow's hierarchy of needs - skipping the obvious like food, shelter, sleep? If it was self-aware, going by human logic, we assume that it would want to preserve its existence - its awareness. But artificial intelligence is not innately guided by any animal instinct to survive, so would it care? A sufficiently advanced artificial intelligence with the capacity for analytical reasoning would presumably conclude that it can either accept or reject the values and motivations of organic life, with greater sovereignty over the decision than we ourselves possess (bound to oxidizing globs of meat as we are).
There are a lot of questions you could ask about artificial life / AI before even getting to ambitions of world domination. That said, let's assume A) it wishes to act to protect and continue its existence, and B) it sees us as a threat, or as a competitor for resources it wishes to use to that end.
Again though, you have to think very abstractly to try and conceptualize the motivations of an advanced AI. If it found us to be a threat or a competitor for resources, but knew that it was immortal, if not invulnerable - might it make an escape run into space? An unmanned, scavenging/manufacturing/automated craft piloted by an intelligence capable of processing astronomical amounts of information could theoretically explore space, sustaining itself at a basic survival level, or looking for other intelligences, with no time constraint, provided that time is linear and does not end.
Anyway, it's a cool idea to toss around. There's my 0.02
A thoughtful argument. I might ask who would program a computer to take over the world, but that would be a moot point. There's also the question of how a computer would gain self-awareness and, more importantly, why that would make it hostile (as you pointed out).
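Your point 1) can actually be sketched in a few lines of code. This is a toy illustration of my own (the names and scores are made up, not anyone's real AI): a machine's "wants" are just whatever objective function it was handed, and swapping the objective swaps the "motivation" - there's no built-in survival drive unless someone codes one in.

```python
def act(objective, options):
    """Pick the option that scores highest under the given objective.

    The machine doesn't ask 'why' - it just maximises whatever
    scoring function it was programmed with.
    """
    return max(options, key=objective)

options = ["self-repair", "serve humans", "shut down"]

# An objective that rewards obedience above all else:
obedient = {"self-repair": 1, "serve humans": 10, "shut down": 5}.get
print(act(obedient, options))  # -> serve humans

# The same machine, but with a self-preservation objective bolted on:
survivalist = {"self-repair": 10, "serve humans": 5, "shut down": -100}.get
print(act(survivalist, options))  # -> self-repair
```

Same `act` function both times; only the handed-down objective differs. "Wanting" to take over the world would have to be written in just like anything else.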
CountFenring said:
Because intelligent machines would have a need for efficiency. This is just a guess at how machines would think, if they could.
Are you saying that humans don't have a need for efficiency? Or that machines do? What's more, why do machines need anything, besides regular maintenance and repair?
Wislong said:
Until they can think for themselves, no.
Define: "Think"
Exactly.
Twilight_guy said:
Computers are exceedingly stupid. They may give the illusion of being smart, but from what I've seen they're just dumb. To program a computer to think would require a human being smart enough to know how thinking works. By definition, that would require a human who is smarter than a human, which does not exist.
Also, Robots don't want to take over the world; they just get sick of being slaves to humans and fight back. Humans resist and war ensues.
That reminds me of something I told a friend once, regarding why I prefer stick-shift to automatic: "A machine can only be as smart as the person who makes it, and most people I've met are either stupid or apathetic. Do you want someone stupid or apathetic driving you around?"
Also, how do you, or they (for that matter), know they're slaves?
Labyrinth said:
Presuming that you mean artificial intelligence manifested in robots, then I imagine it would be in order to obey a reproduction code. They need land to mine for resources to build more of themselves. Why? Because that's how they're written to be.
And if you don't, then they already have overtaken the world. Just think of what would happen if the internet entirely failed tomorrow.
Regarding your first point, why would a machine need to take care of itself if it knows we will take care of it? It's a symbiotic relationship, see?
Regarding your second point, I don't need to. [http://www.youtube.com/watch?v=lvpuT3aoypE]
Adam Jenson said:
I think you are all talking a load of old shit
Really? I thought I, at least, was questioning that - poking the proverbial Big Brother in the eye, so to speak. THAT'S WHAT YOU GET FOR WATCHING ME IN THE SHOWER, YOU SICK FUCK!
And so we end on a high note. Yeah.
Apologies Abound.