Are we heading down the path to a robot apocalypse?

Overlord Moo

New member
Apr 10, 2009
343
0
0
I know what you're thinking, that I've been watching too many Hollywood movies, but still. We've created a brain implant that lets humans become one with a computer. We're creating robots that can think for themselves on the battlefield, and now we've armed them. Hell, even leading scientists have questioned whether we should advance computer AI at the rate we are. So, I ask you, is the end coming soon (or soon-ish)?
 

Selvalros

New member
Apr 2, 2009
44
0
0
Can you provide links to any of these events? Robots were still pretty basic last time I checked, and I am fairly certain that there are no implants for the brain, since we largely have no idea how the brain works.
 

hebdomad

New member
May 21, 2008
243
0
0
Nope. It will just mean that industrialised nations will have a bigger, more effective army than lesser nations.

Nothing will change.
 

NoodleWoman

New member
May 22, 2009
119
0
0
Hardly. Robots are humanity's best chance of preservation. We are going to die out. It's what organisms like us end up doing. With robots and AIs, everything about humanity is stored and lives on, sort of like an active time capsule.
 

Overlord Moo

New member
Apr 10, 2009
343
0
0
Selvalros said:
Can you provide links to any of these events? Robots were still pretty basic last time I checked and I am fairly certain that there are no implants for the brain since we largely have no idea how the brain works.
I saw it on the History Channel, sooo maybe.
 

mikecoulter

Elite Member
Dec 27, 2008
3,389
5
43
No, robots are still unable to program themselves, so as long as they're coded to a) not harm humans and b) not cause damage to certain objects, we'll be fine :) Unless, of course, they find a logical reason to manipulate their own code, in which case they would have the perfect AI. They could change their actions depending on what seems like the most logical choice, which could include defending themselves. Let's wait and see :)
 

Gashad

New member
Apr 8, 2009
108
0
0
Nah, humanity will wipe itself out long before robots get to do it. Seriously though, I would say the risk of a robot uprising is minimal. A robot can only do the things it is programmed to do (it can never be sentient in any real sense), so unless a human is stupid enough to program a robot to destroy humanity, we are safe. Then again, there are some pretty stupid humans out there...
 

massau

New member
Apr 25, 2009
409
0
0
We can make humanoids, but we don't have strong enough batteries, and it would be stupid to make them because they'd be too strong. But they will fail if we use an EMP.
 

bodyklok

New member
Feb 17, 2008
2,936
0
0
As long as we have the SAS, they, will not, win. Of that you can be assured.
*Swishes cape and disappears*
 

Playbahnosh

New member
Dec 12, 2007
606
0
0
mikecoulter said:
No, robots are still unable to program themselves
I beg to differ. There are AI programs that can use and store information, and even make new virtual pathways with deductive and inductive reasoning. One of the basic examples is 20Q. It's fun, cute and terrifying at the same time. 20Q's AI can use the data gathered from players' responses to make new assumptions about things using logic, then strengthen or weaken these assumptions using statistics. It can formulate its own ideas about things, opinions if you will. That thing is programming itself. Granted, still within the confines of the boundaries set by its programmers, but still...
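
Roughly, the mechanism looks something like this (a toy Python sketch of my own, nothing taken from 20Q's actual code, and all the objects, questions and weights are made up):

[code]
# Toy sketch of a 20Q-style learner: it keeps a score for how strongly
# each object/question pair correlates, and updates those scores from play.

# Learned weights per object: +1 leans "yes", -1 leans "no".
weights = {
    "cat": {"is_alive": 1.0, "has_wheels": -1.0},
    "car": {"is_alive": -1.0, "has_wheels": 1.0},
}

def guess(answers):
    # Pick the object whose weights best agree with the player's answers.
    def score(obj):
        return sum(weights[obj].get(q, 0.0) * a for q, a in answers.items())
    return max(weights, key=score)

def learn(true_object, answers, rate=0.1):
    # Strengthen or weaken the stored assumptions using this game's data.
    w = weights.setdefault(true_object, {})
    for q, a in answers.items():
        w[q] = w.get(q, 0.0) + rate * (a - w.get(q, 0.0))

# One game: the player answers yes (+1) / no (-1), then reveals the object.
answers = {"is_alive": 1, "has_wheels": -1}
print(guess(answers))   # -> cat
learn("cat", answers)   # statistics nudge the weights toward the evidence
[/code]

The point is just that the numbers driving the guesses come from play, not from the programmers.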

so as long as they're coded to a) not harm humans and b) not cause damage to certain objects, we'll be fine :) Unless, of course, they find a logical reason to manipulate their own code, in which case they would have the perfect AI. They could change their actions depending on what seems like the most logical choice, which could include defending themselves. Let's wait and see :)
The first thing that popped into my mind is the Three Laws of Robotics, from the works of Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Even if it is sci-fi, these laws have a lot of merit. If we program future robots with at least similar core laws, we will be alright...
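
As a crude illustration (my own toy sketch, not anything resembling a real robot control system), the laws amount to an ordered filter, where each law yields to the ones above it:

[code]
# Toy sketch of Asimov's Three Laws as a strictly ordered filter:
# an action is allowed only if no higher-priority law forbids it.
# All the predicates and action names here are made-up placeholders.

def harms_human(action):
    return action == "attack"

def disobeys_order(action, orders):
    return bool(orders) and action not in orders

def endangers_self(action):
    return action == "self_destruct"

def permitted(action, orders=()):
    # Check the laws in priority order; earlier laws override later ones.
    if harms_human(action):             # First Law
        return False
    if disobeys_order(action, orders):  # Second Law (yields to the First)
        return False
    if endangers_self(action):          # Third Law (yields to both)
        return False
    return True

print(permitted("attack"))                      # False: First Law blocks it
print(permitted("patrol", orders=("patrol",)))  # True: obeys orders, harms nobody
[/code]

The ordering is the whole point: a robot protecting itself can never override an order, and an order can never override human safety.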
 

Gitsnik

New member
May 13, 2008
798
0
0
Playbahnosh said:
mikecoulter said:
No, robots are still unable to program themselves
I beg to differ. There are AI programs that can use and store information, and even make new virtual pathways with deductive and inductive reasoning. One of the basic examples is 20Q. It's fun, cute and terrifying at the same time. 20Q's AI can use the data gathered from players' responses to make new assumptions about things using logic, then strengthen or weaken these assumptions using statistics. It can formulate its own ideas about things, opinions if you will. That thing is programming itself. Granted, still within the confines of the boundaries set by its programmers, but still...
20Q is based on the game of twenty questions. Anyone with broad enough knowledge can solve any input of that game within the 20-question limit - 20Q itself is not AI, nor is it anywhere near close. Nothing we've seen yet has passed the Turing test (the only true measure we have of AI) - my own software was only ever lucid when copying my own journal notes, or for maybe three lines in 90.

Back on topic, I'm quite disturbed by some of the US choices in robot design - especially the robots that have already made it onto the battlefield. Asimov's laws are flawed. Maybe flawed is the wrong word, but they need to be enhanced somehow. (Perfect example: I, Robot: "Save the girl! Save the girl!")

Personally, I think we'll have a zombie apocalypse first - some sort of scientific screw-up that buggers up half the population. But the robot apocalypse is definitely a possibility. And if we achieve true AI, we will not survive it unless they choose to allow it somehow.
 

Iron Mal

New member
Jun 4, 2008
2,749
0
0
Even if all machines worldwide achieve sentience and free will, that doesn't necessarily mean they'll go on a human-slaying rampage; it's one possibility, but by no means the only outcome of this scenario.

Machines may actually be quite civilised for all we know (we should know, we programmed them after all).
 

mikecoulter

Elite Member
Dec 27, 2008
3,389
5
43
bodyklok said:
As long as we have the SAS, they, will not, win. Of that you can be assured.
*Swishes cape and disappears*
The SAS always amazes me. I for one would love to join.

But could they outsmart computers and robots able to calculate hundreds of thousands of possible outcomes and actions per second? They'd have to use a lot of surprise to confuse them.
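
For a sense of what that kind of brute-force calculation looks like (a toy minimax sketch of my own; the game here is made up and has nothing to do with any real military system):

[code]
# Toy sketch of exhaustive lookahead (plain minimax): score a position by
# trying every move several plies ahead, alternating best/worst case.

def minimax(state, depth, maximizing, moves, apply, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(apply(state, m), depth - 1, not maximizing,
                      moves, apply, evaluate) for m in options)
    return max(scores) if maximizing else min(scores)

# Hypothetical toy game: the state is a number, each move adds 1 or doubles
# it, and the maximizing side wants it as large as possible after 3 plies.
best = minimax(
    2, 3, True,
    moves=lambda s: [s + 1, s * 2] if s < 50 else [],
    apply=lambda s, m: m,
    evaluate=lambda s: s,
)
print(best)  # best reachable score after 3 plies (here: 10)
[/code]

A machine grinds through every branch of that tree without getting tired or distracted, which is exactly why surprise, i.e. doing something outside the tree it modelled, is the human's best card.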