What Will Kill You First, Cancer or Robots? Probably Robots

The Wooster

King Snap
Jul 15, 2008
15,305
0
0


Cambridge thinkers propose more research into "extinction-level" risks to mankind.

What are you supposed to be doing right now? The washing up? I wouldn't bother, mate. We're all doomed anyway.

At least, that's what popular culture tells us. Even if the environment doesn't turn against us with fiery abandon, there are always meteors, robot uprisings, nanomachine swarms, sentient viruses and so on. The only thing left to do is take bets on which terrifying cataclysm will kill us first, and that's pretty much what a new initiative proposed by a gaggle of boffins at Cambridge University would do.

The Centre for the Study of Existential Risk (CSER) would analyze risks to the future of mankind, particularly those we could be directly responsible for. The Centre, proposed by a philosopher, a scientist and a software engineer, would gather experts from policy, law, risk assessment and scientific fields to investigate and grade potential threats. According to the Daily Mail, the proposal is backed by Lord Rees, who holds the rather grand-sounding post of Astronomer Royal. [http://en.wikipedia.org/wiki/Martin_Rees,_Baron_Rees_of_Ludlow]

Judging by comments from philosopher and co-founder Huw Price, the potential threat of artificial intelligence seems to be pretty high on the centre's agenda.

The problem, as Price sees it, is that when an artificial general intelligence (AGI) becomes smart enough to write its own computer programs and create adorable little AGI babies (applets?), mankind could be looking at a potential competitor.
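
If "programs that write their own programs" sounds like pure fantasy, the tamest possible example has existed for decades: the quine, a program whose only output is its own source code. A minimal Python sketch, purely illustrative and nothing to do with the Cambridge proposal itself:

    # A quine: the two lines below reproduce themselves exactly when run.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Reproducing itself verbatim is, of course, a very long way from rewriting itself into something smarter, which is rather Price's point.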

"Think how it might be to compete for resources with the dominant species," says Price. "Take gorillas for example - the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival."

"Nature didn't anticipate us, and we in our turn shouldn't take artificial general intelligence (AGI) for granted."

Price cited software engineer and Skype co-founder Jaan Tallinn, who once said he sometimes feels he's more likely to die from an AI accident than from something as mundane as cancer or heart disease. Tallinn has spent the past few years campaigning for more serious discussion of the ethical and safety aspects of AI development.

"We need to take seriously the possibility that there might be a 'Pandora's box' moment with AGI that, if missed, could be disastrous," writes Price. "With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."

Source: The Register [http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/]


 

Fappy

\[T]/
Jan 4, 2010
12,010
0
41
Country
United States
My stepdad and I were actually talking about this just last week, and he believes we'll likely fully integrate with such technology before it gets to a point where it would try to turn on us. I'd prefer a cyborg future to a humans-vs-robots one any day.
 

Hero in a half shell

It's not easy being green
Dec 30, 2009
4,286
0
0
So we could end up living out the last years of our lives in a human conservation zone, stocked by the computers with the optimum level and variety of stimulus to preserve the endangered humans.
I'd be up for that: being waited on hand and foot by machines. Maybe not the way we expected it to happen, but it still counts.
 

Bobic

New member
Nov 10, 2009
1,532
0
0
Sounds like someone's been reading this little number.

http://www.amazon.co.uk/Everything-Going-Kill-Everybody-Terrifyingly/dp/0307464342/ref=sr_1_1?ie=UTF8&qid=1353962061&sr=8-1

On a side note, I'm gonna go do a Facebook search for John Connor and start making some new friends.
 

gigastar

Insert one-liner here.
Sep 13, 2010
4,419
0
0
Seems to me that if we create a superior race, organic or otherwise, that proceeds to wipe out humanity, it's just natural selection ***** slapping us for being so bloody stupid and made of feeble meat.
 

Scow2

New member
Aug 3, 2009
801
0
0
Hero in a half shell said:
So we could end up living out the last years of our lives in a human conservation zone, stocked by the computers with the optimum level and variety of stimulus to preserve the endangered humans.
I'd be up for that: being waited on hand and foot by machines. Maybe not the way we expected it to happen, but it still counts.
They tried that. But some humans escaped and tried to make life hell for everyone else enjoying the pseudo-paradise instead of just plugging themselves back into the Matrix.
 

Vegosiux

New member
May 18, 2011
4,381
0
0
Well, we're all doomed in a few billion years when our sun leaves the main sequence and goes red giant.

DVS BSTrD said:
If robots could develop the ability to love, I wonder if any of them would be biocurious?
Bring out Proposition Infinity again!

In more seriousness though, yes, technology is something we need to be careful with, so that nothing goes horribly wrong (or horribly right, which is usually even worse).

Back to jokes: does going into cardiac arrest when a game glitches on you half a second before you'd have won count as an "AI accident"?
 

Falterfire

New member
Jul 9, 2012
810
0
0
This baffles me. Speaking as a programmer, I can say that programs tend to do only what you tell them to. I don't know a ton about machine learning, but I'm reasonably certain that it would be impossible to accidentally program a humanity-destroying AI.
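
Case in point: even when a program "surprises" you, it's only doing exactly what the code says. A toy sketch of what I mean (made-up example, not real machine learning):

    def score(route):
        # The objective exactly as written: one point per checkpoint visited.
        return len(route)

    # What the author meant: visit A, B and C once each, then exit.
    honest = ["A", "B", "C", "exit"]

    # What the code permits: nothing in score() forbids revisiting
    # the same checkpoint a thousand times.
    exploit = ["A"] * 1000

    print(score(exploit) > score(honest))  # True: the literal spec "wins"

The program isn't being malicious; it's being exactly as literal as we told it to be.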
 

Falterfire

New member
Jul 9, 2012
810
0
0
Kwil said:
Oh I dunno. Look at auto-correct. Apply to international diplomats. Imagine what happens when the answer to "Have you informed your guys where we are on the deal?" goes from "Got to tell new kid on the way" to "Go to hell. Nuke is on the way."
There's a difference between braindead speech correction and an intentionally malicious, self-aware AI. Autocorrect may end up causing Armageddon, but not through any intentional malice on the part of the program.
 

Formica Archonis

Anonymous Source
Nov 13, 2009
2,312
0
0
Let's see, evidence of each one trying to kill me....

Cancer: Cancer already tried once.
Robots: A coder watched the Terminator movies.

I'm guessing cancer.
 

Baresark

New member
Dec 19, 2010
3,908
0
0
Eh, I'm not convinced. I mean, they would have to have some understanding of how things evolve, and most software engineers are far too ignorant of that. The human brain has evolved continuously: when it evolved a system for sight, it then started evolving further systems on top of that one, and it's the same for almost all parts of the brain. In order for a machine to eventually have the ability to wipe out humans, the original programmers would have to understand this process. But they don't. Software is always the same: "We made system X to handle task Y. We're done!" This is humanity's biggest weakness in creating true AGI, which would have to be aware of itself to the extent that it knows how to evolve itself.

Also, they are assuming that an AGI would be as clueless as humans when encroaching on foreign environments (AGI and humans would have very different environmental needs, unlike humans and great apes, who share the same ones), or at the very least hostile towards people, which there is no reason to assume it would be.

Truth be told, someone would have to fund these guys, and that sounds like asking for government handouts, to which I would personally say a resounding no.
 

thesilentman

What this
Jun 14, 2012
4,513
0
0
My brain hurts from what this guy's suggesting. Seriously? Reproducing software? Computers writing their own software?! I can't be the only one to call it BS. Call me back when you all successfully teach computers emotions, okay?
 

Evil Smurf

Admin of Catoholics Anonymous
Nov 11, 2011
11,597
0
0
I thought it would be the mobile phones giving us cancer. Maybe the robots are rising up...
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
On December 21st the first sentient computer will boot up. Mark my words.

thesilentman said:
My brain hurts from what this guy's suggesting. Seriously? Reproducing software? Computers writing their own software?! I can't be the only one to call it BS. Call me back when you all successfully teach computers emotions, okay?
Well, we already have computer viruses that rewrite themselves to stay undetectable. That's as much evolution as we've managed to get computers to do. It's scary, really, as one day they may accidentally rewrite themselves to be smart - you know, pretty much like how evolution works. It's just that the amount of information involved is so small that this will probably never happen. One of our body cells stores more information than our largest hard drive, and rewrites itself with every generation.
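
For the curious, here's roughly what "rewrites itself" looks like in a completely harmless toy form (Python, assuming a single-file script): each run, it swaps out the random tag in its own marker comment, so the source file on disk never looks the same twice.

    # Toy self-rewriting script: a harmless stand-in for how polymorphic
    # code varies its on-disk appearance between generations.
    import random
    import string

    MARKER = "# signature: "

    with open(__file__) as f:
        lines = f.readlines()

    # Find the marker comment and replace its tag with a fresh random one.
    for i, line in enumerate(lines):
        if line.startswith(MARKER):
            tag = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
            lines[i] = MARKER + tag + "\n"

    # Write the mutated source back over the original file.
    with open(__file__, "w") as f:
        f.writelines(lines)

    # signature: qwhzkmfa

Shuffling one comment is obviously nowhere near "accidentally rewrites itself to be smart", but it's the same basic trick: a program treating its own source code as data it's allowed to edit.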


Well, our machinery progresses fast and will continue to do so, but I think we will develop implants that speed up our processing (like a superior robotic eye) before we invent true AI, which will lead us to become brains in robot bodies, or even ghosts in the machine, before AI develops. And by then, we may as well consider AI our equal. Except that it will likely be more logical.