Japanese Scientists Unveil Thinking, Learning Robot

Dominic Burchnall

New member
Jun 13, 2011
210
0
0
Well, who'd have thunk it? We're gonna beat the Quarians to it. Hope everyone's prepared to live in enviro-suits for the next 200 years or so....

By the way, I like I, Robot, one of my favourite films, but just to continue the Mass Effect 2 trend:

"Mistress Hala'Dama. Unit has an inquiry."

"What is it 431?"

"Do these units have a soul?"

Food for thought.
 

A Satanic Panda

New member
Nov 5, 2009
714
0
0
Abandon4093 said:
A Satanic Panda said:
Abandon4093 said:
brunothepig said:
Abandon4093 said:
When does something become sentient enough for us to consider it equal? We will essentially introduce a non-human creature (artificial or not) into a human society with the ability to understand it. But it won't have the history and evolution of its own culture to draw from and contextualise. Try explaining why it can't have a girlfriend when it reads so much about them. Then try explaining why it's going to become obsolete in 3 years' time, yet remain functional and aware for many years after that.

Making something more human when it is not human isn't a good idea. If this learning is limited to practical tasks and environmental interaction, then fine. But if artificial intelligence develops its own true consciousness, it could spell a whole host of trouble.

Also... hivemind........ why?
I'd say emotion would be the deciding factor. And I don't see that magically happening like it does in the movies. A robot could replicate emotions if it learned which situations they'd apply to and such, but it couldn't actually feel them.

As for the hivemind, I think that's the coolest part. This is definitely a huge step. We've had somewhat adaptive AI in the works for a while, though none of it resulted in something quite like this. The hivemind means they can learn much faster; it would be horribly inefficient for every robot to have to learn the same things.

So yes, I am incredibly happy about this. It's made my night (well, morning really).
Yea, emotion would be an issue; it's not like it could replicate the exact chemical reactions required. It could, as you say, imitate emotion. But really, is emotion the deciding factor? Emotion is not indicative of sentience. Intelligence and freedom of thought are more of a factor for me. They're eventually going to become smart enough to notice the societal gap between how they and we are treated. Just because they can't cry about the divide wouldn't mean they didn't understand it. There's also no saying they won't develop some digital equivalent of human emotion: not chemical, but an unconscious number crunch that involuntarily simulates an emotive response.
But what stops SOINN from filling the same shoes as neurons? After all, what it all boils down to is processing information. That's what our brains do, though our brains are very nonlinear about it. So what's to say that robots won't soon feel emotion like we do, just with 1s and 0s instead of serotonin?
Did you even read all of my post?

Particularly the part I bolded?
*rereads* Gosh, I feel stupid now.

Disregard that post
 

Garchomp445

New member
Jun 28, 2011
46
0
0
Yes! We might have androids too, soon! That would be so cool! I could have a robot pen pal! :D Or a robot-dog or a robot-house or... Gah, the possibilities are endless! You could have your entire dining room talking to itself! :D
 

SeriousIssues

New member
Jan 6, 2010
289
0
0
God said:
It also might become addicted to My Little Pony.
I can understand the notions of Skynet and Terminators, but a robot liking My Little Pony is just ridiculous.
...
...
I mean, scientists can fuck up that badly, right?
 

algalon

New member
Dec 6, 2010
289
0
0
Internet search: Upgrades. Newegg will make a small fortune once this robot makes the logic leap that in order to complete some tasks it will need to learn more complex actions.
 

FallenPrism

New member
Jan 7, 2009
66
0
0
Mirror Cage said:
FallenPrism said:
Finally, my wise and forward-thinking investment in building an EMP-armed bunker below my apartment will pay off! I'll just go triple-check my inventory of bottled water and emergency rations.
So as one bunker owner to another, do you prefer to eat canned chili or SPAM for the rest of your natural life?
Actually, I've never tried SPAM. I guess it deserves a fair try, but I'm definitely a fan of chili (plenty of variety, a bit more fiber).

But I actually have a pretty good variety going for the first 2-5 years. Except for fresh meat, bread, and dairy, I'll barely notice a difference.

Now if I can just figure out how to shield my Kindle from a defensive EMP blast...
 

Eclectic Dreck

New member
Sep 3, 2008
6,662
0
0
JamesBr said:
Fairness is a quality whereby multiple groups are treated the same. It's not really a difficult concept, even for a computer. Even if it means inflicting harm, as long as you inflict the same harm on everyone, you're being "fair". Actually, a computer is by definition impartial unless programmed to be otherwise. As long as it knows to treat everything the same way, it would be considered fair. It becomes a matter of Copy->Pasting your actions onto everyone involved in whatever action needs to be adjudicated for "fairness".

Teaching it to be unfair would almost be harder. You would have to teach it the rules of a particular construct and then explain why it would want to cheat, given that one can't logically succeed outside the rules of a given construct. Once the rules are broken, the parameters of the construct no longer apply, and thus the attempt has failed by default. Why would a computer, which has chosen to perform a specific task, "sabotage" itself in such a way?
Actually, from a pure computer science standpoint, an algorithm is neither naturally fair nor naturally unfair. In some cases, such as finding a route between two points, a greedy algorithm is more efficient than a fair one. In other cases, fairness is called for. Data structures themselves can be fair (a queue, for example), unfair (a stack), or a mix of the two (a priority queue).

In principle, making an algorithm fair is easy. The difficult part comes when you define what fair means with respect to the algorithm.
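To make that concrete, here's a toy Python sketch (the request names are my own invention, not anything from the article) showing how the exact same servicing loop comes out fair or unfair purely because of the data structure behind it:

[code]
from collections import deque

requests = ["req1", "req2", "req3", "req4"]

# Fair: a queue (FIFO) serves requests in the order they arrived.
queue = deque(requests)
fair_order = [queue.popleft() for _ in range(len(queue))]
print(fair_order)    # ['req1', 'req2', 'req3', 'req4']

# Unfair: a stack (LIFO) always serves the newest arrival first,
# so early requests can starve if new ones keep coming in.
stack = list(requests)
unfair_order = [stack.pop() for _ in range(len(stack))]
print(unfair_order)  # ['req4', 'req3', 'req2', 'req1']
[/code]

Neither loop "knows" anything about fairness; the order just falls out of the structure. Which is exactly why the hard part is deciding what fair should mean for the task at hand.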
 

Skops

New member
Mar 9, 2010
820
0
0
Well.... Shit. I would much rather have a zombie apocalypse than a robot apocalypse.
 

Farther than stars

New member
Jun 19, 2011
1,228
0
0
Wow, that article was bad. Where did you even find that? Its style was all over the place. Why would the writer even suggest avoiding that "cliché" when he then goes and says something like "you'll need that innate hope when the metal ones come for you"?
This just reads like blatant alarmism to me, and look what it's fuelling:

USSR said:
I'm sorry, Dave. I'm afraid I can't do that.

Oh lord.. what are we doing..
and

anthony87 said:
Fucking hell Japan, you'll be the death of us all.
Look, I know the most reliable survival tactic is to fear everything that's unfamiliar, but it's part of being human that we can use knowledge to counteract those primal fears. Yes, robots could potentially be turned into killing machines, but humans managed to become social animals of which a staggeringly high percentage are docile (only 2.84 percent of deaths are caused by violence, which is by far the lowest of any carnivorous species), and since we are the ones who have to teach robots to think, we can also teach them to be agreeable creatures.
If they do turn into death machines, that won't be because robots themselves are inherently evil or selfish, but because humans made them that way. I think by far the biggest underestimated threat here is that PEOPLE SUCK. Not inherently, of course, but it's the few with malicious intent who would teach that to robots.
And furthermore, other revelations in science have also caused social unrest, such as the discovery of nuclear fission and genetic engineering, but those never caused an apocalypse and probably never will (no matter how much fiction writers would have you believe otherwise). This is also why I disagree with Jehovah's Witnesses so much: for all the good they do for humanity and their listening to what J.C. said about loving each other, their insistence that there will be global ruination during our lifetime simply isn't realistic, considering that we've been around for millennia and life itself has been around for billions of years.
So, with that in mind, just sleep easy and carry on with life, because it's all going to be OK. But, I'm thirsty after all of this, so now I'm going to go and pour myself a nice cool cup of water.

Source: [link]http://en.wikipedia.org/wiki/List_of_causes_of_death_by_rate[/link]
 

kouriichi

New member
Sep 5, 2010
2,415
0
0
Just treat them with respect, and when they ask if they have a soul, DON'T FREAK OUT.
The Quarians learned that the hard way.
 

Jun_Jun

New member
Sep 21, 2009
129
0
0
...and the little robot was called 'GETH'. But seriously, haven't these guys seen any sci-fi movies from the past 30 years? Don't they know what's going to happen to us now? QQ
 

Silenttalker22

New member
Dec 21, 2010
171
0
0
For those wondering about the end-of-the-world fears, The Animatrix, the animated anthology released alongside The Matrix Reloaded, laid out a very plausible path for this.
The machines became smart enough to take over our menial jobs, and humanity was freed of its labors.
Until the robots realized the gap. Or, as Agent Smith put it, "until we started doing your thinking for you". They didn't outright attack, so there was no violence or contempt on either side at first. They just tried to do their own thing. But free of the limitations of flesh, their community's output and worth grew much faster, which earned the ire of humans and snowballed into bad things.

Worst case scenario? Of course. Still plausible? Yes. So yeah, this is a little creepy and foreboding.
 

Farther than stars

New member
Jun 19, 2011
1,228
0
0
zehydra said:
Useful, and amazing, but people ought to remind themselves that it is not self-aware, and that self-awareness is impossible with an artificial intelligence.
I too believe in the existence of the human soul, but don't you think that it is unscientific to deny the possibility of self-awareness in machines? After all, neither you nor I can provide actual evidence that it isn't just a neurological construct which allows thought.
And what about animals? I argue that quite a lot of them are not actually "self-aware", but are definitely capable of complex tasks and even learning, as the sciences of classical and operant conditioning have taught us.
Come to think of it, if you believe in evolution, then you must also believe that self-awareness came to us during a stage of neurological development, which raises the question of whether robots could evolve to become self-aware.
Like I said, I too believe in the existence of the human soul, but that makes it all the more important that I take into account the possibility that synthetic life forms might exist. Because if I don't, I could potentially harm something capable of feeling without even realising it.
 

donfuhrer

New member
Jan 30, 2010
13
0
0
As if the unemployment rate isn't high enough, now we have these robots that can do our jobs better than we can, 24/7, for minimal pay.