Japanese Scientists Unveil Thinking, Learning Robot

EternalFacepalm

New member
Feb 1, 2011
809
0
0
With further development, would they be able to "combine" this with the robot being able to read? 'Cause if so, we're screwed.
 

Mrsoupcup

New member
Jan 13, 2009
3,487
0
0
Gerhardt said:
Oh Japan, you kooky nuts.

Hey, wait a tick... I swear I've seen the robot bartender somewhere else...
http://26.media.tumblr.com/tumblr_lcmim6MNVs1qaqou6o1_500.jpg
Since when does Daft Punk work at a bar?

OT: As long as it has the Three Laws, I don't care.
 

spartan231490

New member
Jan 14, 2010
5,186
0
0
This will not end well. I'm not even talking about an apocalyptic takeover, although that's possible, I guess. I'm talking about the obvious lack of job availability as it is, and the complete lack of any need for something like this. Or the possible war-becomes-a-game scenario, where powerful nations slug it out with automated military hardware, blasting some poor small country to bits as collateral damage.

Overall, I can see no way that this will be of any real benefit to anyone, and I see too many ways it could be a detriment.
 

restoshammyman

New member
Jan 5, 2009
261
0
0
As predicted, this thread is just Skynet jokes.
Can't you guys take something like this seriously?
I mean, what could possibly go wrong with robots thinking for themselves?

~ Johnny 5, leader of the new world order.
 

Primus1985

New member
Dec 24, 2009
300
0
0
I'm extremely proud of the human race today. We are one step closer to living in a "Ghost in the Shell" world. I'm sure in 20 years we'll have all the tech they have :)
 

Ukomba

New member
Oct 14, 2010
1,528
0
0
Knowing Japan, I really, really don't want to know what they're going to teach it. Unfortunately, I've seen enough anime to know exactly what they'd use AIs for.

Anyways, now that robots can serve drinks, this can finally happen:
http://www.youtube.com/watch?v=gpof-Pl97Zs
 

Zeraki

WHAT AM I FIGHTING FOOOOOOOOR!?
Legacy
Feb 9, 2009
1,615
45
53
New Jersey
Country
United States
Gender
Male
So the robots learn through a neural network essentially? I don't see that ending badly at all.



Oh shi-
 

Dimitriov

The end is nigh.
May 24, 2010
1,215
0
0
Why do some idiots persist in doing things that have the potential to make life dramatically worse without any obvious benefit?
 

Alphakirby

New member
May 22, 2009
1,255
0
0
Tragedy said:
It would be awesome to have intelligent robots around, and I think this fear of retaliation from them is irrational: if they can learn true emotion, why would they try to destroy us? We haven't destroyed each other, completely anyway. It would be like glimpsing into a different world, and it would be fascinating to talk to them about philosophical ideas and life in general. I've been thinking about this too - what if they learn how to make other robots? Will they care for them and teach them all they know? I want to ask them so many questions, but patience is a virtue after all.
Well, think about it like this.
If they have emotions and sentience, wouldn't they see something a little fishy about being slaves to humans? And considering that they would most likely be stronger than us, don't you think they would want respect and compassion from those they serve, even though the humans would never understand that and would keep abusing them and treating them like servants?

Now picture yourself as a robot: wouldn't you want to do something to stop yourself and the other robots from being abused?
 

r0kle0nZ

New member
Apr 2, 2011
92
0
0
I'm pretty sure everyone on this forum, as soon as they saw this, thought:

Why hello Skynet!
 

Zaik

New member
Jul 20, 2009
2,077
0
0
I was cool with it until the end where they explained that the robots can learn from each other.

Maybe it's just personal preference, but the Geth's beginning was pretty much the only sci-fi robot apocalypse scenario that ever actually made sense to me, and that's exactly what they seem to be setting up here.

Though, these are probably going to be far too expensive to be showing up in those numbers in my lifetime.

I hope :/.
 

Tilted_Logic

New member
Apr 2, 2010
525
0
0
I truly don't understand why people think this is the beginning of the end. In most, if not all, of the robot-apocalypse scenarios I can think of, humans were the reason the robots reacted as they did and caused trouble for the world.

The fact that everyone is terrified of robots is exactly why a Skynet outcome becomes more likely. If we hate them from the start, all they'll know is hostility and distrust. A robot apocalypse could be the result of a vast computing error or, in the case of AI, a calculated decision based on the traits of humankind.

Robots have no reason to harm us unless we give them one, whether intentionally or not. Everyone screaming "DOOM!!!" just shows a severe lack of faith in humanity. When did we become such a sad species?
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
It's not what I'd define as an artificial intelligence, per se, but I applaud the achievement. It is a step forward. My issue with the AI label is that an AI should do more than just seek out information and use it. It must formulate its own thoughts.

(To which, I refer back to another time when I jokingly stated that a proper AI should say something like "Fuck this, I'm off to Vegas." as proof of original thinking.)

My concern: it uses the internet and may not have the ability to determine what is fact and what is fiction. There are many volatile ways that things could go wrong with that, but here's a safety-yikes sort of example. If a robot were to decide to make tea and found a reference to Douglas Adams' The Restaurant at the End of the Universe, in which Arthur Dent befuddles a computer with tea instructions it cannot meet... well, you see what I mean.
 

Twilight_guy

Sight, Sound, and Mind
Nov 24, 2008
7,131
0
0
Silenttalker22 said:
Twilight_guy said:
It's like watching puritans calling "witch" because their milk turned sour.
It's actually not even remotely the same string of logic, seeing as one is following plausible patterns and one isn't. Silly poster.
Plausible patterns? An AI that can make simple decisions is a far cry from sentience, which is itself a far cry from a robot apocalypse. Our level of AI is still so pitiful that predicting the end is a gross jump forward. Robots are still incredibly stupid, as anybody who has studied AI can tell you. They are definitely getting better, but they're still a far cry from anything warranting the kind of response they get in these threads. The overreaction is a bad joke. It was funny a few times, but after hearing the same joke over a dozen times on this website I've lost my humor. I feel like I'm in a room full of doomsayers talking about something they know nothing about. For as little as I know about AI, I still know that the kind of outlandish things people are saying are just as crazy as saying "she turned me into a newt." It's not predicting the future based on trends; it's drawing a line from current trends to a future that fits your own views. It's far closer to calling out a witch than to predicting the future. Because of all the silly jokes about Skynet, I can't help but see these posts as anything more than silly. Super silly.
 

Aprilgold

New member
Apr 1, 2011
1,995
0
0
I don't mind, as long as we make them like people: unable to upgrade themselves, able to live forever [until shot by a bullet that is both an EMP and a regular bullet, so it does double duty against both kinds of incoming criminals]. Give them jobs; basically, make them human. Hell, if possible, make them able to feel like a human, so they have personality. My point, basically: give them the flaws we currently have, even build cybernetics. Just make them human, give them flaws, make them breakable, but able to operate forever. Of course, religions would have to be globally disbanded, because if one picks up something that is crazy talk it could malfunction and go on a killing spree with a five-star wanted level. This step is not an issue, as long as they have flaws and, of course, the brainpower of us humans. Some smart, some stupid, others brilliant, others funny. Just make them US!

Alphakirby said:
Tragedy said:
It would be awesome to have intelligent robots around, and I think this fear of retaliation from them is irrational: if they can learn true emotion, why would they try to destroy us? We haven't destroyed each other, completely anyway. It would be like glimpsing into a different world, and it would be fascinating to talk to them about philosophical ideas and life in general. I've been thinking about this too - what if they learn how to make other robots? Will they care for them and teach them all they know? I want to ask them so many questions, but patience is a virtue after all.
Well, think about it like this.
If they have emotions and sentience, wouldn't they see something a little fishy about being slaves to humans? And considering that they would most likely be stronger than us, don't you think they would want respect and compassion from those they serve, even though the humans would never understand that and would keep abusing them and treating them like servants?

Now picture yourself as a robot: wouldn't you want to do something to stop yourself and the other robots from being abused?
I'm going to throw a response in there. Basically, we make them our equals instead of our servants: give them jobs, let them grow [that is, allow them to build more of themselves, but only if the feeling is right {by that I mean, give them emotions, like love}], let them learn, all that good stuff. Then make bullets that can stop both them and us, instantly. Give them the regular strength of a human, and the ability to work out, so that the more they use a given machine, the faster or stronger they get at it, but never exceeding what humans can do. If done right, cybernetics and robot civilizations will exist in the future.
 

Jaime_Wolf

New member
Jul 17, 2009
1,194
0
0
FalloutJack said:
It's not what I'd define as an artificial intelligence, per se, but I applaud the achievement. It is a step forward. My issue with the AI label is that an AI should do more than just seek out information and use it. It must formulate its own thoughts.

(To which, I refer back to another time when I jokingly stated that a proper AI should say something like "Fuck this, I'm off to Vegas." as proof of original thinking.)

My concern: it uses the internet and may not have the ability to determine what is fact and what is fiction. There are many volatile ways that things could go wrong with that, but here's a safety-yikes sort of example. If a robot were to decide to make tea and found a reference to Douglas Adams' The Restaurant at the End of the Universe, in which Arthur Dent befuddles a computer with tea instructions it cannot meet... well, you see what I mean.
Humans can't magically determine fact from fiction either. It's an impossible task. That said, they suggested that it use the internet to learn from other robots with a similar AI scheme, so the Douglas Adams situation would never arise. And a robust AI would likely be just as good at telling fact from fiction as humans are. So just as a human is unlikely to mistake the Douglas Adams instructions for reasonable ones, the AI would be unlikely to as well.

Also, you should think for a moment about what "original thought" actually means. Seeking and using information isn't just AI; it's a remarkably concise definition of intelligence in general. That's exactly what humans do. Your thought about "Fuck this, I'm off to Vegas." is actually a great one, but not because it shows original thinking. What it would show is a robot with what amount to emotions reasoning rationally based on those emotions. People don't just come up with ideas like "Fuck this, I'm off to Vegas." out of the aether - it's a rational decision based on a belief that going to Vegas is more personally worthwhile than continuing the task at hand (based on information about Vegas, information about the task at hand, and information about personal satisfaction).

Neither machines nor people can reason from nothing. The thing we call an "original thought" is really just a novel combination of information giving rise to a seemingly improbable piece of reasoning. If it isn't that, what the fuck is it? Where does it come from? How did you come up with the idea, if not from previous experiences and the ideas built off of them?
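
To make that concrete, here's a toy sketch in Python of what a "spontaneous" decision looks like when you model it as plain reasoning over information the agent already has. The option names, payoffs, and probabilities are completely made up for illustration; this has nothing to do with how the robot in the article actually works.

# Toy model: a "spontaneous" decision as ordinary reasoning over known information.
# All option names, payoffs, and probabilities below are invented for illustration.
options = {
    # option: (estimated personal payoff, estimated chance of getting it)
    "keep doing the assigned task": (2.0, 0.95),
    "go to Vegas": (8.0, 0.40),
}

def expected_utility(payoff, chance):
    """Score an option by what the agent believes about it."""
    return payoff * chance

# Pick whichever option the agent's current information scores highest.
best = max(options, key=lambda name: expected_utility(*options[name]))
print("Decision:", best)  # "go to Vegas" wins here (3.2 vs 1.9)

With those invented numbers the "original thought" falls straight out of ordinary expected-value reasoning over prior information, which is the whole point.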
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
Jaime_Wolf said:
FalloutJack said:
It's not what I'd define as an artificial intelligence, per se, but I applaud the achievement. It is a step forward. My issue with the AI label is that an AI should do more than just seek out information and use it. It must formulate its own thoughts.

(To which, I refer back to another time when I jokingly stated that a proper AI should say something like "Fuck this, I'm off to Vegas." as proof of original thinking.)

My concern: it uses the internet and may not have the ability to determine what is fact and what is fiction. There are many volatile ways that things could go wrong with that, but here's a safety-yikes sort of example. If a robot were to decide to make tea and found a reference to Douglas Adams' The Restaurant at the End of the Universe, in which Arthur Dent befuddles a computer with tea instructions it cannot meet... well, you see what I mean.
Humans can't magically determine fact from fiction either. It's an impossible task. That said, they suggested that it use the internet to learn from other robots with a similar AI scheme, so the Douglas Adams situation would never arise. And a robust AI would likely be just as good at telling fact from fiction as humans are. So just as a human is unlikely to mistake the Douglas Adams instructions for reasonable ones, the AI would be unlikely to as well.

Also, you should think for a moment about what "original thought" actually means. Seeking and using information isn't just AI; it's a remarkably concise definition of intelligence in general. That's exactly what humans do. Your thought about "Fuck this, I'm off to Vegas." is actually a great one, but not because it shows original thinking. What it would show is a robot with what amount to emotions reasoning rationally based on those emotions. People don't just come up with ideas like "Fuck this, I'm off to Vegas." out of the aether - it's a rational decision based on a belief that going to Vegas is more personally worthwhile than continuing the task at hand (based on information about Vegas, information about the task at hand, and information about personal satisfaction).

Neither machines nor people can reason from nothing. The thing we call an "original thought" is really just a novel combination of information giving rise to a seemingly improbable piece of reasoning. If it isn't that, what the fuck is it? Where does it come from? How did you come up with the idea, if not from previous experiences and the ideas built off of them?
Ah well. Wouldn't be the first time I was wrong AND right at the same time. Still, I do wonder whether the robot will fare better or worse than humans on its learning curve.