With further development, would they be able to "combine" this with the robot being able to read? 'Cause if so, we're screwed.
Gerhardt said: Oh Japan, you kooky nuts.
Since when does Daft Punk work at a bar?
Hey, wait a tick... I swear I've seen the robot bartender somewhere else...
<IMG SRC="http://26.media.tumblr.com/tumblr_lcmim6MNVs1qaqou6o1_500.jpg">
Tragedy said: It would be awesome to have intelligent robots around, and I think this fear of retaliation from them is irrational. If they can learn true emotion, why would they try to destroy us? We haven't destroyed each other, completely anyway. It would be like glimpsing into a different world, and it would be fascinating to talk to them about philosophical ideas and life in general. I've been thinking about this too: what if they learn how to make other robots? Will they care for them and teach them all they know? I want to ask them so many questions, but patience is a virtue after all.
Well, think about it like this.
Twilight_guy said: It's like watching puritans calling "witch" because their milk turned sour.
Silenttalker22 said: It's actually not even remotely the same string of logic, seeing as one is following plausible patterns and one isn't. Silly poster.
Plausible patterns? An AI that can make simple decisions is a far cry from sentience, which is itself a far cry from robot apocalypse. Our level of AI is still so pitiful that predicting the end is a gross jump forward. Robots are still incredibly stupid, as anybody who has studied AI can tell you. They are definitely getting better, but they're still a far cry from anything warranting the kind of response they get in these threads. The overreaction is a bad joke; it was funny a few times, but after hearing it over a dozen times on this website I've lost my humor. I feel like I'm in a room full of doomsayers talking about something they know nothing about. For as little as I know about AI, I still know that the kind of outlandish things people are saying is just as crazy as saying "she turned me into a newt." It's not predicting the future based on trends; it's drawing a line from current trends to a future that fits your own views. It's far closer to calling out a witch than predicting the future. Because of all your silly jokes about Skynet, I can't help but see these posts as anything more than silly. Super silly (us).
Tragedy said: It would be awesome to have intelligent robots around, and I think this fear of retaliation from them is irrational. If they can learn true emotion, why would they try to destroy us? We haven't destroyed each other, completely anyway. It would be like glimpsing into a different world, and it would be fascinating to talk to them about philosophical ideas and life in general. I've been thinking about this too: what if they learn how to make other robots? Will they care for them and teach them all they know? I want to ask them so many questions, but patience is a virtue after all.
Alphakirby said: Well, think about it like this.
I'm going to throw a response into that. Basically, we make them our equal instead of our servants: give them jobs, let them grow [basically, allow them to build more, but only if the feeling is right {by that I mean, give them emotions, like love}], learn, all that good stuff. Then make bullets that stop both them and us, instantly. Give them the regular strength of a human and the ability to work out, so the more they use a given machine, the faster or harder they get at what they do, but they can't exceed what humans can do. If done right, cybernetics and robot civilizations will exist in the future.
If they have emotions and sentience, wouldn't they see something a little fishy about being slaves to humans? And considering that they would most likely be stronger than us, don't you think they would want respect and compassion from those they serve, even though the humans would never understand that and would continue to abuse them and treat them like servants?
Now picture yourself as a robot: wouldn't you want to do something to stop you and the other robots from being abused?
FalloutJack said: It's not what I define as an artificial intelligence, per se, but I applaud the achievement. It is a step forward. My thing about the AI term is that AI should be more than just seeking info and using info. It must formulate its own thoughts.
(To which, I refer back to another time where I jokingly stated that a proper AI should say something like "Fuck this, I'm off to Vegas." as proof of original thinking.)
My concern: it uses the internet and may not have the ability to determine what is fact and what is fiction. There are many volatile ways that things can go wrong with that, but here's a safety-yikes sort of one. If a robot were to decide to make tea and find a reference to Douglas Adams' Restaurant at the End of the Universe, in which Arthur Dent befuddles a computer with tea instructions that it cannot meet... well, you see what I mean.
Humans can't magically determine fact from fiction either; it's an impossible task. That said, they suggested that it use the internet to learn from other robots with a similar AI scheme, so the Douglas Adams situation would never arise. And robust AI would likely be just as good at telling fact from fiction as humans, so just as a human is unlikely to mistake the Douglas Adams instructions for reasonable ones, so too would the AI be.
Jaime_Wolf said: Humans can't magically determine fact from fiction either. It's an impossible task. That said, they suggested that it use the internet to learn from other robots with a similar AI scheme, so the Douglas Adams situation would never arise. And robust AI would likely be just as good at telling fact from fiction as humans. So just as a human is unlikely to mistake the Douglas Adams instructions for reasonable instructions, so too would the AI.
Ah well. Wouldn't be the first time I was wrong AND right at the same time. Still, it is a thing I wonder, regarding whether the robot will fare better or worse than humans in its learning curve.
Also, you should think for a moment about what "original thought" actually means. Seeking and using information isn't just AI; it's a remarkably concise definition of intelligence in general. That's exactly what humans do. Your thought about "Fuck this, I'm off to Vegas." is actually a great one, but not because it shows original thinking. What it would show is a robot with what amount to emotions reasoning rationally based on those emotions. People don't just come up with ideas like "Fuck this, I'm off to Vegas." out of the aether; it's a rational decision based on a belief that going to Vegas is more personally worthwhile than continuing the task at hand (based on information about Vegas, information about the task at hand, and information about personal satisfaction).
Neither machines nor people can reason from nothing. The thing that we call an "original thought" is really just a novel combination of information giving rise to a seemingly improbable piece of reasoning. If it isn't that, what the fuck is it? Where does it come from? How did you come up with the idea if not from previous experiences and the ideas built off of them?