Ben Simon said:
I just saw the video on the "problem-solving robot." It got me thinking about the classic question of robotic sentience. Should a robot that can think for itself, make independent decisions, and appear to have a personality be treated as a person? I say no, because it is artificial: it wouldn't really have feelings or thoughts as we know them; it would only seem to. If you respond to the poll, assume that there isn't a threat of robotic uprising if you say no.
EDIT: "Ne" is supposed to be "no". I don't know how to fix that.
It is profoundly debatable whether they would have "feelings" or "thoughts". In the end, your brain is a big computational system, and it seems rather inconsistent to suggest that a big computational system made of organic materials somehow deserves rights that a big computational system made of other materials doesn't.
What do you think "feelings" and "thoughts" are? Where do they come from? We think these concepts are really clear-cut, basic, and human-specific, but there are a lot of reasons to believe that nothing special is going on at all - that these are just words describing the phenomena that arise from the combination of lower-level systems in the brain (really, where else would they come from?). True artificial intelligence is likely to be a more or less exact copy of human neural processing, so it would give rise to the same feelings, thoughts, and beliefs. If asked, any true AI would probably confirm having things like feelings, thoughts, and beliefs.
The problem this presents is that there's no real way to verify whether they have thoughts or feelings without knowing what we're really talking about physically when we talk about thoughts and feelings. The view of AI you're suggesting has been discussed at length in the philosophical literature (this is a nice starting point if you're interested: http://en.wikipedia.org/wiki/P_zombie).