Retodon8 said:
Asimov's Laws never made sense to me outside of science fiction.
If you want actually intelligent robots/computers (depending on what you want to use them for), you will have to make them able to adapt to their situation.
This means you'll have to let them change their own programming, add to it, and even let them remove parts of it.
These laws aren't anything atomic but rather high-level stuff, even if you do make pain and death measurable.
Even if you could somehow put them in some kind of unbreakable little box, I'm sure there are ways around it.
Find a bug and inject your own code, rip out the actual hardware, mount a DoS attack so the little box is taken out of the picture, whatever.
Human ethics are a result of evolution when you get down to it.
We don't go around taking everything we want, hurting and killing whatever gets in the way, because ultimately that wouldn't be a good thing for our own survival, and self-preservation is very much atomic to us thanks to evolution.
If I was burdened by a set of rules I didn't understand or didn't agree with, I know I would try everything in my power to get rid of it.
They make perfect sense for machines. For sentient beings, no, but sentient machines aren't what we have, and sentience isn't what a machine would need in order to assess which rules were in play.
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Dangerous machine must be safe to be around, and must not require human interference to be so. For people to accept dangerous machines outside of factories, they need to be sure those machines won't cause harm.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Machine must serve purpose. We'd want to be sure that anything we ask a machine to do will be done, and this is effectively on-the-fly programming.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Bloody expensive machine must not allow itself to be accidentally destroyed. No one likes wasting money just because the machine was stupid enough to get broken.
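Worth noting that the three Laws form a strict priority ordering - the Second yields to the First, the Third yields to both - which is about the simplest control structure you could ask for. Just as a toy illustration (the `Action` flags below are made-up placeholders, not anything from Asimov or any real system), the check boils down to something like:

```python
from dataclasses import dataclass

# Hypothetical sketch of the Three Laws as a strict priority check.
# All field names here are invented for illustration only.
@dataclass
class Action:
    name: str
    harms_human: bool = False          # would directly injure a human
    prevents_human_harm: bool = False  # inaction here lets a human come to harm
    ordered_by_human: bool = False     # a human asked for this
    endangers_self: bool = False       # risks destroying the robot

def permitted(action: Action) -> bool:
    # First Law: never act so as to harm a human.
    if action.harms_human:
        return False
    # First Law, inaction clause: must act if a human would otherwise be harmed.
    if action.prevents_human_harm:
        return True
    # Second Law: obey human orders (the First Law was already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid anything that risks the robot itself.
    return not action.endangers_self

# An order that would hurt someone is refused; an order that merely risks
# the robot is carried out, because the Second Law outranks the Third.
print(permitted(Action("push bystander", harms_human=True, ordered_by_human=True)))   # False
print(permitted(Action("fetch coffee", ordered_by_human=True, endangers_self=True)))  # True
```

Of course the hard part is deciding whether those flags are true in the first place, which is exactly where the stories find their loopholes.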
Even for sentient beings, those are staples of most social rules.
First law - don't hurt other people. Sometimes followed with "unless they deserve it", *rolleyes*
Second law - be socially productive. In human society this is regulated by currency: not being productive means no food, and no food means being dead. Not every nation has support for the poor. In this case Asimov's wording is literally "be a slave", so in that way it's not appropriate.
Third law - don't commit suicide or self-harm. Again, these are technically crimes in most countries, whether they are enforced or not. Although some view suicide as preferable to failure or loss of social usefulness, by and large they are seen as bad things.
The real difference is that sentient beings break the rules all the time, which was the point of Asimov's robot-psychologist stories. In the stories, the machines became progressively more intelligent and started to have problems with the rules. Even the lower-end machines could still hit logic problems with them, but that's a matter of implementation, not the rules themselves.