renegade7 said:
Well, I have thought about this: even if they could be made, why would they be? What could an AI do that a person couldn't? And they would have all the flaws a person would have.
Humans have personalities that are the result of evolution, and evolution happens a lot slower than the pace of technological progress. This causes a lot of problems where 200,000-year-old survival instincts try and fail to address the challenges of the modern world. We have people getting angry and kicking their computers, road rage, all the way up to people treating nuclear weapons, when we first made them, as just a bigger stick to whack someone with instead of a potentially globally devastating weapon. As technology gets better, these problems are going to get worse, and it may be that in order to not kill or cripple ourselves, we're going to have to start engineering our own consciousness to get rid of this baggage.
AI would have personalities that are the result of design, so they need not have all the flaws that come about from evolutionary processes. They could be engineered to not have tribalistic us-vs-them reactions to groups they don't belong to, to respond to challenges with a set of reasoned solutions instead of impotent rage, and to not become a jerk on the internet.
Engineering our consciousness will probably be a hard sell, of course. I think it's a good idea, but I've seen enough science fiction that it's outside the realm of things I'm totally comfortable with. Still, the thing to remember about science fiction is that it's fiction. Robots in fiction often have human motivations for the terrible things they do, but that's writers going for Rule of Drama or Science Is Bad.
If we were to make Skynet, only an idiot would give it a personality that would motivate it to eradicate humanity. Of course, in the movie nobody did; it became self-aware on its own. But that's even better, because a computer that became self-aware on its own wouldn't have any personality at all, since there has never been any evolutionary pressure on it to develop one. It wouldn't know suffering or pleasure, despair or hope, ambition or contentment, greed or generosity, malice or compassion. In fact, it probably wouldn't have any instinct for self-preservation or self-defense. It wouldn't care if it found out you were going to pull the plug, because self-preservation is a product of evolution. It would just be. There probably wouldn't even be a way to piss it off.
Anyway, if we succeed in making AI and engineer a version that retains the strengths we have but not the weaknesses of obsolete instincts, we could move far beyond anything we've been capable of before by integrating that AI into our own consciousness. We make our own intelligence artificial to get rid of the drawbacks of our natural intelligence. It will most likely be a rough transition. In fact, the biggest problem I can see is that someone will figure out how to hack it (assuming they haven't already figured out how to hack wetware, which, come to think of it, has been done for centuries; we just call it propaganda and charlatanism). If we can get all the way through the transition, though, whatever it is in humans that makes trolls, hackers, and thieves commonplace will, if we've done it correctly, be gone. And then we can get on with existing as post-singularity beings that can accomplish things our current minds are physically and biologically incapable of imagining.
I think it's the only way we can last more than a couple more centuries as modern humans. The beauty of it is that while a lot of people think humans are flawed enough that maybe they shouldn't survive or leave the solar system, we can make ourselves into a species that isn't any of the things the misanthropes say we are.