Ok, so the most obvious question whenever someone comes up with the "Machine Intelligence is going to kill everyone" nonsense:
Why?
All the machines we've built thus far run on logic alone. Even the ones designed not to behave logically, like random number generators, things built specifically to act in a chaotic fashion, only manage it through extremely complex logic. If it doesn't have anything to gain from killing people, it won't do it.
(Besides, the obvious solution is to ensure that the first supercomputer to develop strong AI is based around a human brain. Why bother trying to code nonsense like empathy and morality when there's a perfectly good system with all of that already sitting in the head of every last person on the planet? It wouldn't even need to be a sacrifice; I mean, given the choice, wouldn't you want the opportunity to be God?)