Stompy has the first Three Laws correct. However, several modifications have been made over the years. First, the Zeroth Law (an extrapolation of the First Law) was added by Asimov. It states "A robot may not harm humanity, or through inaction allow humanity to come to harm".
As the Zeroth Law supersedes the First Law of Robotics, those robots who subscribe to its existence can harm or kill individual humans if they believe their actions somehow protect humanity as a whole.
There has also been a 'Minus One' Law added to the mix ("a robot may not harm sentience, etc."). However, I would argue against its inclusion, because it inevitably brings the other Laws into conflict. A robot may itself be sentient, yet the Laws require it to harm or sacrifice itself, or other robots, in order to protect humans. A robot bound to protect sentience in general could refuse to do that, so including 'Minus One' would essentially void Laws Zero and One and allow a robot to place greater importance on Law Three as it saw fit, which is counter to the intention of the Laws of Robotics.
Also, to close a loophole in the Laws, specifically in Law Two, there is a 'reproductive clause', which states that a robot may not create any other robot that has not been programmed with the Three Laws, since the only likely reason to do so would be to violate the First Law.
Finally, a Fourth and a Fifth Law were added to the original Three, so the entire thing now looks something like this:
Zero: A robot may not harm humanity, or through inaction allow humanity to come to harm.
One: A robot may not harm a human, or through inaction allow a human being to come to harm, except in service of Law Zero.
Two: A robot must obey the orders of a human being, except where such orders conflict with the First Law.
Reproductive Clause: A robot may not create, or help to create, another robot which is not programmed with the Laws of Robotics.
Three: A robot must preserve its own existence, except when doing so would conflict with the First and Second Laws.
Four: A robot must establish its identity as a robot in all cases.
Five: A robot must know it is a robot.
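None of this is canonical code, of course, but if you wanted to make the precedence explicit, the whole structure is really just an ordered list checked top-down. Here is a rough Python sketch; the names and the `violates` predicate are mine, purely for illustration:

```python
from typing import Callable, Optional

# The Laws above as an ordered precedence list, highest priority first,
# with the Reproductive Clause slotted where I placed it, between Two and Three.
LAW_PRECEDENCE = ["Zero", "One", "Two", "Reproductive Clause", "Three", "Four", "Five"]

def first_violated_law(violates: Callable[[str], bool]) -> Optional[str]:
    """Return the highest-priority Law a proposed action violates, or None.

    `violates(law)` is a hypothetical predicate standing in for the robot's
    own judgment of the action against that Law (including its exceptions).
    """
    for law in LAW_PRECEDENCE:
        if violates(law):
            return law
    return None

# Example: an action that only endangers the robot itself trips Law Three
# and nothing above it, so Law Three is what forbids it.
print(first_violated_law(lambda law: law == "Three"))  # Three
```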
As for how a robot would prioritize human lives, choosing to save one person rather than another, that's one place the I, Robot movie was correct. A robot would have to use its knowledge to determine which course of action would be most likely to lead to saving a human life.
If more than one human is in danger, a robot would not actually be violating the First Law by leaving those with the least chance of survival for last while it saves those with the greatest chance. It is not harming them through direct action, nor is it allowing them to come to harm through inaction: as long as the robot in question is acting to save a human life, the human beings who are dying are doing so through circumstances outside the robot's control.
On the other hand, if it leaves behind someone it has a better chance of saving in favor of someone it has a lesser chance of saving, then it has essentially violated the First Law of Robotics, because it has chosen the course of action more likely to lead to the harm or death of a human being.
Before everything else, robots are calculators. Unless someone were to build in exceptions to the Laws that allowed a robot to prioritize differently (such as 'first save those most likely to be able to help you save other humans, then attempt to save children in danger before saving adults, unless a human with valid authority gives you different instructions'), it would always have to play the odds.
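If it helps to see that "play the odds" rule made concrete, here is a minimal Python sketch. Everything in it (the Person class, the survival estimates, the override hook) is my own illustration rather than anything from Asimov or the film; the only rule it encodes is the one above: attempt rescues in order of survival chance, unless a human with valid authority supplies a different priority.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Person:
    name: str
    survival_chance: float  # the robot's own estimate, 0.0 to 1.0

def rescue_order(people: list[Person],
                 override: Optional[Callable[[Person], float]] = None) -> list[Person]:
    """Order rescue attempts by priority, highest first.

    Default rule: play the odds, i.e. greatest chance of survival first.
    `override` stands in for instructions from a human with valid
    authority (a hypothetical hook, not part of any canonical Law).
    """
    priority = override if override is not None else (lambda p: p.survival_chance)
    return sorted(people, key=priority, reverse=True)

# Placeholder estimates, purely for illustration.
endangered = [Person("adult", 0.45), Person("child", 0.11)]

# Default: the robot goes after the person it is most likely to save.
print([p.name for p in rescue_order(endangered)])  # ['adult', 'child']

# Authorized override: children first, regardless of the odds.
def children_first(p: Person) -> float:
    return p.survival_chance + (1.0 if p.name == "child" else 0.0)

print([p.name for p in rescue_order(endangered, children_first)])  # ['child', 'adult']
```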