!?! You think that certainty of outcome is necessary to make an act calculation? Why on earth? Have you heard of likelihood, probability, and risk?
No, I think that having at least some information about likelihood, probability, and risk is necessary to make an act calculation. But you would have us live in a world where well-meaning people do not resist tyranny whenever they can't know enough about whether that resistance will lead to anything concretely good enough to justify the hardship. Indeed, you condemn any such attempts that do not seem to have a clear, useful material impact instead of offering support (even critical support); and you implicitly condemn them all the more, the harsher the reprisal. Sort of like when you blamed Hamas (alongside Israel, to be fair and clear) for the destruction of Gaza. Whereas a rule utilitarian can find ample reason to fight tyranny not because they will win but because it is tyranny. Now, you could say that act utilitarians can take into account the effects on ruling class behavior that come from examples of people fighting back against their oppressors even when seemingly doomed, even just for spite-- but this is just rule utilitarian reasoning framed as a calculation; you're not actually calculating anything, because there is no useful information about the specific act and its results to be used. There is only the reasoning that if humans always act like docile, servile creatures when outmatched, then tyrants will be emboldened to act ever more heinously. Rule utilitarians have a good explanation for the intuition that resistance can be praiseworthy even when it is defeated and even when it is hopeless.
Precedent and the effects of example are not easy to grasp when thinking of one act; without precise, mechanistic prediction of the future, grasping them is not even really possible. They can only be considered as part of a pattern: if there is this action, then there is this likely response. Or to put it another way, some responses are normal. Some responses are normalized. Things can be more or less normal, and they tend to be more or less normal based on the number and gravity of their exceptions; if resistance occurs even when doomed, that is much more normalizing of resistance than if it only occurs when victory is a
fait accompli. If you try to calculate these? You quickly realize you're dealing with indefinite quantities, and in the long term with infinities. And the problem with infinities is that infinity + 1 = infinity. But so does infinity - 1. Act utilitarianism can consider the +1 or the -1, but the infinities are a result of what has been made normal and what has been made abnormal. Assuming you're not the
Kwisatz Haderach, you will not be able to grasp the effects and consequences of whatever degree of normalization comes from individual actions. The rule utilitarian correctly abstracts the calculation and focuses on the effects of repeatedly following the rule, of making that rule normal-- in this instance: it is morally correct to resist tyranny by whatever means are available (within reason) regardless of the chance of success; it is morally correct to choose the method, and possibly the timing, based on its foreseeable chance of success, but not whether to resist at all. Will that lead to better outcomes in the long term than an alternative rule? This is the sort of reasoning a rule utilitarian can employ and which an act utilitarian can, I suppose, copy. But if they do, it's just rule utilitarianism through the back door, pasted in where the calculation was meant to take place. And it is a much more meaningful sort of reasoning than the vague sense of "did this particular action result in the greatest satisfaction of the keenest preferences for the greatest number of people?"
The questions of "what should we expect the world to look like if people generally act like this in this situation?", and "what happens if well-meaning people universally act like this in this situation" have far more interesting and useful answers. For act utilitarianism, one problem is particularly evident: falsehood. How many crimes have been justified by "the greater good" in some fashion or another? But that is not a repudiation of the greater good itself, only the pliability of the concept. What happens if people generally pretend to use act utilitarian reasoning? Then they justify their own self-interest as happily coincidental with the greater good as they understand it. What happens if well-meaning people truthfully follow act utilitarian ethics in good faith? Apart from the mistakes they might make, there will be those who are not well-meaning that will take advantage of the vagueness.
An act utilitarian like Peter Singer thinks that it is immoral for the affluent to spend $3 on a relative luxury like a morning coffee every day when they could be sending that money to Save the Children or Oxfam or whatever. He has some very persuasive arguments from an act utilitarian perspective; rule utilitarians have (or at least should have) some additional questions. What would the world be like if everyone stopped having their little luxuries and donated whatever money would have been spent on them-- or, indeed, saved or invested beyond a reasonably small amount-- to charity? Probably better. But what would the world be like if only the well-meaning people did so? Still probably a bit better than it is now, but we start to see a problem: justice and the diffusion of responsibility. The well-meaning mitigate the impact of the depraved; the depraved enjoy their luxuries. The well-meaning probably aren't the ones who cause people to need charitable help-- especially if they are indeed immiserating themselves to provide it. And to at least some extent, money is power. By causing people to need charity, the depraved can cause well-meaning act utilitarians in the fashion of Peter Singer to disempower themselves further. This is a (crude and oversimplified) systemic analysis. But it points to something whose importance Peter Singer's act utilitarianism misses: responsibility for harm-- the importance of not just the greatest happiness (or whatever other formulation of utility) but of justice and fairness apart from it. Could the collective action of all the well-meaning people doing something else-- something that pursues justice, not just utility-- yield a better result? The rule utilitarian can say we should do that. The act utilitarian has trouble justifying it, because collective action is not guaranteed.