First published 7 March 2017. Last updated 24 October 2017.
1) Why Utilitarianism?
I’ll keep this section brief due to space constraints, and because anything I could write here, by virtue of its fundamentality, has a higher probability of being unoriginal to the reader than what will come after. Utilitarianism defeats rival ethical systems partly by enveloping them. Consider Bentham’s classic argument:
“When a man tries to combat the principle of utility, his reasons are drawn—without his being aware of it—from that very principle itself. If his arguments prove anything, it isn’t that the principle is wrong but that he is applying it wrongly.”
Thus, a responsible implementation of utilitarianism is strictly superior to deontological or virtue ethics, because it can assign consequential value to things like rule obedience, dignity preservation, teleological consistency, and so forth, to whatever extent is deemed appropriate, without sacrificing the flexibility to handle corner cases. If a utilitarian assigns a great enough value to obeying a conventional rule (e.g. do not kill), he may even agree with the deontologist in some low-stakes formulations of the trolley problem (say, 2 people on the default track and 1 on the switch track). But crucially, his value for rule obedience is not infinitely high like the deontologist's, so he is not bound to stay the course in high-stakes problems (a million people on the default track) as the strict deontologist is.
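The contrast can be made concrete with a toy calculation. The numeric values below (one unit per life, a finite cost for breaking the rule) are purely illustrative assumptions, not part of the original argument:

```python
# Toy model: a utilitarian who assigns a finite disvalue to breaking
# the rule "do not kill." All numbers are illustrative assumptions.
VALUE_PER_LIFE = 1.0
RULE_BREAKING_COST = 1.5  # finite, unlike the strict deontologist's

def should_pull_switch(on_default_track, on_switch_track):
    """Pull the switch iff net lives saved outweigh the cost of rule-breaking."""
    net_lives_saved = (on_default_track - on_switch_track) * VALUE_PER_LIFE
    return net_lives_saved > RULE_BREAKING_COST

print(should_pull_switch(2, 1))          # False: low stakes, agrees with the deontologist
print(should_pull_switch(1_000_000, 1))  # True: high stakes, pulls the switch
```

Because RULE_BREAKING_COST is finite, the same agent matches the deontologist when stakes are low and departs from him when stakes are high.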
Fundamental objections to utilitarianism are generally poorly formed. One such objection is that utilitarianism can endorse actions which are deontologically wrong. A famous example is Rawls’s complaint that in a world where slavery produces net positive outcomes, the utilitarian must endorse it, even though we know that slavery is wrong. This is a failure of imagination. Slavery has been wrong in our world precisely because it has produced net negative outcomes. In the world where it produces good outcomes, it is by definition good. Rawls makes the mistake of carrying intuitive baggage from our world, which tells him that slavery is bad, into his imagination of other worlds.
Another objection is that utilitarianism, especially Singer’s formulation in Famine, Affluence, and Morality, is too demanding. This fails for two simple reasons. One is that we do not have a good reason to believe that an ethical system should be easy to follow. Indeed, given the amount of suffering in the world, perhaps it ought to surprise us if that were the case. Furthermore, utilitarians are justified in preserving their own instrumental value, so they need not relinquish all comfort. They are justified, for example, in spending enough resources on leisure to keep themselves productive during their working hours.
For the remainder of the paper, I address subtle issues within utilitarianism, rather than objections which seek to replace utilitarianism with a fundamentally distinct system.
2) Average Utilitarianism vs. Total Utilitarianism
An important problem in utilitarianism is what to do about population size. This question is important not only because of its direct ramifications for social planning, but also because it should inform our thinking about life and death on a smaller scale. This question pops up often enough in discussions of various philosophical problems that my friends and I have taken to calling it the “cow problem” – because we originally opened the can of worms when discussing whether it’s a net positive for utility to raise “happy cows,” slaughter them painlessly and eat them.
There are two major options for counting utility: average utilitarianism (or averagism) and total utilitarianism (or totalism). Average utilitarianism says that the utility of a society is equal to the average utility of its members, and total utilitarianism says that the utility of a society is equal to the total utility of its members.
Total utilitarianism is where people tend to start out. It’s Bentham’s original method, and even for people who haven’t read Bentham, summing may seem more natural than averaging. It also supports our intuitive belief that it’s good to be alive and bad to die.
The major critique of total utilitarianism is Parfit’s “repugnant conclusion,” which points out that total utilitarianism prefers a very large society of miserable beings to a small society of happy beings. In fact, we may have an obligation to have lots of children, or to determine which animal has the highest ratio of average utility to cost of living, and then breed a huge population of that animal.
Average utilitarianism clearly does not have these problems, because it ignores population size. But it has several others. If we accept average utilitarianism, we accept that killing off people of below-average utility (assuming away pain, fear, disorder, grief, etc.) is a moral good. We must accept this even if the people in question are quite happy, as long as they are less happy than average. We must also accept that birthing and raising a happy child with utility level X may be either moral or immoral, depending on the happiness of the rest of society. It is not clear why the status of the rest of society should matter to the question of whether creating that happy life was a good thing to do. Because population size can never directly affect social utility in the averagist model, the averagist must accept that there is nothing inherently good about being alive or bad about dying.
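The averagist's problem with painless killing reduces to simple arithmetic. A minimal sketch (the utility numbers are purely illustrative):

```python
# Illustrative utilities for a three-person society (assumed numbers).
# The person with utility 5 is quite happy, just below the average.
utilities = [5.0, 6.0, 9.0]

avg_before = sum(utilities) / len(utilities)   # ~6.67
total_before = sum(utilities)                  # 20.0

# Painlessly remove the happy-but-below-average member.
survivors = [u for u in utilities if u != 5.0]

avg_after = sum(survivors) / len(survivors)    # 7.5: averagism calls this an improvement
total_after = sum(survivors)                   # 15.0: totalism calls it a loss

print(avg_after > avg_before, total_after < total_before)
```

Average utility rises while total utility falls, so the two formulations give opposite verdicts on the same painless killing.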
So it seems that each of the major models for counting utility has some deeply counterintuitive implications. What is to be done? I think it would be very difficult to justify switching from consequentialism to a fundamentally different ethical system on the basis of these weaknesses. Instead, I’ll discuss a few modifications that make total utilitarianism more appealing.
The first modification is to define utility such that it can be negative. Suppose that having 0 utility means being indifferent between life and death. Then negative utility means that your life is really nasty and you’d be better off dead. Using this definition tempers the repugnant conclusion somewhat: we now run the risk of endorsing huge societies of beings with lives barely worth living (but still by definition worth living!), rather than with actually miserable lives. This may be enough to shift our intuitions substantially in favor of total utilitarianism, because it’s not obviously wrong, especially when we remember that we are vulnerable to scope neglect in considering large populations.
In fact, we can go farther than that by factoring in resource constraints and cost of living. Suppose that a person's utility is some logarithm of the amount of resources allocated to that person, with units scaled so that exactly one unit of resources yields zero utility (indifference between life and death). This is plausible in light of common sense and the economic literature on the diminishing marginal utility of wealth. It implies that if we give someone less than one unit — say, the bare minimum of resources they need to survive — they will actually have negative utility. This too is plausible, because they will be near starvation. The upshot is that a totalist social planner paying heed to realistic utility functions and resource constraints will not choose a maximally large population of beings with minimally positive utility, because in that case most of the resources spent on each person will have gone toward getting their utility up to 0, rather than raising it significantly above 0. The marginal case should be plain: if we reduce that maximally large population by 1 person, we lose only a minimally positive amount of utility from the prevention of that life, but we can redirect those resources toward another life, making it significantly positive. This marginal process proceeds until an equilibrium point, which can be derived mathematically for simple models.
Let’s briefly look at such a model, ignoring things like productivity benefits from larger populations. Suppose that you have a resource endowment of size E to split among a population of size P, and that we’re trying to maximize total utility T by choosing the best value for P. Whatever population size we choose, we’ll then divide the endowment equally among those people, giving each person an investment I = E / P. Suppose also that each person’s utility U is the natural logarithm of the resources allocated to them, so that everyone in the society ends up with equal utility. Then T = U * P. So we have:

U = ln(I)

P = E / I

T = U * P

T = ln(I) * E / I

T’ = -E(ln(I) – 1) / I^2

T is maximized where T’ = 0, i.e. where ln(I) = 1, which occurs at I = e ≈ 2.72. (Choosing a logarithm with a different base only rescales T by a constant factor, so it does not move the maximum.) So if our endowment is 100, total utilitarianism tells us to design a society of about 100 / e ≈ 37 people, giving roughly 2.72 units of the endowment to each of them.
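This optimum can be checked numerically. A minimal sketch with natural-log utility (the endowment of 100 and the search grid are illustrative choices):

```python
import math

E = 100.0  # total resource endowment (illustrative)

def total_utility(I):
    """T = ln(I) * (E / I): per-person utility times population size."""
    return math.log(I) * (E / I)

# Sweep candidate per-person allocations I over (1, 11) and find the maximizer.
grid = [1.0 + 0.001 * k for k in range(1, 10_000)]
best_I = max(grid, key=total_utility)

print(round(best_I, 2))      # ~2.72, i.e. Euler's number e
print(round(E / best_I, 1))  # population of roughly 36.8
```

The maximizing allocation lands at I ≈ e regardless of the endowment, since E only scales T without moving its peak.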
Overall, I think these considerations should shift us some amount toward favoring total utilitarianism, but I don’t think this is a solved problem.
3) Rule Utilitarianism vs. Act Utilitarianism
Act utilitarianism is essentially Bentham’s original formulation: at any time, perform the action which will result in the best consequences. Rule utilitarianism, on the other hand, holds that “the rightness or wrongness of a particular action is a function of the correctness of the rule of which it is an instance” – in other words, you should act in accordance with the rule which tends to produce the best outcomes overall, even if that rule does not produce the best outcome at the moment (Garner and Rosen). In this sense, rule utilitarianism can be seen as an approximation of classical utilitarianism, or as a compromise between classical utilitarianism and deontological ethics.
Rule utilitarianism is wrong because its benefits are captured by well-applied act utilitarianism. A good act utilitarian should consider all consequences of an action, including very long-term ones. This full set of consequences includes the consequences of the rule being followed, so rule utilitarianism can only ever be the same as or worse than act utilitarianism.
Two-level utilitarianism, developed by R. M. Hare, is an integration of the act and rule formulations. It holds that people should act in accordance with intuitive moral rules most of the time, engaging in critical moral reasoning only in exceptional cases. This formulation falls to the same argument I used against rule utilitarianism above.
The assumption above that act utilitarianism is well and fully applied is significant. An agent who is lazy, or incompetent, or lacking information, may indeed tend to achieve better outcomes by following some version of rule or two-level utilitarianism. In this sense, though, these systems are pragmatic computational shortcuts; they are practically useful but approximate implementations of act utilitarianism. In fact, this is probably the more useful way of thinking about the topic altogether: rule and two-level utilitarianism are particular implementations of act utilitarianism which may or may not yield practical advantages, depending on context. Crucially, they are on a different level of abstraction; they receive any validity they may have in a given context from classical (act) utilitarianism. A useful analog here may be rights. The naive view is that rights are incompatible with utilitarianism; the more sophisticated view is that rights are sometimes-useful tools within utilitarianism.
4) Preference Utilitarianism vs. Hedonic Utilitarianism
Hedonic utilitarianism is basically Bentham’s classical formulation: maximize pleasure and minimize pain, or simply maximize utility. Preference utilitarianism, on the other hand, values preference satisfaction and disvalues preference frustration. Like rule utilitarianism, preference utilitarianism might be a useful approximation or tool at times, but does not compete with the classical model for basic ethical validity.
It is difficult to measure a being’s pleasure or pain from the outside, and even more difficult to forecast those quantities in response to various actions over long time periods. So asking a being for its preference (or observing its behavior if it cannot communicate linguistically) can be a terrific approximation. But it can also lead us astray, to the extent that beings are not rational. Due to vice, myopia, and other factors, beings may act suboptimally. Looking at a heavy cigarette smoker, for instance, we would be wrong to assume that they are best off continuing their heavy smoking simply because they have revealed that preference. So when we are confident that a being’s preference does not line up with their actual best interests, we should act in accordance with the real interests, not the preference.
We should be cautious, of course, in imposing our values on people. Some heavy smokers may well enjoy smoking enough that we should not try to stop them; that this is not always the case, however, is partly demonstrated by people’s willingness to pay for commitment devices that help them change entrenched behaviors.
Some preference utilitarians may respond by insisting that they are acting in accordance with beings’ true or idealized preferences, not with their manifest or observed preferences, which are of course tainted by irrationality. In the case of perfectly idealized preferences, though, the preference and hedonic formulations converge: a being’s idealized preference is, by definition, to maximize its utility.
5) Negative Utilitarianism vs. Non-negative Utilitarianism
Since writing this piece, I’ve written a longer treatment of negative utilitarianism here.
Brian Tomasik outlines three types of negative utilitarianism: ordinary negative utilitarianism, threshold negative utilitarianism, and negative-leaning utilitarianism. Borrowing the framework of his article, I embrace his first and second intuitions (happiness can outweigh small pains, and no pain is infinitely worse than any other pain) and reject his third intuition (a day in hell could not be outweighed). These premises seem fairly obvious to me. As a result, I fairly comfortably reject ordinary negative utilitarianism and threshold negative utilitarianism. I may be somewhat sympathetic to negative-leaning utilitarianism (based only on conversations with my roommate, who tends to be more bullish than I am about the potential for pleasure to outweigh displeasure), but I would like to clarify that the notion of being “negative-leaning” refers only to measurement disputes, not to disputes over fundamental framework. I do not think of “negative-leaning” utilitarians as negative utilitarians.
I have made the case for utilitarianism over other ethical systems, and for hedonic non-negative act utilitarianism over other formulations of utilitarianism. I have also proposed reasons to favor totalism over averagism, but this question deserves further study.
6) Works Cited
Bentham, J. (1789). An introduction to the principles of morals and legislation. London: Printed for T. Payne, and Son.
Garner, R. T., & Rosen, B. (1972). Moral philosophy: a systematic introduction to normative ethics and meta-ethics. New York: Macmillan.
Parfit, D. (1987). Reasons and persons. Oxford: Clarendon Press.
Singer, P. (2016). Famine, affluence, and morality. Oxford: Oxford University Press.
Tomasik, B. (2013, March 23). Three Types of Negative Utilitarianism [Web log post]. Retrieved May 4, 2017, from http://reducing-suffering.org/three-types-of-negative-utilitarianism/