Why I Am a Utilitarian (And You Should Be Too)

When I was growing up, utilitarians were basically the bogeyman.[1] They were the people who would torture you at the drop of a hat, or throw Christians to the lions if enough ancient Romans enjoyed it. At some point in tenth grade, there was a debate about the use of nuclear weapons in WWII, during which my internal reaction to someone else’s argument was “but then you get all the standard arguments against utilitarianism,” and I couldn’t remember what those were. So I rethought the whole thing and eventually became a consequentialist. Of course, it’d be a long while before I admitted that I had switched sides, even to myself.

So. Why you should be a consequentialist. You should be a consequentialist because you care about people and you want to make sure that good things happen and bad things don’t.

There’s an old saying that actions speak louder than words. And I don’t trust people’s listening ability enough to say that it’s true, but I think you’ll agree that it usually should be. If you claim to care about something, you should act as if you do. And actually, that’s consequentialism. That’s it; it’s that simple. You decide what you care about, then act in a way that brings it about. If you care what happens to other people, then you should do things based on whether the result helps people. So far, so obvious. This absurdly obvious thing, the particular variety of consequentialism where the consequence you’re aiming for is helping people, is known as utilitarianism.

A lot of people don’t like utilitarianism, because “the end justifies the means” is something people say before doing something evil. Or because it sounds like it’s implying that morality is relative and changes based on situation, and a lot of people don’t like that. Or even just because they don’t like the idea of being calculating about ethics.

But none of those reasons are actually based in trying to help people. Saying that consequentialism must be wrong because it says that there are some cases where it would be justifiable to do [insert terrible thing here]? Well, yes. Yes, there are. Denying it can make you feel like a good, moral person in contrast to the evil utilitarians (“I would never do something like that, under any circumstances!”), but “do whatever makes me feel moral” is a much worse rule than you might think. Feeling moral certainly shouldn’t be the priority on important questions.

Look, if Darth Vader credibly claims that he’ll blow up your planet unless you kill a kitten, you’d better not say that killing kittens is morally wrong. That example was specifically chosen to make the right answer obvious (and the real Darth Vader couldn’t make that threat credibly, because he’d have to convince you he wasn’t going to blow up the planet anyway), but real people really do decide based on reasons even worse than that.

So you get conversations like “You want to pick futures that are as good as possible for as many people as possible, right?” “Yes!” “So you’ll sign this organ donor card?” “No!” “Why not?” “I don’t want my organs in someone else’s body; that’s disgusting.”

Even though this person is aware that becoming an organ donor will probably save at least one life, they dislike the idea. They wouldn’t say their disgust is more important than a stranger’s life, but they are deciding based on one and not the other. You care about saving someone’s life? Donate blood (at the cost of slight pain and minor inconvenience) or register as an organ donor (at the cost of like two minutes and probably no other downside for the rest of your life). If you don’t, the noise from your actions is drowning out what you say you believe.

Lots of moral decisions are concerned with looking moral or feeling moral. Consequentialism is concerned with doing good things. Pretty much by definition, consequentialism gets the best results. (Whatever gets the best results, consequentialism says to do that.) And when “results” are measured in things like “human lives saved,” or “illnesses cured,” you can see why results are so important.

I titled this “Why I Am a Utilitarian,” and the answer is that I found the arguments convincing. I ended up having to agree that the greatest good for the greatest number does sound better than anything less than the greatest good. And so, assuming I ought to care about people, it would seem that not being a consequentialist is…suboptimal. I realize that kind of thing isn’t very convincing to most people (and those who do find it convincing are usually consequentialists already), but it’s how I got convinced.

I’ve had conversations where people were seriously debating whether you’re morally required to tell the truth when a wanted murderer knocks on your door, shows you a bloody axe, and asks politely where your family members are so that he can kill them. Fortunately, these people were Christians, so there were at least two ways to convince them, within three words, that they shouldn’t. But it really should be obvious, without any appeal to authority, that telling a lie is less important than saving your family’s lives.

I’m having difficulty avoiding strawman arguments, because no matter how ridiculous the example, there are always people who argue it. So rather than say that you have to be a consequentialist or you end up believing ridiculous things, I’ll just say that if you don’t make decisions as a consequentialist then you are at risk of avoidable Bad Stuff happening, and that consequentialism is obviously right in at least some of the cases where it disagrees with opposing ethical systems. Hopefully everyone agrees with that.

But what about victimless crimes? You’d think that consequentialists would give the OK to anything that doesn’t harm anyone, but most people agree that some things are wrong even if they don’t hurt anyone. If someone only eats food that was prepared according to their religious guidelines, and I swap it out for identical food that wasn’t, that arguably doesn’t harm them. The food’s identical. But it’s still a jerk thing to do and I wouldn’t mind saying it’s immoral. Doesn’t this contradict the principle of judging actions by their consequences?

The answer is…kind of. Well, that’s more of a concession than you’d usually get from a rhetorical question. It’s true that if you’re just thinking of maximizing pleasure and/or minimizing pain, like Bentham or Epicurus did, then this doesn’t hurt anyone. Another form of consequentialism is more about satisfying everyone’s preferences (called, appropriately enough, “preference utilitarianism”), and others are stranger and more complicated. Sometimes they give different answers, but I’ll take the opportunity to stress that these are all way better than non-consequentialist ethical systems. Almost like real people, consequentialists sometimes disagree on how to define “good.” This means they do disagree with each other on some questions. But accusing consequentialism of failing to completely define what goodness is means you are criticizing it for failing to do something it didn’t aim for. The point of consequentialism is to maximize good results however you define good; it doesn’t say you have to value X, Y, or ~Z.

This leads to the other answer to the victimless crime question, which is that, well, would you prefer to live in a world where everyone is being all victimlessly immoral all the time? I’m guessing no, you’d consider that a bad thing. If so, then the consequentialist thing to do might well be to oppose the victimless bad thing. (Admittedly, this does depend on how many people consider it bad and how strongly, what kind of preferences other people have in favor of the thing, etc. It can get messy fairly easily, so it’s simpler to avoid mixing ethics and morals. Maybe I’ll write something about that later.)

The “would you actually prefer that” answer applies to a whole host of objections, like “wouldn’t utilitarians force people into gladiatorial combat for the enjoyment of the greatest number” or “aren’t utilitarians incapable of sticking to a deal they made, because they’ll back out the second they think their preference not to be bound by it outweighs the other person’s preference to be able to rely on them.” Objections like that can be strong or weak, but asking whether you would actually prefer a world like that can help you decide whether the objection actually needs answering.

Another important thing to realize is that utilitarians are actually pretty normal people. Maybe you got offended once when a utilitarian said they’d absolutely go around suffocating puppies if required to save air in a broken submarine. Maybe they’re better than you at donating money effectively, or they give blood more often, but most of the time they’ll live pretty much like other people. “It all adds up to normality,” as the saying goes, and most reasonable philosophies will say something along the lines of: get up in the morning, go to work, have a life, don’t rob anybody. Most utilitarians have never had to throw a switch in a trolley problem, and they hope they never do. They are not always psychopathic mutants with no empathy.

Before I stop, I’ll say one more thing about utilitarianism. The ends do not necessarily justify the means. Some ends justify some means. Specifically, the means are justified if and only if they are less bad than the alternative, like killing a kitten being justified to save a planet. Most of you will probably agree with that. To you I say, welcome to the shadowy and sinister ranks of utilitarians. You are now suitable for use as an evil monster for frightening small children.

[1] You think I’m exaggerating, but I’m not. As a kid, monsters under my bed never scared me. But I was sometimes told scary stories about utilitarians, and those did.

5 thoughts on “Why I Am a Utilitarian (And You Should Be Too)”

  1. “Almost like real people, consequentialists sometimes disagree on how to define “good.” This means they do disagree with each other on some questions. But accusing consequentialism of failing to completely define what goodness is means you are criticizing it for failing to do something it didn’t aim for. The point of consequentialism is to maximize good results however you define good; it doesn’t say you have to value X, Y, or ~Z.”

    That is my “problem” with Utilitarianism – it’s not really a moral system. It’s merely a calculus to optimize a given moral system. Consequentialism doesn’t tell me what I must do; it only tells me I should think about what my actions mean in reality so I’m certain about their moral worth.

    Virtue ethics (I am a very staunch virtue ethicist) are strictly “consequential” and “utilitarian” in the sense that they care about the result of an action or choice. They just perform a non-hedonic calculus, in the sense that they value becoming-something-good more than feeling-pleasant (“Better a Socrates dissatisfied,” etc.). So people aren’t, by that definition, complaining against utilitarianism – most likely, they complain (as I do) against hedonism.

    I guess what I want to ask is… what do you think is (supreme and action-worthy) good? You haven’t really told us anything about the good. Only that more good is good-er than less good, which is tautological. I can’t have the fun of agreeing or disagreeing with you until I know…

    • Consequentialism is a general class of philosophy saying you should decide based on future results. That much should go without saying. Utilitarianism is slightly more specific: the thing you care about is the greatest good for the greatest number. As tautological as that might be, a frustratingly large number of people disagree. Virtue ethicists should be consequentialists, but might or might not be utilitarians. (Incidentally, how do you resolve disputes where one virtue says to do one thing and another says to do something else? I have a high opinion of virtue ethics and might well agree with it if Aristotle had had better answers here.)

      I think it makes sense to start from maximizing pleasure and minimizing pain, and work from there. Hedonism with a wider perspective, you could say. I consider that a reasonable philosophy that would blow up in people’s faces more than virtue ethics but less than deontology.

      That’s not an accurate depiction of my own beliefs, but it is a reasonable first approximation. To a second approximation, I’m more like a preference utilitarian, so I’d prefer to maximize the amount of people-getting-what-they-want-ness. Which would include maximizing pleasure and minimizing pain, but also goes up a level (or n levels), in the sense of wanting to want things, and covers wanting things that aren’t pleasure. Essentially I start from the most basic utilitarianism and add stuff on until I like the result.

      Specifically, some things that ought to be optimized: Pleasure’s good; displeasure’s bad. (That covers most things right there.)
      Personally I also like to know true things even in cases where the information isn’t instrumentally useful. I’d imagine having that as a terminal value is probably common.
      The stuff in the Universal Declaration of Human Rights is probably a good list of the more important things I value, even though I don’t, technically speaking, believe in its concept of human rights. The things that they say everyone should have are in fact things everyone should have.

      This all interacts in weird ways with Christian morality, too. Theoretically, my only terminal value is supposed to be glorifying God. In practice, that generally means doing things God approves of, which in turn means morality. (As in, living a normal life in an ethical and godly manner.) This is why I find it easier to think of ethics and morals separately, and follow both.

      In terms of which good is action-worthy, that’s all of them. Anything where you’d prefer the world be one way rather than another. Which good thing gets the highest priority is an empirical question based on how you would be able to improve the universe the most for a given quantity of effort. Which probably means sending money to GiveWell’s top charity.

  2. Sorry about swapping the terms; it was a bit lazy on my part, and I should have made clear that I do understand the difference between Consequentialism in general and Utilitarianism specifically (as a genus-species relationship).

    As for the incidental question, I’m not familiar with particular moral dilemmas on that topic, but 1) the orthodox stance would be that virtues do not contradict, because they are essentially all the same (the Tao, working with the grain of the universe, being in accordance with the real, etc.) – for example, moderation is just a “type” of justice, a giving-of-what-is-proper to various desires – and so on. 2) Virtue ethics have a bad time with theoretical situations, because virtue ethics deal with persons rather than cases. One cannot make an “always do this” statement about the trolley dilemma, because we do not know anything about the people on the tracks, or whether there are any better options. (This is also true of an act-utilitarian analysis of that dilemma, no?)

    The problem a Christian virtue ethicist has with “min/max pain/pleasure” as a general principle (at least at first thought) is that pain and pleasure are not inherently good. They are sensations which may in fact be grossly inappropriate to the situation. Take porn for (an extreme) example. Assume for a moment that it was an externally victimless crime – it would still be a type of negative pleasure, because said pleasure is dis-concordant with the nature of pleasant things. Or to put it in terms you used, it would be contrary to the terminal value of knowing-true-things-regardless-of-pleasure, because to become sexually aroused by an image is to believe an untruth about sex. And I’m sure you can think of an example where pain, even life-long pain, is better than (say) ignorance.

    Or to cut at it another way, pleasure makes a terrible terminal value, because pleasure doesn’t actually tell you anything ethically. Applying Hume’s criticism to Hume means he has no reason to think pleasure is actually, morally, good. Pleasure is not a metaphysical assumption, and thus cannot be the basis of a metaphysical calculation, that is, an Ethical calculation. On the same basis, I do not think preference is a terminal value, because preference in itself (rather than on a basis of good and bad preferences) tells us nothing about metaphysical reality.

    I think you will by now have interjected (though of course I’m guessing) that you do not think ethics and morals are the same (kind of) thing. You are, it seems to me (correct me if I am wrong here), advocating an amoral ethic. That requires saying one can separate valuing from goodness – which I cannot yet find a way to regard as non-contradictory.

    “In terms of which good is action-worthy, that’s all of them.”… Well, that’s information, but not as much as I was hoping for. You haven’t so much defined good as said good is good and so do good. Which I can already deduce from what you’ve said.

    As a final note, I might try to describe the virtue ethicist as someone who has (in a limited sense) given up on changing the world, and instead tries to change themselves – which, in a world of contingencies, is a far more assured, if no easier, task. But this formulation is more an epigram than a definition, and if it does not (in a poetical or rhetorical way) make things clearer, then I’ll have to clarify more carefully.

    Thanks for the reply, by the way, there’s nothing like talking to smart people to make my day better, by any standard of good (:

    • Unity of the virtues is the main reason I’m not a virtue ethicist. It just seems pretty implausible that they would always agree.
      If a Gestapo officer asks you if you’ve seen any Jews hiding in your neighbor’s house, the honest thing to do would be to answer truthfully. But that might not be the virtuous thing to do. I can understand why good philosophies can be thrown off by contrived thought experiments, but that one isn’t that contrived.

      The metaphysical assumption utilitarians make would be that increasing pleasure (or fulfillment of preferences) is a good thing. It’s just as much of an unfounded assumption as saying that an increased number of paper clips is a good thing, but we have to make *some* assumption, and by Hume’s criterion they’ll all be unfounded. Even saying that it’s a good thing to be more virtuous would be a metaphysical assumption. Anyone who wants a system of ethics that tells them how to act is going to need something they can call good, and since there’s nothing in the universe clearly labeled as inherently good, it’s going to be at least kind of arbitrary.

      I do think ethics and morals are the same kind of thing. It’s just that I personally use utilitarian ethics and Christian morality. (It’s because I want an ethic that generalizes symmetrically. If I tell everyone to follow my God, they tell me to follow theirs, and I’m right and they’re wrong but we don’t get anywhere. Utilitarianism is, if not objective, at least neutral.) My ethics and morals don’t conflict very often, and when they do it’s typically along the lines of my religion telling me not to do something that utilitarianism would allow. And I’m good at inaction, so that’s not hard.

      Something that gives pleasure can definitely be a bad thing. Values trade off against each other, and something that’s good in one way can be bad in another. Even if something is immoral but pleasurable, though, I’d consider the pleasure to be an upside. Might not make it worth it, but it’s better than if there were no pleasure involved. (Not counting the fact that the pleasure makes it more likely for the mistake to be made.)

      I’m not remotely confident that I can define good. I could list off things I consider to be good, and to be worth acting toward, but I don’t think I could get an actual definition. It’d have to include things both in the world and in me, because both are worth changing.

      • Well, the Gestapo example isn’t an issue (for me at least). Wisdom, Justice, Moderation, and Courage all advise not being honest (honesty is technically not a virtue in itself – for example, one can tell a misleading truth). Love, Hope, and Faithfulness wouldn’t harm those you ought to save, nor help others be murderers. It is against virtue to provide weapons to known, willing murderers, even if that weapon is facts about the current material state of the universe. In the same sense as killing an enemy soldier in war may not be murder, lying to state authorities doing obvious evil is not dishonesty. I see no conflict between virtues in this case.

        I understand Hume’s point – except I disagree with him on the point of rejecting axioms. Axioms are necessary for reason, so we have good reason for believing they are also necessary (and real) for Moral reasoning. The issue is, as Christians, we must know there IS something in the Universe Who calls Himself Good and Faithful. We have by authority what we already had by assumption – a moral, not to mention rational, ground.

        Again, I don’t understand your distinction between ethics and morals. I thought ethics was essentially a system of morals; actions judged by morality. As for symmetry, I see no reason why it ought to be symmetrical. God is certainly not symmetrical, in that sense. God is a Jealous/Zealous God. Virtue ethics, however, are universal: “most people agree on what sins are; they disagree on what sins are excusable.” Not to mention that the Cardinal virtues (at least) are universal and impartial: you can be a coward in any religion, or unjust, and so on.

        Augustine, among others, makes the distinction between valuing something good too much (such as pleasure over human life) and valuing something intrinsically evil (stealing pears for the sake of theft). So I think we both agree that pleasure is usually a positive attribute.

        I suppose it’s a bit flippant to say “define good.” One can’t “define” God, in the sense of laying out parameters; it would be a matter of building a computer to simulate the Universe. But what I wanted was a sort of operational description. Moses asked for a Name, and got “I AM that I AM” – which is a type of definition. So the kind of answer I was looking for is: what is the highest, most encompassing genus you can think of for all things you consider good? Or at least a few generalizations as to the shared attribute of all good things. If that’s still too vast, then I’ll just try not to make assumptions.
