When I was growing up, utilitarians were basically the bogeyman.1 They were the people who would torture you at the drop of a hat or throw Christians to the lions if there were enough ancient Romans who enjoyed it. At some point in tenth grade, there was a debate about the use of nuclear weapons in WWII, during which my internal reaction to someone else’s argument was “but then you get all the standard arguments against utilitarianism” and I couldn’t remember what those were. So I rethought the whole thing and eventually became a consequentialist. Of course, it’d be a long while before I admitted, even to myself, that I had switched sides.
So. Why you should be a consequentialist. You should be a consequentialist because you care about people and you want to make sure that good things happen and bad things don’t.
There’s an old saying that actions speak louder than words. And I don’t trust people’s listening ability enough to say that it’s true, but I think you’ll agree that it usually should be. If you claim to care about something, you should act as if you do. And actually, that’s consequentialism. That’s it; it’s that simple. You decide what you care about, and act in a way that results in that. If you care what happens to other people, then you should do things based on whether the result helps people. So far, so obvious. This absurdly obvious thing, the particular variety of consequentialism where the consequence you’re aiming for is helping people, is known as utilitarianism.
A lot of people don’t like utilitarianism, because “the end justifies the means” is something people say before doing something evil. Or because it sounds like it’s implying that morality is relative and changes based on situation, and a lot of people don’t like that. Or even just because they don’t like the idea of being calculating about ethics.
But none of those reasons are actually based in trying to help people. Saying that consequentialism must be wrong because it says that there are some cases where it would be justifiable to do [insert terrible thing here]? Well, yes. Yes, there are. Denying it can make you feel like a good, moral person in contrast to the evil utilitarians (“I would never do something like that, under any circumstances!”), but “do whatever makes me feel moral” is a much worse rule than you might think. Feeling moral certainly shouldn’t be the priority on important questions.
Look, if Darth Vader credibly claims that he’ll blow up your planet unless you kill a kitten, you’d better not say that killing kittens is morally wrong. That example was chosen specifically so the right answer would be obvious (also, the real Darth Vader couldn’t make that threat credibly, because he’d have to convince you he wouldn’t blow up the planet anyway), but real people really do make decisions based on reasons even worse than that.
So you get conversations like “You want to pick futures that are as good as possible for as many people as possible, right?” “Yes!” “So you’ll sign this organ donor card?” “No!” “Why not?” “I don’t want my organs in someone else’s body; that’s disgusting.”
Even though this person is aware that becoming an organ donor will probably save at least one life, they dislike the idea. They wouldn’t say their disgust is more important than a stranger’s life, but they are deciding based on one and not the other. You care about saving someone’s life? Donate blood (at the cost of slight pain and minor inconvenience) or register as an organ donor (at the cost of like two minutes and probably no other downside for the rest of your life). If you don’t, the noise from your actions is drowning out what you say you believe.
Lots of moral decisions are concerned with looking moral or feeling moral. Consequentialism is concerned with doing good things. Pretty much by definition, consequentialism gets the best results. (Whatever gets the best results, consequentialism says to do that.) And when “results” are measured in things like “human lives saved,” or “illnesses cured,” you can see why results are so important.
I titled this “Why I am a consequentialist,” and the answer to that is because I thought the arguments were convincing. I ended up having to agree that the greatest good for the greatest number does sound better than anything less than the greatest good. And so, assuming I ought to care about people, it would seem that not being a consequentialist is…suboptimal. I realize that kind of thing isn’t very convincing to most people (and those who do find it convincing are usually consequentialists already), but it’s how I got convinced.
I’ve had conversations where people were actually taking seriously the question of whether or not you’re morally required to tell the truth when a wanted murderer knocks on your door, shows you a bloody axe, and asks politely where your family members are so that he can kill them. Fortunately, these people were Christians, so there were at least two ways to convince them within three words that they shouldn’t. But it really should be obvious without any appeal to authority that telling a lie is less important than saving your family’s lives.
I’m having difficulty avoiding strawman arguments, because no matter how ridiculous the example, there are always people who argue it. So rather than say that you have to be a consequentialist or you end up believing ridiculous things, I’ll just say that if you don’t make decisions as a consequentialist then you are at risk of avoidable Bad Stuff happening, and that consequentialism is obviously right in at least some of the cases where it disagrees with opposing ethical systems. Hopefully everyone agrees with that.
But what about victimless crimes? You’d think that consequentialists would give the OK to anything that doesn’t harm anyone, but most people agree that some things are wrong even if they don’t hurt anyone. If someone only eats food that was prepared according to their religious guidelines, and I swap it out for identical food that wasn’t, that arguably doesn’t harm them. The food’s identical. But it’s still a jerk thing to do and I wouldn’t mind saying it’s immoral. Doesn’t this contradict the principle of judging actions by their consequences?
The answer is…kind of. Well, that’s more of a concession than you’d usually get from a rhetorical question. It’s true that if you’re just thinking of maximizing pleasure and/or minimizing pain, as Bentham or Epicurus did, then this doesn’t hurt anyone. Another form of consequentialism is more about satisfying everyone’s preferences (called, appropriately enough, “preference utilitarianism”), and others are stranger and more complicated. Sometimes they give different answers, but I’ll take the opportunity to stress that all of these are way better than non-consequentialist ethical systems. Almost like real people, consequentialists sometimes disagree on how to define “good,” which means they disagree with each other on some questions. But accusing consequentialism of failing to completely define goodness is criticizing it for failing to do something it never aimed to do. The point of consequentialism is to maximize good results however you define good; it doesn’t say you have to value X, Y, or ~Z.
This leads to the other answer to the victimless crime question, which is that, well, would you prefer to live in a world where everyone is being all victimlessly immoral all the time? I’m guessing no, you’d consider that a bad thing. If so, then the consequentialist thing to do might well be to oppose the victimless bad thing. (Admittedly, this does depend on how many people consider it bad and how strongly, what kind of preferences other people have in favor of the thing, etc. It can get messy fairly easily, so it’s simpler to avoid mixing ethics and morals. Maybe I’ll write something about that later.)
The “would you actually prefer that” answer applies to a whole host of objections, like “wouldn’t utilitarians force people into gladiatorial combat for the enjoyment of the greatest number” or “aren’t utilitarians incapable of sticking to a deal, since they’ll back out the second they think their preference not to be bound by it outweighs the other person’s preference to be able to rely on them.” Objections like that can be strong or weak, but asking whether you would actually prefer a world like that helps you check whether the objection even needs to be answered.
Another important thing to realize is that utilitarians are actually pretty normal people. Maybe you got offended once when a utilitarian said they’d absolutely go around suffocating puppies if required to save air in a broken submarine. Maybe they’re better than you at donating money effectively, or they give blood more often, but most of the time they live pretty much like other people. “It all adds up to normality,” as the saying goes, and most reasonable philosophies will say something along the lines of: get up in the morning, go to work, have a life, don’t rob anybody. Most utilitarians have never had to throw a switch on a trolley problem, and they hope they never do. They are not always psychopathic mutants with no empathy.
Before I stop, I’ll say one more thing about utilitarianism. The ends do not necessarily justify the means. Some ends justify some means. Specifically, the means are justified if and only if they are less bad than the alternative, like killing a kitten being justified to save a planet. Most of you will probably agree with that. To you I say, welcome to the shadowy and sinister ranks of utilitarians. You are now suitable for use as an evil monster for frightening small children.
1You think I’m exaggerating, but I’m not. As a kid, I was never scared by monsters under my bed. But I was sometimes told scary stories about utilitarians, and those did scare me.