Utilitarianism Directly Conflicts With Many Important Moral Intuitions
Stop me if you've heard this one before

Imagine you strike up a conversation one day with a fervent defender of Frenchism, an idiosyncratic aesthetic theory that says a painting is beautiful if and only if it was painted in France. You listen to him explain his view for a bit, and then respond by saying, “Well, that doesn’t seem right, since The Starry Night by Vincent van Gogh is very clearly beautiful as well, and he was from the Netherlands.” The Frenchist tells you not to worry, since it turns out the painting you’re talking about was, as a matter of fact, painted in France. (It was completed while Van Gogh was staying in an asylum in Saint-Rémy-de-Provence, in the south of France.) “So,” the Frenchist says with a smile, “it looks like my Frenchist analysis can accommodate your intuitions quite nicely!”
You think for a second. “I mean, sure,” you reply. “I guess I’m glad that Frenchism can accept The Starry Night is a beautiful painting, since Van Gogh happened to paint it in France. But that’s not really why it’s a beautiful painting, right? I mean, what would Frenchism say if Van Gogh hadn’t painted it in France, but back in the Netherlands, or on Mars? Wouldn’t it still be beautiful then?”
The Frenchist stammers for a second before regaining composure. “Yes, you’ve got me there - Frenchism does say The Starry Night wouldn’t be beautiful if he had painted it anywhere else. But if you look at the course of Van Gogh’s career, that counterfactual is pretty unlikely; he never would have painted something like The Starry Night in the periods where he wasn’t working in southern France. And as for Mars, why, he could obviously never have gotten there in the first place! So we don’t really need to worry about what Frenchism says in those very specific and unlikely counterfactuals. When it comes to The Starry Night as it actually exists here in the real world, or even how it might exist in all of the possible worlds closest to us, you can trust Frenchism will give you the right result. What else could you want?”
“But that’s insane!” you respond. “I don’t care whether Frenchism just so happens to assign the proposition ‘The Starry Night is a beautiful painting’ the same truth value I do in the course of daily life. I also care about why a painting is or isn’t beautiful, and in that context, Frenchism just seems obviously wrong, even in cases where it comes to the same conclusion I do. So it doesn’t matter if Frenchism only gives an absurdly counterintuitive appraisal of The Starry Night in situations I’ll never encounter, since those counterfactual appraisals are still enough to show me that even the appraisals I do ultimately agree with are only right by accident. What I need is an aesthetic theory that says The Starry Night is beautiful, period, no matter what.”
Your Frenchist interlocutor begins to frown. “That’s simply demanding too much of an aesthetic theory! Especially since your intuitions about what is or isn’t a beautiful painting developed in the context of the actual world, and not some alternate reality where nineteenth-century painters travel to other planets. Who knows? Maybe if you saw The Starry Night being painted on Mars, or grew up with the Martian version, you would think it wasn’t beautiful on Frenchist grounds. And even if not, you shouldn’t expect that your intuitions here are fine-grained enough to rule out the possibility.”
“Therefore,” he continues, “all that should really matter is what Frenchism says about The Starry Night as it actually is, or as we could reasonably expect it to be. And since, in all those cases, it gets painted in France, Frenchism’s verdict on its beauty is exactly what you’d expect. So there’s nothing unintuitive about Frenchism at all, you see!” The Frenchist gives you a smile as he starts to walk off; you begin to protest, but he just keeps going, nodding smugly to himself.
This, my friends, is how I feel when I read utilitarians.
I think it’s fair to say most people reject utilitarianism mainly because it commits us to radically counterintuitive judgments about how we should act in all sorts of cases. Take, for example, the Transplant scenario as described on Utilitarianism.net:
Transplant: Imagine a hypothetical scenario in which there are five patients, each of whom will soon die unless they receive an appropriate transplanted organ—a heart, two kidneys, a liver, and lungs. A healthy patient, Chuck, comes into the hospital for a routine check-up and the doctor finds that Chuck is a perfect match as a donor for all five patients. Should the doctor kill Chuck and use his organs to save the five others?
Most people think it would be wrong to kill Chuck here, and a pretty hefty portion of those people aren’t really willing to accept any moral theory that says otherwise. But utilitarianism does say otherwise, at least in response to a naive description like the one above. There are probably some utilitarians out there who would just stubbornly insist that we all bite the bullet, but since that seems like a nonstarter rhetorically, most other utilitarians will take a different route by trying their best to accommodate the original intuition, or at least something close to it.
The easiest way to do so is by questioning whether, when you consider all the various second-order effects, killing Chuck would actually result in an overall welfare increase anyway - the idea being that, if it wouldn’t, our intuition that we should let him live would actually be the right one. Quoting again from the link above:
Critics of utilitarianism assume that, in Transplant, the doctor killing Chuck will cause better consequences. But this assumption is itself highly counterintuitive. If the hospital authorities and the general public learned about this incident, a major scandal would result. People would be terrified to go to the doctor.
As a consequence, many more people could die, or suffer serious health problems, due to not being diagnosed or treated by their doctors. Since killing Chuck would not clearly result in the best outcome, and may even result in a terrible outcome, utilitarianism does not necessarily imply that the doctor should kill him.
In my experience, a lot of utilitarians seem to think this is really all you need to do to put the question of non-utilitarian intuitions to bed. If we can show, as a practical matter, that contingent facts about human psychology or society or whatever else will reliably line up utilitarian judgments with non-utilitarian intuitions - that is, as long as we don’t need to worry that accepting utilitarianism means we might find ourselves having to slice up Chuck in real life - then what else is there to worry about?
But the problem with a response like this should be obvious: It’s trivially easy for the critic of utilitarianism to just stipulate whatever external circumstances are needed to ensure that killing Chuck will go off without a hitch, in which case the utilitarians are right back where they started in having to affirm that killing him would be the right move. So while utilitarianism might be able to give pragmatic assurance that actions like these will happen to be evil in most circumstances we’re likely to find ourselves in, it can’t shake the (comparatively massive) set of hypothetical circumstances where the opposite conclusion would still have to be affirmed.
Of course, most utilitarians - including the authors of the page I keep linking to - fully understand this, which is why they have plenty of other responses ready to go. The problem is, none of them make any sense to me. One approach is to shift focus away from the actual outcome of killing Chuck and towards the expected outcome from the doctor’s perspective; since the doctor has no way of knowing that the long-term impact of killing Chuck would be net positive, so the argument goes, it would still be absurdly reckless for her to go through with it.
But as utilitarians themselves consistently point out, there’s a difference between a decision-making procedure and a criterion of rightness, and the intuition in question here is most plausibly understood as dealing with the latter, not the former. I’m sure some people who approve of killing Chuck in an abstract sense would also have the intuition that actually doing it would be too risky. But the very idea of something being too risky presupposes that the goal it’s aiming for is worthwhile in and of itself, so that can’t possibly be the motivating concern for anyone who thinks it would be wrong to kill Chuck regardless.
In other words, when I consider the doctor’s decision to harvest organs from Chuck, I’m not thinking to myself, “Gosh, it would be great if everything works out, but I just don’t think she’s properly considering the risk that it won’t.” Rather, it’s the situation “working out” - Chuck’s organs being successfully harvested and then distributed to others, without any other major impacts down the line - that strikes me as obviously impermissible in the first place. I can always just stipulate in my head that Chuck has no concerned family members, or that administrators at the hospital are so careless and corrupt that the doctor has zero risk of being found out. But when I do this, my intuitive take on the doctor’s decision to kill Chuck gets worse, if anything, not better! So attempts to reframe our intuitive judgments in terms of certainty and proper risk management clearly miss the mark.
Other utilitarians will argue that our intuitions in these cases can’t always be trusted, since those intuitions themselves developed in meaningfully different contexts. The idea is that our intuitions are necessarily going to be coarse-grained “rules of thumb” for navigating life as we actually know it, and that we shouldn’t be surprised when they render false judgments in situations with outlandish or unfamiliar features. And for some intuitions, that’s definitely plausible. Most of us have a strong intuitive sense that, for example, coercing someone else into giving you money is inherently wrong, and this leads some people to naturally conclude that taxation is also illegitimate. But it also seems obvious (to me, at least) that our intuitions about the permissibility of coercion develop in the context of interpersonal relationships, and that we shouldn’t necessarily trust those intuitions in the entirely separate context of how a state or federal government should function.
But similar concerns clearly don’t apply here, since the intuition we’re talking about has to do with exactly the sort of situation this thought experiment describes. The Transplant scenario (and many other classic anti-utilitarian thought experiments) isn’t taking judgments about the rights of persons that were meant to apply to some other kind of dilemma, and instead applying them to some bizarre new edge case we weren’t originally considering; we aren’t asking what to do if the five people really share one single consciousness, or if they’re all the time-traveling younger selves of Chuck, or something crazy like that. We’re literally just asking if it’s permissible to proactively sacrifice one person against their will for the good of five others, which is as close to a straightforward account of the intuition’s content as there could possibly be. It’s the utilitarian, ironically, who wants to bring in contingent circumstances that shift the context of the thought experiment in ways that destabilize our intuitions.
All of these points become much clearer, I think, if we look at them through the lens of another thought experiment I’ll call Abuser. I don’t normally like to use examples that reference things like sexual violence or exploitation, since it can be easy for any debate over utilitarianism to devolve into an arms race where everyone is just trying to come up with the most evil actions imaginable in a way that feels disrespectful. But I think it’s important here because it helps capture a dynamic that can be obscured by the framing of scenarios like Transplant, where someone is trying to make the best of a bad situation that’s been externally imposed on them. When we invert that dynamic, and look at situations where someone proactively causes unnecessary harm for their own benefit, our intuitions shift even further against a utilitarian approach.
Abuser: Imagine a hypothetical scenario in which a patient has fallen into a coma and is completely unresponsive. Chuck, a deviant who works as a night orderly at the patient’s hospital, realizes he’s alone and unsupervised for several hours each night. Should Chuck abuse the comatose patient for his own pleasure?
Here, my intuitive negative judgment is significantly stronger than it was in the case of Transplant, and yet a naive, straightforward utilitarian calculus delivers a similar verdict: Chuck should probably go ahead and do it (assuming he’s guaranteed to enjoy it and the abuse will leave no lasting harm). Meanwhile, all the ameliorative strategies utilitarians offered in Transplant seem to miss the point even harder.
Most obviously, I just don’t care if, by a nice stroke of luck, contingent circumstances give Chuck good reason to abstain from recreationally abusing a vulnerable stranger; what I need, and expect, from any even remotely plausible theory of morality is that Chuck’s sadistic desires will never win out, ethically speaking, over another person’s dignity. Ignoring the fact that utilitarianism could even possibly endorse something like that, solely on the grounds that it probably won’t happen, is like ignoring that a mathematical theorem implies a googol is prime because hey, it’s not like I’m ever going to count that high anyway, right?
In other words, my intuition is not only that Chuck’s abuse would be wrong, but that it would be paradigmatic in its wrongness. And all the external second-order effects that go into swaying the utilitarian judgment one way or another have nothing to do with that intuition; if they did, then switching up those external conditions should realign my judgment, but whether or not Chuck might get caught is totally irrelevant when it comes to how I evaluate his decision (except insofar as I might react even more negatively if he knew he had guaranteed impunity). So it can’t be that my intuitions are misfiring here as a result of being applied outside their proper scope. If anything, the situation described by Abuser is something like a normative “ground zero” for my basic evaluation of sadism or violation, and my judgment in this case is what would structure my response to any more complicated cases down the road.
(It’s worth pointing out that the situation described by Abuser is also immune to another kind of utilitarian response, which is to just arbitrarily jack up the stakes. That might be compelling in the Transplant scenario, where it’s hard to argue that you shouldn’t kill Chuck to save, I don’t know, ten billion people instead of just five. But here, you could stipulate that Chuck experiences unimaginably intense and long-lasting joy whenever he abuses people, and it still wouldn’t move the needle one bit. The reason we think it would be evil for Chuck to take advantage of a vulnerable stranger clearly isn’t because he wouldn’t enjoy it enough!)
So if these sorts of responses don’t work, what’s the situation we’re left with? Well, to me, it looks pretty dire. For one thing, we’ve got a huge set of counterfactual scenarios where utilitarianism commits us to radically implausible conclusions, with no solid reason given to think the judgments they contradict are misguided or in error. But there’s an even bigger problem lurking in the background: It’s unclear to me whether utilitarianism can accommodate my intuitions about how to act at all, even in cases where its practical conclusions nominally line up with mine. It’s not just unintuitive that we might sometimes be compelled to kill innocent strangers or indulge sadistic pleasure - it’s also equally unintuitive that the reason we usually shouldn’t is because of the risk it poses to hospital attendance down the line.
In other words, I don’t see my moral sense as a discrete set of extensional Yes/No judgments that any just-so story about consequences would be capable of grounding; I need a moral theory that doesn’t just tell me evil things are evil, but that they’re evil for the right reason. And when utilitarians tell me that’s too much to ask, I have to admit that my only response is the same incredulous stare that you’d probably give the Frenchist from the story at the start, especially since there are structurally similar non-utilitarian consequentialist theories that have a pretty good shot at avoiding this problem.
Of course, utilitarianism has a lot more going for it than Frenchism does, and being radically unintuitive isn’t the absolute worst sin a theory can commit anyway; maybe there really are some other theoretical concerns that help the utilitarian win out in the end, regardless of how many bullets they’re going to have to bite. But still, it’s always worth pointing out one more time that, despite a bunch of overly optimistic claims to the contrary, utilitarianism definitely is radically counterintuitive - both in specific hard cases, and also just as often when it “gets the answer right.”