I think Utilitarianism indeed seems unconvincing as a way to judge actions/decisions, since it is concerned with states/consequences, and likewise duty/deontology seems a bizarre way to judge states. Both approaches seem somewhat unhelpful for judging what we should like - what is likeable, beautiful, noble, and so on - as might be a question for virtue ethics.
Imagine if a doctor said to you, "I realized I could save 5 patients by harvesting the organs from this guy who came in, but since I knew that would be wrong I did not. All 5 patients died a few hours later. The man who came in was a total stranger and I've known the 5 patients for years and they are all close friends, yet I am so happy the one man is alive and they are dead, just filled with euphoric joy, and I also judge the world to be a much better place." I'd say the doctor was completely crazy pragmatically and probably profoundly wrong morally as well. I don't think this sort of possibility puts the lie to the idea that there might be rules about what one should or should not do in those cases independent of the utility calculus.
Even if it is worse to do evil than to suffer evil, that doesn't mean the evil we suffer is not also evil, or that we aren't in some ways worse off suffering a loss than exacting one. Also, I don't think it's enough to divide up, say, happiness (or pain) and moral duty, treat them as two independent considerations, and say they are somehow unmixed.
To de-escalate your Abuser example, consider the Punster, a sadistic villain who is also a total coward, so their preferred method of inflicting suffering is to tell people truly awful puns and lap up the groans this generates. Most of all the Punster likes telling a real groaner to dementia patients who will forget about the whole thing in a minute or two, so there are guaranteed to be no consequences. It seems like we just wouldn't want to credit his sadistic urges at all: if he never gets a chance to inflict his awful puns on people, we'd say that's not only the morally preferred outcome but also the happier outcome, because we don't consider what the Punster enjoys to be worthy of consideration (the Punster likes it, but we don't consider what he likes likeable), and so on.
I think any theory of morality has grave tensions and practical morality itself is just rife with tricky situations.
Tbh I just don't find thought experiments very persuasive, in full generality. They are interesting and fun sometimes I guess[1], but I wouldn't take them very seriously. I'm not sure why other people like them for serious stuff.
I think they're sometimes helpful for elucidating a position, or for adding color to an otherwise overly abstract discussion. They can occasionally be educational. They're also sometimes funny. For example, in my recent post on the unreasonable effectiveness of mathematics [2], I thought it'd be fun to imagine a shrimp contemplating physics:
> Imagine you're a shrimp trying to do physics at the bottom of a turbulent waterfall. You try to count waves with your shrimp feelers and formulate hydrodynamics models with your small shrimp brain. But it’s hard. Every time you think you've spotted a pattern in the water flow, the next moment brings complete chaos. Your attempts at prediction fail miserably. In such a world, you might just turn your back on science and get re-educated in shrimp grad school in the shrimpanities to study shrimp poetry or shrimp ethics or something.
But importantly I was only using the shrimp to *illustrate* an interesting idea. I don't think anybody should take the shrimp physicist very seriously as an *argument*, and if your opinions about physics or philosophy of science are importantly contingent on your intuitions for small shrimp physicists, I think there's like a screw or five missing in how you should relate to physics.
I guess I'm not convinced that thought experiments in ethics are actually much better? Here are some ways that I think are better for determining a system of ethics:
1) think about the world you want, and then see which ethical systems would more plausibly lead to that world. There are some complications/nuances on questions like whether you're imagining only one person changing their views, everybody-who-thinks-like-you changing their views, all of humanity, or everyone everywhere in the multiverse, including counterpossible people, etc.
2) look at the track record of ethical systems, and choose the ones with the best track record
3) think about which ethical systems are simplest/most beautiful
4) choose which ethical systems most accord with your intuitions in real life choices that you're likely to experience
4a) finding a reflective equilibrium between various different intuitions at different levels of abstraction about practical ethics
4b) adopt the ethical systems that let you justify your pre-existing choices and don't lie to yourself about what you're doing
5) consider non-ethical intuitions/heuristics/preferences you have about areas that you find less confusing than ethics, and systematically rule out systems of ethics that violate too many of them
6) etc, etc.
This "trial by dueling thought experiments" model of truth/the good just doesn't seem like it is a very successful epistemic process, and I find it fairly suspicious as a reasoning method[3].
[1] I ran the world's largest thought experiments memes page for a coupla years so this isn't a hypothetical for me
[3] I suspect many of them fail on their own terms; people talk about the replication crisis in psychology, but I strongly suspect a high fraction of thought experiments would completely fail to replicate with different thought-experimenters, with or without minor changes.
I think you’re generally right about more abstract or complex thought experiments playing an outsized role in this kind of reasoning, but for me at least, examining the Transplant scenario is pretty close to just examining the fundamental moral claim itself. What really matters is just whether or not we should proactively sacrifice one person to save five others; you could easily reframe things to remove any references to Chuck, or a hospital, and the basic point I’m making would still apply to examining the principle itself in that sort of context.
Have you considered Williamson's methodological arguments in his recent book about "overfitting"? They seem relevant to what you are talking about here. Daniel Greco has a post up about it on Substack if you want a taster.
I did just read that! I agree that overfitting is a concern here, and that we shouldn’t necessarily want our moral theories to fit every single intuition we could possibly have. But I do also think that our moral theories should fit some intuitions, or else we have no reason to adopt them in the first place. And so while I’d still take utilitarianism over some hard particularist analysis that had no predictive success anywhere at all, I think there are theories that fit our fundamental intuitions much better than utilitarianism without veering into overfitting in a problematic way.
Well, not comatose people specifically - but I would say that the main feature of being comatose is being totally and completely incapacitated or unable to resist, and *that* is a pretty paradigmatic aspect of abuse as we conceive it.
Good post! As with many people, this is much of what holds me back from full utilitarian endorsement.
One thing to say to lessen the intuition is that my intuition is not just that it's wrong, but that it's bad. I wouldn't hope as a third party that someone else chopped up Chuck, or that some amoral machine did. But in other classic anti-consequentialist cases, like Jim and the Indians, I would hope that someone did the wrong thing. Whether or not this is because it is bad, it suggests that right=good (i.e. consequentialism) is not (wholly) at fault here.
As for the final couple of paragraphs, I think you can make any normative ethical theory sound bad in this way: You shouldn't kill people because it would show that *you* have a bad character, you shouldn't kill one to save five because it's against some rule, etc.
Yeah, I added that throwaway line at the end about other consequentialist theories not having these problems, but soon I'd love to write something about what those might be. I get the sense that a lot of people think consequentialism and utilitarianism are basically identical.
IMO you're on stronger grounds just saying "I literally don't care about the Abuser's pleasure, that kind of pleasure isn't morally valuable, hence goodness isn't utility, QED." All of the cases in this post are ones which are awkward for act utilitarians but which rule utilitarians just bat away with "We choose social rules based on the effects of them being widely known and adhered to; individuals don't get to break those rules whenever they happen to think they can get more utility by doing so, because 99% of the time they're just wrong."
I think the really devastating blow is that all this reasoning about the side effects and risks can be rephrased as advice to the doctor/killer or the Abuser to "only do it if/when you can make absolutely sure no one finds out." Heck, they might even offer specific advice on how to avoid being caught. That's not moral advice!
> In other words, I don’t see my moral sense as a discrete set of extensional Yes/No judgments that any just-so story about consequences would be capable of grounding; I need a moral theory that doesn’t just tell me evil things are evil, but that they’re evil for the right reason.
What reason would you say makes it so that organ-stealing in the Transplant case is wrong?
I'd say organ stealing is wrong because it very reliably does more harm than good, but of course, the Transplant hypothetical by design has whatever set of stipulations is necessary to make this not the case: for instance, we can say that no one will ever find out; that the doctor will perform all six operations on their own, successfully; that all five recipients will recover and never find out where their organs came from; that no one will imitate this in the future; that the surgeon knows all this, certainly and correctly; and so on and so on. This is fine - and if all these stipulations are granted, utilitarians will say that the organs should be stolen, which is doubtless a radically counterintuitive conclusion. But I don't see it as a problem with a theory, that it yields radically counterintuitive conclusions about a radically counterintuitive case.
But supposing we reject this conclusion and say, we need a theory which says organ-stealing is evil "for the right reason." What is this reason? And can we ensure it doesn't yield counterintuitive conclusions about counterintuitive cases? Can it even possibly be impervious to this?
I definitely agree that we should ultimately be on the lookout for the perfect theory that matches all these intuitive judgments - or at least gets close enough that we feel comfortable revising the ones it doesn't accommodate - and I don't think that theory exists right now. But on a personal level, I would generally say what makes killing Chuck wrong is somewhere in the vicinity of:
1) He's being treated as a means rather than being respected as a person,
2) the doctor is betraying her duty as a doctor, and
3) his right to bodily autonomy is being violated.
Like I said, I don't think any one of those is exactly right, but I think the right answer is going to be something related to that general set of features. I'm personally partial to a general virtue ethics account here that leans more towards the first two.
Suppose a doctor could prevent Europe from being nuked by removing a patient's fingernail - should they? If so, why? If not, why not?
I think they should in that case, for sure. But that same sort of reasoning doesn't work in the inverted case - you could stipulate that someone gets an arbitrarily huge benefit from sadistic joy, and that doesn't justify engaging in it. So whatever is going on in the case you're bringing up, it can't just be a result of a fundamental utilitarian calculus, or at least that isn't sufficient to determine for certain what the right move would be. Personally, I think we're generally permitted to override typical constraints when the outcome of not doing so is sufficiently bad, at least in certain circumstances. Saying that non-utilitarian concerns matter doesn't mean they're necessarily decisive, although they probably should be in some cases (like the case with the Abuser).
But **why** should they? If they should to prevent Europe from being nuked, then surely you agree, it can't be the case that an act is wrong merely because a right to bodily autonomy is being violated, or because a doctor is violating their duty, or because a person is being treated as a means rather than being respected as a person - because this would violate all three.
So then, what test is **actually** being employed to make the determination that an act is wrong? And why wasn't it in the 3-part list? What methodology yields the **real** test, and how does one argue for it?
Sure, you don't need to believe that those features are necessarily sufficient to make some act wrong overall. You just need to believe that they count strongly against the act and render it wrong in the absence of any countervailing features. Saying "X is impermissible because of Y" doesn't mean Y could never be present while X is permissible, so long as other features could also shift - that would be like saying that "My car is expensive because it's a Lamborghini" rules out the possibility of any Lamborghini not being expensive; but a Lamborghini that's been totally trashed or set on fire or whatever wouldn't be expensive even though it's a Lamborghini. I guess you could demand that they specify "My car is expensive because it's a Lamborghini and it has no other features that outweigh that fact," or "X is impermissible because of Y and the fact that there are no other features that outweigh Y," but that should be implied on pretty much any non-deontological framework. The point is that you don't need to affirm something is wrong merely because of X, Y, and Z to still think that X, Y, and Z, combined with a variety of other features or their absence, explain its wrongness in this case.
But it sounds like what you are describing is a decision procedure applying some given set of values, rather than an explanation of the set of values. What I am asking is, what methodology generates your values? How do we check whether, for instance, bodily autonomy has intrinsic weight as a value, and how do we figure out how much it's got?
I think the decision procedure you describe is perfectly sensible, given some set of values - only, I think that utilitarianism supplies an excellent methodology for generating those values. I think it's generally wrong to violate bodily autonomy because doing so makes people's lives worse. A lot worse. When this conflicts with some other value I hold, I resolve that conflict in the manner which I think is most likely to improve people's lives. So I have no trouble saying that preventing the sun from exploding outweighs bodily autonomy - I resolve that conflict by looking at how the anticipated consequences will affect people's lives.
But if there is some other, better way of figuring out values - what is it?
I don’t think the circumstances are counterintuitive at all. I have almost zero intuition about the likelihood of a surgeon getting caught in this act—it doesn’t strike me as unintuitive to imagine that a surgeon could successfully kill a patient without anyone knowing. Dr. Death was a thing. Moreover, the stipulations don’t even have to be certainties—so long as, in expectation, the odds of those things are low enough, then the utilitarian recommendation to harvest the organs goes through.
As for what would constitute a better/more fitting reason for why the organ harvesting is wrong, BSB hit on several in his reply, but to generalize: it seems to me (and I think to most people?) that the character of what is wrong about harvesting a patient's organs to save 5 others is that the doctor has done something to him that violates said doctor's obligations to that patient. There is a deontic violation here, it seems, and a theory which cannot articulate this deontic violation seemingly fails to properly account for the moral logic of the scenario.
If a doctor could prevent the sun from exploding by violating their obligations to a patient, should they? If so, why? If not, why not?
Ahhh but you see, that gets to whether the violation of a deontic constraint can ever be justified, which is a whole separate question. What BSB is pointing out in this essay is that a theory which cannot even properly articulate that a deontic constraint exists and can only reference consequences/effects is missing something about the nature of wrongness in this instance.
To answer your question directly though: I do think deontic violations can be outweighed and/or granted an exception by sufficiently dire consequences, but that does not mean that the duties are themselves reducible to consequences.
But then, if a deontic constraint can permissibly be violated, the fact that a choice violates a deontic constraint is no longer sufficient to explain its wrongness - one must say, this violates the constraint, **and also is wrong** - which raises (for me) three questions - first, what else is necessary? And second, if there is some other balancing test we must employ, of what consequence is the constraint? And third - what methodology derives these constraints in the first place?
Okay, this will be long. But first, to clarify: I would put it more that deontic constraints are rules, and rules can have exceptions. You can think of these exceptions in one of two ways:
A) An exception to a rule is merely another conditional added to the antecedent of a moral rule [if X then Y], where X is a set of conditions and Y is an act. (Often it's more like [if X then !Y], since constraints are usually negative.)
B) An exception to a rule is itself a form of meta-rule that overrides the original rule when its own conditional is met.
These are functionally isomorphic, but the latter construct can be a more useful model to think about in some circumstances. For instance, the "okay but what if the sun would explode if X" objection can be thought of as a larger meta-rule that exempts a lot of other rules, given how dire the consequences are. You can even generalize this meta-rule to just be a broader "utility disaster" exception: an exception to a deontic constraint can be granted if the negative utility produced by following it is *severe enough* (defining "severe enough" is a very big question ofc; obviously a grain of sand in an eye isn't enough, but the sun exploding is, so…somewhere in between those).
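To make the isomorphism a bit more concrete, here's a minimal sketch of reading B in Python (every name and number in it - the `utility_disaster` rule, the `SEVERE_ENOUGH` threshold, the toy situations - is just an illustrative placeholder, not a worked-out theory):

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A "situation" is just whatever facts are morally relevant; a dict is enough for illustration.
Situation = dict


@dataclass
class Rule:
    name: str
    applies: Callable[[Situation], bool]                     # the antecedent X
    exceptions: List["Rule"] = field(default_factory=list)   # reading B: exceptions as meta-rules

    def forbids(self, s: Situation) -> bool:
        """True means the act the rule covers is forbidden in situation s."""
        if not self.applies(s):
            return False
        # Reading B: an exception is a meta-rule that overrides this rule when its
        # own conditional is met. Reading A would instead fold `not e.applies(s)`
        # into the antecedent itself; both give the same verdicts.
        return not any(e.applies(s) for e in self.exceptions)


# Hypothetical "utility disaster" exception: a severe enough disutility of compliance
# lifts the constraint. The threshold is a pure placeholder - locating it is the open question.
SEVERE_ENOUGH = 10**9

utility_disaster = Rule(
    name="utility disaster exception",
    applies=lambda s: s.get("disutility_of_compliance", 0) >= SEVERE_ENOUGH,
)

no_organ_harvesting = Rule(
    name="don't harvest a patient's organs",
    applies=lambda s: s.get("would_harvest_patient", False),
    exceptions=[utility_disaster],
)

print(no_organ_harvesting.forbids({"would_harvest_patient": True,
                                   "disutility_of_compliance": 5}))       # True: constraint holds
print(no_organ_harvesting.forbids({"would_harvest_patient": True,
                                   "disutility_of_compliance": 10**12}))  # False: exception triggers
```

Folding `not utility_disaster.applies(s)` into the antecedent of `no_organ_harvesting` instead (reading A) gives exactly the same verdicts, which is the "functionally isomorphic" point.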
Okay with that rough model in place, your questions:
1. What else is necessary: The rule which establishes/defines the duty must be violated, and no exceptions to that rule may trigger. With our utility disaster exception, it means that the negative utility is not "severe enough" to justify an exception. There could be other exception criteria, but the point is that none of them trigger.
2. What is the point of a constraint if it has a balancing test: Here, the reframing as rule exceptions is helpful. Ofc the rule is important even if it has exceptions, because it applies when those exceptions do not trigger. Even if we go back to the balancing test framework: the constraint serves as a major weight on one side of the test—without it you only have utility, which would lead to different answers. Moreover, per BSB's essay, it seems like this actually correctly captures the character of what has gone wrong in these situations.
3. How do we derive these constraints/rules: This is the big one. After all, everything I've said above could very justifiably be objected to as a load of ad hoc bullshit. Why do we have a utility disaster exception but not a general utility exception (i.e., ignore deontic constraints if the utility generated by violating them exceeds that of maintaining them, even if only barely)? How do we decide what "severe enough" is? Why these rules/meta-rules vs. any other set? What is the higher principle binding this all together? And the answer, imo, is some sort of constructivism based around a universalizability test: would you warrant that all rational beings acted according to these rules? Would you prefer to live in the resulting world? Note that this process works for exceptions/meta-rules too: you can ask if you would universally warrant an exception to other rules if circumstances make it such that the sun will explode if you don't violate the rule, and…yeah, I think that universalizes just fine!
I'm gonna stop here because uhhh, this has gotten way too long, but hopefully you get the gist. If you take anything away from this, it's that defining an exception to a rule is not the same as invalidating it, so long as the exception is itself defined rigorously and its limits are clear. There is a difference between saying "killing is (generally) wrong, but there are exceptions" vs. "killing cannot be wrong because there are exceptions."
I'm perfectly fine with recommending that people live in accordance with a set of well-defined rules (this was the topic of my most recent article: https://substack.com/home/post/p-169764193), but for me, the answer to "how do we derive these constraints/rules" - and how do we derive exceptions - is simple: we derive them by checking whether people living in accordance with a rule, or permitting some category of exception, generally promotes wellbeing. This is why I think utilitarianism works best as the high-level abstract grounding for more on-the-ground, day-to-day systems like virtue ethics and deontology - these are really effective means to the end utilitarianism describes. (cf. https://substack.com/@gumphus/p-168569193)
There is no sane analysis which disregards consequences that can demonstrate, a priori, that the sun exploding is bad. Likewise, in the trolley problem, the goodness of pulling the lever is **directly and exclusively determined** by the consequences of doing so. There is nothing intrinsically wrong with pulling levers. So for me, any analysis just **has** to arise out of consequences - it can end up elsewhere, but this is the inescapable set of morally relevant facts. Nothing has ever even remotely seemed, to me, like a plausibly workable alternative.
Utilitarianism has always struck me as a "morality" (there's nothing moral about it) for socially-inept dweebs who understand numbers better than people and think everything real can be calculated. This must be why it's so popular with the so-called "Rationalist"/EA crowd.
The shrimp debate was particularly silly, because the whole idea that some number of shrimp lives (maybe just one) are equal to a human life is crazy to begin with. Normal people (not utilitarians) consider life more valuable the closer it is to their own. Humans matter more to us than shrimp; my family means more to me than other humans. Shrimp, if they could talk, would presumably favor shrimp over humans, and that would be completely understandable. A theory of morality needs to accept this; one that denies it doesn't deserve to be taken seriously. It's a morality for machines, not for people.
Your Abuser scenario is truly disturbing, as of course it was meant to be. And it could apply to much less extreme cases than people in comas who may never awaken. I've read articles in the media about real-life cases where girls in their early teens were drugged and raped, and apparently didn't realize what had happened until someone else told them. (In one case, she found out when she went to school and other kids were laughing at her and calling her a slut, which, needless to say, is absolutely disgusting behavior.) So would it be okay to drug and rape someone as long as she never finds out she was raped? I don't think so. But utilitarianism seems to have trouble reaching such a conclusion. Where is the negative utility if the rapist benefited and the victim never even knows they were victimized?
Reasonable criticism - perhaps it will even be compelling enough to momentarily distract the utilitarians from their shrimp
Oh man I should have built the whole thing around a shrimp-themed thought experiment instead
Good post - I think a lot of utilitarian arguments do indeed try to sidestep certain intuitions instead of confronting them head on. I think the Abuser scenario is very possibly a big hole in utilitarianism.
But I'm not sure. I have a lot of questions. And the question that's at the forefront of my mind after reading this post is:
If a Doctor risks his medical license and his freedom to murder one person in order to save five others (who *certainly* would have died otherwise), shouldn't we consider that Doctor to be a hero?
I think that's a tough question, since the obvious default option should be for the doctor to sacrifice himself instead. But maybe he can't do that, because he's not a match or something? In that case, I don't know if I would consider him a hero, but I do think I would feel *better* about the situation if he knowingly took that risk, which is ironic since that sort of risk is what utilitarians say makes the action bad in the first place!
"which is ironic since that sort of risk is what utilitarians say makes the action bad in the first place"
That's interesting. I suppose utilitarianism/consequentialism (do they mean the same thing?) is about encouraging you to do the right thing, which is different than measuring a person's worth or heroism. Sacrificing one to save five has probabilistically better consequences when there's no personal risk. The personal risk makes it slightly more excusable for a doctor to not choose that option, but it's the same thing that makes the action more commendable, because we can recognize that actions are more heroic when there's more at stake personally. I don't think that's contradictory? Though hard to put into words...
I agree that utilitarianism is sometimes counterintuitive, but I generally think that the intuitions other views conflict with are more methodologically suspect. I elaborate on this more here, and I also explain why I think it's much less costly to just bite the bullet on the organ harvesting case https://benthams.substack.com/p/the-ultimate-argument-against-deontology
Regarding the coma case, you can definitely have a view on which pleasure derived from wicked acts isn't good for a person and on which the person is harmed by being raped in the coma. In fact, I think this is the commonsense thing to say--it's pretty intuitive that the reason it's wrong to rape people is that it's bad for them, that it harms them. Maybe this gives some reason to abandon strict hedonistic utilitarianism but I don't think it gives much reason to abandon objective list utilitarianism.
Certainly it's no reason to abandon consequentialism because our intuition is that the act is bad not just wrong. So I think people should have an axiology holding the act makes the world worse, and then it's not a counterexample.
I definitely agree you can avoid a lot of these issues by abandoning hedonic fundamentalism and moving towards objective list views - but once you've done that, I'm not really sure what makes the resulting view "utilitarian" in any meaningful sense, beyond an orthogonal semantic claim that all those things broadly constitute welfare. Especially since, imo, adopting that sort of standard destabilizes a lot of the judgments that people bring up in favor of utilitarianism in the first place. Better to just cut the cord and become a multi-valued consequentialist in general.
What makes it utilitarian is that it meets the definition of utilitarianism: holding that the right act is the one that maximizes aggregate well-being (or, if we want to be precise, says that whichever of two acts you have stronger reason to perform is the one that raises aggregate well-being more).
It's distinct from consequentialism in that it says the only important consequences are in terms of aggregate well-being, so it's incompatible, for instance, with desert and views on which nature has intrinsic value.
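Put a bit more formally (just a rough gloss of the definition above, where $w_i(a)$ stands for individual $i$'s well-being if act $a$ is performed and $R(a)$ for the strength of your reasons to perform $a$):

```latex
% Maximizing formulation: the right act is the one that maximizes aggregate well-being.
a \text{ is right} \iff a \in \arg\max_{a' \in A} \sum_i w_i(a')

% Comparative formulation: stronger reason tracks greater aggregate well-being.
R(a) > R(b) \iff \sum_i w_i(a) > \sum_i w_i(b)
```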
Well sure, it's not literally conceptually equivalent to some other view. But the point is that the broader or more controversially thick your conception of "well-being" gets, the less distinct any theory grounded in it becomes - you'd think most utilitarians would want their theory to have more practically applicable content than just "any form of consequentialism where you could tangentially tie something back to the experience of a conscious creature," especially if you're allowed to bring in judgments that conflict with our own self-reports. Of course, that's a problem with pretty much every other theory too. But utilitarianism is often sold on precisely the point that it avoids that kind of messiness!
I mean, utilitarians will generally have more specific theories of well-being. Mere utilitarianism doesn't have to be very specific. But mere utilitarianism has various important implications. For instance, on any version of it, you should kill one to save five and there are very strong obligations to give away large sums of money. Also no special obligations or desert. It strikes me as crazy to think that a theory saying all those controversial things is inadequately specific!
Serious question: Could a utilitarian who adopts an objective list outlook say that, for example, caring for a family member is good for you in a way that caring for someone else is not? Or that receiving what you deserve is good for you and avoiding punishment for wrongdoing is not? Then you could have room for special obligations or desert (or a bunch of other things). It's hard for me to see where exactly the brakes are on something like this.
Yes, all of that is coherent, but it doesn't seem that plausible (it seems like people who believe in desert generally want to say it's morally good for bad people to have their lives go badly).
I'm not sure I follow your example at the top. If I hadn't read the title to the piece, I would have assumed the Frenchist was a stand-in for a deontologist and you were championing utilitarianism, not poking holes in it. "I also care about why a painting is or isn’t beautiful" sounds like something a painting-utilitarian would say.
I think *everyone* would care about why something is or isn't beautiful (or good) - if I told a utilitarian they should kill Chuck and harvest his organs because killing people named Chuck is a deontological duty, then I bet they wouldn't be satisfied either haha.
Being a consequentialist myself, I’m more sympathetic to utilitarianism’s verdicts than you are (I think the doctor should kill Chuck in the Transplant scenario) - that being said, I do reject core tenets of utilitarianism - such as welfarism and impartiality (and in the case of classical utilitarianism specifically a moral symmetry between happiness and suffering) - so many of the outcomes it yields are radically counterintuitive to me as well.
It does seem obviously crazy to say that rape is morally permissible - or even good - as long as it doesn’t lead to any additional suffering or loss of pleasure. Even crazier is to propose that an instance of rape that does lead to severe suffering for the victim can be good, too, if the perpetrator enjoys it enough.
I also find the tendency to dismiss purported counterexamples on the basis of the scenarios being “unrealistic” very frustrating. I touch on this tendency a bit in an upcoming article I’m writing on speciesism - and I plan to write a whole article specifically about this phenomenon later.
People like this will insist that weird worlds will have weird results - and that’s certainly true to an extent. In my article on moral offsetting, I presented a scenario in which every time a human blinked, an innocent person from a distant galaxy was teleported into a brazen bull. Surely if it turned out this was the case, we should radically revise our views on the moral status of blinking.
But there is a limit to this. If it turned out that the Looney Tunes live underground like they do in the movie Space Jam, this shouldn’t lead us to alter our moral evaluation of hunting toddlers for sport. I think the crux here is that while it’s fine for weird worlds to have weird results, they shouldn’t have *absurd* results. Revising our view of blinking, while weird, isn’t an absurd result - but revising our view of hunting toddlers would be.
The move of dismissing counterexample cases on the basis of them being “unrealistic” seems to always be a last-ditch effort from people with implausible ethical views who can’t muster anything in support of their view.
Whether it’s a natural law theorist insisting that it would be unjust to tell a white lie to prevent trillions from being tortured, a utilitarian claiming it would be a moral obligation to light kittens on fire as long as a sufficient number of people got pleasure from watching it, or an anti-vegan claiming it would be good to factory farm humans as long as it increased species fitness - I just can’t take it seriously when an adherent of an ethical view claims that we should ignore the insane entailments of their view as long as those results obtain in “unrealistic” scenarios.
I think these cases are about as clear-cut as it gets morally - if you’re going to tell me that I actually can’t have confident moral judgments in these cases because they take place in possible worlds too distant from us - I’m sorry, I just don’t buy that! It’s a weird form of moral skepticism - I see no reason why we’d somehow be precluded from making confident ethical judgments about actions in possible worlds drastically different from our own - in fact we routinely do this already when we make negative assessments of the actions of a villain in some work of fiction. I don’t think our moral judgments are somehow epistemically compromised when we’re doing this - so I don’t see why they’d be in these cases either.
Yeah, I totally agree that the “weird worlds give weird results” line is a cheap way to get out of the problem. I think a distinction needs to be made between “first-level weirdness,” which is just some result that seems strange or unfamiliar but makes sense in the context of the weird world, and “second-level weirdness” that bleeds back into our own world. “Blinking would be morally evil in a world where it causes people to be tortured” is a weird result, but the weirdness is contained within the weird situation. “Rape would be acceptable if a utility monster existed who really, really enjoyed it” is a weird result that carries weird implications back into our world!
I think utilitarianism seems unconvincing as a way to judge actions/decisions, since it is concerned with states/consequences, and likewise duty/deontology seems a bizarre way to judge states. Both approaches seem somewhat unhelpful for judging what we should like - what is likeable, beautiful, noble, and so on - as might be a question for virtue ethics.
Imagine if a doctor said to you "I realized I could save 5 patients by harvesting the organs from this guy who came in, but since I knew that would be wrong I did not. All 5 patients died a few hours later. The man who came in was a total stranger and I've known the 5 patients for years and they are all close friends, yet I am so happy the one man is alive and they are dead, just filled with euphoric joy, and I judge the world to be a much better place." I'd say the doctor was completely crazy pragmatically, and probably profoundly wrong morally as well. I don't think this sort of possibility puts the lie to the idea that there might be rules about what one should or should not do in those cases, independent of any utility calculus.
Even if it is worse to do evil than to suffer evil, that doesn't mean the evil we suffer is not also evil, and in some ways we are worse off suffering a loss than exacting one. I also don't think it's enough to divide up, say, happiness (or pain) and moral duty, treat them as two independent considerations, and say they are somehow unmixed.
To deescalate your abuser example, consider the Punster, a sadistic villain who is also a total coward, so his preferred method of inflicting suffering is to tell people truly awful puns and lap up the groans this generates. Most of all, the Punster likes telling a real groaner to dementia patients who will forget about the whole thing in a minute or two, so there are guaranteed to be no consequences. It seems like we just wouldn't want to credit his sadistic urges at all: if he never gets a chance to inflict his awful puns on people, we'd say that's not only the morally preferred outcome but also the happier outcome, because we don't consider what the Punster enjoys to be worthy of consideration (the Punster likes it, but we don't consider what he likes likeable), and so on.
I think any theory of morality has grave tensions and practical morality itself is just rife with tricky situations.
This article is excellent. It is the clearest articulation I have seen of a core problem with utilitarianism.
Tbh I just don't find thought experiments very persuasive, in full generality. They are interesting and fun sometimes I guess[1], but I wouldn't take them very seriously. I'm not sure why other people like them for serious stuff.
I think they're sometimes helpful for elucidating a position, or to add color to an otherwise overly abstract discussion. They can be occasionally educational. They're also sometimes funny. For example, in my recent post on the unreasonable effectiveness of mathematics [2], I thought it'd be fun to imagine a shrimp contemplating physics:
> Imagine you're a shrimp trying to do physics at the bottom of a turbulent waterfall. You try to count waves with your shrimp feelers and formulate hydrodynamics models with your small shrimp brain. But it’s hard. Every time you think you've spotted a pattern in the water flow, the next moment brings complete chaos. Your attempts at prediction fail miserably. In such a world, you might just turn your back on science and get re-educated in shrimp grad school in the shrimpanities to study shrimp poetry or shrimp ethics or something.
But importantly I was only using the shrimp to *illustrate* an interesting idea. I don't think anybody should take the shrimp physicist very seriously as an *argument*, and if your opinions about physics or philosophy of science are importantly contingent on your intuitions for small shrimp physicists, I think there's like a screw or five missing in how you should relate to physics.
I guess I'm not convinced that thought experiments in ethics are actually much better? Here are some ways that I think are better for determining a system of ethics:
1) think about the world you want, and then see which ethical systems would more plausibly lead to that world. There are some complications/nuances here, like whether you're imagining only one person changing their views, everybody who thinks like you changing their views, all of humanity, or everyone in the multiverse, including counterpossible people, etc.
2) look at the track record of ethical systems, and choose the ones with the best track record
3) think about which ethical systems are simplest/most beautiful
4) choose which ethical systems most accord with your intuitions in real life choices that you're likely to experience
4a) finding a reflective equilibrium between various different intuitions at different levels of abstraction about practical ethics
4b) adopt the ethical systems that let you justify your pre-existing choices and don't lie to yourself about what you're doing
5) consider non-ethical intuitions/heuristics/preferences you have about areas that you find less confusing than ethics, and systematically rule out systems of ethics that violate too many of them
6) etc, etc.
This "trial by dueling thought experiments" model of truth/the good just doesn't seem like it is a very successful epistemic process, and I find it fairly suspicious as a reasoning method[3].
[1] I ran the world's largest thought experiments memes page for a coupla years so this isn't a hypothetical for me
[2] https://linch.substack.com/p/why-reality-has-a-well-known-math
[3] I suspect many of them fail on their own terms; people talk about the replication crisis in psychology, but I strongly suspect a high fraction of thought experiments would completely fail to replicate with different thought-experimenters, with or without minor changes.
I think you’re generally right about more abstract or complex thought experiments playing an outsized role in this kind of reasoning, but for me at least, examining the Transplant scenario is pretty close to just examining the fundamental moral claim itself. What really matters is just whether or not we should proactively sacrifice one person to save five others; you could easily reframe things to remove any references to Chuck, or a hospital, and the basic point I’m making would still apply to examining the principle itself in that sort of context.
Have you considered Williamson's methodological arguments in his recent book about "overfitting"? They seem relevant to what you are talking about here. Daniel Greco has a post up about it on Substack if you want a taster.
I did just read that! I agree that overfitting is a concern here, and that we shouldn’t necessarily want our moral theories to fit every single intuition we could possibly have. But I do also think that our moral theories should fit some intuitions, or else we have no reason to adopt them in the first place. And so while I’d still take utilitarianism over some hard particularist analysis that had no predictive success anywhere at all, I think there are theories that fit our fundamental intuitions much better than utilitarianism without veering into overfitting in a problematic way.
I wrote a long comment responding to this that turned into a post!
https://open.substack.com/pub/outpacingzeno/p/utilitarianism-is-not-virtue-ethics?r=6mh3s&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Very interesting! I restacked it with some thoughts.
Did your intuitions about abuse evolve / get formed to deal with comatose people? Why would you expect that to be the paradigmatic case?
Well, not comatose people specifically - but I would say that the main feature of being comatose is being totally and completely incapacitated or unable to resist, and *that* is a pretty paradigmatic aspect of abuse as we conceive it.
Good post! As with many others, this is much of what holds me back from full utilitarian endorsement.
One thing to say to lessen the intuition is that it seems like my intuition is not just that it's wrong, but that it's bad. I wouldn't hope as a third party that someone else chopped up Chuck, or that some amoral machine did. But in other classic anti-consequentialist cases, like Jim and the Indians, I would hope that someone did the wrong thing. Whether or not this is because it is bad, it suggests that right=good (i.e. consequentialism) is not (wholly) at fault here.
As for the final couple of paragraphs, I think you can make any normative ethical theory sound bad in this way: you shouldn't kill people because it would show that *you* have a bad character, you shouldn't kill one to save five because it's against some rule, etc.
Yeah, I added that throwaway line at the end about other consequentialist theories not having these problems, but soon I'd love to write something about what those might be. I get the sense that a lot of people think consequentialism and utilitarianism are basically identical.