Subjectivism Undermines Basic Aspects of Moral Reasoning
Properties Do What Preferences Don't

There’s something I need to get off my chest: I’ve never really liked the TV show Twin Peaks, and I honestly don’t know why. On paper, it should be a perfect fit for me, since I love pretty much everything else David Lynch ever had a hand in, and I’m a big fan of both surrealist horror and crime dramas in general. Plus, I grew up right around where the series was filmed, and it was always playing in the background any time my circle of edgy Obama-era art dorks got together. But still, for some reason, I never shared their enthusiasm back then, and even now I can’t get through more than an episode or two before I bail and just watch Blue Velvet again instead. There’s nothing in particular I can point to as an explanation here - every aspect of the show sounds like something I’d enjoy if you describe it to me in words - but no matter how hard I try, I simply can’t make myself enjoy it.
So why bring this up? It isn’t just to confess a dark secret I’ve been hiding from my hipster friends for years now. It’s also to illustrate the general point that preferences have no logical entailments - that is, the fact that you enjoy or approve of one thing never necessarily implies that you’ll take an identical stance on something similar, or even take an identical stance on the exact same thing at another time or in another context. This is true for aesthetic tastes and preferences for different activities or experiences, but it’s also (unfortunately) true for the subjective stances we take in regards to very important real-world matters, like the behavior of certain politicians or the appropriateness of certain social practices or institutions. You can hop on Twitter right now and see this very plainly. Just spend five minutes scrolling through the discourse on any hot-button issue that crosses party lines and you’ll have all the proof you need that attitudinal consistency isn’t exactly our strong suit.
Okay, sure, but why bring that up? Well, a few months ago I wrote a piece on moral discourse, and how a subjectivist understanding of moral language and properties undermined the most plausible explanation we have for why the essential features of that kind of discourse are justified. Today, I want to take that critique a bit further and argue that subjectivism undermines not only moral discourse as a social practice, but also the structure of moral reasoning itself. The basic concern is this: If there aren’t any logical entailments between different preferences, then that puts a very restrictive upper bound on what conclusions we’re justified in drawing when we try to reason solely with regard to either our own stances or the stances of someone else. To start making my case, I’ll use one of philosophy’s most famous syllogisms:
It’s wrong to tell lies.
If it’s wrong to tell lies, then it’s wrong to get your little brother to tell lies.
Therefore, it’s wrong to get your little brother to tell lies.
Originally, this syllogism was used as an example by the philosopher Peter Geach to criticize expressivist theories that claimed moral statements were non-cognitive expressions of evaluative attitudes rather than descriptions of actual properties. The problem, Geach argued, was that the sentence “it’s wrong to tell lies” as it occurs in the conditional lacks the same expressive content as when it occurs as a stand-alone premise, and therefore its meaning is obscure or undetermined under an expressivist lens. Subjectivism doesn’t face this same problem, since “It’s wrong to tell lies” can be given the same subjective gloss in both premises. Nevertheless, subjectivism will still struggle to make sense of syllogisms like these, as we can see when we “translate” the premises into statements about the stances of the agent speaking:
I disapprove of telling lies.
If I disapprove of telling lies, then I disapprove of getting your little brother to tell lies.
Therefore, I disapprove of getting your little brother to tell lies.
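For reference, both versions of the syllogism instantiate the same valid inference rule, modus ponens, so whatever goes wrong for the subjectivist can't be a matter of the inference itself; it has to concern the truth of the premises. Schematically (standard notation, nothing specific to subjectivism):

```latex
% P: I disapprove of telling lies.
% Q: I disapprove of getting your little brother to tell lies.
% Modus ponens: valid under any interpretation of P and Q.
P,\; P \rightarrow Q \;\vdash\; Q
```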
The problem with this new syllogism is very simple: Given undeniable empirical facts about the way human psychology works, either we have no grounds for affirming the truth of the first premise, or else we can know for a fact that the second is false.
Let’s start by looking at the first premise. One of the most straightforward ways we could interpret the statement “I disapprove of telling lies” would be to take it as the claim that you disapprove of every actual instance of lie-telling. But a statement like that is always obviously false, because it’s just a basic empirical fact about human cognition that no one could possibly be aware of every instance of lying (or stealing, or killing, or pretty much anything else) throughout human history. And since you can’t take a disapproving stance towards a particular event you have no idea occurred, it therefore isn’t the case that anyone currently holds a stance of disapproval towards every actual instance of any remotely widespread behavior. So if the claim that it’s wrong to tell lies is meant to apply to every actual instance of lying - which, I would argue, is at least sometimes the intended scope of “It’s wrong to X” - then it seems like that claim is guaranteed to be false given subjectivism.
But maybe I’m being uncharitable here, and the disapproval towards lying is meant in a more hypothetical sense - something like the claim that, were the speaker to take any stance towards a particular instance of lying, it would be a stance of disapproval. A claim like this isn’t guaranteed to be false, since it’s at least possible that someone could, in fact, go their entire life without approving of a single lie. But it’s also the sort of thing we could never know to be true, since no one can be sure their preferences will always have that kind of regularity. Claiming it’s a fact that you’ll disapprove of every lie you ever become aware of is no more plausible than claiming it’s a fact that you’ll dislike every polka song you ever hear. Your stances just aren’t that reliable!
I would imagine all of us have had an experience where we reacted emotionally or attitudinally to an important situation in a way we wouldn’t have predicted: Making a split-second decision to offer support to someone you previously saw as your enemy, realizing all at once that you’ve lost conviction in the political beliefs you held when you were younger, whatever. But even if you happen to have had impeccable self-knowledge so far, that’s still only, at best, some inductive evidence for the hypothesis that those stances will stay predictable in the future, and that inductive evidence will need to be weighed against the tremendous amount of evidence we have on the other side that preferences do often shift in unpredictable ways. So if “It’s wrong to tell lies” means something like “I’ll disapprove of any lie I ever hear about,” then no one, not even the speaker, can assert it as anything other than a working hypothesis, and an overconfident one at that.
For this reason, it’s much more plausible to interpret the first premise as expressing disapproval towards the general concept of lying, rather than any particular instance. But then the subjectivist is immediately faced with a new problem, which is that the second premise becomes obviously false. This is because, as we’ve seen, there’s no logical entailment whatsoever between any two preferences or stances. It’s true that general disapproval towards lying might correlate with general disapproval towards getting your little brother to lie, as a contingent fact about human psychology. But what rational force is that correlation supposed to have? Logically speaking, the two attitudes have nothing to do with each other. It’s perfectly possible, at least in theory, for someone to take a strong negative stance towards their own dishonesty alongside a totally neutral (or even positive) stance towards someone else’s without being in any sort of error. You could even take a negative stance towards every single person on the planet except your little brother lying, if for some reason you really wanted to, and the most a subjectivist could say is that other people tend to feel differently. Who cares?
Of course, as long as the subjectivist does disapprove of getting their little brother to tell lies, they could still insist that the second premise is vacuously true as a general consequence of the rules of material implication. But then the second premise is no more meaningful or informative than any other conditional statement that ends at the same conclusion, like “If I have a hangnail on my left pinky finger, then I disapprove of getting your little brother to tell lies.” Interpreted this way, what’s been presented isn’t really a syllogism at all. It’s just an assertion of what’s meant to be demonstrated, joined to a logically irrelevant statement about some other unrelated stance. But when interpreted as an actual attempt to engage in moral reasoning - that is, as a claim that some rationally compelling relationship truly exists between the wrongness of lying and the wrongness of getting your little brother to lie - the premise is clearly false.
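To make the "vacuously true" point concrete, here is the standard truth table for the material conditional. Whenever the consequent Q is already true, P → Q comes out true no matter what P says, which is exactly why a conditional ending in a stance the speaker already holds is trivially affirmable and rationally uninformative:

```latex
\begin{array}{cc|c}
P & Q & P \rightarrow Q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
```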
To summarize: If we’re meant to take statements of the form “X is wrong” as asserting a universal stance of disapproval towards all instances of X, then we’re never justified in taking those statements to be true, since no one can have that level of confidence in the regularity of their preferences. And if we’re meant to interpret those statements as claims about a general stance of disapproval towards the concept of X, then all moral conditionals are always false, since a general stance of disapproval towards a concept never entails any other specific stance, or even the same general stance at a later time. (Alternatively, moral conditionals are sometimes true, but only in that they express an irrelevant observation about the way certain stances tend to correlate.) Therefore, if we assume subjectivism, then either pretty much all moral reasoning relies on premises we aren’t justified in taking to be true, or else moral conditionals in particular are guaranteed to be false, rationally inert, or only vacuously true.
This fundamental issue doesn’t just pop up for stock syllogisms like the one used by Geach. Take, for example, the following common argument against abortion (which I personally think is terrible, but that’s not the point right now):
It’s wrong to kill a newborn.
If it’s wrong to kill a newborn, then it’s wrong to abort a fetus.
Therefore, it’s wrong to abort a fetus.
According to a subjectivist interpretation, this is roughly equivalent to:
I disapprove of killing a newborn.
If I disapprove of killing a newborn, then I disapprove of aborting a fetus.
Therefore, I disapprove of aborting a fetus.
Or maybe it could be more generously interpreted as:
You disapprove of killing a newborn.
If you disapprove of killing a newborn, then you should disapprove of aborting a fetus.
Therefore, you should disapprove of aborting a fetus.
But no matter how exactly you interpret it, the same exact problem comes up: Even assuming that your interlocutor does disapprove of killing newborns, neither version of the second premise could possibly be rationally compelling, given we know for a fact that some people hold that stance while at the same time generally approving of abortion, and no logical entailment exists between those two stances that could make the combination rationally improper. It’s no more persuasive, in that sense, than the claim that anyone who likes surrealist mysteries must necessarily like Twin Peaks.
I’m going to repeat that one more time, for emphasis: When interpreted through any plausible subjectivist lens as a substantive rational claim, “If it would be wrong to kill a newborn, then it would be wrong to abort a fetus” is just not true. It’s not true as a claim about any actual logical entailment between the speaker’s own stances, and it’s not true as a claim about any proposed logical entailment between the hearer’s own stances. It can’t be. At absolute most, it’s just a vacuous material conditional that repeats the conclusion of the argument alongside a rationally unrelated self-report. It may be that, for various psychological reasons, the juxtaposition of these two stances will induce a shift in the preferences of some people who hear it. But when considered as a truly rational appeal, it’s dead in the water, as is any similar conditional.
Of course, there are some subjectivists who might not see a problem here. Maybe all moral reasoning is fundamentally about identifying and influencing the contingent connections between our stances in pursuit of some goal, and it’s a mistake to see any of the steps involved in terms of actual truth values. Personally, I’m very skeptical of the idea that our stances are fine-grained or consistent enough to ground the sorts of inferences and connections we make in these exchanges without any reference to the actual laws of formal logic. But even if that did turn out to be possible, the much more fundamental issue here is that some moral conditionals don’t just seem rhetorically or pragmatically successful - they seem actually true.
Take, for example, a claim like “If it’s wrong to torture someone for five dollars, then it’s wrong to torture them for ten dollars.” That statement can and should be affirmed by everyone, even if (for some reason) you don’t actually think torturing someone for five dollars is wrong in the first place. That’s because it’s not actually saying anything about the morality of torture. What it’s saying is that there’s a specific relationship between the two situations described, such that, hypothetically, the wrongness of one would imply the wrongness of the other. And that seems like something any plausible moral theory should be able to affirm. Whatever the morality of torturing someone for five dollars might be, it shouldn’t suddenly change if you get five dollars more.
But no subjectivist can affirm a relationship like that, since it will always be rationally faultless for anyone who holds a negative stance towards torturing someone for five dollars to hold a positive stance towards torturing someone for ten (and similarly for any other moral conditional). So it really is the case that subjectivism misses out on affirming not only the objective truth of our actual moral judgments, but also any sort of truth for an entire class of meta-statements about the modal properties of those moral judgments. At best, subjectivists can just report an (unreliable) account of how their own stances would hypothetically correlate, or how they’d expect someone else’s would. But even in that case, as we’ve seen before, no particular correlation would actually make the moral conditional true.
Similarly, all even remotely plausible moral theories should want to affirm that some things are always necessarily wrong. But as we’ve seen, subjectivists can’t make this claim either, since it’s just not possible to actually take a negative stance towards every actual instance of an action, or to be sure that we’d hypothetically do so in every case. So even a general second-level statement like “It’s always wrong to not maximize overall welfare” isn’t actually true on subjectivism, or at least we can’t have the kind of confidence in it necessary to ensure any conclusions we draw from it will be correct. All we can say is that we generally disapprove of not maximizing overall welfare as a concept, which is perfectly compatible with approving of not maximizing overall welfare in any particular instance.
So if subjectivists are willing to just bite the bullet and say that all moral conditionals are technically false, or that all statements about moral necessity have indeterminate truth values, then they’re certainly allowed to do so. But that strikes me (and, I would imagine, many others) as a massive hit to the plausibility of subjectivism, no matter how practically effective those kinds of statements still might be. And if, on the other hand, subjectivists do want to say that statements like “If it’s wrong to tell lies, then it’s wrong to get your little brother to tell lies” or “Torturing someone for fun is always wrong” are actually, literally true, then they need to provide a consistent and coherent account of what it would mean for them to be true under subjectivism that preserves the role they’re meant to play in the process of moral reasoning.
As it stands, I’m skeptical that an account like that exists. And if that’s the case, then anyone who’s committed to the validity of basic moral reasoning should find moral realism (or at least some other non-subjectivist theory) to be radically more appealing.


A few comments:
I would affirm the proposition “I disapprove of severe suffering”. The reason I would affirm this is because reliably every single time I’ve been made aware of an instance of severe suffering, I’ve always had a negative attitude toward it.
I suppose your point of contention would be that this inference isn’t actually justified: just because it’s been the case that every instance of severe suffering I’m aware of has been out of accord with my values, that doesn’t mean that all the instances of severe suffering I don’t know about are also out of accord with my values.
I grant this isn’t a conclusion that I can be *completely* sure of - but I think I can be very, very, very sure of it. In my recent article on speciesism, I say that my credence in speciesism being wrong is approximately 100% - and that the only reason I even include the “approximately” modifier is to account for the possibility that I’m radically mistaken about my attitudes.
I suppose you would think this judgment of mine is in serious epistemic error: there’s no way I can be confident that species isn’t something I care about, let alone confident to a degree approaching certainty. After all, there’s an infinite number of logically possible species - and it only takes me caring about the interests of a single one of them less (in virtue of their species), even just ever so slightly, to render the claim that speciesism is wrong false.
Be that as it may, I still find it tremendously and extraordinarily implausible that I would ever care about such a characteristic.
Like it seems you would say I could not even affirm that I disapprove of brutally and slowly torturing rabbits to death solely for the sake of sadistic pleasure - or if I could, I at least couldn’t be too confident in it.
After all, there are an infinite number of logically possible scenarios where this action occurs. Maybe I’m just exposing myself as lacking an insane degree of epistemic humility here - but well, it just seems super self-evident to me that I would in fact disapprove of all those instances.
I don’t think the plausibility of this is at all like me disliking all polka songs or something. Some ethical stances are more akin to that - there are cases where I think exactly what our moral values are is opaque to us - but there are other cases where they’re very clear-cut. This, I think, would be one instance of that.
You note that disapproving of telling lies doesn’t logically entail it being the case that you disapprove of getting your little brother to tell lies. This is true.
But it’s just true in general that the proposition “It’s wrong to lie” doesn’t logically entail the proposition “It’s wrong to persuade your younger brother to lie” - even if you give a realist analysis to the propositions.
There is nothing *logically impossible* about it being the case that it’s objectively impermissible to lie yet stance-independently right to convince your brother to lie. Likewise, it could well be the case that it’s objectively wrong to torture someone for $5 but fine to do it for $10. It could be even weirder than that - it may be that it’s obligatory to torture someone for $5 but an egregious act of evil to torture someone for $10.
It could be that today utilitarianism is true, but tomorrow deontology will be true, and the next day virtue ethics will be true. None of this is prohibited *logically*. Of course, we can say it’s very implausible, but it’s not *impossible* (logically).
So I think these sorts of conditionals shouldn’t be interpreted as matters of logical entailment - that if P is true, Q logically follows from it - but rather about what’s most likely the case. If P is true, Q is probably true as well.
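One way to make this probabilistic reading precise (my formalization, not the commenter's own wording) is to swap the material conditional for a claim about conditional probability:

```latex
% Material reading: Q logically follows from P.
P \rightarrow Q
% Probabilistic reading: given that P holds, Q very likely holds too.
\Pr(Q \mid P) \approx 1
```

On the second reading, the conditional premise is an empirical-normative generalization rather than an entailment claim, so a rare counterexample weakens it without falsifying it outright.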
I take my principles and commitments to have entailments, so I agree that this is something a plausible metaethical theory would have to explain.
I'm sympathetic to Finlay's end-relational theory--normative statements are indexed to some goal or framework and truth-apt relative to that goal/framework. Our relationship to these base goals or ultimate ends is non-cognitive--we are primitively compelled to accept them.
Under this theory, you could ask for reasons why stealing is wrong to determine some unstated goal--it causes suffering, or treats people as mere means, or whatever. You could translate any "ought" statement to have an unstated "in order to" component. "In order to not treat people as a mere means, you ought not to lie." Then you could rationally assess whether that goal also entails not getting your little brother to lie.
I'm not sure if Finlay would call himself a subjectivist, I believe he's used both contextualist and quasi-expressivist to describe the view. Whatever the right label, this appears to be a view which can handle this objection while eschewing realist metaphysics.