Can't Stop Believin'
A few quick thoughts on intuitions and moral theorizing

A couple of months ago, a brutal and calamitous civil war broke out on philosophy Substack: Brother posted notes about brother, parents critiqued children, and battle lines were drawn down the very center of the human heart. This tragic conflict, which claimed as many as four or five mutual followings, had to do with the proper role of intuitions in our theorizing — and while I don’t consider myself an “intuitionist” in any real sense, I nonetheless found myself a conscript on the pro-intuition side. And now that the smoke has cleared, I thought it would be fun to return to the battlefield once again to tackle the question in the context of moral theorizing specifically.
The basic issue is this: When we set out to put together a moral theory, is it acceptable to rely on our moral intuitions, and to evaluate competing theories by how well they cohere with what seems intuitively true to us? From what I can tell, a hefty segment of the Substack world seems to think the answer is no, and that proper theorizing in any domain should be done apart from any pre-theoretic intuitions. In this piece, I want to give a few quick thoughts on why I disagree, and how honestly baffling I find the entire idea of intuition-free theorizing in general.
Let me set the stage by focusing on a real-life example. As I write this, it’s 9:38 AM, and I’m sitting in a comfortable, well-lit room. I’m not under the influence of drugs or alcohol, and there’s no severe emotional or physical distress clouding my judgment. As far as I can tell, no one else is conspiring at the moment to distract, confuse, or nefariously mislead me while I think. I am, in other words, in the best possible state I could be in when it comes to doing some good ol’ fashioned rational reflection. And when I start to engage in that rational reflection, it immediately becomes clear to me that I believe a whole bunch of different things: That I’m sitting in a chair, that the sun is shining outside, that I should buy a new vacuum cleaner soon, that Napoleon lost the battle of Waterloo, that the square root of sixteen is four, and so on.
Some of these beliefs are based on perception, others on experience or memory, and a few just seem to be self-evident. But regardless of whatever ends up explaining these beliefs, or whether they’re actually true, what’s completely undeniable is that I do, in fact, believe them. And one entry on the Big List of Things I Believe At 9:38 AM This Morning is that sadistic pleasure has no value — which is to say, when I consider the proposition that the pleasure of a sadist could never count in favor of anything, that proposition seems true to me in an immediate, obvious sense. The moment I wrap my head around all the concepts involved, I just find myself believing it.
Furthermore, sitting here in my well-lit room at 9:38 AM, I can’t see any particular reason to not believe it. For one thing, it’s not obvious to me, in an a priori sense, that I wouldn’t be able to figure out at least some basic moral truths just by thinking them through, so there’s nothing to undercut my confidence right out of the gate. But more importantly, there’s also nothing else in my larger picture of the world that seems in tension with a belief like this — in fact, when I start to consider all the other things that would have to be true in order for sadistic pleasure to have no value, I find that I believe those things already, or at least that I could believe them without running into any problems. And when I use this belief to render judgments in particular cases, or to build up a broader moral theory, I find that all the new beliefs that get generated seem equally true, in exactly the way I’d expect if they were based on a true belief.
I would imagine that a belief like this is generally what people are talking about when they use the term “intuition.” (Or, more accurately, I guess they might say it’s a belief in the intuition.) But putting all labels aside, it seems like what I’m faced with here, ultimately, is just a belief that’s been formed under optimal conditions, that I don’t have any obvious reason to discount, and that I seem to be able to rely on effectively. So my sincere question for those who think we should exclude intuitions from the process of moral theorizing is simple: What, exactly, am I supposed to do with a belief like this?
I get the sense from some people that, when they say we shouldn’t rely on intuitions in theorizing, what they’re really saying is that we shouldn’t believe our intuitions in the first place. These are the people who tend to compare intuitions with reading tea leaves or following astrological charts, and who dismissively refer to them as “truth tingles” or whatever. But without going too deep into the broader question of moral epistemology, I just don’t see any particular reason to assume that my moral intuitions aren’t truth-tracking. As I said earlier, it’s by no means obvious that rational reflection is incapable of determining (or at least ruling out) some basic moral truths. It’s not as if there’s some other agreed-upon method out there for developing reliable moral knowledge that has certain identifiable features that rational reflection lacks, after all. So what grounds do we have for deciding up front that it’s missing what we’d need?
Meanwhile, if we do treat the reliability of moral intuition as an open question, then the best way to resolve that question would be to take an objective look at the track record moral intuition has when it comes to producing true beliefs. And this is where most critics of intuition get excited, since it’s so incredibly easy to just list off all the horrendously misguided and false moral intuitions that have cropped up throughout human history. But whether or not philosophically competent rational thinkers can expect their moral intuitions to track certain fundamental moral truths can’t possibly be decided by just looking at how random people across time and space have thought through highly complex social and political issues, any more than we could reasonably determine the reliability of a professional mathematician’s intuitions by just seeing how well a random person on the street could work through a calculus problem.
Instead, what we should really be asking is whether the moral intuitions of reasonably thoughtful, ethically respectable people are still consistently misfiring, even when we do our best to isolate those intuitions from irrelevant context or socially constructed biases. But of course we can’t answer that question directly without relying on what our own intuitions tell us about right and wrong in the first place, which I’m pretty sure won’t be an acceptable move for the people pushing this critique. So the best we can do “from the outside” is just ask whether these sorts of intuitions seem to have the general content-neutral features that would indicate some level of truth-tracking.
In other words: Do we see a widespread convergence of intuitions when it comes to a meaningful number of fundamental moral principles? Do our intuitions generally lead us towards coherent and self-consistent ethical frameworks, even if those frameworks differ? Are we able to check our intuitions against our wider body of knowledge and see if they make accurate predictions? And if we recognize an internal contradiction between our intuitions, are we able to resolve that contradiction non-arbitrarily? In every case, I think the answer here is clearly yes, which is exactly what you’d expect if there was some actual contact with reality going on. Of course, these considerations by themselves don’t prove that our intuitions are actually tracking an objective moral truth. But in the absence of any obvious defeater, they collectively give us good reason to think these intuitions can at least play the role of a reliable starting point.
Now, I’m sure this brief sketch of a defense won’t be enough to win over the critics. But even if those critics remain convinced that I shouldn’t believe my moral intuitions, we’re all still stuck with the basic fact that I do believe them, and that I don’t really see any easy way to not believe them. I’m sure that, if I really wanted to, I could set up some kind of Pascalian regime where I work as hard as possible to slowly undermine my intuitive sense that sadistic pleasure has no value — but even then, the best-case scenario would probably just be that I start to think sadistic pleasure does have some value, which isn’t much better. Either way, what I sincerely can’t see myself doing, doxastic voluntarism be damned, is returning to a state of any actual agnosticism on the issue. So it looks like I still need to ask: What, exactly, should I do with my belief?
This is where another way of understanding the objection comes into focus. Maybe intuitions are a perfectly fine foundation for personal beliefs, as long as we’re careful to put them aside whenever we actually start theorizing. I think this is what most people who dismiss intuitions in contexts like these are getting at, and it’s certainly a more reasonable take than the demand that we actually stop believing what seems to be true. But it still places our intuitions into a weird sort of purgatory, where we’re allowed to believe they’re true as long as we don’t ever act as though they’re true. And this is what really doesn’t make sense to me: What could justify excluding a belief you have about the way the world is from your theorizing, if getting a maximally coherent picture of how you take the world to be is the whole point of theorizing?
Arbitrarily excluding intuitions like this gets even more complicated when it comes to actually putting our moral beliefs into practice. Let’s say I have a friend who wants advice on whether to pursue some sadistic pleasure that would cause at most a minor amount of real-life harm. It seems crazy to argue that I should encourage him to do so, given that I truly believe it would be unjustified. But it also seems crazy to argue that I should discourage him, since doing so would presumably be going against whatever moral theory I’ve developed apart from the belief that’s actually motivating me to oppose it. If we really are required to suspend our intuitive judgments whenever we craft a moral theory, then we’d constantly be finding ourselves in situations like this: Either rendering judgments we believe to be false, or else rendering judgments we do believe to be true, but that have no actual basis in the theories we’ve built up.
Here, the critic might object by saying that you can still consistently and intelligibly discourage your friend as long as you have a moral theory not based on intuition that also sees sadistic pleasure as worthless. But here’s where I have to stop and ask the most basic question imaginable, which nonetheless rarely gets addressed: Where on earth is a theory like that supposed to come from? I truly, sincerely don’t understand how this entirely non-intuitive approach to ethics (or anything else, honestly) is meant to work. For any even remotely controversial moral issue, from polyamory to the death penalty to veganism, it just seems obvious to me that you’re always going to need a bit of normative raw material to start the process up.
When I do go looking for intuition-free alternatives, the most common suggestion I see is to rely on universally acceptable axioms instead. But what I don’t see is anyone actually doing the work to show that those axioms, whatever they supposedly are, could actually get us to any particularly interesting, non-obvious moral conclusions (or that intuition would play no role in who does and doesn’t make the connection). If they’re up for it, I’d love to see someone who takes this view actually try to lay out a step-by-step defense (or condemnation) of abortion, for example, that actually avoids any intuitive judgments about personhood, harm, obligations, rights, or anything else. If it’s out there, I sure haven’t seen it! And my sense is that, to the degree that any of those accounts could succeed, they would only do so by taking what are clearly moral intuitions and just calling them axioms, which doesn’t really help things much.
This is, I think, a perfectly general problem for any kind of theorizing at all: In order to reason towards some conclusion, you’re going to need at least a few foundational beliefs as a starting point — but whether you want to call those foundational beliefs axioms or intuitions, they’re ultimately just a set of propositions that seem obviously true to you. The basic misfortune complicating things is that the set of propositions that seem obviously true to everyone, if it even exists at all, almost certainly won’t be enough to build up a workable moral theory. So at some point, you’re probably going to have to bring in some claims that your interlocutors will disagree with, including more than a few where no independent adjudication apart from the broader web of belief you two hold is going to be possible.
Is this unfortunate, in a world where we’d all love some guarantee that an ultimate, long-term alignment between all rational agents is inevitable? Of course! And if there really was some way for us to all voluntarily give up our intuitions and pursue a better path towards moral knowledge, I’d be all for it. But as it stands, I don’t see how I could reset myself to a universal moral agnosticism on demand, and I’m even less clear on what I’d be able to do theorizing-wise if I did somehow pull it off. So as I sit here, still undisturbed in my well-lit room a few hours after 9:38, still believing that sadistic pleasure has no value, it seems to me like the only real option is to make the best of what I’ve got.


I don’t know how I got dragged into Intuition War One, but I was there, man, and lost a lot of good people (who still have me blocked as far as I know). Since it looks like you’re trying to kick off Intuition War Two, here’s my view, which was deemed too insane when I whipped it out in the last war, and likely is still too insane now. Stop reading now if you are easily triggered by philosophy of science and language from after the 1940s. You’ve been warned.

Here we go: Doing theorizing without intuitions is about as doable as doing theorizing without data, which is to say, not doable at all, unless daydreaming or playing tabletop role-playing games counts as theorizing. There’s just got to be something or other occupying that spot where intuition/data traditionally are thought to go. The anti-intuition science cosplayers would do well to look more deeply at potential analogies between philosophical intuitions and scientific data.

One interesting thing that happened over the history of wondering what scientific data are is that people generally (not universally, just generally) abandoned any hope of supplying a blanket account of data, an account that would explain, for all sciences and all times, what data were. The idea that data would be some theory-neutral arbiter of theory choice has been abandoned as naive and unworkable. But science hasn’t ground to a halt because of this failure to explain how all genuine data are actually qualia, or sensory organ irritations, or whatever. Science gets on just fine despite having realized that what’s data for the goose isn’t going to be data for the gander. What’s data relative to one theory is just noise or instrument error relative to another. We sort it all out in the mix, and still have scientific winners (and losers). What happened, then, isn’t that we discovered there’s no such thing as data. We simply realized that data isn’t always and everywhere one sort of thing.

And this is the lesson I think philosophers, even moral realists (you nuts), can take on happily. To quote myself from several sentences ago, intuitions have “got to be something or other”, but that doesn’t mean that they have to be the same sort of thing for all situations at all times. So, the general picture of philosophical methodology you get from Rawls (reflective equilibrium) or Quine (explication without analysis) is a good and adequate picture, and it accommodates everything worth accommodating about intuitions. But just like how you can do science without doing sense-data theory, you can do philosophy without holding that intuitions are a special kind of propositional attitude. Intuitions are whatever, at a time, a group of philosophical interlocutors can agree are the more or less uncontroversial sentences that their subsequent theorizing should strive to maintain as being true. The interlocutors are under no obligation, however, to regard any one of those sentences as beyond questioning or immune to eventual revision. Just like data.
I think we should generally suspend judgement about the truth of our intuitions, because intuitions in general are not very reliable. For example, take intuitions about physics. Physics seems like a pretty promising domain for intuition, given that there's clear evolutionarily adaptive value in being able to track the physical truth without having theoretical knowledge of physics. But even there, intelligent people have plenty of mistaken intuitions. For instance, a lot of people have the intuition that if you drop a heavy object from the side of a moving vehicle then it will fall straight down. Or more abstractly, most people probably have the intuition that absolute simultaneity exists.
I know that some people argue that there's adaptive value in tracking the moral truth, but I think it would be fair to say that the case for that claim is at least less obvious than the case for the claim that there's adaptive value in tracking the physical truth. So if even physical intuitions are often unreliable, then moral intuitions are probably not on very solid ground.
When I try to suspend judgement about the truth of my moral intuitions, it actually seems pretty possible to me. But if it turns out to be impossible, then I guess the next best thing would be to avoid relying on them as much as possible.
As for how to construct a moral theory without relying on intuition, I think the thing to do is to begin from metaethical and more generally metaphysical premises. For example, if transcendental idealism and Kant's metaphysics of judgement and action were true, then Kant's moral philosophy would follow. Or if qualia existed and had intrinsic value, and some deflationary theory of the metaphysics of personhood were true, then that might entail utilitarianism. Of course, these theories themselves might ultimately rely on intuition. But by proceeding from fundamental metaethical and metaphysical theories down towards first-order moral philosophy, you can at least minimize the number of intuitions that you rely on, which in my view is the most reliable way to go about things.