Moral Realism Turns Ethics Into a Science
Or: Why geology and morality are basically the same thing
Over the past few months, I’ve been slowly working on a short series of posts where I lay out and defend the particular brand of moral naturalism that I think is true. I decided to split the project into three parts to keep things manageable: In the first post, which you can read here, I criticized a very common argument that says moral facts can’t possibly play an adequate role in our best explanations of the world around us; and in the second post, which you can read here, I tried to offer a positive account of just how it is that moral facts can play that role. (There was also a tangentially related piece on goodness and objectivity.) Today, I’m going to finish the series by explaining how we can move beyond recognizing individual moral facts and towards constructing broader moral theories. I won’t be citing any particular papers in this post, but it will rely heavily on the work of philosophers Peter Railton, Nicholas Sturgeon, and David Brink, as well as general concepts in the philosophy of science most commonly associated with W. V. O. Quine and Thomas Kuhn. My hope is that, together, these three posts can give a good overview of the moral naturalist position from start to finish and help push back on some of the more simplistic misrepresentations that you tend to find in the online philosophy world.
The position I’ll be defending here is something you might call moral empiricism - the idea that ethical theorizing is a sort of abstract science, one that functions on the basis of observation and experimentation in much the same way natural sciences like physics or chemistry do. Of course, plenty of moral anti-realists are also anti-realists about the sciences, so this argument won’t do much to move them. But that’s fine! What I’m interested in is just the modest claim that an empirical approach to moral deliberation can produce ethical theories that are roughly as objective and reliably truth-tracking as scientific ones. And if you think we do have good reason to be scientific realists - which I think we definitely do - then you should also be a moral realist, or at least find new arguments against moral realism that don’t also end up applying to scientific realism more generally.
In order to show that ethical theorizing has a similar structure to theorizing in the natural sciences, it’s worth taking a look at how we construct scientific theories in the first place. Rather than trying to give a fully fleshed out account of exactly how science as a general project works - something that would obviously be beyond the scope of this post - it might be best to start with a simple example of scientific reasoning. Consider Julie the geologist:
Julie the geologist comes across a sample of rock that she suspects is igneous, rather than sedimentary or metamorphic. As a geologist, she knows that igneous rock forms out of magma or lava that’s much too hot for plant or animal material to survive in. So Julie forms the hypothesis that, if she cuts open the sample and searches it for fossils, there won’t be any.
Here, Julie is doing what all good scientists do: She’s beginning with an open question, using her theoretical background knowledge to develop a hypothesis, and then constructing an experiment that would either validate or falsify that hypothesis. And if Julie goes through with her experiment and finds nothing but rock, just as she predicted, then everything checks out and her hypothesis receives evidential support. So far, so good! But the more interesting question to ask is what would happen if, when she cuts open the sample, she does find a small piece of fossilized plant matter where she predicted there wouldn’t be any. (This is obviously very unlikely, given what we know about geology, but put that aside for a second.)
You might be tempted to give the junior high science class answer here, which is that Julie should accept the hypothesis has been falsified and revise her original belief about the rock being igneous. But in reality, this isn’t always how science works. In fact, there are at least three other perfectly valid options open to Julie that don’t involve rejecting her original assumption. The first would be to accept the general structure of the experiment but question its results; maybe she misidentified something in her sample as plant matter when it was really something else, in which case her hypothesis wasn’t actually falsified. The second option, which is much more radical, would be to accept the experiment’s results but still maintain the hypothesis by revising her background theory; maybe geologists are just wrong about the way igneous rocks form, in which case finding plant matter inside one might not be a problem. And the final option would be to introduce an ad hoc premise that resolves the apparent tension; maybe a careless colleague introduced some plant matter into her sample by accident, in which case there’s no contradiction to worry about.
One thing to notice here is that, aside from the choice to introduce an ad hoc premise, each option involves rejecting one element to salvage the other two. You can see this very clearly in the simple chart I’ve made below:

| Option | “The rock is igneous” | Experimental result | Background theory |
| --- | --- | --- | --- |
| Reject the original belief | Rejected | Kept | Kept |
| Question the results | Kept | Rejected | Kept |
| Revise the background theory | Kept | Kept | Rejected |
| Introduce an ad hoc premise | Kept | Kept | Kept |
So which option should Julie choose? She probably won’t reject the background theory here; it would obviously be a little reckless to throw out foundational principles in geology all on the basis of a single wonky experiment. But in other domains where the background theory has much less support - or if the anomalous results just kept stacking up, and she was absolutely sure there was no other mistake - then that might end up being the right move. Otherwise, she’ll just have to weigh her original confidence that the rock is igneous against her confidence in the accuracy of her experiment and decide which one she’s more willing to drop, or else try to come up with another explanation that can reconcile the two. And what decision she makes there is going to depend on a wide range of factors, all of which are historically and culturally contingent and deeply intertwined with other beliefs and assumptions that could themselves be revised later on. If Julie is absolutely devoted to her original belief that the rock is igneous - if, as Quine says, she’ll hold her position “come what may” - then there will always be some theoretical revision that allows her to maintain that view; if, on the other hand, she has unshakeable confidence in her ability to identify fossilized plant matter, then a different set of beliefs will need to be altered. But which beliefs to alter, and to what degree, is ultimately a choice she’ll have to make for herself.
The critical point to emphasize here is that Julie, like all scientists, has no perfectly objective, algorithmic way of guaranteeing the truth of any singular scientific claim, even one as simple as whether a certain rock is igneous or sedimentary. Nor do scientists have a foolproof method for determining the “right” response when contradictions pop up. All they can do is look to their other commitments and make considered judgments about what to conserve and what to tweak, knowing full well that another experiment tomorrow might require shuffling everything up again. Of course, some propositions are more resilient in the face of repeated experimentation, and they rightfully take a central place in our extended web of belief. But even those are hardly invulnerable! Take the whole framework of Newtonian physics, for example, which was experimentally confirmed countless times before a few pesky anomalies overturned the whole regime. Was there one single experiment done on one particular day that “officially proved” Newtonian physics was false and quantum physics was true? Of course not. Instead, there was a slow process of revision, whereby different scientists at different times decided to change what they were willing to accept and willing to reject.
And to be very clear, I’m not saying scientific truth is just some nebulous social construct. I absolutely do believe there are objectively true scientific facts that our theories can do a better or worse job of capturing; geocentrism, for example, was false even when all relevant scientific authorities considered it to be true. But the process by which scientists reject false theories and embrace more accurate ones necessarily requires an extended cycle of experimentation and revision informed by broader theoretical frameworks, rather than one single test that settles things once and for all by directly revealing some indisputable truth. Think of scientific theorizing a little like filling out a crossword puzzle: Things fit together better and better until, if you’re good enough, you get a set of interlocking answers that seem to be maximally coherent. At that moment, does some booming voice from heaven come down and confirm every answer? Of course not! There’s always going to be the possibility that some little error early on threw everything off. But ending up with a puzzle where everything fits together is still really good reason to think you’ve got it right.
In short, the ultimate hope of the scientific process is not that we’ll eventually test every specific claim one by one until we have a complete list of Scientifically Proven Facts; rather, it’s that we’ll eventually produce the most coherent possible collection of theoretical commitments through a continual process of revision and reconsideration. This general view of scientific investigation is relatively uncontroversial for most philosophers of science (although of course there are a million little nuances and qualifications I’ve had to skip over here). In fact, it’s relatively uncontroversial for most laypeople too, at least when you explain it to them outside the context of the debate over moral realism. But in the context of that debate, it’s very common for skeptics to dismiss the idea of rigorous moral theorizing by comparing it unfavorably to an idealized scientific theorizing that doesn’t actually exist. That’s why it’s so important to spend some time honestly looking at the process by which we come to scientific truth, warts and all; otherwise, ethics will be held to an artificially high standard that it doesn’t need to meet.
So with the actual structure of scientific investigation fresh in our minds, let’s turn now to a new sort of theorizer, Eugene the utilitarian:
Eugene the utilitarian is writing a book where he argues that sex outside of marriage is immoral. As a utilitarian, he knows that actions are immoral if their consequences reduce the overall well-being of all the relevant moral subjects involved. So Eugene forms the hypothesis that, if he studies the cultures where sex outside of marriage is widespread and controls for other variables, he’ll see that those societies have lower rates of well-being overall.
My question for the skeptic in this case would be: If Julie the geologist was engaged in a reliable, truth-tracking process, why isn’t Eugene? The actual approach to theorizing is roughly identical between the two. Eugene is also setting up an experiment, just like Julie did. He’s taking an open question, using his theoretical background knowledge to develop a hypothesis, and then constructing an experiment that would either validate or falsify that hypothesis. If he does the research and comes back with results that confirm his hypothesis, then this would be experimental evidence for the immorality of sex outside of marriage, in exactly the same way a lack of plant material would have been experimental evidence for the rock being igneous. But more importantly, just like Julie, Eugene has four broad options open to him if the research comes back in a way that contradicts his expectations: He can accept that the hypothesis has been falsified, he can dispute the accuracy of his own research, he can toss out utilitarianism entirely, or he can come up with some other reason why the harm caused by sex outside of marriage has been canceled out by something else. You can even put these options into the same sort of chart that Julie filled out to see they’re structurally identical:

| Option | “Extramarital sex is immoral” | Experimental result | Background theory (utilitarianism) |
| --- | --- | --- | --- |
| Accept the falsification | Rejected | Kept | Kept |
| Dispute the research | Kept | Rejected | Kept |
| Toss out utilitarianism | Kept | Kept | Rejected |
| Introduce an ad hoc premise | Kept | Kept | Kept |
So if you believe that Julie’s scientific empiricism eventually leads to the formation and refinement of reliable scientific theories that capture meaningful truths about the nature of the world, then you ought to accept that Eugene’s moral empiricism will (or at least could) do the same. Or at the very least, you’d need to actually explain why it is that such a successful theory-producing process is necessarily unable to function in certain domains. It’s not enough to just say that ethics is magically immune to systematic investigation. You have to actually give a reason for your selective skepticism!
And when skeptics do try to level a specific critique at the practice of ethical theorizing, by far the most common one would be something like:
Well, of course a utilitarian can come to ethical conclusions by looking at real-world data, and you can even call those experiments if you really want to. But all they’d ever demonstrate is that, if you’re already a utilitarian, you should hold a particular moral view. That’s hardly a worthwhile conclusion by itself - if I already accept a background theory that tells me Bigfoot prefers warm climates, then I could also do experiments that tell me where Bigfoot is most likely to live. But that doesn’t actually give me reason to believe in Bigfoot! What I’m looking for is the experiment that could actually demonstrate the truth or falsity of utilitarianism itself, or even just the reality of moral facts in general. Without that, isn’t this all just a bunch of circular reasoning?
This sounds convincing… until you remember that a heterodox geologist could make the exact same criticism of Julie! After all, it’s also true that her experiment relies on accepting an extensive set of background assumptions in order to be meaningful; if you don’t believe that igneous rocks form from cooling magma in the first place, then you have no reason to accept Julie’s experiment as valid no matter what the outcome is. And if you demanded she prove the foundational principles of modern geology before you were willing to consider any further results, then she obviously couldn’t do it, because that theory is the product of an extended process of repeated investigation and revision over time rather than any one particular experiment - just like Eugene’s utilitarianism! Like it or not, our two theorizers are in the same boat here: While both of them can and should be prepared to offer solid arguments in defense of the theoretical background they presuppose, the fact that they rely on that background to do additional theorizing is nothing to be ashamed of, nor is it reason to think the theory itself is somehow vacuous or circular.
Ultimately, when skeptics complain that moral realists can’t “prove” moral claims with a single stand-alone experiment, what they’re really doing is just pointing out the possibility of catastrophic error in the moral realist’s background theory. But that possibility of catastrophic error exists for all theorizing, not just the moral kind, and the mere fact that moral realists could be wrong about their foundational assumptions is no more meaningful than the fact that geologists could be wrong about how igneous rock forms. Of course, we should always be open to questioning our ontological commitments in the face of evidence favoring an alternate framework that eliminates them. It’s just that anti-realists have to actually provide and defend that alternate framework, rather than asserting from first principles that it’s somehow inherently illegitimate to rely on the theoretical assumption that moral facts exist when engaging in ethical theorizing. Moral skeptics are well within their rights to make specific criticisms of realist theories and to argue that their own anti-realist frameworks explain the data in more parsimonious, internally consistent ways. But simply writing off the entire possibility of moral empiricism to begin with shows a fundamental ignorance of how all theorizing works.
This isn’t to say there aren’t meaningful differences between moral and scientific theorizing that really do matter. For one, theoretical disagreement is much greater, and seemingly much more intractable, in ethics than it is in the natural sciences. And theories about pressing issues in ethics are influenced much more heavily by extraneous social and cultural factors than theories about igneous rocks. But these are contingent complications that arise from the central importance of ethics in our social lives, rather than anything inherent to the ontology involved. And when we look at scientific issues that exist in similarly fraught conditions, we see many of the same dynamics that plague ethical deliberation as well. Meanwhile, we’ve also made tremendous progress in our ethical theorizing that often goes completely unacknowledged. Just as no scientists today are still pushing phlogiston theory, you won’t find papers being published defending slavery or child sacrifice - even though practices like these have been far more socially entrenched historically than any belief about the way fire works! So while I would never claim that ethical theorizing has historically been as systematic, rigorous, and productive as the natural sciences, I think skeptics ought to accept that the gap we see today is a difference in degree and not in kind. So long as our ethical theorizing rests on the same general process of observation, experimentation, and revision that scientific theorizing does, we have every reason to believe the theories produced are reliable guides to objective truth.
Throughout this series, my goal hasn’t really been to convince anyone that moral realism is true. I haven’t talked about all the major arguments out there for it, like the argument from deliberative indispensability or the companions-in-guilt approach, and I haven’t directly criticized common anti-realist theories that I think are unworkable. Instead, all I’ve really wanted to do is lay out the case that moral realism makes sense as a philosophy - that there’s nothing naive, archaic, or spooky about believing it, and that the most common reasons people have for rejecting it out of hand are fundamentally misguided. That’s why, in my first post, I tried to show that a curious philosopher surveying these views in good faith shouldn’t rule out moral realism automatically. And in my second post, I tried to give that same curious philosopher some positive reasons for thinking moral facts might really be out there. Now, to finish things off, I just want to inspire confidence in realist ethics as a legitimate project with the power to give meaningful answers to the questions that matter to us. Of course, there still might be some devastating skeptical argument waiting to be discovered that will compel us all to abandon realism. But in the absence of an argument like that, moral realists should be proud to defend a theory that’s intuitive, elegant, productive, and reliable.
I think one can take away an opposite conclusion from your discussion. As you probably know, scientific realism is a contested view. Pragmatists and instrumentalists agree with your discussion of science, but deny that scientific theories represent reality or are ‘true’ in a simple sense. What we know is that they are useful.
So the fact that there are structural similarities between science and ethics does little to establish realism. You need a separate argument. And the argument needed to establish moral realism is going to be much harder, because unlike science, the underlying theories are not based on predictive successes. No experiment ever has confirmed, or ever will confirm, whether our moral duty is to promote the greatest good for the greatest number.
There is a relevant difference between physical experimentation and moral experimentation: A tree falling in the forest makes a mess (and basalt forms from magma) even if nobody is there to see it, whereas the empirical consequences of moral wrongs suffer from observation bias, insofar as we have no idea about the effects of moral wrongs committed in secrecy (an insurmountable confounder). There could be a difference in consequences between witnessed immoral acts and those done in secret, which would then suggest that the bad consequences flow from our beliefs about what is immoral, not from the acts we consider immoral themselves.
Moreover, quantifying moral consequences in terms of “well-being” requires that well-being be standardised, and that the standard be established as the ultimate and universal normative principle (the same for all agents, and the highest for all agents); otherwise it is not a common measure but just another bias.