> However, this way of thinking very obviously begs the question against moral realism, which necessarily involves the belief that (at least some) widespread moral judgments really do come from making contact with objective, mind-independent moral facts. If you think that sort of picture is reasonable, then the truth of the moral judgments in question doesn’t need to come from natural selection — those judgments will already be justified by whatever independently truth-tracking ability the realist has in mind, and natural selection just explains why that ability became widespread over time.
Let's grant, for the sake of argument, that there are mind-independent moral facts.
The question then is, where would the ability to make contact with them come from? How would this truth-tracking ability happen to exist in humans?
I'm going to postulate an extremely controversial thing here - about as controversial as Trump not being the best possible statesman: all our mental properties are the result of evolution through natural selection. If humans have some truth-tracking ability regarding the moral facts - it has to be evolved. And so if our moral intuitions have evolved, and the process that was guiding our evolution was not optimizing for correspondence to objective morality, then it's quite reasonable to assume that we are wrong about the moral facts.
> When we say, for example, that the long-necked-ness of giraffes is the result of natural selection, we are saying, roughly, that the ancestors of giraffes were such that, those among them who had slightly longer necks than the others tended to produce more offspring, and that this led to a situation where a greater and greater proportion of the creatures that were born had long necks, which eventually led to the long-necked giraffes that exist now existing instead of some other creatures with shorter necks. This is a claim about the process that led to there being individuals who had the trait in question, not a claim about the process that led to these individuals having this trait.
It's very much both.
The Trump rally analogy is a bit confusing. Here is where it breaks. Suppose that this particular Trump rally didn't take place. As a result, there wouldn't be this congregation of Trump supporters - true. These supporters, however, would still exist, because it wasn't this particular rally that turned them into Trump supporters.
Now, suppose that evolution through natural selection didn't take place in our universe. Would there still be any individuals with the long-necked-ness trait or, for that matter, any necks at all? No, there wouldn't be. Because evolution through natural selection is directly causally responsible for them.
> So regardless of whether we end up calling that sort of process a predicative explanation or not, a more robust predicative reading — one that frames evolutionary pressures as literally “changing our minds” from one judgment to another — is pretty clearly off the table.
It's technically true that evolution doesn't make a divine intervention every time I'm thinking "What is the right thing to do?" But neither does it need to. Because evolution through natural selection has designed my mind to think the way it thinks.
Consider how we arrive at our moral judgments. There is explicit reasoning going on, reflecting on our knowledge about the world, but ultimately it bottoms out in our core moral intuitions. And these intuitions are the result of natural selection.
You say: "And so if our moral intuitions have evolved, and the process that was guiding our evolution was not optimizing for correspondence to objective morality, then it's quite reasonable to assume that we are wrong about the moral facts."
But isn't this the exact error that Hanson is pointing out? The fact that a capacity was selected for on the basis of non-truth-tracking reasons isn't by itself reason to believe the capacity is itself non-truth-tracking; that move only works if you preemptively assume the dispositions being acted upon by natural selection are themselves unrelated to moral truth, which is begging the question very directly against the realist. If a realist has a plausible theory for how we develop our moral beliefs in a truth-tracking way (which every realist is going to say they do have) then the role of natural selection in the propagation of that capacity doesn't matter in the least.
So if you want to adjust the Trump rally analogy to more closely fit the dynamic you're talking about, we can: You could imagine that campaign staff just literally murdered everyone in a small town who wasn't a Trump supporter, and then those who survived went on to have kids who were themselves raised to be Trump supporters. If you were one of those kids, and you supported Trump, then when you find out what happened a generation ago, your first thought still shouldn't be "Well now I know I must be wrong!" Instead, it would obviously be "Well wait, how did the people who weren't killed come to have their beliefs about Trump?" If they were all the smartest, most well-informed people in that town (obviously they wouldn't be, haha, but just for the sake of argument) then the kid shouldn't be particularly concerned about the purging - it should only bother them if they think the process by which the Trump supporters came to their beliefs in the first place was faulty. But realists don't have to accept that in the case of moral beliefs, obviously!
> The fact that a capacity was selected for on the basis of non-truth-tracking reasons isn't by itself reason to believe the capacity is itself non-truth-tracking; that move only works if you preemptively assume the dispositions being acted upon by natural selection are themselves unrelated to moral truth
I think there is some kind of misunderstanding between us, but I can't put my finger on it. Can you point out where exactly you disagree with me?
1. Having a capacity for moral truth tracking is very improbable. Most things don't have it. Therefore P(MTT) ~ 0
2. If this capacity was optimized for by some optimization process, that would reduce this improbability: P(MTT|OP) ~ 1
3. However, until we actually observe that OP exists and optimizes for MTT, the improbability of our overall theory is still very high:
P(MTT|OP) · P(OP) ~ 0
4. We've discovered the optimization process that produced us and didn't find strong evidence that it optimizes for moral truth tracking. We kind of found the opposite. The process itself is quite amoral - see all the horrible things animals do to each other - and its sole goal is optimizing inclusive genetic fitness.
5. We do not have good evidence in favor of the idea that optimizing inclusive genetic fitness is correlated with MTT.
6. We do have evidence that optimizing inclusive genetic fitness is correlated with developing the kind of moral intuitions we developed, regardless of whether they are correlated with the objective moral truths. As a result, the improbability of MTT only rises.
7. It's still possible that our moral intuitions just so happened to correspond to objective moral truth by sheer coincidence or some additional factor that we haven't discovered yet. We may even come up with some plausible-sounding story (PSCS) such that
P(MTT|PSCS) ~ 1
8. But until this story is proven true (or receives significant evidence in its favor), the total probability of the theory is quite low:
P(MTT|PSCS) · P(PSCS) ~ 0
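To make the shape of steps 1-3 and 7-8 concrete, here's a toy calculation. Every number in it is invented purely for illustration; only the structure of the argument is being modeled:

```python
# Toy model of the joint-probability point: conditioning on an unverified
# optimization process can't rescue an improbable capacity, because the
# joint probability is capped by the prior of the process itself.
# All numbers below are made up for illustration.

p_op = 1e-6               # P(OP): prior that a process optimizing for MTT exists
p_mtt_given_op = 0.99     # P(MTT | OP): near-certain, granting the process

p_joint = p_mtt_given_op * p_op   # P(MTT and OP) = P(MTT | OP) * P(OP)

print(p_joint)  # still on the order of 1e-6: the improbability has just
                # moved into P(OP)
```

The same arithmetic applies with PSCS in place of OP: a story that makes MTT near-certain conditionally still leaves the joint probability tiny until the story itself gets evidence.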
> If they were all the smartest, most well-informed people in that town (obviously they wouldn't be, haha, but just for the sake of argument) then the kid shouldn't be particularly concerned about the purging - it should only bother them if they think the process by which the Trump supporters came to their beliefs in the first place was faulty.
On the other hand, if they got their beliefs from their parents, who got them from theirs, and so on, and the initial beliefs were just adopted at random, the situation looks pretty dire. And this is what evolution through natural selection is telling us about the sources of all our qualities.
Again, a moral realist can postulate some additional principle that counteracts that. But then this principle has to be proven to a degree comparable with evolution through natural selection to actually decrease the improbability of the overall theory.
I mean, I get off at the very first step! I don't think there's anything at all improbable about the possibility of human beings developing moral knowledge, since I think the recognitional capacity for moral facts is just a general aspect of the ability to observe reality and rationally reflect. Other non-naturalist views might face more of a challenge there, but even then, I don't think they give us reason to think that moral knowledge is "very improbable" at all. It's fine if an anti-realist believes that on the basis of their own broader theory, of course, but it's not something that can be used as an uncontroversial assumption in some *further* argument against realism. And to the degree that it can be shown to be reasonable, all the force of the evolutionary argument would come from that demonstration, not any further facts about evolution itself. And that makes the evolutionary argument generally superfluous.
I think your second response illustrates this issue well - you assume the initial beliefs are "adopted at random," in which case obviously natural selection wouldn't somehow pick out the one that happened to be true. But moral realists are not going to accept that our original moral views are random in that way! So again, the conversation refocuses on the possibility of moral knowledge, rather than the evolutionary story that takes place alongside it.
> I mean, I get off at the very first step! I don't think there's anything at all improbable about the possibility of human beings developing moral knowledge, since I think the recognitional capacity for moral facts is just a general aspect of the ability to observe reality and rationally reflect.
Oh, but that's not at all what 1. is saying! The point of 1. is to establish the *complexity* of moral truth-seeking. It's not a simple property that we can expect a random object to possess, like "being affected by gravity". It's a property of minds, which only a very small minority of objects are, and minds and their properties are complex.
It's great that you invoked truth-seeking about physical reality as an example here. Because it's, of course, also very complex for basically the same reasons. We need all this machinery: organs of perception, and brains that can generalize from observations. Can we agree that moral truth-seeking and physical truth-seeking are of about the same complexity? Then, I think, we are in agreement about 1.
This, of course, doesn't mean that we can a priori conclude that *humans* can't have either of these properties. But it means that there has to be some *improbability reduction* - a causal process in reality that ensured we have such properties.
With this in mind, can you once again point out our disagreement in the list above?
> you assume the initial beliefs are "adopted at random," in which case obviously natural selection wouldn't somehow pick out the one that happened to be true. But moral realists are not going to accept that our original moral views are random in that way!
I think I understand your position better now. But doesn't it directly contradict our understanding of evolutionary biology? The random mutation + selection framework seems to be just common knowledge at this point.
Is this an exception you are making for moral truth-seeking in particular? Or is it a general principle that you also adopt for other properties? Like "neckness"?
"It's technically true that evolution doesn't make a divine intervention every time I'm thinking "What is the right thing to do?" But neither does it need to. Because evolution through natural selection has designed my mind to think the way it thinks."
With the caveat that I would substitute "...has biased my mind towards certain intuitions and predispositions", I agree. I cannot figure out why anyone with a reasonable grasp of evolutionary theory would think that an EDA must frame evolutionary pressures as literally “changing our minds” from one judgment to another; this is as clear a straw man as I have seen in a while.
But natural selection has *not,* in the causal sense you're implying here, biased your mind in any one direction. All it's done is preserve whatever dispositions bias you in one direction or another. That's a very, very big difference, since the epistemic danger of that preservation is only as great as the epistemic danger of the bias itself. And that's the whole point, right? What matters is the reliability of what natural selection preserved, not the truth-tracking nature of natural selection itself.
I don't doubt that you believe your first sentence here, but I'm sure you understand that I don't regard that as settling the matter; I would need to see a persuasive argument, and my other posts are sufficient to explain why I don't see that in Hanson's paper.
I have to say that I don't get what you are saying in the last sentence - what does 'reliability' mean here?
If you haven't already, I'd recommend reading Hanson's full paper - she goes into a fair amount of detail defending her claim, which is fairly uncontroversial (or at least likely the majority view) in philosophy of biology today. I think you're taking her as saying something much stronger than she actually is, something like "Our moral sense arose entirely independent from evolution and natural selection just preserved it." But that's not her claim at all, since (obviously) every capacity we have is a product of evolution in some sense. Rather, she's just saying that evolutionary pressures by themselves aren't the sort of explanation that epistemic concerns center on.
I have read it, and I am pleased to report that your article is an excellent summary, and your choice of passages to quote seem to accurately present the author's key points.
One sort-of corollary is that the concern I have with the author's quantificational-predicative distinction was not assuaged in the broader text - for more details, see my reply in the thread where I first raised it.
Your first quote from Hanson is problematic, as its summary of natural selection omits two key concepts from evolutionary theory: firstly, the reproductive inheritance of traits with variation, and secondly (ironically) the process and role of selection itself. Consequently, the conclusion of this passage, "this is a claim about the process that led to there being individuals who had the trait in question, not a claim about the process that led to these individuals having this trait", is, at the very least, irrelevant: the complete theory of evolution by natural selection makes empirically-justified claims both about the process that led to there being individuals who had the trait in question, and also about the process that led to these individuals having this trait.
Armed with a proper conception of evolutionary theory, we can see that the Trump-rally analogy is not an analogy at all - and if it were, Hanson's argument would be "devastating" not just for EDAs, but also for the theory of biological evolution by natural selection. Beware of arguments that prove too much!
Whether natural selection can explain why individuals have certain traits is an open debate in philosophy of biology, and I'm personally convinced by the arguments on the "no" side (although I'm certainly no expert). But Hanson shows in her paper that even if you do accept this causal story, it isn't enough to undercut the epistemic foundation any more than the purely quantificational reading would.
The passage from Hanson's paper we are discussing here presents an argument leading to a conclusion. Strictly speaking, it is not wrong: the selection part of evolutionary theory, when taken out of the context of the full theory, does not by itself explain the process that led to the individuals in question having the trait in question. This is of no consequence, however, as the full theory does provide that explanation.
Furthermore, Hanson's conclusion here seems central to her thesis, and so, at least until I see a reasonable argument for focusing only on what natural selection in the narrow sense can do, while ignoring evolutionary theory as a whole, I am disposed to dismiss this paper.
But there's just no plausible argument that evolution "taken as a whole" could explain why individuals have the traits they do, except through the quantificational process she's describing. And as she shows in her paper, that quantificational process isn't enough to undercut epistemic warrant *even if* you take it to be predicative as you are.
On the contrary (or so I say), understanding evolution begins with understanding how individuals inherit most of their traits from their parents. We then successively layer on first variation and then selection to explain how individuals in a lineage may differ from their predecessors, building up to answers to the quantificational questions from the bottom up by first answering the predicative question. To put it another way, the quantitative properties of populations are shown to be a consequence of how individuals get their traits and get to pass them on. Hanson concludes that evolutionary theory is not predicative only by ignoring the inheritance and variation part of what she calls the back-story, which is where we see an explanation of where each individual's traits come from.
When you write "and as she shows in her paper, that quantificational process isn't enough to undercut epistemic warrant *even if* you take it to be predicative", I assume you are referring to section 4 of the paper (in sections 2 and 3 she does not stray from the claim in the passage which started this discussion).
The crux of the matter is in this passage from page 33: "What’s important to note is that [Neander's] argument doesn’t deny that the explanation is a quantificational one - it just claims that it also counts as a predicative one... But the arguments I’ve given in Section 2 are arguments for thinking that explanations of the frequency of our moral beliefs can’t have any bearing on the epistemic status of these beliefs. And this remains the case whether or not such explanations count as also explaining the moral beliefs of individuals."
The final sentence of this statement is a misrepresentation of her own arguments in section 2 (and 1), where her arguments explicitly target the quantificational reading and only that - to the extent that sometimes she says forthrightly that the argument would go through if there were a predicative reading.
As it happens, I think Neander gets things the wrong way round by characterizing the predicative explanation as derivative of the quantificational one, but regardless, Hanson accepts that it also counts as a predicative one. Now when an explanation answers both the quantificational and the predicative question, then it certainly does answer the predicative question! And as an explanation answering the predicative question, it is applicable in any situation calling for one, regardless of whether it also has a quantificational one - it is not as if the latter somehow contaminates, neuters or invalidates the former.
Note that in all these discussions, I am taking issue with the argument Hanson is using here. Whether some other argument could be successful is a separate matter.
It's been a while since I've read Street or Joyce on this, so I'm fuzzy on what moves they make. If I bring in evolutionary explanations, it's not on the basis that evolved faculties couldn't be truth-tracking - I don't find that version of EDA very compelling. It's more:
1. Mind-independent moral reality is being posited to explain something (moral intuitions, deliberative indispensability, etc.)
2. But we have explanations for these phenomena that don't require mind-independent moral reality, so
3. Mind-independent moral reality is extraneous to our best explanation of what we experience.
(3) would need more support, of course, but that's the argumentative strategy.
These considerations couldn't *disprove* realism. The best we can do on *any* metaphysical thesis is to get it to the point where it has the epistemic status of Sagan's dragon or Russell's teapot.
Yeah, it depends on who you're reading. Street is very clearly pushing the idea that evolution undermines moral knowledge directly, and uses that to support her constructivism. Whereas Joyce focuses on both and probably emphasizes the explanatory aspect more. What's odd is that the undermining/debunking aspect is way, way more emphasized in the scholarly literature, whereas the explanatory aspect is way more common online. Not sure why that is, but I also tend to find the latter more interesting too.
But as for the argument you've actually laid out, it's perfectly legitimate as it stands. It's just that P2 is not a premise that can be defended by reference to evolution. That's the category error I'm talking about here: The explanation for our moral judgments that makes realism superfluous is going to need to be a psychological explanation for *every particular individual,* not just a quantificational explanation for why those individuals exist and not others. Everyone, realist or not, is going to accept the quantificational explanation. The debate is about what best explains the judgments predicatively, and there natural selection isn't going to matter. So all I'm criticizing in this particular piece is the idea that evolutionary pressures somehow "replace" the realist view, as opposed to being orthogonal to it.
Doesn't that seem like an *insane* asymmetry in explanatory burden? The realist says that mind-independent moral reality exists because it seems obvious to them that it does. Are you saying that an evolutionary explanation, in order to be a better explanation than realism, needs to be able to *predict every individual's moral judgements*? Has any theory of anything ever met that kind of burden? I think we have a decent idea how the weather works, but I doubt we'll ever be able to predict every gust of wind.
Oh no haha, I'm definitely not saying that anti-realists in particular have to give an explanation for every individual person's actual moral beliefs - that would be an unreasonable burden for anyone, realist or anti-realist, to meet. What I mean is just that both parties have to have their own general explanation for how *an* individual person's moral judgments work, as opposed to having an explanation for why it is that individuals who work that way exist and not others who work a different way. And that sort of explanation is going to be a psychological one that talks about how people who exist right now do, in fact, make moral judgments. Anti-realists have arguments for that, of course, so it's not like I'm saying it's a big hole in their theory or something. I'm just saying that's the proper place to adjudicate the two theories, not in the realm of evolutionary science.
I think I broadly agree. I'd look to empirical moral psychology to understand moral experience, although the evolutionary models and game theory stuff might shed some light on things as well. Like anything complex, the right overall analysis should be able to make sense of different lines of evidence. I expect you'd agree with all that.
Yes, absolutely - and to be clear, I do recognize that evolutionary psychology provides a boost to the plausibility of anti-realism in an important sense, since it gives a plausible story for why a moral sense would be selected for in the absence of any moral facts. I'm just saying that it doesn't *undermine* the realist framework in any internal sense.
Thank you for this really interesting take - I wasn't aware of this argument. Thinking about it off the cuff, I'm not sure it's completely convincing, although I think it does change one's reasonable credence level. Possible objections and limitations seem to be:
- if Street etc can offer a reason to think that natural selective pressures would be actively tracking certain moral mistruths, as it were, then that would seem to give reason to doubt moral realism. One such explanation could be that _all_ moral intuitions are evolved fictions for their uses as motivators of human actions that benefit survival and procreation. Another motivator for this challenge could be to point to moral intuitions that are widely shared but are in tension with each other in what they point to (e.g., arguably rights to autonomy and duties to respect life, particularly of close kin).
- if one can make a case that moral intuitions really do derive _from_ evolutionary processes, then that would seem to cut off at the root that moral intuitions have merely _survived_ evolutionary pressures. Such a case might also make appeal to the active _benefits_ of many moral intuitions for survival and procreation to raise the probability that they really are the _result_ of evolution, as a opposed to things that may have been otherwise written on our hearts and then merely withstood the test of natural selection.
I think these provide substantive challenges to Hanson's way around the problem (based on your summary - I will go read the Hanson paper now!)
You're definitely right that Street could make a further claim that natural selection actively selects *away* from moral facts - but the problem is that she can't do that without relying on claims about what the moral facts actually are, and that sort of knowledge is necessarily ruled out by her complaint. She can't argue that we have no way of knowing what the moral truths are, *and* that natural selection would lead away from them.
Otherwise, I agree that our moral judgments didn't just appear fully formed one day out of the blue - they developed slowly over time, and natural selection played a role in that process. But that's also true of literally every perceptive or rational faculty, so it isn't an issue by itself; so long as "every step of the way," some truth-apt capacity was growing, then the reliability is preserved. And that's generally the story we give for other things, right? Like, it's not as though some blind animal gave birth to a baby with fully-formed eyes one day. Those visual faculties developed slowly over time under evolutionary pressures. But that's no reason to think that "natural selection explains what you see" in any problematic sense. Hanson's paper does address this point in section four in more detail, I definitely glossed over it a bit to shorten things up.
I'm not sure that she would have to know what the moral truths are, if there are any, in order to claim that it wouldn't be truth-tracking. For one, the burden of proof to show that should probably be the other way round; there seems to be no reason to think it would be _unless_ we suppose that the moral is also that which leads to good evolutionary outcomes (which seems to be the bigger presupposition). Further, I think her case is more to say that, since moral intuitions seem to be caused by evolution, we don't need to appeal to moral realist theories to explain why we have such intuitions.
Sure, you can say it isn't truth-tracking no matter what - but if you want to go the extra mile and say that it would actively lead us *against* the moral truths, then you'd need to have some idea of what they were in the first place. So at best, the explanation just makes anti-realism more plausible, but it doesn't make a problem for realists internally.
Well if there's no reason to suppose it's truth tracking, then it undercuts the use of our moral intuitions as evidence for moral realism. Sure, one could be a moral realist for other reasons. But it does make moral intuitions suspect evidence.
This is interesting! My initial objection to it is that even if selection didn't give us our moral judgments predicatively but simply explains why people who make different judgments reproduce at different rates, it still seems that it would be unlikely that the intuitions that are preserved by the process are the truth-tracking ones. Even if a random mutation did create such an ability to intuit moral facts, if the process that determines whether or not that ability is passed on is insensitive to moral facts, then the chance that the intuitions that survived are the correct ones seems low. Assuming that the mutation initially only occurred in a small part of the population and became more common through selection, there is little reason to expect that the correct moral intuitions would become frequent.
Compare to a modified Trump case: suppose that we want to explain why most journalists covering his press conferences have the views they do, and it turns out that journalists who had different views have been filtered out, and this process has been happening for years, so it affects the people who get hired by news organisations, etc. Whether or not this predicatively explains why they have the views they do, the selection process over time being insensitive to truth does make the journalists less reliable.
You write "assuming that the mutation initially only occurred in a small part of the population and became more common through selection, there is little reason to expect that the [trait it enhances] would become frequent."
Note that the substitution I have made for the phrase "correct moral intuitions" yields a sentence which is the antithesis of a key insight of evolutionary theory; additionally substituting "strong" for "little" yields that insight itself.
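To illustrate that insight, here's a minimal deterministic selection model. The starting frequency, selection coefficient, and generation count are all assumed values, chosen just to show the dynamic:

```python
# A trait conferring relative fitness 1+s spreads toward fixation from a
# rare start, entirely regardless of whether it tracks any truth.
# Parameters below are illustrative assumptions, not measured values.

def next_freq(p: float, s: float) -> float:
    """One generation of haploid selection: carriers have fitness 1 + s."""
    return p * (1 + s) / (1 + p * s)  # carrier share / population mean fitness

p, s = 0.01, 0.05   # 1% initial frequency, 5% fitness advantage
for _ in range(300):
    p = next_freq(p, s)

print(round(p, 3))  # the trait it enhances does become frequent (close to 1.0)
```

The point of the substitution is exactly this: whether the trait becomes frequent depends only on the fitness term `s`, and "tracks moral truth" appears nowhere in the recursion.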
Of course, the trait has to enhance reproductive success (note that this is not the same as truth-tracking, which I think is a question-begging stipulation.) Here's one thought which pushes me towards some form of EDA: If you were to take the violent (by human standards) chimpanzee culture and boost their power to reason about how to dominate and exploit that culture to human levels, I can easily imagine that species self-destructing in internecine warfare. It might, I suppose, be necessary for some form of ethical disposition to evolve in correlated progression with intelligence.
But you have to see how this begs the question against the realist, right? If a realist believes that moral judgments come from some contact with moral truths, then they wouldn't accept that chimpanzees becoming significantly more intelligent or rational would leave their judgments unchanged. And again, you might not find that picture of moral knowledge particularly plausible - but that's a reason to criticize the theory directly, and the evolutionary story itself has no relevance.
Could you explain what you mean by saying it begs the question *against the realist?* I'm outlining an alternative to the realist hypothesis of judgement development that you have outlined in your reply here, one which does not require accepting the premises behind the realist one, but that is the nature of alternative hypotheses, and it certainly does not count as question-begging - that would be like saying that Copernicus was begging the question against Ptolemy, would it not?
(If you could also explain why I can sometimes add emphasis to my Substack replies but sometimes not, that would also help me!)
Ah, I misspoke. I meant to say that, assuming the specific mutation that enabled correct moral intuition only occurred in a small part of the population initially and only became more common through selection, there is little reason to think that the one that becomes most frequent through selection would be the one that enables correct moral beliefs.
The point is that even if selection doesn't explain my moral judgments, it still makes it unlikely that they are made by a reliable faculty, because there is no reason for selection to have favoured the reliable one. This is a move away from the version of the argument that is about the existence of a rival explanation of my moral beliefs that makes no mention of moral facts, but I think it still works.
The best reply to this is probably to deny that there is some moral faculty and instead insist that moral judgment comes from a generic faculty for detecting reasons, whose correct operation is adaptive in other cases, such as epistemic or prudential reasons. But I doubt that works.
I get what you're saying, but I think this is still showing a little bit of the bias I talked about in the piece: You should only be confident that natural selection undermines the chances of the views being accurate if you *already* assume those views aren't truth-tracking. But if you have no insight into the truth of the belief, then just finding out that it was selected for shouldn't do anything to shake your confidence in it directly. So in the Trump case, if you found out that all the reporters had been filtered out in such a way, on down into new generations and so on, you still have no reason to think they aren't trustworthy until you learn how it is that they do journalism - if it turns out they're all great journalists, then the fact that great journalism was selected for shouldn't undermine it. Of course that's not going to be the case with Trump supporting journalists, haha, but we don't have the same reason here to think natural selection is actually selecting *against* true belief.
I guess I think that if selection is based on something tangential to truth, and the true belief was initially the result of a faculty that, even if reliable, was the product of a random mutation, then the chance of that faculty becoming frequent is low - I’m not assuming the individual faculty isn’t truth-tracking to begin with, only that those faculties won’t become more frequent over the generations. Maybe the objection is that if we start out by trusting our faculties, then we can appeal to our intuitions to justify the claim that true moral beliefs are adaptive, using specific cases. But I think this would be like appealing to the fact that we know we exist in reply to fine-tuning. We can ask how likely it would be that our moral beliefs would be adaptive if they reflected moral facts vs. if they didn’t, and since there is no reason independent of our specific intuitions for expecting a harmony between moral facts and what helps us spread our genes, we should update against realism on the evidence that our intuitions do help. This doesn’t require suspending our actual moral beliefs. I’m not sure that’s what you were getting at, but I hope it’s relevant.
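The Bayesian update gestured at in this comment can be sketched explicitly (a sketch in my own notation, not the commenter's): let R = "moral realism is true and our intuitions track the moral facts" and A = "our moral intuitions are adaptive". Then:

```latex
% Posterior odds on realism R after observing A ("our moral intuitions are adaptive"):
\[
  \underbrace{\frac{P(R \mid A)}{P(\neg R \mid A)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(R)}{P(\neg R)}}_{\text{prior odds}}
  \times
  \underbrace{\frac{P(A \mid R)}{P(A \mid \neg R)}}_{\text{likelihood ratio}}
\]
```

On the debunker's premises, the anti-realist story makes adaptiveness nearly guaranteed, so P(A|not-R) is close to 1, while under realism adaptiveness requires an unexplained harmony between moral facts and fitness, so P(A|R) is lower; the likelihood ratio is then below 1, and observing A lowers the odds of realism without requiring us to suspend any particular moral belief.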
Hey, this is great! Really interesting. I am just now getting into moral realism/anti-realism because I am taking illusionism much more seriously; when I was a qualia realist, "pain = badness" seemed good enough, but now I have lots more to think about. I'm probably going to go through your whole archive on moral realism, anything in particular you think is strongest or most interesting from an illusionist perspective?
I appreciate it! I haven't written anything on the intersection of illusionism and ethics, but I do have a draft I've been working on and might publish soon. But if you're interested in ethical naturalism generally, I think this might be a good intro: https://bothsidesbrigade.substack.com/p/human-rationality-acting-well-and and I also wrote a three-part series a while back that ends with this one: https://bothsidesbrigade.substack.com/p/moral-realism-turns-ethics-into-a If you ever have questions about it, always feel free to tag me! A lot of people think illusionism and moral realism are an odd fit, but I actually think they share some similar sorts of commitments.
>a more robust predicative reading — one that frames evolutionary pressures as literally “changing our minds” from one judgment to another — is pretty clearly off the table
It is *mutation* and natural selection, and the former does quite literally change your mind. In the everyday context of the intuition pump, there are limitations on how far you can go in either direction - there aren't enough humans that you can select a crowd with *any* beliefs, nor can you take a person and convince them of anything. That's what makes the distinction make sense, and it's not really there for evolution. The "predicative explanation" for some belief very well could just be "because you're hardwired to believe this specific thing", and it seems like at least some moral intuitions work this way too (e.g. the Westermarck effect). So I think those robust readings are on the table.
That said, I agree that the *generic* evolutionary debunking in some sense doesn't add anything over the knowledge problem - it's a way of forcing the issue by making it more concrete. You *should* have already understood everything you'll get out of EDA from the knowledge problem alone. Unfortunately, it's not guaranteed that you *did*, so it's not correct to think "I was fine with the knowledge problem, so I don't need to worry about EDA".
If you haven't already, you should check out section four of her actual paper - I can see what you're saying, but I think she shows pretty convincingly that it can't work that way. (And as far as I can tell, most evolutionary biologists tend to agree.) But otherwise, I agree that it does help force the issue, and raises the stakes of failure for the realist. I just think there are a lot of people who think there's something uniquely problematic about the evolutionary context, and that things would be fine without it.
I had a look at the paper and I'm not convinced. If we take her response in section 4 seriously, it implies that some facts about individuals *can't* be explained predicatively. Which facts exactly depends on the details of the account, but "that you have the genes you have" would commonly be among them. And other facts for which those genes themselves do (a lot of relatively direct) explanatory work, like the hard-wired belief, are in a kind of twilight zone.
Again, this division into quantificational and predicative makes sense if the existence of the people in the pool you're selecting from is already explained in some other way - then you avoid problems like the previous paragraph.
Really, reading the paper I now put even less stock in the argument, because of the heavy use she makes of "whether a counterfactual creature counts as being you". (Notice again that this is not needed for the Trump attendees, and that's closely related to their independent existence.) I lean toward anti-realism about those sorts of facts, so I don't accept her account, on which they make a difference to knowledge.
I find the argument a bit hard to follow. Is the argument supposed to be something like: "humans evolved eyes for survival reasons, yet our eyes give us true knowledge of the world," and that in a similar way we may have evolved some ability to "sense the moral facts" for survival reasons but with such an ability we can now have moral knowledge?
That’s definitely somewhere someone could take the argument, and I made a similar sort of defense here: https://bothsidesbrigade.substack.com/p/why-evolutionary-challenges-to-moral But Hanson’s point is more general, just that we need to focus on the reliability of the faculty itself and not its evolutionary origins in particular.
It is known that the ichneumon wasp will paralyse caterpillars and lay its eggs inside so that the caterpillar is eaten alive while paralysed. If there was a giant ichneumon wasp that did the same thing to human beings, we might call it "evil." But if we could communicate with it, it is obvious that nothing we could say would convince the giant wasps to stop. To the wasp, laying its eggs and getting along with other wasps would be the most moral thing. The core intuition of evolutionary debunking is that from a God's eye view the situation between humans and the giant wasps is completely symmetrical. And you could imagine arbitrarily intelligent beings with arbitrarily different moral values. I personally don't find this intuition challenged.
Why would it be obvious that nothing we could say could convince the giant wasps to stop? Human beings "naturally" prey on other animals in all sorts of horrible ways too, but it's also the case that many humans, through a straightforward application of basic moral principles, come to the conclusion that harming animals unnecessarily is wrong. Why would we assume that a sufficiently intelligent non-human species wouldn't also recognize those basic moral principles as true, especially upon our complaints about their harming us?
We feel sympathy, for e.g. mistreated animals, because of certain capacities for empathy that we evolved for interfacing with other humans. But that is a purely contingent fact. That is the way pro-social behaviours happen to have been hard wired into us. There’s no logical reason it had to be done that way. It could be that pro-sociality among the giant wasps arose from identifying a particular pattern on their exoskeletons and if you lack that pattern it has no pro-social feelings towards you at all. Edit: Btw if moral realists have a scientific story that says that pro-sociality can only, as a matter of fact, evolve in one particular way then that would put some meat on the bones of the claim that morality is objective. But if moral realists don’t make that scientific claim then I’m not really sure what they’re saying. (If this was true it would be very important for AI alignment debates because it would mean we don’t need to worry about unfriendly AIs since sufficiently intelligent beings would automatically arrive at the right moral views.)
I guess I'm still not clear on why you would assume our empathy for non-human animals doesn't have the sort of rational component that any sufficiently intelligent species of animal would also develop. Certainly, when *I* ask myself why I don't want to see other animals suffer, it's because, at least internally, I take myself to be recognizing that suffering is bad and that I ought to oppose it when I can; my empathetic response is derivative of that objective judgment, not its basis. Of course, you can argue that I'm deceived when I say this, and that I'm systematically misrepresenting my own internal reasoning process. But that's a claim you'd need actual evidence for, it certainly isn't the default assumption.
I guess I just find these metaphysical debates very hard to grasp. If your position is that beings with superhuman perception and logical powers would necessarily agree (or could be made to agree through argument) with suffering being bad, or vice versa, then that is an interesting substantive claim. But if no such scientific claim is being made, then I don’t even know what is being claimed by saying that suffering is objectively bad. It just seems to be a verbal garnish to our value judgements. Like if a theist says that there is objective morality because god exists, that gives content to the notion of objectivity in this context.
Very interesting! I must admit that it feels somewhat like a trick, though I'm not sure why. But consider the following case:
You live in a totalitarian state where the government really wants people to believe that Baby Shark is good. For this reason, they execute anyone who doesn't like the song (though we can imagine that no one knows about this, since fear would introduce a predicative element). Not only that, but it turns out that the children of parents who like Baby Shark are themselves more likely to like the song. Now you find yourself in the state, believing Baby Shark to be good. After a while you find out the dirty secret!
It seems obvious to me that you should now not trust your judgement at all! You wouldn't have existed if your ancestors didn't like baby shark, and the fact that they do very much increases the likelihood that you do. Even if you have methods for judging song-quality, you should be skeptical of those, since had your ancestors been inclined to use other methods, you would not have existed.
Maybe this case is too similar to the moral one, but I had a hard time coming up with an example where the predicative and quantificational aspects come apart.
So, I think this is the exact kind of scenario to consider if you really want to test the argument, but I also think it’s a mistake to choose an aesthetic preference to center it on, since that already biases things in an anti-realist direction. Instead, it would be better to imagine something like: Let’s say the Democrats take power in 2028 and carry out their secret plan to exterminate all Republicans. Maybe they even do a few rounds of it over the generations, just to be safe. If you’re a Democrat in 2100 (or whenever), and you learn about this process, should that make you doubt that the Democratic Party’s views are true, or that you have a reason to believe them?
In my mind, the answer is obviously no! What you should do instead is just ask, “Well, were those Democratic voters back in 2028 correct? Did they have a good way to understand the political realities and reason their way to reliable conclusions?” Because, if so, the fact that their correctly-produced views were maintained by the purging process shouldn’t matter at all. You should maybe consider yourself “lucky,” in some sense, that correct views were preserved, but (depending on how you stipulate the plot) it might not even be particularly lucky at all, since there might be a clear reason why the Democrats would select for correct views even if that wasn’t their deciding criterion.
So if you think roughly the same thing in the moral case — that we really have developed the ability to gain moral knowledge — then the fact that the people who have that ability have been kept around shouldn’t be upsetting, especially if we have a good theory (and we do) for why broadly accurate moral beliefs would be preserved by natural selection. But even without that story, if we have separate good reason to believe the beliefs are true, then their selection shouldn’t reduce our confidence in them. Things don’t become less likely to be true, just because they’re also useful.
Wow. Quite involved. Not sure of the motivation behind this piece, but the content is interesting enough. Assigning biology to a seemingly non-biological process. Perhaps we’re just using biological terms. Though unfamiliar, I am nonetheless interested. However, if, as I surmise, the intent was to understand “Trump” voters, or why Trump even exists as a political phenomenon, or to take shots at the same, then… it is almost delusional and narcissistic in concept. In fact, just this attempt at explanation is part and parcel of the reason he exists. The observer effect, as it were. How those who cannot understand the “Trump Effect” amaze me. While I am not a Trump “voter” and am not part of the “MAGA” movement… I come from that space, those people are “my people,” and I understand them. While I have paid a price for my opinions… their intentions, objectives, or “evolutionary morality” are not a mystery.
Jesus, BSB, this is literally the Ken Ham style intelligent design argument.
Certain Christians say that while natural selection can explain why animals with eyes would eventually outcompete animals without eyes, it cannot explain where the first animals with eyes came from; therefore, God must have done it.
You're just taking the same incredulity from anatomy to psychology. If you understand how an eye could evolve, you should understand how moral sensibilities could evolve.
Now, of course, the fact that a group was selected to contain people with certain opinions is not direct evidence against the opinion. A conference of economists will tend to be full of people who believe basic economic principles, because people who don't either never become economists or don't get invited. But the basic beliefs of economists are in fact correct.
However, this looks to me like another failure to track offense and defense. The classic realist line is to argue something like:
1. There are some things that seem wrong to almost all humans.
2. This consensus is surprising and requires explanation.
3. For lack of any other explanation we must accept that some kind of objective moral facts explain the consensus.
The EDA intervenes to offer an alternative explanation and is meant to activate a preference for parsimony. It is not that something we were selected to believe *couldn't* also be objectively true, just that we don't *need to* invoke objective truth to account for those thoughts.
This has nothing to do with incredulity or skepticism towards evolution? No one is claiming that our moral sense "didn't evolve" - I'm a naturalist, so I think literally every aspect of human beings is a product of evolution. The claim is just that a capacity for some judgment being an evolved capacity doesn't automatically undercut its reliability. And while you're right that evolutionary explanations can help raise the plausibility of anti-realism - I talk about this in the last piece I wrote - the EDA is very clearly presented as something that undermines realism itself. That's what the "debunking" means! And my point is just that it doesn't do this.
Furthermore, I'm not sure I've actually ever encountered anyone who took the strong debunker position you've described here. Sometimes people talk as if they do, but usually in contexts where I tend to assume they are simplifying, exaggerating, skipping steps, or getting confused. The strong debunking position is too foolish to be worth the amount of effort put in here.
If someone actually argues:
1. All evolved beliefs are false.
2. (Realist) Moral beliefs evolved
3. (Realist) Moral beliefs are false.
Just point out that many other opinions and perceptual faculties exist which are not false. We evolved to think of external physical objects as real, and indeed they are, so clearly some evolved beliefs may happen to be true.
Your introduction frames your piece as a rejection of a very broad genre of evolution-related arguments. It explicitly covers not just arguments about moral knowledge being impossible or improbable, but also arguments about "parsimony" and "explanatory power" for moral intuitions -- your own terms.
Then you go on to talk about the two types of explanations -- explaining how a group got its members vs. how the individual members got their traits -- and basically say the force of evolutionary arguments against moral realism depends on equivocation.
But that's false, at least if we include the arguments about explanatory power and parsimony. All we need to show is that evolution is sufficient to explain why the people who are currently around happen to be the ones with moral sentiments, for the argument that realism is *unparsimonious* to go through.
Interesting take, however I think it falls apart on the grounds that it's far too anthropocentric. I'm starting my rebuttal to this by referencing some things from your first work in the series.
> A reasonable standard to have for any explanation is that (absent some irrelevant complications) the event being explained wouldn’t have taken place in the absence of whatever it is you’re relying on to explain it; in other words, if X truly explains Y, then it should generally be the case that, in the closest possible world in which X is not the case, Y is also not the case. Explanations that struggle to meet this requirement often strike us as intuitively absurd and unreliable. Saying that water on the floor is explained by a broken sink, for example, but that the water would be there even if the sink wasn’t broken, is just failing to explain the water on the floor at all.
> But if, on the other hand, we try to explain the development of our aversion with the fact that rotten meat is pathogenic, the counterfactual test is now successful. In the closest possible world in which that pathogenic quality is absent — a world in which rotten meat posed no major risk of disease — evolutionary pressures would not have led us to develop the aversion in question. For this reason, the fact that rotten meat is pathogenic functions as a legitimately better explanation for our aversion than all the first-order facts that make up its supervenient base. Or, in other words: According to our best explanations, it really is the case that we developed the aversion we did because of some fact about what is and isn’t pathogenic.
As I understand it, this passage from your last post has quite a lot to do with the underlying thesis of this post. E.g., the EDA fails because moral realists can cede that evolution did result in our moral intuitions; it's just that moral realism creates evolutionary pressures in and of itself.
My question is thus: Is the existence of "real moral truths" contingent upon the existence of Homo sapiens? Did moral truths exist when the dinosaurs were roaming the land?
After all, if the moral realist argument is that moral truths are objective, then why should they only operate on humans? We can see, for example, that evolutionary pressures have resulted in other species mirroring some basic aspects of human morality (e.g. self-sacrifice for the sake of one's kin, reciprocity towards strangers, and so on). Does this convergent evolution speak to some sort of higher moral truth?
If it does, then what would that say about those aspects of human intuition that are less widely spread across the animal kingdom? Humans have an aversion to raw meat as well as rotten meat, but most animals are omnivores and will eat raw meat. Some animals preferentially eat rotten meat. If "pro-social" intuitions are more common across the animal kingdom than "aversion to raw meat", would that mean that the morals behind those pro-social intuitions are "more real" than the ones behind the aversion to raw meat?
The evolutionary debunking argument proposes answers to the questions: What are our moral intuitions? What are they for? Why would they be unreliable when our factual intuitions are reliable? Why would we believe in them so strongly if they’re wrong?
Without understanding evolution, the most obvious answers are: “They’re intuitions about moral facts. They’re for knowing moral facts. They’re not unreliable.” In other words, moral realism is the default.
The evolutionary debunking argument does not directly refute that “we as individuals gain moral knowledge”. It merely creates a highly-plausible competing theory. And that makes all the difference in the world.
… And now I am going to reply to my own comment with the much-longer response I originally wrote. Read it if you want, or don’t :)
And here is what I originally wrote. It's my version of the evolutionary debunking argument, focused on refuting scenarios where our moral intuitions ended up becoming truth-tracking despite having evolved. Disclaimer: I have only read BSB's post, not Hanson's original argument.
* First premise: altruism is a trait that appears to be ‘hardcoded’ and was actively selected for by natural selection.
For comparison, the set of principles guiding my behavior includes both “altruism is good” and “the Pythagorean theorem is correct”, but natural selection did not select for belief in the Pythagorean theorem. It only selected for more general abilities like reasoning and teaching/learning that indirectly resulted in me believing in it. Moreover, those general abilities appear to be largely optimized for truth-seeking, so I can be reasonably confident that my belief really is true. (Though, it has been hypothesized that evolution prioritized social conformity over strict truth-seeking.) However, altruism is such a basic emotion that it doesn’t seem it could be derived from more general faculties.
* Second premise: hardcoded traits like altruism are the axioms we use to make moral decisions.
We use our general reasoning abilities to build complicated logical frameworks on top of those axioms. For the same reasons as with the Pythagorean theorem, I can be at least somewhat confident that it’s true the frameworks follow from the axioms (albeit less confident because the axioms are vaguer and the proofs less rigorous). But reasoning can’t justify the axioms themselves.
To be fair, reasoning can’t justify axioms in mathematics either. These days there are many sets of mathematical axioms to choose from. But pure mathematics gets around this by only caring which axioms produce interesting results rather than which ones are ‘true’. And applied mathematics has physical observations to provide evidence for whether the chosen axioms are appropriate. Neither of those approaches works for morality.
Also note that some of those sets of axioms lead to geometries where the Pythagorean Theorem is false. I’d argue that this makes for a cautionary tale about treating results derived from intuition as if they’re necessary. On the other hand, mathematics does have an inherent structure where there are only a few simple and consistent systems of geometry, and one of those systems follows the Pythagorean theorem. In that sense, the Pythagorean theorem is quite robust to variations in axioms. (I’ll argue later that it matters whether morality is robust in this way.)
* Third premise: The simplest explanation for why altruism was selected for does not involve truth.
We can split altruism into two categories.
First is altruism towards kin, e.g. a parent’s attitude towards their child. This is really easy to explain evolutionarily – it obviously helps pass on genes – and it’s present in a wider range of species.
Second is more general altruism towards any other creature of the same or different species. This is less common and its origin in humans is more controversial. Evolutionary psychology has competing theories for that origin, such as group selection, kin selection, reciprocity, and cultural selection. To the extent the origin is cultural, it may have been influenced by general truth-seeking abilities (after all, the Pythagorean theorem is also culturally transmitted). On the other hand, some nonhuman species form social structures which include various forms of non-kin altruism. These species can perform some semblance of cultural transmission, but nothing very robust, and they lack the ability to intentionally drive culture based on reasoning.
For our purposes, I’m not sure the difference between kin and general altruism really matters. Even if general altruism is the objectively correct generalization of kin altruism (and even if there is some objectively correct reason it *should* be generalized), that would just leave kin altruism as an unjustified axiom.
* Conclusion: the simplest explanation for why we have the moral rules we do does not involve truth (at least not for its most basic axioms).
I say “does not involve truth”; that doesn’t mean it’s uncorrelated with truth.
Suppose that humans’ understanding of morality is broadly correct. Then, by assumption, individual survival and societal growth really are morally good outcomes. Because the human sense of morality was optimized for those two things, it’s not surprising that it would arrive somewhere near true morality. And once we got near the truth, it’s not too surprising that refining our sense of morality by applying general reasoning principles would get us closer to the truth. Maybe not all the way to the truth, but of course morality is not a solved problem. Therefore, evolution does not disprove moral realism.
But that was a strong starting assumption! Suppose instead we only assume that there is _some_ true morality, not that humans necessarily understand it at all. Then it becomes quite a coincidence that true morality tracks evolutionary fitness so well.
Suppose that our evolutionary starting point was completely off. The aforementioned refinement might have reliably moved us in the direction of the truth, arguably (depending on your priors for whether true morality is consistent or simple in the first place). But, empirically, refinement doesn't move our moral beliefs very far; we still depend deeply on our emotions to judge morality. So our endpoint would likely still be completely off. Yet we would likely still _believe_ intuitively that we had found the truth – or at least, I don’t see what would be different in such a world that would cause us to not believe that.
Therefore we should not give any credence to our intuitions. Therefore we should not believe that our sense of morality is anywhere near correct. Therefore it seems superfluous for the concept of true morality to even exist.
On the other hand, if true morality does exist, _and_ if we make the shaky assumption that true morality is consistent and simple, then I do see two potential sources of evidence for whether our sense of morality matches true morality.
The first source of evidence is that we can simply judge how consistent and simple our moral intuitions are. Unfortunately, I think our moral intuitions tend to be rather inconsistent and sensitive to seemingly irrelevant factors, which is evidence against them being correct. YMMV.
The second source of evidence is more promising. Our intuitions are consistent and simple to _some_ extent. If morality is like geometry, then there are only a few possible systems that are consistent and simple; any set of starting axioms must either be inconsistent, be superfluously complex, or lead to one of those systems. If so, then since by assumption true morality must be one of those systems, it would be much less surprising if we coincidentally landed on the right one. We can evaluate this by thinking about the range of alternative possible systems and how meaningfully different they are. YMMV on this too.
I like to describe human natural selection with fewer words. A perfect man and woman fell from grace (rejecting truth) wanting to go it their way (mini-gods). No human actually comes close to the intelligence it really takes to create even a lone rose or to station the sun in the sky. So natural selection became, to me, another way to say fallen man killed (murdered) other men for both position (pride) and to the winner goes the spoils (greed). Man had to exit Eden (a sinless environment where natural selection doesn’t exist) to enter a world that he now could rule mostly by brawn and without any rules. Nature paralleled man’s fallen status where the strongest mostly used, abused and killed and not only to survive. Over time it dawned on some humans there existed something which actually topped themselves and they became mere humans, again, as a result of hard living coupled with this inspired truth. (Humility meets its Maker). Man cried ‘Get us back to Eden’ because evil liars (dictators over man) really rules this world (fallen angels). Maker feels compassion for fallen man, who was tempted by an evil lie over truth, so Maker interacts with man. One day Maker decides man is so lost He came to earth as a man, Himself, to save mankind.
Those who heard Him became the selected. Those still wanting to go it their way (mini-gods) killed the Man.
The Maker raised the Man to life again because, in truth, God and not man governs life. And man’s natural selection is just man trying to explain the resulting predicament we got our mini-selves into.
Or to put it a bit differently: in the Trump case, our selective explanation is compatible with their having reliable fundamental beliefs about politics and ethics, it only targets quite high-level beliefs, and it also involves only one iteration of selection. If you have many more iterations, acting on more fundamental beliefs, then the debunking argument looks more plausible. Why would this mechanism filter out the bad views and leave us with the good ones if it's not sensitive to truth?
> However, this way of thinking very obviously begs the question against moral realism, which necessarily involves the belief that (at least some) widespread moral judgments really do come from making contact with objective, mind-independent moral facts. If you think that sort of picture is reasonable, then the truth of the moral judgments in question doesn’t need to come from natural selection — those judgments will already be justified by whatever independently truth-tracking ability the realist has in mind, and natural selection just explains why that ability became widespread over time.
Let's grant, for the sake of argument, that there are mind-independent moral facts.
The question then is, where would the ability to make contact with them come from? How would this truth-tracking ability happen to exist in humans?
I'm going to postulate an extremely controversial thing here - about as controversial as Trump not being the best possible statesman: all our mental properties are the result of evolution through natural selection. If humans have some truth-tracking ability regarding the moral facts, it has to have evolved. And so if our moral intuitions have evolved, and the process that was guiding our evolution was not optimizing for correspondence to objective morality, then it's quite reasonable to assume that we are wrong about the moral facts.
> When we say, for example, that the long-necked-ness of giraffes is the result of natural selection, we are saying, roughly, that the ancestors of giraffes were such that, those among them who had slightly longer necks than the others tended to produce more offspring, and that this led to a situation where a greater and greater proportion of the creatures that were born had long necks, which eventually led to the long-necked giraffes that exist now existing instead of some other creatures with shorter necks. This is a claim about the process that led to there being individuals who had the trait in question, not a claim about the process that led to these individuals having this trait.
It's very much both.
The Trump rally analogy is a bit confusing. Here is where it breaks. Suppose that this particular Trump rally didn't take place. As a result, there wouldn't be this congregation of Trump supporters - true. These supporters, however, would still exist, because it wasn't this particular rally that turned them into Trump supporters.
Now, suppose that evolution through natural selection didn't take place in our universe. Would there still be any individuals with the long-necked-ness trait or, for that matter, any necks at all? No, there wouldn't be, because evolution through natural selection is directly causally responsible for them.
> So regardless of whether we end up calling that sort of process a predicative explanation or not, a more robust predicative reading — one that frames evolutionary pressures as literally “changing our minds” from one judgment to another — is pretty clearly off the table.
It's technically true that evolution doesn't make a divine intervention every time I'm thinking "What is the right thing to do?" But neither does it need to, because evolution through natural selection has designed my mind to think the way it thinks.
Consider how we arrive at our moral judgments. There is explicit reasoning going on, reflecting on our knowledge about the world, but ultimately it bottoms out in our core moral intuitions. These intuitions are the result of natural selection.
You say: "And so if our moral intuitions have evolved, and the process that was guiding our evolution was not optimizing for correspondence to objective morality, then it's quite reasonable to assume that we are wrong about the moral facts."
But isn't this the exact error that Hanson is pointing out? The fact that a capacity was selected for on the basis of non-truth-tracking reasons isn't by itself reason to believe the capacity is itself non-truth-tracking; that move only works if you preemptively assume the dispositions being acted upon by natural selection are themselves unrelated to moral truth, which is begging the question very directly against the realist. If a realist has a plausible theory for how we develop our moral beliefs in a truth-tracking way (which every realist is going to say they do have) then the role of natural selection in the propagation of that capacity doesn't matter in the least.
So if you want to adjust the Trump rally analogy to more closely fit the dynamic you're talking about, we can: You could imagine that campaign staff just literally murdered everyone in a small town who wasn't a Trump supporter, and then those who survived went on to have kids who were themselves raised to be Trump supporters. If you were one of those kids, and you supported Trump, then when you find out what happened a generation ago, your first thought still shouldn't be "Well now I know I must be wrong!" Instead, it would obviously be "Well wait, how did the people who weren't killed come to have their beliefs about Trump?" If they were all the smartest, most well-informed people in that town (obviously they wouldn't be, haha, but just for the sake of argument) then the kid shouldn't be particularly concerned about the purging - it should only bother them if they think the process by which the Trump supporters came to their beliefs in the first place was faulty. But realists don't have to accept that in the case of moral beliefs, obviously!
> The fact that a capacity was selected for on the basis of non-truth-tracking reasons isn't by itself reason to believe the capacity is itself non-truth-tracking; that move only works if you preemptively assume the dispositions being acted upon by natural selection are themselves unrelated to moral truth
I think there is some kind of misunderstanding between us, but I can't put my finger on it. Can you point out where exactly you disagree with me?
1. Having a capacity for moral truth-tracking (MTT) is very improbable. Most things don't have it. Therefore P(MTT) ~ 0
2. If this capacity was optimized for via an optimization process (OP), that would reduce this improbability: P(MTT|OP) ~ 1
3. However, until we actually observe that OP exists and optimizes for MTT, the improbability of our overall theory is still very high:
P(MTT|OP)·P(OP) ~ 0
4. We've discovered the optimization process that produced us and didn't find strong evidence that it optimizes for moral truth-tracking. We kind of found the opposite. The process itself is quite amoral - see all the horrible things animals do to each other - and its sole goal is optimizing inclusive genetic fitness.
5. We do not have good evidence in favor of the idea that optimizing inclusive genetic fitness is correlated with MTT.
6. We do have evidence that optimizing inclusive genetic fitness is correlated with developing the kind of moral intuitions we developed, regardless of whether they are correlated with the objective moral truths. As a result, the improbability of MTT only rises.
7. It's still possible that our moral intuitions just so happened to correspond to objective moral truth by sheer coincidence or some additional factor that we haven't discovered yet. We may even come up with some plausible-sounding coincidence story (PSCS) such that
P(MTT|PSCS) ~ 1
8. But until this story is proven true (or receives significant evidence in its favor), the total probability of the theory is still quite low:
P(MTT|PSCS)·P(PSCS) ~ 0
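For what it's worth, the overall shape of steps 1-8 can be sketched numerically. This is only a toy calculation; every number in it is an assumption made up for illustration, not an empirical estimate:

```python
# Toy numbers for the argument above; all values are illustrative assumptions.
p_mtt_given_op = 0.99  # step 2: MTT near-certain *if* a truth-optimizing process (OP) exists
p_op = 1e-6            # steps 3-6: no strong evidence that such a process exists

# Steps 3 and 8: the probability of the overall theory is bounded by the
# improbable postulate, no matter how good the conditional probability looks.
p_theory = p_mtt_given_op * p_op
print(p_theory)  # still vanishingly small
```

The point of the sketch is that a near-1 conditional probability does nothing on its own; the prior on the postulated process dominates the product.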
> If they were all the smartest, most well-informed people in that town (obviously they wouldn't be, haha, but just for the sake of argument) then the kid shouldn't be particularly concerned about the purging - it should only bother them if they think the process by which the Trump supporters came to their beliefs in the first place was faulty.
On the other hand, if they got their beliefs from their parents, who got them from theirs, and so on, and the initial beliefs were just adopted at random, the situation looks pretty dire. And this is what evolution through natural selection is telling us about the sources of all our qualities.
Again, a moral realist can postulate some additional principle that counteracts that. But then this principle has to be proven to a comparable degree with evolution through natural selection to actually decrease the improbability of the overall theory.
I mean, I get off at the very first step! I don't think there's anything at all improbable about the possibility of human beings developing moral knowledge, since I think the recognitional capacity for moral facts is just a general aspect of the ability to observe reality and rationally reflect. Other non-naturalist views might face more of a challenge there, but even then, I don't think they give us reason to think that moral knowledge is "very improbable" at all. It's fine if an anti-realist believes that on the basis of their own broader theory, of course, but it's not something that can be used as an uncontroversial assumption in some *further* argument against realism. And to the degree that it can be shown to be reasonable, all the force of the evolutionary argument would come from that demonstration, not from any further facts about evolution itself. And that makes the evolutionary argument generally superfluous.
I think your second response illustrates this issue well - you assume the initial beliefs are "adopted at random," in which case obviously natural selection wouldn't somehow pick out the one that happened to be true. But moral realists are not going to accept that our original moral views are random in that way! So again, the conversation refocuses on the possibility of moral knowledge, rather than the evolutionary story that takes place alongside it.
> I mean, I get off at the very first step! I don't think there's anything at all improbable about the possibility of human beings developing moral knowledge, since I think the recognitional capacity for moral facts is just a general aspect of the ability to observe reality and rationally reflect.
Oh, but that's not at all what 1. is saying! The point of 1. is to establish the *complexity* of moral truth-seeking. It's not a simple property that we can expect a random object to possess, like "being affected by gravity". It's a property of minds, and only a very small minority of objects are minds, and minds and their properties are complex.
It's great that you invoked truth-seeking about physical reality as an example here, because it's, of course, also very complex for basically the same reasons. We need all this machinery: organs of perception, and brains that can generalize from observations. Can we agree that moral truth-seeking and physical truth-seeking are about the same complexity? Then, I think, we are in agreement about 1.
This, of course, doesn't mean that we can a priori conclude that *humans* can't have either of these properties. But it means that there has to be some *improbability reduction* - a causal process in reality that ensured we have such properties.
With this in mind, can you once again point out our disagreement in the list above?
> you assume the initial beliefs are "adopted at random," in which case obviously natural selection wouldn't somehow pick out the one that happened to be true. But moral realists are not going to accept that our original moral views are random in that way!
I think I understand your position better now. But doesn't it directly contradict our understanding of evolutionary biology? The random-mutation-plus-selection framework seems to be just common knowledge at this point.
Is this an exception you are making for moral truth-seeking in particular? Or is it a general principle that you also adopt for other properties, like "long-neckedness"?
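To make the mutation-plus-selection picture I have in mind concrete, here is a toy replicator-dynamics sketch (the starting frequency and fitness bonus are assumptions chosen purely for illustration): any trait that confers a reproductive advantage spreads toward fixation, and nothing in the dynamics ever references whether the associated beliefs are true.

```python
# Deterministic replicator dynamics for one trait; truth never enters the update.
freq = 0.001          # the trait starts as a rare random mutation (assumed)
FITNESS_BONUS = 1.5   # carriers leave 1.5x as many offspring (assumed)

for generation in range(50):
    mean_fitness = freq * FITNESS_BONUS + (1 - freq) * 1.0
    freq = freq * FITNESS_BONUS / mean_fitness  # standard replicator update

print(round(freq, 3))  # the trait is near fixation after 50 generations
```

The same loop runs identically whether the trait is "long neck", "fear of snakes", or "a given moral intuition"; only the fitness bonus matters.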
"It's technically true that evolution doesn't make a divine intervention every time I'm thinking "What is the right thing to do?" But neither does it need to, because evolution through natural selection has designed my mind to think the way it thinks."
With the caveat that I would substitute "...has biased my mind towards certain intuitions and predispositions", I agree. I cannot figure out why anyone with a reasonable grasp of evolutionary theory would think that an EDA must frame evolutionary pressures as literally “changing our minds” from one judgment to another; this is as clear a straw man as I have seen in a while.
But natural selection has *not,* in the causal sense you're implying here, biased your mind in any one direction. All it's done is preserve whatever dispositions bias you in one direction or another. That's a very, very big difference, since the epistemic danger of that preservation is only as great as the epistemic danger of the bias itself. And that's the whole point, right? What matters is the reliability of what natural selection preserved, not the truth-tracking nature of natural selection itself.
I don't doubt that you believe your first sentence here, but I'm sure you understand that I don't regard that as settling the matter; I would need to see a persuasive argument, and my other posts are sufficient to explain why I don't see that in Hanson's paper.
I have to say that I don't get what you are saying in the last sentence - what does 'reliability' mean here?
If you haven't already, I'd recommend reading Hanson's full paper - she goes into a fair amount of detail defending her claim, which is fairly uncontroversial (or at least likely the majority view) in philosophy of biology today. I think you're taking her as saying something much stronger than she actually is, something like "Our moral sense arose entirely independent from evolution and natural selection just preserved it." But that's not her claim at all, since (obviously) every capacity we have is a product of evolution in some sense. Rather, she's just saying that evolutionary pressures by themselves aren't the sort of explanation that epistemic concerns center on.
I have read it, and I am pleased to report that your article is an excellent summary, and your choice of passages to quote seem to accurately present the author's key points.
One sort-of corollary is that the concern I have with the author's quantificational-predicative distinction was not assuaged in the broader text - for more details, see my reply in the thread where I first raised it.
https://bothsidesbrigade.substack.com/p/maybe-evolutionary-debunking-arguments/comment/228003330
Your first quote from Hanson is problematic, as its summary of natural selection omits two key concepts from evolutionary theory: firstly, the reproductive inheritance of traits with variation, and secondly (ironically) the process and role of selection itself. Consequently, the conclusion of this passage, "this is a claim about the process that led to there being individuals who had the trait in question, not a claim about the process that led to these individuals having this trait", is, at the very least, irrelevant: the complete theory of evolution by natural selection makes empirically-justified claims both about the process that led to there being individuals who had the trait in question, and also about the process that led to these individuals having this trait.
Armed with a proper conception of evolutionary theory, we can see that the Trump-rally analogy is not an analogy at all - and if it were, Hanson's argument would be "devastating" not just for EDAs, but also for the theory of biological evolution by natural selection. Beware of arguments that prove too much!
Whether natural selection can explain why individuals have certain traits is an open debate in philosophy of biology, and I'm personally convinced by the arguments on the "no" side (although I'm certainly no expert). But Hanson shows in her paper that even if you do accept this causal story, it isn't enough to undercut the epistemic foundation any more than the purely quantificational reading would.
The passage from Hanson's paper we are discussing here presents an argument leading to a conclusion. Strictly speaking, it is not wrong: the selection part of evolutionary theory, when taken out of the context of the full theory, does not by itself explain the process that led to the individuals in question having the trait in question. This is of no consequence, however, as the full theory does provide that explanation.
Furthermore, Hanson's conclusion here seems central to her thesis, and so, at least until I see a reasonable argument for focusing only on what natural selection in the narrow sense can do, while ignoring evolutionary theory as a whole, I am disposed to dismiss this paper.
But there's just no plausible argument that evolution "taken as a whole" could explain why individuals have the traits they do, except through the quantificational process she's describing. And as she shows in her paper, that quantificational process isn't enough to undercut epistemic warrant *even if* you take it to be predicative as you are.
On the contrary (or so I say), understanding evolution begins with understanding how individuals inherit most of their traits from their parents; we then successively layer on first variation and then selection to explain how individuals in a lineage may differ from their predecessors, building up to answers to the quantificational questions from the bottom up by first answering the predicative question. To put it another way, the quantitative properties of populations are shown to be a consequence of how individuals get their traits and get to pass them on. Hanson concludes that evolutionary theory is not predicative only by ignoring the inheritance and variation part of what she calls the back-story, which is where we see an explanation of where each individual's traits come from.
When you write "and as she shows in her paper, that quantificational process isn't enough to undercut epistemic warrant *even if* you take it to be predicative", I assume you are referring to section 4 of the paper (in sections 2 and 3 she does not stray from the claim in the passage which started this discussion.)
The crux of the matter is in this passage from page 33: "What’s important to note is that [Neander's] argument doesn’t deny that the explanation is a quantificational one - it just claims that it also counts as a predicative one... But the arguments I’ve given in Section 2 are arguments for thinking that explanations of the frequency of our moral beliefs can’t have any bearing on the epistemic status of these beliefs. And this remains the case whether or not such explanations count as also explaining the moral beliefs of individuals."
The final sentence of this statement is a misrepresentation of her own arguments in section 2 (and 1), where her arguments explicitly target the quantificational reading and only that - to the extent that sometimes she says forthrightly that the argument would go through if there were a predicative reading.
As it happens, I think Neander gets things the wrong way round by characterizing the predicative explanation as derivative of the quantificational one, but regardless, Hanson accepts that it also counts as a predicative one. Now when an explanation answers both the quantificational and the predicative question, then it certainly does answer the predicative question! And as an explanation answering the predicative question, it is applicable in any situation calling for one, regardless of whether it also has a quantificational one - it is not as if the latter somehow contaminates, neuters or invalidates the former.
Note that in all these discussions, I am taking issue with the argument Hanson is using here. Whether some other argument could be successful is a separate matter.
It's been a while since I've read Street or Joyce on this, so I'm fuzzy on what moves they make. If I bring in evolutionary explanations, it's not on the basis that evolved faculties couldn't be truth-tracking - I don't find that version of EDA very compelling. It's more:
1. Mind-independent moral reality is being posited to explain something (moral intuitions, deliberative indispensablity, etc)
2. But we have explanations for these phenomena that don't require mind-independent moral reality, so
3. Mind-independent moral reality is extraneous to our best explanation of what we experience.
(3) would need more support, of course, but that's the argumentative strategy.
These considerations couldn't *disprove* realism. The best we can do on *any* metaphysical thesis is to get it to the point where it has the epistemic status of Sagan's dragon or Russell's teapot.
Yeah, it depends on who you're reading. Street is very clearly pushing the idea that evolution undermines moral knowledge directly, and uses that to support her constructivism. Whereas Joyce focuses on both and probably emphasizes the explanatory aspect more. What's odd is that the undermining/debunking aspect is way, way more emphasized in the scholarly literature, whereas the explanatory aspect is way more common online. Not sure why that is, but I also tend to find the latter more interesting too.
But as for the argument you've actually laid out, it's perfectly legitimate as it stands. It's just that P2 is not a premise that can be defended by reference to evolution. That's the category error I'm talking about here: The explanation for our moral judgments that makes realism superfluous is going to need to be a psychological explanation for *every particular individual,* not just a quantificational explanation for why those individuals exist and not others. Everyone, realist or not, is going to accept the quantificational explanation. The debate is about what best explains the judgments predicatively, and there natural selection isn't going to matter. So all I'm criticizing in this particular piece is the idea that evolutionary pressures somehow "replace" the realist view, as opposed to being orthogonal to it.
Doesn't that seem like an *insane* asymmetry in explanatory burden? The realist says that mind-independent moral reality exists because it seems obvious to them that it does. Are you saying that an evolutionary explanation, in order to be a better explanation than realism, needs to be able to *predict every individual's moral judgements*? Has any theory of anything ever met that kind of burden? I think we have a decent idea how the weather works, but I doubt we'll ever be able to predict every gust of wind.
Oh no haha, I'm definitely not saying that anti-realists in particular have to give an explanation for every individual person's actual moral beliefs - that would be an unreasonable burden for anyone, realist or anti-realist, to meet. What I mean is just that both parties have to have their own general explanation for how *an* individual person's moral judgments work, as opposed to having an explanation for why it is that individuals who work that way exist and not others who work a different way. And that sort of explanation is going to be a psychological one that talks about how people who exist right now do, in fact, make moral judgments. Anti-realists have arguments for that, of course, so it's not like I'm saying it's a big hole in their theory or something. I'm just saying that's the proper place to adjudicate the two theories, not in the realm of evolutionary science.
Oh, okay. That's a lot more reasonable.
I think I broadly agree. I'd look to empirical moral psychology to understand moral experience, although the evolutionary models and game theory stuff might shed some light on things as well. Like anything complex, the right overall analysis should be able to make sense of different lines of evidence. I expect you'd agree with all that.
Yes, absolutely - and to be clear, I do recognize that evolutionary psychology provides a boost to the plausibility of anti-realism in an important sense, since it gives a plausible story for why a moral sense would be selected for in the absence of any moral facts. I'm just saying that it doesn't *undermine* the realist framework in any internal sense.
Thank you for this really interesting take I wasn't aware of this argument. Thinking about it off the cuff, I'm not sure it's completely convincing, although I think it does change one's reasonable credence level. Possible objections and limitations seem to be:
- if Street etc can offer a reason to think that natural selective pressures would be actively tracking certain moral mistruths, as it were, then that would seem to give reason to doubt moral realism. One such explanation could be that _all_ moral intuitions are evolved fictions for their uses as motivators of human actions that benefit survival and procreation. Another motivator for this challenge could be to point to moral intuitions that are widely shared but are in tension with each other in what they point to (e.g., arguably rights to autonomy and duties to respect life, particularly of close kin).
- if one can make a case that moral intuitions really do derive _from_ evolutionary processes, then that would seem to cut off at the root that moral intuitions have merely _survived_ evolutionary pressures. Such a case might also make appeal to the active _benefits_ of many moral intuitions for survival and procreation to raise the probability that they really are the _result_ of evolution, as a opposed to things that may have been otherwise written on our hearts and then merely withstood the test of natural selection.
I think these provide substantive challenges to Hanson's way around the problem (based on your summary - I will go read the Hanson paper now!)
You're definitely right that Street could make a further claim that natural selection actively selects *away* from moral facts - but the problem is that she can't do that without relying on claims about what the moral facts actually are, and that sort of knowledge is necessarily ruled out by her complaint. She can't argue that we have no way of knowing what the moral truths are, *and* that natural selection would lead away from them.
Otherwise, I agree that our moral judgments didn't just appear fully formed one day out of the blue - they developed slowly over time, and natural selection played a role in that process. But that's also true of literally every perceptive or rational faculty, so it isn't an issue by itself; so long as "every step of the way," some truth-apt capacity was growing, then the reliability is preserved. And that's generally the story we give for other things, right? Like, it's not as though some blind animal gave birth to a baby with fully-formed eyes one day. Those visual faculties developed slowly over time under evolutionary pressures. But that's no reason to think that "natural selection explains what you see" in any problematic sense. Hanson's paper does address this point in section four in more detail, I definitely glossed over it a bit to shorten things up.
I'm not sure that she would have to know what the moral truths are, if there are any, in order to claim that it wouldn't be truth-tracking. For one, the burden of proof to show that should probably be the other way round; there seems to be no reason to think it would be truth-tracking _unless_ we suppose that the moral is also that which leads to good evolutionary outcomes (which seems to be the bigger presupposition). Further, I think her case is more to say that, since moral intuitions seem to be caused by evolution, we don't need to appeal to moral realist theories to explain why we have such intuitions.
Sure, you can say it isn't truth-tracking no matter what - but if you want to go the extra mile and say that it would actively lead us *against* the moral truths, then you'd need to have some idea of what they were in the first place. So at best, the explanation just makes anti-realism more plausible, but it doesn't make a problem for realists internally.
Well if there's no reason to suppose it's truth tracking, then it undercuts the use of our moral intuitions as evidence for moral realism. Sure, one could be a moral realist for other reasons. But it does make moral intuitions suspect evidence.
This is interesting! My initial objection to it is that even if selection didn’t give us our moral judgments predicatively but simply explains why people who make different judgments reproduce at different rates, it still seems that it would be unlikely that the intuitions that are preserved by the process are the truth-tracking ones. Even if a random mutation did create such an ability to intuit moral facts, if the process that determines whether or not that ability is passed on is insensitive to moral facts, then the chance that the intuitions that survived are the correct ones seems low. Assuming that the mutation initially only occurred in a small part of the population and became more common through selection, there is little reason to expect that the correct moral intuitions would become frequent.
Compare to a modified Trump case: suppose that we want to explain why most journalists covering his press conferences have the views they do, and it turns out that journalists who had different views have been filtered out, and this process has been happening for years, so it affects the people who get hired by news organisations, etc. Whether or not this predicatively explains why they have the views they do, the selection process over time being insensitive to truth does make the journalists less reliable.
You write "assuming that the mutation initially only occurred in a small part of the population and became more common through selection, there is little reason to expect that the [trait it enhances] would become frequent."
Note that the substitution I have made for the phrase "correct moral intuitions" yields a sentence which is the antithesis of a key insight of evolutionary theory; additionally substituting "strong" for "little" yields that insight itself.
Of course, the trait has to enhance reproductive success (note that this is not the same as truth-tracking, which I think is a question-begging stipulation.) Here's one thought which pushes me towards some form of EDA: If you were to take the violent (by human standards) chimpanzee culture and boost their power to reason about how to dominate and exploit that culture to human levels, I can easily imagine that species self-destructing in internecine warfare. It might, I suppose, be necessary for some form of ethical disposition to evolve in correlated progression with intelligence.
But you have to see how this begs the question against the realist, right? If a realist believes that moral judgments come from some contact with moral truths, then they wouldn't accept that chimpanzees becoming significantly more intelligent or rational would leave their judgments unchanged. And again, you might not find that picture of moral knowledge particularly plausible - but that's a reason to criticize the theory directly, and the evolutionary story itself has no relevance.
Could you explain what you mean by saying it begs the question *against the realist?* I'm outlining an alternative to the realist hypothesis of judgement development that you have outlined in your reply here, one which does not require accepting the premises behind the realist one, but that is the nature of alternative hypotheses, and it certainly does not count as question-begging - that would be like saying that Copernicus was begging the question against Ptolemy, would it not?
(If you could also explain why I can sometimes add emphasis to my Substack replies but sometimes not, that would also help me!)
Ah, I misspoke. I meant to say that assuming that the specific mutation that enabled correct moral intuition only occurs in a small part of the population initially and only becomes more common through selection, there is little reason to think that the one that becomes most frequent through selection would be the one that enables correct moral beliefs.
The point is that even if selection doesn’t explain my moral judgments, it still makes it unlikely that they are made by a reliable faculty, because there is no reason for selection to have favoured the reliable one. This is a move away from the version of the argument that is about the existence of a rival explanation of my moral beliefs that makes no mention of moral facts, but I think it still works.
The best reply to this is probably to deny that there is some dedicated moral faculty and instead insist that moral judgment comes from a generic faculty for detecting reasons, whose correct operation is adaptive in other cases, such as epistemic or prudential reasons. But I doubt that works.
I get what you're saying, but I think this is still showing a little bit of the bias I talked about in the piece: You should only be confident that natural selection undermines the chances of the views being accurate if you *already* assume those views aren't truth-tracking. But if you have no insight into the truth of the belief, then just finding out that it was selected for shouldn't do anything to shake your confidence in it directly. So in the Trump case, if you found out that all the reporters had been filtered out in such a way, on down into new generations and so on, you still have no reason to think they aren't trustworthy until you learn how it is that they do journalism - if it turns out they're all great journalists, then the fact that great journalism was selected for shouldn't undermine it. Of course that's not going to be the case with Trump supporting journalists, haha, but we don't have the same reason here to think natural selection is actually selecting *against* true belief.
I guess I think that if selection is based on something tangential to truth, and the true belief was initially the result of a faculty that, even if reliable, was the product of a random mutation, then the chance of that faculty becoming frequent is low - I’m not assuming the individual faculty isn’t truth-tracking to begin with, only that those faculties won’t become more frequent over the generations. Maybe the objection is that if we start out by trusting our faculties, then we can appeal to our intuitions to justify the claim that true moral beliefs are adaptive, using specific cases. But I think this would be like appealing to the fact that we know we exist in reply to fine-tuning. We can ask how likely it would be that our moral beliefs would be adaptive if they reflected moral facts vs. if they didn’t, and since there is no reason independent of our specific intuitions for expecting a harmony between moral facts and what helps us spread our genes, we should update against realism on the evidence that our intuitions do help. This doesn’t require suspending our actual moral beliefs. I’m not sure that’s what you were getting at, but I hope it’s relevant.
Hey, this is great! Really interesting. I am just now getting into moral realism/anti-realism because I am taking illusionism much more seriously; when I was a qualia realist, "pain = badness" seemed good enough, but now I have lots more to think about. I'm probably going to go through your whole archive on moral realism, anything in particular you think is strongest or most interesting from an illusionist perspective?
I appreciate it! I haven't written anything on the intersection of illusionism and ethics, but I do have a draft I've been working on and might publish soon. But if you're interested in ethical naturalism generally, I think this might be a good intro: https://bothsidesbrigade.substack.com/p/human-rationality-acting-well-and and I also wrote a three-part series a while back that ends with this one: https://bothsidesbrigade.substack.com/p/moral-realism-turns-ethics-into-a If you ever have questions about it, always feel free to tag me! A lot of people think illusionism and moral realism are an odd fit, but I actually think they share some similar sorts of commitments.
>a more robust predicative reading — one that frames evolutionary pressures as literally “changing our minds” from one judgment to another — is pretty clearly off the table
It is *mutation* and natural selection, and the former does quite literally change your mind. In the everyday context of the intuition pump, there are limitations on how far you can go in either direction - there aren't enough humans that you can select a crowd with *any* beliefs, nor can you take a person and convince them of anything. That's what makes the distinction make sense, and it's not really there for evolution. The "predicative explanation" for some belief very well could just be "Because you're hardwired to believe this specific thing", and it seems like at least some moral intuitions work this way too (e.g. the Westermarck effect). So I think those robust readings are on the table.
That said, I agree that the *generic* evolutionary debunking in some sense doesn't add anything over the knowledge problem - it's a way of forcing the issue by making it more concrete. You *should* have already understood everything you'll get out of the EDA from the knowledge problem alone. Unfortunately it's not guaranteed that you *did*, so it's not correct to think "I was fine with the knowledge problem, so I don't need to worry about the EDA".
If you haven't already, you should check out section four of her actual paper - I can see what you're saying, but I think she shows pretty convincingly that it can't work that way. (And as far as I can tell, most evolutionary biologists tend to agree.) But otherwise, I agree that it does help force the issue, and raises the stakes of failure for the realist. I just think there are a lot of people who think there's something uniquely problematic about the evolutionary context, and that things would be fine without it.
I had a look at the paper and I'm not convinced. If we take her response in section 4 seriously, it implies that some facts about individuals *can't* be explained predicatively. Which facts exactly depends on the details of the account, but "that you have the genes you have" would commonly be among them. And other facts for which those genes themselves do (a lot of relatively direct) explanatory work, like the hard-wired belief, are in a kind of twilight zone.
Again, this division into quantificational and predicative makes sense if the existence of the people in the pool you're selecting from is already explained in some other way - then you avoid problems like the ones in the previous paragraph.
Really, reading the paper I now put even less stock in the argument, because of the heavy use she makes of "whether a counterfactual creature counts as being you". (Notice again that this is not needed for the Trump attendees, and that's closely related to their independent existence.) I lean toward anti-realism about those sorts of facts, so I don't accept her account, on which they make a difference to knowledge.
I find the argument a bit hard to follow. Is the argument supposed to be something like: "humans evolved eyes for survival reasons, yet our eyes give us true knowledge of the world," and that in a similar way we may have evolved some ability to "sense the moral facts" for survival reasons but with such an ability we can now have moral knowledge?
That’s definitely somewhere someone could take the argument, and I made a similar sort of defense here: https://bothsidesbrigade.substack.com/p/why-evolutionary-challenges-to-moral But Hanson’s point is more general, just that we need to focus on the reliability of the faculty itself and not its evolutionary origins in particular.
It is known that the ichneumon wasp will paralyse caterpillars and lay its eggs inside so that the caterpillar is eaten alive while paralysed. If there was a giant ichneumon wasp that did the same thing to human beings, we might call it "evil." But if we could communicate with it, it is obvious that nothing we could say would convince the giant wasps to stop. To the wasp, laying its eggs and getting along with other wasps would be the most moral thing. The core intuition of evolutionary debunking is that from a God's eye view the situation between humans and the giant wasps is completely symmetrical. And you could imagine arbitrarily intelligent beings with arbitrarily different moral values. I personally don't find this intuition challenged.
Why would it be obvious that nothing we could say could convince the giant wasps to stop? Human beings "naturally" prey on other animals in all sorts of horrible ways too, but it's also the case that many humans, through a straightforward application of basic moral principles, come to the conclusion that harming animals unnecessarily is wrong. Why would we assume that a sufficiently intelligent non-human species wouldn't also recognize those basic moral principles as true, especially upon our complaints about their harming us?
We feel sympathy, for e.g. mistreated animals, because of certain capacities for empathy that we evolved for interfacing with other humans. But that is a purely contingent fact. That is the way pro-social behaviours happen to have been hard wired into us. There’s no logical reason it had to be done that way. It could be that pro-sociality among the giant wasps arose from identifying a particular pattern on their exoskeletons and if you lack that pattern it has no pro-social feelings towards you at all. Edit: Btw if moral realists have a scientific story that says that pro-sociality can only, as a matter of fact, evolve in one particular way then that would put some meat on the bones of the claim that morality is objective. But if moral realists don’t make that scientific claim then I’m not really sure what they’re saying. (If this was true it would be very important for AI alignment debates because it would mean we don’t need to worry about unfriendly AIs since sufficiently intelligent beings would automatically arrive at the right moral views.)
I guess I'm still not clear on why you would assume our empathy for non-human animals doesn't have the sort of rational component that any sufficiently intelligent species of animal would also develop. Certainly, when *I* ask myself why I don't want to see other animals suffer, it's because, at least internally, I take myself to be recognizing that suffering is bad and that I ought to oppose it when I can; my empathetic response is derivative of that objective judgment, not its basis. Of course, you can argue that I'm deceived when I say this, and that I'm systematically misrepresenting my own internal reasoning process. But that's a claim you'd need actual evidence for, it certainly isn't the default assumption.
I guess I just find these metaphysical debates very hard to grasp. If your position is that beings with superhuman perception and logical powers would necessarily agree (or could be made to agree through argument) with suffering being bad, or vice versa, then that is an interesting substantive claim. But if no such scientific claim is being made, then I don’t even know what is being claimed by saying that suffering is objectively bad. It just seems to be a verbal garnish to our value judgements. Like if a theist says that there is objective morality because god exists, that gives content to the notion of objectivity in this context.
Very interesting! I must admit that it feels somewhat like a trick, though I'm not sure why. But consider the following case:
You live in a totalitarian state, where the government really wants people to believe that Baby Shark is good. For this reason, they execute anyone who doesn't like the song (though we can imagine that no one knows about this, since fear would introduce a predicative element). Not only that, but it turns out that the children of parents who like Baby Shark are themselves more likely to like the song. Now you find yourself in the state, believing Baby Shark to be good. After a while you find out the dirty secret!
It seems obvious to me that you should now not trust your judgement at all! You wouldn't have existed if your ancestors didn't like Baby Shark, and the fact that they did very much increases the likelihood that you do. Even if you have methods for judging song quality, you should be skeptical of those, since had your ancestors been inclined to use other methods, you would not have existed.
Maybe this case is too similar to the moral one, but I had a hard time coming up with an example where the predicative and quantificational aspects come apart.
So, I think this is the exact kind of scenario to consider if you really want to test the argument, but I also think it’s a mistake to choose an aesthetic preference to center it on, since that already biases things in an anti-realist direction. Instead, it would be better to imagine something like: Let’s say the Democrats take power in 2028 and carry out their secret plan to exterminate all Republicans. Maybe they even do a few rounds of it over the generations, just to be safe. If you’re a Democrat in 2100 (or whenever), and you learn about this process, should that make you doubt that the Democratic Party’s views are true, or that you have a reason to believe them?
In my mind, the answer is obviously no! What you should do instead is just ask, “Well, were those Democratic voters back in 2028 correct? Did they have a good way to understand the political realities and reason to reliable conclusions?” Because, if so, the fact that their correctly-produced views were maintained by the purging process shouldn’t matter at all. You should maybe consider yourself “lucky,” in some sense, that correct views were preserved, but (depending on how you stipulate the plot) it might not even be particularly lucky at all, since there might be a clear reason why the Democrats would select for correct views even if that wasn’t their deciding criteria.
So if you think roughly the same thing in the moral case — that we really have developed the ability to gain moral knowledge — then the fact that the people who have that ability have been kept around shouldn’t be upsetting, especially if we have a good theory (and we do) for why broadly accurate moral beliefs would be preserved by natural selection. But even without that story, if we have separate good reason to believe the beliefs are true, then their selection shouldn’t reduce our confidence in them. Things don’t become less likely to be true, just because they’re also useful.
Wow. Quite involved. Not sure of the motivation behind this piece, but the content is interesting enough. Assigning biology to a seemingly non-biological process. Perhaps we’re just using biological terms. Though unfamiliar, I am nonetheless interested. However, if, as I surmise, the intent was to understand “Trump” voters, or why Trump even exists as a political phenomenon, or to take shots at the same, then… it is almost delusional and narcissistic in concept. In fact, just this attempt at explanation is part and parcel of the reason he exists. The observer effect, as it were. How those who cannot understand the “Trump Effect” amaze me. While I am not a Trump “voter” and am not part of the “MAGA” movement… I come from that space, those people are “my people,” and I understand them. While I have paid a price for my opinions… their intentions, objectives, or “evolutionary morality” are not a mystery.
nice presentation, loved getting stuck in the loop
Jesus, BSB, this is literally the Ken Ham style intelligent design argument.
Certain Christians say that while natural selection can explain why animals with eyes would eventually outcompete animals without eyes, it cannot explain where the first animals with eyes came from, therefore God must have done it.
You're just taking the same incredulity from anatomy to psychology. If you understand how an eye could evolve, you should understand how moral sensibilities could evolve.
Now, of course, that a group was selected to contain people with certain opinions is not direct evidence against the opinion. A conference of economists will tend to be full of people who believe basic economic principles, because people who don't either don't become economists or don't get invited. But the basic beliefs of economists are in fact correct.
However, this looks to me like another failure to track offense and defense. The classic realist line is to argue something like:
1. There are some things that seem wrong to almost all humans.
2. This consensus is surprising and requires explanation.
3. For lack of any other explanation we must accept that some kind of objective moral facts explain the consensus.
The EDA intervenes to offer an alternative explanation and is meant to activate a preference for parsimony. It is not that something we were selected to believe *couldn't* also be objectively true, just that we don't *need to* invoke objective truth to account for those thoughts.
This has nothing to do with incredulity or skepticism towards evolution? No one is claiming that our moral sense "didn't evolve" - I'm a naturalist, so I think literally every aspect of human beings is a product of evolution. The claim is just that a capacity for some judgment being an evolved capacity doesn't automatically undercut its reliability. And while you're right that evolutionary explanations can help raise the plausibility of anti-realism - I talk about this in the last piece I wrote - the EDA is very clearly presented as something that undermines realism itself. That's what the "debunking" means! And my point is just that it doesn't do this.
Furthermore, I'm not sure I've actually ever encountered anyone who took the strong debunker position you've described here. Sometimes people talk as if they do, but usually in contexts where I tend to assume they are simplifying, exaggerating, skipping steps, or getting confused. The strong debunking position is too foolish to be worth the amount of effort put in here.
If someone actually argues:
1. All evolved beliefs are false.
2. (Realist) Moral beliefs evolved.
3. (Realist) Moral beliefs are false.
Just point out that many other opinions and perceptual faculties exist which are not false. We evolved to think of external physical objects as real, and indeed they are, so clearly some evolved beliefs may happen to be true.
Your introduction frames your piece as a rejection of a very broad genre of evolution-related arguments. It explicitly covers not just arguments about moral knowledge being impossible or improbable, but also arguments about "parsimony" and "explanatory power" for moral intuitions -- your own terms.
Then you go on to talk about the two types of explanations -- explaining how a group got its members vs. how the individual members got their traits -- and basically say the force of evolutionary arguments against moral realism depends on equivocation.
But that's false, at least if we include the arguments about explanatory power and parsimony. All we need to show is that evolution is sufficient to explain why the people who are currently around happen to be the ones with moral sentiments, for the argument that realism is *unparsimonious* to go through.
Interesting take, however I think it falls apart on the grounds that it's far too anthropocentric. I'm starting my rebuttal to this by referencing some things from your first work in the series.
> A reasonable standard to have for any explanation is that (absent some irrelevant complications) the event being explained wouldn’t have taken place in the absence of whatever it is you’re relying on to explain it; in other words, if X truly explains Y, then it should generally be the case that, in the closest possible world in which X is not the case, Y is also not the case. Explanations that struggle to meet this requirement often strike us an intuitively absurd and unreliable. Saying that water on the floor is explained by a broken sink, for example, but that the water would be there even if the sink wasn’t broken, is just failing to explain the water on the floor at all.
> But if, on the other hand, we try to explain the development of our aversion with the fact that rotten meat is pathogenic, the counterfactual test is now successful. In the closest possible world in which that pathogenic quality is absent — a world in which rotten meat posed no major risk of disease — evolutionary pressures would not have led us to develop the aversion in question. For this reason, the fact that rotten meat is pathogenic functions as a legitimately better explanation for our aversion than all the first-order facts that make up its supervenient base. Or, in other words: According to our best explanations, it really is the case that we developed the aversion we did because of some fact about what is and isn’t pathogenic.
As I understand it, this passage from your last post has quite a lot to do with the underlying thesis of this post. E.g., EDA fails because moral realists can cede that evolution did result in our moral intuitions, it's just that moral realism creates evolutionary pressures in of itself.
My question is thus: Is the existence of "real moral truths" contingent upon the existence of Homo sapiens? Did moral truths exist when the dinosaurs were roaming the land?
After all, if the moral realist argument is that moral truths are objective, then why should they only operate on humans? We can see, for example, that evolutionary pressures have resulted in other species mirroring some basic aspects of human morality (e.g. self-sacrifice for the sake of one's kin, reciprocity towards strangers, and so on). Does this convergent evolution speak to some sort of higher moral truth?
If it does, then what would that say about those aspects of human intuition that are less widely spread across the animal kingdom? Humans have an aversion to raw meat as well as rotten meat, but most animals are omnivores and will eat raw meat. Some animals preferentially eat rotten meat. If "pro-social" intuitions are more common across the animal kingdom than "aversion to raw meat", would that mean that the morals behind those pro-social intuitions are "more real" than the ones behind the aversion to raw meat?
The evolutionary debunking argument proposes answers to the questions: What are our moral intuitions? What are they for? Why would they be unreliable when our factual intuitions are reliable? Why would we believe in them so strongly if they’re wrong?
Without understanding evolution, the most obvious answers are: “They’re intuitions about moral facts. They’re for knowing moral facts. They’re not unreliable.” In other words, moral realism is the default.
The evolutionary debunking argument does not directly refute that “we as individuals gain moral knowledge”. It merely creates a highly-plausible competing theory. And that makes all the difference in the world.
… And now I am going to reply to my own comment with the much-longer response I originally wrote. Read it if you want, or don’t :)
And here is what I originally wrote. It's my version of the evolutionary debunking argument, focused on refuting scenarios where our moral intuitions ended up becoming truth-tracking despite having evolved. Disclaimer: I have only read BSB's post, not Hanson's original argument.
* First premise: altruism is a trait that appears to be ‘hardcoded’ and was actively selected for by natural selection.
For comparison, the set of principles guiding my behavior includes both “altruism is good” and “the Pythagorean theorem is correct”, but natural selection did not select for belief in the Pythagorean theorem. It only selected for more general abilities like reasoning and teaching/learning that indirectly resulted in me believing in it. Moreover, those general abilities appear to be largely optimized for truth-seeking, so I can be reasonably confident that my belief really is true. (Though, it has been hypothesized that evolution prioritized social conformity over strict truth-seeking.) However, altruism is such a basic emotion that it doesn’t seem it could be derived from more general faculties.
* Second premise: hardcoded traits like altruism are the axioms we use to make moral decisions.
We use our general reasoning abilities to build complicated logical frameworks on top of those axioms. For the same reasons as with the Pythagorean theorem, I can be at least somewhat confident that it’s true the frameworks follow from the axioms (albeit less confident because the axioms are vaguer and the proofs less rigorous). But reasoning can’t justify the axioms themselves.
To be fair, reasoning can’t justify axioms in mathematics either. These days there are many sets of mathematical axioms to choose from. But pure mathematics gets around this by only caring which axioms produce interesting results rather than which ones are ‘true’. And applied mathematics has physical observations to provide evidence for whether the chosen axioms are appropriate. Neither of those approaches works for morality.
Also note that some of those sets of axioms lead to geometries where the Pythagorean Theorem is false. I’d argue that this makes for a cautionary tale about treating results derived from intuition as if they’re necessary. On the other hand, mathematics does have an inherent structure where there are only a few simple and consistent systems of geometry, and one of those systems follows the Pythagorean theorem. In that sense, the Pythagorean theorem is quite robust to variations in axioms. (I’ll argue later that it matters whether morality is robust in this way.)
* Third premise: The simplest explanation for why altruism was selected for does not involve truth.
We can split altruism into two categories.
First is altruism towards kin, e.g. a parent’s attitude towards their child. This is really easy to explain evolutionarily – it obviously helps pass on genes – and it’s present in a wider range of species.
Second is more general altruism towards any other creature of the same or different species. This is less common and its origin in humans is more controversial. Evolutionary psychology has competing theories for that origin, such as group selection, kin selection, reciprocity, and cultural selection. To the extent the origin is cultural, it may have been influenced by general truth-seeking abilities (after all, the Pythagorean theorem is also culturally transmitted). On the other hand, some nonhuman species form social structures which include various forms of non-kin altruism. These species can perform some semblance of cultural transmission, but nothing very robust, and they lack the ability to intentionally drive culture based on reasoning.
For our purposes, I’m not sure the difference between kin and general altruism really matters. Even if general altruism is the objectively correct generalization of kin altruism (and even if there is some objectively correct reason it *should* be generalized), that would just leave kin altruism as an unjustified axiom.
* Conclusion: the simplest explanation for why we have the moral rules we do does not involve truth (at least not for its most basic axioms).
I say “does not involve truth”; that doesn’t mean it’s uncorrelated with truth.
Suppose that humans’ understanding of morality is broadly correct. Then, by assumption, individual survival and societal growth really are morally good outcomes. Because the human sense of morality was optimized for those two things, it’s not surprising that it would arrive somewhere near true morality. And once we got near the truth, it’s not too surprising that refining our sense of morality by applying general reasoning principles would get us closer to the truth. Maybe not all the way to the truth, but of course morality is not a solved problem. Therefore, evolution does not disprove moral realism.
But that was a strong starting assumption! Suppose instead we only assume that there is _some_ true morality, not that humans necessarily understand it at all. Then it becomes quite a coincidence that true morality tracks evolutionary fitness so well.
Suppose that our evolutionary starting point was completely off. The aforementioned refinement might have reliably moved us in the direction of the truth, arguably (depending on your priors for whether true morality is consistent or simple in the first place). But, empirically, refinement doesn't move our moral beliefs very far; we still depend deeply on our emotions to judge morality. So our endpoint would likely still be completely off. Yet we would likely still _believe_ intuitively that we had found the truth – or at least, I don’t see what would be different in such a world that would cause us to not believe that.
Therefore we should not give any credence to our intuitions. Therefore we should not believe that our sense of morality is anywhere near correct. Therefore it seems superfluous for the concept of true morality to even exist.
On the other hand, if true morality does exist, _and_ if we make the shaky assumption that true morality is consistent and simple, then I do see two potential sources of evidence for whether our sense of morality matches true morality.
The first source of evidence is that we can simply judge how consistent and simple our moral intuitions are. Unfortunately, I think our moral intuitions tend to be rather inconsistent and sensitive to seemingly irrelevant factors, which is evidence against them being correct. YMMV.
The second source of evidence is more promising. Our intuitions are consistent and simple to _some_ extent. If morality is like geometry, then there are only a few possible systems that are consistent and simple; any set of starting axioms must either be inconsistent, be superfluously complex, or lead to one of those systems. If so, then since by assumption true morality must be one of those systems, it would be much less surprising if we coincidentally landed on the right one. We can evaluate this by thinking about the range of alternative possible systems and how meaningfully different they are. YMMV on this too.
I like to describe human natural selection with fewer words. A perfect man and woman fell from grace (rejecting truth) wanting to go it their way (mini-gods). No human actually comes close to the intelligence it really takes to create even a lone rose or to station the sun in the sky. So natural selection became, to me, another way to say fallen man killed (murdered) other men for both position (pride) and to the winner goes the spoils (greed). Man had to exit Eden (a sinless environment where natural selection doesn’t exist) to enter a world that he now could rule mostly by brawn and without any rules. Nature paralleled man’s fallen status where the strongest mostly used, abused and killed and not only to survive. Over time it dawned on some humans there existed something which actually topped themselves and they became mere humans, again, as a result of hard living coupled with this inspired truth. (Humility meets its Maker). Man cried ‘Get us back to Eden’ because evil liars (dictators over man) really rules this world (fallen angels). Maker feels compassion for fallen man, who was tempted by an evil lie over truth, so Maker interacts with man. One day Maker decides man is so lost He came to earth as a man, Himself, to save mankind.
Those who heard Him became the selected. Those still wanting to go it their way (mini-gods) killed the Man.
The Maker rose the Man to life again because, in truth, God and not man governs life. And man’s natural selection is just man trying to explain the resulting predicament we got our mini-selves into.
May you have a blessed Easter for He has risen.
Or to put it a bit differently, in the Trump case, our selective explanation is compatible with their having reliable fundamental beliefs about politics and ethics, and it only targets quite high-level beliefs, and it also involves only one iteration of selection. If you have many more iterations, acting on more fundamental beliefs, then the debunking argument looks more plausible. Why would this mechanism filter out the bad views and leave us with the good ones if it's not sensitive to truth?