Discussion about this post

Travis Talks:

A few comments:

I would affirm the proposition “I disapprove of severe suffering”. I would affirm it because, reliably, every single time I’ve been made aware of an instance of severe suffering, I’ve had a negative attitude toward it.

I suppose your point of contention would be that this inference isn’t actually justified: just because every instance of severe suffering I’m aware of has been out of accord with my values, that doesn’t mean that all the instances of severe suffering I don’t know about are also out of accord with my values.

I grant this isn’t a conclusion that I can be *completely* sure of - but I think I can be very, very, very sure of it. In my recent article on speciesism, I say that my credence in speciesism being wrong is approximately 100% - and that the only reason I even include the “approximately” modifier is to account for the possibility that I’m radically mistaken about my attitudes.

I suppose you would think this judgment of mine is in serious epistemic error: there’s no way I can be confident that species isn’t something I care about, let alone confident to a degree approaching certainty. After all, there are infinitely many logically possible species - and it only has to be the case that I care about the interests of a single one of them even slightly less (in virtue of their species) in order to render the claim that speciesism is wrong false.

Be that as it may, I still find it tremendously and extraordinarily implausible that I would ever care about such a characteristic.

It seems you would say I could not even affirm that I disapprove of brutally and slowly torturing rabbits to death solely for the sake of sadistic pleasure - or if I could, I at least couldn’t be too confident in it.

After all, there are an infinite number of logically possible scenarios where this action occurs. Maybe I’m just exposing a serious lack of epistemic humility here - but it just seems utterly self-evident to me that I would in fact disapprove of all those instances.

I don’t think the plausibility of this is at all like me disliking all polka songs or something. Some ethical stances are more akin to that - there are cases where I think exactly what our moral values are is opaque to us - but there are other cases where they’re very clear-cut. This, I think, would be one of the latter.

You note that disapproving of telling lies doesn’t logically entail it being the case that you disapprove of getting your little brother to tell lies. This is true.

But it’s just true in general that the proposition “It’s wrong to lie” doesn’t logically entail the proposition “It’s wrong to persuade your younger brother to lie” - even if you give a realist analysis to the propositions.

There is nothing *logically impossible* about it being the case that it’s objectively impermissible to lie yet stance-independently right to convince your brother to lie. Likewise, it could well be the case that it’s objectively wrong to torture someone for $5 but fine to do it for $10. It could be even weirder than that - it may be that it’s obligatory to torture someone for $5 but an egregious act of evil to torture someone for $10.

It could be that today utilitarianism is true, but tomorrow deontology will be true, and the next day virtue ethics will be true. None of this is prohibited *logically*. Of course, we can say it’s very implausible, but it’s not *impossible* (logically).

So I think these sorts of conditionals shouldn’t be interpreted as matters of logical entailment - that if P is true, Q logically follows from it - but rather about what’s most likely the case. If P is true, Q is probably true as well.

Charles Egan:

I take my principles and commitments to have entailments, so I agree that this is something a plausible metaethical theory would have to explain.

I'm sympathetic to Finlay's end-relational theory--normative statements are indexed to some goal or framework and truth-apt relative to that goal/framework. Our relationship to these base goals or ultimate ends is non-cognitive--we are primitively compelled to accept them.

Under this theory, you could ask for reasons why stealing is wrong to determine some unstated goal--it causes suffering, or treats people as mere means, or whatever. You could translate any "ought" statement to have an unstated "in order to" component. "In order to not treat people as a mere means, you ought not to lie." Then you could rationally assess whether that goal also entails not getting your little brother to lie.

I'm not sure if Finlay would call himself a subjectivist; I believe he's used both "contextualist" and "quasi-expressivist" to describe the view. Whatever the right label, this appears to be a view which can handle this objection while eschewing realist metaphysics.
