The weird world of the kind-of-deontologist
Your moral system is only as good as its enforcement mechanism
Sometimes I write something I’m not as happy with as I could be. This article is an example of that. I like it, but I’m not sure how much confidence I have that I won’t change my thinking on several of the points here within the next few weeks or (even more likely) over the next few years.
With that said, most of the time these articles aren’t absolute garbage either - there’s almost always something I like about them. I like how this article made me think, for instance. And since I have enough time wrapped up in the article that it’s either print it or fail to put up any content at all for the week, the balance tipped in favor of publishing.
Still, this isn’t my strongest work. You have full permission to dismiss it out of hand if that suits you.
One of the few real advantages to a general lack of education is that you end up understanding a lot of things at “street-level”, essentially understanding terms and concepts as they tend to be used rather than how they are strictly defined in books. As ever, CRT is a great example of this - I’m aware of the academic definition of the term, but my understanding of it is as it’s used by the bulk of people who reference it.
There are both advantages and disadvantages to seeing the world this way, and they tend to occur at different points in an argument. Academic definitions are specific and defensible, while street-level definitions are broad, vague, and incredibly useful.
When someone tells you that cigarettes are as addictive as heroin and more addictive than alcohol, they want you to hear something as broad as “worse than either of those” - that the addiction is stronger, more life-altering, and will lead to similar or worse outcomes. It’s only later, on pushback, that you get the more academic sense of addictiveness (how quickly one gets addicted, as opposed to how strongly) that they can easily defend.
This distinction might not seem that significant, but it becomes really important really fast when you belong to a moral system generally classified as deontology.
Street-level definitions of people groups trend strongly towards “those guys who…” definitions. To the right, street-level socialists are something like “those guys who don’t want anyone to have to work, and don’t understand where food comes from.” To the left, street-level “white men with guns” are “those guys who like murder and are always on the cusp of doing it”. If these seem like slurs, that’s because they are - that’s where the street-level shorthand is most useful.
Deontologists are very often defined on the street level as “those guys who follow rules entirely because they are rules, for no other reason, and without a single thought inside their cavernous heads”. This is especially true in the rationalist space, where utilitarianism is so accepted as the thinkin’ man’s choice that a major blog prize lists it as the only kind of moral system they consider to be worth discussing.
If the street-level formulation of deontology is negative (and almost all such formulations a particular group makes of their out-group are), then their formulation for themselves is that they are the only people who think about their morality; they choose their actions thoughtfully, as opposed to having them dictated to them by a third party like a prole.
When trying to counter this, Kantians have a slightly easier time of it than some other forms of deontologists. Where the Utilitarian might say:
We choose our actions based on the amount of overall good they do, not just blindly following rules.
The Kantian can usually succeed in pointing out there’s more complexity than that by saying something like:
Well, sure. But we don’t just blindly choose our rules - we have ways we consider what rules are good and build them from that. Kant thought that rules were only good if they’d be net-good if everyone did them, among other things.
That one-layer objection is pretty useful - The Kantian gets to immediately bring the conversation into the weeds by pointing out that while Kant was a fan of rules, he wasn’t a fan of ALL rules, and that he advocated rules he thought would work out if everyone had to do them all the time. It can still be objected to, sure, but The Kantian gets to immediately nullify “your system is for simpletons who don’t think” street-level definitions by pointing out that it’s endlessly and possibly needlessly complex.
Meanwhile, I’m in a different camp altogether, one that gets generally lumped into the ultra-broad “divine command theory” lump. At my first level, I get this:
I don’t follow rules purely because they are rules. I follow them because God said so. If you presented the same set of rules and I had no concept of God, I wouldn’t feel obligated to follow them just because you said.
If I present that first layer and only that first layer, the utilitarian does not feel less justified in thinking I’m stupid and non-thinking; if anything, he feels more so. This is for two reasons: First, a *pure* utilitarian is (usually) some form of atheist or agnostic; he doesn’t think there is or could be a god. Second, he’s even further from thinking a hypothetical god would have any right to command anyone to do anything “just cause he said so”.
Anyone who has ever been in an argument about this with a hard-thinking Christian knows there are more levels to this. For instance, some think that there might be other goods besides a vaguely defined overall welfare (or whatever definition of good the utilitarian has decided on, if he’s meta), and that God knows them. Or that the good to be maximized is somehow tied to God himself - i.e. being obedient is the ultimate good, or that mirroring God’s nature is the ultimate good.
Whether or not you think any of those levels are true, they make the street-level deontologist slur a lot harder to sustain; now we both do things we think are pursuing good, we just disagree about what good is. Or at least you might expect so, but in reality it doesn’t actually affect The Utilitarian’s calculations at all.
He’s so far from considering religious beliefs to be a reality he actually finds it difficult to believe anyone else does, either. Thus you get a consequentialist-imagined version of deontology that won’t acknowledge any of the reasons people want the rules they follow; The Utilitarian doesn’t believe the reasons are good, and because of this also grants himself permission to pretend they don’t exist at all.
So let’s talk about rule utilitarianism.
Talking about RU is easier, because it’s not really popular or known enough to have a street-level definition, so we don’t have to disambiguate between the prole understanding of the thing and the actual academic definition. The academic definition is something like this:
Actions are good or bad to the extent they follow rules chosen because they seem likely, if followed, to maximize good.
You have probably noted that the first part of that is straight-up, do-not-pass-go deontology. Their formulation is essentially “Follow rules chosen because X”, where X is their concept of good. But - and this is the kicker - that’s what pretty much every deontologist does too.
The pure, non-rule type of utilitarian can still say “Well, you see, our system is flexible - we don’t have rules at all, we just work off a general guideline and optimize for it on a case-by-case basis”. That’s a real difference. But by accepting The Rule Utilitarian as a consequentialist, philosophy is necessarily saying something like this:
Listen, we both think things can be good or bad depending on how well they follow a set of rules. But it’s either the case that you don’t have reasons for why you picked your rules, or our system is so clearly better than yours in every way we can treat it like it’s the same thing.
And they really are, thoughtfully or thoughtlessly, saying something like this - if they weren’t, there’s no logical, untortured way to place rule utilitarianism within consequentialism.
I want to be clear: That most recent quote isn’t outside of the realm of what can be argued. If someone was to say “Listen, moral systems built entirely off what can be observed are fundamentally different from those that aren’t” that would be fine - it’s an argument that we can and often do have.
My problem is with the sneakiness on display here. Remember, deontologists are the whipping boy of philosophy at large broadly because they just follow rules. Once rule utilitarians enter the fray, just following rules, we find it was never about that at all; the argument was always about how we defined “good”, not how we pursued it.
The reason I think any of this matters at all and why it’s worthwhile to note that the deontology/consequentialist distinction is not actually (again, at the street level) about rules is this: Once we dig through all the layers and find we’ve always just been arguing about the nature of Good, we have to actually talk about what good is again.
But as long as we don’t, the utilitarian gets to take advantage of the greatest long-term semantic con anyone has ever pulled: claiming the words “good” and “utility” as restricted-use technical terms, then reaping all the benefits of using them in the broadest, most non-specific ways possible.
Imagine if I started a political party called “The Good Useful Party” and consistently claimed that people should vote for me because, as the name indicated, I was dedicated to good, useful things. How long would I get away with that? My guess is I wouldn’t be able to get off the stage at the ribbon-cutting before someone said “Well, yes, but every political party claims to be that - what are your actual policy positions?”
In the case of The Utilitarian, this didn’t really happen. I mean, it happened a bit; sometimes a person will question whether or not mosquito nets really are clearly better than wigs for kids with cancer. But for the most part, he gets away with saying something like “well, you see, I’m the one who wants GOOD; everyone else is doing some other bullshit unrelated to that”.
Once you reveal that the precise definition of good standing behind the rules is all that matters to the consequentialist/utilitarian divide, as the existence of Rule Utilitarianism does, it immediately becomes apparent that utilitarians should have been defending their method of good selection the whole time, like everyone else. Honestly, it’s astonishing how much of a march they’ve been allowed to steal here.
Deontology doesn’t exist. And I don’t mean that there aren’t various moral systems that are organized under the banner of deontology - there are. Deontology as a category contains none of its subcategories; near as I can tell, it stands alone as a concept without a single adherent. Nobody thinks themselves to be following a rule simply because it’s a rule, full-stop. There’s always a layer of something behind it, whether it’s the authority of a God, the good of mankind, or whatever the hell Kantianism is.
We could say it like this: Every Deontologist is to some extent a Consequentialist; he follows a set of rules because he thinks the set of rules maximizes good outcomes. If you take a Deontologist and convince him that the only possible good that exists is a nebulously defined concept of utility that is best pursued on a case-by-case ad-hoc basis, he will set that as his rule, and believe his actions are good when he follows it. The average deontologist doesn’t do this, but that’s broadly because he doesn’t agree with The Utilitarian about what the best outcomes are.
I think an actual philosophy guy - someone who is enthusiastic about all the little granular definitions - is probably screaming at me by now. I’m guessing he probably has a point; like, I’m stepping into his territory and making broad claims that very likely have a lot of holes. I’m sorry, my man. I have a point, I promise.
When I found out about rule consequentialism, I was excited. Again, the only difference between the two systems is that Rule Consequentialists believe themselves to have chosen their rules because they pursue good - which means they believe that everyone else chose their rules for no reason at all. It’s as if they said, “Well, we need something that’s exactly like deontology in nearly every way - but we still can’t associate with those nutjobs over there, can we?”
I think that probably does have some of the implications I’ve argued for here - that deontology doesn’t exist in the sense that people commonly use the word, for instance. But it doesn’t really matter, because when we do this we are focusing on the least important parts of the argument.
Does your moral system make you do good things? Christians: do you ever find opportunities to do good? I very rarely do, compared to how much time I could actually be spending helping people. For us, in our system, that’s a big deal. You have faith, but do you have the kind of faith that makes you work towards sucking less at being good to people?
Utilitarians: You probably very strongly believe that good is a certain thing related to maximizing welfare for the greatest amount of people. Are you doing that, though? I’m not saying you aren’t, but if we are both being frank with each other it’s not like it’s any easier for you than it is for us on the individual level.
I was having a discussion the other day about how weird it is that there’s a huge fixation within philosophy on precisely defining good, but very little focus on how good a particular system of morality is at compelling you to actually do good compared to another. It’s easy to imagine a moral system that is less technically correct than another but still results in more quantifiable good, and yet 1,000 times more energy is spent on the minutiae than on how likely your particular system is to get you off your ass.
The person I was talking to noted that this seems much more applicable as a thought to a utilitarian than a Christian - in a weird meta way, a utilitarian should find it completely fine to promote and follow a system that’s less correct than utilitarianism, should its incorrectness prove to cause more good than another more correct system. At the time, I couldn’t find a reason to disagree.
But later I realized Christians get to (or, as with utilitarians, at least should get to) the same practical point in a different way: We believe we approach good through faith, but the kind of faith that actually lets us do that is explicitly the kind that, if real, should be motivating significant works in the service of others. If a Christian’s faith isn’t doing this it’s faith of a sort, but not the sort that the Bible is talking about.
I’m not being a Universalist here; I’m not saying the two moral systems are compatible or that all roads lead to the same place. But both systems demand action to function. Perfection of moral theory is nice, but it doesn’t comfort the mourning or feed the hungry in any system.
If faith without works is dead, then a utilitarian without demonstrable utility is just a Redditor. We can all do better, but we have to get to work.
Author’s note: I’m not secretly a more famous writer or anything. I work for a living, mostly by writing for corporations in support of their recruiting processes. I’ve recently been doing more freelance work in pursuit of helping companies tell their story better. If that’s something you need help with, feel free to let me know at firstname.lastname@example.org.
> I like it, but I’m not sure how much confidence I have that I won’t change my thinking on several of the points here within the next few weeks or (even more likely) over the next few years.
Just wanted to drop by to say that this is one of my favorite things about your writing. You have opinions and write well on them, but freely admit your uncertainty. When someone freely admits what they do not know and allows themselves to change their mind, THAT is when I trust what they say.
My disagreements with you probably ground out in my atheism, but regardless, here are some thoughts! Note that I’m not particularly interested in philosophy qua philosophy, so I’m sure a philosophy guy wouldn’t like me either.
I think that consequentialism is obviously correct and that you're probably a consequentialist in the trivial sense that you probably ultimately care about good outcomes. We may disagree about what those outcomes are, or how we might achieve them, but if we both ultimately care about good outcomes then I think we're consequentialists in this sort of trivial sense.
With regards to identifiable works, I’d point to effective altruism (EA). I like to perhaps idiosyncratically characterise them (us?) as ‘taking consequentialism seriously’ alongside some flavour of utilitarianism and physicalism. EA is particularly concerned with ameliorating global poverty, animal suffering, and existential risk, and while the movement's not perfect I certainly think it can really be said to be trying.
I think it's important to distinguish consequentialism, which seems rather trivial to me, from utilitarianism, which on my view is an arbitrary choice, as it seems to me that what we end up valuing is arbitrary. Hence concerns about AI alignment! (Indeed, I've recently come to appreciate that for Christians the most effectively altruistic thing to do is whatever converts the most people to Christianity, so I can respect that sort of thing.) Why choose some flavour of utilitarianism? Because 'flourishing for all sentient beings' seems like the most natural thing to value, at least as a human. I hope the same is true for AI, but I very much doubt it.
What’s the place for rules and virtues in a consequentialist framework? It’s simple: how do you calculate what actions will produce most good? This is computationally intractable! In practice we have to rely on heuristics, such as, say, rules to follow or virtues to embody. These can't be specified to such detail that you can always rely on them; the world is too complicated for that. So sometimes we'll have to fall back to reasoning from first principles. But to do this, your capacity to perform this sort of reasoning should be developed enough to be able to generate the sorts of rules and virtues by which you normally live. Consequentialism shouldn't lead you to bad consequences; the problem is with doing it wrong. That's easy to do, so one ought to be careful. And of course these rules or virtues—which, yes, we absorb from tradition—must be improved by reasoning from first principles to account for changes in the world.
It's not clear to me that there exists a moral system that's 'less technically correct' and yet 'results in more quantifiable good'. The 'less technically correct' approach may be a useful heuristic that improves outcomes. This does not make it any less technically correct; it simply means that your moral theory accounts for the physical and information-theoretic limitations of this universe, which is probably a nice feature. You cannot ignore the fact that there's a cost to obtaining information, and a cost to focusing on minutiae, and this must be accounted for in any sensible moral theory. Ultimately, I'm not sure what sort of system couldn't be understood in this heuristic sense.