> I like it, but I’m not sure how much confidence I have that I won’t change my thinking on several of the points here within the next few weeks or (even more likely) over the next few years.

Just wanted to drop by to say that this is one of my favorite things about your writing. You have opinions and write well on them, but freely admit your uncertainty. When someone freely admits what they do not know and allows themselves to change their mind, THAT is when I trust what they say.


My disagreements with you probably ground out in my atheism, but regardless, here are some thoughts! Note that I’m not particularly interested in philosophy qua philosophy, so I’m sure a philosophy guy wouldn’t like me either.

I think that consequentialism is obviously correct, and that you're probably a consequentialist in the trivial sense that you ultimately care about good outcomes. We may disagree about what those outcomes are, or how we might achieve them, but if we both ultimately care about good outcomes then I think we're consequentialists in this sort of trivial sense.

With regard to identifiable works, I’d point to effective altruism (EA). I like to perhaps idiosyncratically characterise them (us?) as ‘taking consequentialism seriously’ alongside some flavour of utilitarianism and physicalism. EA is particularly concerned with ameliorating global poverty, animal suffering, and existential risk, and while the movement isn’t perfect, I think it can genuinely be said to be trying.

I think it's important to distinguish consequentialism, which seems rather trivial to me, from utilitarianism, which on my view is an arbitrary choice, as it seems to me that what we end up valuing is arbitrary. Hence concerns about AI alignment! (Indeed, I've recently come to appreciate that for Christians the most effectively altruistic thing to do is whatever converts the most people to Christianity, so I can respect that sort of thing.) Why choose some flavour of utilitarianism? Because 'flourishing for all sentient beings' seems like the most natural thing to value, at least as a human. I hope the same is true for AI, but I very much doubt it.

What’s the place for rules and virtues in a consequentialist framework? It’s simple: how do you calculate which actions will produce the most good? You can’t; it’s computationally intractable. In practice we have to rely on heuristics, such as rules to follow or virtues to embody. These can’t be specified in such detail that you can always rely on them; the world is too complicated for that. So sometimes we’ll have to fall back on reasoning from first principles, and to do that, your capacity for this sort of reasoning must be developed enough to generate the rules and virtues by which you normally live. Consequentialism shouldn’t lead you to bad consequences; the problem is doing it wrong. That’s easy to do, so one ought to be careful. And of course these rules or virtues, which, yes, we absorb from tradition, must be refined by reasoning from first principles to account for changes in the world.

It's not clear to me that there exists a moral system that's 'less technically correct' and yet 'results in more quantifiable good'. The 'less technically correct' approach may be a useful heuristic that improves outcomes. This does not make it any less technically correct; it simply means that your moral theory accounts for the physical and information-theoretic limitations of this universe, which is probably a nice feature. You cannot ignore the fact that there's a cost to obtaining information, and a cost to focusing on minutiae, and this must be accounted for in any sensible moral theory. Ultimately, I'm not sure what sort of system couldn't be understood in this heuristic sense.


Good essay; I feel inspired to build upon it. For now, I think one key point to think about is the distinction between rules and rules for coming up with other rules.

Utilitarianism strikes me as being about how you choose your rules for behavior. No self-proclaimed utilitarian goes through the moral calculus for every decision, both because the numbers aren't there and because no one has time for that. Instead they try to work out general heuristics that work most of the time, based on the best numbers they can conjure and their general sense of the good. They reserve the right to change those heuristics based on changing senses of the numbers and/or their general sense of the good.

Deontology, or rather deontology as practiced, does pretty much the same thing, only with varying levels of openness to changing the rules, and perhaps less clear rules about how to change the rules. One could go farther and say "Deontology is only inflicted on people by those who make the rules; left to their own devices, everyone tries to pick rules that maximize their expected good outcome." That is, the only deontologists are those who want other people to follow rules the deontologist makes up, while everyone who chooses to follow rules does so because they think those rules are going to lead to more good.

As you say, everyone is sort of a consequentialist, but how they evaluate the consequences of actions and rules under the extreme uncertainty of real life varies a good bit, though less than the rationalist/utilitarians would probably like to admit :)


Hi Resident,

This was a thoughtful piece and I enjoyed reading it.

I think Utilitarians are pretty clear about what constitutes the "good".

The confusion arises because there is not a single definition of the good that is applied to each and every person.

This is from John Stuart Mill's On Liberty (which I got via https://en.wikipedia.org/wiki/On_Liberty):

"That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.... Over himself, over his body and mind, the individual is sovereign."

The connection with Utilitarianism is straightforward. Externally imposed standards are bound to limit the "sovereignty" of some folks.

So, the greatest good for the greatest number is achieved by laissez-faire, with guardrails for public safety.

The beauty of this approach is that it allows individuals to discover and pursue their own goodness.

The drawback is that there is no guarantee of success.


I really really like the conclusion here. But I'm frustrated by the introduction and development.

I thought it would be obvious by now that all moral systems are arbitrary. The advantage of utilitarianism is that it's explicitly not a moral system. It's more of a moral framework where you can plug in your arbitrary moral preferences and then get some help solving real-world issues case by case.

Religious morality is also arbitrary. We codify whatever arbitrary rules we like as pretend commandments from a pretend higher being. Then we follow those rules because they satisfy our preferences as a group. When our arbitrary preferences change, religions change too: just study the history of Christianity to see how much it has changed over the last 1,500 years to adapt to whatever arbitrary preferences we held in a given era. The history of every other religion shows a similar pattern. And when preferences change too much, whole religions disappear and others take their place, but that's not especially relevant, since at a meta level they are interchangeable.

The case for utilitarianism is that it makes the meta-game explicit. We know the preferences are arbitrary, we think about them, we agree as a society on what our preferences are, and then we use the framework to apply them. This allows us to stay constantly aware of, and capable of adjusting, the preferences that serve as input to the system.

The case for religion is that people are not smart enough to play the utilitarian game explicitly, and they are also not capable of following rules they know to be arbitrary. So we still have to agree on some arbitrary preferences, but we do it through indirect social interactions. Then we codify these rules as coming from made-up supernatural sources to increase the likelihood that people will follow them. That may increase rule observance, but it hides the preference-choosing step and makes it harder to be aware of.

I'm not convinced which one is actually better in practice. But I do think there's a paradox: if religion is better, it requires that people never become fully aware of how the system operates, or the whole thing comes crumbling down. So I currently believe utilitarianism would be the better answer, since it allows people to be more aware of the system as a whole.

In the end, though, I do agree that whichever system proves better will be the one that compels people to act. Right now I see neither as especially successful at that goal.
