
One of the more annoying things about Utilitarians, in general, is that they argue 'we've got the math on our side'. This irritates people who have studied basic Set Theory and know that this isn't so. It's not as if mathematics is the best foundation for moral sentiments anyway, but Utilitarianism has a strong appeal among people who pride themselves on their rationalism above all else.

Infinite sequences are a source of strange paradoxes. Most of them are not actually contradictory but merely indicative of a mistaken intuition about the nature of infinity and the notion of a set.

"What is larger," wondered Galileo Galilei in _Two New Sciences_, published in 1638, "the set of all positive numbers (1,2,3,4 ...) or the set of all positive squares (1,4,9,16 ...)?" (He wasn't the first to do this sort of wondering, of course, but it's a convenient starting point, i.e. there are links.)

For some people the answer is obvious. The set of all squares is contained in the set of all numbers, therefore the set of all numbers must be larger. But others reason that because every number is the root of some square, the set of all numbers is exactly as large as the set of all squares. Paradox, no?

Galileo concluded that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.

See 'Galileo's Paradox' https://en.wikipedia.org/wiki/Galileo%27s_paradox .

We haven't changed our minds much about this in the mathematical world. We've become more rigorous in our thinking, and have invented fancy notation -- typographical conventions -- to talk about such sets, but the last big advance in these comparisons was the idea of the 'cardinality of infinite sets', thank you Georg Cantor. The cardinality of a set is how many items are in it. If the set is finite, you just count the elements. For infinite sets, Cantor's rule is that two sets have the same cardinality when their elements can be paired off one-to-one. I could explain a whole lot of math here, which would bore most of the readership to tears, but people interested in this stuff can find it all over the internet. If you come from a country where set theory is taught in high school, you will have already learned this.

The bottom line is that the set of all numbers and the set of all squares are the same size, that size being 'countable infinity', or 'aleph-null' in the jargon. Aha, you conclude. So where is the mistaken intuition that creates these paradoxes? It's the assumption that a proper subset must be smaller -- that you can compare infinite sets part-to-whole and conclude things like 'the set of all numbers is twice the size of the set of all even numbers'.
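For the programmers in the audience, here is a minimal sketch of that pairing in Python (the function name and the five-pair cutoff are mine, purely for illustration):

```python
# Cantor's criterion in miniature: two sets have the same
# cardinality when their elements can be paired off one-to-one.
# Pairing each positive integer n with its square n*n matches
# the two sets up with nothing left over on either side.

from itertools import count, islice

def galileo_pairing():
    """Yield (n, n*n) pairs: a one-to-one correspondence between
    the positive integers and the perfect squares."""
    for n in count(1):
        yield n, n * n

for n, square in islice(galileo_pairing(), 5):
    print(f"{n} <-> {square}")
# 1 <-> 1
# 2 <-> 4
# 3 <-> 9
# 4 <-> 16
# 5 <-> 25
```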

Which brings us to the Utilitarians' favourite hobby horse: trolley problems. If you consider each human being on the track as 'an infinite set of potentials', not just metaphorically but mathematically too, then you can no longer conclude that killing 1 person is better than killing 4. They've all got the same cardinality. (No, I cannot prove this one. But for a thought experiment, we can assume it.)

And this is, after all, what non-utilitarian moral philosophers have been insisting all this time: people are not fungible. Non-utilitarians still have to make the tough moral decisions about whether to let one person die to save four, but we don't get to hide behind a shallow, mistaken intuition while singing the 'we're superior because we have the math on our side' song as loudly as we can.

P.S. I think this dreadful state where we all end up living in capsule-hotel accommodations, feeling just a hair above misery, with only one duty -- to fill the world with people in the same state -- is simply a restatement of Hilbert's Paradox of the Grand Hotel --

https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel

where the hotel has a particularly lousy rating for hospitality on Trip Advisor.


I still think self-dealing is the biggest practical problem with utilitarianism. While it's true that every moral system gets abused, utilitarianism is uniquely susceptible.

Imagine I decide to eat more healthily. I have two options: Banish cream-filled Hostess cupcakes from my life entirely, or acknowledge that cream-filled Hostess cupcakes are a sometimes food that should be eaten extremely rarely.

Now, I would probably increase my total life enjoyment if I could usually not eat a cream-filled Hostess cupcake but occasionally, on a special occasion, have one cream-filled Hostess cupcake. So the "math" says to go with option 2.

I do so. But oh no! Now every time I come across a cream-filled Hostess cupcake I have to do a mental analysis. Is this one of those rare occasions when I am permitted a cupcake? And hey, look at all this psych research that says I will *not* make that decision with my calculating mind, but with the hind-brain that thinks it might die if I don't have that cupcake. So I eat the cupcake and tell myself that I've been dealing with a lot of stress lately, that this will help me cope with that stress, that I'll probably be more effective in my diet if I don't make myself miserable through denial, etc., etc.

Or I go with option one, say "sorry, I don't eat cream-filled Hostess cupcakes" and miss out on a tiny bit of pleasure, but gain the advantage of this decision not being a decision.

For me, the big lie in utilitarianism is not so much that you might reach a repugnant conclusion. As you point out, the whole purpose of the project is to increase human flourishing in human terms, so if our logic chain hits a point of not doing that, then at the point of action I think 99.9% of sane people will revise the chain.

And it's not that I think there is no right and wrong answer to moral questions: I am absolutely okay with decreasing human happiness and flourishing today in exchange for drastically increased human happiness and flourishing in the future.

The big lie is that we are mentally capable of handling every single moral choice through calculation. We absolutely are not.


I haven't seen as much pushback on this as I'd like, so I felt the need to right that myself! A few things.

First, and I think I might've said this in a previous comment, I do see morality as arbitrary; my morality is merely a reflection of my preferences. But I don't think it's reasonable to hope for any more than this, so I don't see this as a problem! Fortunately I also think that, given the right environment, most people would end up choosing to value something like 'flourishing for all sentient life'. I'm an optimist like that! (Unfortunately, I don't think AI will converge on this without significant effort on our part. But hey, if that goes well, I think the best possible future is very likely.) And I think we could reasonably try to operationalise this as some combination of preference and hedonic utilitarianism. But I'm a consequentialist before I'm a utilitarian, so I'm happy to let the relevant utility function be whatever we determine to be sensible.

Second, I think the comment on infinities is rather bizarre. If you had $5500, would you rather donate it (at the margin) to AMF to buy malaria nets, thereby saving a life in expectation, or to some other cause or organisation? How would you make that decision? In a world like ours, where resources are finite, if we genuinely care about doing the most good possible we have to think about the tradeoffs we're making when we allocate our resources. All else equal, I think most people would rather one person die than five! Why throw that away, especially in a decidedly finite observable universe?
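To make that tradeoff concrete, here is a toy comparison; the $5500-per-life figure is the one quoted above, while "Hypothetical Org B" and its cost are invented purely for contrast:

```python
# Toy marginal cost-effectiveness comparison. The AMF figure is
# the one quoted in the comment; Org B is a made-up placeholder.

BUDGET = 5500.0  # dollars available to donate

cost_per_life = {
    "AMF (malaria nets)": 5500.0,    # ~1 life saved in expectation
    "Hypothetical Org B": 27500.0,   # invented for contrast
}

for org, cost in cost_per_life.items():
    print(f"{org}: ~{BUDGET / cost:.2f} expected lives saved")
# AMF (malaria nets): ~1.00 expected lives saved
# Hypothetical Org B: ~0.20 expected lives saved
```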

Third, I mean, come on. This is ridiculous. Surely you have to realise this objection is ridiculous? I'm happy to bite the bullet that the world of the repugnant conclusion is better than the world of today. But a universe filled with people whose lives are barely worth living doesn't sound good! It sounds absolutely horrible! An obscene tragedy! Such a universe has squandered the immense resources involved in supporting so many lives—it has squandered roughly all potential value!

The relevant quantity to maximise is the utility derived from each unit of resources consumed. (And to be clear, I would be very surprised if the resulting system ended up being simple, or bland, or repetitive, because after all those don't sound like very positive words.) What the universe ends up filled with as a result depends on what humanity ends up valuing—it could be humans, dogs, digital minds, or even something else. This would depend on things like the weight we assign to the experiences of digital minds, which would lean on better understandings of neuroscience and artificial intelligence. (It's not going to be dogs, though. That would be dumb. There's no way dogs are an optimal use of resources.) The one thing I'm sure it won't end up filled with is miserable people whose lives are barely worth living. That would be awful!
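A minimal sketch of that maximisation, under toy assumptions -- the log-shaped utility function, the budget, and every number below are invented for illustration, not claims about real welfare:

```python
import math

# Toy model: a fixed resource budget R, each life consuming r
# units of resources and yielding u(r) utility.

R = 1_000_000.0  # total resource budget, arbitrary units

def u(r):
    """Per-life utility: negative below subsistence, diminishing
    returns above; zero ("barely worth living") at r = e."""
    return math.log(r) - 1.0

def total_utility(r):
    n = R / r            # population the budget can support
    return n * u(r)      # total = population * per-life utility

# Scan consumption levels from just above subsistence upward:
best_r = max((i / 10 for i in range(28, 2000)), key=total_utility)
print(f"optimal per-life consumption ~ {best_r:.1f}")         # ~7.4
print(f"total utility there ~ {total_utility(best_r):,.0f}")  # ~135,335
```

The optimum lands at an interior consumption level (analytically r = e² in this toy model), not at 'as many barely-positive lives as possible': maximising utility per unit of resources does not favour cramming in minimal lives.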

I think the world we live in today is pretty repugnant. I hope the future is filled with an enormous number of lives, each of them far better than any life led today. I hope no utilitarian settles for anything less!


> He has to hope you will, because the moment you remember that the $1 car price argument was “happiness of the kind that makes intuitive sense to all of us, even if it’s just a trick of evolution” and not “The kind of stuff Goebbels comes up with after snorting an entire Monkey Paw”, he’s sunk.

I just want to start by saying thank you for how hard this made me laugh. I non-ironically slapped my knee. :D


Playing devil's advocate:

The happiness function *might be* very complicated. In fact, that's almost certainly the case, seeing that people are famously complicated. So it might simply not be possible to tile the Earth with pod-hotels while average happiness stays anywhere above "shoot me now".

The failure seems to be in the premise "for any human population with a given average happiness (however that might be calculated) you can add more people so that overall happiness increases". I strongly dispute this claim. For given assumptions about technology and how society works, there is probably an optimal population, beyond which increasing it *a little* makes everyone *much less happy*, so that overall happiness is strictly diminished. Trying to say this with words rather than math sux, btw.
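So, in symbols instead -- a toy model, where the linear crowding penalty is an assumption chosen only because it is the simplest form that falls:

```latex
% Toy model: per-person happiness h falls linearly with
% population n (the linear form is an illustrative assumption).
\[
  U(n) = n\,h(n), \qquad h(n) = h_0 - c\,n
  \;\Longrightarrow\;
  U(n) = h_0\,n - c\,n^2 .
\]
% Total happiness peaks at a finite optimum population:
\[
  U'(n) = h_0 - 2c\,n = 0
  \;\Longrightarrow\;
  n^{\ast} = \frac{h_0}{2c} ,
\]
% and U is strictly decreasing for n > n*: past the optimum,
% adding people reduces overall happiness, which breaks the
% "you can always add more people" premise.
```

Any happiness function that falls fast enough with crowding gives the same shape: a finite peak, then strict decline.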

Seeing how we mostly hate having too many people around, I strongly suspect that the optimum is *not* a pod-hotel-tiled Earth. It might be that we're already above the optimum population, although tech is changing rapidly, so who knows. But it seems to me that this Repugnant-Conclusion argument is built on a flawed premise and is therefore unsound.

I'm not a proponent of the Dreaded Philosophy, by the way; it seems to breezily over-simplify far too much. But the part where it says "Things should be good" is irrefutable. The required follow-up is "OK, good how?", and that's where all hell breaks loose.


"Maximize total happiness" utilitarianism seems to endorse increasing population by any amount, even if that lowers average happiness, as long as total happiness is increased. So if the average happiness now is 10 utils and there are 10 people, for a total of 100 utils, then we should opt to increase the population to 10,000, even if that means lowering the average happiness, as long as the total is more than 100 utils. For instance, we should prefer that increase in numbers even if the average happiness falls to 0.0101, for a total of 101. So we are required to produce a lot of very, very unhappy people.

My solution to the paradox is that 0.0101 is not very unhappy. It's just a little bit happy, which is not the same thing. At 0.0101 everyone would have more than enough: zero means sufficiency, and anything above zero means more than sufficiency. Very unhappy means suffering from an overwhelming quantity of troubles, problems, etc. Once we leave behind Epicurus's mistake of equating misery with a tiny bit of happiness, we should appreciate that the repugnant conclusion is not repugnant. 0.0101 is better than "not so bad". It's good. It's true that the people at 10 would resist the prospect of losing happiness, if they survive into the new scene, but that has to be left out of the thought experiment.


What's funny is that I've been thinking about "The Monkey's Paw" off and on all week.


You and xkcd are on the same page about dogs

https://xkcd.com/2672/


What bothers me the most about utilitarianism is that people take it seriously enough to talk about.

* I propose that morality entails hitting people with hammers! More hitting, harder hitting, and bigger hammers all translate to more good!

* I propose that humans are born good and become more evil as they age! The more effectively you commit suicide by 35, and the more children you have by then, the more good you brought the world!

* I propose that hurting people's feelings is the root of all evil! One must never hurt the feelings of another person! (Are cats people? Are politicians people? Who is a person? We need answers!)

Nobody bothers talking about these arbitrary moral codes. But then we have:

* I propose maximizing happiness is morally good!

And now suddenly everyone is trying to figure out who counts (or should count) as a happiness-haver, and whether this does or doesn't align with intuition. And boy is this important - the only reason anyone likes utilitarianism is that they find it intuitively satisfying.

But is relying on intuition really the way we should be reasoning about morality? About anything? A tactic that might seem at least vaguely sensible would be to ask questions, like, "Does morality exist? Where does it come from? How would we know?" This isn't what utilitarians do. They start with "Happiness is good, just, you know, because," and then talk about what hardcore rationalists they are. Seriously, I'm left standing here wondering - what does "rationality" even mean?


"There is probably a strong moral argument to be made for, at some point in the future, killing off most people to make room for dogs."

This would be true only if you 1) are a consequentialist utilitarian who 2) treats utility and happiness as synonyms. I am 1 but not 2 - there is clearly more to a good life than the kind of happiness experienced by a breed of dog with a particularly sunny disposition. If we treated that kind of happiness as all that mattered, there would also be an excellent case for developing really pleasant drugs, or machines that simply stimulated the section of the brain responsible for whatever feeling you're looking to maximize.

Also, I didn't feel baited-and-switched when I first heard of discussions around the "repugnant" conclusion - partially because, as you've mentioned in comments below, it's not clear exactly how to define a life barely worth living, and it could be fine (and by whatever definition one uses, it must be worth living, so I guess maybe it's morally OK by definition, even if the definition isn't clear and specific?). The second reason is that it was pretty clear to me pretty quickly that we were talking about hypothetical worlds that couldn't actually exist here, and that this was unlikely to translate into an actualizable world. The repugnant conclusion is something like: "for any given world with high average utility, it is possible to imagine a world with many more people who have lives that are barely worth living, where the sum of the utility of all the people in the second world is greater than that in the first." Which, fine, yes, I can accept that. It is theoretically possible for me to imagine one world where 10 billion people live their best lives, however one defines best, and another world where lives are so close to being morally not worth living that 10 quintillion of them existing would have the same utility as me enjoying a very nice cheese bagel - and yet, because that second hypothetical world contains a 1-with-enough-zeroes-after-it number of such people, the math comes out with more total utility in the second world. Fine, I accept that it is possible to imagine such a scenario.
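For what it's worth, the break-even arithmetic in that giant sentence is easy to make explicit. A toy sketch, with every number invented for illustration (utils are not a real unit):

```python
# Break-even arithmetic for the repugnant conclusion, with
# illustrative numbers only.

world_a_pop = 10_000_000_000      # 10 billion people
utility_per_life_a = 100.0        # "living their best lives"
total_a = world_a_pop * utility_per_life_a

epsilon = 1e-9                    # a life just barely worth living

# Barely-worth-living lives needed to match World A's total:
break_even_pop = total_a / epsilon
print(f"{break_even_pop:.1e} lives")   # 1.0e+21 -- a sextillion
```

The required head-count scales as 1/epsilon, so the closer the lives get to "barely worth living", the more physically ridiculous the population needed to break even.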

But this does not mean that in practice in the actual world we live in, we're going to have to make actual people have barely-good lives. Let's say we take the next hundred years and work on making the current world better, so we do have around 10 billion people living very good lives at the end of it. And someone says "well actually, wouldn't it be better if there were as many people living on Earth as it took for there to be a higher total utility, even if that means a quintillion people?" My response would be "A quintillion people will not fit."

I suspect, although I cannot prove, that we will reach a point where adding more people reduces total utility due to resource constraints. This is explicitly avoided in discussions of the repugnant conclusion by saying unrealistic things like "imagine that you can add more people without making the existing people worse off, because nobody uses extra resources - let's suppose you could create a second planet Earth by snapping your fingers". Of course, since utility is poorly defined and not something like "there are 15 apples, each apple weighs 100 grams, so there are 1500 grams of apples. Also each apple is worth 2 utils, so there are 30 utils of apples", there will be debates even among total utilitarians about when that point of decreasing total utility has been reached. Still, my point is that even accepting the logic behind the repugnant conclusion does not compel one to try to bring about a state of affairs where everyone's life is far from great but there are a lot of people. If it would take 1 quintillion extra people to make up the utility of a cheese bagel, I'll just bring two cheese bagels into existence that wouldn't have otherwise existed (which is much easier than working towards a future state of the world with +1 quintillion people anyway), and call it a good day. And the same logic with smaller numbers will likely apply at more realistic scales.

Oh, and: Despite the fact that I'm just posting stuff that disagrees with you, I liked your post a lot - got me thinking, and it was well thought out. I actually had filed the repugnant conclusion under "unsolved edge cases/things I'm not sure about", and now I feel like I have a better response to it. :)


It seems to me that while utilitarian calculations might be a good way to distribute resources that a government has taken from the citizens (or, alternatively, to exert power over the citizenry), or for a charity to distribute funds it received from donors, utilitarianism is not a good justification for the government to appropriate those resources or that power, or for a person to donate to the charity, in the first place.

This reminds me of that grand rat's nest of legally instantiated moral hazard and grift, conservatorship law. The moment grandma's memory gets a little hazy and she gets a little unsteady on her feet and maybe makes one or two unwise financial moves, an army of private and public "good samaritans" is called in to send grandma to a care home, divide half of her estate among the professionals supposedly working in grandma's best interest, and the other half among her squabbling heirs, lest she *gasp* continue to control her own destiny and maybe squander the estate before the heirs can get at it, all while grandma insists, Monty-Python-style, that she feels fine and wants to maintain her independence and might even go for a walk.

The point is that it *might* be better to let most of us go around like demented old folks trying to pursue our own happiness while we liquidate our own estates to the detriment of our heirs, rather than let our betters take all our stuff and determine what's best for us.

There should at least be a strong presumption that each individual's idea of his own good is at least as valid as, if not more valid than, our own conception of what's good for him.

There's a reason the US Declaration of Independence talks about the "pursuit of happiness" (not coincidentally listed third, after life and liberty) and not happiness per se.


Okay, so I don't read all that much philosophy. But this:

"The long-termist repugnant conclusion salesmanship here is sneaky. It has successfully argued in the past that our concept of good should be based on what - by trick of evolution or otherwise - seems good to humans. It then said “Hey, since we all agree that we should be maximizing good is great, shouldn’t we do something that seems incredibly bad on an instinctual level to all but a very small percentage of human beings?” and hoped like hell you wouldn’t remember the work it took to get you into the dealership in the first place."

Is a good critique of materialist consequentialism, and one that's new to me.


> But I’m arguably neither fair nor utilitarian, and my colloquial sanity is wobbly at best.

lovely


Excellent breakdown. That is such a sneaky move I hadn't noticed it myself.

I also request pictures of the miniatures your wife is painting.


I don't know why total utilitarianism gets as much flak for this scenario as it does from Scott and others. Nobody has a plan to tile the planet with barely livable slums. Meanwhile, the principle of prioritizing average welfare also leads to some repellent conclusions, like "a population of 500 million very fulfilled people would be better than the world today", which is a much more dangerous idea. If we're picking a theory of population ethics to get mad at, why total utilitarianism?

The practical tradeoff that hinges on this distinction today is whether population control in poor countries is desirable: whether it's good for more people in Burundi to be born even at a low standard of living. I would say yes, but I'm not a utilitarian at all, and I think people are valuable in their own right, not just as a contribution to aggregate utility.

If all you want to argue is that utilitarianism itself is a bait-and-switch ("I was promised compelling, intuitive answers to moral dilemmas!") that's fine, but this is just about the least threatening nontrivial implication of utilitarian ethics: nobody is in a position to execute on it and even the people who find it intellectually persuasive think that it's aesthetically distasteful.
