

The Repugnant Conclusion Bait-and-Switch
You can't build a vinegar house on baking soda foundations
There is probably a strong moral argument to be made for, at some point in the future, killing off most people to make room for dogs.
Think about it: some dogs are naturally very happy. Some of the very happiest dogs are very small, long-lived omnivores (I’m looking at you, yorkiepoos) who can live on a wide variety of foods. Before you even consider other options, it makes sense to kill off everyone not necessary to maintain a reasonably high level of happiness among the much-expanded toy-breed population of planet Earth.
Long-Termist Utilitarianism ends up demanding a lot of wacky stuff once you get deep enough into the math. A not-universal-but-approaching-typical stance among its adherents is to maximize happiness as a total instead of an average. This leads to the argument that it’s not only good but morally demanded that we try to overpopulate the world as much as possible, right up to the point where life is so bad for everyone that the average person is almost-but-not-quite suicidal. Such is the objection here, from ACX:
MacAskill concludes that there’s no solution besides agreeing to create as many people as possible even though they will all have happiness 0.001. He points out that happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life…
I hate to disagree with twenty-nine philosophers, but I have never found any of this convincing. Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.
I’m strawmanning the hell out of Long-Termists with the dog thing, but note that this is a strawman in the sense that nobody is really making the argument, not in the sense that some people’s stated logic isn’t consistent with it. Since a lot of LT folks value animal happiness at an only slightly discounted rate compared to humans, the only thing keeping this from being viable is perhaps the difficulty of maintaining a sufficiently large population of dogs.
The devil of the whole business is that it’s pretty hard to argue against; if you buy into math-as-morality, you can’t be surprised when cold, hard numbers lead you to some pretty cold, dark places.
Scott’s technique to counter this is to point out that accepting extreme moral implications involving forced breeding camps to create a maximally-large population of minimally-happy folks in an effort to get a high score in a game nobody wants to play is mental enough that he’s abstaining:
Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people!
Scott’s argument is good as far as it goes, but it’s limited by him being a sane, fair utilitarian of sorts. There are some criticisms that are hard to see and get to from the inside. But I’m arguably neither fair nor utilitarian, and my colloquial sanity is wobbly at best.
As uses of my time go, I think this is a pretty good one. Putting the spotlight on some of the weaknesses of implications-of-math-as-dread-Lovecraftian-god is vital, lest you someday find yourself wandering a wasteland with a bag of Alpo in one hand and a blood-soaked machete in the other.
When you are a young deontologist setting out to strawman the ever-loving hell out of consequentialists, the first thing you run into is some low-level reference book definition of it, like this:
Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative.
The first thing you bite into here is that this leaves things pretty open for ad-hoc reasoning that just allows you to do whatever you wanted to anyway, but this quickly loses some of its shine when it’s pointed out to you that a lot of people from every moral system do this; it’s more a “shitty person” trait than one belonging to a particular moral philosophy.
The second thing, and deeper opportunity, is to point out that this is not only essentially a circular argument, but also reeks of isolated demands for rigor. The consequentialist (henceforth Conny) has just got done telling you how dumb you are for thinking that a rule can be good in an absolute abstract sense, but now he wants to claim the entire concept of “good” as if it’s an abstract, set-in-stone concept you all agree is settled.
So you form a battle plan: you will point out that pretty much any rule system that doesn’t allow for “this rule is a settled good” also doesn’t allow for “the concept of good outcomes is a settled matter” either. Isn’t every definition of “good” outcomes he could make essentially arbitrary? Why should you prefer “feed orphans in Africa and prevent murders” to “chuck a bunch of rocks in a lake”? It’s all just moving around atoms, you say.
(I know this is getting long, but please stay with me; it’s an important setup, I swear. I also promise I will mention Singer here at some point.)
The consequentialist is prepared with a counter here, and in almost every case says something like this:
Listen, man, I know this argument. And yes, I don’t believe there’s some supernatural, fundamental definition imposed on particular acts that makes them right or wrong, and yes, I also don’t think the finger of a god was involved in setting the definition of “good consequences”.
But here’s the thing: we both agree that we don’t want to get murdered. We both agree that we don’t want our stuff stolen. We both agree that as a general rule you shouldn’t sacrifice babies to death gods.
We both also notably agree that - absent relevant modifiers - feeling bad is bad and feeling good is good. We might disagree on certain particulars and we can hash those out, but in broad strokes we both agree there’s a vaguely Maslow-shaped set of things that are good if we can get them.
So no, I don’t believe in a hard-and-fast deity-enforced definition of good outcomes. But even if I accept your premise that this means any outcome is basically equivalent to any other outcome in some purely abstract way, I can still point to the fact that there’s a fairly large set of outcomes that pretty universally FEEL good or bad to humans at large, and it makes sense to pick that set as our definition of good.
Conny allows that - in his system - it might just be a trick of evolution that humans tend to think of a certain set of things as good, but points out that this isn’t necessarily the fatal blow I thought it was. “All humans agree on 90% of this” isn’t a great counterargument to a paper-clip maximizer, but it’s a pretty good argument to other humans - if we are hurtling towards the heat death of the universe, then at least we can do so while behaving like humans, and doing the things that humans think are good.
Let’s go ahead and say this stumps you; you are forced to grant consequentialism provisional legitimacy in the court of your own opinions and give up for a while to go somewhere else and make heavy-handed metaphors.
Imagine this: You are in a fun thought-experiment-based car dealership. A salesman comes up and indicates five large chalk hopscotch squares drawn in a straight line on the ground.
The deal, he says, is this: You are going to assign each square one of your five most important car-buying requirements. For every qualifier he meets, you will move forward a square; if he can get you through all five, you will be contractually obligated to buy the car.
This sounds like a shitty deal for you in some ways, but you don’t want to ruin the metaphor so you do what he asks. You assign cost to the first square, a particular make and model to the second, a minimum options package to the third, and so on. To save time, you put them in order of importance and tell him so.
Once you have done all this, the salesman starts his pitch. “This car is only a dollar - that’s a tremendously low price,” he says. You agree, so you move forward a square. “This car is a 2022 Honda Odyssey Minivan,” he says, and you know from the internet that this is the finest vehicle on the road today, so you move forward another square.
The salesman is pleased and continues the process. “This Honda Odyssey Minivan is possessor of the Elite trim package, the finest level of appointment Honda offers,” he says. You prepare to step forward, but he’s not done: “Oh, yeah, and also that thing I said about it being $1 was a lie, it’s like 60k minimum.”
Now you and he are at an impasse of sorts. He, being a weird for-the-sake-of-argument salesman, wants you to keep moving forward to square four; you, being sane, want to move all the way back to the pricing block and start over. After all, you moving through all the other squares was downstream of the $1 price.
If the salesman pushes it and complains that you should just ignore that the $1 price was never valid (or at least isn’t anymore), and keep moving forward at whatever new price he sets, he’d get very little sympathy from most people; they’d rightly point out that you can’t build an argument on a particular foundation, then yank that foundation away and expect people not to notice.
There’s a guy named Peter Singer who is pretty foundational to a lot of modern utilitarian thought. I don’t know enough about Will MacAskill to say how much of Singer’s morality he agrees with, but this article seems to imply it’s a fair amount:
For about four decades, Singer’s essay was assigned predominantly as a philosophical exercise: his moral theory was so onerous that it had to rest on a shaky foundation, and bright students were instructed to identify the flaws that might absolve us of its demands. MacAskill, however, could find nothing wrong with it.
By the time MacAskill was a graduate student in philosophy, at Oxford, Singer’s insight had become the organizing principle of his life…
It’s important to note that things Singer says are not by definition necessarily things MacAskill says. But it’s also important to note that Singer says things like this:
Newborn human babies have no sense of their own existence over time. So killing a newborn baby is never equivalent to killing a person, that is, a being who wants to go on living. That doesn’t mean that it is not almost always a terrible thing to do. It is, but that is because most infants are loved and cherished by their parents, and to kill an infant is usually to do a great wrong to her or his parents.
Which in turn (so long as the mother agrees it’s fine) enables stuff like this thing that I wish I could obscure behind spoilers, because it’s a legitimately horrifying nightmare that I’m entirely comfortable telling you not to read:
It’s not as intensely horrific as infanticide, but Will MacAskill seems to accept most of the repugnant conclusion and the bag of problems included in that acceptance. That’s the thing I mentioned earlier where you should overpopulate the world because the not-quite-entirely miserable lives of your huge population will have more total happiness than a smaller, much happier-on-average population would.
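If you want to see the arithmetic that gets people there, here’s a minimal back-of-the-envelope sketch in Python. The numbers are mine, picked purely for illustration (Scott’s 5 billion happy people, a happiness scale I made up); nothing here is MacAskill’s actual model.

```python
# A toy sketch (illustrative numbers only, not anyone's published model) of the
# "total happiness" accounting that drives the repugnant conclusion.

def total_happiness(population: float, avg_happiness: float) -> float:
    """Pure sum-of-utilities accounting: total = population * average."""
    return population * avg_happiness

# World A: 5 billion extremely happy people (happiness on a made-up 0-100 scale).
world_a = total_happiness(5e9, 90.0)

# World Z: lives at happiness 0.001, "slightly better than dying".
# How big does World Z have to be before the summed total prefers it?
threshold = world_a / 0.001

print(f"World A total happiness: {world_a:.3e}")
print(f"World Z outscores it once its population passes {threshold:.3e} people")
# Roughly 450 trillion barely-not-suicidal people "beat" 5 billion extremely
# happy ones, because the only thing the math counts is the sum.
```

The point isn’t the specific numbers; it’s that under pure summation there is always some population size at which the barely-tolerable world wins.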
Like Scott, most people initially recoil against the idea of intentionally making a dystopian sorrow wasteland on purpose when we could just not do that. But as MacAskill says in his book, if you’ve already accepted that utilitarianism/consequentialism are right, this is what follows - you can’t believe one and not believe the other.
And herein lies the double-cross; he’s just changed the price of the car on square 3.
The long-termist repugnant-conclusion salesmanship here is sneaky. Utilitarianism successfully argued, earlier on, that our concept of good should be based on what - by trick of evolution or otherwise - seems good to humans. It then said “Hey, since we all agree that maximizing good is great, shouldn’t we do something that seems incredibly bad on an instinctual level to all but a very small percentage of human beings?” and hoped like hell you wouldn’t remember the work it took to get you into the dealership in the first place.
Just as every person should ask when they are introduced to “good-actions-are-those-that-cause-good” as a morality system, you at some point very likely asked what makes utilitarianism or consequentialism’s definition of “good” non-arbitrary[1]. And you were told something. It might have been that “good” was defined by human moral intuitions we almost universally agreed on. It might have been some other standard - avoiding suffering, say - since pretty much every living thing tries to do this.
But you know what it wasn’t? It wasn’t “The mathematical implications of some weird assumptions I pulled out of a hat - implications that go against every atom of your moral instincts and only get more extreme as we keep cranking up the variables on our being-good formulas.” And it couldn’t have been - in that pre-frog-boiling era, you wouldn’t have accepted “A bunch of weird shit you will absolutely fucking hate” as an answer.
When MacAskill says stuff that boils down to “It’s morally demanded that, to the extent we can, we overpopulate the shit out of the world well beyond anything that would seem reasonable, ensuring that everyone has shitty lives for all eternity”, he’s counting on you forgetting the specifics of your come-to-Util moment.
He has to hope you will, because the moment you remember that the $1 car price argument was “happiness of the kind that makes intuitive sense to all of us, even if it’s just a trick of evolution” and not “The kind of stuff Goebbels comes up with after snorting an entire Monkey Paw[2]”, he’s sunk. When someone comes to you and goes “Why is this incredibly counterintuitive thing your position”, saying “well, I made some arbitrary assumptions then cranked them up to 500” doesn’t cut the mustard.
They shouldn’t be able to get away with this. In the same way you should be able to say “hey, what happened to all that bible stuff?” if I ever come out in favor of adultery and murder, MacAskill-likes should be forced to explain why “This outcome that even I find morally repugnant is demanded by some concept of good we agreed on earlier, which was probably ‘do non-repugnant things’, or something close” still works with their new argument.
Here’s a controversial statement: Sometimes there’s such a thing as too much knowledge - a point at which your human limitations can’t take into account everything you’ve learned at once. We mostly deal with this by letting the bulk of our knowledge seep into itself, intermixing into a foggy rules-of-thumb ruleset we use to handle new arguments. But the simple foundational stuff has to compete with the newer, cooler complexity; it’s entirely possible for it to get a little cobwebby waiting for its turn.
Here’s an even more controversial statement: Sometimes a comparatively ignorant person can get better outcomes on a particular thought problem precisely because he doesn’t know as much - he’s forced to approach the argument as if it’s being made for the first time today.
I’m flattering myself[3], but I think I’m just ignorant enough for that to have happened here. The long-termists have pulled a fast one, whether intentionally or not; they are asking people to optimize for a particular form of good that isn’t really fleshed out by piggy-backing off of a completely different form of good that at least to some extent is. Still being at that “wait, why is this a better definition of good than ‘eat a lot of candy’, again?” stage of things forces you to figure it all out from square one.
In this case, doing that makes repugnant conclusion believers look weird. They’ve hit a point where they’ve extrapolated from a foundation most people can at least acknowledge is reasonable to conclusions that completely contradict that foundation, and instead of giving up either the authority of the agreed-upon foundation or the conclusions that undercut it, they’ve tried to have both.
I think-but-can’t-confirm that this would be harder to see for someone like Scott, who lives in utilitarianism-is-good land. Has he thought real hard about why utilitarianism claims it can say that crushing a coke can is better than crushing the life out of someone? Very likely. Has he managed to keep up the habit of applying that rationale to every new doctrine that’s popped up in the last 20-ish years? Maybe, but it’s much less sure.
It seems like you probably either keep up that kind of first-principles scrutiny at all times, or else do what Scott did by saying “this is so against my intuition that I just don’t want to do it, and that’s fine.” If you don’t do that, then you leave yourself vulnerable to any consistent-at-first-glance extrapolation that comes down the pike.
That’s how you find yourself either shoveling unwanted children into furnaces with Singer or purposefully over-filling the world with miserable people with MacAskill[4]; it’s by letting the goalposts get moved so gradually you never notice the premise has changed from “It’s easy to know what good is - we all know it, deep down” to “Whatever the dread god math demands, we shall provide - yea, even to the blood of our young.”
I’m saying you shouldn’t go along with this kind of thing. But if you are nonetheless inclined to flow out on the tide of other people’s arbitrary definitions of good, I think it makes sense to start with mine and buy me a minivan first.
[1] If someone came up to you and said “Good-causing things are things that cause good!” and you accepted it as a full moral system without unlooping that circular argument, you don’t have a moral system. You have a business card with “being good is good” written on the back.
[2] While writing this, I shouted “Monkey Paw references are in this season!” at my wife, and she asked me who determined that kind of thing. I told her that I could determine that kind of thing for myself, thank you very much; she then demanded that if I’m going to do that while she’s trying to paint miniatures, I at least have to make it a public announcement so people can hold me to it later. So here you go.
[3] Kind of.
[4] I want to be extra-fair here and point out that MacAskill is not, like, overpopulating the world on purpose right now, nor am I aware of him putting together forced breeding camps or anything. This is probably what’s implied at the extreme ends of his philosophy, but it’s worth noting that where he is right now he’s mostly just sending a lot of money to various charitable causes.
That doesn’t mean that you shouldn’t be worried about the philosophy anyhow; you should always be worried about what someone’s philosophy would mean were it accepted, because it might be. But it also doesn’t mean he’s not a decent person who spends a lot of time and effort trying to make things better.
The Repugnant Conclusion Bait-and-Switch
One of the more annoying things about Utilitarians, in general, is that they argue 'we've got the math on our side'. This irritates people who have studied basic Set Theory, who know that this isn't so. It's not as if mathematics would be the best foundation for moral sentiments anyway, but Utilitarianism has a strong appeal among people who pride themselves on their rationalism above all else.
Infinite sequences are a source of strange paradoxes. Most of them are not actually contradictory but merely indicative of a mistaken intuition about the nature of infinity and the notion of a set.
"What is larger," wondered Galileo Galilei in _Two New Sciences_, published in 1638, "the set of all positive numbers (1,2,3,4 ...) or the set of all positive squares (1,4,9,16 ...)?" (He wasn't the first to do this sort of wondering, of course, but it's a convenient starting point, i.e. there are links.)
For some people the answer is obvious: the set of all squares is contained in the set of all numbers, therefore the set of all numbers must be larger. But others reason that because every number is the root of some square, the set of all numbers is equal to the set of all squares. Paradox, no?
Galileo concluded that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.
See 'Galileo's Paradox' https://en.wikipedia.org/wiki/Galileo%27s_paradox .
We haven't changed our minds much about this in the mathematical world. We've become more rigorous in our thinking, and have invented fancy notation -- typographical conventions -- to talk about infinite sets, but the last big thing in such comparisons was the idea of 'Cardinality of infinite sets', thank you Georg Cantor. The cardinality of a set is how many items are in it. If the set is finite, then you just count the elements. Cantor wanted to make it possible to do certain comparisons with infinite sets. I could explain a whole lot of math here, which would bore most of the readership to tears, but people interested in this stuff can find it all over the internet. If you come from a country where set theory is taught in high school, you will have already learned this.
The bottom line is that the set of all numbers and the set of all squares are both the same size, the size being 'countable infinity' or 'aleph-null' in the jargon. Aha, you conclude. So where is the mistaken intuition that creates these paradoxes? The mistaken intuition is that you can compare infinite sets and conclude things like 'the set of all numbers is twice the size of the set of all even numbers'.
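Here's a tiny sketch (my own toy code, names mine) of the pairing being gestured at above: the map from n to n*n matches every natural number with exactly one square and misses none, which is all Cantor's "same cardinality" requires, even though the squares are a proper subset of the naturals.

```python
# Toy illustration of the bijection between the natural numbers and the
# perfect squares: n -> n*n pairs every natural with exactly one square.

def square(n: int) -> int:
    """Forward direction of the pairing."""
    return n * n

def root(m: int) -> int:
    """Backward direction on perfect squares, showing the pairing runs both ways."""
    return int(round(m ** 0.5))

# Any finite prefix shows the one-to-one matching; the same rule works all the way up.
pairs = [(n, square(n)) for n in range(1, 11)]
print(pairs)  # [(1, 1), (2, 4), (3, 9), ..., (10, 100)]

# No natural is left unpaired, no square is hit twice, and the pairing inverts cleanly.
assert all(root(s) == n for n, s in pairs)
```

It just shows why 'proper subset' and 'smaller than' come apart once the sets are infinite.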
Which brings us to the Utilitarian's favourite hobby horse, trolley problems. If you consider each human being on the track as 'an infinite set of potentials', not just metaphorically, but mathematically too, then you can no longer conclude that killing 1 person is better than killing 4. They've all got the same cardinality. (No, I cannot prove this one. But for a thought experiment, we can assume it.)
And this is, after all, what the non-utilitarian moral philosophers have been insisting for all this time. People are not fungible. Non-utilitarians still have to make the tough moral decisions about whether you let one person die to save four, but we don't get to hide behind a shallow mistaken intuition all the while singing the 'we're superior because we have the math on our side' song, as loudly as we can.
P.S. I think that this dreadful state where we all end up living in capsule-hotel accommodations, feeling just a hair above misery, with only one duty -- to fill the world with people in the same state -- is simply a restatement of Hilbert's Paradox of the Grand Hotel ( https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel ), where the hotel has a particularly lousy rating for hospitality on Trip Advisor.
I still think self-dealing is the biggest practical problem with utilitarianism. While it's true that every moral system gets abused, utilitarianism is uniquely susceptible.
Imagine I decide to eat more healthily. I have two options: Banish cream-filled Hostess cupcakes from my life entirely, or acknowledge that cream-filled Hostess cupcakes are a sometimes food that should be eaten extremely rarely.
Now probably I would increase my amount of life enjoyment if I could usually not eat a cream-filled Hostess cupcake, but occasionally, on a special occasion, have one cream-filled Hostess cupcake. So the "math" says to go with option 2.
I do so. But oh no! Now every time I come across a cream-filled Hostess cupcake I have to do a mental analysis. Is this one of these rare occasions when I am permitted a cupcake? And hey, look at all this psych research that says I will *not* make that decision based on my calculating mind, but on the hind-brain that thinks it might die if I don't have that cupcake. So I eat the cupcake and tell myself that I've been dealing with a lot of stress lately and that this will help me cope with that stress, and probably I'll be more effective in my diet if I don't make myself miserable through denial, and etc. etc.
Or I go with option one, say "sorry, I don't eat cream-filled Hostess cupcakes" and miss out on a tiny bit of pleasure, but gain the advantage of this decision not being a decision.
For me the big lie in utilitarianism is less that you might reach a repugnant conclusion. As you point out, the whole purpose of the project is to increase human flourishing in human terms, so if our logic chain hits a point of not doing that, at the point of action I think 99.9% of sane people will revise the chain.
And it's not that I don't think there is a right and wrong answer to moral questions: I absolutely am okay decreasing human happiness and flourishing today for drastically increased human happiness and flourishing in the future.
The big lie is that we are mentally capable of handling every single moral choice through calculation. We absolutely are not.