The Repugnant Conclusion Bait-and-Switch
You can't build a vinegar house on baking soda foundations
There is probably a strong moral argument to be made for, at some point in the future, killing off most people to make room for dogs.
Think about it: some dogs are naturally very happy. Some of the very happiest dogs are very small, long-lived omnivores (I’m looking at you, yorkiepoos) who can live on a wide variety of foods. Before you consider other options, it makes sense to kill off everyone not necessary to maintain a reasonably high level of happiness in the much-expanded toy-breed population of planet Earth.
Long-Termist Utilitarianism ends up demanding a lot of wacky stuff once you get deep enough into the math. A not-universal-but-approaching-typical stance among them is to maximize happiness as a total instead of an average. This leads to the argument that it’s not only good but morally demanded that we try to overpopulate the world as much as possible, right up to the point where life is so bad for everyone that the average person is almost-but-not-quite suicidal. Such is the objection here, from ACX:
MacAskill concludes that there’s no solution besides agreeing to create as many people as possible even though they will all have happiness 0.001. He points out that happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life…
I hate to disagree with twenty-nine philosophers, but I have never found any of this convincing. Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.
I’m strawmanning the hell out of Long Termists with the dog thing, but note that this is a strawman in the sense that nobody is really making the argument, not in the sense that some people’s stated logic isn’t consistent with it. Since a lot of LT folks value animal happiness at an only slightly discounted rate as compared to humans, the only thing keeping this from being viable is perhaps the logistics of maintaining a sufficiently large population of dogs.
The devil of the whole business is that it’s pretty hard to argue against; if you buy into math-as-morality, you can’t be surprised when cold, hard numbers lead you to some pretty cold, dark places.
Scott’s technique to counter this is to point out that accepting extreme moral implications involving forced breeding camps to create a maximally-large population of minimally-happy folks in an effort to get a high score in a game nobody wants to play is mental enough that he’s abstaining:
Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people!
Scott’s argument is good as far as it goes, but it’s limited by him being a sane, fair utilitarian of sorts. There are some criticisms that are hard to see and get to from the inside. But I’m arguably neither fair nor utilitarian, and my colloquial sanity is wobbly at best.
When you are a young deontologist setting out to strawman the ever-loving hell out of consequentialists, the first thing you run into is some low-level reference book definition of it, like this:
Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative.
The first thing you bite into here is that this leaves things pretty open for ad-hoc reasoning that just allows you to do whatever you wanted to anyway, but this quickly loses some of its shine when it’s pointed out to you that a lot of people from every moral system do this; it’s more a “shitty person” trait than one belonging to a particular moral philosophy.
The second thing, and deeper opportunity, is to point out that this is not only essentially a circular argument, but also reeks of isolated demands for rigor. The consequentialist (henceforth Conny) has just got done telling you how dumb you are for thinking that a rule can be good in an absolute abstract sense, but now he wants to claim the entire concept of “good” as if it’s an abstract, set-in-stone concept you all agree is settled.
So you form a battle plan: you will point out that pretty much any rule system that doesn’t allow for “this rule is a settled good” also doesn’t allow for “the concept of good outcomes is a settled matter” either. Isn’t every definition of “good” outcomes he could make essentially arbitrary? Why should you prefer “feed orphans in Africa and prevent murders” to “chuck a bunch of rocks in a lake”? It’s all just moving around atoms, you say.
(I know this is getting long, but please stay with me; it’s an important setup, I swear. I also promise I will mention Singer here at some point.)
The consequentialist is prepared with a counter here, and in almost every case says something like this:
Listen, man, I know this argument. And yes, I don’t believe there’s some supernatural, fundamental definition imposed on particular acts that makes them right or wrong, and yes, I also don’t think the finger of a god was involved in setting the definition of “good consequences”.
But here’s the thing: we both agree that we don’t want to get murdered. We both agree that we don’t want our stuff stolen. We both agree that as a general rule you shouldn’t sacrifice babies to death gods.
We both also notably agree that - absent relevant modifiers - feeling bad is bad and feeling good is good. We might disagree on certain particulars and we can hash those out, but in broad strokes we both agree there’s a vaguely Maslow-shaped set of things that we all agree are good if we can get them.
So no, I don’t believe in a hard-and-fast deity-enforced definition of good outcomes. But even if I accept your premise that the implication of that is that any outcome is basically equivalent to any other outcome in some purely abstract way, I can still point to the fact that there’s a fairly large set of outcomes that pretty universally FEEL good or bad to humans at large, and it makes sense to pick that set as our definition of good.
Conny allows that - in his system - it might just be a trick of evolution that humans tend to think of a certain set of things as good, but points out that this isn’t necessarily the fatal blow I thought it was. “All humans agree on 90% of this” isn’t a great counterargument to a paper-clip maximizer, but it’s a pretty good argument to other humans - if we are hurtling towards the heat death of the universe, then at least we can do so while behaving like humans, and doing the things that humans think are good.
Let’s go ahead and say this stumps you; you are forced to grant consequentialism provisional legitimacy in the court of your own opinions and give up for a while to go somewhere else and make heavy-handed metaphors.
Imagine this: You are in a fun thought-experiment-based car dealership. A salesman comes up and indicates five large chalk hopscotch squares drawn in a straight line on the ground.
The deal, he says, is this: You are going to assign each square one of your five most important car-buying requirements. For every qualifier he meets, you will move forward a square; if he can get you through all five, you will be contractually obligated to buy the car.
This sounds like a shitty deal for you in some ways, but you don’t want to ruin the metaphor so you do what he asks. You assign cost to the first square, a particular make and model to the second, a minimum options package to the third, and so on. To save time, you put them in order of importance and tell him so.
Once you have done all this, the salesman starts his pitch. “This car is only a dollar - that’s a tremendously low price,” he says. You agree, so you move forward a square. “This car is a 2022 Honda Odyssey Minivan,” he says, and you know from the internet that this is the finest vehicle on the road today, so you move forward another square.
The salesman is pleased and continues the process. “This Honda Odyssey Minivan is possessor of the Elite trim package, the finest level of appointment Honda offers,” he says. You prepare to step forward, but he’s not done: “Oh, yeah, and also that thing I said about it being $1 was a lie, it’s like 60k minimum.”
Now you and he are at an impasse of sorts. He, being a weird for-the-sake-of-argument salesman, wants you to keep moving forward to square four; you, being sane, want to move all the way back to the pricing block and start over. After all, you moving through all the other squares was downstream of the $1 price.
If the salesman pushes it and complains that you should just ignore that the $1 price was never valid (or at least isn’t now), and that you should keep moving forward at whatever new price he sets, he’d have very little sympathy from most people; they’d rightly point out that you can’t build an argument on a particular foundation, then yank that foundation away and expect people not to notice.
There’s a guy named Peter Singer who is pretty foundational to a lot of modern utilitarian thought. I don’t know enough about Will MacAskill to say what percentage of Singer’s morality he agrees with, but this article seems to imply it’s a fair amount:
For about four decades, Singer’s essay was assigned predominantly as a philosophical exercise: his moral theory was so onerous that it had to rest on a shaky foundation, and bright students were instructed to identify the flaws that might absolve us of its demands. MacAskill, however, could find nothing wrong with it.
By the time MacAskill was a graduate student in philosophy, at Oxford, Singer’s insight had become the organizing principle of his life…
It’s important to note that things Singer says are not by definition necessarily things MacAskill says. But it’s also important to note that Singer says things like this:
Newborn human babies have no sense of their own existence over time. So killing a newborn baby is never equivalent to killing a person, that is, a being who wants to go on living. That doesn’t mean that it is not almost always a terrible thing to do. It is, but that is because most infants are loved and cherished by their parents, and to kill an infant is usually to do a great wrong to her or his parents.
Which in turn (so long as the mother agrees it’s fine) enables stuff like this thing that I wish I could obscure behind spoilers because it’s a legitimately horrifying nightmare that I’m entirely comfortable telling you shouldn’t actually read:
It’s not as intensely horrific as infanticide, but Will MacAskill seems to accept most of the repugnant conclusion and the bag of problems included in that acceptance. That’s the thing I mentioned earlier where you should overpopulate the world because the not-quite-entirely miserable lives of your huge population will have more total happiness than a smaller, much happier-on-average population would.
Like Scott, most people initially recoil against the idea of intentionally making a dystopian sorrow wasteland when we could just not do that. But as MacAskill says in his book, if you’ve already accepted that utilitarianism/consequentialism are right, this is what follows from that - you can’t believe one and not believe the other.
And herein lies the double-cross; he’s just changed the price of the car on square 3.
The long-termist repugnant conclusion salesmanship here is sneaky. It has successfully argued in the past that our concept of good should be based on what - by trick of evolution or otherwise - seems good to humans. It then said “Hey, since we all agree that maximizing good is great, shouldn’t we do something that seems incredibly bad on an instinctual level to all but a very small percentage of human beings?” and hoped like hell you wouldn’t remember the work it took to get you into the dealership in the first place.
Just as every person should ask when they are introduced to “good-actions-are-those-that-cause-good” as a morality system, you at some point very likely asked what makes utilitarianism or consequentialism’s definition of “good” non-arbitrary[1]. And you were told something. It might have been that “good” was defined by human moral intuitions we almost universally agreed on. It might have been some other standard - avoiding suffering, say - since pretty much every living thing tries to do this.
But you know what it wasn’t? It wasn’t “The mathematical implications of some weird assumptions I pulled out of a hat - implications that go against every atom of your moral instincts, and that only get more extreme as we keep cranking up the variables on our being-good formulas.” And it couldn’t have been - in that pre-frog-boiling era, you wouldn’t have accepted “a bunch of weird shit you will absolutely fucking hate” as an answer.
When MacAskill says stuff that boils down to “It’s morally demanded that to the extent we can we overpopulate the shit out of the world well beyond what would be reasonable-seeming, ensuring that everyone has shitty lives for all eternity”, he’s counting on you forgetting the specifics of your come-to-Util moment.
He has to hope you will, because the moment you remember that the $1 car-price argument was “happiness of the kind that makes intuitive sense to all of us, even if it’s just a trick of evolution” and not “the kind of stuff Goebbels comes up with after snorting an entire Monkey Paw[2]”, he’s sunk. When someone comes to you and asks “Why is this incredibly counterintuitive thing your position?”, saying “well, I made some arbitrary assumptions and then cranked them up to 500” doesn’t cut the mustard.
They shouldn’t be able to get away with this. In the same way you should be able to say “hey, what happened to all that bible stuff?” if I ever come out in favor of adultery and murder, MacAskill-likes should be forced to explain why “This outcome that even I find morally repugnant is demanded by some concept of good we agreed on earlier, which was probably ‘do non-repugnant things’, or something close” still works with their new argument.
Here’s a controversial statement: Sometimes there’s such a thing as too much knowledge - a point at which your human limitations can’t take into account everything you’ve learned at once. We mostly deal with this by letting the bulk of our knowledge seep into itself, intermixing into a foggy rule-of-thumb ruleset we use to handle new arguments. But the simple foundational stuff has to compete with the newer, cooler complexity; it’s entirely possible for it to get a little cobwebby waiting for its turn.
Here’s an even more controversial statement: Sometimes a comparatively ignorant person can get better outcomes on a particular thought problem precisely because they don’t know as much - they are forced to approach the argument as if it’s being made for the first time today.
I’m flattering myself[3], but I think I’m just ignorant enough for that to have happened here. The long-termists have pulled a fast one, whether intentionally or not; they are asking people to optimize for a particular form of good that isn’t really that fleshed out by piggy-backing off of a completely different form of good that at least to some extent is. Still being at that “wait, why is this a better definition of good than ‘eat a lot of candy’, again?” stage of things forces you to figure it all out from square one.
In this case, doing that makes repugnant conclusion believers look weird. They’ve hit a point where they’ve extrapolated from a foundation most people can at least acknowledge is reasonable to conclusions that completely contradict those foundations, and instead of either getting rid of the authority of the agreed-upon foundation or the conclusions that completely undercut it, they’ve tried to have both.
I think-but-can’t-confirm that this would be harder to see for someone like Scott, who lives in utilitarianism-is-good land. Has he thought real hard about why utilitarianism claims it can say that crushing a coke can is better than crushing the life out of someone? Very likely. Has he managed to keep up the habit of applying that rationale to every new doctrine that’s popped up in the last 20-ish years? Maybe, but it’s much less sure.
It seems like you probably either keep up that kind of first-principles scrutiny at all times, or else do what Scott did by saying “this is so against my intuition that I just don’t want to do it, and that’s fine.” If you don’t do that, then you leave yourself vulnerable to any consistent-at-first-glance extrapolation that comes down the pike.
That’s how you find yourself either shoveling unwanted children into furnaces with Singer or purposefully over-filling the world with miserable people with MacAskill[4]; it’s by letting the goalposts get moved so gradually you never notice the premise has changed from “It’s easy to know what good is - we all know it, deep down” to “Whatever the dread god math demands, we shall provide - yea, even to the blood of our young.”
I’m saying you shouldn’t go along with this kind of thing. But if you are nonetheless inclined to flow out in the tide of other people’s arbitrary definition of good, I think it makes sense to start with mine and buy me a minivan first.
[1] If someone came up to you and said “Good-causing things are things that cause good!” and you accepted it as a full moral system without unlooping that circular argument, you don’t have a moral system. You have a business card with “being good is good” written on the back.
[2] While writing this, I shouted “Monkey Paw references are in this season!” at my wife, and she asked me who determined that kind of thing. I told her that I could determine that kind of thing for myself, thank you very much; she then demanded that if I’m going to do that while she’s trying to paint miniatures, I at least have to make it a public announcement so people can hold me to it later. So here you go.
[4] I want to be extra-fair here and point out that MacAskill is not, like, overpopulating the world on purpose right now, nor am I aware of him putting together forced breeding camps or anything. This is probably what’s implied at the extreme ends of his philosophy, but it’s worth noting that where he is right now he’s mostly just sending a lot of money to various charitable causes.
That doesn’t mean that you shouldn’t be worried about the philosophy anyhow; you should always be worried about what someone’s philosophy would mean were it accepted, because it might be. But it also doesn’t mean he’s not a decent person who spends a lot of time and effort trying to make things better.