I'm trying to figure out why I don't like Effective Altruists
Very ratsy of you to overintellectualize and try to find rational reasons for your feelings. To me the post reads more like "why EA is imperfect" and less like "trying to figure out why I don't like them", as it doesn't really have much of the self-psychoanalyzing one would think is necessary to understand one's feelings. This is gonna sound weird coming from a stranger online... but, really, "I'm an inherently angry and hateful person" => "I have unresolved issues and would benefit from some good therapy and a deeper understanding of my emotions" sounds like a no-brainer implication to me. Sure, call me a member of the therapy cult. Ignore me if this doesn't sound appropriate; I meant this as advice, not an ad hominem.
Not surprisingly, given that "your feelings are your feelings and have little to do with reasons" stance, I mostly agree with your observations/characterizations, and it doesn't trigger me at all.
Trying to understand your feelings a bit: it sounds like part of it might be disappointment/disillusionment - "I thought they were knights in shining armor, but it turns out they're leftie California vegans", or maybe even "I thought they were knights and wanted to join, but then I found vegan cow corpses in the basement".
The next stage is acceptance: yeah, their sociology is a mixed bag, and they have plenty of vegans and environmentalists and even some outright Jacobin socialists under the umbrella. So what? Compared to how pointless and inefficient a lot of charities (or government spending on education or health, for that matter) are, I would argue - and I think you won't even disagree - that EAs are a net good: RCT-validated, cheap but effective third-world interventions are a good idea (though you could argue about how much prominence they deserve), popularizing earning to give is mostly good, and some attention to existential risk is not bad (though one could argue it's so hard a pursuit as to be mostly pointless).
Ditto re self-advertising and overconfidence: again, it seems what you're doing is first setting unrealistically high expectations and then getting angry when EAs don't meet them. Like, you don't hate on a Domino's ad for not being entirely even-handed and self-aware, do you? Or on St. Jude's (https://twitter.com/peterwildeford/status/1618739307447042048) for not being fully transparent in its ads and not including RCTs or QALY calcs?
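For readers who haven't seen one, here is a minimal sketch of what a "QALY calc" looks like in practice; the intervention names and every number below are made-up assumptions for illustration, not figures from any real charity.

```python
# A minimal, purely illustrative cost-per-QALY comparison.
# All names and numbers below are made-up assumptions for the sake of
# the example, not real figures for any charity or intervention.

interventions = {
    # name: (total cost in USD, QALYs gained)
    "bed nets (hypothetical)": (100_000, 2_000),
    "hospital ad campaign (hypothetical)": (100_000, 50),
}

for name, (cost, qalys) in interventions.items():
    print(f"{name}: ${cost / qalys:,.0f} per QALY")

# The EA-style argument is simply: prefer the option with the lower
# cost per QALY, all else being equal.
```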
In my model of the world, every org is to some extent dishonest and puts its own interests first; it isn't expected to self-incriminate, it is allowed to promote itself, and it is expected to put the best spin on what it's doing.
EAs are not perfect, but again, if you apply realistic standards and not some expectation of perfection, I think they aren't too terrible: more transparent than many, more willing to engage with external criticism to at least some degree, and they don't shush it, ignore it, or viciously attack back the way many other entities would.
Or is it your context you're railing against? Like, in my own backyard of NYC postrats, a lot of the things you say are kinda common knowledge; the rat vs. EA split is deep and goes far back. Is your community much more crazily into EA?
Here is a simple explanation of EA: it was mostly drafted by well-intentioned, moralistic nerds with no real social skills, and is now being fortified by manipulative narcissists who want to turn it into a cult. The narcissists use the nerds as a meat shield against criticism, and the nerds aren't aware of it yet because they are socially tone-deaf. This is of course familiar to any subculture that has endured the drama of jock and "normie" infiltration. https://meaningness.com/geeks-mops-sociopaths https://archive.ph/OvMnA
> spending the money on useless or counter-productive things
A classic reason why nerds do this is what Yarvin identified as "telescopic philanthropy", which emerged about two centuries ago: nerds (and women) with too much money on their hands become crass with it and think most problems can be handwaved away with money and idealistic aid. Of course, two things happen: the further you are from the organization using the money, the more likely it is to be misused ("a fool and his money are soon parted"); and the money lost will almost always be used in a nefarious way by manipulators, especially because socially naïve nerds have a weaker conception of "the right people" than the working class does. https://graymirror.substack.com/p/is-effective-altruism-effective
> Even if animal lives are worth a fraction of human lives (and not all EAs agree this is so), sufficient animal lives would still outweigh an individual human life.
Case in point: THIS provides a good example of how things go wrong. Firstly, the value of animals doesn't scale the same way the value of people does. For animals, since they often reproduce abundantly and also die quickly, the cost of saving one more of them may be low, but the value of saving one more of them is also low. Unless there is a Costco or Aldi for environmental aid, no thanks. For humans, since everything we do in our social life follows Metcalfe's Law (whether it's Facebook, Bitcoin, or your local community), having one more person on earth means a value increase that scales with the current population size. https://archive.ph/uNVuW https://archive.ph/FexBS
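To make that scaling claim concrete, here is a minimal sketch of the commenter's argument (not an endorsement of it), assuming a quadratic Metcalfe-style value function for human populations and a constant per-animal value; both assumptions are the commenter's premises, not established facts.

```python
# A sketch of the commenter's scaling argument, not an endorsement of it.
# Assumption: the "Metcalfe's Law" value of a human population scales as n**2,
# while each additional animal adds a roughly constant amount of value.

def human_marginal_value(n: int, k: float = 1.0) -> float:
    """Value added by one more person if total value is k * n**2."""
    return k * ((n + 1) ** 2 - n ** 2)  # = k * (2n + 1), grows with n

def animal_marginal_value(v: float = 1.0) -> float:
    """Value added by one more animal, assumed constant."""
    return v

for population in (1_000, 1_000_000, 1_000_000_000):
    print(f"n = {population:>13,}: one more human adds {human_marginal_value(population):,.0f}, "
          f"one more animal adds {animal_marginal_value():,.0f}")
```

Under these assumptions the marginal value of an extra person grows with the existing population while the marginal value of an extra animal stays flat, which is the asymmetry the commenter is pointing at.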
Humans are inherently distinct from animals in this regard, and nerds are often tone-deaf about this since they are more left-wing (right-wing geeks, please calm down for a second). They are more likely to be heterophilic (the opposite of "xenophobic" in its literal sense), to prioritize acquaintances over close friends, and, when nationhood is controlled for, to disproportionately prefer earthly ecology and sometimes aliens. Weirdly enough, liberalism is also tied to excessive verbosity and cultural-political participation relative to one's ability to reason and think scientifically, as demonstrated by the "wordcel" phenomenon. https://www.nature.com/articles/s41467-019-12227-0 https://emilkirkegaard.dk/en/2020/05/the-verbal-tilt-model/ https://emilkirkegaard.dk/en/2022/02/wordcel-before-it-was-cool-verbal-tilt/
> Their fear of this outcome is deepened by the near-complete lack of people actually actively working to make advanced AI who agree with them.
Yarvin again dissected "AI risk" to its core and hit on something a lot of people miss: ANY AI needs to be fed human behavior, and as soon as there is a "human data strike", no AI can ever be as strong as these sci-fi lovers make it out to be. Another problem is that for AI to truly be sci-fi-level powerful, it must first control wetware and meatware (brain and body) in some predictable but intrusive way. The problem with this line of thought is that humans are valuable precisely in their ability to "be free", and that lack of freedom is itself a type of malfunction. A smoking motherboard after overclocking is the same as a human "getting smoked" by any system, no matter how cultish or idealistic it is. https://graymirror.substack.com/p/there-is-no-ai-risk https://graymirror.substack.com/p/the-diminishing-returns-of-intelligence
Another way to see this "AI risk" issue is that every trap of the AI is equivalent to some human problem. Roko's Basilisk (retroactive punishment by force)? Cancel culture. Paperclip Maximizer (blind achievers)? Burnout culture and ROI-centric modern economics. Alignment Problem (not letting the AI "hack it")? The problems of middle management. It seems that nerds never see organizational and systemic problems as such, and instead encode them into AI thought experiments. Kind of a coward's way of whining about how much the office sucks while wanting to be paid to give bad advice instead of fixing it... come to think of it, this is exactly how "investment gurus" get people to buy products that seem to solve a problem, only for buyers to realize they're a half-way attempt at explaining its intricacies. Silicon Valley angel-investor mindset. Venting: this is now The Menu (2022), but replace food with AI tech. https://archive.ph/Ar5Sx https://archive.ph/WwenP
> Rationalism at least somewhat leaves open the possibility that the implications of climate change might end up limited to a pretty small effect 100 years from now... Nuclear power is the kind of thing that utilitarians love... EAs broadly spend no money lobbying for this that I can tell...
See previous notes. If these liberal types were just genetically wired tree-huggers we wouldn't have problems, but they should be honest that their position is emotionally biased and not purely based on rational judgement, on either the greenhouse-gas front OR the ecology-protection front. But then again, it is likely that they don't speak out at the moment because nuclear risk is of real concern, and war-proof alternatives like thorium and fusion are still on the horizon. Is it possible that these nerds are either defensive against jocks (manifesting as disarmament theory), or worried that they will emotionally snap and metaphorically "shoot up the school" given enough pressure (manifesting as madman theory and brinkmanship)?
> Here was a prize for innovative, worthwhile thinking where both things were defined as “Agreeing with and boosting things we already think are correct”... this contest dedicated to constructive, substantial criticism of EAs is fully winnable by someone who writes an essay detailing how they looked very deeply into all the available evidence and finds that EAs are great in every way... Outs are littered everywhere; escapes from actually criticizing them are omnipresent.
In essence, their organized routine of "steelmanning" (hardening or changing one's argument to make it robust) is devolving into a kind of braindead mental exercise. Steelmanning is always lit af, but this classic playstyle, similar to the committee games in Yes Minister, is the martial-arts equivalent of the Chinese master who can't take a punch from an amateur boxer and cries about how the rites have been broken and how it is "unfair". They would be better off reforming it into a more modern style, like what Bruce Lee did (e.g. Yarvin's Effective Tribalism proposal). At this point they are either spineless idiots, or they are only doing this as the liberal ceremonial equivalent of a tithe (or of a Feng Shui master paying a visit). https://archive.ph/y1poc
> it’s entirely possible (if unlikely) that the post you are reading, one in which is a guy working on figuring out ways to like and be less critical of EAs, could win that give-us-criticism contest if it was entered.
The only way to win while showing them a bit of humility is through the Red Pen game. Yes, Slavoj Žižek is the Joker, and we live in a society, har har. But what are they trying to say without uttering the words directly? What is the main thing outside the spreadsheet of the Tyranny of Numbers? "I am a brainy idiot being held hostage by narcissists, and we are starving!" Okay, outside of this west-coast prison for nice guys is the Nevada desert. Do you have enough willpower and street smarts to walk out? https://newcriterion.com/issues/2015/3/in-praise-of-red-ink http://www.maxpinckers.be/texts/slavoj-zizek/ https://thomasjbevan.substack.com/p/the-tyranny-of-numbers
After reading this very interesting and enlightening piece, I would like to opine, if I may. At a time when our economy, even the global economy, is on the verge of collapse, these EAs have the audacity to pretend they care about anything outside of themselves! I hate the very idea of these self-serving mutants! I don't hate the person, I won't endanger my soul for these pukes, I just hate the evil they cling to. They gave MILLIONS to the Democrats, second only to Satan himself! That right there tells you everything you need to know about their state of mind and why they feel no shame for what they've done to OTHER PEOPLE, you know, the ones they care about! They only feel shame because they were caught, and they are revealing a lot of the deep-state players for the fools they are 😂 Anyways, I pray for their souls.
Considering the current drama around FTX and SBF, one has to admit that this article was prescient. With hindsight, there were obviously some things to hate about effective altruists. At least about the effective altruists who wanted to be so effective in their altruism that they helped themselves to other people's money.
It's because they're smugly confident of their virtue, for one thing.
"I have *math* to prove how virtuous I am, and I have the ability to multiply math by guilt to make you realize how unvirtuous you are."
And that's as someone who generally *agrees* with the concept. Dunno.
To the fear of AI one, you might be interested in this piece arguing against that fear: https://idlewords.com/talks/superintelligence.htm
For what it's worth, to confirm your point a little bit: I have sometimes worked as an AI research foot soldier, building models for search engines using other people's mathematical frameworks, and I am quite convinced that we're nowhere near true general AI. I'm not worried about it, and that's a big part of why I find EA (and its big brother Rationalism) off-putting.
Without reading the other comments I wanted to give my answer. I don't actually hate EAs, but I have strong instinctual concerns about them and I've gone through a similar thought process.
EAs strongly believe that they are at least close, if not already there, to figuring out how to fix all the world's problems. If not the particulars, at least the method - implying that the only answers they don't have are either not worth pursuing or they just haven't gotten to them yet. At the same time, they pursue projects and goals that obviously aren't the best possible uses of money. I would point to AI and animal welfare, but even bed nets can be relevant if the nets aren't used properly or aren't as effective as advertised. There are examples of individual projects that went very poorly - ones that read like people stuck in a "white savior" mentality, not understanding why they were rejected by the world's poor.
As a movement to get atheist tech workers to spend excess money on some good, it's probably way more effective than just about anything else. Even AI and animal welfare and Democratic politics make sense looked at that way, since that's their target audience; they just add bed nets or whatever on top. If it tries to be anything more than that, it's going to fail badly.
There is a word missing from this essay: paternalism.
Your discussions with Hamish Doodles and others in these comments walk all around this, using words like "elitist" and "pretentious", but omit the accurate word. Let us be clear: EA is inherently paternalistic. Very paternalistic.
With its bednets and water projects, EA is all about doing things *to* people, in the belief that "we (EAists) know what's best for you. We see the big picture. We do not respect you as equals, adult humans living in valid social structures (and we can't actually *see* your society), so we are confiscating your agency: your right to make your own decisions and live life according to your customs."
Paternalism. Viewing outgroups as children. Or perhaps it's even more extreme than that: humans as livestock.
If this is right, the separate obsession with livestock is not a separate aspect at all. Humans are livestock. Ditto, the focus on existential risks: protect the herd. The Repugnant Conclusion fits right in. It's all livestock. Increase the herd. I think even the obsession with AI safety fits in here. A bigger wolf, or a bigger sheepdog? Protect the herd.
My dislike of EA stems from this... paternalism, almost eugenicism.
RC, I am guessing, reading between the lines, that some of your anger is a reaction to this too. From reading some of your other essays, it appears you have been in situations where you had little agency for long periods of time. Life experience like that would certainly sensitise me to programmes like EA, if I weren't already sensitised.
EA seeks to override agency. A real effective altruism would seek to increase opportunities for people to exercise their agency, in ways and to degrees that they feel are appropriate in their own societies, while respecting them as equals. Instead of "happiness", or "welfare", or "utility", it would seek to create space for agency.
There is a tried-and-true method for allowing others to increase their agency while respecting them as equals: trading with them. It can take a thousand years, but it has worked more often than not.
Trade. Real rational altruists would focus on eliminating some of the barriers to trade that developing countries face in developed countries, such as internal producer subsidies, quotas and bans, exclusionary treaties, and byzantine regulations.
I think we need to push back on the idea that reducing poverty in Africa is indisputably good. EA's math seems to end at the point of 'it's a person who is alive, so their life has value', and they don't consider the differences in instrumental value that people have.
EAs treat every human life on the planet the same, whether it's their next-door neighbour in San Francisco or some rural Ugandan. They probably know each about equally well, too.
Donating in strictly utilitarian ways, they fulfil their sense of community responsibility without actually having to deal with anyone in an ongoing way. (It may not entirely be their fault either; see "Throwing Rocks at the Google Bus".)
An EA seeks to effect change through individual action on a one-dollar, one-action basis; they are interested in financial solutions rather than political ones. By and large, the solutions to just about every major problem in the world today require increasing levels of social trust, but the EA seeks to solve problems without addressing this underlying issue.
The EA seeks to brute-force a better world via financial means and to salve their conscience by helping their community, which they consider to be the whole world. A noble idea, but by ignoring the fact that very few share this view of the world, they overlook much better ways of solving problems: solutions that, by being pro-communitarian, would be much more lasting and cost-effective over the long term.
This is, of course, a generalisation.
Ultimately I see EA as a movement grounded in libertarianism, and libertarians don't put much value in trust or community; libertarianism is also pretty unpopular, especially outside of the United States.
I have a theory. Please let me know if this sounds plausible.
- EA has a culture of assuming a high-status position
- If an EA runs into a random person on the street, the EA will assume by default that they have a better understanding of what is true and good. Kinda like if an Englishman and an Indian ran into each other in colonial India: the Englishman would automatically assume he was superior.
- The language EA uses isn't "we're a bunch of scruffy underdogs with some exciting ideas and ambitious plans", but more like "we've crunched the numbers and here's objectively what everyone should be doing".
- The high-status culture comes from the class background of the founding members: Oxford students. As I understand it, EA was originally an Oxford club and the name reflects that. "Effective Altruism" sounds pretentious and self-congratulatory.
- This status presentation triggers associations with rich idiots who assume they're entitled to everything and are better than everyone and are totally oblivious to their privilege.
If this is in fact what's going on, then it's a problem that can and should be solved. Status games are monkey-brain nonsense and shouldn't be getting in the way of actually helping people.
I think the instinctive revulsion against EA by those with any sense is justified, and explained by the fact that EA "philosophy" lives in an abstract fantasy world utterly detached from (and opposed to) how life works. I put together a philosophical critique here: https://luctalks.substack.com/p/effective-altruism-cringe-alarm
I thought this was a thoughtful critique of EA in line with your critique of EA's certainty it is right on various issues: https://www.strangeloopcanon.com/p/ea-is-a-fight-against-knightian-uncertainty
You may be over-thinking this. I just found your Substack for the first time today, so, even more than is usually the case, I could be misjudging you, but the subtitle of your stack, 'Breaking down the arguments of superior minds.', leads me to provisionally sort you as some sort of egalitarian/anti-elitist by basic inclination.
Well, EA, which I also hadn't heard of until about 2 weeks ago, is an elitist project. The language is elitist, and the conclusions are elitist. Its take on one of the age-old problems in justice -- 'Why should I have so much, when so many have so little?' -- is 'because I am smarter than you are, and also more rational, which makes me more effective, which means that I will use the money more wisely than you would'. But in less than a week of reading about EA, trying to figure out what the fuss is all about, I have found a tiny bit of cleverness and almost no wisdom at all. And unacknowledged egoism. The underlying tone is 'morality is a crutch for people who just aren't as clever as we are'. Which means they don't understand rage (it sounds to me as if they could take lessons from you if they were interested), or humility, or 'power corrupts, and absolute power corrupts absolutely'. They don't seem to have any notion that Mr. Moneybags is a position of power, and that 'how do you use your power?' is a different question from 'does this produce less suffering?' All of these moral problems, near as I can tell, are either to be fixed by a little more cleverness, or don't/shouldn't exist in the first place.
Or perhaps mercilessness is what is needed? If you end up concluding that 'I am more moral than others, because others are too squeamish to accept Repugnant Conclusions', rather than 'I need to work on my morality generator so it stops spitting out Repugnant Conclusions because they are almost certainly clever-but-wrong' -- then what's to stop you from accepting a Repugnant Conclusion that the enemies of the movement all belong in gulags? After all, they shouldn't exist ...
This seems like a very relevant post regarding the sorts of issues raised here: https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of.
Interesting point re nuclear power, I am also disappointed that it's not a major EA cause area, and I hope this is not for political reasons.
https://www.finmoorhouse.com/writing/ea-projects thinks it would be a good idea (Ctrl+F Nuclear advocacy organization) but seems to worry that it's not very tractable to change US nuclear regulation. I wasn't able to find much other discussion apart from an article on the EA forum that considers it not a priority since solar+wind will be cheaper and an easier sell.
https://forum.effectivealtruism.org/topics/nuclear-energy