

(Note: This article is incredibly long, boring for most, and inside-baseball-y in a lot of ways. If you just want to have a good time like a normal person, I did a podcast with Will Jarvis of Narratives here. It’s pretty good! It’s not 7000 words long!)
I don’t agree with most of Scott Alexander’s conclusions about how social stuff does or should work. We live in very different areas, around very different people, and see the world in very different ways. Given that, it’s not surprising when you look back through my archive and see something like a half-dozen articles with topics that boil down to “Scott was wrong on the internet!”.
It was inevitable, I think, that he’d someday take enough exception to one of these to respond to it, and so he has.
In reading his response (and my response to his response), keep in mind that Scott is generally pretty nice to me despite probably agreeing with me on very little, and has a “bank” of about half a dozen unanswered swipes from me with which to finance stiffly-worded responses; I don’t think he could overstep in his response without making fun of my weird tiny hands or something similarly sensitive.
There’s some stuff I agree with him on, but more that I don’t, so this is a long article even judging by my already-wordy standards. Get ready, folks; I used all my words.
Context
The basic story is that Scott wrote an article talking about Jhana meditative states - essentially milestones that meditators say they click into at different levels of meditation mastery.
Claims about what each stage of the Jhanas actually feels like vary quite a bit, but Scott picked pretty extreme examples (relative to what I’ve seen): a guy claiming it was better than sex and that it made him quit (or cut back on) recreational drugs, eat less dessert, and drink less coffee (because it sensitized his brain to caffeine). There was also a person (I think a woman) who claimed that she (or he) would sometimes later almost (or actually?) orgasm from touching blankets in Target as a result of previous Jhana use.
Aftereffects notwithstanding, Jhana practitioners claim exactly no downsides from any of this; they say there is no addiction or loss of function of any kind associated with these states. In fact, they say the states are so very non-addictive that they often forget to do them at all; there are not only no cravings, but no impulse to seek the states out whatsoever.
Some people called bullshit in the comments. Scott pointed out that the states are both plausible (since brains do crazy stuff) and claimed by a lot of people (“thousands”), and that, given this, it was weird to question Jhana states when you wouldn’t question, say, hunger.
You have probably already read my previous article or Scott’s summary of it, and if you’ve read most of my work outside that you know I enjoy a scrap. That said, I’m not right on every single thing here - I’ll be walking back my weak stuff before trying to poke holes in Scott’s.
Spoonies again
Scott says this:
Telling doctors that you’re adopted in order to get genetic tests seems more genuinely deceptive and counterproductive. But it seems like the sort of deception that you would come up with if you were suffering a lot and wanted to maximize chances of your doctor figuring out why, without really understanding how genetic tests worked.
But based on a few circumstantial things like these, people keep tarring all “spoonies” as fakers. I had a patient like this recently. They had some weird symptoms that pattern-matched to “the type of thing someone might make up”, the two or three most common tests didn’t find anything, a succession of doctors accused them of making the symptoms up, and they had absolutely awful quality of life for a year or two. Finally some doctor (not me, I was their psychiatrist) dug a little deeper and found a tumor the size of a tennis ball, removal of which relieved all of their symptoms.
I don’t think he’s entirely wrong here. Directly after writing the original article, I got a lot of comments and emails that all boiled down to something like this:
I think at the very least you are using the term spoonies wrong; it predates the popularity of the “fake spoonie” narrative. There are lots of us who have diagnoses that were hard to get but are legitimate, and treatments we are getting that have cured/lessened/helped our problems.
Think about it: do you think that diagnostic medicine is perfect? If you do, do you think every doctor is perfect at it? Do you think doctors, who are all humans, are doing their jobs well at all times?
And no, I don’t. I was using a term I don’t think I fully understood as a universal pejorative, and I think if I had thought about it a bit longer I would have said “you know, it’s pretty much assured that there’s at least some significant number of people with some combination of hard-to-diagnose diseases and shitty doctors that I’m lumping into this category”.
That leaves me in a weird spot, because everything we know about the world says there are going to be sick people having needlessly tough experiences getting care. At the same time, if I’m right about “some people lie for attention” at all, there are going to be at least some of those mixed in with the spoonies. But even if there are quite a lot of fakers, they’d be additive to the actually-sick people, not reductive; the bad players would make them more sympathetic, not less.
Scott pegs bad-type-spoonie numbers as very minimal:
Are all spoonies like this? No. My totally made-up wild guess, which might be completely wrong, is that about 20% have some physical illness we understand perfectly well (like a tumor) that just hasn’t been detected yet, 30% have some physical illness we haven’t discovered/characterized/understood yet, 45% have some psychosomatic condition, and 5% are consciously faking for attention.
If he’s right about these numbers, it would matter in terms of the kinds of policies you’d want. If most of everyone is just getting mistreated by the system, and only ~5% are fakers taking advantage of that system for funzies, the funzies-havers are not-great but should be ignored in favor of the much-superior “make sure the 95% are taken seriously” messaging.
I’m not sure Scott is right that 50% of everybody sitting in doctors’ offices going “I don’t care what your tests say, I’m sick” are perfectly legit, but I also have no great way of confidently saying what the correct level is.
I think I’d be less hard on myself if I had said “there are certainly a ton of legit people, but this is still a problem because it’s a growing threat, and we have to carefully balance taking care of the sick against kicking out the shitty so we get an optimal outcome”. I have some text in there that kind of does this, but not enough.
So really I shouldn’t have talked about this group at all, or should have thought about it hard enough to put nuance into the argument. That’s bad and I feel bad about it. I’ll be talking about them more later (especially as it relates to psychosomatic pain), but I wanted to address this upfront: I’m accusing Scott of veering too trustful, but I very probably veer too untrustful, and this is an example of how that fails.
DID, Appeals to Friends-of-Opponent, Bias as Evidence, Etc.
If you read my original article, you know I brought up a variety of people making a variety of claims I think are wild and should be distrusted, inclusive of spoonies (see the last section), TikTok DID people, Jhana-havers, and people who said they caught Mew in 1st gen Pokemon in 1998.
While all these examples were fun to write, they were included because (at least at the time) I thought they were doing a specific thing that furthered an argument. I was looking for examples of things you might have seen and intuitively distrusted in the past, especially where there wasn’t much at stake besides internet points.
Eventually, this all resolved into the conclusion I want you to draw if I’ve done my job well:
Anyway, the point is this: I’m arguing for a concept of a reasonable middle between “running up to everyone who says they have long COVID and calling them dirty, filthy liars” and “accepting every unproven claim of any sort as face-value true”.
If I get you from A to B here, it’s because I’ve reminded you that sometimes people relay false information, and pointed out that we all often decide we don’t believe some claim or another - and that this is a good reason to want a norm where you can acknowledge you don’t buy something without it carrying the same weight as calling someone an out-and-out liar.
The easiest way to destroy this argument is to show that people don’t relay false information like I’m saying they do. Scott goes a different route on DID:
They emphasize that it really feels like Vader is in their head giving them advice, or that they sometimes “become” Vader - and in particular they emphasize that this is different from just asking themselves “what would Darth Vader do in this situation?”. They understand that most people learning about their situation would expect that they’re exaggerating a much more boring “just ask yourself what Vader would do” situation, and they’re fine with people believing that if they want, but insist that it’s actually something different and more interesting than that...
I find this all pretty believable for a few reasons. Lots of people (Buddhists, philosophers, psychologists) talk about how the ego is an illusion. And if you’re going to have an illusion, it doesn’t seem significantly weirder to have two illusions. Reading Origin Of Consciousness In The Breakdown Of The Bicameral Mind (see my review here) convinced me that all theories of mind are made up, that different cultures make up their theories of mind differently, and that theories of mind which separate the ego and superego and whatever into different entities aren’t inherently dumber or harder to work with than theories which count them all as the same entity.
I have to consider this in tandem with a quote from the last section:
Also, everyone on TikTok is terrible and shouldn’t be considered a representative of their respective communities.
Taken together with that quote, it feels like he’s saying “reasonable-sounding versions of this claim exist”. But the point of bringing up hyper-crazy versions of this kind of thing wasn’t to show that everyone making these claims was as easy to identify as liars as the TikTok folks; it was to show a place where Scott’s “lots of people make this claim, and it’s plausible, so there’s no reason to question it” logic fails.
There’s no bright-line reason the TikTok people’s 20 wacky alters are less believable than his friends’ contagious-but-mild multiple personalities. If “brains are crazy and we don’t fully understand them, and they say it’s so, and it’s not like you are a mindreader” works for one, it works for the other; if it’s not enough for one, it’s not enough for the other. I agree that his friends’ version seems a lot more believable - but that opens the door to “seems”, and a whole can of worms.
To differentiate between the two, you have to dip into the messy well of personal judgment: judgments based on tone, biases about what kinds of claims you’ve found to be true in the past, and so on - all things his original justification left out. That’s reinforced here:
The evidence for jhanas is thousands of people over thousands of years saying they’ve experienced it, a bunch of my friends who I trust a lot saying it worked for them…
I have at least three acquaintances who are in the category RC talks about - people who say they have some sort of multiple personality type thing going on, that it’s fine, they live with it, it’s no big problem.
He’s bringing up personal reasons - things based on friendship, feels, and social experience - to say something like “Listen, I vibe hard with these folks. For reasons not directly related to this claim, I find them mostly trustworthy; this is evidence”. But if social stuff like this works in one direction, it should work in both; if you can find someone trustworthy based on judgment, you can also find them untrustworthy based on the same.
What I don’t think you can do, fairly, is say “These people vibe as trustworthy for me, and that should go in the formula” and then balk when people do the opposite. If vibes don’t belong here at all, that’s an argument we can make. If they belong, there should be a pretty stiff argumentative burden on saying you can only put them in the “pro” column.
On RC’s Magic Lie Detection Abilities
Long quote:
RC goes on to use these two cases as proof that sometimes large groups of people lie, even if they don’t seem to have much motivation:
The why-would-they-lie argument doesn’t hold water; you can point to countless groups who conveyed information that was false as a group. You can see the obvious falseness mixed into Spoonieism and DID TikTok fads.
…and then makes an argument I find pretty bizarre. He quotes a Douglas Adams piece on how predicting the trajectory of a baseball seems to require advanced physics, but many children and ignorant people can do it anyway by instinct, then concludes:
It is not a secret that people who trend towards rationalism (or tech, with which rationalism has significant overlap) are not, on average, considered to be exceptionally socially skilled. A placement on the autism spectrum or some other form of neurodivergence is considered to be the norm rather than the exception to the rule amongst them. I don’t think this is bad; if anything, it’s where the group’s value comes from in the first place.
But with that comes a group-wide expectation that things that can’t be quantified with math are thus default-unknowable. A statement like “I could tell he was lying” isn’t quite taboo nonsense there, but it carries much less weight than in other places. People are less able or less willing to point out that someone looks less credible for socially understood reasons than in other less-enlightened-more-practical contexts.
Sometimes this is nice, but at some extremes, it ends up being a lot like if someone looked at [Douglas Adams’] description of a person catching a baseball above, realized that they can’t really explain how that happens, and then concluded that baseball catching was not a skill that does or could exist.
At the extremes of those extremes, you see things like the jhana thing: where it’s something that seems unlikely to most but is unfalsifiable, and because of that unfalsifiability is then assumed to be true because it was claimed at all. By Scott’s standard above, we would basically assume that any claim we couldn’t disprove was true, provided we could find at least a few thousand people who claimed it.
This sounds like: “I, RC, have the mysterious mental ability to detect liars. I admit I can’t prove this, but come on, you should probably just trust me because it’s perfectly reasonable to think other people have mysterious mental abilities you don’t.”
But that’s the exact point he’s been arguing against this whole time! Either we trust trustworthy-sounding people who we like, when they say stuff that sounds kind of plausible - or we apply extreme skepticism about every not-immediately-verifiable claim!
This is the only part of the entire complaint (most of which was reasonable) that seems like a malicious misreading to me.
This Douglas Adams quote points out that humans can do things that go far beyond what they could work out on a spreadsheet - that some stuff is automatic in a way that, as Adams puts it, would let you catch a baseball without several minutes’ worth of calculations. It’s weird, but it’s also true.
The reason I bring it up is pretty clear: I’m arguing against what I perceive to be an over-simplified justification for belief, one that excludes stuff like “Your decades of experience being around and observing people and the various truthfulness algos you developed over that time”.
I think that rationalist-type people at some point addressed the problem of people who refused to look at data, and quite rightly corrected towards an expectation of “the data has to be part of this; you can’t go entirely off your gut”. But I also argue that at some point they overcorrected past what was reasonable, eventually saying that your gut shouldn’t factor in at all - or, as guessed above, that you can’t use it to decide someone is lying the same way you’d use it to reinforce a decision that they were truthful.
Essentially, I’m saying that when someone says a claim didn’t have the ring of truth to them, rationalists tend to respond with something equivalent to “no, that’s an unreasonable thought to have; there’s no way you could have done the math on it” - in much the same way nobody would say “you couldn’t possibly catch a baseball without a Ph.D. in physics and an awful lot of time”.
Note that you can certainly argue that I’m wrong in my perception of rationalism there, or misreading Scott’s argument, or that I’m over-weighting things like “feelings you get based off experiences” - all reasonable arguments to make. But what I wish you wouldn’t do is say “He’s claiming he has a special superpower here” when I’m claiming some group has over-corrected away from normal human social abilities.
On Semantics
There are some levels of this disagreement where I use “lie” differently than Scott does, and where he in turn uses “believe” differently than I do. Scott says this:
RC is doing an old trick: summing up his opponent’s position as an extreme absolute, then summing up his own position as “it’s diverse and complicated and a middle ground”.
I reject this characterization. Everything is a middle ground. The whole point of all this Bayes stuff is that “the middle ground” is wide and worth fighting over. We can have a non-absolute middle ground with 1% probability, a non-absolute middle ground with 99% probability, or anything in between. I’m not doing the morality/etiquette thing of demanding a norm that you believe people, I’m doing an epistemic thing of providing justifications for a prior that you believe people.
Which seems reasonable, until you get to this, which you want to read really carefully:
You should believe the spoonies! You should believe the DID people! You should believe that people experience astral projection - it’s just a cheap off-brand lucid dream, and I’ve personally tried lucid dreaming and can confirm it’s real!
At which point, full-stop, we have to reassess the entirety of this conversation. We have to set up stupid argumentative frameworks and do a bunch of other shit, because we are no longer talking about the same things.
So let’s imagine that for any claim, a person might mean one of two things: whatever they claimed, or that they know what the word “sandwich” means. And say a person comes to you and says “I can literally fly through the clouds; I have literally and not at all figuratively touched the sun”. Though they used those words, you now have to remember: they might just mean they know what a sandwich is.
In that framework, it becomes immensely important to sort of pin down whether or not anyone is talking about a sandwich at any given time, or you can’t know what they actually mean. It’s easy to imagine it getting inconvenient enough that people started to have a standard where you just said “I know what a sandwich is” for the sandwich meanings and “I can eat a cannonball” or whatever for anything that wasn’t knowing what a sandwich was.
Above, Scott deals with claims of astral projection. I’m sure there are some Wiccans/whatever who, if asked whether astral projection was actually leaving your body in spirit form and observing things in other dimensions/other locations, would say “Naw, man, it’s just that I’ve seen sandwiches.” But at least some are actually claiming that they do magic shit, and would get mad at you if you said “No, I believe you, it’s just that I believe you in the sense that I believe you know what sandwiches are, not all that other shit you said”.
The other day I was talking to an old coworker - someone I really like - and he told me he had experienced Jhanas. I asked him how that worked for him, and he said this:
Moderate corporal and mental bliss that comes in waves, induced spontaneously by breathing meditation, which I’ve been doing on and off for 25 years. Can’t really go looking for it.
Instead of bliss, a good word would just be pleasure. More relatable.
It’s all kind of banal, actually. But it happens. And then you get up and have a cup of coffee and go to work…
And then compare it to a claim like this, amalgamated from a bunch of sources:
My Jhana experience is one of intense, unworldly pleasure; it surpasses all things, indeed anything I can imagine. I can initiate it at will. It has no negative aftereffects but several positive ones, literally rewiring my behavior in purely beneficial ways that work in my interest. Also, it makes me orgasm at big-box stores.
Let’s say I am inclined to believe more modest claims like my friend’s where I wouldn’t believe others. I could then downgrade the more-intense latter claim to the more sandwichy claim of my friend, but it’s actively confusing to do that and still call it belief. It feels like saying “Yes, I know that’s not what was claimed - but I believe them anyway, so long as I don’t have to believe what they actually said”.
This might feel pretty nitpicky, but consider the video that was originally featured in the Spoonie section. When the woman in it relates the experience of a doctor saying something that translates very closely to “I believe you might be experiencing something, but I’m moving forward in another direction without expressing disbelief, because I think something related-but-different is happening”, her reaction is pretty intense. She uses a stupid-doctor impression and heavy eye-rolls to indicate she’s very much not OK with the doctor believing some alternate, more reasonable-in-their-view version of what she (or the people she’s talking to) has claimed.
In some cases like this, people are as clear as they can be that they are making a specific claim, and that they will be pissed if that specific claim is not treated as true in a way consistent with its specifics. There’s no sense that they’d be OK with “listen, you are probably in the 45% for whom this pain is indicative of a mental health sort of thing, and we should probably do the specific treatments for that instead”.
It might be that this is good - that she’s actually sick, and advocating for herself in a way that will end up getting her more appropriate care - but what’s absolutely clear is that she doesn’t consider modified-belief-in-something-like-that to be enough. She wants the actual claim she actually made believed as-is.
I’m confident there are people who would be satisfied with “I believe some modified version of your claim that lines up with what I think is actually happening while still allowing for the possibility that you experienced something”, and I’m confident that there are some situations where believing some modified claim leads to the same practical implications. But it’s not hard to imagine differences in both scenarios that matter, and at some point your “what’s appropriate to do based on what I see” is going to butt up against your desire to believe everyone in all situations.
Very much too long; I forgive your “didn’t read”
Here’s the basic summary of what I’m trying (and might not be succeeding) to say, both in this article and the last: sometimes it matters whether a claim is true. It’s often not possible to prove or disprove a claim, and people assess claims against a wide variety of stuff when deciding how to react to them.
Some of that stuff - the gold standard - is actual data: actually applicable math that maps cleanly onto the situation and tells you “I once jumped to the moon” is an implausible claim and shouldn’t be believed. But there’s a whole realm of plausible claims you can’t prove or disprove with data, which is why this argument exists in the first place.
I think Scott thinks people lie only rarely, using a pretty constrained definition of “lie” that requires a pretty high level of intentionality - a sort of conscious “I’m heading out to trick some people” level of deception. Relatedly, I think he uses a pretty broad definition of “believe” that allows for using the word when you think at least something sorta like what’s claimed is happening - that you believe they astral project in the sense that you believe they probably had a really vivid dream.
I think people lie pretty often, using a pretty broad definition of “lie” that assumes an active duty to try to make sure your statements are as close to true-as-stated as you can get them, and that doing this well takes a fair amount of practice and work. I use a correspondingly narrow definition of “believe” - that I think someone told me something load-bearing, in a way where I can model actions on it.
I think there are good reasons to want to listen to Scott here. He’s arguing from what I think is a more generous place than I am, out of assumptions about the world that I don’t agree with but also can’t disprove - that people are mostly good, lie rarely, and can and should often be accommodated with belief in the face of uncertainty.
I’m arguing from a colder place - one where I acknowledge the difficulties of saying “I don’t believe this”, but think it’s often justified to do it anyway. Sometimes, probably most of the time, this is internal - i.e. someone tells you that you could have access to unlimited bliss with a few months’ worth of effort, and you don’t end up pursuing it because you don’t quite buy it. Sometimes it’s external, in a way they’d know about, but it’s called for - i.e. you shouldn’t pick fights unnecessarily, but sometimes there’s a practical implication at stake that doesn’t let you take the better part of valor and stay quiet.
There are some other things we haven’t talked about - incentives, and how they affect how much dishonesty we see at a societal level, being one of the big ones - but I think it’s OK to think of it like that for now. And it’s OK, I think, to go either way on it - I’m not pretending to have settled this here.
The Pokemon thing has become a strong contender for the weirdest part of this experience for me. For context, there is a way to catch Mew in-game by manipulating a glitch. This glitch was discovered in 2003 and immediately spread all over. The story I told would have occurred in 1998-1999, or thereabouts.
The way the glitch works is that in-game NPCs often challenge you to fights when you walk into their line of sight. If that line of sight is long enough that they’re off-screen, you can pause the game and warp away before they reach you and initiate the fight, then go battle one of just a few specific trainers who, through code peculiarities, line up just right with the broken state you’ve brought into play - and when you come back, the game starts an encounter with Mew.
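For the technically curious, here’s a minimal sketch of the mechanism as later glitch write-ups commonly describe it - my paraphrase with made-up function names, not anything the guy in my story could have articulated in 1998. The escape trick leaves stale battle data lying around, and the species of the encounter you eventually trigger is read from that leftover data - specifically, the Special stat of the last enemy Pokemon you fought, reinterpreted as an internal species index:

```python
# A sketch of the commonly documented "Trainer-Fly" Mew glitch mechanism,
# as I understand it from write-ups published after 2003. The names here
# are mine, not anything from the actual game code.

# Partial Gen 1 internal species index table; Mew really is index 21.
SPECIES_BY_INDEX = {21: "Mew"}

def glitched_encounter(last_enemy_special: int) -> str:
    """After the escape trick corrupts the game state, the species of the
    forced 'wild' encounter is read from leftover battle data: the Special
    stat of the last enemy Pokemon fought, treated as a species index."""
    return SPECIES_BY_INDEX.get(last_enemy_special, "<some other species or glitch>")

# The standard route has you fight a trainer whose Pokemon has a Special
# stat of exactly 21 (famously a Youngster's Slowpoke) - 21 being Mew's
# internal index number.
print(glitched_encounter(21))  # -> Mew
```

That’s the sense in which the trainers have to “line up just right”: the whole thing hinges on one stat of one enemy Pokemon happening to double as Mew’s index number.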
I’ve since had something like a half-dozen people bring up this not-yet-discovered glitch and posit that maybe this guy wasn’t lying - maybe he was the real discoverer of a glitch that wouldn’t be found for another five years in one of the most-played games in history, didn’t realize it, got confused, was nonetheless confident that he had it figured out in an entirely different way that doesn’t match up with any of the shit he ended up saying, and thus was not lying and had caught Mew.
Even assuming this is possible, it strikes me as a million-to-one shot at best. I’m probably going to use this as a future example of reluctance-to-believe-in-lies, so I’m typing it out here so I can steal from it at that time.
Note that the strength with which this posit was put forth varied a lot from person to person - if you are reading this and going “RC, that’s not what I said exactly”, it probably actually isn’t and I probably know that.
Part of the reason why I think this is true is what we can call the “Jane is a Slut” principle. Say I go to Scott and say “Jane is a slut, so she’s more likely to have an STD”. Scott might consider me to be especially prudish - i.e. that I tend to think of anyone promiscuous at all as a slut. He lives in a different world where all levels of promiscuity are much more common and accepted, so he doesn’t necessarily want to accept my judgment re: Jane’s Herpes Chances. If anything, he just doesn’t know what my statement means - has Jane had sex with a thousand suitors, or just one extra-spousal person when she was 16? I’m foreign enough to him that he might not be able to model what I’m thinking.
Scott lives in the Bay Area, which to me is a writhing mass of weirdos. “Jane is a Slut” applies here, because Scott’s conception of normalcy (as I perceive it from an uninformed distance) is much, much different from mine. So when he says “this is a normal, trustworthy person”, I have no idea what to take from that.
One pretty serious concern I have is that Scott is going to come back and say “this is purely me defending my belief in this - I’m not telling you to believe me more because I find my friends trustworthy, I’m saying I am more likely to believe someone my personal experience says is generally trustworthy myself, in a way that only applies to me”.
I think I’m justified (in the sense that I didn’t just make it up) in treating this like it’s not-that, because he’s brought his friends up this way every time he’s talked about the subject, including in the “the evidence for jhanas is” phrasing quoted above. But it changes things a bit if he’s not trying to use that as general argumentative support for a point.
I originally had a section in here noting that “I believe this more because my friends, who I trust and like, tell me so” is arguably something that should trigger a bias warning rather than count as evidence - i.e. “my wife told me this, but I love my wife a great deal, and thus I’m more likely to believe her in a way that’s divorced from the kind of evidence I can fairly ask you to follow along with”.
The flip side of that is that Scott isn’t dumb, and his judgment of his friends’ reliability doesn’t seem like it should count for nothing, either. So I don’t know what to do with that part of it anymore. Credit to Randy from LXR for bringing this a bit more into focus.
Not everything that seems like a malicious reading is so, mind you - especially when the perception is that of the “attacked”. Fun fact: did you know that sometimes writers write too fast, or don’t end up saying exactly what they wanted to say? It happens.
This particular section makes me suspect that Scott and I might be talking past each other a bit. If Scott thinks I’m going “I know Jhanas are fake, I have figured it out with my superhuman skills”, this makes a lot of sense. If I’m saying “I think most people have this skill, and the people who fell in love with data are neglecting it” this makes a lot less sense. There might be a disconnect here!
For what it’s worth, I think I’m probably about average or slightly above average on what salespeople often call “people reading”. This isn’t magic, and if I claim that I high-confidence know someone is lying based on vibe alone, you should feel comfortable saying “hey, man, I can’t go entirely off your gut here”.
I do use this for situations where there’s no great evidence one way or another. If someone comes up to me and says they need $40 for gas to drive to the baby store and buy baby supplies for a baby they say they definitely have, and I wouldn’t give them the $40 if they are lying, I might look at their face as part of my decision-making process.
(In real life, “it’s for a baby somehow” remains a good way to squeeze me for resources.)
This article is getting incredibly long, so I feel uncomfortable adding a bunch of stuff on how I use “lie” in the main text. I think the crux of it is that I think of truthfulness as a fairly active process, one where there’s a certain amount of good-faith duty to police what you say to make sure it gives people a reasonable impression of the actual reality of the world as you understand it, within the scope of your statements.
What’s sort of fucked up about this is that it causes me to use “lie” very broadly compared to other people, in a similar way to how I’m criticizing Scott for using “believe” - I tend to use “lie” for any situation where someone walks away thinking the wrong thing about the world because of a statement, where that was (a) preventable through a minimal amount of work and (b) arguably the duty of the person making the statement to prevent.
What’s MORE fucked up about this is that, generally speaking, Scott is better than I am at not “lying” in the sense I tend to thoughtlessly use the word, at least in my opinion. Part of the reason I read him in the first place is that overall he’s very careful with how he says things and how far out on limbs he goes, and tends to mislead people this way a lot less than others do, including me (I think). Nobody is perfect at this, but I think you get me.
I always have to note that I myself claim I do/know about/believe in certain magic shit. I even use that phrasing when I’m talking to atheists about it.
This was overall an interesting conversation. I’m working off memory here (friend, correct me if I’m wrong on any of this), but to some extent and in some particular ways, friend-who-jhanas is more concerned about jhana claims than I am. A few of his points:
If you want people to believe your jhana claims, we need a lot more fMRI work and similar showing that something like what you’re describing is happening - because, whether or not he believes it or has experienced it, he understands it’s a pretty out-there-sounding claim for a lot of people.
There are a lot of grifters out there, and whether or not this is currently grift it’s pretty vulnerable to it in the same way all “try my way to ultimate happiness” claims are.
He believes in jhanas, but doesn’t believe every individual’s claims that they have had them. Some people are less trustworthy than others, and he questions some in the same way he’d question some people saying they own a Ferrari even if he owned a Ferrari himself.
Note: I have a hard time telling to what extent the treatments for psychosomatic pain overlap with the treatments for “real” pain - do you give someone opiates for sufficiently intense psychosomatic pain?
Dangerous words; I’m doing my best here but this might be a strawman.
As always, I am not here claiming that I am actually especially good at this.
Kinda Contra ACX Contra Resident Contrarian
Yet another (claimed) jhana-haver, checking in - with the unpopular opinion of "it's trivially obviously real, I verified it myself, but there are two major dynamics that lead to it being overstated, and most people who talk about jhana inadvertently shove those under the rug".
Issue 1: People selectively remember their successful meditation attempts.
It's exactly like fishing. People remember the times they caught a big fish, not the times they went fishing and didn't catch anything. Look at ten previous first-jhana attempts: if two failed completely, three were lackluster, three were decent, one was pretty awesome, and one was sex-tier, then when you riffle back through your memories associated with first jhana, the two most successful ones will be the most salient. The memory of the time you tried to attain first jhana on a long airplane trip and mostly failed won't come up in most cases. If you're at a party trying to chat with somebody about meditation stuff, you're going to bring up the "hits" that make good party anecdotes, and not that time you tried to meditate in bed but fell asleep. Or that time you tried to meditate on your couch but your back was too itchy. Or the time during the road trip where you managed to get it going pretty well, and it was a worthwhile experience, but could not reasonably be described as "sex-tier".
Descriptions along the lines of "comparable to sex" strike me as accurate. Even the one about blissing out in the Macy's afterwards strikes me as accurate; there is a really weird aspect of the afterglow where your sensitivity to other pleasures is enhanced and you find yourself spontaneously going "wow, that rock is really pretty and great" (or whatever else you're interacting with). Leigh Brasington went through an MRI; we know that the brain is doing some rather unusual stuff during these states. So any claim like the strong form of "they're making it up" is just outright false.
However, what these experiences do not strike me as is typical. When asking somebody about jhana experiences, you're getting their highlight reel unless you very specifically ask otherwise, just like asking somebody about their fishing experiences. Ordinary-ass jhana-in-the-airport is quite nice, but nowhere near sex-tier.
Even beyond that, there's an issue where, in everyday life, feelings of excitement are usually paired with an exciting thing. A great thing happened to you, you say "it was very exciting", and the other person correctly infers that it must have been pretty great. During jhana, feelings of excitement aren't paired with an exciting thing happening. It's possible to experience strong feelings of excitement, but your mind knows they aren't actually about anything, so they come across as more of a physiological sensation, and the overall experience is generally a good deal more lackluster than it sounds. So you can truly say "it felt EXTREMELY exciting", and the other person (falsely) infers that it must have been REALLY great - but feelings of extreme excitement that aren't about anything aren't actually that great.
Issue 2: The term "first jhana" is used to describe a pretty wide range of experiences.
Not too long after figuring out how to do it (i.e., attain a mental state that is clearly abnormal while meditating), I recorded what happened and managed to consistently replicate it. The state was something like a cross between being well-caffeinated, very tightly focused, and highly excited, with occasional waves of body chills, the same sort that music generates. Also it keeps turning off and on again like a car that refuses to properly start. Overall niceness level: like eating a pan of fresh-cooked brownies.
After getting it down well enough and practicing it for a while, it turns out there were a few missing mental steps, and if you don't try quite so hard to force it and just have fun with it, it's much nicer. Energy levels and focus go down a bit, the state gets more stable than it used to be, the excitement component gets a considerably larger dose of happiness added into it instead of being pure excitement, and there's a nice afterglow for ten or so minutes.
And now we get to stuff I haven't personally done.
Leigh Brasington, one of the main meditation teachers teaching this stuff, claims that the body chills/tingles become continuous, rather than coming in waves, when you REALLY hit first jhana, and that the stuff I'm doing is just messing around with some pre-jhana states. Nick C's described experiences also seem around this level, if he's describing them as "10x better than sex".
And then there was a time where Leigh Brasington visited some of the really hardcore Thai forest monks, and after a week or so, (claimed to have) managed to get some extremely damn strong jhanas that near-perfectly matched up to the original descriptions in the Pali Canon.
There are many reasonable questions that could be asked at this point, like
"wait, if the word "jhana" is used to describe both an energetic/focused/nice state that can be worked out in a week and easily attained in a spare 20 minutes while waiting in an airport, and some sort of ultra-rare infinite cosmic bliss thing that you can only attain by meditating in a Thailand cave for five years with absolute silence, then isn't the word "jhana" uselessly vague and referring to way too many things?? It's one of the worst cases of motte and bailey I've ever seen!"
and
"Wait, if I have to be a meditation teacher and spend a week in a Thailand cave to get the highest levels of this, then aren't those highest levels totally useless for everyday life? Even if we buy that it's actually great enough to justify the time sink, what's the point of having mental motions for unbounded bliss if you can't practically use it without sitting in a cave for a week? Looks like wireheading."
and
"I extremely want to call bullshit on Nick C's experiences and people running around going "you dummy, jhana is totally real" when what they're describing is <1/10th of what Nick C is claiming IS NOT HELPING"
I agree with all of these points! When someone is claiming to have attained jhana, it's really important to try to figure out what they're claiming on this spectrum and not do the implicit motte/bailey thing.
I'm coming to the party kinda late so you probably won't end up reading this, but if you do, that'd be cool.
I’m unsure whether I’ve misinterpreted because some of the prose is muddled, so please forgive me if you’ve covered this. Here’s how I understand the dialectic, if we cleaned it up a bit.
You’re making an argument from analogy. You want to say that spoonies et al. are relevantly similar to jhana people. You then argue, using the more familiar spoonies example, that we should default to low credence. You then apply that to jhana people, since their claims are relevantly similar from an epistemic POV.
Scott implicitly agrees with the framing of your argument, buying that the cases are analogous. However, he objects to your argument by claiming that we should actually believe the spoonies et al. He defends this claim by pointing out that psychosomatic pain is pain just as much as other pain, since pain is just the experience of the pain. This is meant to undermine a principle on which your argument is based. Specifically, he takes you to be claiming that we can infer from “lots of doctors have been unable to identify a physical cause of the pain” to “the patient is full of shit.” By explaining that psychosomatic pain isn’t the sort of thing that has that kind of physical cause, he takes himself to have undermined your principle and therefore crippled your overall argument. Thus, your argument to disbelieve the spoonies fails, and so does your argument to disbelieve the jhana people.
You respond by denying Scott’s claim that we should believe the spoonies. You do this by distinguishing between belief_1 and belief_2. Belief_1 is your kind, the better kind, and involves endorsing the content of what people say. Belief_2 is less clear, but seems to involve endorsing a related, different, and more plausible claim. It’s more charitable. You then argue that the word “belief” actually picks out the concept belief_1, because the patients themselves get mad when doctors say “I believe you and think it’s psychosomatic.” You argue that if “belief” really picked out concept belief_2, then they wouldn’t get so mad. After all, they’d presumably be happy the doctor believed them!
Here’s what Scott should say. The patients are making two separate claims. Claim 1 is about being in pain, which you should believe. Claim 2 is about the pain’s origin, which you shouldn’t necessarily believe. The patient, if he were rational, would realize he’s mad that the doctor doesn’t believe claim 2 and that his anger has nothing to do with whether the doctor believes claim 1. After all, the doctor DOES believe claim 1! So Scott is not using belief_2 at all. Rather, you’ve simply made a reasoning mistake by inferring from “patient mad at doctor” to “doctor doesn’t believe claim 1.”
That’s a very cleaned up version of the dialectic. The real thing is full of nonsense, like Scott’s bizarre complaints about how you use norms. Here’s why Bayesians shouldn’t get their panties in a knot over epistemic norms.
Even Bayesians want to treat like cases alike, and you’re arguing that spoonies are relevantly similar to jhana people, or at least close enough that our credences should be similar. When you argue for a norm, you’re not claiming it applies everywhere. You’ve ALREADY argued that spoonies and jhana people are relevantly similar. The norm is meant to apply only to whatever group of people contains both and is epistemically at issue. And Scott implicitly admits that the cases seem to be analogous. So he really shouldn’t have beef with the norms stuff.
I do think, however, that your move to attribute an off-base definition of belief to Scott doesn’t work, for the reasons I outlined above. What you should’ve argued is that Scott’s assessment of what percentage of people are simply full of shit is just wrong. Or you could’ve argued that jhana claims are even more likely to be full of shit, since jhana presumably involves more complicated neural mechanisms than psychosomatic pain.
I think you can probably tell that I think highly of your thinking and work and want you to do well. From your POV, I am just another internet guy, so why should you give a shit about my views of your writing? Fair enough. It’s easy to ignore if you want. With that said, I have a couple of writing comments that I think would benefit your prose, coming from a philosophy PhD student.
I understand that you were pressed to get this article out quickly - the conversation moves on otherwise. But this piece needed a LOT more editing, or to be organized very differently. It needed some understandable jargon instead of all the hyphenated words. It needed a concise reconstruction of Scott’s argument all in one place. It needed you to point out which of Scott’s premises you objected to. This piece was structured as “omg, a big famous dude attacked me and was kinda unfair, let me defend myself.” I do think he misread you at various points, especially with the baseball example. But writing a piece explicitly designed to defend the merit of your previous one came at a huge cost to readability and credibility. Even if Scott wanted to engage again with your ideas, he couldn’t, because this isn’t written clearly enough. This isn’t entirely your fault, because Scott’s piece wasn’t much clearer or better organized. At no point did either of you lay out exactly what the key points at issue were. It led to a muddy back-and-forth. I believe that focusing on the arguments themselves and trying to make them very explicit would work wonders.