I didn’t think I wanted to read this through because it would make me feel wrong about something, but I was wrong, and that’s alright.
Being wrong is necessary. So is being honest about being wrong. No one can make real science without that understanding.
My marriage advice: you can choose to be happy or be right
Weirdly, this is also my divorce advice.
Yay, now I have an essay to direct folks to when this topic inevitably resurfaces among my social circle, haha. Thanks!
thank you :D
I like to phrase this as, I care a lot about being right, but I don't care at all about *having been* right.
Because I do care about being right! I really want to be right! Enough to know I need to change my mind almost constantly in that pursuit.
love this, love the judo of agreement and the framing that admitting wrongness is the sign of deep confidence. love that becoming more right involves a greater proclivity towards being wrong.
on the pain of being wrong: IME, this comes from me mixing up my map and my territory. I believe something, then confirmation bias shows me again and again that the belief is *correct*. I throw out contradictory evidence, unconsciously. Time passes and the belief calcifies. I live from the fiction.
Then something happens that rocks my world. The belief is shown to be false. The territory rushes in and I'm confronted with the fact that I've been wrong. But maybe even worse than being wrong: from a desire to protect the softness of my being, I've deceived myself, living from a lie. It's incredibly disconcerting to be confronted by my capacity for self-deception. "How could I have done such a thing? Am I doing it now??"
Maybe this is a deeper sort, or different flavor, of wrongness though.
I really enjoy your thinking. Thanks for putting it out there.
Hope all is well! Wish I had more to read here; there is a lot of wisdom here and it has been a blessing.
“I remember feeling existentially threatened by the possibility that I might be wrong or just not know something.”
One of the few positives of a PhD is you quickly realize how impossible this is.
Apologizing works in a similar manner
I cannot recommend this book, literally called Being Wrong, enough!
https://www.harperacademic.com/book/9780061176050/being-wrong/
Oops! I hadn’t finished. I can’t get it back🤭
I wanted to add: I try to think in terms of there being a bit of truth in everything, under the layers of ego inflation and insecurities. When we think we have the whole truth, that's the only time we are truly wrong.
Doesn’t truth resonate more than right and wrong?
For some it might. For others, maybe not.
A part of me wants to insist that “everyone should find truth - untruth more compelling than right-wrong”. When I am in that space (not infrequently) that’s when I feel so wrong.
Yesterday’s truth is tomorrow’s prison…
I don't think that reluctance to admit being wrong is due to a fear of ridicule. Rather, I think it's a risky move if you're playing a status game. Admitting you're wrong typically lowers your status while raising the status of the person who was right. Admitting being wrong is a high-status move only if you are already high-status and the "audience" values correctness (rather than in-group conformity). Imagine how the status of a priest would change if they said "sorry, after a lot of thought and discussion I was wrong about our god, we should actually all convert to X religion / become atheists." Imagine how the status of a politician would change if they said "Sorry, after a lot of thought and discussion I was wrong about [abortion/gun control/immigration/hot button issue X], but you should still vote for me."
This is particularly problematic when playing the game in front of a large audience (e.g. on social media). In this case, a sizable number of people will agree with you even if you're wrong. Sticking to your guns will raise your status with these people (who are typically your in-group, where you most value your status) regardless of whether or not you're correct, and will lower your status with your outgroup (which ironically might raise your ingroup status even more).
"Admitting being wrong is a high-status move only if you are already high-status and the "audience" values correctness"
This is why I try to surround myself with those who value correctness. I have limited respect for anyone who does not.
"In this case, a sizable number of people will agree with you even if you're wrong."
In situations like this they should be ignored if possible. Nothing good comes from letting our ability to iteratively improve be limited by those who cannot understand the value of intellectual honesty. It would be akin to restricting our walking speed to never exceed that of the slowest people around us.
Okay, but most of the big stuff in life can't be empirically demonstrated as either right or wrong. For example, philosophical, political, and religious beliefs. There's intrinsic value in discussing and debating them with other people, but there's no universal set of criteria by which we can come to a consensus about their accuracy (or inaccuracy). This is different from how we evaluate phenomena in the material world, e.g., everything we come to understand using "hard" scientific methods, like the efficacy of a new medication or efficiency of a route for travel. It's dangerous to imply that all the things one can think about, talk about, and believe in are candidates for the same or similar forms of evaluation (at least, so similar they're not worth distinguishing). In other words, I'm not just making the obvious point that there are differences between being wrong about a philosophical or political principle; a scientific hypothesis; and a decision about what to wear to work. I'm saying that the first type of wrongness - exemplified by philosophical and political wrongness - is so different from the others that it can't be included in the same conversation. There is no general calculus for right and wrong.
I feel like I get where you're coming from, but what if you're wrong? 😊 Empirically, no. You're conflating 'hard to prove with certainty' with 'equally valid.' Deutsch would say that even philosophical explanations can be evaluated by whether they're hard to vary while maintaining explanatory power. Consider: 'humans need freedom' vs. 'humans need the capacity to recognize authentic desires and take coherent action toward them.' The second explanation is harder to vary, predicts more specific outcomes, and connects to what we know about motivation, meaning-making, and psychological wellbeing. Some interpretations are objectively better tools for understanding reality.
I don't see precisely where I'm conflating 'hard to prove with certainty' with 'equally valid' -- could you give an example? Also -- I just want to point out that David Deutsch isn't anywhere near well-known enough to reference without the first name, at least if you're talking to somebody you don't know! Assuming that that's the Deutsch you're referring to, well, he's just one small strain in the very, very broad tapestry of philosophy. Even if we limit it to Western philosophy. In my circles, he's best known for having a shoddy grasp of the discipline's history. So, I'm not sure what makes his definition of a "philosophical explanation" a particularly great one. Your post subordinates the norms of philosophy to the norms of science as if that's a universally-accepted and self-evident move, which suggests that you're mostly (or only) familiar with a narrow and relatively new subset of philosophical scholarship.
I appreciate the critique. You're right that I'm making philosophical assumptions. I'm not a trained philosopher or academic. Merely a continual novice. But here's my approach: rather than debate these assumptions in the abstract, I'm interested in tracing them down to their biological implementation. Does 'authentic desire' correspond to something measurable in motivation circuits? Does 'coherence' show up in neural network integration patterns? Does 'agency' map onto specific executive function capabilities? I can't pretend to know the answers, but that's the angle I'm coming at this from.
I appreciate the good-faith engagement. If you assume a 1:1 ratio between the realm of the physical/material (including everything that's biologically implementable) and the realm of the abstract (including everything we can call a "concept" or an "idea"), then there's no need for any distinction between the two. Everything reduces to the material. The thing is, the question of whether this is a fair or valid assumption isn't something that scientific methods can resolve, which is why we still have areas of study (including philosophy) that pertain to the abstract-conceptual and generally do not attempt to find the physical basis for what we typically express in abstract-conceptual terms (with the exception of philosophers who defer to science, like Patricia Churchland). For my money, if we suggest that the abstract-conceptual realm is less real than the physical-material realm, we're devaluing a critical part of what it means to be human, which is the ability to take mental steps beyond the immediate data of the senses, beyond the purely empirical, and envision/create realities that didn't previously exist.
I think we might be talking past each other a bit. I'm not trying to reduce philosophical concepts to biology - I'm working backwards from observable patterns to understand mechanisms. Agency, as I define it, is a system's ability to preserve transformation possibilities over time. The philosophical questions about what makes transformation 'authentic' remain important, but I'm focused on the practical question of how we help people maintain optionality and adaptive potential.
Well, you made a remark about addressing philosophical assumptions not in the abstract, but "tracing them down to their biological implementation." I think that what you meant by "philosophical assumptions" was "assumptions about philosophy," but in any case, that remark clearly rests on the belief that there's a physical ground for abstract philosophical concepts.
We are certainly in different lanes. This last comment doesn't have anything to do with what I was trying to express in my original comment; it's about what you're interested in scientifically. The point in my original comment is that there's no general "rightness and wrongness," so, we can't subject philosophical and political beliefs to anything like the kind of judgment we apply to scientific statements. Being "wrong" about politics is so different from being wrong about a scientific hypothesis that we need to use different terms and frameworks for each respectively. People who do science should recognize that science has a finite remit, and that remit excludes phenomena too complex to be confirmed or disconfirmed under bounded and controlled conditions (as in the phenomena addressed by the humanities and social sciences, using concepts that don't have a correlate in the physical world).
One of the hidden gems of being legally blind is that I have gotten extremely good at letting go of perfectionism. Sure, it took a while to loosen the knot, but once I was able to see there was no way I was going to be able to pretend that I could see what everyone else did, I stopped pretending.
And that’s when the magic happened 🪄
I appreciate your point, but I don't think that's necessarily a good way to put it. What we want to maximize is something like accuracy, and psychologically speaking, you can't really care less about being wrong without reducing the effort you put into being right. That's not generally what we want -- even if "move fast and break things" is a good strategy in some contexts, here we don't want to encourage just confidently making assertions and shrugging when it turns out we are wrong.
What is psychologically plausible is taking more pride in being willing to admit you are wrong and update than in being right. Unless you didn't care about being right in the first place, admitting you are wrong is going to be unpleasant -- but you can take more pride in your willingness to admit you got it wrong (and the chance to show that off) than you are upset by being wrong.
I sense that we have sufficiently different psychologies that you may not find this piece useful.
And nowhere is the egoic need to be right more relentless than on social media! At the same time, in the broader culture—where everything seems to be tightening, stakes rising, competition sharpening, resources thinning—resistance to being wrong can feel less like ego and more like a survival strategy. The system punishes ambiguity. Dissent can really cost you, especially in places like academia, politics, medicine; anywhere status is fragile.