A Theory of Wokeness
Epistemic status: this is satire of philosophizing. Useful concepts may be accidentally introduced on the way to building a totally unfounded theory with a stark lack of illustrative examples. Thanks to Z for the early Christmas discussion that inspired this.
Bayes’ theorem is this thing Rationalists annoyingly love to tout as the greatest thing since sliced bread. In a nutshell, it’s an algorithm for how one should update one’s beliefs about the world. In this post, we’re going to show how that theorem can inform a theory of wokeness. But before we get there, let’s get the math out of the way (I promise, the judgy part about woke people for which you probably clicked on this link comes right after).
The formula to update belief H upon new evidence E (provided by some test) is:

O(H|E) = O(H) × P(E|H) / P(E|¬H)
where O(.) means “odds” and P(.) means “probability”. Odds are a slightly different concept from probability. Odds are the ratio of the number of events that produce the outcome to the number that don’t, whereas probability is the ratio of the number of events that produce the outcome to the number of all events, whether or not they produce the outcome. O(H|E) is the posterior odds (the odds that your belief H is true given the new evidence E), O(H) is the prior odds, and P(E|H)/P(E|¬H) is the factor by which you should update your prior (known as the “Bayes factor”). P(E|H) is the probability of a positive test assuming your belief is true (also known as “sensitivity”), and P(E|¬H) is the probability of a positive test assuming your belief is false (“¬” just means “not”), which is closely related to “specificity”: it is equal to one minus the specificity. Specificity itself is the probability of a negative test when belief H is indeed false.
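To make the odds-versus-probability distinction concrete, here is a minimal Python sketch (the helper names prob_to_odds and odds_to_prob are mine, not anything standard):

```python
def prob_to_odds(p):
    """Convert a probability in (0, 1) to odds, e.g. 0.5 -> 1 (even odds)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Convert odds back to a probability, e.g. 3 (i.e. 3:1) -> 0.75."""
    return o / (1 + o)

print(prob_to_odds(0.01))    # odds of 1:99, printed as ~0.0101
print(odds_to_prob(1 / 99))  # back to 0.01
```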
Equipped with Bayes’ theorem, we can now put our finger on a key flaw in reasoning that is central to our Theory of Wokeness. This video masterfully walks you through a clinical example of this mistake and illustrates how it can lead to paradoxes. Say there’s a test to detect breast cancer that has 90% sensitivity and 91% specificity. Sounds pretty good, right? If you take the test and it comes back positive, what are the chances you actually have breast cancer? Many people will look at the sensitivity and say 90%. Wrong. The right answer is roughly 9%. How can that be? How can a highly sensitive and highly specific test have a positive predictive value of barely 9%? Well, it’s because predictive value depends on your prior. In the case of breast cancer, the prior is the prevalence of the condition in the population, which is quite low (say 1%). Using our formula above, P(E|H) is P(positive test|cancer) (the probability of a positive test in people with cancer) and P(E|¬H) is P(positive test|no cancer) (the probability of a positive test in people without cancer). Since P(E|¬H) equals one minus specificity, the Bayes factor we need to update our prior odds with is P(E|H)/P(E|¬H) = 0.9/(1−0.91) = 10. With prior odds of 1:99 (1% prevalence), the posterior odds end up being 10:99, or, in probability terms, 10/109, roughly a 9% chance of actually having cancer. This example perfectly illustrates the danger of conflating a positive test result with the truth, even for highly accurate tests.
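For the skeptics, here is the same computation as a short Python sketch (the numbers come straight from the example above; the function name is mine):

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """Posterior probability of disease given a positive test,
    using the odds form of Bayes' theorem."""
    prior_odds = prevalence / (1 - prevalence)      # 1% prevalence -> 1:99
    bayes_factor = sensitivity / (1 - specificity)  # P(E|H) / P(E|~H)
    posterior_odds = prior_odds * bayes_factor      # 10:99
    return posterior_odds / (1 + posterior_odds)    # back to a probability

# Breast cancer example: 1% prevalence, 90% sensitivity, 91% specificity.
print(posterior_probability(0.01, 0.90, 0.91))  # ~0.092, i.e. roughly 9%
```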
So how does this all relate to wokeness? Well, at the end of the day, Bayes’ theorem truly is about proper thinking. It’s a shame that it is so often confined to clinical reasoning or statistical nerding out, because it’s a general theorem about how to properly incorporate evidence to form a model not only of the world around us but of interpersonal relationships as well. For instance, it should be involved anytime someone judges someone else. Judging happens all the time; there’s no avoiding it. Someone does or says something, and it triggers a bunch of readjustments of all our beliefs about who they are. (You know you do it. It’s fine.) In Bayesian parlance, our beliefs about that person are our priors, and whatever they say or do is evidence. Another way to put it: our witnessing someone do or say something is isomorphic to a clinical test with an associated sensitivity and specificity. If they did the deed, the test is positive. If they didn’t, the test is negative. Either way, it is evidence (E) for whatever we’re judging this person to be (H).
Let’s say you have a “Nazi test”. For instance: “has this person worn blackface in the past?” The sensitivity and specificity of that test have been estimated through tribal experimentation. If the test is positive, should we conclude they are a Nazi? The right answer is: maybe? While the sensitivity and specificity of the blackface test (and thus its Bayes factor) are constant, what Bayes tells us is that the positive predictive value of that test depends on your prior: how much of a Nazi you believed your blackface-wearing friend to be before you learned about their blackface episode. The point is that the test only tells you by how much you should update your prior that someone is a Nazi, not whether someone actually is a Nazi. To get the latter, you need to take the prior into account. For instance, the prior should be low if the person wore blackface in the 1960s, because people then tended to be oblivious to these things, so the act carried little meaning. It should also be low if you know other things about that person that are typically correlated with being a Good Liberal™️. Conversely, the prior can probably be higher if we’re talking about someone in Mississippi with a history of striking black jurors from trials. This example is probably dumb, but I think you get the point.
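To see just how much the verdict hinges on the prior, here is a small sketch. The sensitivity, specificity, and priors are entirely invented for illustration, since nobody has actually measured a blackface test:

```python
def posterior_probability(prior, sensitivity, specificity):
    """Posterior probability of H given a positive test (odds form of Bayes)."""
    posterior_odds = (prior / (1 - prior)) * (sensitivity / (1 - specificity))
    return posterior_odds / (1 + posterior_odds)

# Same hypothetical test (Bayes factor = 0.7 / 0.1 = 7), three different priors.
for label, prior in [("1960s yearbook photo", 0.001),
                     ("average person today", 0.01),
                     ("juror-striking prosecutor", 0.20)]:
    p = posterior_probability(prior, 0.7, 0.9)
    print(f"{label}: P(Nazi | blackface) = {p:.1%}")
```

Same test, same evidence, and the verdicts range from under 1% to about 64%. That is the whole point.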
And this is the first element of our theory of wokeness: the tendency to infer the presence of a “disease” from a positive test result alone. It’s like thinking you have a 90% chance of having breast cancer because the test came back positive. It’s a form of tunnel vision that ignores priors. It usually takes the form of someone woke saying, about someone who just tested positive on some woke test, that they have been “unmasked”, that their “true nature” has been uncovered behind a façade of humanity. Such essentialization, a bad habit of the mind, has only been made more acceptable by the widespread dissemination of tools like the Implicit Association Test (fewer people know that replications of that experiment have shown mixed results). It’s like tossing all notion of a prior out the window and focusing exclusively on that one test result. Let’s call this form of flawed thinking, or tunnel vision, prior blindness. The blindness means that the prior is indiscriminate. Side note: the prior also tends to be ungenerous (“I expect most people to be Nazis”). It’s unclear whether that’s a necessary condition for the theory, though.
Prior blindness, although widespread in woke culture, is not enough to explain wokeness. For a more complete theory, you need to look at the amount of testing someone does. Everyone quite naturally uses some of their interactions with other people to refine their opinion of them. But the extent to which any given interaction is given meaning, i.e. the extent to which an interaction is conceived as a test for adjusting one’s concept of the other, varies from one person to another. I posit that the degree to which someone sees interactions as tests is a function of their aversion to uncertainty. Someone averse to uncertainty will be more likely to treat any interaction as a test. The second prong of our theory of wokeness is thus that wokes have higher uncertainty avoidance.
We’re all on a spectrum of uncertainty aversion. I believe it is rooted in our very human tendency to dislike admitting we’re wrong about things. Admitting we’re wrong begets uncertainty. Out of self-preservation, we will avoid those who challenge our beliefs. One way to achieve that is to conceive our interactions with others as tests, early warning signs to flee from them or shut them down. The more uncertainty avoidant, the more likely to deploy that strategy. The problem is that forming new knowledge necessarily requires questioning current knowledge. Well-adjusted individuals balance their uncertainty avoidance with knowledge building. I argue that wokes have that balance heavily tilted toward the former: they’re on higher alert, meaning they “test” the people around them more.
This leads us to the Grand Final Theory of Wokeness: wokeness is strongest in people who (1) tend to conflate test results with truth, and (2) tend to see interpersonal interactions as tests. The end result is someone who tests frequently while being blind to priors. The combination of both forces is required for the emergence of Woke: intense testing serves uncertainty avoidance (or the preservation of dogma purity), and prior blindness explains the harshness of the judgement (of which one side effect is cancel culture). Together they are a recipe for false positives: intensive testing drives up the chance of false positives through the well-known multiple comparisons problem, and prior blindness drives it up by overindexing on the latest piece of evidence. The theory therefore predicts that the world, in the eyes of the woke, is full of enemies. *Reality check*: seems about right.
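The multiple comparisons point can be made precise: even a fairly specific test, applied over and over to an innocent person, will eventually fire. A minimal sketch, assuming independent tests and a per-test specificity of 95% (a number I picked for illustration):

```python
def p_at_least_one_false_positive(n_tests, specificity):
    """Chance an innocent person (H is false) fails at least one of
    n independent tests, each with the given specificity."""
    return 1 - specificity ** n_tests

for n in [1, 5, 20, 50]:
    p = p_at_least_one_false_positive(n, 0.95)
    print(f"{n:>2} tests: {p:.0%} chance of at least one false positive")
```

At 95% specificity per test, fifty tests are enough to snare over 90% of innocents at least once, which squares with the prediction above.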
As a side note, a perverse effect of the woke’s taste for intensive testing is that those tests will necessarily fire more often for heterodox people (heterodox, that is, relative to the Woke Dogma), because they are naturally more inclined to question things. Thus the curious, all other things being equal, and independently of the Nazi prior placed on them, will be more likely to step onto the woke mousetraps. However, as already mentioned, one cannot really know something, i.e. incorporate it, without first questioning it, pushing back on it. Pushback is key whether or not the thing one pushes back on is true: if it’s true, the pushback leads to true incorporation of the true thing; if it’s false, it leads to a better theory. But when any pushback gets cancelled, we’re left not with true wising up but with blind acquiescence; in other words, a recipe for eventual personal blow-up. Heterodox pushback does not always equal Nazi. Sometimes it does, but more often than not, it does not.
Potential flaws of this theory: doesn’t “prior blindness” imply that an Actual Nazi™️ who accidentally passes a highly specific woke test would be pretty much in the clear? Well, that may be so. In that case, we may need to add the extra condition of ungenerous prior blindness for a fully realized theory. Or we may need to bring in a new concept, the long shadow of the first positive test: failing a woke test once somehow casts a long shadow, with no satisfactory theory as to why. At the same time, it’s unclear that this extra theorizing is necessary. Let’s be honest, could an Actual Nazi™️ really ever pass a highly specific woke test? Also, since it’s easier to err randomly than to succeed randomly, people are doomed to fail woke tests more often than they pass them; there are disproportionately more ways to get something wrong than to get it right. In the long haul, everyone is a Nazi.