Two Recent Studies on Implicit Bias and Microaggressions Find the Opposite of What SJWs Predict

This piece was authored by Austin Henshaw.

This past week has been yuge, as the President-Elect would say, for calling into question concepts taught as true on university campuses and sometimes practiced in the corporate sector: namely, implicit bias and microaggressions.

In “A Meta-Analysis of Change in Implicit Bias,” a study conducted in May 2016 and recently covered by media outlets, researchers amalgamated evidence from 426 studies (comprising over 72,000 participants) to examine the effectiveness of different interventions for changing implicit bias, defined in the paper as “mental associations between concepts that are activated automatically.” In practice, the term is most often used in the context of unconscious racial bias. The Implicit Association Test (IAT), a popular psychological instrument for measuring implicit racial preference, was even alluded to by Hillary Clinton in her first debate with Donald Trump. While implicit bias was once thought to be fixed and unchangeable, the meta-analysis yielded results suggesting it is malleable, with test results depending heavily on context. Interventions for changing implicit bias did yield different test results, but produced little measurable change in explicit bias (all |rs| < .10) or behavior.
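To put those correlations in perspective, |r| < .10 means the interventions explain less than 1% of the variance in the outcome, since variance explained is r². A minimal sketch of that arithmetic, using the reported upper bound as a hypothetical value:

```python
# Illustrative arithmetic only: r here is the reported upper bound on the
# correlations, used as a hypothetical example rather than a measured value.
r = 0.10
variance_explained = r ** 2  # coefficient of determination (r-squared)

print(f"r = {r:.2f} explains {variance_explained:.1%} of the variance")
```

In other words, even at the top of the reported range, changes in implicit bias scores would account for no more than 1% of the variation in explicit bias or behavior.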

I won’t get into the specific statistical techniques and methodology in detail, but the practical implications are worth some discussion. Given the weak evidence that implicit bias strongly influences behavior, interventions aimed at reducing implicit bias scores may not be effective at producing positive behavioral change. There is also the problem of the assessment’s faulty reliability (a measure of consistency): a test that gives very different results depending on context shouldn’t be used to address a serious issue. The IAT’s failure, at least for now, should be considered a victory for empirical psychology, which is based on evidence and quantitative data rather than postmodern ideology.

This week a paper titled “Microaggressions: Strong Claims, Inadequate Evidence” was also published. The paper defines microaggressions as “subtle snubs, slights, and insults directed towards minorities, as well as to women and other historically stigmatized groups, that implicitly communicate or at least engender hostility.” The author identified “five core premises” of microaggression theory and concluded there is “negligible support” for them. Microaggression theory was found to be disconnected from other areas of psychological science, including psychological testing, cognition, and industrial-organizational psychology. The theory was found to have little practical application, and the author called for “microaggression training programs” to be abolished for the time being, pending further research, given the harm the concept does to the social sciences’ public image. The author even cites sources mocking microaggression lists.

More broadly, Greg Lukianoff (president of the civil liberties group Foundation for Individual Rights in Education) and Jonathan Haidt (social psychologist and Professor of Ethical Leadership) criticized microaggressions, trigger warnings, and safe spaces in their piece “The Coddling of the American Mind.” They argue these practices are disastrous for education and, ultimately, mental health: emotional coddling hurts students, and promoting psychological frailty does little for their emotional or cognitive development.

The failures of the IAT and microaggression theory emphasize the importance of validity (whether an assessment actually measures what it claims to measure) and replication (whether a study’s results can be repeated over time and across a variety of contexts). In what is being called a replication crisis, many results reported in the psychological sciences in recent years have shown low reproducibility when the experiments are conducted again.

This isn’t to say the social sciences have no value or haven’t contributed positively to mankind. The work of Dr. Gary Wells on eyewitness misidentification and improving the criminal justice system comes to mind. Indeed, it was the rigorous methodology of the social sciences themselves that was used to call these previously accepted concepts into question. If results in the social sciences are going to be applied to public policy, they need rigorous methodology behind them and must stand the test of replication before they are considered for use. Michael Shermer, founder of Skeptic magazine and author of The Moral Arc: How Science Makes Us Better People, cites a 2015 study titled “Political Diversity Will Improve Social Psychological Science” in suggesting that increased political diversity will reduce the risk of confirmation bias and “empower dissenting minorities to improve the quality of the majority’s thinking.” Given progressives’ love for racial and gender diversity, perhaps it’s time for some ideological diversity as well.




  1. Medical sciences also show shockingly low replicability. This is a cross-discipline problem, and should not be assumed to reflect a lack of underlying legitimacy of the topic.

