Critical Thinking and Expert Consensus


Here’s a problem: how do you know what is right if you don’t explore the question for yourself?

While we should strive to only accept those things which we can verify for ourselves, that is simply impossible to do in all things. We can’t all do our own cancer treatment studies, for example. Doing just one such study takes huge amounts of time, money, effort, and knowledge to get the results and interpret them. And that would be just one study. What about food safety, or discoveries in modern physics? It’s implausible that you will try to double-check a claimed higher-temperature superconductor by building one yourself.

So either you have to be ignorant about so much that is important in the modern world, or you need to accept the work done by others. But how reliable can that be? We know individuals lie or are mistaken all the time, and being an expert or well-experienced in a field doesn’t stop that from happening. It’s easy to point to examples in recent times, such as the scientists paid to argue against the connection between smoking and lung cancer, along with the other cases detailed in Merchants of Doubt. However, one of the authors of that book, Naomi Oreskes, argues that we should still trust the scientific enterprise, as she details well in this TED talk.

A simple summary of why we trust science.

In particular, Oreskes says we should put our trust not in individual experts but in the scientific consensus, a wisdom of the crowd of experts in a subject. And the reason the scientific consensus should be given weight, rather than, say, the consensus of astrologers on their subject, is the nature and values of science, especially its organized skepticism. So, if there is a group of people with expertise in a subject, and if that community of experts evaluates a claim, debates it, and analyzes it with good data and skeptical inquiry, and that group comes to a consensus, those of us outside the group have good reason to trust the result; at the very least, it is much more likely that they are right than wrong, and you are in no position to determine otherwise.

Or are you?

The question now arises: when might you have reason to doubt the consensus? Perhaps those outside of the group doing the research can now evaluate the claims, since so much of the data and so many of the analysis papers are online, and we can go figure it out for ourselves. The problem is, that is much harder than one thinks. That point is made well in Harry Collins’s book Are We All Scientific Experts Now? With the particular example of so-called Climategate, we see what happens when emails by the researchers are cherry-picked and misunderstood to mean something dastardly. Additionally, Collins shows the healthcare disaster in South Africa when the president of that country did his own research online and then denied the use of anti-retroviral drugs that would have reduced the transmission of HIV to newborns. Doubting the consensus to go with the minority report or fringe view tends to lead to misunderstanding and even suffering.

But why do people outside of the expert researchers fail to examine the evidence properly? And are we doomed to just trust the consensus no matter what? What can a regular person do to make sure that they have justified reason to trust any given consensus? About two months ago, Richard Carrier posted a blog entry on that subject, on how to evaluate an argument from consensus, and since then he has discussed the question in a few different places, in particular this video chat on Inspiring Doubt.

Carrier does make good points about when a consensus may not be reliable (if the experts haven’t critically evaluated their methods or considered the opposing arguments, which is why astrologers cannot be trusted when they state that astrology works), but one thing in particular comes up in that blog post and his interview: our use of critical thinking skills to evaluate the arguments of the experts, especially against a fringe view. And while it is true that if you see someone continuously using fallacious arguments you have reason to doubt the strength of that person’s position, there is trouble with relying on critical thinking. This is in part due to something echoed by Chris Hallquist, Julia Galef, and Luke Muehlhauser, and in part it is related to the talk I will be giving at the upcoming SSA East conference next weekend.

So, what is wrong with critical thinking? 

In itself, nothing; it is something we all should do, and probably more than we already are doing. However, related to Hallquist et al.’s worries, just because you are a good critical thinker in some domain doesn’t mean you can outdo an expert in that expert’s own zone of knowledge. And there is science to back that up, though to some degree it can be understood from a bit of philosophical reflection. In particular, consider the paradox from Plato’s dialogue, Meno. The question at the heart of the work is the nature of inquiry and whether it is possible. The paradox about inquiry is simply this: do you know what you are looking for? If you already know it, then you won’t inquire (you already know/have it). If you don’t know it, you won’t know where to look or recognize it when you find it. While Plato’s solution requires his metaphysics and a belief in a form of reincarnation and forgetting upon rebirth, that isn’t usually the way the paradox is avoided. In reality, we are not binary in our knowledge of a thing; I may have a vague understanding of it, or I may know a part of it. For example, you may not know how a car works, but you understand that gasoline is part of what makes it go, and thus a car is not a black box out of which locomotion simply magics into existence.

However, the less you know about a subject, the harder it is to inquire into. There are more things you will not know or understand, at least at first, depending on what you have already learned. Perhaps that is obvious to some degree, since our education system teaches young children the alphabet before they are taught post-modern literary analysis (cf. the zone of proximal development). Another way of putting it is that it takes knowledge to make knowledge, not unlike how it takes money to make money.

And here is where the issue with critical thinking comes in: our ability to think critically about a subject is not a skill that can be transferred to any domain; rather, it is context-dependent. The research on this point was summed up by Daniel Willingham in the American Educator several years ago (PDF here) and mentioned again in his book relating psychology/cognitive science research to education. Part of why critical thinking is context-based is how experts and non-experts look at the same problem sets. An expert, because of their domain-specific knowledge, will evaluate a problem based on its deep structure; a novice, on the other hand, evaluates it based on its surface details. For example, professional physicists will categorize problems based on the underlying physics concept involved (e.g., conservation of energy, force summation), while a college student who has just completed their first semester of intro physics will tend to group problems based on surface features (e.g., involves an inclined plane, has a spring).

A useful example to consider here is one that Carrier has brought up: the statement by E.P. Sanders that the evidence for the historical Jesus is better than it is for Alexander the Great (see pp. 3-4 of The Historical Figure of Jesus). On a surface level, this makes great sense. Consider the dating of the Gospels, as found in most mainstream New Testament studies books: Mark was written around 70 CE, and John, the last, perhaps around the year 100 CE. At the latest, the Gospels were written in the second century, but most NT scholars will say they were written within a generation or two of the life of Jesus. Compared to the extant biographies of Alexander the Great, that is amazing; Plutarch, for example, is writing about Alexander about four centuries after the man’s death, and the same goes for the biographies/histories written by Curtius Rufus and Arrian. The Gospels are so much closer in time to the life of Jesus than Arrian’s biography is to the Macedonian that it seems the evidence for Jesus’s life is on firmer ground.

However, a classicist will realize that this surface detail (when the book was written) is a poor indicator of the reliability and utility of a document. Arrian, for example, clearly states he is using eyewitness sources, compares them, and, when they contradict, provides the multiple takes and critically evaluates which is most likely correct. That is far better than the Gospels, where the authors are unknown, the historical methods (if any) are unknown, the sources (if any go back to Jesus) are unknown, and the reliability of those alleged sources is uncertain. Moreover, the very nature of Arrian’s writing is clearly different from that of the Gospels. The former is trying to say what happened and why, while the Gospels are filled with symbolic narrative, as I partially demonstrate in my book on the Star of Bethlehem. The deep structure here* is the type of source in question, and that cannot be figured out just by looking at the approximate date of composition. Not only that, but the argument also ignores all the primary evidence from the time of Alexander (coins, inscriptions, etc.), none of which exists for Jesus. The number and quality of sources for Alexander are so vastly better than those for JC that it should be embarrassing for Sanders.

But if you are not a historian, let alone a classicist, how would you know to even evaluate this argument? Or even prior to that, why would you think it was suspect? After all, your background knowledge (the further in time a report is from its source, the more unreliable it is) makes the statement by Sanders seem totally plausible. It’s only when you have the relevant background knowledge that you realize how bad the position is. Thus even a critical thinker would read past those few sentences by Sanders without any red flags popping up. On the other hand, if you are suspicious of every statement by Sanders, then you will have to effectively redo his research, and in that process you will need to become an expert, the very thing you hoped to avoid in order to save time. And when I say time, I am talking about what is likely years of research.

If you think Google will fix that, you unfortunately are wrong. As noted by Willingham, and highlighted by Daisy Christodoulou in Seven Myths About Education, you can’t just look it up. First, you have to know what to look up. What would be a relevant fact, and what would not? You may not even realize what sorts of things you need to know in order to evaluate a claim. You also need to know whether your source is reliable. Unless you have background knowledge about what counts as a good source for your subject of interest, you will have a hard time; if you are ignorant, you can’t really tell the difference between bad and good sources, because you don’t have the background knowledge to say something seems fishy. You also have the problem of finite working memory: you can only hold a few ideas in your conscious memory at any given moment. If you are looking up fact after fact, you will forget things, and you will have a very hard time remembering and evaluating facts and arguments when it’s all new to you. Your finite working memory is part of why it’s hard to talk on a cell phone and drive effectively, and it’s worse if you have to really pay attention to what the person is saying and the drive is one you are not familiar with.

This isn’t to say that it’s impossible for a non-expert to evaluate professional arguments (at least in some cases), but it is certainly hard. And given the research from political science, it’s even worse, because more information can polarize people; indeed, it is the more educated/knowledgeable/rational people who become the most polarized by the same data. This sort of research is the bread-and-butter of Dan Kahan, but it has been confirmed by numerous other researchers. We may fight against our own biases, but that tends to be a Red Queen game, and if you are not aware of your ignorance in a matter, you will likely misjudge where the real answer lies (cf. the Dunning-Kruger effect). Again, it is possible to overcome your biases with a lot of effort. But it is just that: effort. And all of this just to question the expert consensus.

So while Carrier does provide good reasons to consider questioning a consensus under particular circumstances, the project, I think, is much harder than he realizes. Without becoming all but an expert yourself in a subject, it’s really hard to figure out whether a majority position on some topic in a professional’s field is ill-founded. After all, how long did it take Carrier to even think it plausible that the consensus on Jesus-historicity was ill-founded, or that historical Jesus studies had such poor methodology? Years, and he had graduate training in ancient history! I think Carrier is cognizant of this difficulty to some degree; after all, he doesn’t advocate that atheists use the Jesus-myth position to go after believers, in part because it goes against the consensus. But I think the task is even harder than he allows.

One thread that still remains, though, is figuring out what the consensus or mainstream view on some subject actually is. With climate science, for example, there have been surveys of the peer-reviewed literature that count the papers treating climate change as real and human-caused (almost all papers in the last 20 years do), but in other subjects it may not be so clear. Your best chance may be learning from an expert in the field what the consensus is. From there, perhaps one can start to pick up the nuances and begin evaluating claims based on deep structures.*
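To make that kind of literature count concrete, here is a minimal sketch in Python of how such a tally turns paper classifications into a consensus figure. The categories and counts are entirely made up for illustration; they are not taken from any actual survey.

```python
from collections import Counter

# Hypothetical classifications of paper abstracts (not real survey data):
# each paper either endorses the claim, rejects it, or takes no position.
ratings = ["endorses"] * 75 + ["rejects"] * 3 + ["no position"] * 22

tally = Counter(ratings)
taking_a_position = tally["endorses"] + tally["rejects"]

print(f"Papers surveyed: {len(ratings)}")
print(f"Papers taking a position: {taking_a_position}")
print(f"Endorsement share among those: {tally['endorses'] / taking_a_position:.0%}")
```

Note that the headline percentage in surveys like these is usually computed among papers that take a position at all, and that design choice is itself something an outsider has to know about in order to interpret the number.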

As for what books to read, I will be devoting a few upcoming blog posts to some of my recent reading in various categories, and you may find them useful and interesting for your own research, or just to keep on learning. Your best chance of not getting duped is to be curious as well as skeptical.

* The deep structure is also why critical thinking is so context-based. The structure of evaluating historical sources is hardly the same as that for evaluating physics problems or feminist critiques of culture. Each has its own “bag of tricks”.
