Tuesday, January 6, 2009

The problem of belief, part 1

As a scientist, one's beliefs are supposed to be based on evidence and arrived at through logic (this is not to imply that other people and other professions are different, just to say that this is the ideal for a scientist). Ideally, individual beliefs are constantly subject to revision, or even to complete overturning, as more information is gathered. We are supposed to be rational and skeptical of all claims, requiring data and distrusting claims that cannot be verified independently.

But scientists are humans, and just like other humans, it feels good to be right and bad to be wrong. Even though we're supposed to take all relevant data into account, it's easy to ignore anything that contradicts what we believe. Scientists do a decent job of ferreting out the truth, at least on average, but it's also safe to assume any individual scientist is going to claim some things that are not true, or at least aren't supported by the facts, just like any other human on the planet.

A scientist spends a lot of time (and I mean A LOT of time) thinking about some question, in the process becoming an expert who is, more than likely, more familiar with their topic and its associated minutiae than any other person on the planet. Although experts can be wrong for a variety of reasons, typically they are not, or at least are not completely wrong. We scientists argue quite a bit in our pursuit of truth, but usually it's over minutiae, not over the whole theoretical structure (mantle plumes being a notable exception, but there's no real way to resolve that one without an unobtainium ship capable of drilling to the center of the Earth a la "The Core"). For the most part, though, I am comfortable trusting the interpretations of other scientists, especially when I understand the data and logic that led to those interpretations.

The thing is, I don't always understand the data. You can show me an fMRI and tell me what it means, but I don't have the training to independently verify that interpretation. My trust is, in some sense, blind. There is so much out there to understand that it really isn't possible for me as an individual to evaluate the validity of all the science I'm presented with, so I am reduced to simply trusting that the person measuring and interpreting the results is doing so in a way that is consistent with their data and with previous scientific findings. As maligned as the peer review process is, by non-scientists in particular, it does allow me as a non-expert to trust that the publications I read are rigorous and, even if they're not totally correct, at least not completely ridiculous. Ultimately, my trust of other scientists comes from a trust in the scientific method; trust in the peer review process; and trust in the cantankerousness and stubborn reasoning of men and women who, through time and thought and reason, have become experts in their fields.

Which is a bias that makes me no different from anyone else.

Nobody can go out and verify the reasonableness of all their opinions and beliefs. At some point we are all reduced to listening to expert (and not-so-expert) opinions to form our own. So, how do we choose who is an expert worth listening to? How do we choose whom to trust, and on what topics we should trust them?

1 comment:

  1. Hey Steeny, Trust your heart, trust your head, ask God. You are a smart cookie after all.