This post will be somewhat abstract, and somewhat concrete. For the first time in a long time, I've reappraised the foundations of my personal philosophy, which for the moment we'll call empiricism. The occasion was a coincidence. On the one hand, I got into one of those postmodernist discussions about how what appears to be real depends on the observer, with some people arguing that there is no absolute reality. (I'm afraid I didn't respond as respectfully as I should have.) And on the other hand, I heard what medical science has recently learned about sensory perception. Read on.
The scientific way of looking at life
Empiricism is the view that observation is truth: the careful editing of all statements to distinguish between what is directly sensed (fact) and what is inferred (theory). In my personal shorthand, I call this epistemology--the practice of asking 'how do you know what you know?'. Wikipedia quotes Charles Peirce on the reconciliation of empiricism and rationalism (the latter convinced us to stop trying to make predictions based on superstition):
"(1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth."
When Peirce says "science" above, you can read in "life": just as the general public reaches self-correcting conclusions about the likelihood of rain, so does science (that is, researchers who publish) reach self-correcting conclusions about global warming. How much corrective power your individual views carry depends on how much expertise people think you have in that area.
The monkey wrench that medicine threw into the works
An article by Atul Gawande in The New Yorker about phantom-limb syndrome has given me serious doubts about the absolute nature of observation. I used to take it for granted that what we sense is real. The sky is blue, end of story. However, brain scans have revealed that when we have the experience of seeing something, the signal traveling from our eyes to the thinking parts of our brain carries nowhere near enough information to account for what we perceive. Instead, a great deal of extra information reaches our thinking parts from our *memory*. When you see a dog running behind a slatted fence, you don't imagine that the dog has been cut into pieces; you perceive a whole dog, even though you can never see all of it at once. Your brain fills in the gaps from memory. The medical consensus now is that experience is something like 10% direct sensory perception and 90% memory.
Most of the taste of that cheeseburger you had for lunch was just your brain's best guess. Makes you wonder if it's worth the calories, doesn't it?
What does this have to do with anything?
OK, but I still get hungry. There's no sense in making Jean-Paul Sartre's tuna casserole for dinner. We have to get through the day. Well, at some level we have to have some reason (or perhaps 'motivation' is a less loaded word) for believing what we believe. Or at least I have to have that, or I get to feeling a bit unmoored.
I believe that if we observe things, and make up a theory to explain them, and then test the predictions of that theory, then we can understand the natural world and gain mastery over our surroundings. This is the scientific method, and one kind of mastery it gives us is technology. I like technology. I'm a geek.
I'm also a working scientist. How do I know what I know? When I say I observed something, like for example a number on the screen of a voltmeter, am I still 100% sure? Or, given the above discussion, am I only 10% sure?
Scientists do blind and double-blind tests for a reason: if the test administrator, or, worse, the test subject, knows which stimulus is being presented, there's a very strong chance that the results will be steered by their expectations. That's troublesome enough when the question is which philodendron is most attractive; in medical research, the control group's placebos are a matter of life and death. Even physicists get irrationally optimistic once in a while and fall for some cold fusion thing or another, though eventually science corrects itself, when researchers come along who don't have a vested interest in a particular result.
I'm pretty sure about the voltmeter. If I blink, the reading doesn't change. If I do the experiment again tomorrow, I'll get the same result, unless the janitor unplugs something important. I'll bank on the consistency of my observations, and I'll try not to stress about it too much. The philosophers, though, will probably feel a good bit more threatened by all this.