Wednesday, November 01, 2006

Hypothesis testing

Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.
- Richard Feynman

Reading list: Science as Falsification - Karl Popper

Back in the time of the Greeks, "science" as it existed then proceeded almost entirely by abstract thought. Needless to say, this was not always terribly effective - thought alone will only tell you whether your model is internally consistent, not whether it's accurate. As a result of these inadequacies, a new approach called empiricism was developed, which emphasised the importance of evidence.

But how to actually apply that evidence? With sufficient rationalisation, it's possible to make any data set fit any model - how to avoid the tendency to make ad hoc excuses?

The conclusion that scientists came to was very concordant with my thoughts on usefulness. They decided that the ultimate arbiter of truth (or at least of accuracy) would be the predictive power of an hypothesis. It may be possible to contort even as daft a notion as creationism to fit the evidence, but only at the expense of removing every last scrap of its predictive power.

Thus, the "hypothesis testing" approach was formalised. Generally it goes something like:

1) Find an area to study
2) Gather data from that area and look for patterns
3) Explicitly state an hypothesis
4) Derive concrete testable predictions from that hypothesis
5) Test the predictions
5a) If they're correct, go back to step 4 and carry on testing
5b) If they're incorrect, choose another hypothesis and go back to step 3

One thing you'll notice about this sequence is that there's no actual exit condition - there's no "6) Congratulations, your hypothesis is true". So how does the scientific community determine when an hypothesis is sufficiently well-tested? The threshold is inherently subjective: eventually the community decides as a group that an hypothesis is firmly enough established to be taken as a given, at least provisionally.


So far I've treated the scientific method as a comparison of the predictive power of competing hypotheses. There is, however, another paradigm worth considering: Popperian falsifiability.

Karl Popper's notion was that science proceeds by a process of weeding out the less accurate hypotheses. So, for example, Aristotle's theory of gravity was shown to be less accurate than Newton's, which in turn was shown to be less accurate than Einstein's. This approach has two advantages: first, it provides a very precise linguistic framework for understanding these principles, and second, it throws up an interesting analogy.

The linguistic framework looks like this:
1) A conjecture is any claim about the universe
2) A falsifiable conjecture, aka an hypothesis, is one that can be disproven (e.g. "unicorns don't exist" could be disproven by finding a unicorn)
3) A verifiable conjecture is one that can be proven true (e.g. "unicorns exist" - the negation of a falsifiable conjecture is verifiable)
4) A testable conjecture, aka a prediction, is one that is both verifiable and falsifiable

The problem is that there is no way to demonstrate an hypothesis to be true. The solution science adopts is to stop worrying about truth and start worrying about accuracy. It's straightforward to gauge the accuracy of an hypothesis by logically deriving predictions from it and testing them - in other words, by attempting to falsify it. Repeated failure to be falsified can be taken as a sign that an hypothesis is representative of the universe.

Now, one thing that's interesting to note is that this is directly equivalent to standard processes of biological evolution. In both cases, new variants of an object (DNA or hypotheses) are created pretty much at random, and the useful ones are retained whilst the less effective ones are discarded. Thus, the state of the scientific art tends towards a more accurate model of the universe, just as species tend towards a more efficient genome. In this sense, the goal of the scientific community is simply to create an environment in which the survival of an idea is proportional to its scientific usefulness.
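The analogy can be made concrete with a toy selection loop. Everything here is invented for illustration: the "hypotheses" are just guesses at the slope m in y = m·x, fitness is prediction error, and the numbers are arbitrary:

```python
import random

def total_error(slope, observations):
    """'Fitness' of the hypothesis y = slope * x: total prediction
    error over the data (lower is better)."""
    return sum(abs(slope * x - y) for x, y in observations)

def evolve_slope(observations, generations=200, population=20):
    """Selection loop mirroring the analogy: candidate hypotheses are
    varied at random, and the less accurate half is discarded each
    generation - retention of the useful, variation of the rest."""
    slopes = [random.uniform(-10.0, 10.0) for _ in range(population)]
    for _ in range(generations):
        slopes.sort(key=lambda m: total_error(m, observations))
        survivors = slopes[:population // 2]            # keep the accurate half
        mutants = [m + random.gauss(0.0, 0.1) for m in survivors]
        slopes = survivors + mutants                    # random variation
    return min(slopes, key=lambda m: total_error(m, observations))

# Toy data in which y = 3x; the population converges towards slope 3
observations = [(x, 3 * x) for x in range(1, 6)]
best = evolve_slope(observations)
```

Nothing in the loop "understands" the data; accuracy emerges purely from variation plus selective retention, which is the point of the analogy.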

Which is why scientists get so annoyed when politicians and preachers attempt to dictate reality by governmental fiat. But that's another issue.

(Oh, and: so much for my attempt to write in a more abbreviated style, huh?)
