Monday, November 17, 2008

Cuts like a knife

Entia non sunt multiplicanda praeter necessitatem. ("Entities should not be multiplied beyond necessity.")
- William of Ockham

The scientific method is probably the most important investigative tool we as a species have ever produced. As generally understood*, it is a means of comparing and contrasting hypotheses against three criteria: accuracy, predictivity and parsimony.

Accuracy is easy to understand: a new model of the universe must be consistent with existing data. So general relativity looks like Newtonian dynamics at low energies, quantum atoms behave like point particles on human scales, and so on. A theory of gravity that did not produce an inverse square law would be no damn good.

Predictivity is similarly straightforward: a model can't just describe what has happened so far, it must also give us some clue what's coming up. This is for two reasons. Firstly, it limits people's ability to equivocate, which stops science descending into an angels-on-pinheads talking shop. Secondly, it means that the entire business occasionally generates useful real-world results.

Parsimony, also known as Ockham's Razor, is not so clear. In short, it states that you shouldn't include more "stuff" in your model than necessary. So never assume a conspiracy where stupidity is an adequate explanation; never infer psychic powers where outright fraud is a possibility; never choose epicycles over ellipses.

But what do we mean by "simple", and how do we justify this principle? The two questions are interlinked: a thorough justification of the Razor will of necessity give us a working definition of simplicity. Let's take a quick tour through some historical arguments put forward for this enigmatic principle. We should accept the simplest explanation because...

1) ...Simplicity is so damn cool.

This is probably the original view of Ockham's Razor. As far as Classical civilisation was concerned, simplicity was a desirable goal in itself, without needing any further justification.

This presupposes some kind of human aesthetic sense which would allow us to distinguish the simple from the complex. It's very "Zen and the Art of Motorcycle Maintenance".

I remain unconvinced by this for two main reasons. Firstly, I don't think it really answers the question; it just wraps it in even fuzzier clothing. Secondly, even after several thousand years there is no consensus about whether God is defined as simple or complex. If our aesthetic sense can display inconsistency in a case as grandiose as this, what hope does it have in other, subtler, contexts?

2) ...Simplicity is more likely to be true.

This is an intriguing notion. Could it be the case that the universe is in some way geared towards elegant explanations? It's actually quite a common belief, not just amongst the religious, but also among scientists who see elegance amidst the chaos.

However, I'm not aware of any good explanation of why this should be the case. In the absence of that, it's impossible to say that this rule holds generally. And I'm fairly sure there's a certain amount of confirmation bias here.

There are some broad philosophies that would make this explanation more plausible. In general, though, I fall on the side of Sir Arthur Eddington: the mathematics is not there until we put it there.

3) ...Simplicity makes better targets for science.

This was Popper's take on parsimony. He believed that simple hypotheses were, if false, far easier to squash than complex ones.

This view has a certain amount of empirical support. Consider for example the epicycle "hypothesis". Turns out that, by sticking enough extra epicycles onto a planet's orbit, you can match almost any data set. So the hypothesis was rendered so fuzzy as to be undisprovable. Undisprovable hypotheses are the plaque in science's arteries: they seriously impede progress and they're almost impossible to get rid of.

In this sense, simplicity relates to the number of "magic variables" that an hypothesis contains. Epicycle theory had an arbitrary number of magic variables that could be set by scientists: the radii and rotation speeds of the epicycles. Kepler's elliptic orbits, by contrast, were specified entirely by a single gravitational constant plus the masses and present velocities of the various heavenly bodies. It took several centuries to disprove epicycles. By contrast, if Kepler had been wrong, it could have been demonstrated in a matter of months.
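To make the "match almost any data set" point concrete, here's a rough toy sketch of my own (nothing historical about it; the square-wave "data", the function names and all the numbers are invented purely for illustration). Each epicycle contributes two free parameters, a radius and a rotation speed, and a standard least-squares fitter can use a stack of them to chase a signal that isn't remotely an orbit:

    # A toy sketch, not a historical reconstruction: fit sums of epicycle-like
    # terms (each with a free radius and rotation speed) to an arbitrary signal.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    data = np.sign(np.sin(t)) + 0.1 * rng.standard_normal(t.size)  # not an orbit at all

    def model(params, t):
        # params = [r1, w1, r2, w2, ...]: one radius and one speed per epicycle
        radii, speeds = params[0::2], params[1::2]
        return sum(r * np.cos(w * t) for r, w in zip(radii, speeds))

    for n_epicycles in (1, 3, 10):
        guess = np.tile([1.0, 1.0], n_epicycles) * (1.0 + 0.1 * np.arange(2 * n_epicycles))
        fit = least_squares(lambda p: model(p, t) - data, guess)
        print(n_epicycles, "epicycles -> squared residual", round(float(np.sum(fit.fun ** 2)), 3))

The squared residual typically shrinks as epicycles are added: with enough magic variables the model can soak up more or less whatever you throw at it, which is exactly why it was so hard to kill.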

4) ...Simplicity is functional

Let's say you have a tiger charging towards you. You have two explanations of its progress: one in terms of mass, momentum, chemical interactions and the behaviour of various neurons, and one in terms of it being a bloody great big cat that wants to eat you. Which model do you think would do most for your survival chances?

Humans only have a limited amount of computational resource at hand, so it makes sense to shepherd it as much as possible. Why waste valuable neurons believing in yetis, ghosts, gods? It doesn't make it any easier to dodge the tiger, and it reduces the space available for beliefs that could.

From this point of view, simplicity means computational simplicity: the model that generates the most accurate results in the shortest time. One interesting feature of this is that simplicity may actually vary from organism to organism: a cyborg with a silicon brain might have very different preferences from a mammal with a bunch of neurons. Heck, even different processors could lead to different views of the world.
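As a toy illustration of "most accurate results in the shortest time" (my own invented example; the function names, the artificial delay and the time budget are all made up), imagine scoring two models of the charging tiger by how many cases they can answer correctly before thinking time runs out:

    # A toy sketch: under a fixed thinking-time budget, the cheap folk model
    # answers every case while the detailed physics model runs out of time.
    import time

    def full_physics(dist):      # stand-in for the neurons-and-momentum model
        time.sleep(0.01)         # pretend each evaluation is expensive
        return dist < 5.0        # "the tiger will reach you"

    def big_scary_cat(dist):     # stand-in for the folk model
        return dist < 5.0

    def score(model, cases, budget_s=0.05):
        start, correct = time.time(), 0
        for dist, truth in cases:
            if time.time() - start > budget_s:
                break            # out of thinking time: unanswered cases score nothing
            correct += (model(dist) == truth)
        return correct

    cases = [(d, d < 5.0) for d in range(1, 21)]
    print("physics:", score(full_physics, cases), "folk model:", score(big_scary_cat, cases))

Both models give the same answers here; the only difference is how many answers you get before the tiger arrives.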

This is probably the most popular explanation for Ockham's Razor as far as the philosophers are concerned. Game over? Probably... but this explanation also causes great fuss. Philosophers do not generally like pragmatism - it can change so easily from situation to situation, making a mess of all our overarching frameworks.

If Ockham's Razor is pragmatic, then a sufficiently strong pragmatic incentive could lead us to discard it. Furthering our position within the tribe, motivating ourselves, avoiding depression - all these become valid reasons for unparsimonious belief**.

We skeptics can find only a Pyrrhic victory in justifying the Razor by reference to pragmatism. In slicing away the gods, we slit our own philosophical wrists.



* According to Karl Popper, anyway. Kuhn would disagree. I tend to equivocate on this: I think that, while Kuhn probably describes the practice of science better, Popper provides a necessary level of justification. In football terms, Kuhn is the coach who talks about positions and tactics; Popper is the coach who talks about human biology.

** Or at least for giving the impression of belief. But for someone who isn't a good liar, it might be necessary to persuade themselves.

6 comments:

Anonymous said...

There is another important argument for parsimony, related to both 3 and 4 - without it, you're asking to get stuck in an infinite regress. Any phenomenon explainable by reference to a single entity can always be explained by the interaction of two or more entities. And each of those entities can be then replaced by the interaction of two or more entities, ad infinitum. Turtles all the way down...

Lifewish said...

Ah yes, good ol' Duhem-Quine. I knew I was forgetting something. Might go back and add it in when I'm feeling sufficiently diligent (e.g. not now).

I'm not particularly convinced by underdetermination as an argument for parsimony. It shows that you need extra selection principles, but it doesn't say anything about what those principles should be.

I read a nice fiction book recently about an autistic kid. Among his unusual traits, he chooses red stuff where possible and avoids yellow stuff. This approach would break underdetermination just as effectively as parsimony does.

So how can we justify choosing the simplest explanation rather than the one with the most red in it?

Anonymous said...

Good question. I'll have to think about that...

Anonymous said...

OK, having thought about it some more, I'm not convinced that there are any selection principles which can resolve the problem without also resorting to parsimony. If you can propose one hypothesis which satisfies the selection criteria, you can always propose an infinite number of alternative, non-parsimonious hypotheses which satisfy them equally well.

I'm also not convinced that this problem really is underdetermination as per Duhem-Quine, but I'm not sufficiently familiar with the idea to say for certain.

All I'm really saying is that if you allow undetectable leprechauns, you can never know how many of them there are. You can't tell the difference between one leprechaun screwing with your experiment, two leprechauns fighting over which direction to screw your experiment, or an infinite number of leprechauns, organised into an infinite number of competing parties, some of which are trying to screw your experiment in an infinite variety of ways, while the others try to stop them. Saying that you're only going to consider leprechauns in red hats doesn't really help the problem at all.

Lifewish said...

No, but you can at least say that the leprechauns must be red...

My personal favourite justification is a variant on Popper. First, assume that a given system S can be modelled "perfectly" by some number N of axioms. At any given time t, our preferred model of S will have n(t) axioms. If we get our model absolutely spot-on then n(t) = N.

Then note that (as you and Popper allude to) it is quite hard to disprove complex theories in favour of simple theories, because a complex theory will often be able to emulate a simpler theory (the epicycle effect). So application of falsifiability will only tend to make models more complex. So n(t) can only grow, or at best stay the same, as t gets bigger.

What happens if n(0), our initial guesstimate, is greater than N? Since falsifiability alone will never reduce the number of axioms, we will never end up with that perfectly accurate model. So we must choose n(0) to be no greater than N. But we don't know what N is, so how do we do this? By trying to keep the number of axioms to an absolute minimum!
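Put a bit more formally (just my scribble, using the same n and N as above):

    % If falsification never removes axioms, n(t) is non-decreasing:
    %   n(t+1) >= n(t) for all t, so n(t) >= n(0).
    % Hence if n(0) > N then n(t) > N for every t, and n(t) = N is unreachable.
    \[ n(t+1) \ge n(t) \;\Rightarrow\; n(t) \ge n(0) > N \quad \text{whenever } n(0) > N. \]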

I'm sure this justification would have a philosopher of science quietly crying into his beer, but I can't see any obvious holes beyond the assumption that getting n(t)=N is a good thing.

Anonymous said...

This is pretty much the point at which I throw up my hands and declare that smarter people than me have surely done all this already. ;)

Of course, Goedel has something to say about the provability of axioms...