Mathematics is actually easier to understand than science for anyone with any knowledge of basic logical discourse. In principle, it's quite simple: choose a bunch of axioms whose truth defines your system, recombine them to produce other true statements, and gradually work your way up to the conclusion you were after.

This description is perfect, apart from one thing: no real mathematician actually bothers to do that. Over the years a methodology has emerged that's considerably more practical and is essentially equivalent to this idealised approach. So failure to conform to the ideal is not itself evidence of poor Quality.

**All maths is trivial**

There is a mathematical theorem, in an area known as propositional calculus, which says that all mathematical theorems are tautological. Now, it's using "tautological" in a far more rigorous sense than you or I would, but the impression given by this claim is useful in other ways - it does convey a vaguely accurate perception of what's going on.
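To make the rigorous sense of "tautological" concrete: a propositional formula is a tautology if it comes out true under every possible assignment of truth values to its variables. Here's a minimal brute-force checker for that (the function names and the encoding of implication are my own illustrative choices, not anything from the post):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check a propositional formula by exhaustive truth table.

    `formula` is a function taking `num_vars` booleans and returning a
    boolean; it is a tautology iff it is True on every assignment.
    """
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

# Modus ponens, ((p -> q) and p) -> q, with "a -> b" encoded as (not a) or b.
modus_ponens = lambda p, q: (not ((not p or q) and p)) or q

print(is_tautology(modus_ponens, 2))               # True: a tautology
print(is_tautology(lambda p, q: p and q, 2))       # False: not a tautology
```

In this sense every theorem of propositional calculus really is "trivially" true - the checker just grinds through all the cases.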

A mathematical proof exists as a set of definitions combined with a series of logical steps. The hard part for a non-mathematician is that the definitions may use prepackaged terms ("let X be the dihedral group D_2n, and Y a Sylow subgroup of order p^m for m>1...") and the logic may skip large steps ("...then p divides n"). So in general it won't be possible for a non-mathematician to verify a proof without either learning a bit about the field or finding a helpful mathematician.

What would the mathematician do to verify the proof? Well, firstly, she would confirm she knew what all the definitions meant. Then she would go through the proof, figuring out which steps were obviously valid based on those definitions. Her response to the remaining few steps would be more interesting: she would break them down into smaller substeps.

This is where the "triviality" of mathematics comes into play. Large steps in logic are (in theory at least) mere shorthand for a whole bunch of shorter, more basic steps. So, for example, if I want to prove a result about a differential equation I could use a giant step taken from the field of Fourier analysis. This would break down into smaller steps from the internal logic of that field, which would effectively be huge steps taken from the field of complex analysis. These would break down into smaller steps from the field of naive set theory, which IIRC can be justified in terms of axiomatic set theory, and so on...
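The expansion process above can be sketched as a toy model: each named "big step" is either accepted as primitive or expands into smaller steps from a lower-level field, and verification just recurses until everything bottoms out. The step names here are purely illustrative labels, not real theorems:

```python
# Illustrative only: each "big step" expands into smaller steps
# from a lower-level field, until we hit primitives.
EXPANSIONS = {
    "fourier_step": ["complex_analysis_step", "convergence_step"],
    "complex_analysis_step": ["set_theory_step"],
    "convergence_step": ["set_theory_step"],
}
PRIMITIVE = {"set_theory_step"}  # accepted without further expansion

def verify(step):
    """A step checks out if it's primitive, or if it expands into
    steps that all check out in turn."""
    if step in PRIMITIVE:
        return True
    return step in EXPANSIONS and all(verify(s) for s in EXPANSIONS[step])

print(verify("fourier_step"))    # True: expands down to primitives
print(verify("mystery_step"))    # False: no expansion, not primitive
```

Real proofs aren't literally expanded this way, of course - the point is just that, in principle, every big step is shorthand for a tree of small ones.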

What value does being a mathematician have in this circumstance? Why couldn't a layman do this? There are three reasons. Firstly, the mathematician will generally have some hard-earned intuition about the objects being defined, and will be able to "verify" many of the steps by sheer instinct. She would be able to verify them by hand if necessary, but that's often just a massive waste of time.

Secondly, the mathematician will know where to look for information about the remaining steps. She will have some idea of which journals to check, which individuals to ask, for further information about what the heck is going on.

Thirdly, in the event that a step is *not* fully explained in any literature, a mathematician will have the creative mathematical abilities to solve it herself. But creating maths is a whole different kettle of fish from verifying it, so I won't go there.

**How to recognise bad maths**

All this doesn't really do much for the average Joe. When confronted with a piece of mathematics, "find a mathematician" may indeed be the best advice, but in situations where someone is trying to pull one over on you there often isn't time.

The classic example here is probably Intelligent Design, a variant of creationism. Its advocate, William Dembski, effectively attempts to reinvigorate the age-old Argument From Design by ruling out evolution as a source for much of the natural world. Say you were presented with one of Dembski's papers by a street preacher and told "this paper proves God exists". How would you know that it was a fraud?

Fortunately, honestly mistaken maths differs greatly from actively misleading maths. In this case, the problem is obvious: the worst maths is no maths. If you read through Dembski's paper, you'll see that there's a mere handful of actual mathematical symbols per page, most of which are merely used to summarise Dembski's words rather than actually playing any part in proceedings.

This is a strong indicator that he's not actually proving very much. There are only two reasons for using words in a mathematical document: to explain stuff to students, and to pull the wool over people's eyes. Neither is appropriate for a mathematical paper. If you knew nothing about the mathematics in question, you'd still be able to spot this issue.

Further posts to come on this, when I'm feeling a little more coherent.

## 1 comment:

Well, I would say that it's far too strong to claim "There are only two reasons for using words in a mathematical document: to explain stuff to students, and to pull the wool over people's eyes." Most serious math papers I've seen (I'm a math undergraduate) have probably half the space in words, but mostly only to summarize the steps of the proof that should be easy for the reader to do. For example, "the proof follows exactly as in the previous lemma once you notice that..." or "an easy induction shows...".

However, the page will probably be at least half math, and the words will mostly be mathematical instruction, so your point stands: a "math paper" that is nearly all verbal discourse is probably bogus.
