Sunday, November 26, 2006

Quality in mysticism

The Tao that can be told is not the eternal Tao;
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one can see the manifestations.
These two spring from the same source but differ in name;
this appears as darkness.

Darkness within darkness.
The gate to all mystery.


- Tao Te Ching

Some people recognise as True the greatness inherent in the universe, the lack of distinction between subject and object, and other similarly fuzzy ideas. Some people think that all this is silly. From a Quality perspective, who is right?

The mystical defence mechanism

As a staunch member of the Reality-Based Community for many years now, I have a strong intuitive perception of mysticism as a load of fluff excreted by people who wish to protect their precious worldview from scrutiny.

And there's no doubt that, in some cases, this perception is accurate. It's interesting how many religious people, when pressed hard enough, spontaneously become post-modernists. The same holds for practitioners of many alternative medicines - the moment they're hit with a genuine evidence-based challenge to their therapies, they switch gears and start discussing the importance of spiritual health. Once the skeptic has wandered off, they promptly switch back again. It's similar to the way that squid shoot clouds of ink to confuse predators.

Why is this poor Quality? These people claim that their beliefs are part of the objective universe. For their behaviour in this respect to be high-Quality, they must consequently meet certain obligations that in practice are equivalent to showing their model is predictive. Not only do they not attempt to do this, they actively attempt to prevent anyone else managing it. Their bait-and-switch tactics merely add a dab of hypocrisy to this unpleasant cocktail.

Such behaviour is repugnant to reality-based individuals, with good reason. This variant of mysticism attempts to rhetorically undermine the reputation that evidence-based enquiry has legitimately earned for itself, and quite often succeeds. "Emergency mystics" of this sort are actively damaging the Quality of society as a whole in order to feed their personal delusions of objectivity. This is not a victimless crime - non-trivial numbers of people die in agony each year because they relied on a quack rather than seeking proper medical assistance.

What's outside the box?

I'm increasingly coming to believe, however, that carefully targeted mysticism can be an extremely effective tool. The act of temporarily shutting down the "reality checks" that keep us all moderately sane allows us to explore concepts further removed from our current understanding of the universe. At worst, that wider view helps us place our current stance in better context. At best, the search can lead us to powerful concepts that we'd never otherwise have discovered.

My views in this area are partially bolstered by objective evidence from Prof. Richard Wiseman, who has fairly conclusively determined that "luckiness" is tied to the ability to relax one's focus. He tested this by asking individuals who self-assessed as particularly "lucky" or "unlucky" to count the number of pictures in a newspaper he'd had specially produced. Placed on some of the pages were large adverts containing captions like "Tell the experimenter you spotted this and you'll win £100". Overwhelmingly, the lucky people spotted these; the unlucky people were so focused on the counting that they read straight past them.

As best I can tell, a certain amount of mysticism - be it Buddhist meditation, classical religion, New-Age spirituality or simply a sense of joy and wonder at the universe - can be a useful means to this end. It can snap us out of the daily grind, give us a new perspective on life, and return us refreshed, revitalised, and ready to improve the Quality of our world.

Verdict: mysticism is valid for some individuals. It's not necessarily valid for everyone, and its abuse should be avoided, but there's nothing intrinsically wrong with it if applied judiciously.

Not True For Everyone: Free Software and me

In my previous post, I described how configurability is something that techies love and cherish, and how they hate to be unable to get at the workings of a device. Given this, it's no surprise that many of us loathe most software with a fiery vengeance. Your average piece of commercial software is a "binary blob" - a string of ones and zeroes with no meaning to anyone but the computer. That represents a hideous restriction on the techie's ability to tinker with the thing.

Worse, it's a restriction that (in the strictest sense) is unnecessary. By publishing their source code, companies could solve this problem in an instant (they'd probably go out of business, but that's another issue). In fact, until about 30 years ago, that was precisely what usually happened with software. It wasn't until 1976, when Bill Gates sent out his infamous Open Letter to Hobbyists, that the concept of Intellectual Property really started to have an effect on software geeks.

Hard-core techies have begun to fight back. The first one to really go for the throat in this battle of cultures was a guy called Richard Stallman. In appearance and in habits he's pretty much the archetype of a techie, so it's no surprise that he should also have this drive to customise. After hitting a barrier with "closed-source" software on a printer (see here for details), Stallman struck back by creating a software license called the GPL that explicitly protected key freedoms for the user:

* The freedom to run the program, for any purpose (freedom 0).
* The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help your neighbor (freedom 2).
* The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

It has taken a long time to happen, but the GPL and its kindred "Free Software" licenses (that's free as in freedom, not free as in beer - you can usually make money selling Free Software) have started to make an impression. Maybe you've heard of Firefox? How about Apache, the most popular web server in the world? Even if you're not a geek, you might well have heard of Linux, the ultimate techie operating system. These are becoming ever more widespread. And techies the world over are rejoicing.

The Dangers of Evangelism

In fact, many of them are in some ways rejoicing too much. It's become a characteristic of the techie stereotype: we waffle on about Linux to people who aren't the least bit interested. It doesn't do any good to anyone, and it creates extremely poor Quality in our interactions with these people. Why don't we keep our powder dry, saving our spiel for those times when it can actually have an effect?

The answer is simple: we expect other people to think like us. It's intrinsically hard for human beings to realise that other people do not have the same balance of motivations as we do. In this case, techies assume that other people will have the same urge to proactively improve the Quality of their environment, the same drive to tinker with machines until the machines' behaviour matches their desires - and the same frustration when they're artificially shut off from those opportunities.

If you possess those qualities, Free Software will be an inherently interesting concept for you. But, if you don't, the beauty of this concept will simply not exist for you. "Free Software is wonderful" is not a statement that contains any truth for most people. When we evangelise to these people, we're burning up valuable time, energy - and Quality.

This is a general feature of evangelism. For example, Christians who truly feel the joy of Jesus's presence inside them naturally assume that the motivations that give Quality to this relationship apply to everyone. They don't. Not everyone feels the need to subsume themselves in the Holy Spirit, and many people (myself included) are rather disturbed by the idea and its effects.

The same goes for lovers of sports, politics and alternative therapies. When we evangelise without casting an eye towards this issue of Quality, we become bores.

Saturday, November 25, 2006

Poor Quality in action: unwanted assistance

As of half a year ago, I'm finally learning to drive. I've gotten to the point where it's actually safe to let me in a car with dual controls and, in a desperate attempt at learning to stall less before my practical test, I've been driving many miles a weekend with a parent in tow.

In the last few weeks, I've discovered many things that annoy me about driving. Assholes who think that learners are there to be dominated, worn-out road markings at roundabouts, the difficulty of doing ten things at once. But there's one thing, and only one, that's been really driving (ahem) me up the wall.

You know how indicator lights in modern cars automatically switch themselves off when you've finished turning? Yeah, that.

The problem is simple. When I indicate left whilst turning right (or vice versa), the indicator system assumes that I've made a horrible mistake, and helpfully corrects that mistake by deactivating itself. However, sometimes it's necessary to signal in that fashion, for example when coming off a small roundabout. In these circumstances, signalling the "wrong" way is exactly correct - so it always catches me by surprise when I find I actually need to hold the bloody lever down to keep the indicator on.

That's fairly annoying - but so is much of driving. Why should it be this that irritates me the most?

I've always had a deep loathing of wonderfully helpful little features that are impossible to disable. And it's actually a problem that shows up quite a lot. For example, my mobile phone is convinced that the first letter after any full stop should be a capital (thus completely ignoring the common use of abbrev. in txt communication). My suspicion is that this is a general feature of technically-minded people - just look at the backlash against Clippy.

It's interesting to consider why this should be so. Why are techies so averse to machines attempting to be too clever? It's almost as if we're one step away from being Luddites - why is this the case?

I think it all comes down to control. The driving passion of the technophile, the sheer joy of experimentation, comes from a strongly increased feeling of control over one's environment. In the human psyche, control is heavily linked with survival instincts - hence, for example, the elaborate rituals of human dominance - so it's easy for such a relationship with technology to become a focus of Quality. In other words, we get our kicks making machines do our bidding. What the hey, everyone needs a hobby.

To a geek, the idea of a helpful feature is sheer pleasure - but the existence of such a feature that you can't turn off is a studied insult. It's a deliberate, callous limitation of the techie's ability to shape the machine to their will. No matter how many blue flashing LEDs it has, such a creation is fundamentally poor Quality - a diseased device deliberately rendered incapable of fulfilling its owner's desires. A feature of this sort is worse than useless: it's actively repulsive.

Someday, maybe techies will have great enough market share that designers start considering issues like this. Until then, I guess I'm just going to have to put up with this bloody indicator.

Some things never change

OK, so I was out at the pub (or, rather, an array of pubs) with friends from work last night. At one point, we ran into a bunch of other people from work, one of whom was with a group of other friends.

And one of those other friends was someone I knew from my school days. Oh horror, I was about to be outed as a complete nerd in front of the people to whom I had been trying to project an image of normality.

Actually, that turned out not to be a problem - I think I managed to bluff my way through, with much manly handshaking and very little cowering in horror.

But then the guy from my work did something I honestly hadn't expected. "You know, Lifewish, that girl over there [one of the others from work] fancies you. Hey, Sarah*, come over here!"

How very schoolyard.

The problem is that, although I was exposed to this particular method of putting someone on the spot back in secondary school, I'm still not sure how to deal with it. I think I handled it at the time by glaring at the guy and turning to talk to someone else, but that just shows that he managed to get to me - a sign of weakness. It also leaves poor Sarah in the lurch somewhat.

I could have gone the other way, and played along. However, I'm honestly not sure I know the rules for that game well enough. And again that merely validates Andy's approach and leaves Sarah with the bill (metaphorically speaking).

I could have tried to turn the tables, for example by getting to Sarah first and telling her loudly that Andy was calling her over to say how much he fancied her. That's getting warmer in its probable effect on Andy, but means I'm actively contributing to the winding up of Sarah.

What I really need if I'm to achieve Quality in this area is a simple approach that will:
1) cut Andy off before he can mess Sarah about
2) leave Andy looking foolish
3) not require me to be an asshole

Any thoughts?

(Disclaimer: neither Andy nor I was remotely sober at the time, so it's always possible I've misunderstood him or his motivation - albeit IMO not likely. Regardless, it's still an interesting question.)

* Name changed to protect the probably innocent - it's highly unlikely that she does in fact fancy me, and even less likely that she'd tell Andy** about it. Actually it's infinitely more probable that she fancies Andy, the company heartthrob, which makes the whole debacle even nastier.

** The guilty deserve no protection. However, there are half a dozen Andys at my work, including some lovely chaps, so anyone who knows my secret identity should not draw conclusions.

Tuesday, November 21, 2006

Like a bat outta hell...

...I'll be gone in half an hour to get an early night.

I'm aware that that's not exactly what the song was referring to - but, as far as filling life to the full goes, I'm on a roll. In fact, I'm on a roll call. I've just started a new job and it's rather hectic.

Usefully, the job is at a company that has a very strong ethos of Quality. Hopefully I shall have much to pass on (in suitably generic terms, of course).

Hey, any company that can have me getting to bed by 10:00 must have something going for it :)

Thursday, November 02, 2006

What ID wasn't

A comment at a blog I frequent just gave me reason to search back through history to find a mock journal I once produced. The subject: what an actual mathematical discovery of Intelligent Design would have looked like. On rereading, it's mostly still good, so I'm copying it here for safe keeping.




January 2006: First draft of paper completed.

February 2006: Paper discussed with supervisor in depth. Minor alterations made to strengthen the argument with respect to some pathological cases. Supervisor points out that one section is irrelevant, so it's removed. Extra section added to eliminate previously unconsidered option.

March 2006: Paper now 100% ready. Submitted it to prestigious mathematics journal.

May 2006: Finally heard back from journal. Paper rejected by reviewer with a snarky comment about an error in section 3. Will get back to this once I've finished grading exam scripts.

June 2006: Paper corrected and resubmitted.

July 2006: Paper accepted.

August 2006: Paper finally published. Almost immediately am contacted by three people who think I'm badly wrong, but it turns out to be a result of a printer's error. Another objection comes in, this one valid. Get contacted by rather confused-sounding journalist - local newspaper prints short article. Remaining mail is mainly positive, includes compliments on novel use of Zorn's Lemma and a request that I give a seminar.

September 2006: Figure out how to bypass the valid objection, write up as paper, submit to journal. This one gets in without much trouble - the reviewer responded impressively quickly. Give seminar, attendees seem bored but perk up quickly when I mention the implications for bioinformatics. Much use of photocopier afterwards.

November 2006: New paper published, get mildly snowed under with email. Minor mention on BBC website. Start getting crank calls complaining about inferred attempt to "know the mind of God".

December 2006: Paper discussed on popular evolutionary biology forum, attempted rebuttal by resident mathematician. Engage in short debate about Axiom of Choice, opponent concedes defeat. Asked to give rundown on implications for biology. Asked to comment on genetic algorithms and evolutionary simulations, explain why result doesn't apply to these. Asked about falsifiability, explain how the approach used permits "interventions" to be pinned down and analysed. Implication is that detailed hypotheses can be developed - no "big tent" for us!

January 2007: Story picked up by New Scientist (they misspelled my name of course). Crank calls increase in volume. Am forced to get new email address. Research group formed to discuss results, seems promising. Story picked up by Guardian and Sun ("Egghead Explodes Evolution").

Two vitriolic attempted rebuttals, one on a biology blog claiming I'm wrong and one from the Design Institute complaining I'm stealing their idea. One of my new postgrads does a detailed rebuttal of the biology blog one, then writes the central thesis up as a formal paper. He'll go far, if he doesn't waste too much time on the internet.

Asked to speak in Cambridge, Oxford, Hull. Accept the Cambridge and Hull offers.

Accept offer to speak at local evangelical event. Get booed off stage after using phrases "outmoded superstition" and "put the Designer under the microscope".

March 2007: Research group expanded, having to turn away applicants. Promoted. Awarded three honorary doctorates. Still getting the crank calls, but not so much email. Postgrad figures out chemical signature apparently associated with "interventions". MIT researcher uses modified cladistic technique to pinpoint exact dates of interventions, turns out there are 6 detectable ones.

April 2007: Am contacted by geologist specialising in trace mineral concentrations of post-Cambrian strata, with comment about a strange hydrocarbon found at approximate dates of interventions. Do joint paper for Nature - accepted almost immediately.

June 2007: Contacted by NASA - apparently the trace hydrocarbon appears as a byproduct of an experimental low-orbit propulsion system they've been working on for two years. Asked to keep things under wraps for a couple of weeks til they can get the patent application sorted out.

Two new research groups formed to work parallel to our group. Friendly rivalry emerges, regular collaboration on papers defuses possible tension. Particular focus on what exact biological structures are results of intervention. Am happy to say that we regularly steal their best postgrads.

July 2007: Propulsion story leaked to Sun ("Alien Conspiracy!"). Seems everyone is talking about our research. Minor riots in several areas, university burned down in Iran (quote of the day: "they will either contradict the Quran, so they are heresy, or they will agree with it, so they are superfluous"). Research endorsed by Raelians, criticised by Scientologists (their objection is something about Xenu and volcanoes).

August 2007: Richard Dawkins sends me heartfelt gratitude for "proving me wrong all these years", plus early draft of new book "Of Alleles And Astronauts". Am apparently the subject of a fatwa by prominent Saudi cleric. More crank calls, death threats etc (much repetition of "I ain't no alien experiment").

Nominated for Fields Medal, Nobel Prize.

Early 2008: NASA funding increased, ESA funding increased, competition to be first in locating a Designer artifact heats up. International conference of biochemistry convened to discuss plausible techniques used by Designers. Conference a great success - "design signature" chemical turns out to effectively immobilise DNA, allowing for detailed biomolecular surgery. Nanoengineers gatecrash, start discussing applications for solar panels.

Mid 2008: Awarded professorship at Cambridge. Best part: no bloody undergrads to deal with. Take sabbatical before accepting position, work on book (no doubt it'll be outsold by Dawkins, but a man's gotta try). Title: "A Brief History of Slime". Give guest lectures in Japan, Russia, America, New Zealand. Disney attempts to buy movie rights, but I cut them off when they mention changing the name to "The Eternal Triangle".

Now have three full-time bodyguards to guard against:
a) pissed-off religious folk
b) freaked-out nonreligious folk
c) alien-related death cults

Late 2008: Alien structure discovered on Mars. Martial law declared in South Carolina in aftermath. Rush by world's engineers to examine the finds is intense. One engineer attempts to shorten the waiting list by judicious assassination.

One of the artifacts turns out to be some sort of faster-than-light conveyance. This is going to be interesting...

Postscript

I think I'll end there, given that it's getting quite long enough. I guess the moral of the story is that, if you prove that evolution didn't happen, it doesn't stop there. Saying "we've disproved evolution, yet we have absolutely no interest in trying to find out what did actually happen" is a fairly reprehensible attitude from anyone claiming to be a scientist.

Quite apart from anything else, this was extremely fun to write. There may be a science fiction story in it somewhere. Thanks to Andrew Rowell for the inspiration.

Quality in maths

I've covered plenty of ground on the issue of quality in science, so I'd like to take a brief moment to discuss something that I actually... uh... know a thing or two about: mathematics.

Mathematics is actually easier to understand than science, at least for anyone with some knowledge of basic logical discourse. In principle, it's quite simple: choose a bunch of axioms whose truth defines your system, recombine them to produce other true statements, and gradually work your way up to the conclusion you were after.

This description is perfect, apart from one thing: no real mathematician actually bothers to do that. Over the years a methodology has emerged that's considerably more practical and is essentially equivalent to this idealised approach. So failure to conform to the ideal is not itself evidence of poor Quality.

All maths is trivial

There is a mathematical theorem, in an area known as propositional calculus, which says that all mathematical theorems are tautological. Now, "tautological" is being used here in a far more rigorous sense than you or I would use it, but the impression this statement gives is useful in other ways - it does convey a vaguely accurate perception of what's going on.

A mathematical proof exists as a set of definitions combined with a series of logical steps. The hard part for a non-mathematician is that the definitions may use prepackaged terms ("let X be the dihedral group D_2n, and Y a Sylow subgroup of order p^m for m>1...") and the logic may skip large steps ("...then p divides n"). So in general it won't be possible for a non-mathematician to verify a proof without either learning a bit about the field or finding a helpful mathematician.

What would the mathematician do to verify the proof? Well, firstly, she would confirm she knew what all the definitions meant. Then she would go through the proof, figuring out which steps were obviously valid based on those definitions. Her response to the remaining few steps would be more interesting: she would break them down into smaller substeps.

This is where the "triviality" of mathematics comes into play. Large steps in logic are (in theory at least) mere shorthand for a whole bunch of shorter, more complex steps. So, for example, if I want to prove a result about a differential equation I could use a giant step taken from the field of Fourier analysis. This would break down into smaller steps from the internal logic of the same field, which would effectively be huge steps taken from the field of complex analysis. These would break down into smaller steps from the field of naive set theory, which IIRC can be justified in terms of axiomatic set theory, and so on...
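To make that concrete, here's a deliberately tiny illustration of my own (not taken from any particular paper): a step that a working mathematician would wave through as "obvious", unpacked into the smaller definitional steps it's shorthand for.

```latex
\documentclass{article}
\begin{document}

\textbf{Claim.} The sum of two even integers is even.

\textbf{One-line version (the ``obvious'' step).} Even plus even is even.

\textbf{Unpacked version.} Let $a$ and $b$ be even integers.
\begin{enumerate}
  \item By the definition of ``even'', there are integers $m$ and $n$ with $a = 2m$ and $b = 2n$.
  \item Hence $a + b = 2m + 2n$.
  \item By distributivity, $2m + 2n = 2(m + n)$.
  \item $m + n$ is an integer, since the integers are closed under addition.
  \item So $a + b$ is twice an integer, i.e.\ even by definition.
\end{enumerate}
Each of these sub-steps could be unpacked further still: ``the integers are
closed under addition'' eventually bottoms out in the construction of the
integers from set-theoretic axioms, which is the sense in which every step
is ultimately ``trivial''.

\end{document}
```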

What value does being a mathematician have in this circumstance? Why couldn't a layman do this? There are three reasons. Firstly, the mathematician will generally have some hard-earned intuition about the objects being defined, and will be able to "verify" many of the steps by sheer instinct. She would be able to verify them by hand if necessary, but that's often just a massive waste of time.

Secondly, the mathematician will know where to look for information about the remaining steps. She will have some idea of which journals to check, which individuals to ask, for further information about what the heck is going on.

Thirdly, in the event that a step is not fully explained in any literature, a mathematician will have the creative mathematical ability to work it out herself. But creating maths is a whole different kettle of fish from verifying it, so I won't go there.

How to recognise bad maths

All this doesn't really do much for the average Joe. When confronted with a piece of mathematics, "find a mathematician" may indeed be the best advice, but in situations where someone is trying to pull one over on you there often isn't time.

The classic example here is probably Intelligent Design, a variant of creationism. Its leading advocate, William Dembski, effectively attempts to reinvigorate the age-old Argument From Design by ruling out evolution as a source for much of the natural world. Say you were presented with one of Dembski's papers by a street preacher and told "this paper proves God exists". How would you know that it was a fraud?

Fortunately, honestly mistaken maths differs greatly from actively misleading maths. In this case, the problem is obvious: the worst maths is no maths. If you read through Dembski's paper, you'll see that there's a mere handful of actual mathematical symbols per page, most of which are used to summarise Dembski's words rather than actually playing any part in proceedings.

This is a strong indicator that he's not actually proving very much. There are only two reasons for using words in a mathematical document: to explain stuff to students, and to pull the wool over people's eyes. Neither is appropriate for a mathematical paper. If you knew nothing about the mathematics in question, you'd still be able to spot this issue.

Further posts to come on this, when I'm feeling a little more coherent.

Wednesday, November 01, 2006

What's with the "concrete" thing?

In a previous post on creationism, I used the word "concrete" when discussing the predictivity of scientific theories. What did I mean by that?

I gave one example of what a non-concrete prediction might look like. Many cranks have a habit of claiming that, since the dogmatic scientific community will be horribly undermined by their work, they're guaranteed to be laughed at. This prediction usually turns out to be true, which the crank claims supports a model of the world in which their notion is accurate and science is dogmatic.

Of course, the obvious response to this particular case is the famous Carl Sagan quote: "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright Brothers. But they also laughed at Bozo the Clown." However, there are some non-concrete claims that are less immediately laughable.

For example, many people present a variety of equally non-concrete evidences for God - a baby's smile, a sunset, a soaring bird. How, it's asked, can we see these things and not believe?

There are actually two answers to this. The more mundane one is that all this represents a rather impressive example of confirmation bias, the tendency to only remember half the story (in this case the good half). Incidentally, this form of bias is also the primary factor behind Murphy's law - it seems like toast always drops butter-side-down because, when it lands nicely, we don't feel annoyed enough to remember about it.

So, for example, the soaring bird is beautiful - but its subsequent evisceration by an unexpected hawk is less so. The baby's smile is lovely - but the other end of the baby is decidedly less pleasant. Sunsets can be gorgeous - until we recall that their vibrant colours are largely due to air pollution.

The second, more philosophically interesting, point is: it's not the baby, the sunset, the bird that's actually being offered as evidence. There is no logical progression from "baby smiles exist" to "God exists". What's really being presented as evidence is the intuitive beliefs that beauty exists and that beauty implies God.

Why can't we just accept that as scientifically useful? Because we have a long history of discovering that intuitive notions are wrong. The sky is not a big blue wall with holes in for rain and starlight. The Sun is not a small whizzy ball that spins round the Earth. Solid objects are actually probabilistic wavefunctions. Heck, space isn't even Euclidean. And yet, at various points in history, people have been sure of each of these to the point that expressing the opposite view would earn you a trip to the nuthouse*.

Given this repeated history of failure of "gut feel", why should the argument suddenly have far more credibility just because it's got the word "God" in it?

Disclaimer: as always, none of this means that God can't be considered real for an individual, or even real for the human species as a whole. It just means that the concept, as usually stated, doesn't necessarily meet the criteria to be considered real for any intelligent lifeform.

* In the case of the last two items on the list, these nuthouses still exist - they're called "universities"...

Hypothesis testing

Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.
- Richard Feynman

Reading list: Science as Falsification - Karl Popper

Back in the time of the Greeks, "science" as it existed then proceeded almost entirely by abstract thought. Needless to say, this was not always terribly effective - thought will simply tell you whether your model is consistent, not whether it's accurate. As a result of these inadequacies, a new approach called empiricism was developed, which emphasised the importance of evidence.

But how to actually apply that evidence? With sufficient rationalisation, it's possible to make any data set fit any model - how to avoid the tendency to make ad hoc excuses?

The conclusion that scientists came to was very concordant with my thoughts on usefulness. They decided that the ultimate arbiter of truth (or at least of accuracy) would be the predictive power of an hypothesis. It may be possible to contort even as daft a notion as creationism to fit the evidence, but only at the expense of removing every last scrap of its predictive power.

Thus, the "hypothesis testing" approach was formalised. Generally it goes something like:

1) Find an area to study
2) Gather data from that area and look for patterns
3) Explicitly state an hypothesis
4) Derive concrete testable predictions from that hypothesis
5) Test the predictions
5a) If they're correct, go back to step 4 and carry on testing
5b) If they're incorrect, choose another hypothesis and go back to step 3

One thing you'll notice about this sequence is that there's no actual exit condition - there's no "6) Congratulations, your hypothesis is true". So how does the scientific community determine when an hypothesis is sufficiently well-tested? The threshold is inherently subjective here, but usually the scientific community eventually decides as a group that an hypothesis is firmly established enough to be taken as a given, at least provisionally.
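To make the loop concrete, here's a toy sketch in code. It's entirely my own construction - the hidden rule and the candidate hypotheses are invented for illustration, not drawn from any real science - but it mechanises steps 3 to 5 against a miniature "universe":

```python
import random

# A toy "universe": the experimenter never gets to see this function,
# only the observations it produces.
def hidden_rule(x):
    return 3 * x + 1

# Step 3: explicitly state some candidate hypotheses, as predictive models.
hypotheses = {
    "y = 2x":     lambda x: 2 * x,
    "y = 3x":     lambda x: 3 * x,
    "y = 3x + 1": lambda x: 3 * x + 1,
}

for name, model in hypotheses.items():
    falsified = False
    for _ in range(100):
        x = random.randint(-1000, 1000)
        prediction = model(x)         # Step 4: derive a concrete, testable prediction
        observation = hidden_rule(x)  # Step 5: perform the "experiment"
        if prediction != observation:
            falsified = True          # Step 5b: the hypothesis is falsified - discard it
            break
    # Step 5a: repeated failure to falsify means the hypothesis is kept, provisionally.
    status = "falsified" if falsified else "not yet falsified"
    print(f"{name}: {status}")
```

Note that even the surviving hypothesis is never proven true - it has merely failed to be falsified by every test thrown at it so far, which is exactly the missing "step 6" described above.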

Falsifiability

So far I've treated the scientific method as a comparison of respective predictivities. There is, however, another paradigm worth considering: Popperian falsifiability.

Karl Popper's notion was that science proceeds by a process of weeding out the less accurate hypotheses. So, for example, Aristotle's theory of gravity was proven to be less accurate than Newton's, which was in turn proven less accurate than Einstein's. This approach has two advantages: firstly, it provides a very precise linguistic framework for understanding these principles, and secondly, it throws up an interesting analogy.

The linguistic framework looks like this:
1) A conjecture is any claim about the universe
2) A falsifiable conjecture, aka an hypothesis, is one that can be disproven (e.g. "unicorns don't exist" could be disproven by finding a unicorn)
3) A verifiable conjecture is one that can be proven true (e.g. "unicorns exist" - the negation of a falsifiable conjecture is verifiable)
4) A testable conjecture, aka a prediction, is one that is both verifiable and falsifiable

The problem that science solves is: there is no way to demonstrate an hypothesis to be true. The solution is to stop worrying about truth and start worrying about accuracy. It's very easy to demonstrate an hypothesis accurate, by logically deriving predictions from it and testing them - in other words, by attempting to falsify them. Repeated failure to be falsified can be taken as a sign that an hypothesis is representative of the universe.

Now, one thing that's interesting to note is that this is directly equivalent to standard processes of biological evolution. In both cases, new variants of an object (DNA or hypotheses) are created pretty much at random, and the useful ones are retained whilst the less effective ones are discarded. Thus, the state of the scientific art tends towards a more accurate model of the universe, just as species tend towards a more efficient genome. In this sense, the goal of the scientific community is simply to create an environment in which the survival of an idea is proportional to its scientific usefulness.
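For what it's worth, the analogy can be pushed surprisingly far in code. The sketch below is my own toy construction - the "hidden law" and the population of numerical "hypotheses" are invented for illustration: candidate models are varied at random, and the ones whose predictions best fit the observations survive into the next generation.

```python
import random

def hidden_law(x):
    # The "universe" whose behaviour the candidate models are trying to capture.
    return 9.81 * x

def prediction_error(constant):
    # A model's "fitness" is how badly its predictions miss fresh observations.
    xs = [random.uniform(0.0, 10.0) for _ in range(50)]
    return sum(abs(constant * x - hidden_law(x)) for x in xs)

# An arbitrary starting population of candidate "hypotheses" (here, just constants).
population = [random.uniform(0.0, 20.0) for _ in range(20)]

for generation in range(200):
    # Random variation: every candidate spawns slightly mutated offspring...
    offspring = [c + random.gauss(0.0, 0.1) for c in population for _ in range(2)]
    # ...and selection keeps only the candidates that predict observations best.
    population = sorted(population + offspring, key=prediction_error)[:20]

print(f"Best surviving constant: {population[0]:.3f} (the hidden law uses 9.81)")
```

Neither the variation step nor the selection step "knows" the right answer in advance; accuracy accumulates simply because inaccurate candidates don't get to survive - the environment does all the filtering, just as the scientific community does for ideas.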

Which is why scientists get so annoyed when politicians and preachers attempt to dictate reality by governmental fiat. But that's another issue.

(Oh, and: so much for my attempt to write in a more abbreviated style, huh?)