Sunday, December 24, 2006

Merry... uh...

There's one major problem with being an atheist: what the hellblazes do you wish people at all those wonderfully Christian festivals?

Let's get this clear: I respect people's religious rights to the hilt, and will happily beat round the head any idiot who thinks that Christmas is bad per se. However, the whole central concept of Christmas - a baby being born after artificial insemination by a deity, with angels and astronomically-implausible phenomena hanging overhead - is very much against my personal reality-based ideals. Quite frankly it's daft, and it's increasingly the case that self-oblivious daftness bugs the bejeezus out of me. Even before you throw in the commercialism, Christmas to me is the festival equivalent of the infamous three plaster ducks.

Of course I'm not the only person who feels like this. Many atheists are irritated by being forced into association with someone else's festival, so let's look at what they do.

Approach 1: Commercialism is the reason for the season

Proponent: Dawkins

The first approach amongst atheists is to unhesitatingly accept all the trappings of Christmas, but in addition to accept something that Christians have been complaining about for years: that Christmas seems to be getting less oriented around Christianity every year. The result is a sense of smug satisfaction without having to worry about whether it's acceptable to break out the Christmas Pudding.

On a side-note, I'd like to ask those complaining Christians: who do you think you're kidding? If you're going to "borrow" random holidays from older religions, and at a time of year when your story couldn't possibly have happened to boot, you have very little grounds to grumble when said holidays are repurloined. Honestly, it's worse than Disney ripping off random fairytales and then whining about copyright not being strict enough.

Rant over, next approach.

Approach 2: Cthulhu is the reason for the season

Proponent: PZ Myers

The next methodology takes the above idea about Christmas not being Christian one step further, by picking a completely new name for an event that just happens to roughly coincide with the better-known festival. Examples are Cephalopodmas, Wintereenmas, Solstice, Festivus, Ramahanukwanzmas and probably dozens of others.

Many of these provide a wonderful biting commentary on various aspects of Christmas, not least its complete and unconscionable lack of squid, but to be quite frank they all sound quite, quite lame. Not to mention as fake as a tin shilling.

Approach 3: Axial Tilt is the reason for the season

Proponent: No-one I know, although there is a cool t-shirt

This version challenges the underlying premise of the above two arguments: that we need to slap labels on any attempt to be merry and generous. This assertion is clearly silly, and yet so many people take it as a given (admittedly mostly at the instigation of that scourge of wallets, the greeting card industry). Why not just have fun?

Go visit your relatives not in celebration of one particular baby out of the billions who have been born in December, but because you love them and want them to be happy during this frankly rather gloomy time of year. Go sledding not from any sense of obligation, but because sliding down snow is great fun and probably won't be possible once global warming really kicks in. Buy people as many presents as your heart desires and your bank account can cover, but don't forget them the other 364.242375 days of the year. Stick up a Christmas tree not because you need to compete with the Joneses next door and their indoor redwood, but because it's hilarious to watch your cat trying to mug the fairy on top.

In short: treat December 25th not as some bizarrely different chunk of time, but as a linchpin in your plans to fill the entire year with as much happiness as possible. And as a chance to get completely plastered, of course.

Saturday, December 16, 2006

Software that grows as it goes

In a previous post, I discussed the key principle of Free Software: that software should be freely modifiable and redistributable.

Now I'd like to discuss a completely unrelated concept called Open Source. Its key principle: that software should be freely modifiable and redistributable.

The Open Source movement is based on the observation that, in some contexts, an appropriately-structured but otherwise freewheeling community of developers can produce some damn good products. The key to this result is a sort of ratchet effect - if any individual can take effective ownership of the code, then said code can only get better. Improvements will be grabbed by everyone else; crappy code will be dropped and eventually disappear from the communal pool.

Of course, this is a major simplification of the process of software development, but it's close enough to reality that the Open Source community is doing very well for itself, thank you very much. The concept of "forking" - creating duplicates of the codebase, which can then be worked on separately - remains pretty much a nuclear option, and is rarely used. But the availability of this option enforces passable development standards, and where it has been used (for example in the case of the XFree86 X server) the result has been massively increased levels of innovation and quality control.

In fact, this development model has been so successful that many large companies such as IBM are increasingly supporting Open Source communities by seeding them with paid developers. The companies get software that fits their needs more completely, plus a lot of goodwill; everyone else gets better software. And they only have to pay for the developers once, rather than having to continually spend fortunes on Microsoft software licenses. Everyone's happy, except for the closed-source companies.

Like any other model, Open Source has its strengths and weaknesses. The biggest problem is the chicken-and-egg issue of getting buy-in for any truly revolutionary ideas you may have - until those ideas have debuted, it's hard for potential contributors to realise how valuable they could be. Closed source, as far as I can tell, is a lot better at innovation for this reason.

However, the corresponding strength is very powerful. Open Source's advantage is that the development community scales in proportion to the user community: the more users you get, the more contributors will emerge. As a result, already-solid products such as the Apache web server, the Firefox web browser and the Linux operating system can be honed to a level of quality that wouldn't be easily possible for a closed-source company. Open Source can be envisaged as a rising tide: it eventually overtakes closed source, but leaves a big gap open for new ideas.

How does this differ from Free Software? Well, they both focus on Quality in software arising from availability of the source code, and are completely compatible in terms of artefacts, processes and philosophies, but beyond that the similarity ends. Free Software is fundamentally a matter of civil rights - FSers are the libertarians of the software world. Open Source, by contrast, is simply keen on this neat development model that has been stumbled upon. FSers see OSers as sellouts who miss the point; OSers see FSers as uncompromising zealots.

So which am I? Really I'm neither - I have yet to contribute anything to the FOSS (Free/Open Source Software) community except in the most tangential fashion. In another sense I'm both - I agree with both groups. However, if I had to pick one, I'd probably go for Free Software. It's my computer and it'll do what I tell it to and like it, dammit!

The experiment continues

In a previous post, I discussed how applying a minuscule amount of thought to the problem of stalling the car served to provide me with a solution. I want to start using this approach a lot more. My new target: mornings.

I'm fundamentally bad at mornings. Generally my brain doesn't really wake up until at least a couple of hours in. That's why, for my new job, I've been getting up earlier than strictly necessary, with the aim of getting into work half an hour before it actually starts. That way I don't need to feel guilty if all I'm good for during said half hour is reading a book.

It also provides a useful buffer against oversleeping, and allows me to notice whether I'm starting to slip up before it becomes an issue. That warning light is currently flashing.

The desired timeline is: get up at 6:30, leave the house at 7:20, get into work at 8:00 for an official start of 8:30. Redundancy is built into this approach at two points: the long period before I have to leave the house, and the half hour that I'm early by. It's the former of these that's starting to be a problem. The short version is: I keep sleeping on until about 6:45. I don't like it.

This quarter of an hour represents an interesting issue of quality. Whilst it's actually fairly unnecessary as far as scheduling is concerned, it's vitally important to me as a representation of my commitment to professionalism at work. If I lose that momentum, I'll be headed straight down the extremely slippery slope to becoming a complete slob.

What's supposed to happen is that I get woken up by the alarm, turn it off, get out of bed and get in the shower. What's actually happening is that I get woken up, turn the alarm off, and fall asleep again. I usually wake up again in under 20 minutes - my subconscious apparently knows that that's the point at which I do actually need to get up. I have a very constraint-driven subconscious. I wish I could make it more goal-driven, but that's not a viable solution in the short term.

In the brainstorming process, I've come up with the following broad categories of idea:
1) Make myself less likely to sleep past the alarm
2) Make the alarm less easy to sleep past
3) Mitigate the effects of sleeping past the alarm

The concrete ideas on the table for idea 1 are:
a) Get more sleep (i.e. get to bed earlier)
b) Figure out ways to make my sleep more re-energising (e.g. exercise before bed, fine-tune the bedroom temperature, get a new mattress, etc)
c) Figure out ways to trick my body into thinking it's been re-energised (e.g. attempt to fine-tune my bedtime so that at 6:30 I'm in REM sleep)

For idea 2:
a) Make the alarm require a bit more thought and/or effort to turn off (e.g. put it on the other side of the room, find an alarm which is trickier to disable, etc)
b) Increase the alarm's impact (e.g. make it louder, make it more intrusive, wire electrodes to it and attach them to my body, etc)

For idea 3:
a) Set the alarm earlier
b) Set more than one alarm

Of these, 1a is already being played to the hilt (although I should definitely keep in mind the importance of early nights). 1c is probably intractable in the short term, although at some point I might want to review the relevant scientific literature. 2b is probably undesirable - loud alarms wake the family, annoying alarms leave me feeling disgruntled all morning, electrodes leave burn marks etc. 3a is rather incompatible with the goal of 1a. That leaves 1b, 2a, and 3b.

I therefore propose the following response:
- Try exercising before bed
- Actually bother to learn how to operate the thermostat
- Investigate the comparative softness of various mattresses, and attempt to estimate how they would affect my sleep
- Relocate the existing alarm
- Investigate getting a more complicated alarm (possibly as a second alarm)

Hopefully this'll all have the desired effect.

A life of Quality

I just found out today that a guy who studied maths with me at university died a couple of weeks ago. Technically I'm only a couple of days out of the loop - his body was only just found.

Despite being in the same year, at the same college and studying the same subject, I didn't know Daniel terribly well as a person. I generally hung out with a group of maths students who could legitimately be termed the slackers of the department (to the extent that anyone at Cambridge can be considered a slacker); Daniel was substantially more driven about his studies. His hobbies were rowing and choir; mine were substantially more eclectic. He spent many an evening in the college bar; in some ways I never fully integrated into my college, and as a result I preferred town pubs.

So most of what I know about him was very shallow and scarce - barely more than what was covered in the obituary. I never knew he was interested in philosophy or computer science, two areas which I also find fascinating. If we'd chatted in more depth, would we have hit it off more? Did I miss an opportunity to make a friend? Heck, would he even have wanted to be my friend? My behaviour at university was often very immature, and my level of personal Quality is in many ways still shocking - am I someone that Daniel could have respected enough to form a bond with?

If not, I hope I can someday be such an individual. Daniel was a very conscientious person, with no fear of hard work and a great deal of raw talent. He was quiet but friendly, the proverbial Nice Guy, and tended to get along with other students of all stripes. In many ways, he was the man I'd like to become.

As an atheist, I have no great expectation of being able, in 50 years or so, to meet Daniel again on the Other Side and impress him with my growth as a person. I don't believe that he's up there now, singing in the heavenly choir or strumming a harp. What I do believe, though, is that he took the time he had and built an awesome life from it. I hope some day I can say the same.

Sunday, December 10, 2006

Driving update

The experiment appears to have worked. By thinking through my driving style carefully, I seem to have managed to stop myself stalling repeatedly. Certainly it hasn't happened since I wrote my last post on the subject.

So just throwing a little brainpower into the mix can be unbelievably effective at improving the Quality of my passage through life. In some ways I find this rather disturbing - how many other frustrations are there that I could easily cure with just a little thought? How much of my life have I wasted on a stimulus/response approach to affairs? How far have I been off the path of true virtue?

Speaking of virtue, I've lately been reading an abridged edition of Gibbon's seminal "The Decline and Fall of the Roman Empire". I can thoroughly recommend it - the guy manages to squeeze a truly impressive amount of passion into his words. It's basically an historically accurate polemic - and Gibbon's focus is on virtue and its loss. Very inspiring, especially when taken with a side-order of the film "Gladiator" :)

Monday, December 04, 2006

False assumptions and girl-chasing

One of the interestingly constant things about males worldwide is that they tend to prefer women who are a couple of years younger than them. There are two explanations for this: the "why?" and the "why not?". I'll take the latter first.

When mankind's ancestors moved down from the trees and onto the plains, their social structure changed rather dramatically. Originally we probably lived in chimplike tribes, with complete promiscuity being the rule. Out on the plains, a monogamous arrangement proved more appropriate. If you want more detail, I recommend the book "Genome" by Matt Ridley.

One of the side-effects of this was that men developed a propensity to choose younger mates where possible, as this would increase the overall breeding lifespan of the relationship and hence the number of kids they could have. Obviously none of this was conscious - propensities for "marrying" older women were just selected against. That's our "why not?" answer.*

The "why?" answer is in many ways more interesting. In most men, as best I can tell, the propensity to chase women a few years younger than oneself manifests itself as a dominance issue of sorts. To my understanding, men have developed the tendency to believe that younger women will be willing to take a more submissive role in the relationship, thus providing their boyfriend/husband/whatever with a much-desired ego-stroking.

To be fair, this belief actually has a small dose of truth to it in our younger years, so it's not that surprising that men should take some time to shake the habit. But in general, although in some cases women will be flattered by the attention of an older male, a dominant girl will be dominant no matter what age her other half is. When men daydream about having some pretty young thing gaze adoringly up at them, they fool themselves. In reality, they're more likely to be set to work putting up shelves and mowing the lawn.**

One common adage in geek culture is: never attempt a technological solution to a social problem. Men appear to be making a similar sort of mistake - attempting to compensate for concerns about their social status by applying the fantastical magic bullet that is this hypothetical submissive younger woman.

In reality there is no magic bullet, and I suspect that part of growing up is realising that. The only solution for men is to meet the challenge head-on. Wish us luck.


* It's also related to one possible cause for human intelligence, but that's a different issue entirely.

** Yes, this is horribly stereotyped. I'm trying to write catchy prose here, not the frickin' manifesto of the Women's Freedom League.

Sunday, December 03, 2006


I'm really starting to dislike driving. I've been learning with an instructor for about a year now (with a few long breaks), and it's all been generally OK up til recently.

What's changed is that I've been put on my parents' car insurance, so I can practice driving (with them supervising) at weekends and evenings. The problem is that the parental car behaves very differently from the instructor's car. The short version: it stalls a hell of a lot more easily.

Typically, it does this at roundabouts, and (in obedience to the inviolable dictates of Murphy's Law) when I have a long queue of assholes behind me beeping at me. Often, the pressure is just enough to make me respond instinctively, and I hammer the foot pedals in a fashion that, in my instructor's car, would cause major G-forces.

In my parents' car, it stalls me again.

By this point I'm somewhat freaked, but, being me, I still try to apply basic problem analysis - figuring out which thing I'm doing wrong. The problem is that, by this point, the number of things I'm doing wrong is so large that fault isolation is impossible. And typically the attempt to figure things out stalls me another four or five times.

Eventually I normally get it right (as much by fluke as anything), but by this point I'm mentally exhausted and sobbing with frustration. Remember how I said in the posts on geekhood that lack of control over my environment is a turn-off for me? Now compare that with lack of control over my own brain...

This situation is a fairly classic example of one of the most significant historical counterarguments to utilitarianism: what do you do when pleasure-seeking itself causes displeasure? Every step in my process for attempting to un-stall is at least vaguely sensible in the short term, but the overall effect is to really upset me.

The response to this counterargument is equally classic: if your pleasure-seeking is getting in the way of your pleasure, the fault is with the seeking not the goal. If you're tripping over your own feet, it's a sign that you're not thinking long-term enough.

I've come to the conclusion that I can best improve the Quality of my interaction with the other drivers by the paradoxical approach of not giving a damn about what they think. In future, I'll try not to get stressed out, not to let the honking horns and the worries about my own discourtesy push me over the edge into panic. Whilst in the long term I may be focused on making these people's lives easier, in the short term I'll be giving them a big metaphorical two fingers.

In "Zen and the Art of Motorcycle Maintenance", Pirsig draws a line between romantic and classic quality - the Quality of form versus the Quality of function. In this case, I've become far too focused on the poor romantic quality of ignoring the other drivers, thus ignoring the strong classical quality of this attitude as a means to getting the hell away from the roundabout. In a way, this is the same mistake that quacks and cranks and other non-reality-based individuals make - they base their long-term assessments ("I'll take homeopathy as my cancer treatment") on immediate stimuli ("I really don't like chemotherapy"). The result inevitably has overall poor Quality.

For me, that stops now.

Sunday, November 26, 2006

Quality in mysticism

The Tao that can be told is not the eternal Tao;
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one can see the manifestations.
These two spring from the same source but differ in name;
this appears as darkness.

Darkness within darkness.
The gate to all mystery.

- Tao Te Ching

Some people recognise as True the greatness inherent in the universe, the lack of distinction between subject and object, and other similarly fuzzy ideas. Some people think that all this is silly. From a Quality perspective, who is right?

The Mystical defense mechanism

As a staunch member of the Reality-Based Community for many years now, I have a strong intuitive perception of mysticism as a load of fluff excreted by people who wish to protect their precious worldview from scrutiny.

And there's no doubt that, in some cases, this perception is accurate. It's interesting how many religious people, when pressed hard enough, spontaneously become post-modernists. The same holds for practitioners of many alternative medicines - the moment they're hit with a genuine evidence-based challenge to their therapies, they switch gears and start discussing the importance of spiritual health. Once the skeptic has wandered off, they promptly switch back again. It's similar to the way that squid shoot clouds of ink to confuse predators.

Why is this poor Quality? These people claim that their beliefs are part of the objective universe. For their behaviour in this respect to be high-Quality, they must consequently meet certain obligations that in practice are equivalent to showing their model is predictive. Not only do they not attempt to do this, they actively attempt to prevent anyone else managing it. Their bait-and-switch tactics merely add a dab of hypocrisy to this unpleasant cocktail.

Such behaviour is repugnant to reality-based individuals, with good reason. This variant of mysticism attempts to rhetorically undermine the reputation that evidence-based enquiry has legitimately earned for itself, and quite often succeeds. "Emergency mystics" of this sort are actively damaging the Quality of society as a whole in order to feed their personal delusions of objectivity. This is not a victimless crime - non-trivial numbers of people die in agony each year because they relied on a quack rather than seeking proper medical assistance.

What's outside the box?

I'm increasingly coming to believe, however, that carefully targeted mysticism can be an extremely effective tool. The act of temporarily shutting down the "reality checks" that keep us all moderately sane allows us to explore concepts further removed from our current understanding of the universe. At worst, that wider view helps us place our current stance in better context. At best, the search can lead us to powerful concepts that we'd never otherwise have discovered.

My views in this area are partially bolstered by objective evidence from Prof. Richard Wiseman, who has fairly conclusively determined that "luckiness" is tied to the ability to relax one's focus. He tested this by asking individuals who self-assessed as particularly "lucky" or "unlucky" to count the number of pictures in a newspaper he'd had specially produced. Placed on some of the pages were large adverts containing captions like "Tell the experimenter you spotted this and you'll win £100". Overwhelmingly, the lucky people spotted these; the unlucky people were so focused on the counting that they read straight past them.

As best I can tell, a certain amount of mysticism - be it Buddhist meditation, classical religion, New-Age spirituality or simply a sense of joy and wonder at the universe - can be a useful means to this end. It can snap us out of the daily grind, give us a new perspective on life, and return us refreshed, revitalised, and ready to improve the Quality of our world.

Verdict: mysticism is valid for some individuals. It's not necessarily valid for everyone, and its abuse should be avoided, but there's nothing intrinsically wrong with it if applied judiciously.

Not True For Everyone: Free Software and me

In my previous post, I described how configurability is something that techies love and cherish, and how they hate to be unable to get at the workings of a device. Given this, it's no surprise that many of us loathe most software with a fiery vengeance. Your average piece of commercial software is a "binary blob" - a string of ones and zeroes with no meaning to anyone but the computer. That represents a hideous restriction on the techie's ability to tinker with the thing.

Worse, it's a restriction that (in the strictest sense) is unnecessary. By publishing their source code, companies could solve this problem in an instant (they'd probably go out of business, but that's another issue). In fact, until about 30 years ago, that was precisely what usually happened with software. It wasn't until 1976, when Bill Gates sent out his infamous Open Letter to Hobbyists, that the concept of Intellectual Property really started to have an effect on software geeks.

Hard-core techies have begun to fight back. The first one to really go for the throat in this battle of cultures was a guy called Richard Stallman. In appearance and in habits he's pretty much the archetype of a techie, so it's no surprise that he should also have this drive to customise. After hitting a barrier with "closed-source" software on a printer (see here for details), Stallman struck back by creating a software license called the GPL that explicitly protected key freedoms for the user:

* The freedom to run the program, for any purpose (freedom 0).
* The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help your neighbor (freedom 2).
* The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.
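To make those freedoms concrete: in practice, releasing a program under the GPL mostly boils down to shipping the full license text alongside the source and putting a short notice at the top of each file. The wording below is paraphrased from the FSF's suggested notice for GPL version 2 (the name and year are placeholders, not from any real project):

```text
Copyright (C) 2006  J. Random Hacker

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
```

Anyone who receives a copy under those terms inherits the same four freedoms, which is exactly the ratchet that keeps Free Software free.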

It has taken a long time to happen, but the GPL and its kindred "Free Software" licenses (that's free as in freedom, not free as in beer - you can usually make money selling Free Software) have started to make an impression. Maybe you've heard of Firefox? How about Apache, the most popular web server in the world? Even if you're not a geek, you might well have heard of Linux, the ultimate techie operating system. These are becoming ever more widespread. And techies the world over are rejoicing.

The Dangers of Evangelism

In fact, many of them are in some ways rejoicing too much. It's become a characteristic of the techie stereotype: we waffle on about Linux to people who aren't the least bit interested. It doesn't do any good to anyone, and it creates extremely poor Quality in our interactions with these people. Why don't we keep our powder dry, saving our spiel for those times when it can actually have an effect?

The answer is simple: we expect other people to think like us. It's intrinsically hard for human beings to realise that other people do not have the same balance of motivations as we do. In this case, techies assume that other people will have the same urge to proactively improve the Quality of their environment, the same drive to tinker with machines until their behaviour matches their desires - and the same frustration when they're artificially shut off from those opportunities.

If you possess those qualities, Free Software will be an inherently interesting concept for you. But, if you don't, the beauty of this concept will simply not exist for you. "Free Software is wonderful" is not a statement that contains any truth for most people. When we evangelise to these people, we're burning up valuable time, energy - and Quality.

This is a general feature of evangelism. For example, Christians who truly feel the joy of Jesus's presence inside them naturally assume that the motivations that give Quality to this relationship apply to everyone. They don't. Not everyone feels the need to subsume themselves in the Holy Spirit, and many people (myself included) are rather disturbed by the idea and its effects.

The same goes for lovers of sports, politics and alternative therapies. When we evangelise without casting an eye towards this issue of Quality, we become bores.

Saturday, November 25, 2006

Poor Quality in action: unwanted assistance

As of half a year ago, I'm finally learning to drive. I've gotten to the point where it's actually safe to let me in a car without dual controls and, in a desperate attempt at learning to stall less before my practical test, I've been driving many miles a weekend with a parent in tow.

In the last few weeks, I've discovered many things that annoy me about driving. Assholes who think that learners are there to be dominated, worn-out road markings at roundabouts, the difficulty of doing ten things at once. But there's one thing, and only one, that's been really driving (ahem) me up the wall.

You know how indicator lights in modern cars automatically switch themselves off when you've finished turning? Yeah, that.

The problem is simple. When I indicate left whilst turning right (or vice versa), the indicator system assumes that I've made a horrible mistake, and helpfully corrects that mistake by deactivating itself. However, sometimes it's necessary to signal in that fashion, for example when coming off a small roundabout. In these circumstances, signalling the "wrong" way is exactly correct - so it always catches me by surprise when I find I actually need to hold the bloody lever down to keep the indicator on.

That's fairly annoying - but so is much of driving. Why should it be this that irritates me the most?

I've always had a deep loathing of wonderfully helpful little features that are impossible to disable. And it's actually a problem that shows up quite a lot. For example, my mobile phone is convinced that the first letter after any full stop should be a capital (thus completely ignoring the common use of abbrev. in txt communication). My suspicion is that this is a general feature of technically-minded people - just look at the backlash against Clippy.

It's interesting to consider why this should be so. Why are techies so averse to machines attempting to be too clever? It's almost as if we're one step away from being Luddites - why is this the case?

I think it all comes down to control. The driving passion of the technophile, the sheer joy of experimentation, comes from a strongly increased feeling of control over one's environment. In the human psyche, control is heavily linked with survival instincts - hence, for example, the elaborate rituals of human dominance - so it's easy for such a relationship with technology to become a focus of Quality. In other words, we get our kicks making machines do our bidding. What the hey, everyone needs a hobby.

To a geek, the idea of a helpful feature is sheer pleasure - but the existence of such a feature that you can't turn off is a studied insult. It's a deliberate, callous limitation of the techie's ability to shape the machine to their will. No matter how many blue flashing LEDs it has, such a creation is fundamentally poor Quality - a diseased device deliberately rendered incapable of fulfilling its owner's desires. A feature of this sort is worse than useless: it's actively repulsive.

Someday, maybe techies will have great enough market share that designers start considering issues like this. Until then, I guess I'm just going to have to put up with this bloody indicator.
Read the full post

Some things never change

OK, so I was out at the pub (or, rather, an array of pubs) with friends from work last night. At one point, we ran into a bunch of other people from work, one of whom was with a group of other friends.

And one of those other friends was someone I knew from my school days. Oh horror, I was about to be outed as a complete nerd in front of the people to whom I had been trying to project an image of normality.

Actually, that turned out not to be a problem - I think I managed to bluff my way through, with much manly handshaking and very little cowering in horror.

But then the guy from my work did something I honestly hadn't expected. "You know, Lifewish, that girl over there [one of the others from work] fancies you. Hey, Sarah*, come over here!"

How very schoolyard.

The problem, though, is that, while I may have been exposed to this particular method of putting someone on the spot whilst I was in secondary school, I'm still not sure how to deal with it. I think I handled it at the time by glaring at the guy and turning to talk to someone else, but that just shows that he managed to get to me - a sign of weakness. It also leaves poor Sarah in the lurch somewhat.

I could have gone the other way, and played along. However, I'm honestly not sure I know the rules for that game well enough. And again that merely validates Andy's approach and leaves Sarah with the bill (metaphorically speaking).

I could have tried to turn the tables, for example by getting to Sarah first and telling her loudly that Andy was calling her over to say how much he fancied her. That's getting warmer in its probable effect on Andy, but means I'm actively contributing to the winding up of Sarah.

What I really need if I'm to achieve Quality in this area is a simple approach that will:
1) cut Andy off before he can mess Sarah about
2) leave Andy looking foolish
3) not require me to be an asshole

Any thoughts?

(Disclaimer: neither Andy nor I was remotely sober at the time, so it's always possible I've misunderstood him or his motivation - albeit IMO not likely. Regardless, it's still an interesting question.)

* Name changed to protect the probably innocent - it's highly unlikely that she does in fact fancy me, and even less likely that she'd tell Andy** about it. Actually it's infinitely more probable that she fancies Andy, the company heartthrob, which makes the whole debacle even nastier.

** The guilty deserve no protection. However, there are half a dozen Andys at my work, including some lovely chaps, so anyone who knows my secret identity should not draw conclusions.
Read the full post

Tuesday, November 21, 2006

Like a bat outta hell...

...I'll be gone in half an hour to get an early night.

I'm aware that that's not exactly what the song was referring to - but, as far as filling life to the full goes, I'm on a roll. In fact, I'm on a roll call. I've just started a new job and it's rather hectic.

Usefully, the job is at a company that has a very strong ethos of Quality. Hopefully I shall have much to pass on (in suitably generic terms, of course).

Hey, any company that can have me getting to bed by 10:00 must have something going for it :)
Read the full post

Thursday, November 02, 2006

What ID wasn't

A comment at a blog I frequent just gave me reason to search back through history to find a mock journal I once produced. The subject: what an actual mathematical discovery of Intelligent Design would have looked like. On rereading, it's mostly still good, so I'm copying it here for safe keeping.

January 2006: First draft of paper completed.

February 2006: Paper discussed with supervisor in depth. Minor alterations made to strengthen the argument with respect to some pathological cases. Supervisor points out that one section is irrelevant, so it's removed. Extra section added to eliminate previously unconsidered option.

March 2006: Paper now 100% ready. Submitted it to prestigious mathematics journal.

May 2006: Finally heard back from journal. Paper rejected by reviewer with a snarky comment about an error in section 3. Will get back to this once I've finished grading exam scripts.

June 2006: Paper corrected and resubmitted.

July 2006: Paper accepted.

August 2006: Paper finally published. Almost immediately am contacted by three people who think I'm badly wrong, but it turns out to be a result of printer's error. Another objection comes in, this one valid. Get contacted by rather confused-sounding journalist - local newspaper prints short article. Remaining mail is mainly positive, includes compliments on novel use of Zorn's Lemma and a request that I give a seminar.

September 2006: Figure out how to bypass the valid objection, write up as paper, submit to journal. This one gets in without much trouble - the reviewer responded impressively quickly. Give seminar, attendees seem bored but perk up quickly when I mention the implications for bioinformatics. Much use of photocopier afterwards.

November 2006: New paper published, get mildly snowed under with email. Minor mention on BBC website. Start getting crank calls complaining about inferred attempt to "know the mind of God".

December 2006: Paper discussed on popular evolutionary biology forum, attempted rebuttal by resident mathematician. Engage in short debate about Axiom of Choice, opponent concedes defeat. Asked to give rundown on implications for biology. Asked to comment on genetic algorithms and evolutionary simulations, explain why result doesn't apply to these. Asked about falsifiability, explain how the approach used permits "interventions" to be pinned down and analysed. Implication is that detailed hypotheses can be developed - no "big tent" for us!

January 2007: Story picked up by New Scientist (they misspelled my name of course). Crank calls increase in volume. Am forced to get new email address. Research group formed to discuss results, seems promising. Story picked up by Guardian and Sun ("Egghead Explodes Evolution").

Two vitriolic attempted rebuttals, one on a biology blog claiming I'm wrong and one from the Design Institute complaining I'm stealing their idea. One of my new postgrads does a detailed rebuttal of the biology blog one, then writes the central thesis up as a formal paper. He'll go far, if he doesn't waste too much time on the internet.

Asked to speak in Cambridge, Oxford, Hull. Accept the Cambridge and Hull offers.

Accept offer to speak at local evangelical event. Get booed off stage after using phrases "outmoded superstition" and "put the Designer under the microscope".

March 2007: Research group expanded, having to turn away applicants. Promoted. Awarded three honorary doctorates. Still getting the crank calls, but not so much email. Postgrad figures out chemical signature apparently associated with "interventions". MIT researcher uses modified cladistic technique to pinpoint exact dates of interventions, turns out there are 6 detectable ones.

April 2007: Am contacted by geologist specialising in trace mineral concentrations of post-Cambrian strata, with comment about a strange hydrocarbon found at approximate dates of interventions. Do joint paper for Nature - accepted almost immediately.

June 2007: Contacted by NASA - apparently the trace hydrocarbon appears as a byproduct of an experimental low-orbit propulsion system they've been working on for two years. Asked to keep things under wraps for a couple of weeks til they can get the patent application sorted out.

Two new research groups formed to work parallel to our group. Friendly rivalry emerges, regular collaboration on papers defuses possible tension. Particular focus on what exact biological structures are results of intervention. Am happy to say that we regularly steal their best postgrads.

July 2007: Propulsion story leaked to Sun ("Alien Conspiracy!"). Seems everyone is talking about our research. Minor riots in several areas, university burned down in Iran (quote of the day: "they will either contradict the Quran, so they are heresy, or they will agree with it, so they are superfluous"). Research endorsed by Raelians, criticised by Scientologists (their objection is something about Xenu and volcanoes).

August 2007: Richard Dawkins sends me heartfelt gratitude for "proving me wrong all these years", plus early draft of new book "Of Alleles And Astronauts". Am apparently the subject of a fatwa by prominent Saudi cleric. More crank calls, death threats etc (much repetition of "I ain't no alien experiment").

Nominated for Fields Medal, Nobel Prize.

Early 2008: NASA funding increased, ESA funding increased, competition to be first in locating a Designer artifact heats up. International conference of biochemistry convened to discuss plausible techniques used by Designers. Conference a great success - "design signature" chemical turns out to effectively immobilise DNA, allowing for detailed biomolecular surgery. Nanoengineers gatecrash, start discussing applications for solar panels.

Mid 2008: Awarded professorship at Cambridge. Best part: no bloody undergrads to deal with. Take sabbatical before accepting position, work on book (no doubt it'll be outsold by Dawkins, but a man's gotta try). Title: "A Brief History of Slime". Give guest lectures in Japan, Russia, America, New Zealand. Disney attempts to buy movie rights, but I cut them off when they mention changing the name to "The Eternal Triangle".

Now have three full-time bodyguards to guard against:
a) pissed-off religious folk
b) freaked-out nonreligious folk
c) alien-related death cults

Late 2008: Alien structure discovered on Mars. Martial law declared in South Carolina in aftermath. Rush by world's engineers to examine the finds is intense. One engineer attempts to shorten the waiting list by judicious assassination.

One of the artifacts turns out to be some sort of faster-than-light conveyance. This is going to be interesting...


I think I'll end there, given that it's getting quite long enough. I guess the moral of the story is that, if you prove that evolution didn't happen, it doesn't stop there. Saying "we've disproved evolution, yet we have absolutely no interest in trying to find out what did actually happen" is a fairly reprehensible attitude from anyone claiming to be a scientist.

Quite apart from anything else, this was extremely fun to write. There may be a science fiction story in it somewhere. Thanks to Andrew Rowell for the inspiration.
Read the full post

Quality in maths

I've covered plenty of ground on the issue of quality in science, so I'd like to take a brief moment to discuss something I actually... uh... know a thing or two about: mathematics.

Mathematics is actually easier than science to understand, at least for anyone with a grounding in basic logic. In principle, it's quite simple: choose a bunch of axioms whose truth defines your system, recombine them to produce other true statements, and gradually work your way up to the conclusion you were after.

This description is perfect, apart from one thing: no real mathematician actually bothers to do that. Over the years a methodology has emerged that's considerably more practical and is essentially equivalent to this idealised approach. So failure to conform to the ideal is not itself evidence of poor Quality.

All maths is trivial

There is a mathematical theorem, in an area known as propositional calculus, which says that all mathematical theorems are tautological. Now, mathematicians are using "tautological" in a far more rigorous sense than you or I would, but the statement is useful all the same - it conveys a vaguely accurate impression of what's going on.
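To see what "tautological" means in the propositional-calculus sense: a formula is a tautology if it comes out true under every possible assignment of truth values to its variables. Here's a brute-force checker I've knocked together purely to illustrate (the function names are mine):

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula (given as a Python boolean
    function) against every possible truth assignment."""
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

# Modus ponens as a single formula: ((p -> q) and p) -> q
# (an implication "x -> y" is encoded as "(not x) or y")
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_tautology(modus_ponens, ["p", "q"]))  # True - holds in all 4 cases

# A non-tautology for contrast: "p -> q" on its own
print(is_tautology(lambda p, q: not p or q, ["p", "q"]))  # False - fails at p=T, q=F
```

Four assignments is trivial to check by hand, of course; the point is that in principle any propositional theorem could be verified this mechanically.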

A mathematical proof exists as a set of definitions combined with a series of logical steps. The hard part for a non-mathematician is that the definitions may use prepackaged terms ("let X be the dihedral group D_2n, and Y a Sylow subgroup of order p^m for m>1...") and the logic may skip large steps ("...then p divides n"). So in general it won't be possible for a non-mathematician to verify a proof without either learning a bit about the field or finding a helpful mathematician.

What would the mathematician do to verify the proof? Well, firstly, she would confirm she knew what all the definitions meant. Then she would go through the proof, figuring out which steps were obviously valid based on those definitions. Her response to the remaining few steps would be more interesting: she would break them down into smaller substeps.

This is where the "triviality" of mathematics comes into play. Large steps in logic are (in theory at least) mere shorthand for a whole bunch of shorter, more complex steps. So, for example, if I want to prove a result about a differential equation I could use a giant step taken from the field of Fourier analysis. This would break down into smaller steps from the internal logic of the same field, which would effectively be huge steps taken from the field of complex analysis. These would break down into smaller steps from the field of naive set theory, which IIRC can be justified in terms of axiomatic set theory, and so on...

What value does being a mathematician have in this circumstance? Why couldn't a layman do this? There are three reasons. Firstly, the mathematician will generally have some hard-earned intuition about the objects being defined, and will be able to "verify" many of the steps by sheer instinct. She would be able to verify them by hand if necessary, but that's often just a massive waste of time.

Secondly, the mathematician will know where to look for information about the remaining steps. She will have some idea of which journals to check, which individuals to ask, for further information about what the heck is going on.

Thirdly, in the event that a step is not fully explained in any literature, a mathematician will have the creative mathematical abilities to handle solving it herself. But creating maths is a whole different kettle of fish from verifying it, so I won't go there.

How to recognise bad maths

All this doesn't really do much for the average Joe. When confronted with a piece of mathematics, "find a mathematician" may indeed be the best advice, but in situations where someone is trying to pull one over on you there often isn't time.

The classic example here is probably Intelligent Design, a variant of creationism. Its chief advocate, William Dembski, effectively attempts to reinvigorate the age-old Argument From Design by ruling out evolution as a source for much of the natural world. Suppose you were presented with one of Dembski's papers by a street preacher and told "this paper proves God exists". How would you know that it was a fraud?

Fortunately, honestly mistaken maths differs greatly from actively misleading maths. In this case, the problem is obvious: the worst maths is no maths. If you read through Dembski's paper, you'll see that there's a mere handful of actual mathematical symbols per page, most of which are merely used to summarise Dembski's words rather than actually playing any part in proceedings.

This is a strong indicator that he's not actually proving very much. There are only two reasons for using words in a mathematical document: to explain stuff to students, and to pull the wool over people's eyes. Neither is appropriate for a mathematical paper. If you knew nothing about the mathematics in question, you'd still be able to spot this issue.

Further posts to come on this, when I'm feeling a little more coherent.
Read the full post

Wednesday, November 01, 2006

What's with the "concrete" thing?

In a previous post on creationism, I used the word "concrete" when discussing the predictivity of scientific theories. What did I mean by that?

I gave one example of what a non-concrete prediction might look like. Many cranks have a habit of claiming that, since the dogmatic scientific community will be horribly undermined by their work, they're guaranteed to be laughed at. This prediction usually turns out to be true, which the crank claims supports a model of the world in which their notion is accurate and science is dogmatic.

Of course, the obvious response to this particular case is the infamous Carl Sagan quote: "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright Brothers. But they also laughed at Bozo the Clown." However, there are some non-concrete claims that are less immediately laughable.

For example, many people present a variety of equally non-concrete evidences for God - a baby's smile, a sunset, a soaring bird. How, it's asked, can we see these things and not believe?

There are actually two answers to this. The more mundane one is that all this represents a rather impressive example of confirmation bias, the tendency to only remember half the story (in this case the good half). Incidentally, this form of bias is also the primary factor behind Murphy's law - it seems like toast always drops butter-side-down because, when it lands nicely, we don't feel annoyed enough to remember about it.

So, for example, the soaring bird is beautiful - but its subsequent evisceration by an unexpected hawk is less so. The baby's smile is lovely - but the other end of the baby is decidedly less pleasant. Sunsets can be gorgeous - until we recall that their vibrant colours are largely due to air pollution.

The second, more philosophically interesting, point is: it's not the baby, the sunset, the bird that's actually being offered as evidence. There is no logical progression from "baby smiles exist" to "God exists". What's really being presented as evidence is the intuitive beliefs that beauty exists and that beauty implies God.

Why can't we just accept that as scientifically useful? Because we have a long history of discovering that intuitive notions are wrong. The sky is not a big blue wall with holes in for rain and starlight. The Sun is not a small whizzy ball that spins round the Earth. Solid objects are actually probabilistic wavefunctions. Heck, space isn't even Euclidean. And yet, at various points in history, people have been sure of each of these to the point that expressing the opposite view would earn you a trip to the nuthouse*.

Given this repeated history of failure of "gut feel", why should the argument suddenly have far more credibility just because it's got the word "God" in it?

Disclaimer: as always, none of this means that God can't be considered real for an individual, or even real for the human species as a whole. It just means that the concept, as usually stated, doesn't necessarily meet the criteria to be considered real for any intelligent lifeform.

* In the case of the last two items on the list, these nuthouses still exist - they're called "universities"...
Read the full post

Hypothesis testing

Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.
- Richard Feynman

Reading list: Science as Falsification - Karl Popper

Back in the time of the Greeks, "science" as it existed then proceeded almost entirely by abstract thought. Needless to say, this was not always terribly effective - pure thought can only tell you whether your model is consistent, not whether it's accurate. As a result of these inadequacies, a new approach was developed called empiricism, which emphasised the importance of evidence.

But how to actually apply that evidence? With sufficient rationalisation, it's possible to make any data set fit any model - how to avoid the tendency to make ad hoc excuses?

The conclusion that scientists came to was very concordant with my thoughts on usefulness. They decided that the ultimate arbiter of truth (or at least of accuracy) would be the predictive power of an hypothesis. It may be possible to contort even as daft a notion as creationism to fit the evidence, but only at the expense of removing every last scrap of its predictive power.

Thus, the "hypothesis testing" approach was formalised. Generally it goes something like:

1) Find an area to study
2) Gather data from that area and look for patterns
3) Explicitly state an hypothesis
4) Derive concrete testable predictions from that hypothesis
5) Test the predictions
5a) If they're correct, go back to step 4 and carry on testing
5b) If they're incorrect, choose another hypothesis and go back to step 3

One thing you'll notice about this sequence is that there's no actual exit condition - there's no "6) Congratulations, your hypothesis is true". So how does the scientific community determine when an hypothesis is sufficiently well-tested? The threshold is inherently subjective here, but usually the scientific community eventually decides as a group that an hypothesis is firmly established enough to be taken as a given, at least provisionally.
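For the programmers in the audience, steps 4 and 5 can be sketched as a loop (a toy of my own devising, nothing standard - the `reality` function stands in for actually running the experiments):

```python
def test_hypothesis(hypothesis, reality, trials):
    """Steps 4-5 of the loop, over a fixed list of trials. Returns
    how many consecutive predictions survived. Note there is
    deliberately no "proven true" exit - only "not yet falsified"."""
    survived = 0
    for n in trials:
        prediction = hypothesis(n)    # step 4: derive a testable prediction
        if prediction != reality(n):  # step 5: test it against the world
            return survived           # 5b: falsified - back to step 3
        survived += 1                 # 5a: correct so far - carry on testing
    return survived                   # provisionally accepted, never "true"

reality = lambda n: 2 * n  # the universe's actual (hidden) rule

print(test_hypothesis(lambda n: n * n, reality, range(10)))  # 1 - falsified at n=1
print(test_hypothesis(lambda n: n + n, reality, range(10)))  # 10 - survives every test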


So far I've treated the scientific method as comparison of respective predictivities. There is, however, another paradigm worth considering: Popperian falsifiability.

Karl Popper's notion was that science proceeds by a process of weeding out the less accurate hypotheses. So, for example, Aristotle's theory of gravity was proven to be less accurate than Newton's, which was proven less accurate than Einstein's. This approach has two advantages: firstly, it provides a very precise linguistic framework for understanding these principles, and secondly, it throws up an interesting analogy.

The linguistic framework looks like this:
1) A conjecture is any claim about the universe
2) A falsifiable conjecture, aka an hypothesis, is one that can be disproven (e.g. "unicorns don't exist" could be disproven by finding a unicorn)
3) A verifiable conjecture is one that can be proven true (e.g. "unicorns exist" - the negation of a falsifiable conjecture is verifiable)
4) A testable conjecture, aka a prediction, is one that is both verifiable and falsifiable

The problem that science solves is: there is no way to demonstrate an hypothesis to be true. The solution is to stop worrying about truth and start worrying about accuracy. It's much easier to demonstrate that an hypothesis is accurate: logically derive predictions from it and test them - in other words, attempt to falsify them. Repeated failure to be falsified can be taken as a sign that an hypothesis is representative of the universe.

Now, one thing that's interesting to note is that this is directly equivalent to standard processes of biological evolution. In both cases, new variants of an object (DNA or hypotheses) are created pretty much at random, and the useful ones are retained whilst the less effective ones are discarded. Thus, the state of the scientific art tends towards a more accurate model of the universe, just as species tend towards a more efficient genome. In this sense, the goal of the scientific community is simply to create an environment in which the survival of an idea is proportional to its scientific usefulness.
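To make the analogy concrete, here's a toy mutate-and-select loop (my own illustration, no more) that "evolves" a model towards better predictive accuracy, exactly as selection pushes a genome towards better reproductive success:

```python
import random

def evolve_models(fitness, mutate, seed_model, generations=300):
    """Keep a random variant only if it predicts at least as well
    as the incumbent - selection on predictive accuracy."""
    best = seed_model
    for _ in range(generations):
        variant = mutate(best)
        if fitness(variant) >= fitness(best):
            best = variant
    return best

# Toy "universe": the true law is y = 3x. A model is just a slope guess.
data = [(x, 3 * x) for x in range(10)]
fitness = lambda m: -sum((y - m * x) ** 2 for x, y in data)  # less error = fitter
mutate = lambda m: m + random.gauss(0, 0.1)                  # blind random variation

random.seed(0)
slope = evolve_models(fitness, mutate, seed_model=0.0)
print(slope)  # converges close to 3
```

The mutations are entirely random - all the apparent "direction" comes from the retention criterion, which is the community's job to enforce.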

Which is why scientists get so annoyed when politicians and preachers attempt to dictate reality by governmental fiat. But that's another issue.

(Oh, and: so much for my attempt to write in a more abbreviated style, huh?)
Read the full post

Tuesday, October 31, 2006

All about the name

Recently someone asked me if my name meant I was pro-life. My reaction was along the lines of "what the... what are you... oh, I see what you mean. No."

I think the "embryo==human" stance of most pro-lifers is overly simplistic. This is due to my personal understanding that "me" is not some disembodied soul; it's a network of neurons firing in intricate patterns. This means there's no lightbulb moment where the embryo becomes human, so a more sophisticated approach is needed.

Going back to enlightened self-interest, a key reason to be anti-murder is that I could be next. As the "I" in question is a bunch of neurons, drawing the line at the start of humanlike neuronal activity makes sense - the embryo thinks, therefore it is.

If I recall correctly, that happens around the end of the second trimester, although adding a safety margin might be valid. I'm fairly sure that this doesn't qualify as "pro-life" by any normal definition... As a moral relativist, I'm open to reasoned debate on this.

Oh, and 'Lifewish' is just a really lame play on 'deathwish'.
Read the full post

Poor quality in action: my writing

You may have noticed that my writing uses a lot of words. This is a habit I've picked up debating - the more time one spends defining one's terms, the less easily people can creatively misunderstand them.

However, in standard blog posts, it's unnecessary and unreadable - poor quality. As such, it stops now.


You have no idea how hard it is not to extend that first paragraph, just to make sure people are clear what I'm saying...
Read the full post

Usefulness v. truth

One thing that anyone who's actually bothered to read my posts on scientific usefulness may have noticed is: scientific usefulness and truth are not the same thing by a long shot. They cover a lot of the same ground, but there are things that fall into one and not the other.

For example, "emergent" phenomena like tornadoes are scientifically useful categorisations without being literally true. There is fundamentally no such thing as a tornado; it's just a label we apply to a really diverse array of air patterns. Useful but not true.

On the other hand, it would be possible for something to be true but not scientifically useful. For example, maybe there's a universe somewhere where Darth Vader really exists. If so, that would be true - but, since there'd be no way of having any contact of any sort with that universe, that truth would be completely useless.

However, there's another class of potentially true statements that are more problematic. These are statements that, if true, would be scientifically very useful, but that aren't testable - there's no way to tell whether they're true or false in advance. For example, if God exists and atheists really get sent to a big fiery pit after death, that would be very useful, but as yet no-one's come up with any experiment that could test for this.

We're getting into Pascal's Wager territory here - if these statements are potentially so significant, are we justified in ignoring them simply because we can't immediately assess their usefulness?

I would say yes. My rationale is that, by declaring these statements untestable, we're making it impossible to distinguish between them and the infinite number of other untestable statements that would counsel different behaviour. This is a classic refutation to Pascal's Wager - the possibility of a God that gets angry when people worship Him effectively cancels out the possibility of a God that gets angry when people don't. Until we find some way to test these statements, it's impossible to make useful decisions based on them - there are just too many conflicting options.

There's no little black box that will test the statements we feed it for truth. One could say that truth is not a useful concept, except insofar as it relates to predictivity. As far as predictivity is concerned, we do have such a little black box - and it's called science. That's as good as it gets, folks.

[Edit: on reflection, I think this cartoon said it better.]
Read the full post

To do list

I'm just going to make a brief list of stuff I need to cover at some point, so I don't forget it. This list is intended for my personal use, so don't expect much of it to make sense to you.

  • Mental quality
    • Components of the scientific method
    • Recursive usefulness and paradox

  • Social quality
    • Dominance
    • Body language
    • Networking

  • Physical quality
    • Martial arts gunk

Note to self: go back through the archives, see if I can spot anything else.
Read the full post

Response to a comment

On another blog I commented that Christians seem highly reluctant to correct each other, and was promptly shown the error of my ways :)

The conversation is getting into issues of evolution/creationism, which I'd rather not contaminate another perfectly good blog with, so I'm proposing that such debate be moved over here. This is in response to my comment that believing in special creation of animals was counterfactual.

You might want to study the "Cambrian Explosion" in which all the phylla of large animals appear in a short period of time in the fossil record.

A few major responses here:

1) depending on one's definitions of "phyla" and "Cambrian explosion", only about 1/3 of the metazoan phyla appeared during this period.

2) when we talk about "phyla of large animals" the kind of large animals we're (presumably) talking about are small, jawless, boneless fish. Things like mammals, birds, reptiles etc came far far later. They weren't even like the fish we have today.

3) when we say "short" we mean "over 10 million years".

4) we actually have a few transitional fossils from within the Cambrian, which show (for example) how arthropods evolved from worms via lobopods.

Any thoughts?

Also, new studies of specific genomes are showing a much greater diversity between humans and other previously assumed "cousins" such as chimpanzees.

You'd have to point me at the studies. My understanding was that all they'd done was pinpoint the most changed and most conserved areas of the genome. The interesting thing about the approaches they're using for this is that they're based on comparing the rate of change in possibly useful areas to the rate of change in "junk" DNA. Thus, this approach would only be expected to produce useful results (which it has) if:

a) a majority of the "junk" DNA was actually pretty much useless
b) said junk DNA was inherited from a common ancestor

With the enormous amount of information found in the human genome, a 6% variation is now much greater than it was thought to be before.

I just finished a maths degree, so I'm legally required to call you on this: how are you defining "information" here? See, information theorists define it as the negative log of the probability of an event - but, by this definition, randomly-selected events will generally carry more information. Computer scientists define the information in a string by how far it can be compressed - and random strings are the least compressible, so again they carry the most information. Randomness increases information. (Note: I can explain this in more detail if you wish)
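To illustrate (a toy demonstration of my own): both definitions agree that a patterned string carries less information per symbol than a random one.

```python
import math
import random
import string
import zlib
from collections import Counter

def shannon_entropy(s):
    """Average information per symbol: -sum(p * log2(p))."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

random.seed(42)
ordered = "AB" * 500                                               # highly patterned
scrambled = "".join(random.choices(string.ascii_letters, k=1000))  # near-random

print(shannon_entropy(ordered))    # exactly 1.0 bit/symbol
print(shannon_entropy(scrambled))  # close to log2(52), about 5.7 bits/symbol

# The compressibility definition agrees: the patterned string
# squashes to almost nothing, the random one shrinks far less.
print(len(zlib.compress(ordered.encode())))
print(len(zlib.compress(scrambled.encode())))
```

So by either rigorous definition, "more information in the genome" is not the same thing as "more design" - a randomly mutated genome scores higher, not lower.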

If you have a new, rigorous definition, please share it. Or if you meant some less measurable concept of information, please elaborate. Otherwise, please be aware that the tendency to throw buzzwords like "information" into the conversation without defining them is one of the other things that I consider extremely daft.

There's a lot more to learn than the evolutionists lead on about. Their idea that they have it all figured out [a la Dawkins] reminds me more of the claims of the young earth creationists. One of my favorite Wittgenstein quotes, "what we do not know we must pass over in silence." Good advice for those bloviating about things they cannot verify.

I would note that the word "evolutionists" could be replaced with "quantum theorists" and that sentence would still make exactly the same amount of sense. It's not arrogance if you're right :)

Evolutionary biology is (in some areas at least) a predictive science. That's usually taken as an indicator that a science is at least broadly correct. There is no other predictive model of origins - if creationists were able to create one, they'd have done so by now. If you think evolutionists are so wrong, why not win a Nobel by producing one?

(Note: I haven't started listing predictions and evidences here, because I could... uh... "bloviate" for hours and still not have discussed all of them. If you're interested, ask.)

Disclaimer: I spend way too much time debating this stuff, and hence know most of the arguments inside out. Thus, even if I appear to come out ahead in this discussion, it might just mean that I know the talking points better.

Monday, October 30, 2006

What is science?

Science means many things to many people, but on closer examination it turns out that all the definitions are tightly interconnected. I need to hit the hay in a moment, so I'll briefly run through the stack and maybe comment more tomorrow.

Science is a goal. The goal of science, broadly speaking, is to improve our ability to understand and manipulate the universe. My preferred phrasing is that science attempts to find predictive models of how the world works.

Science is a method. A particular pair of approaches (hypothesis testing and peer review), along with a bunch of handy if somewhat ad-hoc guidelines, turns out to be highly effective at helping us to achieve the scientific goal.

Science is a community. The scientific community is a group of people who subject their work to the rigours of the scientific method, with unparalleled success.

Science is a body of knowledge. Or, in my terms, a body of accurate predictive models. This is, of course, an incredibly useful thing to have. This body of knowledge is mostly produced by the scientific community because (more or less by definition) they have the most efficient ways of extending it.

It's important to get these definitions out of the way, because there's a certain amount of equivocation over the term due to these different levels. Each level is not equivalent to the others, but it can be legitimately assumed to be a good approximation thereof. Sometimes this approximation breaks, which is where you get "cargo cult science". More to come.

Why believe in God?

A couple of posts ago, I promised a discussion of what I felt were legitimate reasons to believe in God despite the lack of scientific usefulness in the concept. However, on reading my own archives, I realised that I'd already covered a lot of what I wanted to say. Still, I'd like to recap briefly, to put these things in the new context of usefulness.

Before I start, I'd like to make a general statement. A lot of these reasons for believing in God appear at first glance to be extremely derogatory. They're not. They're useful. The reason that people look at them and think "oh, you're soooo mean" is that it's been drummed into them for years that these other forms of usefulness are intrinsically of less value than scientific usefulness.

My feeling is that this is the cause of a lot of grief, and of travesties like Intelligent Design. People believe in God for these other legitimate reasons, but feel slightly silly for doing so and thus try to validate that belief in terms of scientific usefulness, which only causes unnecessary confusion.

Use 1: happiness

Conserved component: anything that makes one immediately happy is (in general) useful.

Many religious beliefs certainly fall into this category. The reassurance that these beliefs provide can be an incredibly powerful force when an individual is feeling out of control of their life. Read Sara Robinson on fundamentalism for further discussion of the connection between stress and religion.

Use 2: motivation

Conserved component: anything that makes one more motivated is (in general) useful.

Although in this area there's nothing special about religion per se, for some individuals it appears to be very motivating - certainly qualifying as useful for them (although not necessarily for humanity as a whole).

Use 3: mental efficiency

Conserved component: anything that makes one more efficient is (in general) useful.

Let's face it: there are some things that it's not really worth wasting a lot of time discussing. You'll never figure out an answer to the meaning of life in an online debate, so why waste that time when you could be off making the most of said life?

However, this is harder than it sounds because humans have built-in drives to seek out new knowledge (and, I suspect, to argue incessantly about it...). The statement "Goddidit" is useless from a scientific perspective because it works in every possible circumstance - but, from an action-based perspective, closing off those channels of thought can be extremely useful, and God makes a wonderful cork to keep these particular genies in their bottles.

I suspect this is what Ken Miller means when he says "I find that the hypothesis of God helps me to make sense of life and of the world around me, and I find that hypothesis congruent with science, not dependent upon it", although he would almost certainly disagree with my characterisation. Religion can provide a useful framework in that it helps one to focus more tightly on individual issues without being distracted by the scenery.

Use 4: socialisation

Conserved component: anything that makes one more easily able to gain the respect of one's peers is (in general) useful.

For this particular subset of usefulness, it's not actually necessary to believe in God, only to give the appearance thereof. And this is certainly what many people do... However, in many of the more evangelistic communities, it's getting ever harder to just pretend - eventually your mask will probably slip. In this sense, the belief itself, rather than merely the impression of it, could be considered useful.

Use 5: transmission of wisdom

Conserved component: accurate guidelines for effective living are pretty damn useful.

This almost certainly used to be one of the major reasons for religion, as anyone who's read Psalms can guess, and for many religions it's still a key component. However, in an age where any idiot can give advice, it's hard to give a good reason why merely having existed for a few thousand years means scriptural advice will be any better than that from other sources. I guess that selective effects would be expected to weed out some of the dross, though.

Thanks to the good folk at the interfaith society for pointing me towards this use, which I honestly hadn't really considered before. Maybe for some traditions it's even true.

So what's with the atheism?

So, if there are all these convincing reasons to believe (and I'm sure I've just scratched the surface), why do I not believe? Well, my feeling is that, as of now, the price is not right. I'm not in major need of comfort, I seriously doubt that belief in God will get me off my ass, I have no hopes that I could satisfy my rampant curiosity with "Goddidit", and my social life is actually doing quite well on its own (the main limiting factor is ineptness rather than beliefs!). Whilst I'm certainly in need of wisdom, I have doubts that the Abrahamic religions would be good teachers for me. Possibly I should try Buddhism, but that's completely compatible with atheism.

To my mind, the negative usefulness conveyed by religion's lack of scientific usefulness easily overwhelms these marginal advantages. Not particularly because I think the small reduction in predictivity would be a deal-killer, but because I worry that it wouldn't stop there. Religious belief can eat rationality alive. And that gives rise to a fundamental paradox which... but that's a post for another day.

I'll leave you with another reason for religion, which I couldn't quite figure out how to work into the preceding list. Say hi to the cutest religion analogy ever!

Quality Quickies

One habit I've picked up recently: whenever I do something that gives me new experiences, I make a list of issues to watch out for. That way, the next time I hit that situation I can do a better job.

I'm going to start publishing some of my collection under the title Quality Quickies. I'm aware that that could be hideously misinterpreted - it's all part of a cunning plan to push up my blog's hitcount >:)

The following nuggets were collected the first time I attempted this approach, halfway through a summer internship, after a particularly, um, eventful day. Enjoy.

  • Keep an eye out for things that are more likely than usual to go wrong. For example, if a 2-hour train journey took you four hours last night due to buggered tracks, expect the same journey to take some time today too.
  • If your boss wants you to keep in touch in any way, do not leave the building without a working, powered, paid-up mobile.
  • Best to get a signature for deliveries unless otherwise specified.
  • If you find that you're wasting work time online, kill your browser. No excuses.
  • Make sure you know where the project you're working on is supposed to be going, that way you can more effectively help it get there.
  • When working on computer-killing programs, make sure you have something else to do before running the damn thing.
  • Don't drink more than (at absolute most) two pints the night before you have to be in work. And don't mix wine and beer - they're not exaggerating about the effects of that.


Sunday, October 29, 2006

What moral relativism means to me

(Disclaimer: I'm probably using the term "moral relativism" in a non-standard fashion. So sue me.)

I mentioned before that scientific knowledge acted as a very strongly-conserved component of the more general idea of usefulness. I'd like to throw a little light on another conserved component: enlightened self-interest.

ESI is the basis on which rationalists make moral decisions. The basic principle is simple - in order to achieve one's personal goals, it is useful to support a society in which achieving those goals is easy. If you enjoy walking in the midst of unspoiled nature, don't leave crisp packets behind you. If you aim to get fit through a swimming regime, don't piss in the pool. If you want people to be nice to you, be nice to them, and encourage them to be nice to each other.

This bears some resemblance to the Broken Windows approach to crime prevention - even small acts of personal responsibility can help create an environment that is far more enjoyable and/or effective. Likewise, exhibiting sociopathic or psychopathic behaviour (as many theists seem to believe atheists should) results in a very poor-Quality environment. We're all in the same boat - it's daft to drill holes in the bottom.

Why does this differ from absolute morality? Because the path of greatest ESI may vary between different times and places. To take the classic example: lying is not wrong when you've got Jews hiding in your cellar and the Nazis knocking on the door and asking if you've seen them. It's possible to imagine situations in which any "sin" could actually be the best thing to do in order to create a better environment. ESI morality is a means to that end, not an end in itself.

Obviously it's impossible to argue that absolute morality is immoral, because moral systems can only be evaluated in terms of other moral systems. However, I feel I can make a fairly strong case that absolute morality is extremely counterproductive. It makes reasoned debate impossible and causes concrete harm to individuals worldwide who have either a relative morality or merely a different absolute morality to those in power.

A recent example of where absolute morality can lead us is the case of the Australian Imam who raised outrage worldwide by claiming that unveiled women were like "uncovered meat", and were therefore at least partially responsible for acts of sexual violence perpetrated against them. As explanations go, that one is quite blatantly complete bollocks. It's fairly clear that the Imam merely had some a priori, morally absolute idea that it was good for women to be veiled, and sought to justify that in more rational terms.

What has got fewer column inches is the fact that his behaviour here was actually an improvement over the norm. In most theocracies, authorities don't even bother with faking up a rational explanation for why something is bad. The fact that it is Written would be enough - off with his head. Again this seems to be a particular problem for Islam - the specific issue that springs to mind is its injunction to kill apostates, which is quite frankly idiotic from the point of view of any of the goals that Islam claims to prioritise.

Absolute morality just is. You can't argue with it. You can't point out why it's stupid or mistaken or unhelpful, because all of that is beside the point - the morality is the important thing, and we humans are merely the protagonists or antagonists of Its justice. As I said before, I obviously can't prove that to be morally wrong. However, I think I'm justified in saying that, at least to me, it's an extremely scary concept.

Moral relativism, to me, means that our decisions are made with respect not to some calcified set of semi-arbitrary rules but with respect to the impact those decisions will have on our fellow human beings and, ultimately, ourselves. It means that we can be reasoned out of behaving badly, and reasoned into behaving in a way that benefits society. It means that, when something freaks us out, we actually have to come up with good reasons as to why it's wrong rather than simply turning to the nearest bigoted Holy Book*.

Whilst, as a moral relativist, it's impossible for me to say that moral relativism is good in all circumstances, I think it's fair to say that ESI is a useful approach in the overwhelming majority of cases, qualifying easily for Conserved Component status.

* I was chatting to a very nice (Christian) girl this evening who is going to have major problems with this when she tells her (extremely religious) parents that she's gay. A decent proportion of the public seem never to have learned that "yuck" does not constitute a valid moral argument.

Lapsed atheists

One of the more amusing comments at the interfaith meeting mentioned in my last post was one theist's comment that he, along with many others present there, was a "lapsed atheist". On reflection, this was probably directed at the Christian panelist, who was a Roman Catholic, and the Humanist panelist, who had been gently ranting about inability to get off the RC church's membership rolls. It's an amusing concept - one doesn't normally think of a theist as being an apostate atheist.

It did get me thinking, though: what motivates people to ditch atheism? In some ways, atheism is the least stable religious position - you only need one solitary bit of hard evidence to be justified in dropping it. Have lapsed atheists acquired some strong new scientific evidence that I've yet to stumble across? Or have they simply renounced membership of the reality-based community in favour of a viewpoint that they personally find more effective? I'm guessing the latter, but I'd be interested to know for sure.

The problem is that I currently don't know any lapsed atheists, at least not that I'm aware of. Can any of my theistic readers* help me out here?

* I realise I'm making a rather large assumption here that anyone out there actually reads my blog... However, if I'm wrong, no-one will notice, so I might as well take the chance. It's all a matter of usefulness :)

Religion done right

I just spent the afternoon at a meeting of the local interfaith society, which was arranged as a panel discussion between various denominations - Christian, Muslim, Hindu, Jewish and (to my surprise) Humanist. I was somewhat shocked to find that I agreed with almost everything that was said.

Recently I've been feeling my way towards a new philosophy, based on notions analogous to Quality albeit somewhat more pragmatic in tone, which defines reality in terms of beliefs which are useful. We can then draw a distinction between consensual reality (the beliefs that are real for all humans) and personal reality (subjective "truths").

This has the convenient feature that it provides a concise explanation for the importance of science. Science is based around a particular subset of usefulness: an idea is scientifically useful if it gives rise to a more predictive understanding of the universe. This scientific usefulness has the unique property of applying to pretty much any intelligent lifeform that could exist (with a scant few pathological examples). Not just to certain individuals, not just to humanity as a whole - any intelligent lifeform, whatever its goals in life, will be more able to achieve those goals if it can accurately assess what future circumstances will be. As such, concepts conforming to this version of usefulness can be legitimately treated as part of an objective reality, regardless of their actual material existence.

Note: this also provides a nice refutation of the all-too-common claim that atheism cannot be true because if it were there would be no reason for us to be able to accurately perceive our environment. It is quite clear that, in general, it will be of value to an organism to accurately perceive its environment - as that is a prerequisite for predicting what its environment will do next - and so such a trait would be an extremely plausible evolutionary outcome.

Consensual reality is slightly more expansive than scientific reality - at present, it also incorporates concepts that are useful to all humans, but that might not be useful to martians (if we ever met any martians, we might have to re-evaluate this...). For example, "soft sciences" such as project management or psychotherapy apply to anyone with a human-like mind, but might not be useful for (picking a random example) a hive species.

This redefinition moves the philosophical battleground of theism from the question of whether God is real to the question of whether He is real for all lifeforms, or "just" all humans, or simply a subset thereof. This is where the concept of usefulness comes into its own as a model of human behaviour, as we can then start to analyse all the other ways in which a statement might be useful, with a concept's reality for an individual being a weighted average. It's also a significantly more tractable question than whether God "exists" in any more materialistic sense. As you're probably aware, my belief is that God is not real in this sense - I am, of course, open to discussion here.

This brings us back to the interfaith talk, because it was quite apparent that most of the participants were talking less about whether their beliefs were objectively (scientifically) true than whether their beliefs were subjectively useful.

Of particular note were the Hindu Swami and the Jewish Rabbi. The Swami focused almost entirely on religion as a means of spiritual development, a set of useful markers and methods to help us along the path of personal growth (this theme was also echoed, albeit with rather less vigour, by the Christian on the panel). In this sense, religion is "merely" a handy guide, a set of well-trodden roads that people have laid down over the centuries, with no need to be objectively real. The Rabbi focused on the standard gunk - religion as bringer of morality etc - but was unusual in that she expressly stated that this was true for her, and might not apply to someone else. IMO that's a far more respectable stance on this question than the usual one, and I found it very refreshing.

Is God objectively real? As far as I can tell, the answer is no. Is God real for some individuals? Apparently so, and I hope to explore further why this should be the case. Is God part of consensual reality? At present I'd say no, but I'll need to examine this question in future posts.

In light of this reformulation, how does one define the reality-based community, the faith-based community and the action-based community? What's the most effective combination of these to live by, and why?

Watch this space, folks. And visit any interfaith events in your area, they're bloody marvellous.

Tuesday, October 24, 2006

A challenge to creationists

I'm being very lazy about the Protein Challenge at the moment, due to a combination of general lassitude and complete lack of any idea as to where to start. I will get onto it, especially since the issues raised seem to be flaring up again over on Paul's blog (he's doing a series on Dembski's design inference and specification).

In the meantime, here's another money-where-mouth-is challenge to those of you out there who have very different beliefs to me in the area of origins.

Science and predictivity

A scientific model (a hypothesis or group of hypotheses) is said to be predictive if it tells us the results of experiments that we haven't performed yet. For example, the model of quantum mechanics tells us how electrons will behave in various potentials without our actually having to generate those potentials and throw electrons at them.

It's fair to say that predictive models are the holy grail of science. These perfect crystal balls, these insights into the future, justify the entire enterprise - they are what makes science so unbelievably useful.

Evolution and predictivity

Evolution has been the only model that accurately fitted all the data for over a century now. However, it's theoretically possible to create any number of models to fit a given set of data (although currently the only alternatives are very heavy on the Goddidits), so that in itself is not conclusive evidence for evolution.

What is conclusive evidence that evolution is at least on the right lines is the large number of confirmed predictions that evolution has made. For example:

  • When the genomes of the great apes (inc. humans) were first studied, it was observed that humans had one fewer chromosome per gamete than any of their closest relatives. As humans were unique in this respect, it was hypothesised that having one more chromosome was the ancestral condition. It was known at the time that it is nigh-on impossible to just lose a chromosome, hence it was hypothesised that at some point in the human lineage two chromosomes must have fused. It was therefore predicted that one human chromosome would look exactly like two fused chimp chromosomes. This prediction was later confirmed.

  • It's been known for a long time that primates are unable to produce vitamin C (chemical name: L-ascorbic acid). Since most mammals can, it was hypothesised that this was the ancestral condition, and that primates have subsequently lost that ability. With the emergence of population genetics, it became apparent that, in this case, remnants of the vitC gene should exist in primates - these pseudogenes wouldn't have had time to fade away. Hence it was predicted that primate genomes would contain DNA strings very similar to those used to produce vitC in other mammals. This prediction was later confirmed.

Neither of these facts was predicted by any other model that existed at the time.

The Challenge

The challenge is aimed at anyone who claims that creationism, or any other alternative origins conjecture, has scientific merit. Simply provide me with one valid prediction that your model has produced. If you can do this, I'll publicly declare that your model is scientific, and devote the next month or so of my life to figuring out its implications.

The conditions

Some of the following criteria for what constitutes a "valid prediction" will border on the insulting to the more honest of you, but I've seen people argue each of these points so it's best to get things out in the open. The criteria I'm concerned with are:

  • It must be new knowledge - it can't be something we already knew to be true at the time the prediction was made. "Predicting" the existence of the universe doesn't qualify.
  • It must be concrete, i.e. relating to the physical universe - sociological predictions derived from physical models will not generally be accepted. Predicting that people will be picky about your model doesn't qualify.
  • It must be testable, i.e. both verifiable and falsifiable. "Predicting" that evolution won't be able to explain something doesn't qualify, as that conjecture is falsifiable but not verifiable (it's actually an hypothesis). Note: "predictions" that are verifiable but not falsifiable may be accepted as a weaker evidence for your model.
  • It must be unique to your model - so no predictions that would also be made by current mainstream scientific models. Predicting that the sun will come up tomorrow doesn't qualify.
  • It must be confirmed. Predicting something that hasn't actually been tested yet doesn't qualify.
  • It must follow inevitably from the model. Conjecturing that God exists and hence predicting that Mars will have subterranean water will not qualify unless you provide a damn good rationale. Throwing out thousands of mutually-conflicting conjectures and hoping one will stick is also not acceptable.
  • It must be checkable - I must be able to confirm that all the other criteria hold. For this reason, predictions older than 50 years will not in general be accepted.

Disclaimer: I reserve the right to modify these criteria if someone comes up with some really really daft workaround that I hadn't thought of, but I solemnly swear that this will only be done in absolutely exceptional circumstances.

These criteria are not excessive - both of the evolutionary predictions I discussed above pass with flying colours. These criteria are not ad-hoc - each and every point is essential to proper scientific hypothesis testing. I'm currently fairly sure that these criteria, rigorously enforced, are sufficient to confirm that a model is at least broadly accurate.

Before anyone asks, emotional arguments, or logical arguments not relating to hypothesis testing, will not be accepted for the purposes of this challenge.

No, I would accept the evidence

One response that I've heard in a variety of contexts is "you're an atheist, you wouldn't accept the evidence even if we gave it to you". This is factually inaccurate.

I self-define primarily as a member of the reality-based community. Although, at present, most RBCers are atheists, this is for strictly pragmatic reasons rather than any sort of underlying resistance to any given brand of theism. In short, if you provide me with the evidence, I will accept your hypothesis as being more accurate in this area than the mainstream scientific consensus (if one exists in that area).

Wednesday, September 20, 2006

The joys of jobhunting

Today I had the interview (flowerily termed an "assessment centre") for the company I mentioned previously. Thought it went quite well, although the other applicants were scarily good.

Actually, it was the most challenging interview process that I've had yet - and I thoroughly enjoyed it. First there was a test. I'd assumed it'd be a standardised test. Boy was I wrong. They asked us everything from complex actuarial case studies, through our thoughts on the company and role, to translating random Esperanto phrases (based on their homologies with other languages - we weren't expected to speak Esperanto).

Another stage was to produce and perform a ten-minute presentation on any subject. Now, being a skeptic by trade, I figured there might be a way to leverage my wasted time online here. I ended up doing my presentation on "Cranks and how to spot them". There was one restriction I had to deal with, though - I didn't know how my interviewers felt about any of the cranks I was planning to dissect, so I was forced to stick to marginal stuff like astrology, dowsing and the Hoxsey treatment. In particular, I felt unable to risk going after the big guns of Creationism. I thought there'd be too high a chance of my interviewers turning out to be Hovind devotees.

Or at least I did until 30 minutes into the test, when I turned a page to see the question:
Describe how evolution operates by natural selection.


Tuesday, September 19, 2006

CSI: the explanation

(No I'm not talking about Crime Scene Investigation, fool!)

OK, so I really should be focusing on the Protein Challenge. However, being easily sidetracked, I got thinking about Dembski's concept of CSI - Complex Specified Information. It's a surprisingly hard concept to understand, not least because AFAICT Dembski makes it as difficult as possible to do so.

The basic principle is that evolutionary processes aren't supposed to be able to produce structures that are improbable (for a sufficiently well-defined value of "improbable"), complicated (ditto) and specified. The concept of a specification is basically an attempt to extend the basic probabilistic concept of an event to handle post-hoc reasoning. A specification defines a target space of possible things that can happen.

The argument goes as follows:

1) Chance processes tend not to give results that are both unlikely and specified. So, for example, drawing 13 cards and getting all spades is highly unlikely, and you'd rightly assume that someone had tinkered with the deck.

2) Natural processes (regularities) tend not to give results that are both complex and specified. For example, though a snowflake may be complex, you won't get the same one twice.

3) Hence, anything that is complex, improbable and specified is most likely the result of intelligent intervention (nb. human brains apparently don't qualify as natural processes).
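The arithmetic behind that card example in (1) is easy to check - a quick sketch:

```python
from math import comb

# Number of distinct 13-card hands you can draw from a 52-card deck
hands = comb(52, 13)

# Exactly one of those hands is all thirteen spades
p_all_spades = 1 / hands

print(hands)         # 635013559600
print(p_all_spades)  # roughly 1.6e-12 - tinkering is the better bet
```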

There are some problems when you try to extend this to evolution and genetic algorithms and so on - both are quite capable of generating complex, improbable, extremely useful systems. Dembski gets round this by saying that genetic algorithms can generate CSI if and only if the target space associated with the specification represents a local optimum of the fitness function. GAs work if and only if the problem you feed them (fitness function) is actually the one you want solved (specification).

That's why examples like the infamous "methinks it is like a weasel" work - the specification we choose (the text) is 'coincidentally' identical to the optimum of the fitness function. Dembski, if I understand correctly, points out that, unless we select our specification to correspond to the fitness function we're using, we still won't generate CSI. We'll have complex information, but it won't match the right specification. As such, he feels justified in saying that, in feeding the GA the right problem for our desired solution, we're "smuggling" CSI into the system.
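For the curious, the weasel idea is simple enough to sketch in a few lines of Python (the mutation rate and population size here are my own arbitrary choices, not Dawkins's originals). Note how the fitness function and the specification are one and the same - which is exactly the coincidence Dembski is complaining about:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # The fitness function IS the specification: letters matching the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET and generations < 5000:
    generations += 1
    # Keep the fittest of the parent and 100 mutant offspring
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)

print(generations, parent)
```

It typically homes in on the target within a hundred or so generations - precisely because the specification was painted over the fitness function's optimum.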

The problem here is that, looking at the "fitness function" to which real-world genes are exposed, we see that it's basically something along the lines of "ability to survive and breed". In that context, the ability for a gene or combination of genes to produce something like a flagellum would certainly be of value for survival, and hence could represent a local optimum of the fitness function. The flagellum could evolve despite its CSI, because evolution would be selecting for the same underlying trait that we're basing our specification on - ability to live long and prosper.

Thus, simply by basing his specifications on the functionality of a system, Dembski is setting up a range of target spaces that evolution can quite definitely find. It's something of a Texas Sharpshooter issue - Dembski is running round painting targets around all the areas that evolution by natural selection is naturally inclined to hit.

Key terms:

Complex - refers to Kolmogorov complexity, best thought of as a measure of how easily a system can be described. So, for example, "AAAAAAAAAAAAAAAAA" would be low-complexity, "AABBCCDDEEFFGGHH" would be higher, and a random string like "BJECDWYIVFYUEUBUFIIHI" would be highest.
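You can get a feel for this with off-the-shelf compression - compressed size is a crude but serviceable stand-in for Kolmogorov complexity (the strings are padded out to 1000 characters so the differences aren't swamped by the compressor's fixed overhead):

```python
import random
import string
import zlib

def compressed_size(s: str) -> int:
    """Bytes needed to store s after zlib compression - a rough
    proxy for how easily the string can be described."""
    return len(zlib.compress(s.encode(), 9))

low = "A" * 1000
mid = ("AABBCCDDEEFFGGHH" * 63)[:1000]
random.seed(0)
high = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))

# The more random the string, the worse it compresses
print(compressed_size(low), compressed_size(mid), compressed_size(high))
```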

Information - refers to Shannon information, also known as the "surprisal" of a system. So, for example, "EEEEEEEEEEEEEEEEEE" would be fairly low-information because E is a common letter - it doesn't surprise us to see it. "I LIKE FISH" would be higher, as not all of its components occur with such frequency. "XXXXXXXXXXXXXXXXXXXXXX" would be very unexpected (except in the context of really strong beer) so gets a high "surprisal" value.

Target space - refers to a set of states that we'd like the system to end up in. So, for example, the target space of a system composed of lots of bits of wood might be a bookshelf.

Search space - refers to all the states that a system could end up in. So, for example, the search space of a system composed of bits of wood could include both bookshelves and mere piles of planks.

Specification - a simple delineation of the target space. For example, the specification "bookshelf".

Genetic algorithm - a program that attempts to imitate evolution in a model system.

Fitness function - something that allows a GA to tell which of a group of organisms is the most "fit". In the real world, the primary attributes of the fitness function are ability to survive (natural selection) and ability to attract mates (sexual selection).

Local optimum - an area of the search space where there are no small changes that can increase the corresponding organisms' fitness. If you think of fitness as corresponding to height on a graph, the local optima are the peaks of the resulting mountain range.
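A toy illustration of that last definition (the landscape here is entirely made up): a simple hill-climber started near the small peak gets stuck there, because no single step improves its fitness.

```python
def fitness(x: int) -> int:
    # A made-up landscape with a small peak at x=2 and a tall one at x=8
    return max(5 - 2 * abs(x - 2), 10 - 2 * abs(x - 8))

def hill_climb(x: int) -> int:
    """Step to the best neighbour until no move improves fitness."""
    while True:
        best = max([x - 1, x, x + 1], key=fitness)
        if best == x:
            return x
        x = best

print(hill_climb(0))  # 2 - trapped on the local optimum
print(hill_climb(6))  # 8 - close enough to find the global peak
```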