Thursday, November 19, 2009

Just my opinion

Anyone who uses the word "quantum" in a serious discussion, and who has never solved Schroedinger's equation, should be stuck in a box with a radioisotope, a neutron-triggered poison dispenser, and an introductory textbook on quantum mechanics. This is what we call "incentive to learn".

First nominee for this treatment: Deepak Chopra. Anyone got their own preferred woo-merchant?

Tuesday, November 03, 2009

Not Sophistication

Yeah.

So.

This is kinda embarrassing...

How can I put this? Some people are alcoholics. Some people snort cocaine. Some people visit prostitutes. Some people have scary fetishes.

I'm afraid my personal addiction is not nearly as socially acceptable.

Yup, I'm a Dungeons and Dragons webcomic freak.

I swear I never saw it coming. It started off small, just a little bit of Order of the Stick when I was feeling bored. Then a gamer friend landed up in hospital, and I bought him some OOTS books to keep him company, and... I couldn't resist their siren song. I fell into depravity like an expensive, beautifully-painted character model onto a stone floor*.

Next it was Goblins. But hey, two comics ain't so bad. I could give it up any time I liked.

I dabbled with Irregular Webcomic, but quickly moved on to harder drugs. I tasted Looking For Group, and life was sweet for a while.

But then I discovered Darths and Droids and DM Of The Rings. Webcomics about fantasy films, in the style of D&D adventures? I think I've hit bottom here. It's time to admit: I need help.

The really irritating thing is that I don't even play Dungeons and Dragons.

* It's well-known that the force of impact of a dropped model is proportional, not to the height of the table it's knocked off, but to how annoyed you'll be if it gets broken.


Sunday, October 18, 2009

Sophistication

You know that guy? The one who is always in great shape despite apparently living off fast food. The one who always gets top marks despite apparently never revising for exams. The one who can drive, skate, ski, swim, fight, play every card game known to man, and all without ever seeming to break a sweat.

Everyone knows someone like this (guy or girl). After years of wondering how the blazes they do all that and still look so laid back, I've come to a conclusion.

They're frauds.

Sure, they may have a slightly broader range of talents than the average bloke. Sure, they possibly started out with slightly better strength and dexterity than us mere mortals. But there is no level of innate ability that could set them that high above the rest of us.

I think that, for every burger you see being eaten, there's an hour in the gym that you never find out about. For every cakewalk of an exam, there are many frantic hours of secret preparation. For every activity that they're "just naturally good" at, they have undoubtedly spent time preparing and training.

This is reassuring. It means that anyone can do what that guy does. Anyone can upgrade themselves to the status of god among men. It just takes a lot of work. Specifically, it takes a lot of work without any sort of immediate reward.

I now have a goal in life. Firstly, to practice the discipline required for this sort of long-term training plan. Secondly, to develop the sophistication required to STFU.

I will make better progress towards both of these goals if I get a good eight hours sleep. Goodnight.

Bwahahaha

I've been trying to cut back on my public use of mad-scientist laughter. Despite its proven stress-relieving effect, it does disturb my co-workers somewhat.

To compensate, I intend to use my blog as a gloating platform. I consider this to be ethically acceptable on three grounds:

1) My blog, my rules (incidentally, the new dress code round here is "winged monkey")
2) Blog-reading is strictly voluntary - you fools chose to read this garbage
3) It's not like you can do anything about it anyway

Those preliminaries out the way, I would just like to say I'm very happy. And boastful. But mostly happy.

About four months back now, I got bounced up to a different part of England - I'm sure I've whined about it previously. My temporary home is a hotel in a little seaside town that wishes it was Las Vegas. For generations, anyone with any brains or talent has been escaping from this dump*, and the result is reminiscent of Innsmouth without the successful fishing industry.

Coming to terms with the mindless tedium presents an interesting challenge. As a partial solution (because there's only so many books you can cram in a suitcase - believe me I know) I've taken up Taekwondo. I've previously done Karate, but they don't have a club for that within walking distance of the hotel, so what the hey.

I've actually been really enjoying it. This is the first martial arts training I've done in about three years, and it's been good to feel the old skills starting to come back (plus a few new ones). It's been going so well that I've been rather looking forward to the grading, which was earlier today.

Ah, the grading. I used to think the sweetest words you could hear in a grading were "you've passed". It turns out I was wrong. The sweetest words are "we've decided to let this student skip a belt". I've jumped yellow-belt entirely and gone straight to yellow-with-green-tag. Who thinks up these colours?

Normally I would take this with a large dose of humility - until this morning I was the only adult white-belt at the club, so they could have just been letting me catch up with my "peer group". However, when they were handing out the new belts, it transpired that they did not have a green-tag belt with them, which suggests that this was a spur-of-the-moment decision based on my performance in the grading.

Quite apart from providing me with excellent bragging rights, this episode also highlights the unreasonable effectiveness of the human brain (yes, even mine). Once connections are made, they tend to stay made. Once a skill is learned, it persists far beyond its anticipated sell-by date.

The moral: never be afraid to spend a bit of time learning a new skill, or polishing an old one. I'm going to be leaving this area in December, so I'll only have had a few months with this club. But even this short few months has been good for me. And I know that, next time I decide to take up a martial art, it'll be easier than ever.



* This is unduly harsh. There are many good, intelligent people here - I work with a bunch of them. But the social agenda seems to be completely controlled by people who think fart jokes are the height of humour, big flashing lights make a place look modern and tasteful, psychics can solve all your problems, and Sophistication is an island in Greece.

Sunday, September 27, 2009

Where did they go?

I'm not a good programmer. I'm naturally quite techie, but I've never really had the patience to sit down and make great art with my PC. The skills I have are those I was able to absorb from those in my immediate vicinity - replication not initiation.

That said, I do enjoy absorbing the culture of tech - The Story of Mel, the Jargon File, esoteric languages, and the old tales of the MIT AI lab and Xerox PARC. I read books by Neal Stephenson and William Gibson. Buried somewhere on my hard drive are the complete archives of Phrack mag, although I barely understand half of it (typically the obsolete half - yay analog phreaking!).

And one of the things I pick up from these shards of geekiness is a sense of wistfulness. They talk about the September that never ended. They talk about the AI Winter. They refer to newsgroups and communities of unsurpassed elegance and sophistication, that now no longer exist. There's a sense of stumbling across the forgotten artifacts of some lost higher civilisation. Once heroes and wizards strode the Earth; now there are only echoes.

Where did they go? What happened to the cypherpunk generation? Did they all quit programming, or wind up in wage-slave jobs that crushed their creativity, or die of drug overdoses, or get locked up for not respecting someone's lack of respect for computer security? Have the elves passed into the West, making way for the Age of Man?

I hope not. I like to think that, somewhere, these ideals hold on. Somewhere, in a hidden mailing list, on a firewalled server, carried by a stream of encrypted emails spliced into innocuous data, the crypto-anarchist dream lives on. It's just waiting to be found, locked behind doors that cry out for the right key. So what if the key in question is 8192-bit?

Maybe I'm deluding myself. Maybe the cypherpunk movement just died out, faded back into oblivion. It would be a poorer world if that were so, but the world has no responsibility to respect our desires.

But I allow myself this one dream. And in consolation for the lack of evidence, I hold this thought tightly:

If they couldn't hide themselves from people like me, they wouldn't be worth admiring...

Sunday, September 20, 2009

How To Learn French

1) Work your way through Wikibooks: French

2) Download a freeware French-English dictionary (I'm using Freedict - open-source but not very user-friendly).

3) Download a good book here and ici, and read them side by side.

4) Tune in to web radio (alternatively pick a DVD, and set the language to French and the subtitles to English).

It's amazing how long you can spend on the internet and still not use it to its full potential.

Monday, August 17, 2009

An example thereof

In my last post I explained why it was a good thing to be skeptical before passing ideas on.

Now, I've just come across a great reference to "the world's oldest apocalypse prediction":

"Our earth is degenerate in these latter days. There are signs that the world is speedily coming to an end. Bribery and corruption are common."
- Assyrian clay tablet, circa 2800BC


This reference is sourced to Isaac Asimov's Book of Facts. The problem is, I can't substantiate it. I don't have a copy of the Book of Facts handy and, even if I did, I would need to know more about the clay tablet in question before I could trust Mr Asimov's word on this.

So if I were to use this quote in any sort of serious discussion, I would need to accompany it with a shot of skeptical "penicillin". I would have to make my friends aware that I could not stake my life on it being accurate. This would be boring and long-winded.

The only alternative is to try to track down the tablet in question online. This is not proving easy: googling for the translated text just finds thousands of people who have quite clearly copied it straight out of the Book of Facts. This is not corroboration.

So I need to dig deeper. With a bit of effort, I'll be able to figure out how the Assyrian research community organises its information, which should give me some idea of where to find this particular tablet. So far I've come across the Cuneiform Digital Library Initiative (not so helpful as it doesn't give translations) and the Neo-Assyrian Text Corpus Project (which appears to be defunct).

Beyond that, I may have to gatecrash the local university library. Watch this space.

It may take a while to learn the truth here. Heck, I might actually need to learn Assyrian to track down the tablet (or to demonstrate that it probably doesn't exist). I am unlikely to go that far. But the time I do spend on this exercise will be time well used - a tithe spent on improving the information available to the community as a whole.

Wash your hands before blogging

On this blog I often talk about skepticism. But what does this actually mean? Beyond the statistics, the science and the logic, what is it that defines us as skeptics? What is the driving force behind our community of pedants?

The answer is simple. When you get right down to it, modern skepticism is about hygiene.

Bear with me here...

Why do we wash our hands? Because there are tiny self-replicators called bacteria and viruses that can infest them. These bugs eat the nutrients on our hands, and given half a chance will take a bite out of the hand itself. They are harmful.

They also spread rapidly. When we perform the various acts of hygiene - using a tissue when sneezing, washing our hands after using the loo, cleaning up after our dog - we aren't just protecting ourselves. We're protecting those around us. Washing your hands makes you safer, sure, but it also helps slow the spread of disease through your community.

Why do we apply skeptical principles to our thoughts? Because there are tiny little self-replicators that can infest them. We call these replicators "memes", by analogy to biological genes. A meme is simply a bit of information that can "copy" itself from one human mind to another. It could be an email hoax, a news story, a technique for producing origami boats, a poem, or even a blog post.

Some of these memes are useful; some are harmful. Memes can encourage you to feed the homeless, or to give all your money to scammers. In general, memes that correspond well with reality are less likely to cause harm. Truth is usually better than falsehood.

When we pick up biological diseases, we have a responsibility to ourselves and others to limit the damage those germs can cause. When we pick up memes from others, or when we pass our memes on, we have a similar responsibility to ensure that they are realistic. We must use the disinfectant of rationality, the soap of science and the hot water of critical evaluation to ensure that no-one will be injured or killed because we infected them with a dangerous untruth.

This explains some of the distaste that scientists and skeptics sometimes show towards people who believe in UFOs, homeopathy, psychics, creationism, conspiracy theories... and gods. It's not the beliefs that disturb us; rather, it is the lack of intellectual caution that these beliefs demonstrate.

In general, these believers have not bothered to "wash their hands". They have not attempted to protect themselves from bad memes, and they happily pass on their mental plagues to others. These people are walking around with unwashed minds, ready to transmit all sorts of potentially-harmful diseases.

It's unhelpful. It's dangerous. And it's certainly not hygienic.


Friday, July 10, 2009

Rituals of my people

Last Sunday I went to the christening of my cousin's second son. It was a full church service, and was rather well attended by various family members and friends of the proud parents. The kid is very sweet. All told, a nice day out.

So what's bugging me?

Well, like I say, this was a proper church service. It's actually quite a while since I've been to a church except as a tourist, so this took some getting used to. And, after spending years discussing religion in blogs and forums, I found myself with a very strong urge to hit the "reply" button...

I was amused by the Bible reading that included Matthew 13:47-49a and carefully airbrushed out the less family-friendly Matt 13:49b-50. I was mildly irritated by the liturgical question-and-answer format - what if you don't agree with the prescribed answer*?

And I was actually rather bothered by the content of the liturgy. There are several parts that to a non-Christian like myself are a bit disturbing. For example:

Faith is the gift of God to his people.
In baptism the Lord is adding to our number those whom he is calling.
People of God, will you welcome these children and uphold them in their new life in Christ?
All: With the help of God we will.


I'm sorry, but this child is not old enough to be considered one of "our number". He's part of your community, sure, but at his age you can't meaningfully say he subscribes to your beliefs. Beliefs come later. And he might not share your views even when he grows up. Talk about counting your chickens before they hatch...

The entire liturgy is founded on the assumption that, if you're born into a particular family or community, you're going to grow up as a Christian. I reject that assumption. The kid will follow his own path and, as a responsible relative, I'll support him whatever that path might be.

If he becomes an atheist then that's cool. If he becomes a Christian then fair enough. If he becomes a Muslim, Hindu, Buddhist or goat-sacrificing Satanist then I'll still be on his side. Any other attitude is reprehensible.

If atheism's popularity increases, I foresee a day when we'll start to develop rituals and liturgies of our own. When that day comes, there will be a lot we can learn from the Christian versions.

In the case of child baptisms, I hope we learn what not to say...



* I handled this situation by just not saying anything while everyone around me muttered their responses. Who says I'm not tactful? Incidentally, I'm pretty sure I saw some other folks doing the same thing.


Wednesday, July 08, 2009

Walk on by

There's a topic I've been meaning to cover for about two years now. And every time I decide to write about it, something comes up, goes down, or otherwise gets in the way. I'm jinxed.

It's a simple little thing: how we walk.

Now mostly this isn't something we think about much. If strolling down the street required cogitation at every step, there would be even more couch potatoes in the world. But there's a lot to find interesting...

An example. Next time you walk down a street, try to imagine that you're encased in a big solid sphere, like those Zorb balls. Convince yourself that the ball is rock solid - no-one can get through it to bump into you. Visualise the people around you rebounding from the ball if they try to push too close.

What you'll find is that you can walk straight at someone and they will always get out of your way. This is really kinda cool. And it doesn't seem to be anything to do with physical size or intimidation - I've seen tiny women pull this trick on burly blokes.

On close examination, it turns out that the "simple" act of walking past someone is actually quite complicated. As you approach a person, you use a range of subtle cues to plan a route round them, based on the direction you think they're going to head in.

The most important of these is probably foot position - your feet tend to point in the direction you're planning to go, and other people will pick up on this. If you really want to confuse someone, try walking past them on their left while keeping your feet pointed towards their right. Chances are good that they'll walk into you.

That's why the Zorb ball trick works. When you visualise being surrounded by an impenetrable force field, your feet point straight forward regardless of who is in your way. Everyone else unconsciously notices this and walks around you.

There's a metaphor in there somewhere.


Thursday, July 02, 2009

W00t

Just passed another actuarial exam. That makes three so far (out of eleventyumpteen...).

For bonus points, this is the one I didn't think I was going to pass first sitting. I am very happy right now.

Downside: by long-standing tradition, I have to buy cakes for the entire office come Monday.

Wednesday, June 24, 2009

Effect/Cause

A common theme in modern skepticism is how people naturally see causation where none exists. Got cancer and live near a phone tower? Must be evil vibes from the GSM network. Got a child with autism? Must be that damn MMR jab.

"But surely that's just something that crazy people do? Us nice normal skeptical folks would never put the cart before the horse," I hear you cry. Well, I'm sorry to break it to you, but it's a natural human trait. And the thing about natural human traits is they don't just affect the nutters; they apply to everyone...

An example from the world of martial arts. I used to do a lot of Karate, so I have a feel for how to punch, kick and otherwise mutilate an opponent. I haven't trained for years but, as a result of a change of location, I've decided to take up Taekwondo. My first lesson was yesterday.

So a few minutes into the lesson I'm kicking and blocking like mad, but it just doesn't feel right. I'm stiff, I'm tense, my techniques don't flow nicely. This sucks.

Now, one thing I've heard people tell beginners in a whole range of sports is "relax and your technique will improve". So I decide to consciously try this. I systematically unclench my arms and try a few more punches.

And amazingly... it completely failed to work. My arm muscles, not being trained for this kind of task, weren't able to throw my fists forward without over-punching (which bloody hurts). My untensed leg muscles weren't able to lift my feet above hip height.

This turns the old adage on its head. It's not a case of "relaxation leads to improved technique". It's more like "being out of practice leads to poor technique, and your weakened muscles tense up trying to compensate".

So correlation has been confused with causation, and the resulting expert advice turns out to be useless. I wonder how often this happens?


Friday, June 12, 2009

Stalinism ahoy!

As you may have noticed, I've been doing a bit of redecoration round the ol' blog. A new template, proper use of folds, and an upgrade to the new Blogger Layouts should all combine to make the blog both readable and maintainable.

But that's just Phase 1 of the evil master plan...

For Phase 2, I'll be going back through the blog archives. In true Stalinist fashion, I aim to delete all the blogorrhea that's accumulated over the years: the whiney posts, the inappropriately rude posts, the posts that I clearly wrote after several pints of whiskey (it's amazing how the ability to type is always the last thing to go).

Whilst I'm at it, I'll add these new "tags" that all the cool kids are experimenting with. I'm really just a victim of peer pressure...

Phase 3 needs a bit more thought. Basically, I'll make a list of stuff I know just well enough to post about, and set up a poll or something to see what my - largely nonexistent - readers are most interested in.

Which neatly leads us to Phase 4. If I'm going to put more effort into this blog, it would be nice to know that someone is reading it. I know I'm never going to be Pharyngula, but I worry that if I spend too much time talking to myself I'll wind up in a padded cell. And those things don't even have wifi access.

In thinking about this, I've found that I don't really understand how blogs attract and retain readers. This is something I'll need to consider further.


Monday, June 08, 2009

I'm a weirdo

Today I walked about an hour out of my way to give blood. The sugar rush from the cookies afterwards has to be felt to be believed. I'm now awaiting receipt of that lovely letter they send out: "Thank you for giving blood. Unfortunately we cannot accept your donation because you have HIV, Malaria, Asian Bird Flu and at least two Hepatitis variants. Consult your local undertaker."

On the way home I saw a very small fledgeling bluetit that had apparently fallen out of its nest and waddled into the road. Needless to say, it was a little bit shellshocked. I picked it out of the road and stuck it in the bushes before it could metamorphose into a very wide fledgeling.

I wasn't able to help an old lady across the road, but only due to a regrettable shortage of old ladies in these parts. This being the North of England, where fat is the fifth food group, they probably all die young of coronaries or acquired diabetes...

These acts of madness are not isolated incidents. Only a couple of weeks ago, whilst camping, I took half an hour of time out from the festivities to help the bloke in the next pitch put up his tent. And I'm no better in this respect (or worse, I hope) than the average guy on the street.

There was no obvious benefit to me from any of these. Apart from the cookies, the blood donation was just a very long-winded way to get mildly dizzy. Bluetits are not known for their gratitude, and this one gave me nothing but a mildly increased risk of Asian Bird Flu. And one of my (young, female, single) co-campers did comment "oh, you're so nice", but sadly she's not otherwise interested in me.

So why do we do this crazy stuff? Needless to say, I have a theory - sorry, hypothesis. And it allows me to neatly illustrate a misunderstanding that many people have with evolutionary biology.

The key concept I'd like to introduce here is the difference between proximate and ultimate causes. Humans perform a great many activities that - considered in the short term - are daft in the extreme. Consider the well-known spike in mortality rates for people in their early 20s, due largely to deaths from violence (accident, homicide, suicide).

All told, the human race appears to consist of idiots who waste their time and life expectancy for no better reason than "I felt like it". We shall call this the proximate cause of their actions.

Occasionally people are able to justify their behaviour in terms of some longer-term plan. For example, I work for a financial company because I'd quite like to make lots of money, but I work in pensions rather than investment banking because I would prefer not to die of exhaustion by the age of 30. In this case, we say that the proximate cause is supported by prior causes.

Very occasionally, we can trace our chain of causes all the way back to some very fundamental cause like "I don't want to die young". At this point, logic has to get off and hop - as David Hume pointed out, you can't reason from "is" statements to an "ought" statement. I reckon that these low-level goals are hardwired into me by evolution, so the ultimate cause of my actions is reproductive fitness.

But what about situations, such as giving blood, where I myself can't see any link between action and reward?

Well, the important thing to realise is: just because I can't see a link, doesn't mean it ain't there. Anyone who put my life under a high-resolution microscope might observe that, in giving blood, I've probably endeared myself to many of my co-workers. By making this comparatively harmless sacrifice, I've demonstrated that I'm a good, upstanding, altruistic chap who is welcome to marry their sister.

Now it's important to note that none of this went through my head. I didn't think "hmm, let's manipulate my colleagues' feelings"; what I thought was "ooh, there's a blood drive on, I can go help save someone's life". My impulse to do good appears to be completely disconnected from any sense of the consequences.

But of course it's not disconnected at all. The impulse is a side-effect of how my brain is structured, and of how it was programmed when I was young (which is more or less a side-effect of how other people's brains are structured). My brain structure is controlled by my genes. My genes have spent 3.7 billion years avoiding being wiped out, and they've achieved this by producing survival machines (like me) that are comparatively successful.

The result is that our actions - our unthinking, instinctive, intuitive actions - are quite often smarter than we realise. No matter how dumb the behaviour, there's probably a shred of logic hiding behind it.

In short: maybe one day I'll rescue a baby bird and consequently attract a bird of the human variety.


Friday, June 05, 2009

Actuaries 101

So it occurs to me that, in my last post, I left one important question unanswered: what, in fact, is an actuary? What do they do, and why is it considered a remotely sensible use of time?

Actuarial science is best considered as forward-looking accounting. Traditional accountants look at what has happened in the past and try to figure out whether a company is broke or not. Actuaries look at what is likely to happen in the future and try to figure out whether a company will survive it all...

An example. Let's say that you send a ship to India to pick up some tea. You want to be sure that you don't go broke if the ship sinks. So you buy an insurance policy.

The company who sells you the policy has a dilemma: how much do they charge? If they charge too little for their policies then, in the long run, too many ships will sink and they'll go bust. If they charge too much, their competitors will steal all their trade. By this point, their stockholders are probably breathing down their neck for proof that the company is doing the right thing.

How do they handle this situation? They ask an actuary. The actuary will look through the mathematical literature on ship failures, consider the specific situation, and propose an actuarial model: a set of formulae that will put a price on that policy. The model may handle a number of factors - expected weather conditions at this time of year, age of ship, amount of maintenance done, even the professional opinion of engineers paid to examine the ship. The goal is to calculate a figure that will keep the company's "risk of ruin" - their chance of going bankrupt - below a certain level.
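The flavour of calculation involved can be sketched in a few lines of Python. This is strictly a toy: the binomial model, the function names and all the numbers below are invented for illustration, and a real pricing model would weigh in far more factors. But the "risk of ruin" idea is the same - find the cheapest premium that keeps the chance of claims outrunning the kitty below some tolerance:

```python
from math import comb

def ruin_probability(n_ships, p_sink, payout, capital, premium):
    """Chance that total claims exceed capital plus premium income,
    treating each ship as an independent coin-flip (binomial model)."""
    funds = capital + n_ships * premium
    return sum(
        comb(n_ships, k) * p_sink**k * (1 - p_sink)**(n_ships - k)
        for k in range(n_ships + 1)
        if k * payout > funds
    )

def minimum_premium(n_ships, p_sink, payout, capital, target=0.005, step=100):
    """Smallest premium (to the nearest `step`) that keeps the
    risk of ruin below the target level."""
    premium = 0
    while ruin_probability(n_ships, p_sink, payout, capital, premium) > target:
        premium += step
    return premium

# 50 ships, each with a 5% chance of sinking for a 100,000 payout,
# backed by 500,000 of starting capital:
print(minimum_premium(50, 0.05, 100_000, 500_000))
```

Everything difficult in real actuarial work hides inside the assumptions fed to a model like this - where that 5% sinking probability comes from is precisely what the actuary gets paid to argue about.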

The three main areas of actuarial work are:

1) General insurance - dealing with the risk of expensive stuff breaking
2) Life insurance/assurance - dealing with the risk of people breaking
3) Pensions - dealing with the risk of people staying alive long after they've stopped earning

I mainly work in pensions, where the problem we deal with is that we don't know when someone will die. There are two different approaches to dealing with this:

1) Defined contribution schemes. These schemes hold a specified investment portfolio for each policyholder (PH), normally linked to the amount of money that the PH has fed into the scheme over the years. If, ten years down the line, all the scheme's investments fail, the PH just doesn't get much money. The hard part, then, is projecting the policy's value at date of redemption.

2) Defined benefit schemes. These set out in advance, according to some horrible messy formula, precisely how much money a pensioner will get. The hard part, then, is figuring out the amount of money the scheme needs to have right now in order to pay for all this. This is called scheme valuation and it is the subject of much actuarial thought, and of the heavy-duty actuarial software described in the last post.
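To give a flavour of what a valuation involves, here's a toy sketch in Python: the present value of a single pensioner's benefits, shrinking each future payment for both interest and the chance the pensioner is no longer around to collect it. The mortality table and every number here are invented for illustration - real valuations use published mortality tables and far messier benefit formulae:

```python
def pension_liability(annual_pension, age, mortality, discount_rate):
    """Present value of an annual pension paid at each year-end for as
    long as the pensioner survives. `mortality[a]` is the assumed
    probability of dying between ages a and a+1."""
    pv = 0.0
    survival = 1.0
    for years, a in enumerate(range(age, max(mortality) + 1), start=1):
        survival *= 1 - mortality[a]              # still alive at year-end
        pv += annual_pension * survival / (1 + discount_rate) ** years
    return pv

# Crude made-up mortality table: a 1% death rate at 65, rising 8% a year.
mortality = {a: min(1.0, 0.01 * 1.08 ** (a - 65)) for a in range(65, 121)}
liability = pension_liability(10_000, 65, mortality, discount_rate=0.05)
```

Note how sensitive the answer is to the inputs: nudge the discount rate or the mortality rates and the liability moves materially, which is why choosing those assumptions is the actuarial battleground.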

In general, companies don't like DB schemes because, if the scheme's portfolio fails, the company has to carry the can. This is hard to allow for unless you have an unlimited source of money. So companies prefer DC schemes.

By contrast, governments are perfectly happy with DB schemes. After all, if the scheme needs more money, they'll just raise taxes. And having a deterministic formula for benefits makes negotiation with unions easier. In the UK, I suspect that this rather blasé attitude is likely to backfire at some point when the public realises how good the benefits are in the public sector...


Actuarial software

I now have a wonderful little tool called a dongle, which means that, even if my company persists in sending me to far-flung* locations, I can keep playing with teh intarwebz.

And I can keep bothering my readers (if any are still around) with pointless theorising. Bwahahahaha.

On with the show. My last post discussed actuarial software, specifically how hard it is to get hold of. Since then I've done a little bit of research on the subject...

Q: What is actuarial valuation software?

A: It's software that allows you to pull some numbers out of thin air ("make actuarial assumptions"), punch 'em into a standard statistical model, and thus figure out how much money your company needs to stockpile to ensure that employees get their promised pensions.

Q: Why not just use spreadsheet software?

A: Because many of the statistical models require incredible amounts of processing power. Also spreadsheets are too easy; if actuaries used them then we'd lose our aura of mystery.

(More seriously, these models are easy to screw up so it's best not to have the uninitiated trying their hand at them.)

Q: Why not use a general-purpose programming language?

A: Because actuaries generally don't think of themselves as programmers. Most of them can't code for toffee, and can't be bothered to learn - after all, that's not what gets them the big bucks. The purpose of actuarial software is to allow actuaries to program without actually needing to know any of the relevant concepts.

Q: Are the statistical models really worth all this effort?

A: Not really. There's no such thing as a crystal ball, and no such thing as an actuarial model that won't be blatantly wrong thirty years down the line.

A good example is smoking. A lot of the mortality rates we use are based on the tacit assumption that a sizeable proportion of the population has been inhaling plant-based tar for a lot of their life. Now that smoking is becoming less common in developed countries, our models can't always deal with the resulting increased life expectancy. See the intro to this article for an indication of how technical this can get.

So why do we bother? Let's get this straight: actuarial models will not allow you to prove that you're saving the right amount for your employees' retirement. However, they will allow you to prove that you're saving some money, and that the amount you're saving is justifiable.

This is of great interest to regulators, so they force pension schemes to jump through these hoops. It's like getting a degree from a prestigious uni - it doesn't actually prove that you've got two neurons to rub together, but it does make it a lot easier to filter out morons. Read up on information asymmetry for more info.

Q: Back to the main topic. What does actuarial software actually do?

A: Your standard actuarial software package will contain:

1) A bunch of standard actuarial algorithms, designed to predict e.g. mortality of pensioners.
2) A set of modules to handle country-specific or industry-specific regulatory requirements.
3) A lot of dainty footwork to allow things like distributed processing (important given how hefty some models are).
4) A user-friendly interface (remember, this has to be used by actuaries, who are generally not techies).

Q: And you're actually planning to produce all that???

A: Probably not. But it's an interesting goal to think about.



* This is Britain I'm talking about. In USA terms, this translates to "the other end of the state".


Wednesday, March 18, 2009

Rant: The closed-source black hole

So despite my best efforts, I seem to have ended up in an actuarial career path. The only question now is how best to get ahead in that path.

As a tech geek, one approach that makes sense is to learn the programming languages and toolkits associated with actuarial science. There are many of these for different tasks.

One common task is asset valuation, and possibly the most popular tool for this is a software package called MoSeS. As far as I can tell, it's basically a new interface and a huuuuuge set of libraries slapped on top of Microsoft Visual Studio.

And that's all I can find out...

I've searched Amazon. Apparently there's no such thing as a MoSeS textbook. There's no such thing as a MoSeS user manual (at least not that they're willing to sell separately).

I've tried Google. There's no such thing as a MoSeS help page or online documentation. The absolute best I can find is a bunch of newsletters on Towers Perrin's website, which at least confirm that MoSeS uses C++ syntax but don't tell me an awful lot else. I can't even figure out how to buy a copy.

In fact, every single line of enquiry I've followed reaches their website and then stops dead.

Why? Because if they provide no documentation, people will be forced to buy their training courses. Because if they provide no purchasing info, they'll be able to negotiate prices on a case-by-case basis (with all the arm-twisting that implies). Because, when you get right down to it, actuarial companies are usually rich enough to pay over and over and over.

Now of course Towers Perrin have every right to do that - it's their product so they make the rules. But as a long-time FOSS user, I have serious trouble getting my head round this concept. I keep thinking "that's a stupid way to behave, if they keep on like that then someone will fork the codebase". And then I remember, oh yeah, it's closed source so no-one can do that.

The actuarial world has always been relatively slow to adopt change. This is a natural side-effect of e.g. working with pension schemes that will almost certainly survive longer than the people currently maintaining them. Sadly, in software terms, it appears that actuaries have gotten as far as 1976 and stopped there. This is a damn shame, especially since I have to work in this industry.

I now have a mission in life: to get good enough at FOSS programming, actuarial science and copyright law that I can help create an alternative to this intensely painful arrangement.


Monday, March 16, 2009

The Downside

Today I'm feeling slightly irritated with skepticism. This doesn't happen often, so it's possibly worth discussing why.

Skepticism isn't a state so much as a goal. It involves a constant churn of invalidated notions, and an excessive focus on seeing, hearing and speaking no falsehood. As Feynman puts it: "The first principle is not to fool yourself - and you are the easiest person to fool."

In general, the effects of this are good. I have a somewhat better idea of what is sensible than the average human being. I'm better able to recognise when the emperor has no clothes. I recognise when my beliefs are not secure, and I am careful to state any caveats that may apply.

In many situations (particularly the regulation-happy financial industry), this is a good thing. In other situations, it sucks...

In particular, I've been reading about Cold Reading and Bavarian Fire Drills and similar psychological tomfoolery. I understand far better than most how these principles work. And yet I know I'll probably never manage to pull one of them off.

That's because I'm pathologically honest. If I were to try bluffing my way past a gate guard with the tried-and-true "don't you know who I AM???" tactic, I'd be forced to stop and say "well actually you probably don't know who I am, but you should let me through anyway. Pretty please? Alternatively, don't hit me too harOWWWWW!"

Needless to say, this is sub-optimal. Especially when the time comes for salary renegotiation.

There's really only one solution for it: I need to train myself to lie better. Now in the scientific world this would be a bad thing, because it would mess with people's "data hygiene" (although scientists tend to have access to fairly strong "data disinfectants"). In a long-term friendly relationship, this would be a bad thing, because it would damage the basis of trust. In many other situations, it's just unnecessarily cruel.

But in adversarial situations, such as the salary renegotiation or bartering in a market or dealing with evangelicals*, effective lying - or at least the charisma that's required to lie or bullshit effectively - is a survival trait.

Next question: how precisely does one exercise this faculty? Answers on a postcard.



* I've got into a habit where, if I wind up in a conversation with someone trying to convert me, I make a comment like: "Look, I've been in this debate for years, and I know all the arguments really well. So if I come out on top, that doesn't necessarily mean a lot."

This is the point where I realise I'm doing something wrong.


Thursday, February 12, 2009

My First Patch Submission

21:00 - Hmm, that's odd, the device manager on my Ubuntu laptop won't open up. It crashes. Very strange.

21:05 - Wait a second, that's not just a crash, that's a Python stack-trace! I know Python, I bet I could figure out the problem here. (And then everyone will love me and my name will live forever etc etc. Really I'm just in it for the girls.)

21:15 - Hey, it looks like hal-device-manager only consists of two Python scripts! This should only take five minutes.

00:10 - Wow that took a long time. But I've finally found the bug (see below) and I'm ready to submit it.

00:15 - Uh, where do I submit it?

00:30 - OK, so it looks like Ubuntu are getting the code off of Debian, who are getting it off FreeDesktop.org. Before I get in touch, I'd better check that the FreeDesktop folks haven't fixed it in the latest version of their code.

00:40 - Uh, where in tarnation is the code for this app? I can't find it in the git repository anywhere? Better google for it.

00:45 - Whadda ya mean, obsolete? NOOOOOOOOOOOOO!!!!!!!!!!!!!!!!!!! *sob*

For anyone who is the least bit interested in this sort of thing, the bug was actually kinda cool. What the code attempts to do is:

1) generate a list of devices (each of which has a predefined "parent" attribute)
2) for each device, create a list of that device's children (thus defining a device hierarchy) and ensure that the children's "parent" attribute points nicely at the parent
3) recursively set some of the children's attributes based on the parent's attribute

This works fine... except when there is more than one device with exactly the same name, and where those twin devices have a child. In this case, step 2 will result in the child being registered as a child of both twins, because the matching loop doesn't stop after its first hit. From the child's point of view, the second twin (the last match) will be designated as its parent.

When step 3 happens, the first twin is initialised. Then its child is initialised. But to do this, the child needs to check some of its parent's attributes. And who is the parent? The second twin, which hasn't yet been initialised. So the child can't get the info it needs and thusly throws a tantrum.
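The failure mode above can be reproduced with a stripped-down sketch of the matching loop (class and attribute names are my own stand-ins; the real hal code also has a virtual root device):

```python
# A stripped-down reconstruction of the buggy matching loop (my own class
# names, not the real hal code).

class Device:
    def __init__(self, name, parent_name=None):
        self.device_name = name
        self.parent_name = parent_name
        self.parent_device = None
        self.children = []

def link(devices):
    for device in devices:
        for p in devices:
            if p.device_name == device.parent_name:
                device.parent_device = p
                p.children.append(device)
                # bug: no "break" here, so a later device with the same
                # name matches again, overwriting parent_device

twin_a = Device("usb_hub")
twin_b = Device("usb_hub")  # twin with exactly the same name
child = Device("mouse", parent_name="usb_hub")
link([twin_a, twin_b, child])

assert child.parent_device is twin_b  # parent ends up as the second twin
assert child in twin_a.children       # ...but registered under both twins
assert child in twin_b.children
```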

The patch is to stick a strategic "break" statement into the code:
--- DeviceManager.py.old 2009-02-12 23:15:48.000000000 +0000
+++ DeviceManager.py 2009-02-12 23:16:44.000000000 +0000
@@ -286,6 +286,7 @@
             if p.device_name==parent_name:
                 device.parent_device = p
                 p.children.append(device)
+                break
         if device!=virtual_root and device.parent_device==virtual_root:
             virtual_root.children.append(device)
         if device==virtual_root:


I suspect this situation won't come up very much, possibly only on those systems that (like mine) have a Broadcom BCM4401-B0 100Base-TX ethernet controller.


Thursday, January 22, 2009

That education rant

Edit 14 June 09: I'm marking this post as purgeable because large chunks of it are borderline unreadable, even to me. If anyone finds the subject matter particularly fascinating, leave a comment and I'll rewrite it.

There is a phenomenon in the educational system known as "dumbing down". It's a lowest-common-denominator situation: where some people have trouble understanding a concept, you either remove the concept from your syllabus or spend crazy amounts of time explaining it.

The problem this creates is that the "improved" material is far harder for smart people to understand. Case in point: my actuarial study notes...

Actuaries are (ideally) clairvoyant accountants. Whereas a normal accountant can only see what your assets are worth now, an actuary can look at something like the cashflows associated with a pension scheme or insurance policy and make an educated guess about how much those cashflows will be worth in the future. This requires substantial maths to try to pin down the odds of various events (death, illness, car crashes) happening to the individual who took out the policy.

So actuaries have to be quite heavily trained. In England, this training is regulated by the Institute of Actuaries. The IoA run the actuarial exam system. They also provide a sort of syllabus known as the "Core Reading", which is a very terse description of everything examinable.

Problem is, a lot of people who take the exams don't have a strong maths or finance background. So a secondary market has grown up, which is mostly filled by the Actuarial Education Company (ActEd), a for-profit organisation supplying lecture notes. ActEd license use of the Core Reading material from the IoA, and they intersperse it with lots of detailed discussion.

This can sometimes be very useful. For example, where the Core Reading might just list a type of tradeable asset by name, the ActEd notes will provide a detailed description with examples. However, sometimes it can be very very annoying.

Currently I'm trying to read up on Markov Chains. For me, this is not a difficult concept. But revision is going veeery veeery sloooooowly, because for every five lines of actual maths I also have to digest two pages of wordy, confusing, often seriously dubious information. This slows my learning pace down to a crawl, not least because I frequently have to re-read stuff to convince myself that yes, they are bothering to say something that obvious.

If their only problem was overdocumentation of the Core Reading, I'd be unhappy but I'd accept it. I'm aware that many people (cough*economists*cough) don't have my familiarity with mathematical terminology, and these people also deserve some support. But what I can't handle is the fact that the Core Reading is also underdocumented.

Sounds paradoxical? Well, let me explain. The average mathematical proof, as written down, will contain approximately one part non-obvious statements to four parts obvious ("trivial") statements. The trivial bits are just mathematical filler - they make it easy to see the links between the key non-trivial assumptions and the conclusion.

As a mathematician, I would expect ActEd to take the following approach: lay out the whole proof, devote a fair amount of time to justifying the key assumptions, and if necessary spend a smaller number of column inches dissecting and rephrasing the rest of the logic.

Apparently ActEd disagrees. Their main approach appears to be: go through the proof step by step (no overview), ignore any steps that would take too long to explain, and go into mind-numbingly overwrought detail on the bits that economists might possibly be able to get their heads round. It's a sort of triage approach.

For example, one section of the material I'm working on is devoted to "time-homogeneous Markov jump processes". This is a very technical name, but the concept is simple. Imagine a system (say a person) that can be in a number of states (say healthy, sick or dead). Specify the rates of transition between the various states (for instance the rate of transition from dead to healthy is zero).

One problem we often need to solve is: what are the odds of staying in a certain state S for a certain length of time T? Now it's fairly easy* to find out the odds of a person starting in one state (say healthy) and being in that state again in e.g. two years' time. But that doesn't take into account the possibility that they might have skipped between states (healthy->sick->healthy).

One approach to calculating this is as follows. Define a set of events** Bk. B0 is the event that the system is in state S at times 0 and T. B1 is the event that the system is in state S at times 0, T/2 and T. B2 gives S at times 0, T/4, T/2, 3T/4 and T. And so on, with each new event doubling the number of times after time 0 that the system must be in state S. All of these are relatively easy to calculate.

Notice two other things. Firstly, event Bk implies event Bj whenever j is less than k. For example, you can't hit S at times 0, T/2 and T (event B1) without automatically hitting S at times 0 and T (event B0). Set-theoretically, the events form a nested, shrinking sequence: Bk is a subset of Bj for j < k.

Secondly, the event that the system stays in state S until time T (the thing we were having trouble calculating) is the intersection of all the events Bk. You achieve stasis (an event we'll call {T0 > T}, meaning that the first transition time T0 is later than T) only by achieving every one of the events Bk.

This is hard to see, but imagine if the system hopped out of state S for just a minute then hopped back in again. There would be some value of k large enough that a multiple of T/2^k would fall into that minute. So if you leave state S, you fail that event Bk, and therefore you fail the intersection of the whole ensemble of events.

So we can now outline a method for calculating the probability of {T0 > T}.

Step 1: {T0 > T} = ∩(k=0..∞) Bk

The ∩ symbol means an intersection of events, and the ∞ means that you include all such events right up to k = infinity.

Step 2: ∩(k=0..∞) Bk = lim(n→∞) ∩(k=0..n) Bk

This means that, if you take a finite intersection (up to n) and then let n tend to infinity, you get the same result as just going straight to infinity.

Step 3: lim(n→∞) ∩(k=0..n) Bk = lim(n→∞) Bn

This follows from our earlier note that the events are nested. If you take the intersection of a finite number of nested events, you just get the smallest (most restrictive) of those events, which here is Bn.

Step 4: Therefore P({T0 > T}) = lim(n→∞) P(Bn)

P(X) just means "probability of event X taking place". The step from events to probabilities is legitimate because probability measures are continuous along decreasing sequences of events.

We now have a nice elegant approach to calculating this nasty value P({T0 > T}). We can just find a formula for the values P(Bn), and then see what happens as n gets bigger and bigger. The limiting value of this sequence will be the probability we're after.
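For a concrete (and entirely invented) two-state illustration, we can watch P(Bn) converge. With exit rate a from the healthy state, the holding time there is exponential, so the limit should come out as exp(-aT):

```python
import math

# An invented two-state ("healthy"/"sick") illustration of the limit above.
# Exit rate from healthy is a, from sick is b; for this chain the exact
# answer is known: P({T0 > T}) = exp(-a*T).
a, b = 0.3, 0.7
T = 2.0

def p_SS(t):
    # exact healthy->healthy transition probability for the 2x2 generator
    # Q = [[-a, a], [b, -b]]
    return (b + a * math.exp(-(a + b) * t)) / (a + b)

def P_B(n):
    # P(Bn): in state S at all 2^n grid times, via the Markov property
    steps = 2 ** n
    return p_SS(T / steps) ** steps

for n in (0, 2, 5, 10, 15):
    print(n, P_B(n))        # a decreasing sequence...

print(math.exp(-a * T))     # ...approaching the true P({T0 > T})
```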

Now, this was not an easy proof for me to explain. But allow me to point out three things:

1) Anyone who's got this far in the course will already have a thorough understanding of terminology like events, unions, probabilities, etc. ActEd doesn't need to explain that at this point.

2) The ActEd notes focus entirely on the symbol-manipulation steps 2-4. These steps are basically trivial - they shouldn't require much more than the line or two of elaboration I gave them. The mathematically interesting step 1, which should be analysed in detail, is completely ignored.

3) And yet they still took more words to describe this proof than I've used in this entire essay!

Reading these notes is giving me such a headache.



* Trust me on this. Or alternatively complain loudly and I'll provide you with a more detailed explanation.
** An event is a thing that may or may not happen, depending on the random behaviour of the system being studied.


Sunday, January 11, 2009

Practical philosophising

Edit 14 June 09: I'm considering purging this post. There's some good philosophy going on in there somewhere, but it is not well-argued. I'll probably rewrite it as a sequence of posts at some point.

The basic principle of modern skepticism is that it's not about what you believe, it's about why. A good decision made for bad reasons is an oxymoron: flukes don't count.

Of course, this is not a complete answer. How do you know if your reasons for believing are good? For a start, you can look at how different approaches have panned out in the past...

In some areas, this is pretty much a solved problem. As far as figuring out how e.g. fundamental physical objects interact, it's a gimme: the scientific method as applied to physics is where it's at. Similarly for chemistry, cosmology and basic biology. By following the scientific method, you can analyse, understand and, most importantly, predict. The end justifies the means.

So what are the limitations of this method? What are the pathological situations where it breaks down? My favourite example is poker. By definition it's impossible to predict the responses of a good poker player: if they were predictable, they wouldn't be any good. The scientific method breaks down when someone is actively trying to subvert your results, which is why many excellent scientists get badly suckered by psychics.

And yet, in many ways, this is the area where understanding is most important to us. We live in a world constructed almost in its entirety by other people, none of whom would feel terribly comfortable if they thought we fully understood them. The man who understands you can manipulate you, can use you for his own ends. Attempting to understand someone borders on the disrespectful, and people will resist it tooth and nail.

The extent of this effect can be demonstrated by looking at the quite frankly convoluted procedures psychologists have to use. To get useful results from a subject, the good psychologist will have to thoroughly deceive that subject about the goal of the experiment. If you're doing an experiment about attention spans, tell them it's about pattern recognition. If you're doing pattern recognition, tell them it's about political reactions. Otherwise your data will be hopelessly contaminated by your subject's second-guessing of their own behaviour.

So how can we understand another person well enough to e.g. date effectively? There are two basic approaches: the theoretical and the practical.

The theoretical approach tries to achieve an understanding of your date that runs one level deeper than they expect. This trick is well known to stage magicians: by planning one step further ahead than your audience anticipates, you can create the most magical effects.

One example that I like was provided by my sister. When she was younger, a number of her friends were big fans of the horoscopes in the local paper. So if she was out with a friend, and that friend's horoscope said "a person wearing blue will be important to you today", my sister would wear blue. By understanding the factors that influenced her friend, she could achieve a closer relationship.

In dating, a similar role is played by the modern romance. Human beings are strongly conditioned by films and books to expect certain things of each other. This doesn't always work out well for us as a species. It's noticeable that the heroes from action movies would have real trouble with the heroines from romantic films. So when an action-loving bloke meets a romance-loving girl (or vice versa) there are likely to be fireworks. And not in a good way.

The way to get round this is exactly what you'd expect: watch lots of sloppy romance films. This is a real issue for me - I'm a hide-behind-the-sofa kind of guy when it comes to this sort of thing. I empathise with the characters far too much to watch them blithely making fools of themselves in this way. I can't stand Mr Bean either.

But in the interests of fostering better communications between the sexes, I'm willing to make the effort. I just watched my first romantic comedy in years: the film Hitch, with Will Smith playing a relationship coach. I can't say I enjoyed it - in fact I spent most of it clutching my head and yelling at the screen "no! You idiot! Don't do that!!!" But I feel I've grown as a person. Soon I hope to be able to view stuff like this without needing heavy doses of anaesthetising whisky first.

Oh, and the practical approach to dating? That's to just try it a few times and see what works. In the end, this really is the only way of getting anywhere with something as complex as a woman...


Wednesday, January 07, 2009

Pythonia

"I aten't dead"
- Granny Weatherwax


Warning: blog post contains factual inaccuracies. See comments section for discussion.

So I keep meaning to write up my experiences in India, and I keep getting distracted. For example I've developed an interest in clockwork (don't ask).

Even more braincell-consuming is my ongoing effort to produce some sort of computer game worthy of the title. This is going very slowly, mostly because I'm taking a very anal approach to it. In short, I want to make it Pythonic.

An explanation. My favourite programming language, Python, puts a lot of effort into inculcating its users with good programming style. This is something very different from simply being able to program in the language, in the same way that not every writer of English prose can compete with Shakespeare. Good Python code, like any other form of poetry, should be elegant, non-kludgy, readable, evocative... Pythonic.

In particular, Pythonicism discourages unstructured, "go to"-driven code (loosely called "procedural" programming in what follows). This is code that consists primarily of single instructions linked by commands to jump to a given line. For example, if you wanted to print out the numbers from one to twenty, this approach would look like:

line 1: x = 0
line 2: add 1 to x
line 3: print x
line 4: if x < 20 then go to line 2
line 5: quit

The problem with this approach is that, unless you read and comprehend the entire program, it's a pain to figure out what the blazes it's doing. Let's say you start reading at line 3. What does x equal? From line 4 you can figure out that it's a number, but there's no clear indication of what x starts out as or how it changes over time. This is not a problem for this program as it's so short, but a 1000-line example would soon make your brain explode.

By contrast, a Pythonic approach would look like:

line 1: for x in range(1, 21):
line 2:     print x

If you start reading at line 2, you'll see that the text is indented, which means it's a consequence of some preceding statement. "Oh," you'll say, "I need to look to see what this command is a subclause of. I'll look at the line above." And you look, and lo and behold the line above contains all the info you need to understand x: its starting point, its limit, and its mode of change. Not only is this code shorter, it's vastly more readable. That 1000-line program starts to look manageable.

The Python idiom actually has a very simple root: the principle of minimum power. Always use the most restrictive command that can achieve a given goal. It takes a bit of thought to realise that a "for" statement is less powerful than a "go to" statement, but it's true. A "for" statement imposes certain constraints: the list of values must be predefined, and in any iteration you can only move to the next value in the list.

By contrast, with "go to" statements, you can achieve any pattern of control flow, however convoluted. When you read code, what you're mostly doing is narrowing down the possible explanations for what the program actually does and how it does it. "Go to" statements do not narrow things down at all, so code containing them is harder to read than code that uses more restrictive statements. This is why "go to" statements are evil: any program containing them tends to become an unmaintainable mass of spaghetti.

So Pythonicism makes for nice programs. But it also makes for headaches on the developer's part as he/she desperately wrestles with how to make a program not only functional but elegant. This is not easy. It's like trying to write instructions for an educationally subnormal employee that at the same time read like the most beautiful sonnet.

The example I'm hitting is game design. The game I'm working on is extremely simple in concept: it's a turn-based game, and the actual game logic is not complicated in the least. But I'm having real trouble because I can't see how to structure the code Pythonically. Procedurally, things look like this:

1) Start the program up
2) Draw the menu screen
3) Once the "start new game" option is selected, initialise all the game variables
4) Draw the game-board screen
5) Let the user play a round (updating the board as they go)
6) Let the computer play a round (updating the board as they go)
7) Go to 5

This is perfectly viable... and completely procedural. A more Pythonic approach is still forming in my brain, but certain subtleties have become apparent. Firstly, the user actually has two roles: "dungeon-master" and "player". As dungeon-master, the user gets to choose the game settings (difficulty etc) and save or load games. As player, the user is limited to playing with the pieces they're given - no metagaming.

Secondly, the core logic of the game should be agnostic as to whether a given player is human or AI - no cheating. There will therefore be two equivalent components that provide player decisions, one of which happens to have a lot of AI code in it and one of which happens to have a user interface. The user-interface component will have to be tolerant of occasional switches from player mode to dungeon-master mode. This raises a number of design questions, for example what happens if two players try to use dungeon-master mode simultaneously?
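The "player decisions come from interchangeable components" idea might be sketched like this (all names here are hypothetical, not from my actual code):

```python
# A sketch of the "core logic doesn't care who's playing" design described
# above. All names are hypothetical.

class PlayerBackend:
    """Anything that can supply player decisions."""
    def choose_move(self, board, legal_moves):
        raise NotImplementedError

class AIBackend(PlayerBackend):
    def choose_move(self, board, legal_moves):
        # stand-in "AI": just take the first legal move
        return legal_moves[0]

class HumanBackend(PlayerBackend):
    def choose_move(self, board, legal_moves):
        # in the real game this would come from the user interface
        return input("Your move: ")

def play_round(board, backends, legal_moves):
    # the game logic treats every backend identically: no cheating
    return [b.choose_move(board, legal_moves) for b in backends]

moves = play_round(board=None, backends=[AIBackend(), AIBackend()],
                   legal_moves=["pass", "attack"])
print(moves)  # ['pass', 'pass']
```

Swapping a HumanBackend in for an AIBackend then changes nothing in the core loop, which is exactly the point.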

So what's the pay-off from this incredibly abstract approach to designing the game? Well, there are two main advantages. Firstly, once I've got the structure sorted out to my satisfaction, inserting the actual game logic will be a piece of cake: I won't have to worry about unexpected interactions between the various bits of code, because each will have a well-planned interface to the wider world. Secondly, extending the game will also be very easy - for example I could make it a network game with minimal effort.

It's still painful though. That's the joy and despair of poetic Pythonic programming.