Thursday, January 22, 2009

That education rant

Edit 14 June 09: I'm marking this post as purgeable because large chunks of it are borderline unreadable, even to me. If anyone finds the subject matter particularly fascinating, leave a comment and I'll rewrite it.

There is a phenomenon in the educational system known as "dumbing down". It's a least-common-denominator situation: when some people have trouble understanding a concept, you either remove the concept from your syllabus or spend crazy amounts of time explaining it.

The problem this creates is that the "improved" material is far harder for smart people to understand. Case in point: my actuarial study notes...

Actuaries are (ideally) clairvoyant accountants. Whereas a normal accountant can only see what your assets are worth now, an actuary can look at something like the cashflows associated with a pension scheme or insurance policy and make an educated guess about how much those cashflows will be worth in the future. This requires substantial maths to try to pin down the odds of various events (death, illness, car crashes) happening to the individual who took out the policy.
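To make that concrete, here's a minimal sketch of the simplest calculation of this kind - the expected present value of a single payment that only happens if the policyholder is still alive. All the numbers are invented for illustration:

```python
# Expected present value of a payment contingent on survival.
# All figures below are made-up illustrative assumptions.
payment = 10000.0            # paid in 10 years' time, if still alive
years = 10
annual_interest = 0.04       # assumed flat discount rate
annual_death_prob = 0.01     # assumed flat mortality rate

# Chance of surviving all 10 years, and value today of 1 unit in 10 years.
survival_prob = (1 - annual_death_prob) ** years
discount_factor = 1 / (1 + annual_interest) ** years

expected_present_value = payment * survival_prob * discount_factor
print(round(expected_present_value, 2))   # roughly 6110
```

Real actuarial work uses mortality tables and varying interest rates rather than flat assumptions, but the shape of the calculation - probability times discount factor times cashflow - is the same.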

So actuaries have to be quite heavily trained. In England, this training is regulated by the Institute of Actuaries. The IoA run the actuarial exam system. They also provide a sort of syllabus known as the "Core Reading", which is a very terse description of everything examinable.

Problem is, a lot of people who take the exams don't have a strong maths or finance background. So a secondary market has grown up, mostly served by the Actuarial Education Company (ActEd), a for-profit organisation supplying lecture notes. ActEd license use of the Core Reading material from the IoA, and they intersperse it with lots of detailed discussion.

This can sometimes be very useful. For example, where the Core Reading might just list a type of tradeable asset by name, the ActEd notes will provide a detailed description with examples. However, sometimes it can be very very annoying.

Currently I'm trying to read up on Markov Chains. For me, this is not a difficult concept. But revision is going veeery veeery sloooooowly, because for every five lines of actual maths I also have to digest two pages of wordy, confusing, often seriously dubious information. This slows my learning pace down to a crawl, not least because I frequently have to re-read stuff to convince myself that yes, they are bothering to say something that obvious.

If their only problem was overdocumentation of the Core Reading, I'd be unhappy but I'd accept it. I'm aware that many people (cough*economists*cough) don't have my familiarity with mathematical terminology, and these people also deserve some support. But what I can't handle is the fact that the Core Reading is also underdocumented.

Sounds paradoxical? Well, let me explain. The average mathematical proof, as written down, will contain approximately one part non-obvious statements to four parts obvious ("trivial") statements. The trivial bits are just mathematical filler - they make it easy to see the links between the key non-trivial assumptions and the conclusion.

As a mathematician, I would expect ActEd to take the following approach: lay out the whole proof, devote a fair amount of time to justifying the key assumptions, and if necessary spend a smaller number of column inches dissecting and rephrasing the rest of the logic.

Apparently ActEd disagrees. Their main approach appears to be: go through the proof step by step (no overview), ignore any steps that would take too long to explain, and go into mind-numbingly overwrought detail on the bits that economists might possibly be able to get their heads round. It's a sort of triage approach.

For example, one section of the material I'm working on is devoted to "time-homogeneous Markov jump processes". This is a very technical name, but the concept is simple. Imagine a system (say a person) that can be in a number of states (say healthy, sick or dead). Specify the rates of transition between the various states (for instance the rate of transition from dead to healthy is zero).

One problem we often need to solve is: what are the odds of staying in a certain state S for a certain length of time T? Now it's fairly easy* to find out the odds of a person starting in one state (say healthy) and being in that state again in e.g. two years' time. But that doesn't take into account the possibility that they might have skipped between states (healthy->sick->healthy).

One approach to calculating this is as follows. Define a set of events** Bk. B0 is the event that the system is in state S at times 0 and T. B1 is the event that the system is in state S at times 0, T/2 and T. B2 gives S at times 0, T/4, T/2, 3T/4 and T. And so on, with each new event doubling the number of times after time 0 that the system must be in state S. All of these are relatively easy to calculate.

Notice two other things. Firstly, event Bj "contains" event Bk if j is less than k. For example, you can't hit S at times 0, T/2 and T (event B1) without automatically hitting S at times 0 and T (event B0) - so B1 is a subset of B0, and each new event is a subset of all the earlier ones.

Secondly, the event that the system stays in state S until time T (the thing we were having trouble calculating) is the intersection of all the events Bk. You achieve stasis (an event we'll call {T0>T}, meaning that the first transition time is later than T) by achieving every one of the events Bk.

This is hard to see, but imagine if the system hopped out of state S for just a minute and then hopped back in again. There would be some value of k large enough that a multiple of T/2^k would fall inside that minute. So if you leave state S, even briefly, you forfeit some event Bk, and with it the whole intersection. Conversely, if you never leave state S, you obviously achieve every Bk.

So we can now outline a method for calculating the probability of {T0>T}.

Step 1: {T0>T} = ∩0-∞(Bk)

The ∩ symbol means an intersection of events - the event that every single one of the Bk happens - and the ∞ means that you include all such events right up to k=infinity.

Step 2: ∩0-∞(Bk) = limn→∞(∩0-n(Bk))

This means that, if you take a finite intersection (up to n) and then let n tend to infinity, you get the same result as just going straight to infinity.

Step 3: limn→∞(∩0-n(Bk)) = limn→∞(Bn)

This follows from our earlier note that each of these events contains all later events. If you take the intersection of a finite number of nested events, you just get the smallest of those events - in this case Bn.

Step 4: Therefore P({T0>T}) = limn→∞(P(Bn))

P(X) just means "probability of event X taking place". This last step uses the continuity of probability: for a shrinking sequence of events, the probability of the intersection is the limit of the individual probabilities.

We now have a nice elegant approach to calculating this nasty value P({T0>T}). We can just find a formula for the values P(Bn), and then see what happens as n gets bigger and bigger. The limiting value of this sequence will be the probability we're after.
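As a sanity check on this approach, here's a quick Python sketch for a hypothetical two-state chain (healthy/sick). The rates a and b are made-up numbers, and the formula in p_stay is the standard closed-form transition probability for a two-state time-homogeneous chain:

```python
import math

# Hypothetical two-state Markov jump process: state 0 = healthy, 1 = sick.
a = 0.5   # transition rate healthy -> sick (assumed)
b = 0.2   # transition rate sick -> healthy (assumed)
T = 2.0   # the time horizon

def p_stay(t):
    """P(in state 0 at time t | in state 0 at time 0).
    Standard closed form for the two-state chain: exp(Qt)[0][0]."""
    return (b + a * math.exp(-(a + b) * t)) / (a + b)

def prob_B(n):
    """P(Bn): in state 0 at every grid time j*T/2**n, j = 0..2**n.
    By the Markov property, this is p_stay(T/2**n) to the power 2**n."""
    return p_stay(T / 2 ** n) ** (2 ** n)

# The P(Bn) shrink as the grid gets finer, and their limit is the
# probability of never leaving state 0 at all: exp(-a*T).
for n in (0, 5, 10, 20):
    print(n, prob_B(n))
print("exact:", math.exp(-a * T))
```

For this simple chain the limit can be checked analytically: the holding time in state 0 is exponential with rate a, so P({T0>T}) = exp(-aT).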

Now, this was not an easy proof for me to explain. But allow me to point out three things:

1) Anyone who's got this far in the course will already have a thorough understanding of terminology like events, unions, probabilities, etc. ActEd doesn't need to explain that at this point.

2) The ActEd notes focus entirely on the symbol-manipulation steps 2-4. These steps are basically trivial - they shouldn't require much more than the line or two of elaboration I gave them. The mathematically interesting step 1, which should be analysed in detail, is completely ignored.

3) And yet they still took more words to describe this proof than I've used in this entire essay!

Reading these notes is giving me such a headache.



* Trust me on this. Or alternatively complain loudly and I'll provide you with a more detailed explanation.
** An event is a thing that may or may not happen, depending on the random behaviour of the system being studied.


Sunday, January 11, 2009

Practical philosophising

Edit 14 June 09: I'm considering purging this post. There's some good philosophy going on in there somewhere, but it is not well-argued. I'll probably rewrite it as a sequence of posts at some point.

The basic principle of modern skepticism is that it's not about what you believe, it's about why. A good decision made for bad reasons is an oxymoron: flukes don't count.

Of course, this is not a complete answer. How do you know if your reasons for believing are good? For a start, you can look at how different approaches have panned out in the past...

In some areas, this is pretty much a solved problem. As far as figuring out how e.g. fundamental physical objects interact goes, it's a gimme: the scientific method as applied to physics is where it's at. Similarly for chemistry, cosmology and basic biology. By following the scientific method, you can analyse, understand and, most importantly, predict. The end justifies the means.

So what are the limitations of this method? What are the pathological situations where it breaks down? My favourite example is poker. By definition it's impossible to predict the responses of a good poker player: if they were predictable, they wouldn't be any good. The scientific method breaks down when someone is actively trying to subvert your results, which is why many excellent scientists get badly suckered by psychics.

And yet, in many ways, this is the area where understanding is most important to us. We live in a world constructed almost in its entirety by other people, none of whom would feel terribly comfortable if they thought we fully understood them. The man who understands you can manipulate you, can use you for his own ends. Attempting to understand someone borders on the disrespectful, and people will resist it tooth and nail.

The extent of this effect can be demonstrated by looking at the quite frankly convoluted procedures psychologists have to use. To get useful results from a subject, the good psychologist will have to thoroughly deceive that subject about the goal of the experiment. If you're doing an experiment about attention spans, tell them it's about pattern recognition. If you're doing pattern recognition, tell them it's about political reactions. Otherwise your data will be hopelessly contaminated by your subject's second-guessing of their own behaviour.

So how can we understand another person well enough to e.g. date effectively? There are two basic approaches: the theoretical and the practical.

The theoretical approach tries to achieve an understanding of your date that's one level deeper than your date expects. This approach is well known by stage magicians: by planning one step further ahead than your audience expects, you can create the most magical effects.

One example that I like was provided by my sister. When she was younger, a number of her friends were big fans of the horoscopes in the local paper. So if she was on the outs with a friend, and that friend's horoscope said "a person wearing blue will be important to you today", my sister would wear blue. By understanding the factors that influenced her friend, she could achieve a closer relationship.

In dating, a similar role is played by the modern romance. Human beings are strongly conditioned by films and books to expect certain things of each other. This doesn't always work out well for us as a species. It's noticeable that the heroes from action movies would have real trouble with the heroines from romantic films. So when an action-loving bloke meets a romance-loving girl (or vice versa) there are likely to be fireworks. And not in a good way.

The way to get round this is exactly what you'd expect: watch lots of sloppy romance films. This is a real issue for me - I'm a hide-behind-the-sofa kind of guy when it comes to this sort of thing. I empathise with the characters far too much to watch them blithely making fools of themselves in this way. I can't stand Mr Bean either.

But in the interests of fostering better communications between the sexes, I'm willing to make the effort. I just watched my first romantic comedy in years: the film Hitch, with Will Smith playing a relationship coach. I can't say I enjoyed it - in fact I spent most of it clutching my head and yelling at the screen "no! You idiot! Don't do that!!!" But I feel I've grown as a person. Soon I hope to be able to view stuff like this without needing heavy doses of anaesthetising whisky first.

Oh, and the practical approach to dating? That's to just try it a few times and see what works. In the end, this really is the only way of getting anywhere with something as complex as a woman...


Wednesday, January 07, 2009

Pythonia

"I aten't dead"
- Granny Weatherwax


Warning: blog post contains factual inaccuracies. See comments section for discussion.

So I keep meaning to write up my experiences in India, and I keep getting distracted. For example I've developed an interest in clockwork (don't ask).

Even more braincell-consuming is my ongoing effort to produce some sort of computer game worthy of the title. This is going very slowly, mostly because I'm taking a very anal approach to it. In short, I want to make it Pythonic.

An explanation. My favourite programming language, Python, puts a lot of effort into inculcating its users with good programming style. This is something very different from simply being able to program in the language, in the same way that not every writer of English prose can compete with Shakespeare. Good Python code, like any other form of poetry, should be elegant, non-kludgy, readable, evocative... Pythonic.

In particular, Pythonicism discourages unstructured, "go to"-driven programming (which, with some abuse of terminology, I'll call "procedural" here). This is code that consists primarily of single instructions linked by commands to go to a given line. For example, if you wanted to print out the numbers from one to twenty, a procedural approach would look like:

line 1: x = 0
line 2: add 1 to x
line 3: print x
line 4: if x < 20 then go to line 2
line 5: quit

The problem with this approach is that, unless you read and comprehend the entire program, it's a pain to figure out what the blazes it's doing. Let's say you start reading at line 3. What does x equal? From line 4 you can figure out that it's a number, but there's no clear indication of what x starts out as or how it changes over time. This is not a problem for this program as it's so short, but a 1000-line example would soon make your brain explode.

By contrast, a Pythonic approach would look like:

line 1: for x in range(1, 21):
line 2:     print x

If you start reading at line 2, you'll see that the text is indented, which means it's a consequence of some preceding statement. "Oh," you'll say, "I need to look to see what this command is a subclause of. I'll look at the line above." And you look, and lo and behold the line above contains all the info you need to understand x: its starting point, its limit, and its mode of change. Not only is this code shorter, it's vastly more readable. That 1000-line program starts to look manageable.

The Python idiom actually has a very simple root: the principle of minimum power. Always use the most restrictive command that can achieve a given goal. It takes a bit of thought to realise that a "for" statement is less powerful than a "go to" statement, but it's true. A "for" statement imposes certain constraints: the list of values must be predefined, and in any iteration you can only move to the next value in the list.

By contrast, with "go to" statements, you can achieve any pattern of control flow, however convoluted. When you read code, what you're mostly doing is narrowing down the possible explanations for what the program actually does and how it does it. "Go to" statements don't narrow things down at all, so code that uses them is harder to read than code that uses more restrictive statements. This is why "go to" statements are evil: any program containing them tends to become an unmaintainable mass of spaghetti.
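The same principle shows up within Python itself: a "while" loop is strictly more powerful than a "for" loop, which is exactly why "for" is the better choice when either would do. A small sketch of the contrast:

```python
# More power than we need: "while" places no limits on how x changes,
# so the reader must inspect every line that touches x.
numbers = []
x = 1
while x <= 20:
    numbers.append(x)
    x += 1

# Minimum power: "for" promises a single pass over a fixed range of
# values, so there is nothing else to check.
also_numbers = []
for x in range(1, 21):
    also_numbers.append(x)

print(numbers == also_numbers)   # -> True
```

Both loops do the same job, but only the second one lets a reader rule out every other possibility at a glance.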

So Pythonicism makes for nice programs. But it also makes for headaches on the developer's part as he/she desperately wrestles with how to make a program not only functional but elegant. This is not easy. It's like trying to write instructions for an educationally subnormal employee that at the same time read like the most beautiful sonnet.

The example I'm hitting is game design. The game I'm working on is extremely simple in concept: it's a turn-based game, and the actual game logic is not complicated in the least. But I'm having real trouble because I can't see how to structure the code Pythonically. Procedurally, things look like this:

1) Start the program up
2) Draw the menu screen
3) Once the "start new game" option is selected, initialise all the game variables
4) Draw the game-board screen
5) Let the user play a round (updating the board as they go)
6) Let the computer play a round (updating the board as they go)
7) Go to 5

This is perfectly viable... and completely procedural. A more Pythonic approach is still forming in my brain, but certain subtleties have become apparent. Firstly, the user actually has two roles: "dungeon-master" and "player". As dungeon-master, the user gets to choose the game settings (difficulty etc) and save or load games. As player, the user is limited to playing with the pieces they're given - no metagaming.

Secondly, the core logic of the game should be agnostic as to whether a given player is human or AI - no cheating. There will therefore be two equivalent components that provide player decisions, one of which happens to have a lot of AI code in it and one of which happens to have a user interface. The user-interface component will have to be tolerant of occasional switches from player mode to dungeon-master mode. This raises a number of design questions, for example what happens if two players try to use dungeon-master mode simultaneously?
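To make that concrete, here's a minimal sketch of the player-agnostic structure I have in mind. Every name in it is hypothetical, and the "game" is a trivial stand-in - it just shows the shape, with the core loop unable to tell humans and AIs apart:

```python
class Player:
    """Anything that can supply a move. The game logic never needs to
    know whether it is talking to a human or an AI."""
    def choose_move(self, board):
        raise NotImplementedError

class AIPlayer(Player):
    def choose_move(self, board):
        # Placeholder strategy: take the first legal move.
        return board.legal_moves()[0]

class HumanPlayer(Player):
    def choose_move(self, board):
        # In the real game this would come from the user interface.
        return board.legal_moves()[0]

class Board:
    """Trivial stand-in for the game state: a countdown to game over."""
    def __init__(self, turns=4):
        self.turns_left = turns

    def legal_moves(self):
        return ["pass"]

    def apply(self, move):
        self.turns_left -= 1

    def game_over(self):
        return self.turns_left <= 0

def play(board, players):
    """The core loop: alternate rounds without caring who is human."""
    while not board.game_over():
        for player in players:
            if board.game_over():
                break
            board.apply(player.choose_move(board))
    return board

board = play(Board(), [HumanPlayer(), AIPlayer()])
print(board.game_over())   # -> True
```

Hooking in a network player later would then just mean writing one more Player subclass - the core loop wouldn't change at all.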

So what's the pay-off from this incredibly abstract approach to designing the game? Well, there are two main advantages. Firstly, once I've got the structure sorted out to my satisfaction, inserting the actual game logic will be a piece of cake: I won't have to worry about unexpected interactions between the various bits of code, because each will have a well-planned interface to the wider world. Secondly, extending the game will also be very easy - for example I could make it a network game with minimal effort.

It's still painful though. That's the joy and despair of poetic Pythonic programming.