So I recently took a trip to India, to attend the wedding of a friend from work. This was great fun and the trip was a hell of a learning experience in its own right (of which more anon).
There was, however, one element of the trip that proved a little... distracting. You see, I wasn't the only one who was willing to hop on a long-haul flight for the chance to see a proper Indian wedding. I had a fellow-traveller. A girl. Who I've had a mild crush on for a while.
I'm fairly sure she has no interest in me, and the trip probably did little to change her opinion given how fish-out-of-water we both were in Delhi. However, it's a demonstrable fact that close proximity to someone enhances one's emotional response to them*. In short, the crush has devolved into full-scale infatuation.
And then comes the office Christmas party. And she's wearing a gorgeous dress, looking incredible, and I'm feeling like the village idiot as I shamble around trying to dance without breaking anyone's toes. I can't even pluck up the courage to make a move. This sucks...
Now at this point you're probably thinking this is going to be another of those pitifully whiney posts I've been coming out with lately. But somewhere between the champagne and the Jack Daniels, I have a join-the-dots moment. I figure out what it is that's making me unhappy.
The girl is a part of it, sure, but I've had the same frustration when faced with my inability to move up the career ladder. It's the feeling that I've hit my limits, that I've found some kind of glass ceiling beyond which I can't progress. I stare out across the impenetrable ocean of my own inadequacies and weep, for there are no more worlds I can conquer.
This is just an organising principle for thoughts that have been percolating for a while, but describing problems helps me find solutions. It occurred to me that, as long as I'm improving in one area of my life, I won't care so much about the rest.
And improvement takes less effort than I expected. I've spent the weekend tidying my flat. I've given a load of my old books to charity. I've sorted out dry-cleaning and food shopping, got various parts of my life organised, and taken a bit of exercise.
Ah, the exercise. I'm very badly out of shape, and the first fifteen minutes of my half-hour jog on Saturday nearly did me in. But I reached the top of a hill, and I felt my muscles start to fall into line, and I got my breathing in check, and I looked at the world spread out before me, and I felt happy.
It's Sunday as I write this and I'm just about to head out for another run. I know it isn't solving all my problems: at some point there will be a reckoning with the girl, if only to give me closure. But I now know that, when the moment comes, I'll be a better person than I am today. And that's enough.
* Or at least that's how I imagine it would look to an outsider. Speaking from the inside of my own skull, I just think she's really smart, attractive, unconventional and generally fun to be around.
Monday, December 15, 2008
Wednesday, December 10, 2008
I don't know what they're smoking, but I want some
So I've just spent a couple of days frantically coding in (you guessed it) Excel/VBA. And I'm actually feeling surprisingly good about it.
This is because the task I'm dealing with involves extracting large blocks of data from various databases and then processing it in Excel. Up until a week ago, this was a truly hideous task for me, requiring a big messy chunk of partially-hardwired code to handle dumping the data into Excel.
Now, though, I'm happy. I've discovered a feature that takes care of all that: the QueryTable object. You just feed it an SQL query, point it at a cell, and it handles the rest. It has its limitations, but is generally rather nice. Apart from one issue.
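Before I get to that issue, here's roughly the shape of the thing. This is only a minimal sketch - the DSN, sheet name, table and columns are all invented for illustration - but it's the pattern I'm talking about:

Sub DumpQueryToSheet()
    ' Pull a result set straight into a worksheet via a QueryTable.
    Dim ws As Worksheet
    Dim qt As QueryTable

    Set ws = ThisWorkbook.Worksheets("Data")
    Set qt = ws.QueryTables.Add( _
        Connection:="ODBC;DSN=MyDatabase", _
        Destination:=ws.Range("A1"), _
        Sql:="SELECT policy_id, premium FROM policies")

    qt.RefreshStyle = xlOverwriteCells  ' write over existing cells rather than inserting new ones
    qt.Refresh BackgroundQuery:=False   ' block until the data has actually arrived
End Sub

That's the whole dance: the object handles the connection, the dump and the layout, and the big messy chunk of hardwired code goes in the bin. So what's the one issue, then?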
Yes, there's always something. But this particular problem had me scratching my head in bewilderment: this lovely little feature has only the most cursory mention in the help file. I'd have been using it for months if I'd known about it.
The documentation it does have is likewise minimal. It took me two hours to figure out how to use a parametrised query, and actually getting a system up and running took a full working day.
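For the record, and in case it saves someone else those two hours, the parameter dance goes something like the sketch below. The names are invented again, and as far as I can tell parameters only play nicely with ODBC-style connections; the rule is that every "?" in the SQL needs a matching Parameter object:

Sub DumpPoliciesForYear(ByVal renewalYear As Long)
    Dim ws As Worksheet
    Dim qt As QueryTable
    Dim prm As Parameter

    Set ws = ThisWorkbook.Worksheets("Data")
    Set qt = ws.QueryTables.Add( _
        Connection:="ODBC;DSN=MyDatabase", _
        Destination:=ws.Range("A1"))

    ' The "?" is a placeholder that the Parameter object fills in at refresh time.
    qt.Sql = "SELECT policy_id, premium FROM policies WHERE renewal_year = ?"

    Set prm = qt.Parameters.Add("RenewalYear", xlParamTypeInteger)
    prm.SetParam xlConstant, renewalYear

    qt.RefreshStyle = xlOverwriteCells
    qt.Refresh BackgroundQuery:=False
End Sub

Apparently you can also bind a parameter to a worksheet cell with SetParam xlRange, which opens up some rather nice possibilities.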
Why, Microsoft? You actually did something right for a change: you produced a conceptually-elegant tool to solve a clearly-defined and commonly-occurring problem. There is almost nothing about QueryTables that I can complain about. So why the blazes aren't you shouting to the world about it?
Please please get the hang of when it's appropriate to blow your own trumpet. Or alternatively, pass me that spliff.
Thursday, December 04, 2008
Ground Rules
In the great discussion about God, it's quite common for otherwise perfectly sane people, on both sides of the argument, to start frothing at the mouth and claiming that the Other Side wishes to censor, physically injure or otherwise silence them.
In many cases, this is sheer paranoia. Most folks, religious or otherwise, tend to be pretty nice when you get to know them, and are happy to allow space for the beliefs of others.
In many cases, this is not paranoia. Religious arguments tend to attract a worrying number of nutjobs of all stripes. Once the legislators get involved, it's quite easy for little things like free speech to go right out the window.
As such, I would like to propose a code of conduct for religious debate. The purpose of this code is to mark out a middle ground in this debate between the various groups of crazies who want to see all belief systems but theirs forcibly purged from the planet.
Please leave a comment if you agree with this code, or if you wish to suggest changes to the wording. By signing up, you are showing solidarity with folks of other religious denominations who stand against the forces of zealotry in their own ranks.
The Moderacy Manifesto
I hold beliefs that touch on religious topics (henceforth "religious beliefs"). I feel these beliefs are justified. I reserve the right to discuss these beliefs: to present arguments in support of my beliefs, and to critique the arguments of others.
I recognise that other people hold different beliefs on these topics. I acknowledge that these people feel their beliefs are justified. I respect their right to hold and discuss their beliefs, even where those beliefs conflict with mine.
I do not judge an individual's moral character solely on the basis of their religious beliefs. I accept that it is possible to be a good person whilst holding religious beliefs different from mine.
I do not approve of religious discrimination that results in physical, legal or financial hardships. I will not materially discriminate against anyone solely on the basis of their religious beliefs. However, I reserve the right to materially discriminate based on their actions, even when those actions are inspired by religious beliefs. I also reserve the right to discriminate in ways that do not result in hardship.
I ask my government not to censor, fine, imprison or otherwise penalise any individual or group simply for holding or discussing particular religious beliefs. I ask my government not to endorse, subsidise or otherwise support any individual or group simply for holding or discussing particular religious beliefs. This includes individuals or groups that hold the same beliefs as me.
Although I may disagree with other participants in this discussion, I respect their sincerity and I respect their rights. I believe that mutual tolerance, of the form described above, is the best way to demonstrate that respect.
Thursday, November 20, 2008
Ubuntu 8.10: The Verdict
I've been using the latest version of Ubuntu, codename Intrepid Ibex, for a couple of weeks now, which I reckon is long enough to deliver a fair verdict on it.
The verdict is: meh.
There's not really much that's actively wrong with Intrepid. I've noticed a general decrease in system responsiveness and reliability compared to Hardy Heron (the last release), but frankly that's like saying Kilimanjaro is unacceptably small compared to Everest. It's still damn good by comparison with e.g. every instance of Windows I've used in the last few months. I have high hopes of penguinising my dad in the near future, as his loathing for his company Vista laptop increases.
The only other issue was the grace notes (quite ironic). Intrepid has a bunch of cute little features (see below) which, unfortunately, weren't actually installed when I upgraded my system. I've since hunted them down and installed them, but this definitely abbreviated the honeymoon period.
The big disappointment for me is probably OpenOffice 3.0. As far as I can tell, there is no way in which it is substantially cooler than 2.0, and the list of "great new features" is going to make any MS Office user roll their eyes and yawn theatrically. Considering what projects like Firefox and KDE have achieved in the same time, this is starting to get a bit silly. The longer this goes on, the more chance that Novell will be successful in forking the codebase and filling it full of Microsoft encrustations*.
So what do we get out of this release? Well, whilst there's nothing here to really make you go "wow", there are a number of little exploratory tweaks that could easily snowball into major developments. For example, each user now has an encrypted section in their home folder. Throw in the GnuPG plugin for Thunderbird and it looks like Ubuntu is rapidly becoming the first OS to have user-friendly crypto built in.
Another window on the future is Ubuntu's collaboration with the BBC to provide well-integrated access to media. Basically a huge chunk of content has just become available as a glorified playlist, via the Totem player. No need to piddle around on websites, just click through the list. I'm currently listening to the Digital Planet podcast.
Finally, a fair bit of love has been bestowed on Nautilus, the file browser. It has Firefox-like tabbed file browsing and a rather impressively effective previewing system. Very cute.
In general, this version took two steps forward and one step back. If you're on Hardy, I wouldn't particularly recommend you upgrade. That said, I'm glad it's out. Its showcase of features has enough neat ideas to keep the community busy for a long time.
* This may be an unfair accusation against Novell. However, their deals with Microsoft have scared the crap out of most penguinistas - we're worried they'll go kamikaze on us like SCO did a few years back. They would get caned if they tried, but Linux needs that sort of problem like it needs a hole in the head. Given this, everything that Novell touch (especially stuff like Mono and Moonlight that they touch with Microsoft's help**) tends to be interpreted as an early chess move in the game leading up to a lawsuit. This may or may not be paranoia.
** It's definitely not paranoid to be worried about Microsoft. It used to be that they'd just try to squish open source. These days they seem to have graduated to trying to corrupt it - embrace, extend and extinguish. Microsoft is the living expression of why the open source movement needs the free software movement - open source's pragmatism doesn't handle attacks of this kind very well.
Monday, November 17, 2008
Cuts like a knife
Entia non sunt multiplicanda praeter necessitatem.
- William of Ockham
The scientific method is probably the most important investigative tool we as a species have ever produced. As generally understood*, it is a means of comparing and contrasting hypotheses, and it rests on three criteria: accuracy, predictivity and parsimony.
Accuracy is easy to understand: a new model of the universe must be consistent with existing data. So general relativity looks like Newtonian dynamics at low energies, quantum atoms behave like point particles on human scales, and so on. A theory of gravity that did not produce an inverse square law would be no damn good.
Predictivity is similarly straightforward: a model can't just describe what has happened so far, it must also give us some clue what's coming up. This is for two reasons. Firstly, it limits people's ability to equivocate, which stops science descending into an angels-on-pinheads talking shop. Secondly, it means that the entire business occasionally generates useful real-world results.
Parsimony, also known as Ockham's Razor, is not so clear. In short, it states that you shouldn't include more "stuff" in your model than necessary. So never assume a conspiracy where stupidity is an adequate explanation; never infer psychic powers where outright fraud is a possibility; never choose epicycles over ellipses.
But what do we mean by "simple", and how do we justify this principle? The two questions are interlinked: a thorough justification of the Razor will of necessity give us a working definition of simplicity. Let's take a quick tour through some historical arguments put forward for this enigmatic principle. We should accept the simplest explanation because...
1) ...Simplicity is so damn cool.
This is probably the original view of Ockham's Razor. As far as Classical civilisation was concerned, simplicity was a desirable goal in itself, without needing any further justification.
This presupposes some kind of human aesthetic sense which would allow us to distinguish the simple from the complex. It's very "Zen and the Art of Motorcycle Maintenance".
I remain unconvinced by this for two main reasons. Firstly, I don't think it really answers the question; it just wraps it in even fuzzier clothing. Secondly, even after several thousand years there is no consensus about whether God is defined as simple or complex. If our aesthetic sense can display inconsistency in a case as grandiose as this, what hope does it have in other, subtler, contexts?
2) ...Simplicity is more likely to be true.
This is an intriguing notion. Could it be the case that the universe is in some way geared towards elegant explanations? It's actually quite a common belief, not just amongst the religious, but also among scientists who see elegance amidst the chaos.
However, I'm not aware of any good explanation of why this should be the case. In the absence of that, it's impossible to say that this rule holds generally. And I'm fairly sure there's a certain amount of confirmation bias here.
There are some broad philosophies that would make this explanation more plausible. In general, though, I fall on the side of Sir Arthur Eddington: the mathematics is not there until we put it there.
3) ...Simplicity makes better targets for science.
This was Popper's take on parsimony. He believed that simple hypotheses were, if false, far easier to squash than complex ones.
This view has a certain amount of empirical support. Consider for example the epicycle "hypothesis". Turns out that, by sticking enough extra epicycles onto a planet's orbit, you can match almost any data set. So the hypothesis was rendered so fuzzy as to be undisprovable. Undisprovable hypotheses are the plaque in science's arteries: they seriously impede progress and they're almost impossible to get rid of.
In this sense, simplicity relates to the number of "magic variables" that an hypothesis contains. Epicycle theory had an arbitrary number of magic variables that could be set by scientists: the radii and rotation speeds of the epicycles. Kepler's elliptical orbits, by contrast, were specified by a handful of orbital parameters per planet - and, once Newton arrived, by a single gravitational constant plus the masses and present velocities of the various heavenly bodies. It took several centuries to disprove epicycles. By contrast, if Kepler had been wrong, it could have been demonstrated in a matter of months.
4) ...Simplicity is functional
Let's say you have a tiger charging towards you. You have two explanations of its progress: one in terms of mass, momentum, chemical interactions and the behaviour of various neurons, and one in terms of it being a bloody great big cat that wants to eat you. Which model do you think would do most for your survival chances?
Humans only have a limited amount of computational resource at hand, so it makes sense to shepherd it as much as possible. Why waste valuable neurons believing in yetis, ghosts, gods? It doesn't make it any easier to dodge the tiger, and it reduces the space available for beliefs that could.
From this point of view, simplicity means computational simplicity: the model that generates the most accurate results in the shortest time. One interesting feature of this is that simplicity may actually vary from organism to organism: a cyborg with a silicon brain might have far different preferences from a mammal with a bunch of neurons. Heck, even different processors could lead to different views of the world.
This is probably the most popular explanation for Ockham's Razor as far as the philosophers are concerned. Game over? Probably... but this explanation also causes great fuss. Philosophers do not generally like pragmatism - it can change so easily from situation to situation, making a mess of all our overarching frameworks.
If Ockham's Razor is pragmatic, then a sufficiently strong pragmatic incentive could lead us to discard it. Furthering our position within the tribe, motivating ourselves, avoiding depression - all these become valid reasons for unparsimonious belief**.
We skeptics can find only a Pyrrhic victory in justifying the Razor by reference to pragmatism. In slicing away the gods, we slit our own philosophical wrists.
* According to Karl Popper, anyway. Kuhn would disagree. I tend to equivocate on this: I think that, while Kuhn probably describes the practice of science better, Popper provides a necessary level of justification. In football terms, Kuhn is the coach who talks about positions and tactics; Popper is the coach who talks about human biology.
** Or at least for giving the impression of belief. But for someone who isn't a good liar, it might be necessary to persuade themselves.
Monday, November 10, 2008
Bah
So in the run-up to halloween I did a lot of thinking about my costume. I was going to a party where I knew people were going to make an effort, and I ended up spending many hours on a rather nifty scarecrow outfit.
It helps that I'm ridiculously lanky, but height maketh not the scarecrow. I ended up hollowing out a pumpkin pinata* to make a mask. I worked through several pages of half-remembered electronics-related maths to give the mask glowing red eyes** that wouldn't explode at any inopportune moment (say, when my real eyes were 3cm away cos I was wearing the bloody thing). I even spent a couple of spare train journeys really puzzling the other passengers by meticulously attaching strands of raffia to rings of elastic, to create that "straw falling out the sleeves" effect.
It went really well, and I got second prize. I was only pipped to the post by a guy who came in the best Marlon Brando getup I have ever seen.
And now a friend of mine is having a birthday, and throwing... a fancy dress party. Based on the theme 1985. I wasn't even toddling in 1985. I do not have much awareness of fashions at the time. And the party is on Friday, so I'm panicking.
The pertinent questions, therefore, are as follows:
1) How much Adam Ant garb can I acquire at short notice from the local fancy dress store?
2) How much Adam Ant garb can I get away with wearing on public transport without being given a lovely new fancy dress outfit, this one with the sleeves tied together?
Answers on a postcard.
* I'm aware this should have a squiggle above the "n", but I'm lazy, so there.
** Incidentally, it is perfectly possible to carry a large round object stuffed full of wiring and batteries on the London Tube without anyone so much as batting an eyelid. So remind me: what's the point of all these CCTV cameras again?
Tuesday, November 04, 2008
A => B
It's UPGRADE TIME!
And, for me, new version of Ubuntu => time to change desktop background. I don't know why I've got into the habit of doing this, but it keeps life interesting. Last time I nicked a picture of Mars off the NASA website, but this time I want something a bit more fun.
I just so happen to notice that the new version of Ubuntu has a new built-in theme: Dark. Cue a frantic search for Lovecraftesque backgrounds. Something black and murky, with slimy tentacles just visible at the edges of the light.
And I can't find a damn thing.
There are lots of pictures that use Lovecraftian monsters - shoggoths, Cthulhu of course, even a rather good one of the colour out of space. But these are all Lovecraft backgrounds, not Lovecraftesque backgrounds.
The difference is subtle, but real. Lovecraft backgrounds include his characters. Lovecraftesque backgrounds convey his flavour of cold-sweat, hair-raising uncertainty about one's place in the world*. It's the "Signs" principle: that film was a lot scarier when you couldn't actually see the damn aliens.
On the bright side, I did find a rather nice collection of steampunkania. But I want tentacles, dammit. I don't suppose anyone has any suggestions?
* Of course, in Lovecraft's stories, our place in the world is actually very clearly-defined: Cthulhu's small intestine.
Monday, November 03, 2008
I Am Not Alone
It's intensely reassuring every time I discover that someone else has the same views as I do. Especially when it's obvious that they're far smarter and more articulate than me.
Wednesday, October 29, 2008
Riddle me this
OK, so I moan about the job market et al. But really I'm having a rather good few weeks. The VBA training courses are over*, so I don't have to worry about producing more training materials. A couple of other sources of panic have passed me by. I'm working 8-6 to make up hours, and I'm struggling with my halloween costume, but these are fairly minor problems really.
And so I do what I do every time my mind starts to free itself up: I come up with interesting little tasks for myself. I've been working on a pygame-based strategy game. I've started looking back over my textbooks in search of cool concepts. And I'm trying to invent a new kind of puzzle.
A whatnow?
Every month, the UK's actuarial trade mag, The Actuary, has a puzzle page. Since the actuarial profession consists entirely of maths geeks, these are often rather good (I'm still trying to figure out how they do the 16*16 Sudokus without employing a supercomputer or two).
However, it's quite rare for the puzzles to show real innovation. Normally they're of a pre-existing type (crosswords, sudoku, logic puzzles, number grids). The actual problems are damn hard, but they're not conceptually challenging**.
I'd like to change that.
I'm working on what I believe is a new design of puzzle. It will require not only sudoku-style pattern recognition, but also excellent spatial awareness. This is because it is played on the surface of a truncated polyhedron. Yes.
It's not going to make me famous, but if I'm contemplating leaving the actuarial profession then I would like to go out on a high note. Causing actuaries across the UK to spit coffee over their keyboards would be a good start.
* Actually they went really well. I discovered about an hour before giving the second training day that some of my trainees were from other companies that were paying my company for the training. This caused much panic. But the day went like a charm, the trainees really enjoyed it, and I can now put "professional trainer" on my CV :)
** The November edition contains a counterexample, but even that is just a combination of sudoku and another pre-existing type.
A compass is no good if you don't have a map
A definite advantage to working in a group full of contractors is that they're very happy to give careers advice. This is not something you get normally: the average co-worker, unless they're very chilled, will not tell you to ditch your company and strike out for greener pastures.
I've got a fair number of interesting ideas off them. In addition to searching for conventional jobs, one guy suggested that I take up contracting myself. On that front, I'm probably pretty employable: certainly the work I'm doing at the moment poses little challenge. Also I have an actuarial exam* under my belt, which looks damn good on a CV.
This option would probably do wonders for my bank balance. Contractors quite often get a stupidly large pay packet compared to conventional employees. There are two downsides, though. The first is reduced financial stability, which might be a problem if we hit a recession. The second is lack of career development, which is a massive issue for me. I would honestly rather hammer railroad spikes through my skull than do boring job after boring job for years on end**.
A second option, which I've just spent a fair chunk of evening discussing, is to say "screw it" and go get an MBA or something. This idea holds a fair amount of interest. Technical skills for me are pretty much a solved problem, whereas managers tend to be confronted with challenges that involve people and are hence much weirder and more interesting.
My worries here are (again) twofold. Firstly, I'd hate to feel like I was running back to university as soon as it looked like the real world was putting up a fight. It is important to me that I retain my self-esteem in this area. When I feel like I'm on top of the world and everything is going consistently well - that's the time to go back to uni. Of course, at that point I may not feel like I need to.
Secondly, I read Dilbert. I know what people think about the stereotypical MBA, and I would hate to have them think that way about me. I've been in the real world for slightly over two years now, so I'm not a complete n00b, but it would worry me to study an entire course about management without ever having, y'know, actually been a manager.
I'm bouncing off the walls trying to figure out what to do next with my life. Some of those walls are figments of my imagination - for example, financial stability isn't really an issue for a young single male. This doesn't help much, though, because I don't really know which of the walls are illusory and which are solid and waiting for me to bust my nose on them.
Basically, I think this situation calls for a bit more self-confidence... and a lot more reading the job pages.
* Possibly two - fingers crossed for the November exam results!
** Which of course is why I'm doing the job search in the first place :(
Wednesday, October 15, 2008
The Joys of Jobhunting
As of a few days ago, I'm finally getting my act together and looking round for a new job. Hopefully a change of company won't be necessary - I do actually like the people I work for - but if they try to send me out on placement again then they'll have to catch me first.
We're now several evenings on, and certain axioms of jobhunting are starting to become clear to me. To wit:
1) The jobs that catch your attention are the ones you have neither training nor experience in.
I've lost count of how many really cool jobs I've looked at and reluctantly clicked past when it became clear that BA Hons Cantab was not going to cut the mustard.
2) The jobs that match your specialist skills are boring as hell.
As a maths grad, my options are apparently finance, finance or - just to push the envelope a little - finance. I do not find finance particularly interesting. It ranges from the accountancy end of the spectrum, which requires basic numeracy but no real maths knowledge, to the predictive disciplines (actuarial, quant, etc), which require the ability to pull large numbers of assumptions out of thin air and then pass the blame when it all falls through.
There's also cryptography, but apparently only if you have a first class degree. I have a 2II. Ohttre gung.
3) The jobs that you don't need specialist knowledge for are dangerously vague.
At present, the best example of this is "project management". Beware of any job with this label: chances are it's a pure documentation job. What has happened is that large numbers of jobhunters have decided that PM is the fashionable thing. In response, large numbers of recruitment agencies have started mentioning PM in all their ads, whether it's appropriate or not.
Ditto "analyst", which normally turns out to be a sales job.
4) More recruitment agencies does not equal more opportunities.
It is incredibly hard to find actual decent job opportunities online. The major online recruitment agencies just recycle each other's listings. Any good jobs vanish like a rump steak in a piranha pool, and the remainder circulate until they're withdrawn or some idiot applies for them.
This seems to be a classic case of the Internet exposing how dysfunctional an industry is. Maybe recruitment agencies are better in person than online, but I doubt it.
5) Life's too short for this crap.
Every single vacancy you come across will have the phrase "please send CV plus cover letter" in it. Sounds easy, right? Problem is, it takes a fair few hours to thrash out a decent cover letter. This can quickly become a limiting factor.
Requesting a cover letter is good sense on the part of the company. If applying for the job ceases to be a ten-minute task, applicants can't just employ a "CV shotgun" approach but must actually consider whether it's worth their time to apply. It encourages respect for the application process.
Unfortunately, this encouragement of respect is entirely one-way. In general, companies don't bother to reply to CVs they don't like, not providing even a soupçon of feedback. When you've spent three hours preparing your application, this is very lame.
Conclusion) I hate jobhunting.
As far as I can tell, the entire jobhunting process is a sneaky trick by companies to help them retain their employees. I've been actively hunting for less than a week and I'm already concluding that sticking hot needles through my fingernails would be less painful than attempting to leave my current employer.
But I know that, if I don't, before I know it I'll be a 40-year-old "placement specialist" and candidate for Ark B. I want to do something with my life, dammit.
Monday, October 13, 2008
That height rant
My, how things have gone downhill. Remember the good old days of conspiracy theory? You had the shooting of JFK, the CIA, the KGB, various mafias. And if all else failed, just call in SPECTRE.
These days, sadly, the conspiracies are much more mundane. I'm the victim of one at the moment. I've never successfully interrogated one of its operatives (damn cyanide pills), but I've been calling it CAT.
Campaign Against Tallness.
Now you may be thinking, what kind of conspiracy is this? It doesn't sound very intimidating, does it. But before you come to that conclusion, please examine Heathrow airport's Terminal 1 building. As a 6'4" tall guy, I have no problem with the ceiling (6'7") or even the doorframes (6'5"). What I do have problems with is the emergency lights (6'3"). The emergency lights in their sharp-edged, plate steel cases that are positioned right above the centre of the main walkway.
I feel this particular design feature conclusively proves not only CAT's existence, but also its dryly sadistic sense of humour. Blofeld had nothing on these guys.
Another CAT operative is at work in the car industry. Ever seen one of those cute little smart cars? Ever tried getting into one when you're 6'4"? Didn't think so.
And don't get me started on clothes. You would not believe how few trouser brands there are that even fit a 34" leg, let alone look good. Size 12 trainers? Sorry, that's one size higher than anyone holds in stock. Tough cookies.
So what actually is the evil goal of CAT? My best guess at the moment is that all its members are sensitive about their diminutive stature. By selectively eliminating tall individuals, or at least limiting their ability to breed*, they can ensure that soon everyone will be their height or lower.
For example, another of my (6'4") friends has a massive white streak in his hair from running through a (6'2") door. If he'd been running any faster, he'd have been permanently removed from the gene pool.
Of course their evil plan is ultimately self-defeating. Someday they may succeed in wiping out all the tall people.
But at that moment, CAMS (Campaign Against the Medium Sized) will stab them in the back. Bwahahahaha...
* And doesn't this just bring a whole new meaning to "not tonight baby, I've got a headache"?
Read the full post
Monday, October 06, 2008
That VBA Rant
So in addition to my regularly-scheduled job, I somehow agreed to do some training for the group of grads who joined the grad scheme the year after me. In Visual Basic for Applications. Which I loathe.
Now it's easy for me to say I hate Microsoft products. I'm a Linux user and a civil liberties geek, so it's fairly natural for me to hate those monopolistic bastards and their locked-down software. But this goes waaaay beyond that.
Visual Basic was the first programming language I ever learned. A friend introduced me to it when I was fourteen, and I was soon able to produce a cute little quiz program that got me really good marks in IT class. At the time I thought VBA was the bee's knees.
The process of disillusionment took a while. It probably started when I took an internship at a software company that did all their work in Python. The people were really nice, but it was immediately obvious how bad my programming style was. Stuff that should be completely intuitive for any programmer just wouldn't fit into my skull. I left with a much expanded repertoire of concepts, and a very strong sense of what makes code easier or harder to maintain.
My education continued over the years. A key resource was the comedy site The Daily WTF which, in addition to being hilarious, is a brilliant primer on how not to write code. Professional programming has a strong dose of the scientific community about it: most coding conventions have arisen from years or even decades of gradual refinement by large numbers of skilled practitioners.
The icing on the cake was probably my university years. I did maths, and the programming modules were by no means focused on best practice, but I got very very good at elegantly expressing difficult algorithms in code. Software is the true language of mathematics; standard mathematical notation is just a shorthand.
And there's something beautiful about writing code for a well-designed programming language. When concepts and implementation tie together neatly, it brings a tear to your eye. Python is very good on this point: if you can think it, you can probably express it in Python code.
And then I took on a summer job with a telecoms consultancy. Who did all their programming in VBA for Excel. And I realised how very far I'd moved on. By week 4 I was gnawing at tables trying to get this kludgy toy language to do what I asked it to.
It doesn't have proper error handling - just On Error GoTo. It doesn't treat functions as first-class values (try passing one as an argument and call me a liar). It's not even remotely self-consistent in approach or content. Most of the Excel side is a thin wrapper round the Excel UI, which means that things like text search aren't at all programmer-friendly. Source control? Don't be silly.
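To show what I mean, here's a quick Python sketch (my own illustration, nothing to do with any real project) of the two things VBA makes so painful: passing a function around like any other value, and structured error handling.

def apply_to_column(values, func):
    # Functions are ordinary values here: you can pass them around freely.
    return [func(v) for v in values]

def safe_float(text):
    # Structured error handling: no On Error GoTo in sight.
    try:
        return float(text)
    except ValueError:
        return None  # a bad cell becomes None instead of crashing the run

print(apply_to_column(["1.5", "2", "oops"], safe_float))  # [1.5, 2.0, None]

Ten lines, no ceremony, and it does something VBA simply won't let you express cleanly.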
Almost every language has something going for it. Java is slow and ugly, but at least it's portable. Lisp is impractical, but so very elegant. C++ is time-consuming, but good grief it's powerful. VBA is ugly, platform-specific, hard to use, lacking in basic and advanced functionality, but... there is no but.
As languages go, VBA stinks.
And yet, despite knowing this, I've managed to land myself in an industry with the highest concentration of VBA users on the planet. The vast majority* of actuarial "tools" are just spreadsheets with a VBA layer to do the messy stuff (batch-processing, goal seeks, etc).
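For the uninitiated, the sort of job that VBA layer does looks roughly like the sketch below - rendered here in Python via COM purely for illustration, since I can't bring myself to paste VBA. It assumes pywin32 on Windows, and the workbook path, sheet name and cell addresses are all invented.

import win32com.client

xl = win32com.client.Dispatch("Excel.Application")
xl.Visible = False
wb = xl.Workbooks.Open(r"C:\models\pricing_model.xls")  # hypothetical workbook
ws = wb.Worksheets("Model")                             # hypothetical sheet

for row in range(2, 102):  # batch-process 100 policies, one per row
    target = ws.Range("D%d" % row)    # formula cell we want driven to zero
    changing = ws.Range("B%d" % row)  # input cell Excel is allowed to vary
    target.GoalSeek(0.0, changing)    # the same goal seek the VBA layer would run

wb.Save()
xl.Quit()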
I really can't take much more of this. It's as if I was a semi-pro mountain biker and the company forced me to ride around on one of those tiny clown bikes. It's as if I was a fencer and the only weapon they'd give me was a feather duster.
What makes it worse is that most people don't think this way. They've never even spoken to programmers; they show no awareness of software best practices or of concepts like separation of concerns. Their idea of version control is taking a backup copy every few days (and that's only if they're particularly on-the-ball). Even commenting the damn code is seen as a bit avant-garde.
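There is at least a workaround for the version control problem. The sketch below (again Python via COM, with invented paths, and assuming Excel's "Trust access to the VBA project object model" setting is switched on) exports each VBA module to a plain text file, so a real version control tool can diff it.

import os
import win32com.client

xl = win32com.client.Dispatch("Excel.Application")
wb = xl.Workbooks.Open(r"C:\models\pricing_model.xls")  # hypothetical workbook
out_dir = r"C:\models\src"                              # hypothetical export folder

for comp in wb.VBProject.VBComponents:
    if comp.Type in (1, 2):  # standard and class modules; skip forms and sheets
        # Dump the module as text so an ordinary version control tool can track it.
        comp.Export(os.path.join(out_dir, comp.Name + ".bas"))

wb.Close(False)
xl.Quit()

Not that anyone in the office would use it, of course.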
Most of them have had no formal training. Now I freely admit that neither have I - I'm a software dilettante. But I know my limitations, and I put a fair amount of effort into discovering what lies beyond them. By contrast, most people in the finance industry are alchemists: they take what they've learned from one or two tutors** and elevate it into divine knowledge.
I worry that, if I stay too long, I'll turn into one of them.
So I'm producing this VBA training material, and hating every minute of it. It's like swimming through treacle. The only good thing I can say is that, once the training is over, the company will have someone else to lumber with VBA-related tasks.
Rant over.
* The only alternative appears to be COBOL. This kinda says it all.
** Who learned it from their tutors, who learned it from their tutors. The chain generally terminates with a bloke who read "VBA for Dummies".
Read the full post
Tuesday, September 30, 2008
Depressing much
It's no secret that I'm not particularly enthused by my current job. I'm on placement (i.e. my company is renting me out) to a big life office. The job title was "manual calcs" which, in the actuarial world, usually means a fair amount of juicy maths.
Actuarial maths doesn't really float my boat, but I figured I'd at least get some good old-fashioned brain stimulation. No dice. The job turns out to be basically data entry: extracting information from old systems and posting it to pensions administrators. Not exactly world-changing stuff.
One thing I don't mind about the job is that it gives you a certain feeling of connection. You read all these names, all these dry facts about people, and you wonder what their lives are like. This lady went to Australia. Was it an elopement? This guy married at the age of 60. Did he finally meet the love of his life?
This young fella who was only in the scheme for a year. Did he find his dream job elsewhere?
Then there's the death claims. These quite often give a sense of connection, but for entirely the wrong reasons. Today I processed one that really got to me: a top-tier medical professional, lonely and living alone, committing suicide.
When you read something like that, you can't help but wonder: will that be my life? Will I live without love? Will I die with no-one to mourn me?
I'm 23 and I've been single for 5 years now, which time included my entire university career. I'm fairly sure that I'll be able to fall off this particular wagon given time, but it's not immediately obvious how to go about this. That's really disturbing, when you think about it: the only thing between me and a really depressing death claim is 40 years.
I'm not scared of kicking the bucket (no, really!). I just don't want to have too many regrets when it happens. The situation definitely calls for me to do something drastic. The problem is, I don't have a damn clue what.
Answers on a postcard.
Read the full post
Thursday, August 28, 2008
Recommended Reading
So I've been meaning to read something by Cory Doctorow for a while. The guy has class, and he's a prominent member of today's civil-rights community, which I have a lot of respect for.
I've finally gotten round to looking up his site and reading one of the e-books he makes freely available: Little Brother. It freaked the hell out of me. And it prominently raised a question that's been bothering me for a while: why don't most people pay any attention to attacks on their rights?
Why do people cheer when the government arrests people, locks them up without trial, and won't release them even if they're proved innocent? Why do people accept the government's right to snoop on people in the hope of catching them in minor misdemeanours? In short, how can anyone hear the words "if you've nothing to hide then you've nothing to fear" without spitting soft drink all over their keyboard?
Governments are made of people. People can get things wrong. Worse, a decent minority of people are power-hungry bastards. Any organisation that possesses power will tend to attract such individuals like wasps to a picnic. Once a critical mass of bastards builds up, a powerful organisation can go bad real fast.
Terrorist groups are also made up of people, most of whom can be legitimately considered to be bastards. However, there are several important differences here:
1) Governments have better raw materials. So far no terrorist has ever got hold of a nuke; the government of the USA has about 10,000 and once blew up two whole cities.
2) Governments have more manpower. Al Qaeda is estimated to have on the order of 20,000 members worldwide, many of whom have day jobs. The UK Civil Service has 500,000 full-time employees, and that's just one component of one country's government.
3) Governments have stronger surveillance capacity. Fraudsters have to struggle to get access to even a few people's records; the UK government has lost 30 million personal records this year alone, including mine.
4) Governments tend to get the benefit of the doubt. If any non-governmental group had killed as many dogs as the various USA police forces, they'd have been (ahem) hounded out of the country.
Governments have 99% of the power, 99% of the weapons, and 99% of the mob support. Governments have a long history of going bad. And yet we're worried about those terrorists who don't work for the State?
Read the full post
Tuesday, August 26, 2008
Eureka
One of my favourite books, Cryptonomicon by Neal Stephenson, contains frequent references to a specialist mailing list for crypto geeks. The problem is that, in the real world, any mailing list offering high-level cryptographic discussion quickly gets colonised by opinionated numpties. So, in true Darwinian form, they're quite hard to locate.
I think I've just stumbled across one here. This is great for me, because I've wanted to develop a more advanced knowledge of cryptanalysis for some time. Probably not so good for the list that it's been discovered by the outside world.
Once my forthcoming actuarial exams are over, I'll have to have a browse through the archives (in between learning Bengali and ancient Assyrian and brushing up on my computational biology). I'll let you know if I discover anything particularly interesting.
Read the full post
Thursday, August 14, 2008
Random poetry day (week?)
If you can keep your head when all about you
Are losing theirs and blaming it on you,
If you can trust yourself when all men doubt you,
But make allowance for their doubting too;
If you can wait and not be tired by waiting,
Or being lied about, don't deal in lies,
Or being hated, don't give way to hating,
And yet don't look too good, nor talk too wise:
If you can dream - and not make dreams your master;
If you can think - and not make thoughts your aim;
If you can meet with Triumph and Disaster
And treat those two impostors just the same;
If you can bear to hear the truth you've spoken
Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
And stoop and build 'em up with worn-out tools:
If you can make one heap of all your winnings
And risk it on one turn of pitch-and-toss,
And lose, and start again at your beginnings
And never breathe a word about your loss;
If you can force your heart and nerve and sinew
To serve your turn long after they are gone,
And so hold on when there is nothing in you
Except the Will which says to them: 'Hold on!'
If you can talk with crowds and keep your virtue,
Or walk with Kings - nor lose the common touch,
If neither foes nor loving friends can hurt you,
If all men count with you, but none too much;
If you can fill the unforgiving minute
With sixty seconds' worth of distance run,
Yours is the Earth and everything that's in it,
And - which is more - you'll be a Man, my son!
- "If" by Rudyard Kipling
This poem is the best expression of the Stoic philosophy I've ever come across, and every time I read it I find some new application to my life. Stoicism is intended to inure people to the stresses of existence - a good lifestyle for kings, generals, and those who try to arrange large lunch gatherings.
Read the full post
On the Psychology of Military Incompetence
...is the title of a rather good book I came across a couple of years back. Despite being dated by its Freudian lingo, it was a rather well-thought-out consideration of an apparently simple question: why do generals suck so badly?
It used to be (possibly still is) a well-worn aphorism among soldiers that, whilst the poor sod in a different uniform might kill you, it was your general who would murder you. That was adequately demonstrated by the "meat-grinder" battles of the World Wars, where millions of men were thrown into combat with effectively no chance of achieving the mission objectives or coming out alive.
Or, for another example, consider the infamous defence of Johore (in Malaya). The British commander decided what direction he thought the attack would come from, concentrated all his forces in that direction, and stubbornly ignored all evidence to the contrary. He even ordered that no barricades be built because it would be "bad for morale". Of course, when the Japanese arrived, the defensive line was obliterated.
Why would someone give orders that were so blatantly stupid? OtPoMI gives a thorough discussion of this question, and turns up two main conclusions:
1) The reputation of the military is such that the people who join it tend to be insecure people looking for a stable foundation.
2) The structure of the military is such that risk-averse careerists tend to rise through the ranks fastest.
The result of this is that you get a whole range of commanders who replace self-confidence with bluster, who are inexperienced at dealing with trouble, and who see each battle not as a learning experience but as a threat to their personal reputation. You get people who, faced with a deteriorating situation, are completely unable to get their brains in gear, let alone sort things out. (In fact, the Johore story gives a clear example of generals focusing on morale to the exclusion of reality - were they just trying to clear their heads?)
In short, you get people who are likely to screw up really badly.
I've currently got a bit too much of that in me for my liking. In yesterday's restaurant debacle, I was more worried about my reputation as an organiser than about the actual event. A good leader would have focused on making sure everyone was happy, even if this meant ditching the restaurant early and hitting the canteen. But I was too busy panicking to display that level of flexibility.
A good leader would have accepted that this was just one of those things, taken his lumps from the rest of the group, and moved on. I got defensive. I don't think I did anything particularly dumb, but the potential was there. Again, I was focusing on my reputation, and thus failing to "keep my head when all about were losing theirs and blaming it on me".
I have a naturally careerist streak. I don't necessarily apologise for this - it's an excellent source of personal motivation, and frankly I'd have done a lot better at uni if I'd developed this tendency sooner. But it does leave me open to precisely this sort of funk. Before we left for the restaurant, the dept manager had complimented me for arranging the leaving do unasked. If anything, being in the spotlight like this just made me freeze up even worse when everything went pear-shaped.
However, one advantage I have over the generals of yesteryear is that no-one can (successfully) accuse me of avoiding novelty or challenge. Hell, I went to one of the weirdest (and hence most successful) universities in the world. I can hack it.
The problem with careerism isn't so much that it leads to bad behaviour as that it leads to overfocusing on reputation and hence to bad stress reactions. I hereby resolve to learn to control this behaviour. In a way, yesterday was great, because I learned how much damage my adrenal gland can do me. Next time the shit hits the fan, I intend to be carrying an umbrella.
Read the full post
Wednesday, August 13, 2008
Opposite of "up"
Feeling fairly down at the moment. I'm dealing with the aftermath of a massive adrenaline spike round about lunchtime, followed by an emotional kick in the goolies during the afternoon.
Today was a co-worker's last day in the team - she's leaving for a more interesting job elsewhere. I've only been on the team for a couple of weeks now but, when I found out that no-one had arranged a goodbye lunch for her, I figured "what the hell, worth a shot".
This is the first time I've organised something like this, so I made a point to cover all bases. I booked the restaurant, sent emails to the group, kept a log of who was coming. I even called in to the restaurant earlier today, just to make sure nothing could go wrong. I got everyone out of the office promptly, made sure they made it to the restaurant, ordered food, and sat chatting while we waited for it to arrive.
And waited. And waited. And waited.
After 40 minutes, we're getting a bit worried. The food hasn't arrived and, although the conversation has been excellent, we do need to be back at the office in about half an hour for a team meeting. It's only a three-minute walk so this isn't a problem, as long as the food arrives now. I check with one of the waitresses and she tells us it'll only be five minutes more.
So we wait. And wait. And wait. And all the time my blood pressure is getting higher and higher as I contemplate the consequences of having invited everyone to a lunch at which no food was actually served.
Turns out that the restaurant next door had closed for the day, so all its business was coming to our restaurant. And, being such helpful fellows, they couldn't possibly turn anyone away...
Eventually the manager comes over to us and confesses that there's unlikely to be any food arriving. He's very apologetic, and offers us a stack of free pizzas for collection in half an hour or so. So at least people got some lunch in the end, but it was still bloody stressful to feel like I'm responsible for everyone going hungry.
That was Act One. By this point, I'm on the boil, out of my mind on fight-or-flight hormones, I feel like the sky is falling, etc, etc. And another co-worker (who hadn't even been to lunch) picks that moment to request that I don't talk to him again except about work stuff.
That's kinda harsh. The worrying thing is, I have this complete uncertainty about whether I did anything to deserve it. I've said nasty things to this guy previously, but only in a well-defined context of mutual bloke-on-bloke teasing. Did I cross a line? Did I go outside that context? If so, why did he wait til now to say something?
Or, more worryingly, is it something I'm not even aware of that set him off? I react very badly to adrenaline. Round about the lunch-induced hormone spike, there are periods where I can't remember precisely what I said to whom. What did I say to him???
My brain is currently frying in its own juices on a mixture of emotional exhaustion and Othello-level paranoia. So no management skepticism tonight. I'll be OK come tomorrow, when my body chemistry is normal and the molehill stops looking quite so mountainous.
Read the full post
Tuesday, August 12, 2008
Vertigo
One of the useful features of a new job is a chance to reinvent yourself a bit. I've been doing OK on that front, but of course there are some things you can never change.
For example, as a long-time introvert, it's inevitable that I'll seek peace and quiet to recharge my batteries. As an introvert, I should do better when focusing on my own personal tasks than when interacting with others. Right?
Well, actually, I spent today's "dead time"* organising a leaving do for a team member. And, despite the slight possibility that everyone will weasel out and the restaurant will break my legs for booking too much space, it's been really energising. It totally got me raring to go, and my ability to focus on work improved dramatically.
So what's the secret here? What is it that turns an habitual introvert into a contact-seeking extrovert? After a couple of weeks in my current role, I think I've put my finger on it.
Extroverts are people who have really. Boring. Jobs.**
* In case my employers come across this, I should clarify that this refers to e.g. the time between handing over a massive stack of completed cases and being given another massive stack to start on.
** I'm not so bothered about my employers reading this, because it's a widely-acknowledged fact of life in our team. If they fired everyone who said the work was dull, they'd get lonely real fast...
Read the full post
Sunday, August 10, 2008
Random poetry day
Know then thyself, presume not God to scan;
The proper study of Mankind is Man.
Plac'd on this isthmus of a middle state,
A being darkly wise, and rudely great:
With too much knowledge for the Sceptic side,
With too much weakness for the Stoic's pride,
He hangs between; in doubt to act, or rest,
In doubt to deem himself a God, or Beast;
In doubt his Mind or Body to prefer,
Born but to die, and reas'ning but to err;
Alike in ignorance, his reason such,
Whether he thinks too little, or too much
- Excerpt from Alexander Pope, An Essay on Man
Hat tip to Matt Ridley and his excellent book The Red Queen
Read the full post
Tuesday, August 05, 2008
Management Skeptic #3: Detecting Lemons
This series discusses the concept of "management models" from a skeptical viewpoint. In my first post, I raised the question: do "management models" such as PRINCE2 improve a practitioner's ability to manage? In my second post, I discussed a common argument in support of this claim and demonstrated why it doesn't hold up.
This post analyses a second common argument in support of management models, and provides more reasons why a course can be popular without being any good.
Argument #2: Management models must be good, or why would companies buy into them?
In my last post, I discussed how a qualification can be very useful for job applicants without actually achieving anything in itself. Candidates who have been on well-marketed courses like PRINCE2 will often be seen as more competent, regardless of their real proficiency. As a counterexample I presented the "VISCOUNT4" thought experiment: a training course that is deeply popular despite sucking.
But why, one might ask, does this work in the long term? Why would companies hire people with a worthless qualification, and how can they prosper if they do? In short, why doesn't the "natural selection" of capitalism weed out VISCOUNT4-style courses?
From this question, it's easy to start jumping to conclusions. If these companies survive then preferential hiring of PRINCE2 practitioners must be beneficial for a company, so clearly PRINCE2 itself must make managers more effective...
I already discussed one objection to this chain of logic: it's possible that this preferential hiring is doing damage, but that the self-interested interviewers still favour a qualification that many of them possess.
In this post, I will provide another answer. I'd like to draw your attention to one of the most fascinating developments in economics since the young Adam Smith got a summer job at the nail-making factory: the new concept of information asymmetry.
The basic principle is simple. When someone suspects that a purchase isn't worth the asking price, they won't buy it. For example, why is it that the value of a car drops dramatically as soon as it leaves the vendor's turf? Because a small number of cars in any batch will be dodgy - "lemons" - and, by trying to sell your car on so soon, you're signalling to the world that you've been landed with one of them.
This sounds like common sense, but it creates a headache for economists. For example, back when England's currency was based on precious metals, it was well-known that "bad money drives out good". Some coins would be adulterated, forged, and otherwise tampered with in order to extract some of the value from them. Of course, when you found that you'd been landed with one of these lemons, you'd try to palm it off on some other sucker, but you'd keep the good coins safe in your wallet.
After a while, the very fact that you were trying to pay with a coin sent a signal to the vendor that that coin sucked. The value of money declined, which made it even more futile to pay with a good coin. A race to the bottom ensued that required heavy-duty government intervention to resolve.
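The unravelling is easy to see in a toy model of the used-car case. The little simulation below is purely my own illustration (made-up numbers, nothing rigorous): buyers offer a price based on the average quality of what's on the market, sellers with better-than-average cars withdraw, and the average drops again.

import random

random.seed(0)
on_sale = [random.uniform(0, 1) for _ in range(10000)]  # quality of each car: 0 = lemon, 1 = gem

for step in range(6):
    avg_quality = sum(on_sale) / len(on_sale)
    offer = 1.2 * avg_quality  # buyers pay a premium on the average quality they expect
    # Sellers who know their car is worth more than the offer pull out of the market...
    on_sale = [q for q in on_sale if q <= offer]
    print("round %d: offer %.2f, cars still for sale %d" % (step + 1, offer, len(on_sale)))
    if not on_sale:
        break  # ...until only lemons (or nothing at all) are left

Within a handful of rounds the only cars left are the ones nobody should want - the same race to the bottom as the coinage example.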
The job market seems like it would be very vulnerable to this effect. If companies tend to retain good employees and ditch bad ones, soon the very fact that someone is after a job is itself enough to taint their application. How do we get around this? How does a good candidate signal to the employer that they are indeed a good candidate?
The answer is that they perform a task that is relatively easy for them but would be deeply painful for a poor candidate. Back in the old days, hunters used to prove their strength and tactical expertise by slaying a large predator, and anyone who hadn't yet done so was given reduced privileges within the tribe.
These days, students flock to universities in their thousands, in the safe knowledge that the time spent (wasted?) on some obscure academic subject will repay itself when they hit the job market. It's (fairly) easy for a smart, disciplined individual to pass exams, but very difficult if you're a poor candidate.
Similarly, although VISCOUNT4 may not contribute to a manager's effectiveness, the fact that someone has put themselves through expensive training courses demonstrates a great deal of commitment to the cause. Imagine: they could have gone on a three-week bender, they could have spent a weekend in Amsterdam's red light district, they could have knocked a month off their mortgage repayments, but instead they went on a training course. What dedication that individual must have!
So, counterintuitively, even if VISCOUNT4 sucks it may still be in employers' best interests to hire VISCOUNT4 practitioners. The world is weird that way.
Read the full post
Management skeptic (post #2)
In my last post, I raised the question: what advantage do "management models" such as PRINCE2 convey to the practitioner? Are they really worth the expensive training courses?
I've discussed this with many people, and they all seem to think the answer is "yes". In fact, it's almost unquestioned in management circles that applying management models can improve your management ability. In particular, implementation of quality management models like EFQM is in many cases required before people will entrust you with business. And you have to pass an exam on the Actuarial Control Loop (a variant of the management control loop I mentioned last time) before you can become an actuary.
If management models really are effective then that's fair enough. But I'm not sure the question has been answered.
In this and future posts I'll look at the various arguments provided for management models, with a description of why each gets my skeptical spider-sense tingling.
Argument #1: Management models look good on your CV
This one is actually true: if you've taken PRINCE2 (for example), it's generally assumed that you're a more competent project manager than a similar candidate who hasn't. This contrast is amplified by the number of managers who have taken PRINCE2 and therefore have an incentive to emphasise its coolness.
However, this effect isn't directly due to PRINCE2 itself, but rather to the packaging, marketing and community that surrounds it. Consider an imaginary qualification, which I'll call VISCOUNT4. This qualification is completely useless, but the VISCOUNT4 company has excellent advertising skills and soon manages to convince everyone that what their CV really needs is a VISCOUNT4 certificate.
As a result of this, VISCOUNT4 becomes immensely popular, and is subject to the positive feedback loop I mentioned above where managers with VISCOUNT4 tend to hire other managers with VISCOUNT4. The course quickly becomes a prerequisite for project management. But it still sucks.
In short, the popularity of PRINCE2 is not an argument for the effectiveness of PRINCE2, because the VISCOUNT4 scenario demonstrates another way that this popularity can arise. The PRINCE2 model cannot be said to have value simply because the PRINCE2 marketing is superb.
Read the full post
The Pink Link
It's well-known that many animals use colour to signal to each other (the classic example being baboons' bottoms), and of course this applies to humans too. We have the additional advantage that we can change our colour scheme without application of any nasty hormones.
I think I've identified one interesting example of this. It's noticeable in (British, financial-sector) offices that the colour pink is rarely worn by guys - unless that guy is a manager.
My best guess is that this originated as a statement of independence - the guy is signalling "I'm so powerful/self-confident that I don't need to obey peer pressure." That's a message that would be expected to propagate within the management community (because they like to think of themselves as standing out from the crowd) but not outside it. It's like deliberately picking the wrong urinal.
If I've noticed this trend, other people probably have too, so I suspect that the fashion is evolving into a way for management-inclined individuals to make their presence known to each other. In a spirit of scientific enquiry, I'm going to wear a pink shirt into work today and see if my new boss (who I've also seen wearing pink) pays any attention.
Watch this space.
Update: Nope, doesn't make any noticeable difference apart from getting me the occasional funny look from co-workers. I still think there's a correspondence here between managers and pink shirts, but clearly it only goes one way.
Update #2: Just received some rather good feedback from the new boss at the client company via my boss from the company that's farming me out. Bear in mind that this has passed through two layers of management, so is probably more motivation than message. However, this is the first time anyone has ever used the word "charismatic" to describe me, so I must be doing something better than I usually do.
This feedback provides slight support for the Pink Principle. It's a rather weak data point, but I've at least reverted from "skeptical" to "undecided" on this question.
Monday, August 04, 2008
I can has intarweb?
Yes, u can has intarweb!
In English: I'm currently on contract (basically my company decided to farm me out for a few months to another company). This means that I'm staying at a hotel. With wifi (which my flat still lacks). And sod-all to do in the evenings, pardon my French.
Given half a chance, I'll use some of the time to complete the series on management skepticism, and maybe expand it a bit. It's something I need to think through anyway.
Other random thoughts, before I forget:
- Interesting
- Never mind literary criticism, try management...
- I want to be this guy :)
Tuesday, June 17, 2008
Argh (or, In Defence of Dawkins)
One comment I hear a lot in discussions about atheism is that Dawkins (who apparently is the One True Atheist) is very simplistic in his approach to religion. This is semi-true - his book The God Delusion does indeed go lightly on the "sophisticated" theology.
However, this seems vastly more acceptable when you realise that very few of the folks at whom his book is aimed will have Plantinga's works on their bedside table. In fact, they're rather more likely to be familiar with crap like this.
Given this, can I say once and for all that, despite not discussing the evolutionary transcendental argument in much depth, Dawkins' book is a fine piece of work that is generally appropriate to the audience for whom it was intended?
Thursday, June 12, 2008
DRM rant
One of the things that Linux doesn't handle too well is the bizarre obfuscations that many media companies use to protect their content. For example, I just came across an interesting-sounding movie called "The Fall". Now, you'd have thought that a trailer, at least, would be easy to find and view. Sadly not.
Step 1: Attempt to view a Flash trailer. Fail. It appears that something in the Flash code is sufficiently weird that Gnash (the open-source Flash viewer) can't handle it.
Blame: partly on open source, partly on whoever wrote the code.
Step 2: Look up the trailer on Apple's trailer site. Find that the page requires some kind of Quicktime plugin, which isn't available for Linux. Blame: mostly on Apple.
Step 3: Download the trailer directly. Discover that the URL actually leads to some kind of 138-byte redirector file. There is no apparent reason for this - I can only assume that it's intended to stop people downloading the trailer rather than streaming it (why???). Blame: Apple.
Step 4: Look at the redirector's raw bytes, and find the trailer's real address embedded in it as plain text. Reconstruct the URL, download the trailer and watch it as nature intended. For Pete's sake, people, if you're going to obfuscate then a useful tactic would be to NOT PUT THE FILENAME AS PLAINTEXT IN THE OBFUSCATOR!!! What the hell are these people smoking?
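For the record, step 4 needs nothing cleverer than a text search. Here's a minimal sketch of the idea in Python - the filenames are made up, and I'm assuming the embedded address ends in .mov, so treat it as an illustration rather than a tool:

import re
import urllib.request

# Read the tiny "redirector" file that the trailer link actually points at.
with open("redirector.mov", "rb") as f:
    data = f.read()

# The real address sits in the file as ordinary ASCII, so a crude regex is enough.
# (Assumption on my part: the trailer URL ends in .mov.)
match = re.search(rb"https?://\S+?\.mov", data)
if match:
    url = match.group(0).decode("ascii")
    print("Embedded trailer URL:", url)
    urllib.request.urlretrieve(url, "trailer.mov")  # grab the real file
else:
    print("No plaintext URL found - so much for that theory.")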
Verdict: Apart from the Flash thing, non-Microsoft users like myself would be fine if it wasn't for the IDIOT COMPANIES who apply COMPLETELY POINTLESS "PROTECTION" to their oh-so-valuable film trailers. Gah.
Sunday, June 01, 2008
Update on psychic testing
A while back, I mentioned that I'd hopefully be doing some trials with a friend who claims the existence of psychic powers etc. Yesterday we finally got round to doing some preliminaries.
The tests will be a lot less complex than I'd expected because, rather than testing for psychic communication between two believers (my friend and a third party), we're going to test his ability to channel energy through a pendulum into my hand. It'll be a straightforward "which of my hands is he holding the pendulum over" thing.
Protocol 1 (non-rigorous tomfoolery)
Our initial experiments were not terribly promising. I thought I could feel something the first time we did it, and guessed correctly. So far so good. However, I suspected that the "something" I could feel was in fact heat off my friend's hand, so for the second and third runs I covered each of my hands with a sheet of paper. I got both wrong.
Protocol 2 (slightly more rigorous)
At this point, my friend commented that, when he'd been mucking about with a fellow believer, the first run in any given sequence had generally been the most successful. He speculated that, after that point, there was some kind of psychic residue contaminating the experiment that took a while to wear off.
To eliminate this factor, we arranged a new protocol: every time we see each other (about once a week), we'll repeat the test once. For the moment, the only specific precaution against bias will be closed eyes and paper-covered hands. After five runs, we'll check the tally of results to see whether there's any statistically significant effect. If there is, we'll up the rigour. If not, we'll investigate other test options.
PS. To my friend's credit, he didn't use the "psychic residue" as an excuse for failure. In fact I had to persuade him not to include the initial negative results in the final tally.
What is a statistically significant effect?
The basic approach used for statistical testing is "significance levels". If something is "significant at the 5% level", that means that, if there were no real effect at all, the chance of getting a result that impressive purely by accident (a false positive) would be no more than 5%.
If there is no psychic effect then, over five trials, the probabilities of success are as follows:
P(5 correct guesses) = 1/32 = 3.1%
P(4 correct) = 5/32 = 15.6%
P(3 correct) = 10/32 = 31.2%
P(2 correct) = 10/32 = 31.2%
P(1 correct) = 5/32 = 15.6%
P(0 correct) = 1/32 = 3.1%
If we wanted to do a significance test at the 20% level, we would say that the results were significant if 4 or 5 successes appeared (since P(5)+P(4) < 20% < P(5)+P(4)+P(3)). This is a pretty damn easy hurdle to pass, so if we don't get at least 4 successes then there's probably not much point carrying on with this protocol.
If we wanted to do a significance test at the 5% level, we would say that the results were significant only if all 5 successes appeared (since P(5) < 5% < P(5)+P(4)). This is a slightly tougher hurdle - if we pass it (i.e. if we have a 100% success rate) then it'll be worth applying stronger controls.
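For anyone who wants to check my arithmetic, here's a minimal sketch in Python (standard library only) that reproduces the table and the two hurdles above:

from math import comb

n = 5    # five runs of the test
p = 0.5  # chance of a correct guess if there's no psychic effect

# P(exactly k correct) = C(n, k) * 0.5^n under the null hypothesis
probs = {k: comb(n, k) * p ** n for k in range(n + 1)}
for k in range(n, -1, -1):
    print(f"P({k} correct) = {comb(n, k)}/32 = {probs[k]:.1%}")

# Tail probabilities, which set the two significance hurdles
print(f"P(4 or 5 correct) = {probs[4] + probs[5]:.1%}")  # about 18.8%, just under the 20% level
print(f"P(5 correct)      = {probs[5]:.1%}")             # about 3.1%, under the 5% level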
Protocol 2 scoreboard
Date: 1 June 08
Correct guesses: 0
Incorrect guesses: 0
Tuesday, May 20, 2008
Pissing off Pirsig
I've just been rereading my recent post on "the art of religion", and I'm getting a strange feeling of deja vu. There are a lot of concepts in there that, on reflection, I blatantly nicked out of the early chapters of "Zen and the Art of Motorcycle Maintenance" by Robert M Pirsig. This is the book that this blog was supposed to be about, back when I thought I'd be able to blog with discipline (ha!).
In ZAMM, Pirsig splits the world into two viewpoints: "classical" (logical, structured, analytic) and "romantic" (intuitive, free-flowing, perceptual). This is precisely what I was describing as the "scientific" and "artistic" approaches. So far, so plagiaristically good.
But Pirsig spends the rest of the book dumping on this classification. He points out that the urge to classify viewpoints is itself a product of the classical approach. True romantics don't even think in terms of sciences and arts; all that they see is whether stuff resonates with them, whether it turns them on or off.
This also maps directly onto my post. It's noticeable that many people who are religious for "artistic" reasons like to describe their beliefs as scientific (e.g. Scientology, Christian Science, scientific creationism). This can be seen as a side-effect of their romantic viewpoint - they like the connotations of the word "science", so they attach it to their beliefs. Questions of whether this label is appropriate are, to them, as irrelevant as they are ugly.
I'm trying to say what I mean here without coming across as snide, which is quite difficult given that I'm about as analytical as it gets. I honestly don't mean to denigrate this behaviour - it's only from my viewpoint that it seems inappropriate. One could argue that the reverse behaviour - describing scientific concepts as beautiful - is just as inappropriate, and I'm guilty of that all the time.
Still, it's surprisingly hard for me to breach Pirsig's divide and see the world in terms of art not science. It's also mind-expanding, tolerance-inducing, and all that good stuff.
I'm aware that some people are bothered by my description of religion as an art rather than a science. If you're one of them, I'm more than happy to discuss whether religion succeeds or fails as a science - drop me a line in the comments section.
I'm still working in a different country from my primary source of intarweb, so it may be a few days before I respond.
Sunday, May 11, 2008
Still scary
Read Flowers For Algernon again today. It still seriously freaks me out. I'm definitely developing a love/hate relationship with that book.
On the other hand, the existential dread it induced did inspire me to go jogging. Causing low-level damage to one's leg muscles is very life-affirming.
Saturday, May 10, 2008
How to do it
Let's say you're a creationist, and you've decided to make a lot of noise about evolution. So you reach for your keyboard and start typing:
"The human eye is composed of so many different interlocking parts that it can't possibly have..."
STOP RIGHT THERE! You've just fallen for a schoolboy error: you're about to make an argument for which thorough, accurate and catchy refutations are available. You probably read this argument in an ISCID pamphlet, right? Didn't you consider that evilutionists may also have read that pamphlet, and prepared themselves to rebut its claims?
Instead, and I cannot emphasise this enough, you should pick an argument that they haven't come across before. Dembski actually had the more mathematically-inclined evilutionists feeling uncertain for a bit. Behe managed to look convincing for at least half an hour. I repeat: the best arguments refer to areas of academia the evilutionist is unlikely to have hitherto explored.
Exhibit A:
"It's simple, really, but the best ideas always are. Make a graph whose vertices are all possible genotypes with two vertices connected if they are one mutational step away from each other. That graph is isomorphic to a Cayley graph of a certain matrix group with respect to a standard generating set. (Surely that's obvious?) Such Cayley graphs attach in a natural way to arithmetic Riemann surfaces, as I explained in obnoxious detial in Chapter Five of my thesis. It is now a consequence of Selberg's eigenvalue conjecture for such surfaces (which everyone just knows is true) that these graphs have weak expansion properties. That is, they have relatively small Cheeger constants, which implies that they fracture easily. Which in turn implies that evolution by natural selection can not move efficiently through the graph. QED."
Pure genius at work!
Monday, May 05, 2008
The art of religion
Henry Neufeld is a really nice guy. He's one of those folks who, if he was a church leader in my area, I'd probably go to church with just for the sake of having more fun discussions. It helps that he has a fair amount of respect for skeptical atheism - and, unlike 99.99% of religious people, he actually understands what it means to be a skeptical atheist.
As a result, my discussions with him tend to throw up a significant number of conceptual gems. In particular, I draw your attention to this post on his site: Believing in Words and Symbols. The underlying theme is that he really only has one core belief: that there is Something out there. Everything else - the Trinity, the Resurrection - is really just a language, a set of myths that seem to convey the feelings he experiences.
It was only once I'd heard this concept expressed clearly that I realised I'd come across it before. Looking back, a number of books I've read and people I've spoken to have touched on the same thought: that the important part of religion is the central Mystery, and the rest is just the clothes we put on it. It seems to be a fairly common theme in the more mystical variants of religion - consider the Gnostic creation myths, for example, or the Buddhist koans. This kind of religion isn't a science - it doesn't claim to describe the universe exactly. Rather, it is an art that allows people to express their deep feelings more clearly.
Possibly the only reason I'm seeing this common thread so clearly is that I've reached a point in my personal journey where I'm able to appreciate it. When I was younger, I spent an inordinate amount of time making what I might describe as a scientific sweep of religion, searching through the stewpot of superstition for anything that might have real applications. I wanted to dissect demons, bottle angels, and unleash whatever power the mind might have.
Sadly, I was born too late. Whilst the early scientists might have deduced vaccines and antibiotics from old wives' tales, in this day and age almost anything useful in religion has already been ripped from its clutches and absorbed into the realm of science. There are no demons, no angels, and the only consequence of trying to "unleash the mind" is a mild headache.
These days I'm attempting a more sophisticated analysis of religion - what might be described as a psychological sweep. Religion is a wonderful resource for students of psychology. Since very little reality-based testing occurs, it tends to attract and retain superb examples of cognitive bias and glitching. I'm of the strong suspicion that some of these are universal bugs - sorry, "features" - of the human brain; the question is which ones and why.
On a more complimentary note, religious traditions often contain ways of dealing with common cognitive issues that more "rational" approaches leave out. To quote Anton LaVey:
"One of the greatest of all fallacies about the practice of ritual magic is the notion that one must believe in the powers of magic before one can be harmed or destroyed by them. Nothing could be farther from the truth, as the most receptive victims of curses have always been the greatest scoffers. The reason is frighteningly simple. The uncivilized tribesman is the first to run to his nearest witch-doctor or shaman when he feels a curse has been placed upon him by an enemy. The threat and presence of harm is with him consciously, and belief in the power of the curse is so strong that he will take every precaution against it. Thus, through the application of sympathetic magic, he will counteract any harm that might come his way. This man is watching his step, and not taking any chances.
On the other hand, the 'enlightened' man, who doesn't place any stock in such 'superstition', relegates his instinctive fear of the curse to his unconscious, thereby nourishing it into a phenomenally destructive force that will multiply with each succeeding misfortune. Of course, every time a new setback occurs, the non-believer will automatically deny any connection with the curse, especially to himself. The emphatic conscious denial of the potential of the curse is the very ingredient that will create its success, through setting-up of accident prone situations. In many instances, the victim will deny any magical significance to his fate, even unto his dying gasp - although the magician is perfectly satisfied, so long as his desired results occur. It must be remembered that it matters not whether anyone attaches any significance to your working, so long as the results of the working are in accordance with your will. The super-logician will always explain the connection of the magical ritual to the end result as 'coincidence'."
Ever since reading this, I've taken to "warding off bad luck" by drawing a favourite symbol on my chest whenever I feel I'm tempting fate. It works surprisingly well.
Evidently this useful little trick was incorporated into Catholic doctrine a while back, and has lurked there ever since. What other gems of wisdom are waiting to be separated from the dross of accumulated memes?
Sunday, May 04, 2008
Plantinga's unnatural naturalism
So, firstly, I'm back off holiday. Secondly, the rest of my life is starting to settle down. I currently work in a different country from the one I live in, which is causing some problems, but that still leaves me with the occasional snippet of time for blogging.
Thirdly, my sister just started a new module of her Philosophy degree: religious philosophy. Needless to say, this has resulted in many fun discussions. So far, though, it's all been window-dressing - I know the arguments inside out. Most of it stopped being interesting a while back, which is why these days I'm more focused on religious psychology.
One guy who did catch my attention, though, was Alvin Plantinga. This guy gets points for coming far closer than average to a reasonable summary of the skeptical atheist position. However, he still commits some howlers at times, which IMO betray a comparative ignorance of science and, in particular, evolutionary biology.
As an example, I draw your attention to his "paper" (actually more a lecture transcript) An Evolutionary Argument Against Naturalism. Eschewing all the philosophical language, the central point is that Godless rationality is self-defeating, since brains that evolved by natural selection don't give a damn about truth as long as they carry on surviving and breeding.
Anyone who's been in the skepticism trenches for a while will recognise this as a variant on the transcendental argument, aka the Argument from It's My Ball And You Can't Play With It. This is one of the more annoying arguments for God because any counter-argument you make can itself be interpreted by the theist as more self-refuting rationalism. Plantinga, however, lays the argument out clearly enough that the fault lines are visible, which is why I like him.
In his article, he summarises the evolutionary position as "beliefs are adaptive". He then uses a neat example to show why this could lead to "pathological beliefs" (beliefs that are adaptive but false) as easily as true ones. Imagine a critter that enjoyed petting vicious tigers, but thought that the best way to pet a tiger was to run very fast in the opposite direction. Then its beliefs would lead to the most survival-enhancing result (legging it) so would be selected for, despite being completely unreflective of reality.
Extending this logic further, the claim is made that a brain produced by unassisted evolution will not be particularly adept at picking true beliefs; rather, it will pick beliefs that cause survival. Hence, if we were produced by evolution, our cognitive systems would be so unreliable that we couldn't justifiably say we were produced by evolution. Catch-22.
There are two objections to this argument, one obvious and the other subtle. The obvious one is the classic "stopped clock" issue: although the critter's behaviour turned out for the best this one time, that doesn't mean it'll be effective in general. Plantinga's critter is going to spend far too much time running away from cute little bunny rabbits, which is a waste of time and resources. So its beliefs are still soundly beaten by the more reality-based position that tigers are scary and scary things should be run away from.
The subtle objection is to Plantinga's characterisation of the evolutionary position. What he is describing is not belief formation in humans. It is closer to belief formation (or the creation of equivalent neurochemical constructs) in nematode worms. Nematodes have only a few "beliefs", so it is possible for evolution to act on each of the worm's underlying rules-of-thumb in turn.
Humans operate by a different method. We are selected on the basis of our belief creation methodology - the generator of our beliefs - rather than the individual beliefs themselves. From evolution's perspective, this is massively more efficient because, rather than selecting for billions of different rules, you can just select for one generator and let it get on with it. The resulting creature will be able to adjust its beliefs on the fly when it meets new evidence, and will hence be more effective.
It's not immediately obvious whether there exist "pathological generators" that could reliably produce pathological beliefs, but I'd strongly suspect not. However, I'm open to informed argument.
Saturday, March 22, 2008
Roadtrippin' out
I'm gonna be incommunicado for the next few months. I thought I should mention it in advance so that my adoring fans (snigger) don't get too worried for me...
The first reason for this enforced absence is that I'm being shifted to the other end of the country by my company. That's probably not a bad thing - Scotland is a nice place and the work I'll be doing is interesting. It's an opportunity not to be sneezed at.
The second reason is slightly at odds with the first: I've got an exam in mid-April. I'm starting to panic. It's a really evil one, and I'm not feeling at all prepared. Given that I have three weeks left, this would not be a problem but for the third reason.
The third reason is that I'm off on a road-trip with friends from university: from LA to Miami in two weeks. The two weeks immediately prior to my exam, no less. I am going to be soooo jetlagged.
In case anyone's interested (they probably aren't), one of my more techie mates has produced a website for the road-trip. This includes a marked-out route map, so feel free to hurl tomatoes as we pass.
The fourth reason is that I will need a heck of a lot of time to recover from the first three. Jetlag + exam panic + job stress = very little brainpower, let alone inclination to blog. See you in a month or so!
Monday, March 03, 2008
Management skeptic (post #1)
One of the best ways to think of skepticism is as an eternal game of hunt-the-value. When presented with a notion, we ask: is this valuable because it gives accurate predictions, or because it makes us feel good, or because it provides interesting questions, or what? And we don't appreciate being tricked into misclassifying a notion's value.
For example, in discussion with the Buddhist group I've been gatecrashing, one of the points that kept coming up was: does the value of meditation lie in simply sitting and breathing and focusing? Or does the Buddhist cosmology also make a difference? We had a rather lengthy rambling chat about this, after which I concluded that I wasn't going to pin these people down without bringing in a nailgun.
I feel the same about "management models" at the moment. A management model is a chunk of crystallised management philosophy, often described by flow diagrams. The simplest ones are very simple. For example, the management control loop can be represented as a circle with clockwise-pointing arrows round the edge and the words "think", "act", "evaluate", "decide" spaced around it.
In this case, the idea is that you should plan out what you want to do, figure out how to do it, and perform some sort of evaluation of how that approach worked for you. Once the results are in, you decide whether you need to adjust your goals, fine-tune your methodology, or simply accept that you're doing OK until the next round of evaluation comes up.
This is a very simple model, but it makes a great deal of sense. It explains, among other things, why so many New Year's Resolutions fail. People may decide what they want to achieve, and they may even work out how they're going to achieve it, but they rarely set themselves any sort of regular evaluation timetable to see how well they've done so far. Lacking an essential component of this management model, they dismally fail.
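If it helps to see the loop stripped of the management-speak, here's a minimal sketch of it in Python - the function names and the fixed number of cycles are my own illustration, not anything from a formal model:

def control_loop(goal, plan, evaluate, decide, cycles=4):
    # Think -> act -> evaluate -> decide, repeated on a regular review cycle.
    for _ in range(cycles):
        actions = plan(goal)                      # think: work out what to do next
        results = [act() for act in actions]      # act: actually do it
        score = evaluate(goal, results)           # evaluate: how did that go?
        goal, plan = decide(goal, plan, score)    # decide: adjust goals, methods, or neither
    return goal

Run a New Year's resolution through that and the evaluate step is exactly the one most people never schedule.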
However, management models don't stop there. For example, a popular project management model called PRINCE2 actually can't be represented on anything smaller than A3. One of my managers has a placemat with it on, and it looks like Management Control Loop meets Flying Spaghetti Monster.
Given the enormous investment in time and money necessary to train as a PRINCE2 practitioner, the question becomes increasingly important: where does the value lie in this system?
To be continued
Monday, February 25, 2008
The Exegesis Impulse
I know, it sounds like a science fiction story, doesn't it? "Exegesis" is a very cool word in its own right, but when "impulse" enters the game you know the result is going to involve lots of polished metal and/or advanced biotech.
Sorry to disappoint. This is actually going to be a discussion of scripture.
The word "exegesis" originally comes from a Greek word meaning "to lead out" - to garner meaning from a text based on the words within it. That's a fair enough pursuit. It covers pretty much the whole range of Bible commentary - technically, any study of the Bible that doesn't make reference to contextual historical information is exegetical.
I was actually quite shocked to read this definition. For some time now, I've understood the word to have a subtly different meaning. Exegesis, in many contexts, means "filling in the gaps by making shit up". It's in this sense that I'll discuss the word.
The prevalence of exegesis
You don't have to go too far to see that exegesis is not a rare phenomenon. Your average nativity play will have a whole host of details that aren't in the original text - the three kings, for example, as opposed to an unspecified number of wise men.
A more fertile field for this activity is creationism. From a scant two chapters of Genesis, creationists have devised a complete history of events surrounding the Earth's origins. It's got everything: huge sheets of water inexplicably falling from low orbit, a fascinating "hydrological sorting" effect to explain why trilobites always appear lower than turtles in the fossil record, holes in the Earth's crust for the water to hide in afterwards, etc. Some versions even have complex relativistic effects to explain how we can see 10,000,000,000-year-old starlight in a 6,000-year-old universe.
All these variants have two things in common. Firstly, they're completely implausible. To pick on one example, hydrological sorting can't account for the different radiometric signatures of different layers, because lots of swirling water is too crude a tool to distinguish between isotopes.
Secondly, they're nowhere to be found in the Bible. Genesis doesn't say anything about fossils or plate tectonics or relativity. These ideas are exegesis - people see that Genesis doesn't appear to match up with reality, so they make shit up to fill the gaps.
How far back?
Exegesis is also not a recent phenomenon - even as early as the second century AD, people were adding little "finishing touches" to scripture. For example, in the Eastern Orthodox Church, it's strongly believed that Jesus was born in a cave. This is not mentioned anywhere in the scriptures, although it does show up in a couple of Gnostic gospels and in the beliefs of sundry contemporaneous religions about their messiahs.
So how far does the rot go? How can we tell?
One easy way to detect exegesis is what you might call a "comparative biology" of stories. For this I'll call on the writings of a blogrollee of mine, who discusses his deconversion story here. Come back when you've read it, OK?
As this story indicates, a major feature of exegesis is that it gives different results every time you do it. After all, if the meaning of your source was obvious then you wouldn't need to make shit up to fill in the gaps. So all we need to look for is completely different versions of the same story, and we'll know that one or both of the authors is happily exegesising.
A tale of two gospels
Enter Matthew and Luke. It's long been known that these two gospels share a lot of material with Mark and with each other, often word-for-word. The current best guess at which came first is known as the Synoptic Hypothesis. It's generally believed that Mark was first on the scene, and that Matthew and Luke had copies of his work handy as they wrote.
The reason for this is simple. Where Mark tells a simple story or leaves a gap, Matthew and Luke tend to elaborate - and they do so in completely different ways. Consider, for example, the Nativity story. Take the version in Matthew (Matt 1:18-2:23) and the version in Luke (Luke 2:1-40), and compare them. You will find precisely five points of overlap: the names of Joseph, Mary and Jesus, and the towns of Bethlehem and Nazareth.
Everything else - and I mean everything - is completely different. Herod and the slaughter of the innocents are mentioned only in Matthew. Quirinius' census is mentioned only in Luke. Matthew talks of magi. Luke talks of shepherds. Matthew says the family fled to Egypt. Luke says that the family wandered over to Jerusalem. And will you just look at the two different lineages given for Jesus...
With sufficient rhetorical wriggling (exegesis!) it's almost possible to construct a single story that covers all the options. But, to be quite blunt, why the heck would you want to? There's so little overlap between these stories that they might as well be about different people.
To me, the reason for this lack of overlap is fairly simple. The authors of Matthew and Luke would have known that Jesus' parents were called Joseph and Mary. They would have known that Jesus came from Nazareth. And they'd have noticed a prophecy in Isaiah suggesting that the Messiah would have been born in Bethlehem. They took these facts and... exegesised. Matthew used Herod as his bogeyman to drive Joseph and Mary out of Bethlehem, whilst Luke took a completely different but believably bureaucratic option by attempting to link the birth to a Roman census.
Taking this as our working hypothesis, it instantly becomes clear why the historical census of Quirinius appears to have happened about a decade after Herod's death (and hence well after the birth that Matthew describes), and why no contemporary author mentions such a barbarous act as the slaughter of the innocents (despite trumpeting a range of Herod's infamies). It's because Matthew and Luke were making shit up.
How far back? redux
You'll notice that, up until now, I've been treating Mark as a reliable source, and only expressing skepticism about Matthew and Luke. There's a very simple reason for this: we have no earlier Gospels, so we have no way of knowing which aspects of Mark were historical and which were innovation.
Until very recently, I wasn't really bothered by this. Probably a few things in Mark were exaggerated a bit, but I saw no reason to disagree with the core of the story. Now, though, my feelings are different.
I've seen how fast exegesis can proceed, even in this modern information age where debate and criticism thrive. I've seen that exegesis was at least as powerful in ancient times - by most estimates, Matthew and Luke were written no more than a decade or so later than Mark. I've started to explore the motivations for these differences, in particular the three-way struggle between Romans, Jews and Christians that inspired much of the Bible's antisemitism. And I'm troubled.
As far as we can tell, Jesus died no later than 40AD. As far as we can tell, Mark was written no earlier than 65AD. That's a gap of at least 25 years - double the distance between Mark and the other synoptic Gospels.
Given that much time, and that much room for exegesis, how do we know that Mark wasn't... making shit up?
Read the full post
Sorry to disappoint. This is actually going to be a discussion of scripture.
The word "exegesis" originally comes from a Greek word meaning "to lead out" - to garner meaning from a text based on the words within it. That's a fair enough pursuit. It covers pretty much the whole range of Bible commentary - technically, any study of the Bible that doesn't make reference to contextual historical information is exegetical.
I was actually quite shocked to read this definition. For some time now, I've understood the word to have a subtly different meaning. Exegesis, in many contexts, means "filling in the gaps by making shit up". It's in this sense that I'll discuss the word.
The prevalence of exegesis
You don't have to go too far to see that exegesis is not a rare phenomenon. Your average nativity play will have a whole host of details that aren't in the original text - the three kings, for example, as opposed to an unspecified number of wise men.
A more fertile field for this activity is creationism. From a scant two chapters of Genesis, creationists have devised a complete history of events surrounding the Earth's origins. It's got everything: huge sheets of water inexplicably falling from low orbit, a fascinating "hydrological sorting" effect to explain why trilobites always appear lower than turtles in the fossil record, holes in the Earth's crust for the water to hide in afterwards, etc. Some versions even have complex relativistic effects to explain how we can see 10,000,000,000-year-old starlight in a 6,000-year-old universe.
All these variants have two things in common. Firstly, they're completely implausible. To pick on one example, hydrological sorting can't account for the different radiometric signatures of the different layers, because lots of swirling water is far too crude a tool to distinguish between isotopes.
Secondly, they're nowhere to be found in the Bible. Genesis doesn't say anything about fossils or plate tectonics or relativity. These ideas are exegesis - people see that Genesis doesn't appear to match up with reality, so they make shit up to fill the gaps.
How far back?
Exegesis is also not a recent phenomenon - even as early as the second century AD, people were adding little "finishing touches" to scripture. For example, in the Eastern Orthodox Church, it's strongly believed that Jesus was born in a cave. This is not mentioned anywhere in the scriptures, although it does show up in a couple of Gnostic gospels and in the beliefs of sundry contemporaneous religions about their messiahs.
So how far does the rot go? How can we tell?
One easy way to detect exegesis is what you might call a "comparative biology" of stories. For this I'll call on the writings of a blogrollee of mine, who discusses his deconversion story here. Come back when you've read it, OK?
As this story indicates, a major feature of exegesis is that it gives different results every time you do it. After all, if the meaning of your source was obvious then you wouldn't need to make shit up to fill in the gaps. So all we need to look for is completely different versions of the same story, and we'll know that one or both of the authors is happily exegesising.
A tale of two gospels
Enter Matthew and Luke. It's long been known that these two gospels share a lot of material with Mark and with each other, often word-for-word. The puzzle of how they relate is known as the synoptic problem, and the current best guess at the order of composition is Marcan priority: Mark was first on the scene, and Matthew and Luke had copies of his work handy as they wrote.
The reason for this is simple. Where Mark tells a simple story or leaves a gap, Matthew and Luke tend to elaborate - and they do so in completely different ways. Consider, for example, the Nativity story. Take the version in Matthew (Matt 1:18-2:23) and the version in Luke (Luke 2:1-40), and compare them. You will find precisely five points of overlap: the names of Joseph, Mary and Jesus, and the towns of Bethlehem and Nazareth.
Everything else, and I mean everything, is completely different. Herod and the slaughter of the innocents are mentioned only in Matthew. Quirinius' census is mentioned only in Luke. Matthew talks of magi. Luke talks of shepherds. Matthew says the family fled to Egypt. Luke says that the family wandered over to Jerusalem. And will you just look at the two different lineages given for Jesus...
With sufficient rhetorical wriggling (exegesis!) it's almost possible to construct a single story that covers all the options. But, to be quite blunt, why the heck would you want to? There's so little overlap between these stories that they might as well be about different people.
To me, the reason for this lack of overlap is fairly simple. The authors of Matthew and Luke would have known that Jesus' parents were called Joseph and Mary. They would have known that Jesus came from Nazareth. And they'd have noticed a prophecy in Isaiah suggesting that the Messiah would have been born in Bethlehem. They took these facts and... exegesised. Matthew used Herod as his bogeyman to drive Joseph and Mary out of Bethlehem, whilst Luke took a completely different but believably bureaucratic option by attempting to link the birth to a Roman census.
Take this as our working hypothesis and it instantly becomes clear why the historical census of Quirinius turns out to have happened around 6 AD - a decade or so after Herod's death, and hence well after the birth that Matthew describes - and why no contemporary author mentions such a barbarous act as the slaughter of the innocents (despite trumpeting a range of Herod's other infamies). It's because Matthew and Luke were making shit up.
How far back? redux
You'll notice that, up until now, I've been treating Mark as a reliable source, and only expressing skepticism about Matthew and Luke. There's a very simple reason for this: we have no earlier Gospels, so we have no way of knowing which aspects of Mark were historical and which were innovation.
Until very recently, I wasn't really bothered by this. Probably a few things in Mark were exaggerated a bit, but I saw no reason to disagree with the core of the story. Now, though, my feelings are different.
I've seen how fast exegesis can proceed, even in this modern information age where debate and criticism thrive. I've seen that exegesis was at least as powerful in ancient times - by most estimates, Matthew and Luke were written no more than a decade or so later than Mark. I've started to explore the motivations for these differences, in particular the three-way struggle between Romans, Jews and Christians that inspired much of the Bible's antisemitism. And I'm troubled.
As far as we can tell, Jesus died no later than 40 AD. As far as we can tell, Mark was written no earlier than 65 AD. That's a gap of at least 25 years - roughly double the distance between Mark and the other synoptic Gospels.
Given that much time, and that much room for exegesis, how do we know that Mark wasn't... making shit up?
Read the full post