15 December, 2010

Going through the gears

The Antikythera mechanism is the world's oldest known analogue computer (150–100 BC). Various parts of this intricate cog-based instrument were discovered in a shipwreck in 1900–1901, although its purpose, the accurate prediction of astronomical positions, was not determined until many decades later.


From a recent article in Nature:

Two thousand years ago, a Greek mechanic set out to build a machine that would model the workings of the known Universe. The result was a complex clockwork mechanism that displayed the motions of the Sun, Moon and planets on precisely marked dials. By turning a handle, the creator could watch his tiny celestial bodies trace their undulating paths through the sky.

The mechanic's name is now lost. But his machine, dubbed the Antikythera mechanism, is by far the most technologically sophisticated artefact that survives from antiquity. Since a reconstruction of the device hit the headlines in 2006, it has revolutionized ideas about the technology of the ancient world, and has captured the public imagination as the apparent pinnacle of Greek scientific achievement.


Now, some clever big kids have reconstructed it...

With freakin' LEGO!!



03 December, 2010

Evolutionary morality - what's the problem?

Seriously, what is the issue that theists have with a non-divine account of morality? Let's settle this once and for all (he said naively).

Here is what I see as a perfectly rational account of evolutionary morality. The overall explanation is one of empathy and the so-called 'Golden rule' - treat others as you would like others to treat you. It is important to stress that this 'rule' is not the type of law that a theist would claim is prescribed by a deity. That is, we don't have to follow the Golden rule, but most of us choose to because we can see the consequences of what would happen if we did not. For example, I wouldn't like it if someone stole from me, so I don't steal from other people. On the basis of this, I consider stealing to be wrong. When the majority of a population considers stealing to be wrong, that consensus usually becomes what is known as 'morality'. This still allows for a minority of the population to think differently - a thief might think it is OK to steal, or might think it is wrong but steal anyway.

This leads us to moral relativism. More on this later, but first I want to get back to the evolution of morality...

All social animals have to co-operate to survive. This means that the survival of the herd is crucial for the survival of the individual. Any member of a population of social animals that ventures out on its own has a poor chance of survival - be it an isolated deer in a field or a lone mountaineer in the Alps. This is not to say that they cannot survive - some might - but in general there is safety in numbers.

Where'd they go?

Genes promoting co-operation and empathy, therefore, would be selected for in a social population, and through trial and error the 'Golden rule' would unintentionally take hold. A population in which the 'Golden rule' wasn't followed wouldn't sustain itself for very long, as its members would either kill each other or be killed by predators due to a lack of co-operation.

An interesting theory on the roots of morality and the Golden rule comes from game theory. This can be illustrated (in *extremely* simplified form) by a game called the Prisoner's Dilemma. Here is a synopsis of the game from Wikipedia:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated the prisoners, visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (co-operates), the defector goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

So the choice is to defect or co-operate. In a one-off situation, the obvious answer seems to be to defect. But it gets more interesting when, rather than being a one-off, the game is played repeatedly, with previous actions remembered by the participants (as happens in real life). The longer the game goes on, the more likely it is that the participants will develop a strategy in which they actually co-operate most of the time, because whenever one of them defects, the opponent will simply retaliate in the next round.
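To make the one-off case concrete, here is that pay-off structure as a minimal Python sketch. The sentence lengths come from the synopsis quoted above; the names PAYOFF and best_response are simply my own:

```python
# Pay-off matrix for the one-off Prisoner's Dilemma, using the sentences
# from the synopsis above. Values are years in jail, so lower is better.
PAYOFF = {
    ("co-operate", "co-operate"): 0.5,  # both silent: six months each
    ("co-operate", "defect"):     10,   # I stay silent, the other testifies
    ("defect",     "co-operate"): 0,    # I testify, the other stays silent
    ("defect",     "defect"):     5,    # we betray each other
}

def best_response(their_move):
    """The move that minimises my sentence against a fixed opponent move."""
    return min(("co-operate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Whatever the other prisoner does, defecting leaves me better off:
assert best_response("co-operate") == "defect"  # going free beats six months
assert best_response("defect") == "defect"      # five years beats ten
```

That dominance of defection is exactly why the one-off game is bleak - and why the repeated version, where memory and retaliation come into play, is so much more interesting.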

Richard Dawkins has a chapter on the relationship between game theory and morality in The Selfish Gene. In it he explains how the above game can be applied to social animals in a 'you scratch my back, I'll scratch yours' type of way. Animals with the capacity to remember past events will reward those who co-operate and punish those who defect. This leads to a situation in which the majority of a population will be co-operative, as they want to avoid the harsh penalties imposed on those who defect. From this interplay, co-operative behaviour (which loosely means 'being nice') becomes the norm and is considered 'moral'. Of course, if the vast majority of people are co-operative, then isolated instances of defection can be rewarding, but they are ultimately unsustainable as the co-operative masses will retaliate. Importantly, all of this behaviour is still directly related to the Golden rule and is a consequence of the individual's desire to survive.

Evidence that co-operation with occasional retaliation is the optimal social strategy comes from the computer program Tit-for-Tat, which was entered into the iterated Prisoner's Dilemma (IPD) tournament, a competition devised by the political scientist Robert Axelrod to find the strategy that performs best over repeated plays of the dilemma. Many complex computer programs were entered in the tournament, but Tit-for-Tat won using a very simple strategy:

It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to co-operate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness." When the opponent defects, on the next move, the player sometimes co-operates anyway, with a small probability (around 1–5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.
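For the curious, here is roughly what that strategy looks like in Python rather than BASIC - a sketch of the quoted description, not the original program; the function name and the list-of-moves representation are my own:

```python
import random

def tit_for_tat(opponent_history, forgiveness=0.05):
    """Tit-for-tat with forgiveness. opponent_history is a list of the
    opponent's previous moves, most recent last."""
    if not opponent_history:              # first round: always co-operate
        return "co-operate"
    if opponent_history[-1] == "defect":  # they defected last round...
        if random.random() < forgiveness:
            return "co-operate"           # ...occasionally forgive anyway,
                                          # breaking cycles of mutual defection
        return "defect"                   # ...otherwise retaliate in kind
    return "co-operate"                   # they co-operated: reciprocate
```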

The story of Tit-for-Tat is well worth a read.

Based on the Tit-for-Tat program, the overall qualities needed to be socially successful are being nice, forgiving and non-envious, while retaliating against defection when necessary. When you think about it, these qualities are exactly what any successful social animal possesses (there's a toy simulation of these rules in action after the list below):

- Unless provoked, the agent will always co-operate

- If provoked, the agent will retaliate

- The agent is quick to forgive

- The agent must have a good chance of competing against the opponent more than once.

Tit-for-tat, eventually...
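To see those four qualities play out, here is a toy iterated match, reusing the PAYOFF table and tit_for_tat function from the sketches above (again, the names are mine and the round count is arbitrary):

```python
def always_defect(opponent_history):
    return "defect"

def play_match(strategy_a, strategy_b, rounds=100):
    """Play an iterated Prisoner's Dilemma; returns total jail years for
    each player (lower is better). Uses the PAYOFF table and strategy
    functions sketched earlier."""
    history_a, history_b = [], []
    score_a = score_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the opponent's past
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players never defect, so they settle into permanent
# co-operation (50 years each over 100 rounds, at six months per round).
# Against a constant defector, tit-for-tat is exploited exactly once, then
# retaliates for the rest of the match, apart from the occasional
# forgiving move.
print(play_match(tit_for_tat, tit_for_tat))
print(play_match(tit_for_tat, always_defect))
```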

Game theory, therefore, provides an intriguing and plausible explanation of why humans are predominantly moral beings with the capacity to co-operate and forgive. Of course, none of this means that we have to be co-operative, although we have devised laws to punish those who are not. This is because there are no absolute moral laws; instead, there are evolved behaviours that allow populations to reach a sustainable equilibrium.

Which brings us back to moral relativism.

A typical theistic response to moral relativism (the position that moral propositions do not reflect absolute or universal truths) is to propose an emotionally charged scenario and insist that the moral relativist cannot call it absolutely wrong. A common example is torturing or raping children. I have heard many theists claim that I can't say that torturing children is absolutely wrong. That may be true. But I can say it is wrong according to my moral standards: I would not like to be tortured, so I imagine other people don't like to be tortured either (the fact that they are children is irrelevant - torture is torture).

It's a difficult subject to talk about, but one can imagine a hypothetical scenario in which torturing a child is acceptable - if, say, the alternative is that 100 children will be tortured. Horrible, I know, but the point is that emotionally charged scenarios are not a fool-proof way of tackling moral relativism. They have little substance beyond clouding the mind of an undecided observer in a debate.

However, if theists insist on using these emotionally charged scenarios (and they do), they can easily be turned against the notion of divinely-inspired morality - a nice example of Tit-for-Tat in action! If morality is simply a reflection of the nature of a deity, then it is dependent on something, and so it cannot be absolute. It is, in fact, completely arbitrary, as the deity could have any number of whims or characteristics, which would automatically become entrenched in morality (see the Euthyphro dilemma). For example, if God (being omnipotent) decided tomorrow that child torture was morally good, would all Christians embrace it and actively engage in it? I very much doubt it. Why? Because morality exists apart from God.

Human morality came about through the evolution of social behaviour. That theists want to ascribe something as wonderful as morality to an unproven deity is just another example of how religion tries to fix what ain't broke.
