Friday, 30 April 2010

incomplete lists

On one level, I hate to be provided with incomplete information, but on another I view it as a challenge to come up with the missing data. An example will explain what I mean: a few years ago, I read that seven metals were known in antiquity, but the author neglected to enumerate what these metals were. Consequently, I set about working out the list for myself (using the internet is too easy and is certainly no challenge).

Gold was probably known to prehistoric societies, because it occurs in its native form (i.e., as pure metal) in streams and rivers; copper also occurs in native form and with tin is used to make the alloy bronze; silver is referred to in the biblical book of Genesis; lead was produced in Egypt as early as 3500 BC; and without iron there would have been no Iron Age. But that is only six metals. What was the seventh? Cinnabar (mercury sulphide) has also been known since the Stone Age, although it was probably used principally as a colouring agent in pottery, but mercury is easily extracted by heating, and it also occasionally occurs in native form. However, zinc is another candidate: brass (an alloy of copper and zinc) was produced by the Romans for use in coinage and jewellery, having first been made accidentally, like bronze, at least two millennia earlier. Unfortunately, that brings the total to eight, so it is likely that the now forgotten author got it wrong. Alternatively, perhaps mercury was discounted because it is liquid under normal conditions and is therefore not a ‘real’ metal.

At this point, you may be wondering what all this has to do with my recent visit to Beijing. Here is the explanation: while checking out possible eating options in the neighbourhood of our hotel, we came across a Szechuanese restaurant. We were looking for somewhere to sample traditional Pekingese dumplings, so I picked up a promotional leaflet for future reference. Inside, I came across the following statement: “Sichuan cookery is one of the eight well-known Chinese cuisines”.

Now you see the problem. What are the other seven? The leaflet didn’t say. As with the metals of antiquity, it is easy to add the first few to the list: Cantonese, Shanghainese, Pekingese; but then it becomes harder. The four listed so far are distinct in style and ingredients, and other regional cuisines are too similar to these main archetypes to be considered completely separate. For example, there are many Chiu Chow restaurants in Hong Kong, but this style, which originates in the city of Shantou (formerly known as Swatow) in eastern Guangdong province, is only subtly different to Cantonese. I’ve eaten in a Hunanese restaurant in Hong Kong, but on the evidence of that brief encounter the food is similar to that of Sichuan. The Mongolian hot pot is a popular winter dish in Hong Kong, but I’ve no idea what else originates in that region, and Hainan chicken is the only dish I’ve sampled that comes from the large island off China’s south coast.

Anyway, I’ve managed to come up with eight, although it is unlikely that my eight are the ones the anonymous author of the Szechuanese restaurant’s leaflet had in mind. Perhaps someone with a better knowledge of Chinese food than mine can come up with a more authoritative list.

Thursday, 29 April 2010

social contract

As someone from a semi-rural background, I’m a big fan of urban parks. Some, like Hyde Park in London and Central Park in New York, have an international reputation, but the really interesting ones are those that are unknown outside their immediate neighbourhood and are there simply as an amenity for the local population. We came across one such local park on our recent trip to Beijing, directly across the road from our hotel.

Side Park is a small oasis of calm amid the bustling chaos of the surrounding streets, with a huge wrought iron gate that appears to be permanently closed. Entry and egress are through a much smaller gate on the side. I half wondered if this was a throwback to the days of imperial China, when the largest and most magnificent gate or door was reserved for the exclusive use of the emperor, but the last Qing emperor is unlikely to have lived long enough to have seen this park, let alone this splendid gateway, so I dismissed the thought.

Just inside the gate, my attention was drawn to a large notice board to which was pinned an oddly intriguing poster with text in English and simplified Chinese. I reproduce it here in full so that you can judge for yourself whether my description is appropriate:
Love China, love Beijing.
Live in harmony with all.
Maintain public order.
Work well and hard.
Be honest and trustworthy.
Be industrious and thrifty.
Obey the laws and safeguard public order.
Be ready to support the just cause.
Encourage healthy trends.
Plant trees and flowers.
Protect the environment.
Care for one another.
Protect public property.
Safeguard national cultural relics.
Advocate science.
Respect teachers and education.
Always seek self-improvement.
Cherish the young and show respect to the elderly and each other.
Show mutual respect to soldiers and other public servants.
Assist the disabled and the poor.
Strive to improve social conditions.
Maintain your health, observe birth control practices.
Work for the good of society.
Be gentle and polite to all others.
Be broad-minded and open-minded.
Take pleasure in helping others.

The first thing to note is that this is not a contract but a set of rules, exhortations to behave in a manner that will be beneficial to the collective and a total denial of individuality. A contract is a two-way process, yet nowhere on this notice is there anything that could be construed as telling the citizen what they will get in return. Perhaps this personal benefit is implicitly understood by the expected readership.

However, my comments here should not be understood as an implied criticism of China or its social and political policies, because there are few if any rules on this list against which one could make a serious case. On the other hand, were Boris Johnson in London or Michael Bloomberg in New York to attempt to introduce such a ‘social contract’ in their respective cities, they would probably be laughed out of office. But that is because we in the West are accustomed to thinking that liberal democracy is the optimum form of government, and that the Chinese system, which is in any case poorly understood, still evolving and nowhere near as monolithic as it is usually perceived to be, is altogether too authoritarian.

Given that the planet is in a parlous state precisely because of our espousal of the individual as the cornerstone of a successful society, it should be obvious that the paradigm for any future form of social organization that can deal with the mess will be a lot closer to the collectivism of the Chinese model than to the outmoded and discredited capitalist system, which once brought prosperity and important social progress but has now led us by the nose to the brink of ruin.

Wednesday, 28 April 2010

extramural activities

Before I visit a place for the first time, I usually construct a mental map. This isn’t a conscious process, merely the synthesis of a series of impressions gained from books, magazines, newspapers, television—the usual sources—and information picked up as a result of visits to other places that I have deemed to be similar. I don’t know why I bother, because I’m invariably wrong. When I came to Hong Kong in 1974, I arrived thinking that it was full of seedy bars, that drugs were dealt on every street corner, and that corruption was rampant. It is true that in 1974 Hong Kong still had the air of a third world city, but then I had no previous experience of such places. Tripoli, where I worked in 1968, was little more than a bustling provincial town, despite being a capital city, and any poverty was well out of sight.

However, my mistakes are harmless enough. Most of the time. Before we went to Beijing last Thursday, I looked up a street map and a map of the subway system, on the basis of which I made a mental comparison with London and the famous ‘map’ of its underground system. Unfortunately, the problem with a cartogram, which is what both diagrams are, is that it is not drawn to scale. And the street map didn’t show the subway stations, so there was no way of gauging the distance between adjacent stations on the network. Given that I walked from Tate Modern to Piccadilly Circus the last time I was in London, I assumed that the same would be possible in Beijing, and that all the city’s attractions would be within walking distance of each other. I was disabused of this notion when I discovered that we would need to take a taxi from our hotel, located in what might loosely be termed ‘central’ Beijing, to the nearest subway station.

To tell the truth, we had only a vague idea of how to get anywhere, but at least we were at a subway station. Solution? Ask a policeman. Works every time. So we were shown which station to go to for the Great Wall, the first destination on our ‘planned’ itinerary. Once there, we were advised to catch the 919 bus, the stop for which was easily located. Too easily. We discovered two 919 buses at different stops. I don’t know whether we asked the wrong questions, but we certainly got on the wrong 919. We had been told that it was quite a long ride (and only six yuan), so we settled down to watch the city go by as we stopped and started along a busy toll road. Eventually, we turned off the main road into a small industrial town north of Beijing, where we were informed that the bus had reached the end of its journey.

There seemed little point in returning to Beijing, partly because we were clearly going in the right direction, so we looked for a railway station. It wasn’t difficult, but we learned that we’d have to wait two hours for the next train. We were also able to learn the town’s name, which was painted in large characters above the station’s shabby portico. I will gloss over our two hours in Nankou, which was worth seeing but not worth going to see, to misappropriate Doctor Johnson’s verdict on the altogether less interesting attraction of the Giant’s Causeway, and move on to the train journey. However, I should note in passing that the first photograph in yesterday's post was taken there.

Looking north from Nankou railway station.

The section of the Great Wall that we were heading for is located behind the ridge of mountains in the preceding photograph, and the railway line sweeps around in a wide curve to enter a steep-sided ravine. The first thing that we noticed was the almost complete absence of greenery on the hillsides. However, many bushes covered with white flowers were scattered across the slopes, which made an eye-catching sight. After a short while, a section of the Wall came into view, and it was indeed spectacular. There was a station about half a mile further on, but the train didn’t stop!

We came to, and passed, another section of the Great Wall, less spectacular, but there was no station. Finally, we stopped in a station that my wife was certain was the right place. Fortunately, the train doors were locked, so we were prevented from making what would have been a serious error of judgement. After a few minutes, the train started to move again, back the way we had come. At least it appeared to be retracing its tracks. In fact, we were now on a different line, and we eventually arrived at another station, where everyone got off. Clearly, it was the end of the line.

A sign proclaimed “800 metres to the Great Wall”. It seemed further, but we finally arrived, only about four hours later than we’d originally anticipated. The only disappointment was that our late arrival meant that we couldn’t walk far enough along the Wall to escape the crowds, because the early departure of the last train back to Beijing left us with little more than two hours to explore. This was sufficient to get a feel for the place, but we weren’t able to venture as far as we would have done had we had more time, and I’ve a feeling we’ll be back. I was especially surprised to see old ladies resolutely climbing up some of the steepest parts of the Wall (I think that those accompanying and helping them hadn’t considered that it is harder to descend than ascend, particularly if you aren’t very agile). Perhaps they wanted to buy a tacky tee-shirt proclaiming “I climbed up the Great Wall” from one of the souvenir shops when they returned to their starting point. As experienced rock climbers (real climbing involves the use of your hands), we didn’t.

There were a couple of interesting observations to make on the return journey. First, we were on the right side of the train to see that at one point the Great Wall ran the full length of the skyline and was hugely impressive with the setting sun behind it. Second, I thought I saw the stationmaster at the first station we came to stand to attention as we passed through. I looked out for this at the next station, and sure enough there he was, resplendent in his green uniform and peaked cap, with his green and red flags in one hand, standing to attention in a little painted square box on the platform. This was repeated at every station, even though we didn’t stop at any of them. Was this an indication that the Chinese are a regimented people? I reserve judgement until my next post, which will appear at this address tomorrow.

Crowds throng the Great Wall.

But some sections were less crowded.

Tuesday, 27 April 2010

northern capital

This is not a discussion about collateralized debt obligations, credit default swaps or other larcenous forms of financial legerdemain. In fact, ‘northern capital’ is the literal translation of Beijing, where my wife and I have spent the last five days, just as ‘southern capital’ is the literal translation of Nanjing, the Chinese capital during several important periods in Chinese history, most recently immediately prior to the Communist victory in the Chinese Civil War in 1949.

Over the next few days, I will be explaining why we almost didn’t get to see the Great Wall, reporting on ‘The Social Contract of Beijing’, telling you about the two essential food items that any self-respecting visitor to Beijing should seek out, commenting on the capital’s transport system (including an explanation for the odd fact that versions of the top-of-the-range models from Mercedes, BMW and Audi for the China market are six inches longer than the equivalent models sold elsewhere), noting in passing that you should keep off the grass in Beijing’s parks, and remarking on a few other curious observations that I made along the way.

Meanwhile, here are a few photographs to whet your appetite:

What is happening here? Answers on a postcard to the usual address.

Great Wall.

Another great piece of wall.

The obvious question is this: in terrain this rugged, do you really need to build such a bloody big wall just to keep people out?

Doorway, Forbidden City.

Ceramic wall plaque with dragons about 1.4 metres high, Forbidden City.

Monday, 19 April 2010

relatively incorrect

I acquired almost all my knowledge of English grammar between 1956 and 1960. The first of these years was also my final year at junior school, and as learning environments go it was exceptionally brutal. My teacher was an old battleaxe called Miss Lewis who routinely caned boys in front of the rest of the class for what I only later deduced was the ‘crime’ of not being very smart (stupidity was frequently interpreted as laziness in those days). I may well have been the laziest boy in the class, but I was clever enough to be able to avoid such punishment. Our instruction manual was First Aid in English, which introduced me to nouns, verbs, adjectives, adverbs and other parts of speech.

In 1957, I moved to the local grammar school, where my English teacher for the first three years was Mrs Wilson, or ‘Fisheyes’ as she was affectionately known by her pupils (she was also my form mistress for two of those years). Apart from covering a wide range of literature, including poetry and drama, her lessons also introduced me to adjectival clauses, adverbial clauses and relative clauses (more on that, eventually). Fisheyes was one of only two teachers whom I look back on with anything approaching respect.

Fast forward to 1987, and I’m leaving my home town in the UK to return to Hong Kong. As luck would have it, the second teacher whom I remember with respect, Mrs Hulton, was travelling on the same train down to London. I can’t recall all the details of our conversation during that four-hour journey, but I do remember her telling me that English grammar had not been taught in British schools for the previous two decades, which may explain the following sentence. It appeared in a news item on the BBC website on the final day of 2009, and it describes the latest recipients of ‘honours’ in the UK:
[Patrick] Stewart found worldwide fame in the TV series Star Trek: The Next Generation that ran from the late 1980s to [the] mid-1990s.
This sentence is typical of many to be found nowadays almost everywhere. If you attended a British state school during the 1970s or 1980s, or you learned English as a second language for business, you will probably think that there is nothing remarkable about the sentence, but think again. Like the creator of this sentence, you clearly have little idea what a relative clause is, so here is a brief primer on the handling of relative clauses:

There are two types of relative clause, as illustrated by the following examples:
The beer that I drank today was purchased last week.

The beer, which I drank today, was purchased last week.

The first is an example of a defining relative clause; it presumes that the reader has no previous knowledge of the beer, or other noun being thus qualified, and it also presumes that there is another category of beer, perhaps the beer that was drunk yesterday or the day before, that we have no knowledge of. It therefore defines the category ‘beer’ as the beer that was drunk today. The second is an example of a non-defining relative clause; such a construction is used when the beer, or other noun being qualified, refers to knowledge presumed to be already in the reader’s possession. No definition is thus required. You should note that the commas are an essential part of this construction.

Analysing the sample BBC sentence, we see that the writer has unintentionally employed a defining relative clause, which implies that there is another TV series, of which we have no knowledge, with the title Star Trek: The Next Generation. This of course is nonsense, but the confusion could have been avoided by writing the sentence with a non-defining clause:
[Patrick] Stewart found worldwide fame in the TV series Star Trek: The Next Generation, which ran from the late 1980s to [the] mid-1990s.
It may seem pedantic to insist on such corrections, and it’s true that the reader can apprehend the intended meaning here, after a moment’s reflection, without ambiguity, but there are many other instances where it simply isn’t clear what the writer intended. Take this example, from a recent news report on an imminent supernova in our galactic backyard:
The blast from the thermonuclear explosion could strip away the Earth’s ozone layer that keeps out deadly space radiation, scientists said.
You may understand relative clauses but be relatively unversed in science, in which case the unavoidable assumption must be that there is more than one ozone layer. This would avoid the ambiguity:
The blast from the thermonuclear explosion could strip away the Earth’s ozone layer, which keeps out deadly space radiation, scientists said.
At least it would if modern writers understood the subtleties of our language. Although some older writers tended to use ‘which’ for both types of clause, they scrupulously inserted the essential comma(s) when employing a non-defining clause. Now, it seems, ‘that’ has usurped ‘which’ as the universal word to use in these circumstances, and the comma appears to have become an optional extra, making a nonsense of many sentences. Given that ‘that’ can also be used as a conjunction, an adverb, a demonstrative pronoun and a demonstrative adjective, this change is inefficient at best and ambiguous in most contexts.

I’m neither a linguist nor a professional grammarian. I’m merely a concerned amateur who cares about precision in language usage, and it’s the rank amateurs we need to worry about, those who don’t think carefully about the meaning of what they write. Because if changes in grammar are being driven by people who have little or no idea of what grammar is, then the clarity and accuracy of written English cannot fail to be compromised. Is this a good thing? I think not.

Friday, 16 April 2010

technology and its impact

In the modern world, we tend to take technology for granted and are never aware of its social impact unless we have experience of the world before a new invention becomes widely used. This has been the case since the beginning of the Industrial Revolution and the development of the steam engine, which accelerated the trend away from the countryside into new industrial towns and cities and fuelled a huge increase in population wherever this invention was used to power new factories.

However, it was only with the advent of the railways that significant changes in the daily lives of ordinary people began to kick in. Before the 1830s, few travelled further from where they were born than the distance they could walk in half a day. Only those rich enough to own a horse or afford stagecoach travel ventured further afield. Although early third-class travel must have been uncomfortable, the railways made visiting the next town not only possible but also affordable, a business opportunity that Thomas Cook was quick to exploit, although later patrons of the agency that bears his name no longer need to be temperance campaigners.

The second influence of the railways was on time. There is no point in trying to run a rail passenger service if you can’t tell people when each train will depart, but in the pre-railway age every town and village had its own time: whatever the local church clock said. The solution? A standard time for the whole country. Whether the trains ran according to the timetable was a different problem.

Nowadays, trains are part of the landscape, so we are unlikely to reflect on what life might have been like without them. And the entire modern world is run on timetables, from work shifts to television schedules, from the opening hours of shops and supermarkets to when the local bar has its ‘happy hour’. If we think at all, we think only that it has always been this way as we hurry from one place to another.

In the twentieth century, the automobile changed the game again, although its effects were different in Europe and in the open expanses of the United States. When Henry Ford introduced his Model T in 1908, it was one of the first cars to be mass produced, and it was deliberately marketed at a price that made it easily affordable for the average American, which paved the way for the automobile to become an icon of the American way of life. In Europe, meanwhile, the motor car remained a plaything of the well-to-do until after the Second World War. Only a handful of people now alive will remember what a world without cars was like, and cars are now regarded as an essential, even if it is necessary to take out a bank loan to buy one. However, an unfortunate side-effect of the mass ownership of cars was a steady decline in the railways and the eventual closure of branch lines that were no longer profitable.

Radio was another key technology of the twentieth century, although its impact was relatively modest compared with the later development of television. Both brought mass entertainment into the home, effectively killing off the self-devised entertainment that had been the norm in the nineteenth century. The result has been that the audience is no longer required to think. Sitting passively in front of a TV set is a fatally easy way to waste a lot of time, and you have to be at least 60 years old to remember what it was like when television was the latest novelty, so you are not likely to reflect on how it affects your life.

Like television, personal computers were available but not widely used for about ten years. By today’s standards, the PCs of the 1980s had very limited capabilities and required some technical knowledge to use, but they were a huge step up from a typewriter for producing written material. I ditched my typewriter for a PC in 1984 for this reason, but it was another seventeen years before I felt the need to upgrade to take advantage of the PC’s newer capabilities, like organizing music, video and photographs. If you were born in the 1980s or later, you probably can’t imagine how those primitive early machines could be used to do useful work, just as you probably never think about the excitement generated by the tiny black and white televisions of the 1950s or the enthusiasm created by Stephenson’s Rocket and other early steam locomotives.

And then we have the mobile phone. I remember covering the then-new invention in the 1980s as a journalist for an electronics magazine, and nobody thought that mobile phones would have the impact they ultimately have had, or that they would become so ubiquitous—unsurprising, really, given that those early models were the size of a house brick and twice as heavy. Like the technologies already discussed, mobile phones allow the user to do things they previously weren’t able to, but, unlike the earlier inventions, the real social impact has not been on what people can do but on how they behave—on what is considered acceptable behaviour.

Put bluntly, people are ruder and more impatient than they were a quarter of a century ago. And mobile phones are a key factor in this change. How often have you been engaged in conversation when the phone in the other person’s pocket starts ringing? And what happens next? Your companion answers the phone, and your conversation is terminated, perhaps permanently. I regard this as impolite. I sometimes call my wife in her office and receive no answer. I know that she is in her office, but she never picks up the phone if she is talking to someone. This is unusual nowadays, when people will even respond to emails while someone they are supposed to be talking to is in the room, but I think that it is the proper thing to do in this situation.

And then there are the people who think it acceptable to use a mobile phone while driving. This is more than rude; it is inconsiderate and highly dangerous. Yet people do it because they can. Whether they should is not considered. Now almost everyone has a mobile phone, yet few stop to think how important these devices have become in their lives. The behaviour that I have described has become an automatic response, like Pavlov’s dogs salivating at the ringing of a bell.

I’m old enough to remember a time when car ownership was far from universal, when the principal form of home entertainment was the radio, when computers were large and remote, tended by men in white coats in air-conditioned rooms, so I tend to view new technologies with a degree of scepticism, but if for you those technologies have always been around, you will probably wonder why I’m even bothering to discuss them. However, you might want to think about how the next big thing will affect your life when it comes along.

Monday, 12 April 2010

democratic deficit

If you were to ask the average elector in a modern liberal democracy to define the term ‘democracy’, it’s a fairly safe bet that you’d hear something along the lines of ‘one man, one vote’. Clearly, this commonly used phrase must date back to an era when women were denied a vote, but in the interests of accuracy I’m not going to modify it merely to pander to the dictates of so-called political correctness.

However, those who parrot such phrases rarely stop to ask whether all votes are equal, although in every system I’ve looked at it is not the case that all votes carry the same weight. A classic example from recent history is the US presidential election of 2000. Although Al Gore gained more individual votes than George W. Bush, the exigencies of the electoral college system meant that the latter ‘won’. Al Gore’s votes were in the ‘wrong’ places.

The United Kingdom operates a similarly anomalous system. When the country’s forthcoming general election was announced, all the talk was of ministers fanning out across the country to canvass for votes. I will stake my house that no political heavyweights will set foot in the constituency where I’m registered to vote. It’s the largest (by area) in England, and you could stick a blue rosette on a dancing pig and it would still be elected as a Conservative MP. The current MP is stepping down, ostensibly for health reasons, but in fact, like many of his colleagues, he was caught with his fingers in the till. However, I still cannot see anything other than a Conservative victory there.

This scenario is replicated across at least two-thirds of the UK’s constituencies: most people vote the way they’ve always voted, meaning that the results in these constituencies are a foregone conclusion. The election will be decided by so-called ‘swing voters’ in only a handful of constituencies. All other votes will have a value of zero.

And then there is the question of what proportion of the electorate actually vote for the winning party. In the 1997 general election, the Labour Party achieved a landslide victory—with a huge majority in Parliament—with only 43.2 percent of the vote, a result of the massively unfair ‘first past the post’ system. Margaret Thatcher had a much smaller majority in 1979 despite being supported by 43.9 percent of those who bothered to vote. And when voter turnout is factored into the equation, the winning party was supported by 33.4 and 30.8 percent of the electorate in 1979 and 1997, respectively. Not exactly democratic, is it?
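For anyone who wants to check the arithmetic, the share of the whole electorate is simply the winner’s vote share multiplied by the turnout. Here is a minimal sketch in Python; the turnout figures (roughly 76.0 percent in 1979 and 71.3 percent in 1997) are my own assumption, as they are not quoted above:

```python
def electorate_share(vote_share_pct: float, turnout_pct: float) -> float:
    """Percentage of the whole electorate that backed the winning party:
    the vote share among those who actually voted, scaled by turnout."""
    return vote_share_pct * turnout_pct / 100.0

# Vote shares (43.9, 43.2) are from the text; turnouts are assumed.
print(round(electorate_share(43.9, 76.0), 1))  # 1979: 33.4
print(round(electorate_share(43.2, 71.3), 1))  # 1997: 30.8
```

Reversing the calculation recovers the implied turnouts, so the figures quoted above are at least internally consistent.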

Nevertheless, despite this tenuous level of support, the winning party proceeds to claim that it has been given a ‘mandate’ to carry out the measures it proposed in its manifesto. The voters are ignored for the next four or five years while being treated to the truly disgusting spectacle of professional politicians claiming to be better informed about education than teachers, to know more about the delivery of healthcare than doctors, and to have a better understanding of law and order issues than policemen.

So will I be voting on 6th May, given the apparent futility of the exercise? Yes, I will. I regard people who can’t be bothered to vote with contempt and have only two words for them: Emily Davison. Not bothering to vote is not an abstention, although abstention is an honourable course of action given that one’s vote is regarded with such arrogance by politicians. If you want to abstain, you must still visit your local polling station and in the official jargon ‘spoil your ballot’. I will probably write the following on my ballot this year (knowing from my time as a candidate in local elections that all the candidates have to see any ballot that may be considered invalid):

“I couldn’t find anyone on this list who didn’t make me want to throw up.”

Thursday, 8 April 2010

rocket science

When I was growing up in the 1950s, the most difficult job that one could imagine was that of a brain surgeon. This gave rise to a popular expression of incredulity directed at someone’s inability to understand a relatively mundane concept:

“It’s not brain surgery!”

However, sometime during the 1970s, probably in response to what would have been seen as the marvel of space exploration, brain surgery went out of fashion as a basis for comparison. In its place, the new remonstration:

“It’s not rocket science!”

The term ‘rocket scientist’ even made it into Shania Twain’s 1998 hit That Don’t Impress Me Much as a proxy for a smart-arse. But did you notice the ‘dumbing down’ that this change in popular usage implies?

Ask yourself: what, exactly, is rocket science? For every action, there is an equal and opposite reaction. That’s it. The whole of rocket science can be expressed in ten words.

It’s not brain surgery.

Wednesday, 7 April 2010

revenge is sour

People will believe anything, if the climate for propagation of that belief is favourable. For example, The Protocols of the Elders of Zion, which first appeared in Russia in 1903 and purports to outline a Jewish plan for world domination, is actually a hodge-podge of earlier political satire that was regurgitated by antisemitic groups in imperial Russia to justify pogroms against the Jews. The Protocols are now recognized in the West as a forgery, except perhaps by extreme right-wing groups, yet copies of this document are widely available in Arab cities throughout the Middle East, where it is understood to be genuine by most ordinary people.

My source for this statement is an old school friend who has worked in many parts of the region. He also informed me recently that these same ordinary people, fed as they are by virulently anti-Israeli media and whipped up by demagogues like Mahmoud Ahmadinejad of Iran, attribute the 2001 terrorist attacks on the World Trade Center and the Pentagon to Mossad. It is easy to laugh at the credulity of such people and to think that this proposition is so ridiculous as to be unworthy of serious consideration. However, if you were to look at this situation from the point of view of someone who is consistently given only one side of an extremely complex story, who has in any case a long-standing prejudice against Jews in general and the state of Israel in particular, you would probably feel that such an idea is not merely possible; it is also credible. And, whatever the story, you would have no way of discerning whether what you were reading, watching or listening to was the unembellished truth or a fabrication, or a mixture of the two.

News media can, of course, be routinely disbelieved, as they frequently were in the former Eastern bloc, but communications technology has moved on since then, and the Soviet era now seems like a bygone age. It should no longer be possible to manipulate or suppress specific items of news in such a systematic way. However, there are times when no manipulation is required, merely the subtle juxtaposition of otherwise unrelated pieces of information. For example, it will not have escaped the notice of Palestinians that Israel gained considerably from the al-Qaeda attacks on the USA, which enabled it to prosecute a militaristic agenda in the occupied territories and southern Lebanon under the guise of participating in the so-called ‘War on Terror’, although it should be said in mitigation that this period coincided with an extended wave of suicide bombings in Israel.

However, Israel has always acted in what it perceives to be its own interests, regardless of international opinion. This history of unilateral action starts with the activities of Irgun prior to the formal declaration of the state of Israel, which included the bombing of the King David Hotel in Jerusalem in 1946 and the Deir Yassin (a Palestinian village that had signed a non-aggression pact with its Jewish neighbours) massacre in 1948. The ideology of this terrorist organization, that only active retaliation will deter the Arabs and that only armed force can ensure a Jewish state, continues to inform the foreign policy of Israel’s Likud Party, which has been in power for most of the past three decades, to this day. And one of the men responsible for the hotel bombing, Menachem Begin, later became Israeli prime minister and a Nobel peace laureate.

If anyone doubts how the various armed Jewish groups active in Palestine between the collapse of the Ottoman Empire and the formation of the state of Israel are viewed today, consider the case of the assassination of Lord Moyne, the British minister resident in the Middle East, in Cairo in 1944. This murder was carried out by two members of Lehi, an Irgun splinter group known to the British as the Stern Gang, who were duly tried and executed in 1945. However, in 1975, in an exchange involving prisoners captured in Sinai and Gaza during the Yom Kippur War, their bodies were returned to Israel, where they were given state funerals with full military honours.

In fact, the history of Israel includes a range of events that support the contention that the country cares not for international opinion or the niceties of international law if it regards a contemplated action as either its ‘right’ or in its own interests. From the abduction of Adolf Eichmann from Argentina in 1960 to the rescue of hostages in Entebbe in 1976, from the bombing of the Osiraq nuclear reactor in Iraq in 1981 to the kidnapping of Mordechai Vanunu in Italy in 1986, Israel has shown little regard for the sovereignty of other nations while fiercely defending its own sovereignty. Of course, any reasonable person will feel that the first three, at least, of these actions were fully justified.

It is an unfortunate facet of the Israel/Palestine dilemma that Arab hostility towards Israel is so deeply entrenched; it has not gone away simply because both Egypt and Jordan have signed peace treaties with Israel. It is far more likely that military restraint by Arab states is a direct result of the expected response from Israel, which can be crudely summed up as “bloody my nose and I’ll tear your fucking legs off!” As a military strategy this obviously works extremely well, but it cannot be the basis for a permanent peace settlement.

Given this background, an obvious question presents itself: is peace possible? It is clear that the Palestinians are negotiating from a position of extreme weakness, so it is likely that they will accept almost anything. The question is therefore what concessions Israel is prepared to make. On current evidence, the answer is very little. It is, for example, totally unprepared to talk to Hamas, given its often stated policy of not talking to terrorists. But this raises an important question: what, precisely, is a terrorist?

For example, Margaret Thatcher notoriously branded Nelson Mandela a ‘terrorist’, yet Mandela’s personal qualities were crucial in enabling a peaceful end to apartheid, although it should be admitted that he remains a hate figure, even today, among the Afrikaner community. This blinkered mindset also drove Thatcher’s policy of not allowing terrorists ‘the oxygen of publicity’, which was clearly a failure: subsequent events in Northern Ireland confirm this view. The so-called ‘peace process’ there became possible only because Thatcher’s successors were prepared to talk to Sinn Féin, widely regarded as the political arm of the Provisional IRA and a proscribed organization during Thatcher’s time as prime minister. And although the Northern Ireland peace process is obviously flawed, because dissident republicans continue to commit atrocities, this is on a slowly diminishing scale as these groups become increasingly marginalized.

So is there a lesson here for Israel? The firing of rockets by Islamist militants from Gaza is unacceptable by any reasonable yardstick, but this does not justify the continuing military reprisals against the entire Gaza Strip, which are heavy-handed and indiscriminate: the full-scale invasion in January 2009 killed a disproportionate number of innocent civilians, including children—casualties that could easily have been avoided had the Israeli army not been so trigger-happy in its operations.

Even more questionable has been the deliberate policy of assassinating men whom Israel has branded ‘terrorist leaders’. The murder of Hamas leader Mahmoud al-Mabhouh in Dubai in February is merely the latest in a long line of such actions. There can be no doubt that this was a thoroughly unpleasant individual; and the official Israeli line, that there is no evidence its agents were involved, may well be correct; nevertheless, these are weasel words, the kind of dissembling that the country routinely indulges in when quizzed about whether or not it possesses nuclear weapons. However, whether Mossad was or was not responsible is irrelevant when set alongside the approving reaction to the killing by Israeli politicians such as opposition leader Tzipi Livni. It should be noted that no judicial process is ever involved in these kinds of operations, yet even the most despicable terrorist or child murderer should be accorded the forensic niceties of an open trial, with guilt or innocence established by the age-old process of examining and weighing up the evidence.

And then there is the problem of Israeli settlements on land that the country has occupied illegally since 1967 and that the Palestinians hope will become the basis of an independent Palestine. This is a critical issue; there are currently 400,000 Jewish settlers in the West Bank and more than a quarter of a million in East Jerusalem, although the majority of new building in the former is to the west of Israel’s ‘separation barrier’, probably because there is an expectation that settlements east of the barrier may one day be abandoned as part of any peace agreement. However, mention of Jewish settlements west of the barrier glosses over the obvious fact that in erecting its security wall, Israel has effectively annexed land that is, or should be, part of a future Palestinian state.

The other problem with all these settlements is the belief among orthodox Jews that this territory forms part of the ancient biblical kingdoms of Israel and Judaea. The northern kingdom of Israel was obliterated by the Assyrians in 722 BC, the probable basis of the legend of the lost tribes of Israel, while Judaea itself disappeared from the historical record, and the Bible, with the Babylonian conquest of 586 BC. But the fate of these ancient kingdoms is no basis on which to found geopolitical territorial claims in the twenty-first century AD. Were it otherwise, one could formulate a reasonable case for the return of al-Andalus (Spain) to Morocco on the grounds that it was occupied by Islam for almost eight centuries before the final acts of the reconquista in the fifteenth century. And Islamic peoples had occupied Palestine prior to the twentieth century for as long as the Jews prior to the destruction of Jerusalem by the Romans in AD 70.

However, while the continuing blockade of the Gaza Strip and regular military and police actions against Palestinians in both Gaza and the West Bank—the routine demolition of Palestinian homes being particularly egregious—are indefensible, a case can be made for Israel to retain control of the Golan Heights, at least for the foreseeable future. It should be remembered that Syria used this strategic location to bombard northern Israel in the early stages of the 1967 war, and given that there is no peace treaty between Israel and Syria, relinquishing this area would at present be a serious mistake.

If Syria were the only danger to the state of Israel, then some rapprochement between the two countries really ought to be possible. Unfortunately, there is a much more serious and implacable enemy: Hizballah, the ‘Party of God’. It could be argued that Israel made a rod for its own back with regard to this organization, which was formed originally by Iranian Revolutionary Guards as part of Iran’s policy of exporting its Islamic revolution and was a direct response to Israel’s invasion of southern Lebanon in 1982. The rationale behind this invasion was the destruction of the Palestine Liberation Organization (PLO), which was then active in Lebanon, although, in retrospect, at least part of the motivation appears to have been Israeli prime minister Menachem Begin’s personal animosity towards PLO leader Yasser Arafat, an animosity that Israel continued until the latter’s death in 2004 despite Arafat’s abandonment of violence and recognition of Israel’s right to exist in peace in 1993.

The mistake that Israel continues to make with regard to militant organizations is to think that they are all the same. This is seen in the ultimate excuse for the 1982 invasion—the attempted assassination of the Israeli ambassador in London. This attack was not carried out by the PLO, which had been observing a negotiated ceasefire for some time prior to the invasion, but by an organization that was opposed to the PLO and headed by the notorious terrorist Abu Nidal. Although Israel inflicted huge losses on its adversaries in the ensuing war and will have thought itself successful, the irony is that it only succeeded in replacing a militant organization that was pursuing legitimate goals through illegitimate means with one whose ultimate goal is the destruction of the state of Israel. And one whose fighters proved a match for the Israeli Defence Forces in the brief war of 2006. A rematch is likely sooner rather than later.

This points to the single most dangerous factor in the entire Middle East stand-off: given its relatively overt sponsorship of Hizballah, the likelihood that it is, despite protestations, attempting to develop nuclear weapons, and its leadership by a man who shows all the symptoms of being clinically insane but is probably a shrewd and devious operator, Iran cannot be ignored. Certainly, no one in Israel is making that mistake. On the contrary, the worry is that Israel, given its previous record in such matters, will decide to take unilateral action. There are already calls by some in government and the military for pre-emptive strikes against Iranian nuclear facilities. There are echoes here of the 1962 Cuban missile crisis, in which some American military leaders, perhaps afraid to be thought weak, urged on President Kennedy the most dangerous course of action: ‘surgical’ strikes on the missile sites. Similar action by Israel would also be the most dangerous option. The consequences are impossible to predict but are likely to be catastrophic for Israel, the region and, quite possibly, the world.

Meanwhile, the question that it is impossible to avoid asking is this: when will the Israeli government realize that, despite being enshrined in scripture, the doctrine of an eye for an eye does not work? That terrorism cannot be defeated by force of arms alone? That cruelty begets cruelty? Retaliation, especially if it is over the top, may provide a small measure of satisfaction in the short term, but like a tube of Pringles potato chips the taste does not linger. The flavour of these potato chips is engineered to disappear in the mouth almost instantly, which encourages the consumer to eat the next chip, and the next chip, until the tube is empty and the promise remains unfulfilled. It is like this too with retaliation: heavier and heavier reprisals for atrocities committed against your side never provide the degree of satisfaction that had been anticipated. The desire to seek revenge is ultimately self-defeating; revenge, despite the popular saying to the contrary, is sour.

Monday, 5 April 2010


When we say that something will probably happen, we may not be aware that this is a vague statement lacking in mathematical precision. In other words, how probable is ‘probable’? Clearly, the minimum standard required for this adjective to be applicable is that an event be more likely to happen than not to happen, and this may be the criterion that most people instinctively use, but a good case can be made for reserving ‘probably’ for events that have a far greater likelihood of occurring than 50 percent.

In fact, most people have only the sketchiest notion of probability: for example, anyone who gambles in a casino will probably lose money. And everyone will lose eventually if they play long enough. Yet gamblers tend to think that a long losing streak will be compensated for by a long winning streak. Some may even believe that they have a ‘system’, especially for roulette, wantonly disregarding the built-in bias in favour of the house, which for roulette in an American casino gives a long-term return of $94.74 for every $100 gambled. It is worth pointing out, although it tends to be forgotten nowadays, that the original purpose of a casino, notably the one in Monte Carlo, was not as a place where one could win money, even if some did, but as a place where one could be seen to lose money (if a player lost, say, $20,000, it demonstrated to those who witnessed the loss that here was someone who could afford to lose such an amount—an early if extreme example of conspicuous consumption).
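That long-term return of $94.74 follows directly from the layout of an American wheel: 38 pockets (the numbers 1 to 36 plus 0 and 00), with a single-number bet paying 35 to 1, so a winning $1 stake comes back as $36 in total. A minimal sketch of the arithmetic in Python (the function name is mine, for illustration):

```python
POCKETS = 38       # American wheel: 1-36 plus 0 and 00
TOTAL_PAYOUT = 36  # a winning $1 single-number bet returns $36 (35-to-1 plus stake)

def expected_return(stake):
    """Long-run amount returned per `stake` dollars wagered on a single number."""
    return stake * TOTAL_PAYOUT / POCKETS

print(round(expected_return(100), 2))  # 94.74
```

The house edge is simply the shortfall: 100 − 94.74 = 5.26 percent on every dollar wagered, whatever ‘system’ the player believes in.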

Probability theory throws up some interesting cases, particularly with regard to inherently improbable events. For example, the universe consists predominantly of hydrogen and helium, heavier elements making up a very small fraction of the total mass. There is only one place where these heavier elements are formed: in the cores of stars, red giants being a notable example. Anyway, to cut a complicated technical story down to more manageable proportions, carbon is formed when three helium nuclei collide within a timescale that is measurable in tiny fractions of a nanosecond (one nanosecond is one thousand millionth of a second). If the third nucleus arrives after the window of opportunity has closed, then the result of the fusion of the first two nuclei, a highly unstable isotope of beryllium, will have decayed again to helium. However, despite the intrinsic unlikelihood of such an event, the opportunities for successful three-way fusion are so vast that given sufficient time carbon will form in significant quantities. It is thus possible to say with complete confidence that every carbon atom in every protein molecule in my body, or your body, and every carbon atom in the billions of tons of carbon dioxide that have been carelessly added to our atmosphere, there to contribute to the greenhouse effect and global warming, was formed in the centre of a star in the improbable process just described.

Mention of unstable (i.e. radioactive) isotopes recalls an interesting application of probability theory involving quantum mechanics. Every radioactive isotope has a fundamental property known as its half-life. This is the time taken for half the atoms in a given sample to decay, either to another radioactive isotope or to a stable one. In other words, it is possible to predict, with great statistical precision, how many atoms will decay in a given time; however—and this is the odd part—it is not possible to tap a given atom on the shoulder and say: “okay, you’re next.” This is the uncertainty that is built into quantum mechanics.
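The predictable aggregate behaviour can be captured in a single formula: after a time t, the fraction of a sample still undecayed is (1/2) raised to the power t divided by the half-life. A short Python illustration, using carbon-14 (half-life about 5,730 years) as a familiar example (the function name is mine):

```python
def fraction_remaining(t, half_life):
    """Fraction of a radioactive sample still undecayed after time t.

    The aggregate is exactly predictable even though the moment at which
    any individual atom decays is genuinely random."""
    return 0.5 ** (t / half_life)

# Carbon-14, half-life roughly 5,730 years:
print(fraction_remaining(5730, 5730))   # 0.5  — half left after one half-life
print(fraction_remaining(11460, 5730))  # 0.25 — a quarter left after two
```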

Back in the real world, a good test of how well you understand probability is the following problem, which elicited a great deal of discussion on BBC Radio 4’s flagship news and current affairs programme Today a few years ago and as far as I can recall was never satisfactorily explained.

Imagine that you are a participant in a television game show in which you are shown three locked boxes. You are told that one of the three contains $10,000, while the other two are empty. You are allowed to select one box, and if you choose correctly the money is yours. Let us refer to your selection as box #1. Now, before you are allowed to open your chosen box, the show’s compere, who knows in advance which box contains the money, opens one of the other boxes (call this box #2) to show that it is empty. Now comes the offer: do you want to stick with box #1, or would you prefer to change your mind and choose box #3?

At this time, I do not propose to provide a detailed analysis of the problem, but I will point out in passing that if you know what you’re doing, you will now choose box #3. Why?
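For the sceptical, the claim is easy to check empirically without any analysis at all. The following Python sketch (box labels are zero-based for convenience, and all names are mine) simulates many rounds of the game under both strategies:

```python
import random

def play(switch, rng):
    """One round of the three-box game; returns True if the player wins."""
    boxes = [0, 1, 2]
    prize = rng.randrange(3)  # the box holding the $10,000
    pick = rng.randrange(3)   # the contestant's initial choice
    # The compere, who knows where the money is, opens an empty box
    # that is neither the prize box nor the contestant's pick.
    opened = next(b for b in boxes if b != prize and b != pick)
    if switch:
        # Move to the one remaining unopened box.
        pick = next(b for b in boxes if b != pick and b != opened)
    return pick == prize

rng = random.Random(0)  # fixed seed so the run is reproducible
trials = 100_000
wins_if_switch = sum(play(True, rng) for _ in range(trials)) / trials
wins_if_stick = sum(play(False, rng) for _ in range(trials)) / trials
print(wins_if_switch, wins_if_stick)  # roughly 0.667 versus 0.333
```

Run it and the switching strategy wins about twice as often as sticking, which is why the knowledgeable contestant takes box #3.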