Thursday, April 15, 2021

The Winner Takes it All - Williams 2013-2018

Sung to the tune of ABBA's classic:

I don't wanna talk
About races we've gone through
Though it's hurting me
Now it's Autocourse history
I've played all my stops
And that's what you've done too
Nothing more to say
No more Quali compounds to play

The winner takes it all
Paddy Lowe standing small
Beside the victory
That's his destiny

I was in Lawrence Stroll’s arms
Thinking I belonged there
I figured it made sense
Building me a fence
Building me a DIL
Thinking I'd be strong there
But I was a fool
Playing by Jakob’s rules

The FIA may throw the dice
Their minds as cold as ice
And someone way down here
Loses one point held dear
The winner takes it all
Ed Wood has a trackday fall
It's simple and it's plain
Why should I lodge a protest within the specified timeframe?

But tell me Valtteri, does Toto kiss
Like Johnny used to kiss you?
Does it feel the same
When Tony Ross calls your name?
Somewhere deep inside
You must know we miss you
But what can I say?
Claire must be obeyed

Charlie Whiting will decide
The likes of me abide
Spectators of the show
Correlation with tunnel figures staying low
The Grand Prix is on again
A lover or a friend
A big thing or a small
The winner takes it all

I don't wanna talk to the media
If it makes you feel sad
And I understand
You've come to shake Felipe’s hand
I apologize
If it makes you feel bad
Seeing me so tense
No downforce, but plenty of gearbox compliance
But you see
The winner takes it all
The winner takes it all…

Saturday, March 13, 2021

Brands Hatch F1 Tyre Test 1985

A beautiful summer's day in 1985, the greatest circuit in Britain, and the greatest F1 cars and drivers of all time, preparing for the European Grand Prix. No zoom lens, so there's something of a polite distance maintained between photographer and subject.

Marc Surer passes through the cutting at Pilgrim's Drop.
Senna turning into Hawthorn Bend. Don't turn left.
Keke in the FW10. The FW10B would only make its debut at the European Grand Prix itself.
Prost coasting through the apex of Hawthorn. Don't want to overheat the left-front.
Prost gently easing through the dappled light into Stirling's. Careful with the right-front temperature.
Prost carefully accelerating out of Stirling's. Don't want to overheat those rears.
Stefan Johansson's Ferrari at Hawthorn Bend. Note how far forward the drivers still are in 1985.
Bernard Asset-style atmospheric shot of Clearways.
Ayrton Senna da Silva turning into Graham Hill Bend.
Boutsen's Arrows up the hill towards Druids.
Stefan Bellof (Tyrrell), shortly before his death at Spa in a Porsche 956.
Toleman plunging down from Westfield through Dingle Dell.

Tuesday, June 23, 2020

Thruxton British F3 - March 1988

Pitlane walkabout: Jyrki Jarvilehto shows off his blisters

Exhaust-blown rear wing

Ample opportunity for future Ferrari engineers to 'top-up' the fuel tank

Derek Warwick. For sure.

Steve Rider, looking haunted after being boot-lidded down the A303 by Des Lynam.

Interesting front anti-roll bar design. Note for aerodynamicists: those spiral things are called 'springs'.

And we're off! Martin Donnelly is on pole, but makes a poor start. In yellow is Philippe Favre, behind is Lehto, and to the outside is Damon Hill.

Donnelly and Favre are wheel-to-wheel through Allard, with Hill trying the wide-line, and Lehto sneaking through on the inside (hidden by Eddie Irvine).

By the end of lap 1, Favre has run wide into the complex, and both of the Cellnet cars of Hill and Donnelly have missed a gear. Lehto leads, and holds his lead to the finish.

Sunday, June 07, 2020

Brands Hatch F3000 1988

Johnny Herbert leads out of Paddock Hill Bend from Donnelly, Martini, Grouillard, Foitek, Moreno, Blundell, Alesi, Langes, Ferte and the rest.
Alesi exits stage left at Druids.
Martin Donnelly pressures Herbert in the early stages.
A battle for 3rd develops between Martini, Foitek, Moreno, Grouillard and Blundell.
Foitek creams Moreno into the tyre barrier at the top of Paddock.
Moreno's Reynard 88D sustains heavy damage.
At the restart, Donnelly leads away this time, followed by Martini, Herbert, Foitek, Grouillard, Blundell and the rest.
Catastrophe at Pilgrim's Drop.
Foitek is still in the car adjacent to the far barrier, and Grouillard is being freed from his GDBA Lola.
Medical assistance arrives for Johnny Herbert.
Andy Wallace is carried away, accompanied by GEM team-mate Gary Evans.
Herbert is worked on for something in the region of 40 minutes, during which time the marshals fight a running battle to repel spectators trying to crawl up through the undergrowth on the right of the track.
Martin Donnelly wins the re-started race.

Saturday, January 18, 2020

Trump and opinion dynamics

The January 2020 issue of PhysicsWorld includes an article on 'The physics of public opinion', written by Rachel Brazil. The article focuses on the claim made by the French physicist Serge Galam that Donald Trump's victory in the 2016 US Presidential election can be explained using his model of 'minority opinion spreading'.

Galam's work models how opinions evolve in a network of human agents. The idea is that an individual's beliefs can be changed by social interactions with people holding other beliefs. However, Galam also represents the fact that some people can be more stubborn in retaining their initial beliefs. In particular, he models the way in which an initial minority opinion can eventually become the majority opinion, if the proportion of stubborn people is larger in the initial minority group than it is in the initial majority group:

'An opinion that starts in the minority can quickly spread as long as it is above a base threshold...As few as 2% more stubborn agents on one side puts the tipping point at a very low value of around 17%, which leads to the unfortunate conclusion that to win a public debate, what matters is not convincing a majority of people from the start, but finding a way to increase the proportion of stubborn agents on your side.'
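Galam's published models are considerably more elaborate than this, but the basic mechanism of local-majority updating with stubborn agents can be sketched in a few lines of code. Everything below is an illustrative assumption of mine (the group size, the stubborn fractions, the update rule); it makes no attempt to reproduce the 17% tipping point quoted above.

```python
import random

def simulate(n_agents=9999, minority_share=0.20,
             stubborn_minority=0.06, stubborn_majority=0.04,
             group_size=3, rounds=30, seed=1):
    """Toy local-majority dynamics: opinion 1 is the initial minority, 0 the
    majority. Stubborn agents never change; everyone else adopts the local
    majority of their randomly-assigned discussion group each round."""
    rng = random.Random(seed)
    agents = []
    for i in range(n_agents):
        opinion = 1 if i < n_agents * minority_share else 0
        p_stubborn = stubborn_minority if opinion == 1 else stubborn_majority
        agents.append([opinion, rng.random() < p_stubborn])

    history = []
    for _ in range(rounds):
        rng.shuffle(agents)
        for g in range(0, n_agents - group_size + 1, group_size):
            group = agents[g:g + group_size]
            majority = 1 if 2 * sum(a[0] for a in group) > group_size else 0
            for a in group:
                if not a[1]:                 # only non-stubborn agents update
                    a[0] = majority
        history.append(sum(a[0] for a in agents) / n_agents)
    return history

print(simulate())    # fraction holding the initially-minority opinion, per round
```

The point of interest is how the long-run share responds to the two stubborn fractions, rather than the absolute numbers, which depend entirely on the assumptions above.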

This is interesting work, but more problematic is Galam's attempt to use a variation on this theme to explain Trump's 2016 victory:

'In the case of the 2016 US presidential elections, Galam says the prevailing factor was peoples’ "frozen prejudices". He argues that Trump’s outrageous statements, though initially seen as repellent by most voters, managed to activate their hidden or unconscious prejudices. First, many Trump supporters shifted to Hillary Clinton, rejecting his statements with great outrage, leading to a decrease in support. But the initial outrage led to more public debates with an automatic increase in the number of local ties. At those points, “it’s like flipping a coin, but with a coin biased along the leading prejudice”, Galam says. Then many voters started to swing in favour of Trump.'

There are two problems with this. The first is the explanatory dependence upon the theoretical concept of 'frozen' or 'unconscious' prejudices. This sounds almost like a retreat into the mysterious world of Freudian psychoanalysis, with its array of unverifiable unconscious motives.

Moreover, the notion that there is some form of latent fascism or Nazism within society, just waiting for an opportunity to gain ascendancy, plays the role of a tribal myth within Progressive politics. The Enlightened Ones urge continual vigilance against this ever-present threat, and such appeals perform the function of enhancing group cohesion. Galam's work therefore falls into an extant genre of Progressive literature, which has flourished in the wake of the Brexit referendum and Trump's election victory. 

The second problem with Galam's proposal is that there is already an adequate and much simpler explanation: 

Trump won the 2016 election because a critical proportion of blue-collar voters rationally assessed their changing economic circumstances and prospects, and concluded that their interests were better represented by Trump than the Democrats.

Unfortunately, to accept this explanation would require those within Progressive politics to accept their own culpability in bringing Trump to power. Better, perhaps, to believe in the existence of sinister, hidden prejudices.

Indeed, if there's one phenomenon which does call out for an explanation within social physics, it's the very spread of Progressive politics and political-correctness in recent decades. Back in the 1980s, politically-correct opinions were minority opinions held only by vocal, stubborn and fanatical groups. Today, these ideas have spread to become mainstream within the professional middle-classes. 

Sadly, one suspects that those working in academia are prejudiced about the nature of prejudice.

Thursday, January 09, 2020

Formula One and Machine Learning

The power of Machine Learning, based upon artificial neural networks, has become all-too-obvious over the past decade. Early this year, it was announced in Nature that a Deep Learning algorithm, developed by Google Health, is better than human experts at identifying breast cancer in mammograms.

Naturally, there's also been much chatter in recent years about the potential use of such Artificial Intelligence (AI) in Formula One. For example, one can find Jonathan Noble's article, 'Why Artificial Intelligence could be F1's next big thing', on Autosport.com, apparently suggesting that AI could be used by both trackside engineers and those in race-support roles:

"Getting through the mountains of data generated in Formula 1 can be a 'needle in a haystack' process for teams searching for performance. There's technology on the way that could make a huge difference...The AI being talked about right now will be used at first to help better manage access to data. The computers will learn to know which data needs to be saved; which data needs to be prioritised so there can be rapid access to it. Plus it needs to be one step ahead and bring up data that is needed next."

Perhaps the idea is that if AI can spot patterns in a bunch of tits, then it could also be used by a bunch of tits to spot patterns in data. 

From inside the teams, the arrival of the Machine Learning advocates sometimes resembles a flock of seagulls swooping noisily from one landfill site to another, seeking easy pickings from the technically clueless decision-makers, squawking and chirping happily about 'convolutional neural networks', and 'GPUs running in the cloud' as they descend upon unwitting mechanical engineers and aerodynamicists.

Perhaps Formula One needs to scrutinise carefully some of the claims made by the Machine Learning (ML) community, particularly vis-a-vis its capabilities in the fields of forecasting and data-mining. A recent paper published in PLoS ONE by Makridakis, Spiliotis and Assimakopoulos compares the performance of ML algorithms against standard statistical methods for making predictions from time-series data. The aggregated errors were quantified using two measures: the symmetric Mean Absolute Percentage Error (sMAPE) and the Mean Absolute Scaled Error (MASE). Unfortunately for the Machine Learning advocates, the statistical methods had the lowest error levels, as represented in the chart below.
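For reference, the two error measures are easy to compute. A minimal sketch, following the standard definitions used in the forecasting literature, with the naive benchmark's seasonal period as a parameter and a made-up series as the usage example:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE, on the 0-200 scale used in the M-competitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(200.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

def mase(insample, y_true, y_pred, m=1):
    """Mean Absolute Scaled Error: out-of-sample MAE divided by the in-sample
    MAE of the (seasonal) naive forecast with period m."""
    insample = np.asarray(insample, dtype=float)
    scale = np.mean(np.abs(insample[m:] - insample[:-m]))
    errors = np.abs(np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float))
    return np.mean(errors) / scale

history = [112, 118, 132, 129, 121, 135, 148, 148]   # invented series, last two held out
print(smape(history[-2:], [150, 145]), mase(history[:-2], history[-2:], [150, 145]))
```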


So, good news if you're an F1 engineer: you can cling onto your Excel spreadsheets for at least a little longer. Or better still, learn to use the statistical package R.

As Makridakis et al justifiably assert, "the importance of objectively evaluating the relative performance of the ML methods in forecasting is obvious but has not been achieved so far raising questions about their practical value to improve forecasting accuracy and advance the field of forecasting. Simply being new, or based on AI, is not enough to persuade users of their practical advantages over alternative methods."

Also provided in the paper by Makridakis et al is a useful table (below), which can be used as a guide to distinguish those applications where Machine Learning is demonstrably powerful (games, image and speech recognition), from those applications where it isn't (currently) the right tool for the job.


Machine Learning advocates can be expected to thrive in an environment lacking technically knowledgeable management. Coincidentally, there are two articles on Autosport.com extolling the virtues of Artificial Intelligence, the aforementioned 'Why Artificial Intelligence could be F1's next big thing', and 'The dangerous AI tool that could dominate F1'. In the latter, Serguei Beloussov, boss of Acronis, asserts: "In F1, there are ultimately three areas that you can apply machine learning - one is the race strategy, [the others are claimed to be logistics/operations and design]. There is some advantage, but not so much - because a race is a highly random activity, so it is relatively difficult to make a sustainable project because there is a lot of randomisation."

Now, Serguei is right about the difficulty of applying Machine Learning to race strategy, but he's completely misunderstood the principal reason. The problem is not the random element, and indeed, the random element (safety-cars and suchlike) is not the factor which dominates the logic of F1 race-strategy. 

To the disappointment of many, F1 race-strategy is a perturbation of deterministic logic: when teams devise their race strategies, they do the deterministic calculations involving tyre-compound offsets, tyre-degradation, pit-losses, fuel-consumption and so forth, and then apply perturbations to the timing of pitstops based on game-theoretic considerations of undercuts and overcuts, and the importance of hedging against (or catching) safety-car and virtual safety-car periods. There's a random element, but it's not the dominant element.
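To make that deterministic core concrete, here is a minimal sketch of a one-stop optimisation with linear tyre degradation, a compound offset, a linear fuel-weight effect and a fixed pit loss. All the parameter values are invented for illustration, and real strategy software obviously models far more than this:

```python
def one_stop_race_time(pit_lap, race_laps=52, base_lap=90.0,
                       deg_a=0.08, deg_b=0.05,       # tyre degradation, sec/lap, per compound
                       offset_b=0.4,                 # compound B assumed 0.4 sec/lap slower when fresh
                       fuel_per_lap=1.5,             # kg/lap
                       fuel_effect=0.033,            # sec/kg
                       pit_loss=21.0):
    """Total race time for a one-stop race (compound A then B), pitting at the
    end of `pit_lap`, with linear degradation and a linear fuel-weight penalty."""
    total = pit_loss
    fuel = race_laps * fuel_per_lap                  # no refuelling: full tank at the start
    for lap in range(1, race_laps + 1):
        on_a = lap <= pit_lap
        tyre_age = lap if on_a else lap - pit_lap
        total += (base_lap
                  + (0.0 if on_a else offset_b)
                  + (deg_a if on_a else deg_b) * (tyre_age - 1)
                  + fuel_effect * fuel)
        fuel -= fuel_per_lap
    return total

best_lap = min(range(1, 52), key=one_stop_race_time)   # deterministic optimum pit lap
print(best_lap, round(one_stop_race_time(best_lap), 2))
```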

No, what makes Machine Learning so difficult (at present) to apply to F1 race strategy is the fact that F1 is a game in which the rules are constantly changing. The sporting and technical regulations change from one year to the next, altering the rules on starting tyre-sets, the number of tyre compounds available or mandated, the difficulty of overtaking, whether refuelling is permitted, and so on; moreover, the performance characteristics of the tyres change from one race to the next, and the compounds and constructions change from one year to the next. It's much more difficult to train an artificial neural network when the past data is, in this way, essentially a collection of similar, but different, games.

For example, you might try and estimate the overtaking difficulty at Paul Ricard based upon one year of data, without taking into account the fact that there was a headwind down the Mistral on that particular weekend, or the fact that the DRS effect was much stronger/weaker under the set of aero regulations in force that year; there might even have been a higher level of tyre degradation that year, which can have a disproportionate effect on traction, reducing the overtaking difficulty more than the pure lap-time deficit alone would indicate.

So, whilst it's difficult to see a long-term future in which all aspects of F1 activity are not influenced by artificial intelligence, in the short and medium-term, perhaps it's best to employ standard engineering practice: look at the nature of the problem, and choose the right tool for the job, rather than grabbing a sexy new tool and trying to find an application for it.  

Sunday, July 14, 2019

The problem of refuelling

FIA President Jean Todt has floated the idea of re-introducing refuelling to Formula 1, largely it seems to reduce the running-weight of the cars. According to Andrew Benson's BBC report, "Todt said he had been warned that the reintroduction of refuelling would likely lead to teams' race strategies being too similar to each other but countered that that was a product of there being too much simulation in F1."

Quite. So let's have a quick look at why refuelling doesn't necessarily make race-strategy more interesting. Suppose we have the following fairly typical parameter values:

Tyre-deg: 0.05 sec/lap
Fuel-effect: 0.033 sec/kg
Fuel-consumption: 1.5 kg/lap

Suppose that the deterministically optimal first pit-stop would be after 20 laps. That requires a fuel-load of 30kg. Suppose that the second stint would also require a fuel-load of 30kg. The lap-time penalty for 30kg of fuel would be 30*0.033 = 1 sec.

After 20 laps, the cumulative tyre degradation would be 20*0.05 = 1 sec.

Suppose two cars with identical zero-fuel-load lap-times are racing each other. As we approach the first pit-stop window, there would be no benefit of the car behind trying to pit first to undercut the car ahead: the penalty of taking on 30kg of fuel cancels out the advantage of switching to a fresh set of tyres. The conventional undercut logic, which permits overtaking between equally matched cars, would be lost.

With this set of parameter values, there would also be no benefit to the car behind running longer: the cumulative tyre deg cancels out the benefit of lapping on almost empty fuel tanks. However, one might suspect that this result only follows from choosing a special set of parameter values, so let's assume that the tyre-deg is lower, at 0.03 sec/lap. Surely this would tip the balance in favour of running longer?

Well, suppose the car behind is planning to run 3 laps further in the race, to lap 23. That requires a starting fuel-load of 23*1.5 = 34.5kg. That's an extra 4.5kg which the car behind needs to carry around for the first 20 laps of the race. With a fuel-effect of 0.033 sec/kg, that's a lap-time penalty of 0.033*4.5 = 0.15 secs on every lap of the first 20 laps. So, if we assume both cars are running in free air, the car behind would have lost 3 secs of cumulative time after 20 laps (assuming the extra weight didn't also increase the tyre-deg).

With the assumed tyre-deg of 0.03 secs/lap, the cumulative deg after 20 laps would be 0.6 secs. Which is less than the 1 sec penalty for taking on 30kg of fuel. Hence, the car running to lap 23 would make up 0.4 secs/lap on the car which has pitted on lap 20. Over 3 laps, that would be 1.2 secs.

Would the car behind be able to overcut the car ahead? Unfortunately not. That extra fuel-weight has already cost it 3 secs over the first 20 laps of the race. The 1.2 secs regained still leaves a net loss of 1.8 secs when it finally pits on lap 23.

Obviously, if both cars were running in traffic, and the car ahead was unable to exploit its superior potential lap-time over the first 20 laps, then the overcut might still work.

In summary, however, we can see why refuelling pushes strategies towards the deterministic optima: if you try to overcut an opponent, the greater fuel-weight necessary for that is counter-productive; conversely, if there's any benefit to be had from undercutting an opponent, that benefit would be even greater in the absence of refuelling. 
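The back-of-envelope arithmetic above is easy to check in a few lines. The sketch below uses the same illustrative parameter values and, like the text, treats the rival's 30kg fuel penalty as a constant ~1 sec over the three extra laps:

```python
FUEL_EFFECT = 0.033    # sec per kg
FUEL_PER_LAP = 1.5     # kg per lap
PIT_LAP = 20           # the rival's (deterministically optimal) stop lap

def overcut_gain(extra_laps, deg_per_lap):
    """Net time gained (negative = lost) by fuelling long enough to run
    `extra_laps` beyond the rival's stop, instead of pitting with him."""
    extra_fuel = extra_laps * FUEL_PER_LAP                     # 4.5 kg for 3 laps
    cost_of_carrying = FUEL_EFFECT * extra_fuel * PIT_LAP      # ~3.0 s lost over the first 20 laps
    rival_fuel_penalty = FUEL_EFFECT * PIT_LAP * FUEL_PER_LAP  # ~1.0 s/lap from his fresh 30 kg
    tyre_penalty = deg_per_lap * PIT_LAP                       # our worn-tyre deficit at lap 20
    gain_per_lap = rival_fuel_penalty - tyre_penalty           # old tyres, light car vs new tyres, heavy car
    return gain_per_lap * extra_laps - cost_of_carrying

print(overcut_gain(3, 0.05))   # ~ -3.0 s: no benefit at 0.05 s/lap of degradation
print(overcut_gain(3, 0.03))   # ~ -1.8 s: still a net loss at 0.03 s/lap
```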

Saturday, June 29, 2019

Flocculation and the Payne effect

With the quality of racing in contemporary Formula One reaching something of a nadir, some parties have sought a quick-fix by proposing that Pirelli revert to their thicker-gauge 2018 tread design. With tyres back on the agenda, then, perhaps it's a good moment to look a little bit deeper at the composition of a racing tyre tread. 

A modern pneumatic tyre-tread contains rubber. Rubber itself consists of long chains of polymer molecules. The chains are mutually entangled, and in its raw form rubber is a highly viscous liquid. It is not, however, elastic. It only becomes a viscoelastic solid when it undergoes 'vulcanization', whereby sulphur crosslinks are created between the molecular chains. This transforms the already entangled collection of polymer chains into a 3-dimensional network.

So far, so familiar. However, a modern tyre-tread is a rubber composite. In addition to the network of vulcanized rubber, it contains a network of 'filler' particles. These filler particles are not just dispersed as isolated particles in the rubber matrix; rather, they agglomerate into their own 3-dimensional network. (The term for this agglomeration is 'flocculation').

The rubber network and filler network interpenetrate each other. Hence, the elasticity, viscosity, and ultimately the frictional grip of a tyre-tread are attributable to three sources: (i) the cross-links and friction between rubber polymer molecules; (ii) the bonds between filler particles; and (iii) the bonds between filler particles and the rubber molecules.


'High-performance' racing tyres, of course, are something of a world of their own, and tend to use carbon-black as a filler in high concentrations because it increases hysteresis (i.e., viscous dissipation) and grip. One can find statements in the academic literature such as the following:

"For a typical rubber compound, roughly half of the energy dissipation during cyclic deformation can be ascribed to the agglomerated filler, the rest coming from [rubber polymer] chain ends and internal friction [of polymer network chains]," (Ulmer, Hergenrother and Lawson, 1988, 'Hysteresis Contributions in Carbon Black-filled rubbers containing conventional and tin end-modified polymers').

Given the higher concentration of filler in a racing tyre, one might expect more than half of the energy dissipation, and therefore the frictional grip, to come from the agglomerated filler.

And now comes the interesting bit. Filled rubber compounds suffer from the 'Payne effect'. This is typically defined by the variation in both the storage modulus, and the loss modulus (or tan-delta) of the tyre when it is subjected to a strain-sweep under cyclic loading conditions. (The storage modulus is related to the elasticity or stiffness of the material, and the loss modulus is related to the viscous dissipation).


Typical graphs, such as that above, show that the storage modulus decreases as the amplitude of the strain is increased, whilst the loss-modulus or tan-delta reaches a peak at strains of 5-10%.
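One common phenomenological description of these strain-sweep curves is the Kraus model, in which the storage modulus falls, and the loss modulus peaks, around a critical strain amplitude. The sketch below merely reproduces the qualitative shapes with invented parameter values; it is not a fit to any real compound:

```python
import numpy as np

def kraus_storage(strain, g_0=12.0, g_inf=4.0, strain_c=0.07, m=0.6):
    """Kraus-type storage modulus G'(strain): falls from g_0 to g_inf as the
    filler network is progressively broken up (units and values arbitrary)."""
    x = (strain / strain_c) ** (2.0 * m)
    return g_inf + (g_0 - g_inf) / (1.0 + x)

def kraus_loss(strain, g2_max=1.8, g2_inf=0.6, strain_c=0.07, m=0.6):
    """Kraus-type loss modulus G''(strain): peaks near the critical strain,
    where filler-filler bonds are broken and re-formed during each cycle."""
    x = (strain / strain_c) ** m
    return g2_inf + 2.0 * (g2_max - g2_inf) * x / (1.0 + x ** 2)

strains = np.logspace(-4, 0, 100)                 # strain amplitudes from 0.01% to 100%
tan_delta = kraus_loss(strains) / kraus_storage(strains)
print(strains[np.argmax(kraus_loss(strains))])    # loss peak sits at the critical strain (~7%)
```

The Kraus model is purely phenomenological, but it captures the standard picture of the storage modulus collapsing, and the loss modulus peaking, as the filler network is broken up.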

The Payne effect is typically attributed to the breaking of bonds between filler particles, as Pirelli World Superbike engineer Fabio Meni attests:

'Riders constantly talk about how their tires "take a step down" after a few laps, so I asked Meni what physical process in the rubber is responsible for this perceived drop in properties. "This has a name", he began. "It is called the Payne effect."

"In the compound," Meni continued, "the carbon-black particles are not present as separate entities but exist as aggregates - clusters of particles. As the tire is put into service, the high strains to which it is subjected have the effect of breaking up these aggregates over time, and this alters the rubber's properties."

Meni went on to say that it is not so much that the tire loses grip as it feels different to the rider. This is 'the step' that the rider feels after a few laps, after which the tire's properties may change little through the rest of the race.

In fact, I would quibble with this slightly: in the world of Formula One tyres, the way in which a tyre is treated at the beginning of a stint will often determine the subsequent degradation slope. If you abuse a tyre, it remembers it, and punishes you. Damage to the filler network appears to reduce grip, not merely soften a tyre.

One intriguing twist to the Payne effect is that there may be circumstances in which it is possible for a tyre to recover from damage to the filler network: "Much of the softening remains when the amplitude [of the strain in a cyclic strain-sweep] is reduced back to small values and the original modulus is recovered only after a period of heating at temperatures of the order of 100 degrees C or higher," (A.N.Gent, 'Engineering with Rubber', 2012, p115).


So heat is capable of annealing a damaged filler network, restoring the bonds between filler particles. The anneal temperature quoted here by Gent is not dissimilar to the maximum tyre-blanket temperatures currently permitted by Pirelli in Formula One...

As a final flourish on this subject, for those who like a bit of scanning electron microscopy, images of filler-reinforced rubber that has been in a state of slip across a rough surface reveal a modified 'dead' surface layer, about a micron thick, in which the carbon-black filler particles are absent (image below from work conducted by Marc Masen of Imperial College).


To paraphrase Homer Simpson, "Here's to tyres: the cause of, and solution to, all of Formula One's problems."

Wednesday, February 06, 2019

Assessing the nuclear winter hypothesis

After falling into disrepute for some years, the nuclear winter hypothesis has enjoyed something of a renaissance over the past decade. In the January 2010 edition of Scientific American, two of the principal proponents of the hypothesis, Alan Robock and Owen Brian Toon, published an article summarising recent work. This article focused on the hypothetical case of a regional nuclear war between India and Pakistan, in which each side dropped 50 nuclear warheads, with a yield of 15 kilotons each, on the highest-population-density targets in the opponent's territory.

Robock and his colleagues assumed that this would result in at least 5 teragrams of sooty smoke reaching the upper troposphere over India and Pakistan. A climate model was developed to calculate the effects, as Robock and Toon report:

"The model calculated how winds would blow the smoke around the world and how the smoke particles would settle out from the atmosphere. The smoke covered all the continents within two weeks. The black, sooty smoke absorbed sunlight, warmed and rose into the stratosphere. Rain never falls there, so the air is never cleansed by precipitation; particles very slowly settle out by falling, with air resisting them...

"The climatic response to the smoke was surprising. Sunlight was immediately reduced, cooling the planet to temperatures lower than any experienced for the past 1,000 years. The global average cooling, of about 1.25 degrees Celsius (2.3 degrees Fahrenheit), lasted for several years, and even after 10 years the temperature was still 0.5 degree C colder than normal. The models also showed a 10 percent reduction in precipitation worldwide...Less sunlight and precipitation, cold spells, shorter growing seasons and more ultraviolet radiation would all reduce or eliminate agricultural production.," (Scientific American, January 2010, p78-79).

These claims seem to have been widely believed within the scientific community. For example, in 2017 NewScientist magazine wrote a Leader article on the North Korean nuclear problem, which asserted that: "those who study nuclear war scenarios say millions of tonnes of smoke would gush into the stratosphere, resulting in a nuclear winter that would lower global temperatures for years. The ensuing global crisis in agriculture – dubbed a “nuclear famine” – would be devastating," (NewScientist, 22nd April 2017).

But is there any way of empirically testing the predictions made by Robock and his colleagues? Well, perhaps there is. In 1945, the Americans inflicted an incendiary bombing campaign on Japan prior to the use of nuclear weapons. Between March and June of 1945, Japan's six largest industrial centres, Tokyo, Nagoya, Kobe, Osaka, Yokohama and Kawasaki, were devastated. As military historian John Keegan wrote, “Japan's flimsy wood-and-paper cities burned far more easily than European stone and brick...by mid-June...260,000 people had been killed, 2 million buildings destroyed and between 9 and 13 million people made homeless...by July 60 per cent of the ground area of the country's sixty larger cities and towns had been burnt out,” (The Second World War, 1989, p481).

This devastation created a huge amount of smoke, so what effect did it have on the world's climate? Well, Robock and Brian Zambri have recently published a paper, 'Did smoke from city fires in World War II cause global cooling?' (Journal of Geophysical Research: Atmospheres, 2018, 123), which addresses this very question.

Robock and Zambri use the following equation to estimate the total mass of soot $M$ injected into the lower stratosphere:
$$
M = A\cdot F\cdot E\cdot R \cdot L \;.
$$ $A$ is the total area burned, $F$ is the mass of fuel per unit area, $E$ is the percentage of fuel emitted as soot into the upper troposphere, $R$ is the fraction that is not rained out, and $L$ is the fraction lofted from the upper troposphere into the lower stratosphere. Robock and Zambri then make the following statements:

"Because the city fires were at nighttime and did not always persist until daylight, and because some of the city fires were in the spring, with less intense sunlight, we estimate that L is about 0.5, so based on the values above, M for Japan for the summer of 1945 was about 0.5 Tg of soot. However, this estimate is extremely uncertain."

But then something strange happens at this point, because the authors make no attempt to quantify the uncertainty, or to place confidence intervals around their estimate of 0.5 teragrams.

I'll come back to this shortly, but for the moment simply note that 0.5 teragrams is one-tenth of the amount of soot which is assumed to result from a nuclear exchange between India and Pakistan, a quantity of soot which Robock and his colleagues claim is sufficient to cause a worldwide nuclear winter.

Having obtained their estimate that 0.5 teragrams of soot reached the lower stratosphere in 1945, Robock and Zambri examine the climate record to see if there was any evidence of global cooling. What they find is a reduction in temperatures at the beginning of 1945, before the bombing of Japan, but no evidence of cooling thereafter: "The injection of 0.5–1 Tg of soot into the upper troposphere from city fires during World War II would be expected to produce 0.1–0.2 K global average cooling...when examining the observed signal further and comparing them to natural variability, it is not possible to detect a statistically significant signal."

Despite this negative result, Robock and Zambri defiantly conclude that "Nevertheless, these results do not provide observational support to counter nuclear winter theory." However, the proponents of the nuclear winter hypothesis now seem to have put themselves in the position of making the following joint claim:

'5 teragrams of soot would cause a global nuclear winter, but the 0.5 teragrams injected into the atmosphere in 1945 didn't make a mark in the climatological record.'

Unfortunately, their analysis doesn't even entitle them to make this assertion, precisely because they failed to quantify the uncertainty in that estimate of 0.5 teragrams. The omission rather stands out like a sore thumb, because there are well-known, routine methods for calculating such uncertainties.

Let's go through these methods, starting with the formula $M = A\cdot F\cdot E\cdot R \cdot L$. The uncertainty in the input variables propagates through to the uncertainty in the output variable, the mass $M$. It seems reasonable to assume that the input variables are mutually independent, so the relative (percentage) uncertainty $U_M$ in the output variable can be obtained by combining the relative uncertainties of the input variables in quadrature:
$$
U_M = \sqrt{(U_A^2 + U_F^2+U_E^2+U_R^2+U_L^2)} \;.
$$ $U_A$ is the uncertainty in the total area burned, $U_F$ is the uncertainty in the mass of fuel per unit area, $U_E$ is the uncertainty in the percentage of fuel emitted as soot into the upper troposphere, $U_R$ is the uncertainty in the fraction that is not rained out, and $U_L$ is the uncertainty in the fraction lofted from the upper troposphere into the lower stratosphere.
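To see how quickly the half-range grows when the individual factors are poorly constrained, here is the quadrature sum in code, with purely invented input uncertainties (the paper supplies no values to plug in):

```python
import math

def product_half_range(*relative_uncertainties_pct):
    """Combine relative (percentage) uncertainties of independent factors in
    quadrature, as appropriate for a product such as M = A*F*E*R*L."""
    return math.sqrt(sum(u ** 2 for u in relative_uncertainties_pct))

# Invented values: 30% on area burned, 50% on fuel loading, 100% on the soot
# emission fraction, 100% on rain-out, 150% on lofting into the stratosphere.
print(product_half_range(30, 50, 100, 100, 150))   # ~214% half-range on M
```

With half-ranges of 100% or more on the individual factors, the combined half-range easily exceeds 200%, which is the regime considered below.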

Next, to infer confidence intervals, we can follow the prescriptions of the IPCC, the Intergovernmental Panel on Climate Change. The 2010 Scientific American article boasts that Robock is a participant in the IPCC, so he will surely be familiar with this methodology.

First we note that because $M$ is the product of several variables, its distribution will tend towards a lognormal distribution, or at least a positively skewed distribution resembling the lognormal. The IPCC figure below depicts how the upper and lower 95% confidence limits can be inferred from the uncertainty in a lognormally distributed quantity. The uncertainty $U_M$ corresponds to the 'uncertainty half-range' in IPCC terms.  
   

The IPCC figure "illustrates the sensitivity of the lower and upper bounds of the 95 percent probability range, which are the 2.5th and 97.5th percentiles, respectively, calculated assuming a lognormal distribution based upon an estimated uncertainty half-range from an error propagation approach. The uncertainty range is approximately symmetric relative to the mean up to an uncertainty half-range of approximately 10 to 20 percent. As the uncertainty half-range, U, becomes large, the 95 percent uncertainty range shown [in the Figure above] becomes large and asymmetric," (IPCC Guidelines for National Greenhouse Gas Inventories - Uncertainties, 3.62).

So, for example, given the large uncertainties in the input variables, the uncertainty half-range $U_M$ for the soot injected into the lower stratosphere in 1945 might well reach 200% or more. In this event, the upper limit of the 95% confidence interval would be of the order of +300%. That's +300% relative to the best estimate of 0.5 Tg. Hence, at the 95% confidence level, the upper range might well extend to the same order of magnitude as the hypothetical quantity of soot injected into the stratosphere by a nuclear exchange between India and Pakistan. 
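One crude way of turning a half-range into asymmetric bounds is to match a lognormal's mean and standard deviation to the best estimate and half-range, and then read off the 2.5th and 97.5th percentiles. The IPCC's own procedure differs in detail, so the numbers below are indicative only:

```python
import math

def lognormal_bounds(best_estimate, half_range_pct):
    """Match a lognormal's mean and standard deviation to the best estimate and
    half-range (sd = half_range/1.96), then return the 2.5th/97.5th percentiles."""
    sd = (half_range_pct / 100.0) * best_estimate / 1.96
    sigma2 = math.log(1.0 + (sd / best_estimate) ** 2)   # squared shape parameter
    mu = math.log(best_estimate) - 0.5 * sigma2          # scale parameter
    half_width = 1.96 * math.sqrt(sigma2)
    return math.exp(mu - half_width), math.exp(mu + half_width)

print(lognormal_bounds(0.5, 200))   # roughly (0.07, 1.8) Tg for a 200% half-range
```

On this admittedly crude mapping, a 200% half-range around a 0.5 Tg best estimate pushes the upper bound towards 2 Tg, i.e. well over three times the best estimate, which is the qualitative point being made here.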

Thus, the research conducted by Robock and Zambri fails to exclude the possibility that the empirical data from 1945 falsifies the nuclear winter hypothesis for the case of a regional nuclear exchange. 

In a sense, then, it's clear why Robock and Zambri refrained from including confidence limits in their paper. What's more perplexing is how and why this got past the referees at the Journal of Geophysical Research...

Sunday, January 13, 2019

Thruxton British F3 1989

The thickness of the atmospheric thermal boundary layer falls to a global minimum over Thruxton. Hence Thruxton is very cold. So much so, in fact, that the British Antarctic Survey have a station there, built into the noise-attenuation banking at the exit of The Complex, (much like a Hobbit-hole), where the younger scientists train to work in a frozen environment before travelling to the Halley Research Station on the Brunt ice shelf.

The late Paul Warwick in the Intersport Reynard. Puzzlingly, in the background there appear to be no takers for the shelter provided by the parasols.
In 1869, John Tyndall discovered why the sky is blue. If he'd lived in Thruxton, the question wouldn't even have occurred to him. Note the characteristic Wiltshire combination of distant mist, a stand of lifeless trees, and flat wind-swept expanses.
Marshals assist a driver who has entered a turnip field. The Wiltshire economy is entirely dependent upon (i) the annual turnip yield, and (ii) government subsidies into the thousand-year consultation process for a Stonehenge bypass/tunnel. The buildings in the background are what people from Wiltshire refer to as a 'collection of modern luxury flats and town-houses.' 
One of the drivers is distracted by an ancient ley line running tangential to the Brooklands kink.

Friday, January 11, 2019

Silverstone Tyre Test 1990

Generally speaking, it was impossible to see a car with the naked eye at Silverstone. However, with the assistance of the world's best astronomical optics, I was occasionally able to pluck an image out of the infinitesimally small strip separating the cold, grey sky from the wooden fence posts and metal railings.

Satoru Nakajima in the pioneering raised-nose Tyrrell 019. This image was obtained with the Wide Field and Planetary Camera on the Hubble Space Telescope.
Alessandro Nannini in the Benetton. This shot was taken with the 100-inch reflector on Mount Wilson.
Nigel Mansell, lighting up the front brake discs as he prepares for the turn-in to Copse. Nigel set the fastest lap on the day I attended the test; a mid-season pattern of performance which led Ferrari to reward Nigel by handing his chassis over to Prost.
Ayrton Senna in characteristic pose, head dipped forward and tilted towards the rapidly approaching apex of Copse corner.