Monday 28 December 2015

DfT: Number of cyclists on M32 still zero

If you go into a bike shop you can pick up a freebie map of where to cycle in the city —one that's actually good for walking and running too.

You can more easily get a map of where not to cycle. In any overview map of the city, it's the red roads on the cover, along with the junctions between them.

 

Which is quite a coincidence really, as those are, essentially, the locations of the Department for Transport traffic survey sites.


This is why the DfT traffic survey, showing a levelling off in cycle use, has to be treated with some scepticism. Frankly, if the number of cyclists on the A370 and Cumberland Basin Flyover, or Temple and Newfoundland Ways, was measured at more than zero, we'd actually suspect a measurement error.

Joe Steinsky tears into the numbers, with a graph we've stolen without any attribution.



The sheer variance in that cycling graph shows its flaws —you have to worry about how meaningful it is.

Even so, it's going to be taken up by those people who find it supports their opinions. In Bristol, that's the usual anti-cycling lobby: the Evening Post, the Evening Post commenters, random Twitter haters and Conservative Party council and mayoral candidates.

If you encounter it —don't be afraid to ask "where are the numbers of people walking or on a train?", as you aren't going to see any of them on the M32, A370 or Temple Way either.

Out of Bristol, the group people need to worry about is actually the DfT themselves. Their 2013 traffic model predicted a fall in miles cycled after 2015; the data they've published appears to align with this, which congratulates their modelling team on their skills, perhaps earning them a promotion to the DEFRA Flood Modelling project. Which is why the cycling troublemakers need to be pushing back on this. Not just asking for the council to collect better data on routes such as the BBRP, the suspension bridge, the Create Bridge crossing (oh, wait, BRT2 killed that), Prince St Bridge (oh, wait, BRT broke that), the Chocolate Path (oh, wait, BRT2 again), but maybe the castle path, the farm pub path, the Eastville Park path, and up through Stoke Park to UWE, etc. And then pressing the DfT to include such data in their modelling of urban use. And to include walking too, because that's a legitimate form of transport and is as much at risk of neglect as the cycling.

At this point, the locals who sneer at Bristol Traffic for being car-hating extremists will accuse us of discounting data we don't like, just as they themselves do with any climate change research that brings bad news, models that predict warmer winters with more floods.

For those people, know that we slagged off the survey the moment we encountered it in progress.

This is it: four people on a footbridge overlooking the M32. We faulted it at the time for being an inadequate way of measuring car traffic in a modern city. It is only this week that we discover that one of the people sitting there with a little "clicker" was waiting for cyclists and presumably getting bored at the inaction.

 

Four people sitting on a footbridge, counting cars on one single weekday in a year. And using that for the traffic statistics and future predictions of Bristol's road needs? That is not modern "data science". In fact, it's more of a practical A-level project —though even there you could do more with a camera and at-leisure replay.

Manually counting one morning in 365 just doesn't produce valid data. Was it a weekday? Which day? Was it a schoolday? Or was it half term with reduced traffic counts? Was it raining? Such experiments may have been a viable strategy in 1963, maybe even 1974 with the M32 open and three Austin Allegros an hour driving down the "Bristol Parkway" to see the shopping wonder that was 1970s Broadmead.
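To see just how noisy "one morning in 365" is, here is a toy simulation. Every number in it is invented purely for illustration (these are not real M32 counts): it builds a year of synthetic daily traffic with weekend and school-holiday dips, then shows how far a single day's count typically lands from the annual average.

```python
import random

random.seed(42)

# Hypothetical daily traffic counts for one year: a base flow plus
# day-of-week and holiday effects and random noise. The numbers are
# made up purely to illustrate the sampling problem, not real data.
def daily_count(day):
    base = 60000
    if day % 7 >= 5:                 # weekend: lighter traffic
        base *= 0.7
    if 200 <= day <= 240:            # school holidays: lighter again
        base *= 0.85
    return int(base + random.gauss(0, 4000))

year = [daily_count(d) for d in range(365)]
annual_mean = sum(year) / len(year)

# "One morning in 365": pick a single day and treat it as representative.
samples = [year[random.randrange(365)] for _ in range(1000)]
errors = [abs(s - annual_mean) / annual_mean for s in samples]
print(f"true daily mean: {annual_mean:.0f}")
print(f"typical error of a one-day estimate: {100 * sum(errors) / len(errors):.0f}%")
```

Depending on which day the four people happen to sit on the footbridge, the estimate is routinely off by a double-digit percentage, before you even ask whether it was raining.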

Nowadays, it's a historical relic of a process, no doubt hooked up to a traffic model that considers pedestrians at a junction "a cost", values cyclists as being worthless (OK, that still holds), and considers time in a traffic jam as a cost, rather than as valued quiet texting time between the office and the hell of parental responsibilities.

If you want modern data, throw the hi-viz tops off the footbridge, use the now-rolled-out ANPR camera arrays to log vehicle movements, and start to do some decent analysis of the data beyond just "how many":
  1. Split by vehicle type and time of day. Do vans come in earlier or later?
  2. What fraction of the traffic is in-city vs "Greater Avon" vs out of town?
  3. How many pass out of the city on a different route shortly after entering it?
  4. When do people commute in each direction? Is it a simple 9-5? How many are 9:30 to 4:30, vs. 08:00 to 18:30? And does that vary with direction into town?
  5. Do vehicles coming into town on the M32 return the same way? Or do they take a different route? (Not as unusual as you think; from city to N. Fringe, M32 after 09:00 is fast, but for a return between 17:00 and 18:30, Filton Ave has more predictability).
  6. There's apparently a rise in vans. How many are for internet-shopping deliveries vs. independent locals vs. service organisations?
  7. How many people commute from Wales? By motorbike? (it's a free bridge crossing, see).
  8. During school half terms, do many commuters change their driving schedule and/or route?
  9. Do red cars go faster? This'd be a really interesting question to answer, something you could do today by combining the M5 ANPR dataset with one of vehicle make/model/colour. It's not enough to measure the ratio of red to other colours, you need to compensate for the fleet, to make it more "do red Mark IV Vauxhall Astras go faster than other colours"?
Ignoring the final question, which is more of a social commentary than anything else, the other questions all directly define the motor vehicle use that's made of the city's road infrastructure —information that could be used not just for better DfT modelling, but for moving traffic understanding beyond simple anecdotes.
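A sketch of what the first few analyses might look like, using a handful of hypothetical ANPR records. The plate hashes, site names and field layout are all invented for illustration; a real camera feed would have its own schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical ANPR log records: (plate_hash, site, timestamp, vehicle_type).
# All values below are invented for illustration.
records = [
    ("a1f3", "M32-J3-in",     "2015-12-21T07:42", "van"),
    ("b2c4", "M32-J3-in",     "2015-12-21T08:15", "car"),
    ("a1f3", "M32-J3-out",    "2015-12-21T17:55", "van"),
    ("c9d0", "A370-in",       "2015-12-21T08:50", "car"),
    ("b2c4", "FiltonAve-out", "2015-12-21T17:20", "car"),
]

# Question 1: split sightings by vehicle type and time of day.
by_type_hour = Counter(
    (vtype, datetime.fromisoformat(ts).hour) for _, _, ts, vtype in records
)

# Question 5: does a vehicle leave by the same corridor it arrived on?
entries = {plate: site for plate, site, _, _ in records if site.endswith("-in")}
exits = {plate: site for plate, site, _, _ in records if site.endswith("-out")}
asymmetric = {
    p for p in entries.keys() & exits.keys()
    if entries[p].split("-")[0] != exits[p].split("-")[0]
}
print(by_type_hour)
print(asymmetric)  # plates that entered and left on different corridors
```

Scaled up from five rows to a city's worth of camera sightings, the same grouping-and-joining answers most of the questions on the list without anyone holding a clicker.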

Transport for London have a data team; they do churn through the Oyster cards and the C-zone stats: they do understand some of the use of their city. Not Bristol —and clearly not the Department for Transport, who are still shaping the country's transport infrastructure based on four people and a clipboard.

[footnote: that cover is from a 4th edition A-Z. Look at its rendering of the inner-ring road and see if you spot what's changed?]

Sunday 27 December 2015

Press, politicians and 100 year floods

This post covers basic probability theory.


The media and the politicians seem to be completely confused by the concept of flood frequency, particularly in the abuse of the concept of "a hundred year flood".

The use of that term creates the misguided idea that you get such a flood every hundred years, and that, having had one a few years ago, you aren't going to see another one for nearly a century.

This shows a complete misunderstanding of statistics and probability. Which, for the people being evacuated from their houses, you can partly understand —you can't expect them all to have studied maths to A-level or remembered the details. What is wrong is that the press keeps using the same term, along with "20 year flood", misleading the people. And the politicians, they are equally a bunch of Oxbridge PPE graduates who don't have a single cartesian coordinate between them —but they should at least have their science advisors to explain the basics. FFS, there is a whole "Royal Society" which is meant to explain science to royalty and, given we still live in a feudal state, the crown's ministers, Cameron included.


A "hundred year flood" really means a "1% chance per year flood". Assuming that the effects of the previous year's weather has no bearing on its successors, the probability of having a 1% flood the year after a 1% flood is, wait for it: 1%. The probability of having one in the five years after is, wait for it: five percent. And in 15 years, it's 15%. So the fact that York is currently underwater for the first time since 2000, means that the the two-flood-in-15-year-event, which had ~15% probability, has occurred. Which is not impossible, even for a "hundred year event". In fact, when you start counting since, say, 1995, you are looking at the probability of two 1% floods happening in a 20 year period —which is actually 20%: 1 in 5.

For the curious, assuming that the flood events are entirely independent, the number of floods over a period would follow a Poisson distribution.
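Under that independence assumption, the chance of seeing exactly k floods can be sketched with the Poisson formula; here lam is the expected number of 1%-per-year floods in a century.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k events) when events arrive independently at average rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 100 * 0.01  # a century of 1%-per-year floods: one expected, on average
for k in range(4):
    print(f"P({k} floods in 100 years) = {poisson_pmf(k, lam):.1%}")
```

So even with a perfectly correct model, the chance of two or more "hundred year floods" turning up in the same century is roughly one in four.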

Except, certainly within a single winter, we know the events are not independent —if the ground is saturated from previous rainfall, the rivers bloated from previous storms, then the probability of another storm triggering a flood is higher. If the land is already full of water, then it only takes a little bit more to tip things over the edge.


That "hundred year flood" really means, then:

The meteorologists' model of rainfall over a single winter, of the volume and frequency of rainfall, puts the probability of a flood of a specific volume occurring at 1%.


The number of 1% floods over a given period may follow a Poisson distribution —and hence the likelihood of multiple floods happening within a few decades is actually quite high.

Floods do appear to be happening more often than even a Poisson distribution would predict, so what does that mean?
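One way to see hypothesis 3 (below) at work: simulate flood years where a flood raises the next year's flood probability, a crude and entirely synthetic stand-in for saturated ground, and compare the variance-to-mean ratio of per-decade counts against the independent case. For a Poisson-like process that ratio sits near 1; clustering pushes it above 1.

```python
import random

random.seed(1)

def simulate(years, p, carryover=0.0):
    """Count floods per decade. With carryover > 0, a flood raises the
    next year's flood probability: a crude stand-in for saturated ground.
    Entirely synthetic, for illustration only."""
    counts = []
    for _ in range(years // 10):
        c, bonus = 0, 0.0
        for _ in range(10):
            if random.random() < p + bonus:
                c += 1
                bonus = carryover
            else:
                bonus = 0.0
        counts.append(c)
    return counts

ratios = {}
for label, carry in (("independent", 0.0), ("clustered", 0.3)):
    counts = simulate(100_000, 0.01, carry)
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    ratios[label] = var / mean
    print(f"{label}: variance/mean = {ratios[label]:.2f}")  # Poisson would be ~1
```

A variance-to-mean ratio well above 1 in the real flood record would be a statistical fingerprint of exactly this kind of year-on-year dependence.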

Some hypotheses spring to mind:
  1. The rainfall model is correct and we've simply had the misfortune to have a rare-but-not-impossible series of storms.
  2. The rainfall model is correct, but the estimates of probability of storms within a season are wrong —that is, bad historical data created optimistic estimates.
  3. Year-on-year flood events are not independent.
  4. Changes in the terrain: farming differences, the building of houses on flood plains, etc., changed the runoff of the system, so amplifying the effect of rain.
  5. The rainfall model is in fact wrong due to failures such as the failure to consider the impact of global warming on the evaporation of water, the actions and position of the gulf stream, and/or the fact that with warmer air, it falls more as a liquid ("rain") than in a crystalline form ("snow" and "hail").
  6. There was a more pessimistic (i.e. accurate) estimate of rainfall, but managerial or political pressure discounted it in favour of one which played down the risks, reducing the requirements and cost of flood defences, and obviated the need to press for changes in the agriculture system within the catchment area.
Note also that these hypotheses are not mutually exclusive. The model could have failed to consider global warming, been based on bad historical data, and not planned ahead for the conversion of flood plains into suburban housing estates —then been downplayed by politicians who disagreed with the answers. Which, when you think about it, is entirely possible.

That's why the term "hundred year flood" is so bogus. More accurate is "a 1% flood based on a broken or pre-global warming model with incomplete data without considering urban sprawl, and probably downplayed for political reasons". Using the term "100 year flood" does nothing but create unrealistic expectations that the floods aren't going to re-occur, year-on-year.


Someone in the press could look at the model, the data, the politics and determine what's actually happened, then try and explain it in a way which doesn't use terms like "hundred year flood". Because the science is there, the maths is there —and someone needs to hold the politicians and the scientists to account.

[These photos are all from Jan 4, 2014, showing the Avon fairly close to breaking its banks. Avon Crescent was actually underwater in winter 1990.]