Following on from our coverage of the fact that it's sporty cars that lose out from the 20 mph zone, we should look at the impact of the figures on bicycles.
Before anyone points out that bicycles are exempt from the limit, consider that if 20 mph becomes the speed people expect, then going above it would create problems for anyone assuming you are travelling more slowly. Though as any tax-dodger will point out, most people passing you or pulling out assume you are near-stationary and plan their manoeuvre accordingly.
Even so, fast bikes don't have a place in the city. Stick to 20 or less: and if you can do that on the uphills you've earned those numbers.
What it does mean is that just like fast cars, tribars aren't needed in the city, nor are carbon wheels or polka-dot socks.
Yet here we see someone doing 6-8 mph in exactly that setup.
And while it's nice to see that they are staying way below that 20 mph limit, they do have their arms on the tribars. Which is another way of saying "their hands are a long way from the brakes".
As they reach the end of Kensington Place, they approach the give way point at Lansdown Road, where this lack of braking ability almost catches up with them. Because there's a Range Rover heading north from Lansdown Place —below that 20 mph— and the cyclist needs to make a bit of an emergency swerve to avoid going into it. Which could have damaged the paintwork on the car as well as written off some carbon wheels.
Bikesnob is always taking the piss out of triathletes. It doesn't really apply here, but it does send a message to all: plan ahead, and keep your hands near the brakes, especially as you approach junctions where you are expected to give way.
Note the yellow lines on the road. This is the Clifton Village RPZ; as you can see, on a weekday it is now a wasteland.
Saturday 28 March 2015
Friday 27 March 2015
Bristol's 20 mph zones: it's the hot cars that lose
The 20 mph zone has been in place for a year now, so it's time to review it as drivers:
- It doesn't make things slower. Really. It's the delays at junctions & in queues that increase journey time.
- It makes things calmer. There's less pressure to put your foot down when you do clear a junction.
- Similarly, as you approach a junction, you can coast down more gently.
- As you are going a bit slower, you can take time to look around, which gives you a better view of pedestrians. This is tangibly better at night.
- It actually helps when pulling out from side roads onto main roads. Why? As everyone is moving slower, the time window for you to do things like pull a right turn with cars approaching from both directions is larger.
- Fuel economy? No obvious difference. The engine may be less efficient in 3rd than 4th, but you don't have to accelerate so hard, and can coast down. Of course, anyone in a hybrid car is laughing as their petrol engine can work even less, and gain more regenerative braking from the gentle slow down.
- Keeping track of your speed? Third gear low RPM seems to work. If you feel the need to go to 4th, you are going too fast.
- Does everyone follow 20 mph? 22-25 is a more realistic daytime speed; at night the speed goes up to 30 until the minicabs come out, when it ramps up to 40 mph just when the drunk people start walking home.
- Increased road rage? No obvious difference. As the average commute speed is < 20 mph in Bristol (TomTom's unverified data, not ours), it's hard to see how it could be made worse.
- Collapse of businesses due to increased white-van journey times? Not obvious. Congestion is the limiting factor, not maximum speed.
- Passing Bicycles? No harder or easier. It's still irritating to be behind someone going along at 12 mph. But the speed limit doesn't make passing harder. We just want fitter cyclists out there.
- Then there's the "two shopping trolley" man wandering round the streets these days. We have no idea why he has two shopping trolleys full of his entire belongings, but he does; he goes down the roads (not the pavements) at about 4 mph. Again, 20 mph doesn't make a difference.
So who loses? People who spent money on fast cars. You shell out all that cash for a nimble toy, for an extra digit or two at the end of your car brand logo —"i", "GT", etc.— tinted windows and some wheels that just scrape easily. The key "performance" benefits are tighter suspension and, most of all, the ability to accelerate better.
Which is now utterly wasted, as most vehicles' 0-20 times are relatively similar. And pootling around at 20 mph means no need for suspension that lets you do 90-degree turns at 35 mph.
That's enough to drive you to road rage: not the fact that you are doing 20 mph, but the fact you spent a lot of money on your status toy and are doing 20 mph. Even if you want to go faster, there'll be someone in front who doesn't, who appears to drive at exactly 20 mph on a fast rat-run road like Ashley Down, Pembroke Road or Filton Ave. It's almost as if some people, on noticing an important person driving a high-end Audi or BMW SUV, actually take their foot off the accelerator, dropping from 24 mph to 20. Which we consider unacceptable and strongly condemn anyone doing this.
Those people who have spent the money expect something in return.
Fortunately, every so often the opportunity arises. Here is one of our instrumented tax-dodgers going down Nugent Hill, Cotham, using the bike contraflow to get to the Arley Hill evening traffic jam, and so on to Stokes Croft. As they join the Arley Hill queue, you can see a green light allowing some traffic to slowly get out to places more interesting.
And here you can see the BMW 3 series YF57KTE getting some return on investment. First they can come off the speed bump while accelerating (suspension), then put their foot down to catch up with the cars in front. Those cars in front are going through on orange, we note. But as the BMW is now almost joined up with them, it can do the "part of the same group" gambit and carry on through on the red light.
That tactic has given them a bit more speed than the vehicles in front, forcing them to negotiate what is effectively a chicaned right turn fairly aggressively: that suspension at work again.
They then need to put their brakes on, not because of the speed limit but because they've caught up with the car in front.
There: 15 seconds of real driving, out of probably 30 minutes of suffering. Not much —but that's all a 20 mph zone offers, at least during the early evening commute.
Labels:
20mph,
arley-hill,
bmw,
cotham,
nugent-hill,
RLJ
Saturday 14 March 2015
An introduction to surveys
Richard Payne of ITV has asked, by way of our strategic code-sharing and data-mining partners Twitter, how to explain bias and self-selection in surveys.
This is a topic dear to our hearts for a number of reasons:
- We consider ourselves to be Bristol's premier data-driven traffic analysis site.
- We recently conducted a survey on traffic issues for the city —a survey which has been completely ignored by the Evening Post, the BBC and ITV.
- We have just received an SERC grant for a new project to measure the weight of the city using a stopwatch and a trampoline —and plan to conduct our survey at the BRI next week.
Subset: Some or all entities within a set. Example: some of the population of Bristol, or some of the residents of an area within Bristol.
Proper subset: A subset of a set which is actually smaller than the original set. (fancy mathematical word: Cardinality). Examples: some but not all of the population of Bristol, or some but not all of the residents of an area within Bristol.
What is important here is that, by definition, a subset of a population must not contain any members outside that population. As examples, a subset of the population of Bristol must exclude people from North Somerset. Similarly, a subset of the residents of an area within Bristol must not contain anyone who does not live within that area.
In our survey we actually measured the origin of our self-selected sample to assess this. We could have just ignored them, but instead chose to include them in our answers on the basis that it was easier just to leave them in.
Data: Numbers. May be analysed by somebody with a statistical background to reach some meaningful conclusions. Without those mathematical skills you'll end up with something as useful as having a rabbit do your tax return.
Measurement: Using some form of scientific mechanism to come up with data about the things you measure. Examples: determining the weight of someone with a weighing scale. Determining the parking and driving habits of people by recording where they park or tracking where they drive.
Invalid Measurement: trying to measure something by using the wrong tools, badly calibrated tools, or reading the numbers off wrong. Example: determining the weight of people by having people jump onto a trampoline and using a stopwatch to time how long it takes for them to stop bouncing.
Poll: Asking people for their opinions. This is different from a survey in that it is assessing the beliefs of those people, rather than through measurement. Example: asking someone how much they think they weigh rather than putting them on a weighing scale. Asking people about parking and driving rather than actually recording or tracking them.
Leading questions: A sequence of questions which may, unintentionally or not, change the answers to follow-on questions. As an example of leading questions, imagine the following sequence:
- Are you aware that being overweight can lead to an increase in coronary heart disease and diabetes?
- Do you believe that overweight people should be billed by the NHS for medical care for weight-related conditions?
- Are you a fat bastard?
Census: Measuring or polling a Population. Examples: people whose weight you want to measure, or the residents of an area whose opinion on parking you want to know. A census of a population is the only way to come up with a value of the measurement or poll which can be considered 100% accurate in terms of sample set. Everything else is incomplete and therefore inaccurate to some degree.
Survey: Measuring or polling a proper subset of a population —with the goal being to extrapolate the results to the entire population. Examples: weighing only some of the people in Bristol to extrapolate the weight of everyone in the city, or polling some of the residents in part of the city to extrapolate to the opinions of all the residents of that area.
Sample: The proper subset of a population used in a survey. Examples: some but not all of the population of Bristol, or some but not all of the residents of an area within Bristol. Another term is Sample Set.
Defensible: Something which you can present to people who understand statistics without being laughed at.
Invalid Sample Set: A sample for a survey which cannot be used to extrapolate to the entire population. Examples:
- Including people from North Somerset in a survey to determine the average weight of the population of Bristol.
- Weighing only those Bristolians who have been referred to the BRI heart clinic in a survey to determine the average weight of the population of Bristol and using a trampoline and a stopwatch to do so.
- Using too small a survey set for the size of the total population. Example, weighing two people and attempting to reach a conclusion about the weight of the entire population of the city.
- Attempting to conduct an opinion poll of residents of part of the city without excluding non-residents of that region.
- Attempting to conduct an opinion poll of residents of the city within, say, residents parking zone, yet deliberately choosing to exclude parts of the area —such as, say, Kingsdown and the city centre.
- Excluding some of the population on the basis that they do not meet some criteria. Example: excluding anyone who doesn't own a car from any opinion poll on the topic of residents parking.
- Conducting an opinion poll of the residents of part of the city by only asking those people who have opinions on one specific outcome of the survey. Example: asking only people opposed to residents parking for their opinion on the topic. Or conducting a survey by requiring participants to perform some action, such as filling in and posting a form; the latter tends to produce something called a self-selecting sample.
In our survey 32% of respondees declared they couldn't afford a car. These people don't have valid opinions on parking, nor on other parts of our own survey.
Statistical Outliers
These are a fun thing in experiments. Something way out of the expected. You can include these in your answer, though you can also try and work out how the outliers got in there and then discount them —this is especially useful if you are trying to make sure the survey reaches the conclusions you want it to.
Look at our question on the number of wing-mirrors replaced since the 20 mph rollout.
70% of the respondees claimed that they hadn't replaced any wingmirrors since the 20 mph zone. This was utterly unexpected, and, if used when trying to determine the average number of wingmirrors lost per resident per year, we get an arithmetic mean of 0.87 mirrors/year -less than one!
Yet it can be explained if we include two other facts from our survey:
- the number of respondees who asserted that they lived outside the city: 54%
- The number of respondees who asserted that they were too poor to own a car: 32%
As we are measuring the impact of 20 mph zones, we should be discounting those people from our analysis of this question:
Discounting non-car owners: 70-32 = 38. Therefore, of the respondees who owned a car, only 38% of them got through the year without needing a new mirror.
Discounting non-residents: 54-38 = -16! Which seems impossible, unless you consider that many of those non-residents will have driven into a 20 mph zone, and so lost a mirror.
Once you discount the non-car owners and non-residents, we get the result we expected: since the 20 mph rollout, everyone in the 20 mph zone has lost one or more wing-mirrors, with the average number being 3. At 15-25 pounds a shot, that wingmirror-tax is yet another tax on the hard-working motorist.
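For the statistically pedantic, here is a minimal sketch of how this kind of discounting could be done on individual responses rather than on headline percentages. Everything in it is made up for illustration: the responses, the field layout and the helper function name are all assumptions, not our actual survey data.

```python
# Minimal sketch: discounting respondee groups from one survey question.
# All responses and field names below are made up for illustration.

responses = [
    # (mirrors_replaced_per_year, lives_in_bristol, owns_car)
    (0, False, False),
    (0, True,  False),
    (0, False, True),
    (3, True,  True),
    (4, True,  True),
    (2, True,  True),
]

def mean_mirrors(rows):
    """Arithmetic mean of mirrors replaced per year; None for an empty sample."""
    return sum(r[0] for r in rows) / len(rows) if rows else None

# Headline figure: everyone who answered, wherever they live
print("all respondees:", mean_mirrors(responses))

# Discounting a group means filtering individual responses,
# not subtracting one headline percentage from another.
resident_car_owners = [r for r in responses if r[1] and r[2]]
print("resident car owners:", mean_mirrors(resident_car_owners))
```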
Causality and co-relatedness
Again, fascinating. Merely because two things appear correlated over time doesn't mean that one causes the other.
In this question, Why has congestion got worse in Bristol over 25 years?, 17% said BT added an extra digit to all the phone numbers in the early 1990s. Some people may say "so what?", or even "the growth in Bristol's population caused BT to add more numbers; that same population growth increased the number of cars, hence the resultant congestion". We say something else: it was the adding of that digit which made it possible, in a pre-mobile-phone era, to move to the city. That 17% were right. And from this survey nobody can prove us wrong!
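To see how two things can march in step without one causing the other, here is a minimal sketch in which a single underlying trend (population growth) drives both the number of phone lines and the number of cars. All the numbers are invented for illustration, as is the little pearson() helper.

```python
# Minimal sketch: two series that correlate only because they share a common cause.
# Every number here is invented for illustration.

years = list(range(1990, 2015))
population = [380_000 + 4_000 * (y - 1990) for y in years]  # steady growth

# Both series are driven by population, not by each other
phone_lines = [int(p * 0.45) for p in population]
cars = [int(p * 0.40) for p in population]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Prints a value at (or vanishingly close to) 1.0: near-perfect correlation,
# yet neither series causes the other.
print(pearson(phone_lines, cars))
```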
Invalid Survey
Any survey that can be considered invalid from a statistical perspective. Common causes are: invalid sample sets, leading questions, bad measurement, and bad analysis, including confusing correlation for causation.
For some examples:
- Asserting facts about the average weight of Bristolians through an opinion poll with leading questions conducted at the BRI heart clinic of 4-5 people, without even excluding any attendees from North Somerset. That fails: invalid sample set and leading questions.
- Asserting facts about the entire population of Bristol's opinions on residents parking through an opinion poll with leading questions conducted against a self-selected sample set of some people who care about the subject.
- Getting your maths wrong when you add things up, divide the answers, etc.
- Misinterpretation of results. Reaching the wrong conclusions. If you want to reach a set of conclusions, you are less likely to question the sampling or analysis if the outcome agrees with your expectation. This is sometimes called confirmation bias
This problem of the don't-know answer is particularly bad in any self-selected survey, because the members of the population who don't hold opinions tend not to participate in it. Instead you get that subset of the population who hold opinions one way or the other. It is also common in any survey which requires an action on behalf of the respondee, be it jumping on a trampoline holding a stopwatch, or filling in a paper questionnaire and then posting it.
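To illustrate how badly a self-selected sample can skew, here is a minimal simulation. The population split, the opinion labels and the return rates are all assumed purely for illustration; the point is that the people with no opinion barely appear in the returned sample.

```python
import random

# Minimal sketch of self-selection bias. Population split, opinion labels and
# return rates are all assumed for illustration.
random.seed(42)

population = ["oppose"] * 200 + ["support"] * 300 + ["don't know"] * 500

# Chance of each group bothering to fill in and post the questionnaire
return_rate = {"oppose": 0.40, "support": 0.30, "don't know": 0.02}

respondents = [p for p in population if random.random() < return_rate[p]]

def share(group, opinion):
    """Percentage of a group holding a given opinion."""
    return 100 * group.count(opinion) / len(group)

for opinion in ("oppose", "support", "don't know"):
    print(f"{opinion:10}  population {share(population, opinion):5.1f}%"
          f"  sample {share(respondents, opinion):5.1f}%")
```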
Summary
As you can see, it is a lot easier to produce an invalid survey than a valid one. More subtly, it's very easy to mistake an invalid survey for a valid one without knowledge of the sampling and measuring process, and knowledge of statistics.
For that reason, while surveys can provide some data about a subject, you can't consider the conclusions to be valid without knowing about the sampling, measuring and analysis —and any bias of the surveyors.
When reviewing a survey, you should really query:
- The population the survey is meant to be analysing
- The sampling process conducted in order to get a valid sample set
- How things were measured
- If it is some form of poll, the sequence and content of the questions.
- Outliers: what were they? were any discounted?
- What compensation have you made for non-participants?
- How do you defend your claim that this survey can be extrapolated to the population it was meant to cover?
Further Reading
We hope readers found this introduction to surveys and censuses informative and timely. Please practise what you have learned by using some of the terms introduced above in your everyday conversation —at least once per day. Example uses:
- "Please can I sample some of your chips"
- "the causality relationship between eating chips and being overweight is not clear",
- "your survey is utterly indefensible due to its painfully awful selection bias and leading questions —your attempt to extrapolate it to any larger population hence so ridiculous you'd fail a GSCE if you sat one this week"
For anyone interested in learning more about this topic, here are some great online books on the topic
- Probability and statistics EBook, UCLA. Wiki-based book. Notable for a section on uses and abuses of statistics.
- Introduction to Probability, Dartmouth College. Foundational undergraduate probability book, with the history of the topic included too. Did you know that early work in probability was funded by French nobility so that they could win more money gambling with their peers? You do now.
- Think Stats: Probability and Statistics for Programmers, Green Tea Press. A great read which makes few assumptions about the ability or willingness of readers to stare at equations.
Friday 6 March 2015
Clifton: last days of Christchurch School paveparking
Royal Park is yet to get its markings, so we can view the invaluable space the council will be taking away, ruining it with ugly yellow lines
Here, by Christchurch School, the lines already exist -but as they can be ignored, they have a beauty of their own
Round the corner, there's more pavement space. It's inevitable the council will put down double yellow lines here —just so schoolchildren can get to school safely
Yet look at our archives: we've evidence back from 2008 that there's enough space for cars and children!
Given that small children on scooters can already squeeze past parked cars, there is no justification for adding more ugly yellow lines —or even enforcing the ones that are already there
Labels:
christchurch-school,
clifton,
paveparking,
royal-park,
RPZ,
school-keep-clear
Thursday 5 March 2015
Royal York Crescent: ruined!
Royal York Crescent is the grandest of the Clifton terraces, overlooking the entire city.
Only now, look out the window and what do you see? Yellow lines.
Completely out of place in a historic environment, such as this pleasant rural-esque scene of a pickup paveparked behind some bollards
Utterly inappropriate removal of echelon parking opportunities on a blind corner.
There's only a few days until the car-hating council start enforcing these awful yellow lines
A few more days for the village to live
It's worth remembering that the motorist-haters were pushing for bike parking on these streets.
Bike racks would ruin one of Bristol's —nay, one of Britain's— greatest Georgian crescents!
Labels:
clifton,
paveparking,
royal-york-crescent,
RPZ
Clifton RPZ: Yellow lines ruin Wetherell Place
Here's the junction of Wetherell Place and Frederick Place, forever ruined
Remember before, the quaintness of the village-in-a-city look
From a quiet-villagesque pavement to park on, to what? Double yellow lines
This totally destroys the character of the area
And for what? For safe walking round the city? For the benefit of people too poor to afford a car?
Unacceptable.
Labels:
clifton,
frederick-place,
RPZ,
wetherell-place