Thursday, 22 March 2018

The Uber Dev Team Just Killed Someone

A software development team at Uber just killed someone.

The Uber AV killing is going to go down in software development as one of the "things you must never do" examples, up there with Therac-25 and the various flight control system failures. The datasets and traces of the collision will, along with all the analysis, be reviewed, torn apart and rebuilt into some fairly brutal dissection of the entire software development process by the Association for Computing Machinery's Risks Forum, which is basically a documentation of fuckups of one form or another.


Without waiting for any analysis, police and others are already saying "it wasn't Uber's fault". That's the wrong way to view it.

Better to try and categorise it:

  1. Uber's system made a mistake, one that can be fixed and so never repeated. Other AV manufacturers can also use the data from the collision to avoid it in their systems too.
  2. The collision was unavoidable given the actions of the road user and the general category of LIDAR-enabled Autonomous Vehicle.

Option 2 means we would now have to make a choice. Do we consider the death rate of AVs "acceptable", or push to do more to separate vulnerable road users from AVs? That could be banning AVs from cities ("it's just too hard right now"), or trying to do something like banning pedestrians from walking around, or requiring cyclists to have some beacon implanted in the bike and charged up on a regular basis to announce its presence, etc. That's the category of "admitting your tech can't do what was originally promised".

It is a lot better politically if it does turn out to be the Uber team at fault. Or at least to blame the driver for inadequate supervision.


How society and industry react to this, the first killing by a fully self-driving car, will reflect the decisions and priorities which society makes. It should be the duty of the software developers to be honest about the incident and open about whether it and similar events can be prevented in future.

Unlike all "human at the wheel" collisions, the actions of the uber car can be fully replicated: with the recorded sensor data from the car and the same version of the software, exactly the same set of actions can be expected. Therefore we can see where it failed.

Here is the list of places where it could have failed; a rough code sketch of that pipeline follows the list.

  1. Sensing: failure to detect the victim with the sensors available.
  2. Interpretation: failure to recognise the victim.
  3. Anticipation: failure to anticipate their movements (and/or the vehicle's), and/or failure to conclude from the anticipated trajectories of pedestrian and vehicle that a collision is likely.
  4. Planning: failure to come up with a plan to avoid a collision.
  5. Execution: failure to execute a valid plan.
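
To make that taxonomy concrete, here is a rough sketch of those five stages as one tick of a control loop. Every name is invented and nothing here claims to match Uber's actual architecture; a fatal outcome means at least one of these steps gave the wrong answer.

```python
# One tick of the control loop, with the five failure points called out.
# Every function here is passed in by the caller and invented for illustration;
# only the structure matters.
from typing import Any, Callable, List

def control_tick(raw_frame: Any,
                 vehicle_state: Any,
                 sense: Callable[[Any], List[Any]],
                 interpret: Callable[[List[Any]], List[Any]],
                 anticipate: Callable[[List[Any], Any], List[Any]],
                 plan: Callable[[List[Any], Any], Any],
                 execute: Callable[[Any], None]) -> None:
    detections = sense(raw_frame)                       # 1. Sensing: did anything show up at all?
    objects = interpret(detections)                     # 2. Interpretation: pedestrian, cyclist, plastic bag, noise?
    trajectories = anticipate(objects, vehicle_state)   # 3. Anticipation: where will they, and we, be next?
    manoeuvre = plan(trajectories, vehicle_state)       # 4. Planning: brake, swerve, slow down, or carry on?
    execute(manoeuvre)                                  # 5. Execution: actually move the wheel and pedals.
```
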
What do we think?

Sensing: Uber, like the Google/Waymo vehicle, has LIDAR: laser ranging, essentially radar at light frequencies. It works perfectly well at night and has a wider angle of view than a set of headlamps. Range? Unknown. These are the state of the art in sensing, very expensive, and the reason Tesla are claiming "you can get by without them". LIDAR is what engineers believe is needed. Yet here it has failed. Does it have known failure conditions? Rain, snow, hail, probably fog and smoke too. It doesn't recognise glass. Presumably it doesn't work into low sun either. "It was dark and they had dark clothes on" is not an excuse.

Interpretation: the first realistic place where things could have failed.

Anticipation: not a fast-moving vehicle, just a person crossing the road. No obvious unexpected movements. This should not have been hard. If it is, give up on AVs now.
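
To show how little is needed, a constant-velocity extrapolation, deliberately the crudest model going, is enough to flag this kind of conflict seconds ahead. The numbers below are illustrative only, not the actual crash data.

```python
# Crude anticipation sketch: extrapolate both the car and the pedestrian at
# constant velocity and check whether they come dangerously close within the
# next few seconds. All numbers are made up for illustration.
import math
from dataclasses import dataclass

@dataclass
class Track:
    x: float     # position, metres
    y: float
    vx: float    # velocity, metres per second
    vy: float

    def position_at(self, t: float):
        return (self.x + self.vx * t, self.y + self.vy * t)

def collision_likely(car: Track, pedestrian: Track,
                     horizon_s: float = 4.0, step_s: float = 0.1,
                     danger_radius_m: float = 2.0) -> bool:
    """True if the two straight-line trajectories pass within danger_radius_m."""
    steps = int(horizon_s / step_s) + 1
    for i in range(steps):
        t = i * step_s
        cx, cy = car.position_at(t)
        px, py = pedestrian.position_at(t)
        if math.hypot(cx - px, cy - py) < danger_radius_m:
            return True
    return False

# Car doing about 17 m/s (roughly 38 mph), pedestrian walking across at 1.4 m/s.
car = Track(x=0.0, y=0.0, vx=17.0, vy=0.0)
pedestrian = Track(x=50.0, y=-4.0, vx=0.0, vy=1.4)
print(collision_likely(car, pedestrian))   # True, flagged roughly 3 seconds ahead
```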

Planning: Fun one. After concluding that a collision is likely, the car has to come up with a set of actions (brake, swerve, brake+swerve, accelerate, sound horn, ...). None of these actions seem to have been attempted.
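
Roughly what that step amounts to, with made-up candidates and a toy risk score standing in for the real evaluation: enumerate the options, score each against the predicted trajectories, pick the least bad one.

```python
# Planning sketch: enumerate the candidate actions listed above, score each one,
# and pick the least risky. The candidates and the toy risk model are invented;
# the structure is the point.
from typing import Callable, List, NamedTuple

class Candidate(NamedTuple):
    name: str
    brake: float   # 0..1 braking effort
    steer: float   # lateral command, arbitrary units
    horn: bool

CANDIDATES: List[Candidate] = [
    Candidate("brake_hard",       1.0, 0.0, False),
    Candidate("brake_and_swerve", 0.7, 0.3, False),
    Candidate("swerve_only",      0.0, 0.4, False),
    Candidate("sound_horn",       0.0, 0.0, True),
    Candidate("continue",         0.0, 0.0, False),
]

def choose_action(risk_of: Callable[[Candidate], float]) -> Candidate:
    """Pick whichever candidate manoeuvre carries the lowest predicted risk."""
    return min(CANDIDATES, key=risk_of)

# Toy risk model, illustration only: anything that doesn't brake is considered risky.
print(choose_action(lambda c: 1.0 - c.brake).name)   # -> brake_hard
```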

Execution: the car skids on rain/snow/oil; swerving to avoid one crash triggers a second, etc. No actions appear to have been executed, so unless the car's computer couldn't communicate with the steering/engine/brakes, this is not a failure point.

We suspect, then: interpretation and anticipation. Or the sensors got an echo but the system discarded it amongst all the other inputs it had to deal with in a limited time. That is "prioritisation".
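
If that is what happened, it would look something like the sketch below: a real object whose classification confidence never clears the threshold simply gets filtered out before the planner sees it. Labels, confidences and the threshold are all invented.

```python
# "Prioritisation" sketch: a detection whose confidence never clears the threshold
# is dropped before the planner ever sees it. Every value here is made up.
from typing import List, NamedTuple

class Detection(NamedTuple):
    label: str
    confidence: float   # 0.0 .. 1.0

def filter_for_planner(detections: List[Detection],
                       threshold: float = 0.8) -> List[Detection]:
    """Only sufficiently confident detections get passed on; the rest are discarded."""
    return [d for d in detections if d.confidence >= threshold]

frames = [
    [Detection("unknown", 0.35), Detection("vehicle", 0.95)],
    [Detection("bicycle", 0.55), Detection("vehicle", 0.96)],
    [Detection("pedestrian", 0.70), Detection("vehicle", 0.94)],
]
for i, frame in enumerate(frames):
    kept = [d.label for d in filter_for_planner(frame)]
    print(f"frame {i}: planner sees {kept}")
# The flickering, low-confidence object never reaches the planner at all.
```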

We humans make exactly the same mistakes all the time. Mostly you get away with it. When you don't, well, it could be a skid and a dent, or you could be sliding down the crash barrier on a motorway, passenger and driver both trying to keep that steering wheel straight, everyone in the car screaming, thinking they are all going to die. Such events happen with all too much regularity across the country, and generally, unless fatal, nobody gives a fuck. Now it's done by software, it gets a lot of press. We shall have to see what the outcome is.

If it was a software/system error, then it can be fixed. Uber can add a new scenario to test their software on, even before running it in a real car, and the limited set of AV manufacturers (Waymo, GM, Ford, ...) can use the same data in qualifying their software. Everyone can learn from it and repeats can be avoided. 
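
What "adding a new scenario" might look like in practice: the recorded data becomes a permanent, pytest-style regression test that every future build has to pass. Everything below, numbers included, is a stand-in for real replay output.

```python
# Regression-test sketch (pytest style, every name and number invented): the
# recorded scenario becomes a permanent test case for every future build.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def min_separation(car_path: List[Point], pedestrian_path: List[Point]) -> float:
    """Smallest distance between car and pedestrian over a replayed scenario."""
    return min(math.hypot(cx - px, cy - py)
               for (cx, cy), (px, py) in zip(car_path, pedestrian_path))

def test_replayed_scenario_keeps_clear_of_pedestrian():
    # In reality both paths would come from replaying the recorded sensor log
    # through the current build; here they are hard-coded stand-ins.
    car_path = [(0.0, 0.0), (10.0, 0.0), (18.0, 0.5), (24.0, 1.5)]
    pedestrian_path = [(30.0, -4.0), (30.0, -2.6), (30.0, -1.2), (30.0, 0.2)]
    assert min_separation(car_path, pedestrian_path) > 2.0
```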

That's if we want that, and if it is avoidable. The alternative is to blame the victim, exonerate the software, and carry on promising a safe utopia, now qualified with "some people may still die".

That's a major change, as it is accepting that death is inevitable in driving in a way we don't accept it for trains and planes. We do make that distinction for cars today, but that's because we all believe that "we" don't make mistakes, or that the personal freedom of the "right to drive across town" is better than safety for all.

This death is going to have to make us spell out our priorities.

For now though, let's add Elaine Herzberg to the list of people killed by cars. She is one of the few to get a mention beyond the local press. That doesn't make her death any better.
