
Accident Investigation Analysis – Bath 7 February 2017: Summary & Comments



Richard Brown of the Rail Accident Investigation Branch (RAIB) began by explaining the origins of the RAIB, which was formed in October 2005. It was set up after the Ladbroke Grove accident inquiry recommended that there should be an independent body to investigate railway accidents. Before that, the Health and Safety Executive both set the operating rules and sat in judgement on itself, which could lead to a conflict of interest. The RAIB is modelled on the branches that investigate marine and air accidents. The purpose of its investigations is to improve safety, not to apportion blame or to carry out prosecutions. It was stressed several times that the RAIB considers witness statements to be confidential; only statements of fact and its conclusions are made available to other authorities such as the law courts.

 

The role of the accident investigator was then described – in essence, believe no one! This scepticism extends even to the investigators themselves: the lead is taken by a non-specialist, to avoid the confirmation bias that a specialist might bring.

 

The remainder of the talk was concerned with the various methodologies for carrying out accident analysis. It was suggested that these approaches could be used in other areas and, particularly as numerous anecdotes were woven into the presentation, murmurs of agreement seemed widespread.

 

Accident investigation in the UK began by adopting a ‘What, How, Why?’ line of inquiry – something now called causal analysis – which leads naturally to constructing a timeline of events. Some examples were shown of how this had been used in practice by the RAIB, building up wall charts of notes in sequence, with different columns for the different ‘actors’.

 

Academic interest in the subject has resulted in many formal approaches to accident analysis being developed, and in their categorisation as Sequential, Epidemiological or Systemic. Examples were given for each category. Sequential methods include the timeline, Fault Tree Analysis, Event Tree Analysis and Sequential Time Event Plotting. Epidemiological methods are based on the idea that an accident has both latent and active causes. A lack of training or of management oversight might not cause an accident in itself, but could be a latent factor should circumstances change from the normal or expected. The ‘Swiss Cheese Model’ and the US Navy’s Human Factors Analysis and Classification System (HFACS) were given as examples; the Australian Transport Safety Bureau has adapted the HFACS classifications to make them more relevant to the transport arena. Systemic methods claim to be more holistic, adopting a risk-management strategy; STAMP (System-Theoretic Accident Model and Processes) was given as an example.
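Of the methods listed above, Fault Tree Analysis is perhaps the easiest to reduce to arithmetic: probabilities of basic events are combined through AND and OR gates to estimate the probability of the top event. As a rough sketch only – the events and numbers below are invented for illustration, not taken from any RAIB report:

```python
# Minimal fault-tree gate arithmetic, assuming independent basic events.

def p_and(*probs):
    """AND gate: the output event requires all inputs to occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """OR gate: the output event occurs if any input occurs."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical tree: a derailment requires both a track defect AND a
# missed inspection; the track defect arises from EITHER a broken rail
# OR subsidence. All probabilities are made up.
track_defect = p_or(0.001, 0.0005)        # broken rail OR subsidence
derailment = p_and(track_defect, 0.01)    # defect AND missed inspection
print(f"P(top event) = {derailment:.4e}")
```

Real fault trees are far larger and must handle common-cause failures, which break the independence assumption used here.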

 

I was interested in the relatively new role of the RAIB, as I have always been doubtful of the role of legal counsel in historic rail accident investigations. A ‘no blame’ system is essential if safety is to be improved. The sceptical attitude of the investigator chimed with me, and I’m sure with many others, particularly those who have been involved in evaluation, diagnosis, commissioning or risk assessment. It has been similarly expressed elsewhere as the detectives’ ABC – Assume nothing. Believe nobody. Check everything. Having said that, a bit of “I wonder if ..” can be useful. I remember being asked to look at an electric fire that had ‘failed’. I decided to check the fuse in the plug. Did they have a screwdriver by any chance? A screwdriver with a triangular tip and globules of metal was produced. “You didn’t happen to push this into the socket, did you?” Not surprisingly, the ‘failure’ was at the consumer unit!

I did wonder if the field had been taken over by academics, always wanting to classify and identify systems, analysing to excess. What happens if something doesn’t fit a category? Ignore it, or make it a PhD topic? If we are getting better at making systems safer, then there will be fewer ‘data points’ to analyse in ever more ‘angels dancing on a pin’ detail. Our gut feeling as engineers is that accidents are deterministic – there is a cause. As accidents become fewer, perhaps we have to consider them as probabilistic – they will happen, we don’t know when, but there won’t be many.

 

P.S. Windrow is a North American term for material such as hay strewn in a row after harvesting or for composting. It can equally apply to the pile (row) of snow at the side of a snow-ploughed road, which is perhaps what the missing driver’s truck ran into. [From an example given during the talk.]

 
Ufton Nervet Crossing A ‘near-miss’ accident here was mentioned in the talk. The link describes an accident in which an HST 125 collided with a parked car, resulting in the deaths of the train driver, five passengers and the car driver. Considering that in 1968, at Hixon crossing, a Class 81 locomotive heading a twelve-coach train collided with a 120-ton transformer on a low-loader trailer, killing three in the locomotive cab and eight passengers, the death toll at Ufton seems very high. Did Hixon lead the railways to become complacent about train-car collisions at crossings? Do we learn the right lessons?

 

Links:
RAIB
Peter Underwood

  • While it might be a good thing for accident inspectors to 'believe nothing' without proof, there have been many accidents caused by people not believing the situation that they are in.


    It has been reported that pilots flying in cloud have doubted their artificial horizon indicators (do they ever fail?) and have as a result inverted their aircraft. A classic on the railways was in 1937, when the signalman at Battersea Park became convinced that the interlocking was faulty and that this was why he couldn't clear the signal as he wanted, so he broke open the seals and released the lock.


    I'm sure a fascinating book could be written on how easily humans panic when things are only slightly beyond the normal, and don't choose the 'most obvious' explanation but go for the fantastic or very unlikely. An ex-work colleague told me that he had been sitting in his car, parked on his sloping driveway, watching his wife working in their front garden. All of a sudden the front garden appeared to be slipping – possibly an earthquake (in Bath!). He got out of the car and ran to warn his wife. Meanwhile the car, with a slipping handbrake, continued its way down the drive, losing the driver's door to the gatepost!


  • There was actually a much more recent example on the railways (on Shap, on the WCML) where a driver in very poor weather conditions (driving rain) hadn't appreciated that his train was rolling back down the hill until he went past a green signal and saw it receding into the distance!


    RAIB Report here - assets.publishing.service.gov.uk/.../R152011_110815_Shap_Summit.pdf


    Regards


    Peter
  • Thanks for that Peter.

    I don't think that in the Shap case the driver had created an alternate reality; I suspect he just fell asleep – though that must be impossible, as we now have mathematical models to 'prove' it can't happen! Perhaps the models are the alternate reality?

    It is a subject in itself, but we create these statistical systems to convince ourselves that we are doing the right thing, yet often seem to forget that they only work for the 'general', not the 'particular'. Thus a model can do a good job of predicting how many insurance claims will be made in a year (the general), but not that Mr. Jones will claim (the particular).

    Returning to the accident, is life for drivers, (excepting shift patterns), being made too easy? Are driverless trains the way to go?

    However the Shap incident came about, it is incredible that a modern train should be running back at speed – 19th-century problems 'fixed' with better couplings and automatic brakes! Given that 'going with the flow' has been a prime requirement of railway working for over 100 years, is not having a warning system a bit surprising?
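As an aside, the 'general vs particular' point about insurance claims made above can be sketched with a toy simulation – every number here is invented purely for illustration:

```python
# Toy model of 'the general vs the particular' in statistical prediction.
import random

random.seed(1)

n_policies = 10_000
p_claim = 0.02                  # assumed chance that any one policy claims this year

# Simulate one year: each policyholder independently claims or not.
claims = [random.random() < p_claim for _ in range(n_policies)]

expected_total = n_policies * p_claim   # the 'general': the model predicts ~200 claims
actual_total = sum(claims)              # a simulated year lands near that figure

# The 'particular': for any named policyholder the model can only say
# 'probability 0.02' – it cannot tell us in advance whether claims[0]
# is True or False, just as it cannot say whether Mr. Jones will claim.
print(expected_total, actual_total, claims[0])
```

The aggregate prediction is reliable because independent errors average out over thousands of policies; no such averaging is available for a single case, which is why the model is silent about Mr. Jones.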