ISHN Guest Blog

Originally posted at Phil La Duke’s Blog:
philladuke.wordpress.com/2013/04/29/a-storm-in-texas/

Nearly every safety professional worth his or her salt has been told that he or she needs to look at both leading and lagging indicators; it's good advice. In fact, it's advice I've given many times in articles and speeches over the years. But in my last post (two weeks ago: I spent the last week at a customer site, and between the travel travails I just couldn't bring myself to hammer out a post; deepest apologies to my fans and detractors alike) I questioned the value of tracking (not reporting or investigating, mind you, just tracking) near misses.

Well, as you can imagine, the weirdos, fanatics, and dullards came out in droves to sound off and huff and puff about things I never said (reading comprehension skills are at a disgraceful low these days). Not everyone who reads my stuff is a whack-job, however, and some of the cooler heads insisted that tracking near misses was important because near miss reporting is a key leading indicator; it's not…and it is, but like so much of life, it's complicated.

Near misses in themselves aren't leading indicators; they are things that almost killed or injured someone, and most importantly, they are events that happened in the past. Not that everything that happened in the past must automatically be counted as a lagging indicator, but unless you still cling to the idea proffered by Heinrich that there is a strict statistical correlation between the number of near misses and the number of fatalities, near misses are no more a leading indicator than your injury rate, lost work days, or first aid cases.

They simply tell you that something almost happened, and nothing more. 

Now some of you might try to argue that if you have ENOUGH near misses you are bound to eventually have a fatality, but that doesn't hold up to careful scrutiny. Leading indicators are often expressions of probability, and like the proverbial coin that is tossed an infinite number of times, the probability of the outcome does not change with the frequency of the toss. If you were to toss the coin 400 times and it came up tails every time, the probability that the 401st toss comes up heads is still 50:50.
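If you want to see that for yourself, here's a quick Python sketch (my own toy illustration, not from the original post; it conditions on a five-tail streak as a stand-in for the 400-toss example, since a 400-tail run is far too rare to sample):

```python
import random

def p_heads_after_tail_streak(streak=5, n=1_000_000, seed=42):
    """Empirical P(heads | the previous `streak` tosses were all tails)."""
    rng = random.Random(seed)
    run = 0        # length of the current run of consecutive tails
    after = []     # outcomes seen immediately after a full streak of tails
    for _ in range(n):
        heads = rng.random() < 0.5
        if run >= streak:
            after.append(heads)
        run = 0 if heads else run + 1
    return sum(after) / len(after)

print(p_heads_after_tail_streak())  # hovers around 0.5: the streak tells you nothing
```

However long the streak you condition on, the empirical answer keeps coming back to 50:50; frequency doesn't bend probability.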

So does knowing that tracking near misses doesn't really shed any light on what is likely to happen mean we should stop investigating near misses?

Certainly not.

But we really do need to stop thinking that the data is telling us things that it isn't. On the other hand, near miss reporting is indeed a leading indicator, if we accept (as I do) that people who report near misses: a) are more actively engaged in safety day-to-day (and I suppose someone could argue that this doesn't necessarily follow) and b) get better at identifying hazards the more near misses they report (again, this is a leap of faith, but I believe it to be true in most cases). So if you want to gauge the robustness of your safety process, I suppose the level of participation in near miss reporting is a good indicator.

The whole exercise got me thinking about indicators, and how often safety professionals (and everyone else on God’s green Earth for that matter) tend to be misled by data because of the erroneous belief that the data is saying things that it isn’t.

Causefusion

Regular readers of my blog will recognize the concept of “causefusion,” a term coined by Zachary Shore in his book Blunder: Why Smart People Make Bad Decisions to describe how people mistake correlation for cause and effect.

According to Shore, causefusion works something like this [1]: people who floss their teeth live longer than people who don't floss or who floss irregularly; therefore flossing your teeth makes you live longer. It makes sense, right? Yes, except that it is wrong. There are other possibilities for this correlation; for instance, isn't it possible that people who are more interested in their health overall might be more likely to floss regularly?
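To make the confounding concrete, here is a toy simulation (the numbers and the “health-consciousness” variable are invented purely for illustration): a hidden trait drives both flossing and lifespan, flossing itself does nothing, and yet the flossers come out several years ahead in the data.

```python
import random

def floss_study(n=100_000, seed=1):
    """Toy causefusion model: one hidden trait drives both outcomes."""
    rng = random.Random(seed)
    flossers, non_flossers = [], []
    for _ in range(n):
        health_conscious = rng.random() < 0.5
        p_floss = 0.8 if health_conscious else 0.2               # trait -> flossing
        lifespan = rng.gauss(82 if health_conscious else 76, 5)  # trait -> lifespan
        (flossers if rng.random() < p_floss else non_flossers).append(lifespan)
    return sum(flossers) / len(flossers), sum(non_flossers) / len(non_flossers)

floss_avg, no_floss_avg = floss_study()
print(f"flossers: {floss_avg:.1f} years, non-flossers: {no_floss_avg:.1f} years")
# Flossers come out roughly 3.5 years ahead, yet flossing has zero effect
# in this model; the hidden trait alone produces the correlation.
```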

In a world where eager safety professionals provide data to operations people who are hungry for quick fixes, causefusion happens a lot, and it's a real danger because it leads us away from the true causes of injuries and may blind us to real shortcomings in our processes.

Another way that we can be misled by indicators is the paradigm effect. When we think of the word “paradigm” we think of the definition “a typical example” or “viewpoint,” but in the world of science there is another, lesser-known definition: “a worldview underlying the theories and methodology of a particular scientific subject.”

Joel Barker pointed out how damaging paradigms (in the scientific sense) can be. Barker believed that in many instances a worldview is held so powerfully that any new evidence that does not support it is ignored. Consider the dangers of ignoring critical new information relative to worker safety because you believe in a particular tool or methodology so strongly that you can't even consider another viewpoint.

A third way that we mislead ourselves is when we see patterns that aren't there. This phenomenon is wonderfully described in another book that I really believe is important to the world of safety, Why We Make Mistakes: How We Look Without Seeing, Forget Things in Seconds, and Are All Pretty Sure We Are Way Above Average by Joseph T. Hallinan.

According to Hallinan (and the latest brain research supports his contention), the human brain tends to see patterns even where there are none. So in cases where safety professionals desperately seek answers and are under pressure to initiate action, the temptation to see patterns where there are none can be extreme.
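Here is a toy demonstration of how easily noise reads as a pattern (the incident counts and the window are made up for illustration): scatter a dozen injuries at random across a year and see how often three of them land within a single week purely by chance.

```python
import random

def apparent_cluster_rate(injuries=12, days=365, window=7,
                          trials=50_000, seed=7):
    """Fraction of simulated years in which three or more randomly
    timed injuries fall within some `window`-day span."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dates = sorted(rng.randrange(days) for _ in range(injuries))
        if any(dates[i + 2] - dates[i] < window for i in range(injuries - 2)):
            hits += 1
    return hits / trials

print(apparent_cluster_rate())  # roughly one simulated year in five
```

Purely random incidents produce a tight "cluster" in roughly one year out of five, which is exactly the kind of coincidence a pressured brain will happily promote to a trend.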

Perhaps the most misleading indicator is one of the most common: zero recordables. Too often safety professionals (and operations as well, for that matter) see a streak of zero recordables as evidence that they are at far less risk of injuries and fatalities than they actually are. This isn't to say that they AREN'T at less risk, but there isn't anything more than a correlation between the two; they might be good, but they are just as likely to be lucky.
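A bit of back-of-the-envelope arithmetic shows how easily “zero” can be luck. Assuming, purely for illustration, that recordables arrive as a Poisson process averaging one per year:

```python
import math

# Illustrative only: if a site's true underlying risk works out to an
# expected one recordable per year (a Poisson assumption with a made-up
# rate), the chance of a spotless, zero-recordable year is e**-1, or
# about 37%, with no actual improvement in the underlying risk.
print(f"P(zero recordables in a year) = {math.exp(-1):.0%}")  # 37%
```

More than a third of the time, a site that hasn't improved at all still posts a perfect year.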


[1] The example is mine and mine alone; don't get all huffy and bother Shore.