Total recordable incident rate: An overview

The Total Recordable Incident Rate (TRIR) has become a universal metric. For those who are not familiar with the TRIR calculation, here is a brief review. TRIR takes the total number of OSHA Recordable incidents, multiplies that number by a normalizing figure of 200,000 (the number of hours 100 employees would log working 40 hours per week for 50 weeks), and then divides the result by the total hours worked at the site, or across the group of sites, being analyzed.

The calculation outlined above ultimately looks like this: TRIR = (Number of Recordable Incidents x 200,000) / Number of Hours Worked.
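To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name and the sample figures are illustrative assumptions of mine, not taken from any particular site.

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Total Recordable Incident Rate, normalized to 200,000 hours
    (100 employees x 40 hours/week x 50 weeks)."""
    return recordable_incidents * 200_000 / hours_worked

# Illustrative figures only: 4 recordables over 500,000 hours worked.
print(round(trir(4, 500_000), 2))  # 1.6
```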

If you ask a group of executives or management personnel at a company to point to a single metric that outlines their safety performance, nine times out of ten they are going to reference the company’s Total Recordable Incident Rate. Furthermore, if you ask that same group to benchmark their safety performance both internally (against other sites) and externally (against other organizations), they are more than likely going to compare a single number: the TRIR. And it’s not just our own organizations that use the TRIR as the “gold standard” indicator of safety performance. Regulatory bodies have standardized the TRIR calculation to benchmark across industries and to identify potential problem areas for additional inspection or investigation. The simplicity of a single-number metric, together with that regulatory standardization, has influenced the boardroom and turned the TRIR into the most widely used and accepted metric in the safety profession today. In this article, I will describe one fundamental problem with the TRIR and provide some concrete examples to outline why I believe the Severity Based Incident Rate might be a better benchmarking metric.

 

Problems with the TRIR

In recent years, safety professionals and safety research groups have begun to question the validity of the TRIR metric for many different reasons. I believe, as do many who are questioning the metric, that the TRIR has a place in the analysis of safety performance. However, as an industry, I feel we have put too much attention on, and trust in, a single number without taking a more critical look at some of its fundamental problems. In this article I’d like to focus on one problem in particular: the TRIR metric, as calculated today, carries too little information to be considered an adequate benchmarking metric.

As many of you are aware, there are different categories of OSHA recordable incident. The OSHA 300A form does a good job of outlining those categories. For clarity, I have listed them below:

  • Other Recordable cases (medical treatment cases, or anything that doesn’t fall into the categories below)
  • Days Away cases
  • Restricted or Job Transfer cases
  • Fatalities

I list these to illustrate the point that the definition of an OSHA Recordable case covers a wide range of injury outcomes. Take the following two examples. Outcome A: an employee is using a box cutter, the box cutter slips and cuts his/her hand, and the employee receives three stitches (most likely an Other Recordable case). Outcome B: an employee working at heights with no fall protection slips, falls 20 feet, and tragically passes away (a Fatality case).

One of the main problems with the TRIR calculation today is that these two outcomes are treated the same: they both count as one recordable. While we never want to see an employee get hurt, I think we would all agree that we would much rather see outcome A at our facility than outcome B. This is what I meant when I said earlier that the TRIR calculation doesn’t carry enough embedded information: every OSHA Recordable, regardless of outcome severity, gets counted the same. This lack of information can distort our view of safety performance. Assuming we would all rather see outcome A at our facility than outcome B, shouldn’t that be reflected in the calculation when we are looking at safety performance? But how do we embed more information in the incident rate calculation?

 

Embedding More Information

The solution is to use a Severity Based Incident Rate (SBIR). By weighting incident outcomes by severity, we embed more information into the incident rate calculation and create a more appropriate stratification for benchmarking. See the example below:

 

| Site   | Hours   | Injuries | Recordables | Fatalities | Days Away/Restricted | Other Recordables | First Aids | TRIR | SBIR |
|--------|---------|----------|-------------|------------|----------------------|-------------------|------------|------|------|
| Site A | 698,122 | 42       | 6           | 0          | 0                    | 6                 | 36         | 1.72 | 8.6  |
| Site B | 616,764 | 14       | 5           | 1          | 2                    | 2                 | 9          | 1.62 | 20.6 |
 

In the example above, we have two sites with very similar hours worked and very similar recordable counts, but dissimilar recordable outcomes. As the table shows, Site B is experiencing much more severe recordable injuries. In the traditional TRIR view, Site B is performing better than Site A, with a TRIR of 1.62 compared to 1.72.
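Both TRIR figures follow directly from the formula above: Site A’s rate is 6 x 200,000 / 698,122 ≈ 1.72, and Site B’s is 5 x 200,000 / 616,764 ≈ 1.62.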

However, when we inject information into this calculation through weighting, we get a much different picture. Because of the difference in incident outcomes between the two sites, Site B’s incident rate skyrockets past Site A’s. Taking incident severity into account when analyzing our incident rate embeds more information into the rate and allows us to stratify sites more appropriately, so that potential red flags bubble up to the surface.
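For readers who want to experiment with this, here is a minimal sketch of a severity-weighted rate in Python. The weights below are purely illustrative assumptions of mine (the table above does not state the weights behind its SBIR column), so the numbers this sketch produces will not match 8.6 and 20.6 exactly; the point is that a fatality or days-away case moves the rate far more than a first aid or other recordable does.

```python
# Illustrative severity weights -- assumptions for this sketch, not the
# weights used to produce the SBIR figures in the table above.
SEVERITY_WEIGHTS = {
    "fatality": 50,
    "days_away_restricted": 5,
    "other_recordable": 2,
    "first_aid": 1,
}

def sbir(case_counts: dict, hours_worked: float) -> float:
    """Severity Based Incident Rate: severity-weighted cases per 200,000 hours."""
    weighted_cases = sum(SEVERITY_WEIGHTS[category] * count
                         for category, count in case_counts.items())
    return weighted_cases * 200_000 / hours_worked

# Case counts taken from the table above; with these illustrative weights the
# ranking flips, and Site B comes out well above Site A.
site_a = {"fatality": 0, "days_away_restricted": 0, "other_recordable": 6, "first_aid": 36}
site_b = {"fatality": 1, "days_away_restricted": 2, "other_recordable": 2, "first_aid": 9}

print(round(sbir(site_a, 698_122), 1))  # 13.8
print(round(sbir(site_b, 616_764), 1))  # 23.7
```

However an organization sets the weights, the design choice is the same: more severe outcome categories contribute more weighted cases to the numerator, while the 200,000-hour normalization stays identical to the TRIR calculation.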

Efficient allocation of resources for an EHS department is critical for the improvement of safety management over time. If you are a Regional EHS Manager, a Director, or a V.P., where are you going to focus your efforts? In the example above, when we look at the traditional TRIR metric, it isn’t clear. However, when we look at the Severity Based Incident Rate, we get a much clearer picture. Adding the Severity Based Incident Rate to your organization’s metric toolbelt is simple, because it uses categorizations that are already being collected per OSHA requirements. As an organization’s safety management system matures, it could also consider weighting incidents by their potential outcomes.

Embedding more information into the incident rate calculation using the Severity Based Incident Rate provides organizations with a more accurate internal benchmarking metric and helps organizations better understand where they should focus their limited EHS resources.

 

Wrap-up

As I mentioned above, this is not an argument for throwing out the Total Recordable Incident Rate. Nor is it an argument that the Severity Based Incident Rate is the only metric we need to define safety performance. Ultimately, both are lagging indicators, and as an industry we need to focus our efforts on leading and monitoring metrics, rather than lagging metrics, to identify potential risk before an incident occurs. That being said, I believe the Severity Based Incident Rate is a step in the right direction. It’s worth taking a second look at the traditional metrics we have been using to measure safety performance to see whether they are showing us what we hoped to see. Utilizing the Severity Based Incident Rate in your organization can help embed more information into your incident rate analysis, which will improve internal benchmarking and ultimately help you allocate your scarce EHS resources more efficiently to improve safety outcomes.

 

If you have questions or want to learn more about any of the information outlined in this article, please reach out to me at angelodcianfrocco@gmail.com.