Thought Leadership

In 1980, Dov Zohar addressed various implications of assessing safety climate through a 40-item questionnaire in order to improve safety-related outcomes. Zohar wrote one of the first scholarly works on safety climate, and I was intrigued. His work led to more research on my part, at a time when there were many heated debates about assessing broader forms of organizational climate, not safety alone. Among the many arguments, questions were raised about the validity and ethics of gathering culturally based information through surveys alone. Many believed that culture assessments should rely largely on direct observation and interviews rather than on surveys. In the late 1980s, I began using various self-developed safety perception surveys, which proved quite useful.

Through the 1990s, the climate-culture debates continued and became more pronounced. Even so, surveys came into more common use as a way to better understand one's culture for safety, primarily through climate surveys. Since that time, I have seen several common mistakes made when examining one's climate and culture for safety. Avoiding these pitfalls can make for a more accurate and useful understanding of your safety climate.

1. The response scale. Arguments abound over how many response choices a survey should offer for the level of agreement or disagreement with each statement. Some surveys use 4, 5, 6, 7, or even more response choices per statement. Many researchers, however, believe that an even number of choices, with no "uncertain" or "neither agree nor disagree" option in the middle, works best. This is especially true for climate surveys, as opposed to political polls. Like many, I prefer a 4-point scale because it forces respondents to take a position rather than resort to a neutral response. In safety, most people have strong opinions that need to be reflected in the data. Wider scales, from 6 to 10 points, provide too fine a gradient and make interpretation even more difficult. Can you imagine the frustration of a line or field worker having to choose along a 10-point continuum? My head would hurt!
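To make the idea concrete, here is a minimal sketch of how a 4-point forced-choice scale might be coded for analysis. The scale labels and the handful of responses are purely illustrative assumptions, not taken from any particular survey instrument.

```python
# Hypothetical 4-point forced-choice scale: no neutral midpoint is offered.
SCALE_4PT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Agree": 3,
    "Strongly agree": 4,
}

# Hypothetical answers to a single survey statement.
responses = ["Agree", "Strongly agree", "Disagree", "Agree"]

scores = [SCALE_4PT[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"Mean score: {mean_score:.2f}")  # values above 2.5 lean toward agreement
```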

2. Your statements. Using the right kinds of statements or questions is a necessity if you want useful information from your employees. Statements that address front-line leadership, openness in communications, and tools, equipment, and facilities, along with a host of other critical topics, are required if you want a well-painted picture and the ability to intervene in a cogent way.

For example, if you want to understand how front-line leadership communicates with its workers and the degree of two-way communication, your statements need to get at that understanding. Don't add to your employees' confusion by asking more than one question in a single statement. Ask one question at a time, and be wary of the word "and," because it often turns one question into two. "Double-barreled" questions like these produce ambiguous data.

3. Questionnaire length. Some surveys are too short, others too long. When they're too short, they typically don't capture enough important information to give a robust reflection of your culture for safety.

Generally, I have seen surveys of only 10 or 20 statements used in an attempt to cover the multiple dimensions (categories) that should reveal what people believe about their organization and its culture for safety.

In contrast, lengthier surveys with 60 to 80 or more items may take 25 minutes or longer to complete. These surveys are unwieldy and bothersome for the user. Typically, a large number of survey items can be collapsed into fewer items that prove more useful. Who would want to work through such a questionnaire? Respondents tire quickly and end up giving useless or thoughtless responses because the survey is too burdensome to complete.

4. Aggregating data. Another common problem I have seen in working with safety climate surveys is the way people aggregate their data. For example, if you work in a plant and simply lump all the data from management and the workforce together, the result is rather poor data. Those high-level results may allow management to feel good about their support, but the data is quite limiting.

If, however, you separate the data by group, comparing senior management, supervisor, and employee perceptions, you can begin to examine the perception gaps between groups; this lower-level data becomes increasingly useful and actionable.

If, for example, management believes it is very open to receiving information and feedback from workers about hazards and other concerns, but employees don't feel nearly the same, you have work to do to close that particular gap. There are times to aggregate data in collective ways, such as when comparing and ranking plants or locations, but on most occasions organizational data should not be compiled at a higher level, with a view from 30,000 feet. A rough sketch of this kind of gap comparison follows.
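The sketch below assumes responses have already been scored on a 4-point scale and tagged with each respondent's group; the column names ("group", "item", "score"), the group labels, and the tiny data set are all hypothetical, and it uses the pandas library purely for illustration.

```python
import pandas as pd

# Hypothetical scored responses to one survey statement, tagged by respondent group.
data = pd.DataFrame({
    "group": ["Senior mgmt", "Senior mgmt", "Supervisor", "Supervisor", "Employee", "Employee"],
    "item":  ["Open communication"] * 6,
    "score": [4, 4, 3, 3, 2, 3],
})

# Mean score per item, split by respondent group rather than lumped together.
by_group = data.groupby(["item", "group"])["score"].mean().unstack("group")

# The gap between management and front-line perceptions flags where work is needed.
by_group["gap_mgmt_vs_employee"] = by_group["Senior mgmt"] - by_group["Employee"]
print(by_group)
```

With real survey data, a large positive gap on an item like the one above would point to exactly the kind of management-versus-workforce disconnect described here.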

Make it better…

These are just a few of the common mistakes that can lead to less-than-desirable safety data; avoiding them can make a huge difference in better understanding your culture for safety. This is particularly true if you are attempting to construct your own survey without the support and validation of a group of experts. Making your survey data more useful and actionable will allow you to intervene more cogently, with appropriate support, improvements, training, and other distinct actions that leave a positive impression on your safety culture. As I have said for many years, it matters what you measure and how you measure it.