Were any of these paradigm shifts influenced by scientific research? Did any change in perspective adopt and adapt the best in the prior approaches as the result of objective and empirical research evidence? Were any components of a proposed "new" approach to safety management dropped because of a lack of research support?
I think the answer to each of these questions is "no." Am I right?
Be honest. Change in safety management is rarely driven by empirical and objective research data. Instead, we convince ourselves to try a new safety management technique after reading a slick promotional brochure, hearing a sales pitch at a safety conference, or skimming a "pop psychology" book.
In this article, the third of a three-part series on critical research principles for safety, I offer suggestions for how to apply research to your safety management needs.
Solicit assistance

After reading last month's column, you're convinced (I hope) that specific changes in methods of managing the human dynamics of safety should be based on empirical evidence rather than common-sense opinion. But you're probably also convinced you don't have the time for research.
You barely have time to skim the variety of management approaches available and select the one that sounds the best.
Why not tell social or behavioral scientists about your evaluation needs? As a researcher in a university psychology department for almost 40 years, I assure you behavioral and social scientists are constantly looking for useful empirical questions they can address with their students. Of course, it's advantageous if you live close to a college or university, so you can make a personal request and visit the research group periodically for an update. But this is not necessary, given communication channels such as email. What is necessary is that you formulate your research question(s) appropriately.
How to attract interest

You need to "sell" your idea to a researcher by using language that shows respect for the scientific method while also sparking the scientist's interest. This three-part series aims to help you do just that.
This series has revealed how researchers talk. Last month, I defined the scientific method as empirical, objective, self-correcting and progressive. In July (part one), I defined three basic criteria required for a cause-and-effect relationship, and explained why these criteria cannot be reached with the standard "accident investigation" (which should be called an "incident analysis"). Researchers are skeptical about assertions of "cause-and-effect" relationships, and they insist on defining concepts and ideas in operational terms. "Operational" means the factor or program component is explained with objective language that can be similarly understood by everyone who receives the explanation. It's essential to state your research question(s) in operational terms.
And remember, researchers love theory. They like to test a theory, particularly when their data could show support of one theoretical concept over another. Plus, researchers like to organize and interpret their data within a theory or principle. So attempt to relate your research question(s) to a theoretical concept or principle.
It's best to find one or more relevant theories from the literature, but feel free to apply your own theoretical notions to your research question(s). It's likely the researcher will be able to link your conceptual analysis to one or more other theories previously proposed and tested in the research literature.
Do it yourself

Before wrapping up this treatise on the value of research for safety pros, I must make one more plea for conducting your own management-related research. My ISHN article last month reviewed a straightforward DO IT research process. An intervention target is operationally defined, and then objective measures of this target are taken systematically before, during and after a particular management intervention is put into effect.
Intervention evaluation can be as simple as counting the occurrences of a certain behavior or event throughout an intervention process. For example, every time I board an airplane I give the flight attendant an "Airline Lifesaver" card that contains a safety-belt reminder message I want read at the end of the flight. Half the cards promise a prize for reading the safety-belt message. At the end of the flight, I complete a data log to track the intervention condition (prize vs. no prize), airline, length of flight, gender and age of the flight attendant, and whether the announcement is read.
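For readers who keep their data log in a spreadsheet or a simple script, the counting described above is easy to automate. Here is a minimal sketch in Python; the field names (`condition`, `announced`) and the sample records are illustrative assumptions, not the author's actual data:

```python
# Hypothetical Airline Lifesaver log: each record notes the intervention
# condition ("prize" vs. "no prize") and whether the safety-belt
# announcement was actually read. Values below are made up for illustration.
flights = [
    {"condition": "prize", "announced": True},
    {"condition": "prize", "announced": False},
    {"condition": "no prize", "announced": False},
    {"condition": "no prize", "announced": True},
    {"condition": "prize", "announced": True},
]

def read_rate(records, condition):
    """Proportion of flights in a given condition where the message was read."""
    subset = [r for r in records if r["condition"] == condition]
    return sum(r["announced"] for r in subset) / len(subset)

for cond in ("prize", "no prize"):
    print(f"{cond}: {read_rate(flights, cond):.0%} of announcements read")
```

Comparing the read rate under the two conditions is exactly the kind of simple before/during/after evaluation the DO IT process calls for; no statistics package is required to start seeing whether an incentive changes behavior.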
I bet you could count reactions or results from something you do on a regular basis for safety, and track the ongoing impact or change influenced by your intervention. You could, for example, count every safety conversation you have with another person and note whether the result of each interaction is positive, negative or neutral. Such data could be helpful in learning what kinds of interpersonal communication lead to the most constructive results.
One thing is certain: tracking your safety conversations will lead to more safety conversations. Research is motivating. Tracking interventions for safety will increase the occurrence of safety-management interventions. For more than 15 years I have handed out the Airline Lifesaver card on more than 1,000 flights. I would not have kept implementing this simple intervention if I were not tracking its impact.
Bottom line: Without more safety management research, intervention programs to prevent workplace injuries will come and go according to the latest "pop psychology" fad. Research makes possible the continuous improvement of safety management programs, enabling successively better safety performance.