The president of the Association for Psychological Science recently published a column titled “The Publication Arms Race.” Dr. Lisa Feldman Barrett’s central point is that we professors are rewarded mainly for our publications, and mainly for the quantity of those publications. The number of published articles in the “right” journals is the entry ticket to a faculty position at a college or university, and the key criterion for subsequent contract renewal, tenure, promotion, and other professional honors and awards.

The criteria have become inflated (hence the “arms race” catchphrase). In the academy we reward publication quantity while hoping for other outcomes as well. Dr. Barrett laments that the numbers game may push academicians to do things that don’t really advance our knowledge but do get published, and to bypass research that might advance knowledge but is less likely to be published.

The article inspired me to reflect on a classic paper written by Dr. Steven Kerr in 1975 with the intriguing title “On the Folly of Rewarding A, While Hoping for B.” Kerr gives the example of the common university practice of rewarding research while hoping that professors will also be excellent teachers. The open secret in academia is that Dr. Kerr and Dr. Barrett are right. It certainly does happen that some professors produce research of quality as well as quantity, and are also excellent teachers. But again, the incentives are awarded for the volume of publications in the right journals, not necessarily for the quality of the contributions, nor for what happens in the classroom.

Measuring the wrong things

These examples from the academic world are representative of a broader, pervasive problem. How often do we measure, track, and reward the wrong things in the hope that we are thereby getting the right things? As Dr. Kerr said so beautifully, we are surprisingly prone to reward A while hoping for B.

Here’s an example. Do you know how your credit score is determined? I once had the shocking experience of helping my daughter buy a car, only to find that her credit score was better than mine. Please accept my assurances that I actually was then, and am now, an excellent credit risk. But by the measures applied at the time to determine my credit score and, thereby, my creditworthiness, I was not.

My daughter’s income at the time was a fraction of mine, and she had only been employed for a couple of years. No matter. You can’t be late paying a bill, regardless of the reason. That’s a ding. You can’t use a large portion of your credit card limit, even if your limit is low (your decision), you pay the balance in full each month, and you never use the revolving-charge option. Another ding. You can’t cancel credit cards, even if you sign up for one in a store to get the instant ten percent off and then cancel it (as my wife did over and over and over).

A late or missed payment is taken as an indicator that you can’t pay, never mind that you misplaced the bill and double-paid the next month. Keeping a low credit card limit (say $5k) and using $2-3k of it each month is taken as an indicator that you are close to maxing out your credit, not that you saw no reason to accept the credit card company’s offers to raise your limit. Cancelling credit cards is taken as an indicator that you can’t keep up with the payments on them.
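To make the utilization point concrete, here is a minimal sketch in Python. The 30 percent cutoff and the dollar figures are hypothetical, and real scoring models such as FICO’s are proprietary and far more elaborate; the sketch only shows how a raw balance-to-limit ratio can flag the more careful borrower.

```python
# Illustrative sketch of how a utilization-based rule can misread risk.
# The cutoff and dollar amounts are hypothetical; actual credit-scoring
# models are proprietary and use many more inputs.

def utilization(balance: float, limit: float) -> float:
    """Fraction of the credit limit currently in use."""
    return balance / limit

# Cardholder A: low limit, pays in full every month, never revolves.
# Cardholder B: high limit, carries the same absolute balance.
a = utilization(balance=2_500, limit=5_000)    # 0.50
b = utilization(balance=2_500, limit=25_000)   # 0.10

HIGH_UTILIZATION = 0.30  # hypothetical "close to maxed out" cutoff

for name, u in [("A", a), ("B", b)]:
    flag = "dinged" if u > HIGH_UTILIZATION else "fine"
    print(f"Cardholder {name}: utilization {u:.0%} -> {flag}")
```

Cardholder A, who pays in full every month, trips the flag; Cardholder B, carrying the same balance against a larger limit, does not.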

Missing the bottom line

Do you know how U.S. News and World Report determines its annual rankings of colleges and universities? Even a cursory look at the criteria and their weightings shows some A-versus-B factors. “Faculty Resources” counts for 20 percent. How is that measured? Answer: class size, faculty salary, the percentage of faculty with the highest degree in their fields, the student-faculty ratio, and the proportion of faculty who are full-time. Schools with smaller classes, which pay their faculty more, which have a higher percentage of full-time faculty holding the highest degrees, and which on average have fewer students per faculty member score best in this category. Do any of those “faculty resources” measures actually tell you anything about the quality of the faculty as teachers, the presumed primary mission of professors?

“Expert Opinion” is another 20-percent factor. If “top academics” say a college is great… well, it must be. But how is the actual teaching? “Student Excellence” is a ten-percent factor. Schools that are more selective and accept mainly applicants with high grades and high SAT or ACT scores are thereby rated as better schools. Again, what does student excellence tell you about the quality of the teaching?

None of the factors making up the other 50 percent relate directly or even indirectly to teaching excellence.
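For illustration, here is a sketch of how such a weighted composite works, using the percentages quoted above. The subscores and the residual “other factors” bucket are invented for the example.

```python
# A toy composite score using the weightings quoted above (20% faculty
# resources, 20% expert opinion, 10% student excellence, 50% everything
# else). The subscores are invented; the point is that no weighted term
# directly measures teaching quality.

WEIGHTS = {
    "faculty_resources":  0.20,  # class size, salaries, degrees, ratios
    "expert_opinion":     0.20,  # peer-reputation survey
    "student_excellence": 0.10,  # selectivity, grades, test scores
    "other_factors":      0.50,  # everything else in the methodology
}

def composite(subscores: dict) -> float:
    """Weighted sum of normalized (0-100) subscores."""
    return sum(WEIGHTS[k] * v for k, v in subscores.items())

# Hypothetical school: stellar on every ranked input...
school = {
    "faculty_resources": 95,
    "expert_opinion": 90,
    "student_excellence": 92,
    "other_factors": 88,
}
print(f"Composite score: {composite(school):.1f}")  # 90.2
# ...yet nothing in the sum observes what happens in a classroom.
```

Every term in the sum is a proxy input; none of them measures teaching.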

The numbers game

One of the constant problems with any organizational “program” is the tendency to assess it in a check-the-box, by-the-numbers way. I have seen behavior-based safety programs fall into the numbers trap. How many observation teams do we have? How many observations per week or month are conducted? How many training classes were held? Are those numbers giving us a safer workplace? I have seen some well-intentioned near-miss programs focus so much on the number of close-call incidents collected (sometimes with a required minimum) that they have little to say about the seriousness of the incidents or whether subsequent corrective actions were taken.

Or take safety incentive programs that reward accident-free weeks, months, quarters, or years. What do those rewards say about the excellence of the workforce’s safety performance, or about management’s safety leadership? You hope they relate, but this type of incentive can also induce the hiding of injuries and under-reporting.

As we are reminded by Drs. Barrett and Kerr, if you want B, be careful to actually measure and reward B, not A.