The United States has thousands of workforce development and training programs, run by the public, social, and private sectors. Some are excellent; others, not so much. The problem is that we don’t know which are which.
According to the Georgetown University Center on Education and the Workforce, spending on programs in the U.S. for those not going to four-year colleges — everything from federal and state jobs initiatives to on-the-job training, certifications, community college, and employer training — is at least $300 billion a year. But according to the World Bank, only 30 percent of youth employment programs are successful, and many of those offer only marginal benefit; the rest have no positive effect at all.
Most existing training programs do try to assess their effectiveness. Many measure cost per student. Some measure job placement rates. A minority track on-the-job retention. These metrics are useful but miss the big picture, in part because they mistake a program’s cost for its value.
Think about it. If a program has a low cost per student but fails to actually help people forge a solid career, then the fact that the failure is cheap does not make it any less of a failure. Conversely, some programs may promise high rates of job retention, but at such a high cost per student that the program proves impractical or impossible to scale.
Conducting an accurate cost-benefit analysis requires incorporating not just costs and job placement rates but also how participants fare after they leave the program. We need to adopt something similar to a “total cost of ownership” (TCO) analysis. Now common in industry, TCO considers both direct and indirect costs over time. Applying a form of TCO to workforce programs makes sense because, instead of concentrating on inputs (in the form of spending), this approach emphasizes outcomes (in the form of long-term results).
For the last two years we have been implementing Generation, a youth employment program that is part of the McKinsey Social Initiative. So far, Generation has served nearly 10,000 young people in five countries: India, Kenya, Mexico, Spain, and the United States. As we sought to measure Generation’s results, we began to understand the limitations of current practice.
We developed a new metric — cost per employed day (CPED) over the first six months — that we believe better defines how well employment programs work.
Here’s an example. Program X serves 1,000 students at a cost of $1,000 each, or $1 million total. Five hundred individuals are placed into work (a 50 percent “job placement” rate), and they stay employed for an average of 60 days in the first six months.
That adds up to 30,000 days on the job, at a cost of $33 per employed day.
Program Y, in contrast, has an up-front cost of $2,000 per student, but a placement rate of 80 percent, and graduates stay on the job for an average of 120 days. That comes out to 96,000 working days, or $21 per employed day.
Program Y, which at first blush looks twice as expensive as Program X, provides far more value in terms of helping participants find and keep gainful employment. At Generation, the CPED figure varies depending on the market, ranging from about $5 in India to $26 in the United States.
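The arithmetic behind the two examples can be sketched in a few lines of code. The function name and structure here are our own illustration of the calculation described above, not part of any program's actual reporting tools:

```python
def cost_per_employed_day(students, cost_per_student, placement_rate,
                          avg_days_employed):
    """Total program cost divided by total days participants spent employed
    (over the first six months, per the CPED definition above)."""
    total_cost = students * cost_per_student
    employed_days = students * placement_rate * avg_days_employed
    return total_cost / employed_days

# Program X: 1,000 students at $1,000 each; 50% placed; 60 days employed on average.
cped_x = cost_per_employed_day(1000, 1000, 0.50, 60)   # about $33 per employed day

# Program Y: 1,000 students at $2,000 each; 80% placed; 120 days employed on average.
cped_y = cost_per_employed_day(1000, 2000, 0.80, 120)  # about $21 per employed day
```

Despite costing twice as much per student up front, Program Y delivers each employed day at roughly two-thirds the cost.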
Adopting more-accurate measures of success increases accountability. And accountability drives results.
For example, once Generation managers realized the power of CPED, they used it to make operational improvements. On the basis of what we learned from CPED, we began to work more closely with employers to track retention rates, and we increased our emphasis on mentoring in the first days on the job.
Generation is also developing tools to improve data collection and management. While the data needed to make comparisons with other job training programs does not yet exist, our sense is that using CPED would reveal tens of billions of dollars in inefficient spending, in the form of programs with subpar CPED performance.
Perhaps the biggest challenge to widespread use of CPED is that workforce development programs are fragmented, with thousands of providers and almost as many ways of doing things. That makes getting basic information next to impossible. And because reporting requirements vary from place to place, practitioners spend an inordinate amount of time fulfilling compliance obligations that may be pointless.
CPED, by contrast, provides a simple and effective way to measure performance. For it to be adopted more widely, or even to become standard, all programs would need to collect data on cost per student, job placement, and retention. There should be a centralized database in which this information can be gathered and then easily accessed. Funders could help by adopting CPED and mandating that programs collect the necessary data.
Despite the promise shown by CPED, we have significant work ahead to improve this new metric and make it the standard across training programs. Today, for instance, many programs would struggle to measure CPED at the three-month mark, let alone at the six-month mark. Our hope is that once Generation and other programs take this next step, we can extend the timeline for CPED, and perhaps even incorporate wages — both of which would make CPED a richer, even more accurate metric. While CPED can continue to be improved, it is a big step in the right direction and can help us better measure the effectiveness of worker training programs.
“What gets measured gets managed” has become a cliché. Like many clichés, this one earned its status because there is a large element of truth to it. In a world in which 73 million young people are unemployed and over 200 million more struggle in unstable or dead-end jobs, it is surely possible to do much better. Data and metrics are part of the solution.