5.2.3 Decide the level of rigor feasible given organizational capacity and operational circumstances

Many organizations in the YEO field want to demonstrate their programs’ effectiveness and help fill the field’s evidence gap, and rigorous impact evaluations are widely perceived as the most valid way to do so. However, not all organizations are prepared for rigorous evaluation from a programmatic, budgetary, or capacity standpoint.

In some cases, too much rigor can complicate implementation and delay evaluation, and operational circumstances may undermine an evaluation design altogether. Two case studies from the United Kingdom Department for International Development’s (DFID) Financial Education Fund (FEF), which supported educational projects aiming to help African citizens increase their financial knowledge and ability, illustrate some of the benefits and challenges of rigor.

Noteworthy Results: Impact Evaluation Case Studies from the Financial Education Fund

A randomized control trial (RCT) of the Nakekeli Imali program targeted a population sample of 11,719 South African mineworkers (5,013 of whom were trained) with access to a wide array of formal and informal financial products. The challenge came with the treatment group: evaluators found it difficult to “catch” the miners who were supposed to participate, and ultimately only 61.6 percent of the treatment group actually attended the financial literacy workshops. Because the design involved only impact evaluation, with no process evaluation, there was no other type of evaluation to supplement the RCT. This is one of the challenges that can arise when an evaluation is too rigorous for the operational circumstances.

Another case study involves a cross-sectional time-series evaluation of Camfed’s Financial Education Program for young women in rural Zambia. For ethical reasons, the organization did not want to randomize. It examined a sample of 10,701 young women in rural areas of 8 districts, selecting two “comparable” comparison villages and two intervention villages. After the baseline survey was completed, however, the evaluators realized that although the village demographics appeared the same, the baseline survey revealed differing education and income levels between the villages. They decided to switch to a longitudinal study, but the baseline survey had not anticipated that eventuality, so surveyors had not collected the contact information such a study would require.

The two case studies reflect the real-world operational challenges that can complicate rigorous evaluations. In both cases, the organizations learned valuable information about their programs and beneficiaries: Camfed, for example, learned about the characteristics of the young women who participate in its program and deepened its understanding of impact evaluation. In both cases, the impact evaluations cost over a quarter of the total program budget (27 percent for the Nakekeli Imali RCT and 40 percent for Camfed’s evaluation, all directed toward surveying). For more on the cost implications of RCTs, see “Consider time and cost investments before beginning.”