9.7 Randomizing and Matching Address Attribution and Selection Issues, Allowing for a More Accurate Understanding of Program Impact

Many in the field feel that for too long, too much emphasis has been placed on inputs and outputs rather than outcomes and impact. This is often due to time or budget limitations. However, impact evaluations are critical for determining whether interventions are really effective in achieving the desired outcomes and for advocating for YEO programs at the policy level. Impact evaluation means not just tracking what you do and how much you do, but carrying out a specific study to determine if and how your intervention directly causes the change, or impact. In other words, it means defining the difference you make compared with what would have happened if your organization had not intervened. Box 9.7.1 presents a few key issues that should be considered before designing an impact evaluation.

9.7.1 Checklist: Have You Considered the Following Issues When Planning an Impact Evaluation?

✔Is a monitoring system in place? Impact evaluations do not replace the need for good-quality monitoring. Therefore, a solid monitoring system with a well-formulated results chain, indicators, and standard data collection instruments should already be in place before moving to an impact evaluation.

✔Do your learning objectives relate to establishing causality? Impact evaluations do not answer all the different questions that an organization or program may be interested in (such as the quality of the implementation process). They answer cause-and-effect questions; that is, they show whether measured results occurred as a result of a specific intervention. Therefore, before deciding on any type of evaluation, program managers and other stakeholders should prioritize their learning objectives.

✔Are criteria for an impact evaluation met? The resources for an impact evaluation are best utilized when an intervention is strategically relevant, innovative or untested, and replicable.

✔Are the program design and operational context conducive to impact evaluation? The choice of an impact evaluation method depends on many factors. Has the intervention already started? Is there excess demand for the program? How are beneficiaries selected? Is the program delivered all at once? Ideally, an impact evaluation is planned jointly with the program and integrated into its design stage.

✔Are sufficient resources available? Timing, capacity, budget, or political constraints may create barriers to an impact evaluation. Depending on the outcomes you want to measure, it may take several years until results become available and the evaluation may cost well over $100,000. This needs to be planned for from the outset.12

Key attributes of impact evaluations include:

  • Proving attribution: Impact evaluations measure what changes in young people can be attributed to the program and what might have resulted from other factors. This helps YEO programs understand the real impact of their intervention and disentangle the effects of the program from the effects of changes in the environment. For example, if the environment of the program has worsened dramatically (e.g., because of conflict, bad weather, etc.), leading to an overall worsening in people’s living conditions, an impact evaluation may still highlight the positive impact of the intervention by comparing beneficiaries with non-beneficiaries who are even worse off.
  • Minimizing selection bias: Selection bias usually occurs when program participants and nonparticipants differ in characteristics that cannot be observed easily (dynamism, aggression, ambition, etc.). This is why any impact evaluation rests on the principle of identifying treatment and comparison groups that are as similar as possible, in both observable and unobservable characteristics.
  • Asking the right questions: Even the best-designed evaluation methods will fail to accurately determine impact if youth misunderstand questions posed in surveys or interviews. When developing questions, evaluation designers have to review their assumptions about young people and check whether or not those assumptions skew questions. Box 9.7.2 describes lessons learned about developing questions during a randomized control trial in Ghana.

Randomizing and matching are two common evaluation methods that address selection and attribution issues, though randomized designs offer higher validity. Randomized Control Trials need to be planned from the beginning of program design. The conventional evaluation approach combines a pre- and post-intervention comparison with a control group. In randomized evaluation designs, subjects (individuals, communities, etc.) are randomly assigned to the project and control groups, ensuring the two groups have the same distribution of observed and unobserved characteristics at the start of the project. Agreed-upon indicators are measured pre- and post-intervention to detect and compare differences between the project and control groups. Matching involves using data and econometric tools to construct a comparison group whose characteristics match those of program participants. Treatment and comparison groups are surveyed at the start of the project and again after project implementation. If the groups are well matched, then any statistically significant difference between the two groups on impact variables is indicative of a potential project impact.76
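To make the mechanics concrete, the short Python sketch below simulates a randomized design: eligible youth are assigned to treatment and control groups by chance, and impact is estimated by comparing the change in an outcome indicator between the two groups. The data, column names, and effect size are hypothetical, invented purely so the example runs. A matched design would replace the random-assignment step with an econometric procedure (such as propensity score matching) that selects non-participants who resemble participants on observed characteristics, and would then compare outcomes in the same way.

```python
# Minimal sketch of a randomized evaluation workflow (illustrative only).
# Column names ("youth_id", "income_baseline", "income_endline") are
# hypothetical; a real evaluation would use its own indicators and data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=42)

# 1. Random assignment: each eligible youth has an equal chance of
#    being placed in the treatment or control group.
eligible = pd.DataFrame({"youth_id": range(1000)})
eligible["group"] = rng.choice(["treatment", "control"], size=len(eligible))

# ... the program runs; baseline and endline surveys are collected ...
# Here the survey data are simulated purely so the example executes.
eligible["income_baseline"] = rng.normal(100, 20, size=len(eligible))
assumed_effect = np.where(eligible["group"] == "treatment", 15, 0)
eligible["income_endline"] = (eligible["income_baseline"]
                              + assumed_effect
                              + rng.normal(0, 10, size=len(eligible)))

# 2. Compare the change (endline minus baseline) between the two groups.
eligible["change"] = eligible["income_endline"] - eligible["income_baseline"]
treated = eligible.loc[eligible["group"] == "treatment", "change"]
control = eligible.loc[eligible["group"] == "control", "change"]

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Estimated impact: {treated.mean() - control.mean():.1f}")
print(f"p-value: {p_value:.4f}")
```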

Which method is applicable to a specific program largely depends on the design and operational context of the intervention in terms of timing (has the program already started?), coverage (universal or not), and the selection of beneficiaries (random assignment, eligibility ranking, or other).

9.7.2 Noteworthy Results: Interviewing Youth in the YouthSave Project

The Center for Social Development is conducting an impact assessment of the YouthSave project3 using a cluster randomized control trial design involving 50 treatment and 50 control group schools across eight of ten regions in Ghana.
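In a cluster design like this one, randomization happens at the school level rather than the individual level. A minimal sketch of that assignment step is shown below; the school identifiers are placeholders, not the actual YouthSave sample.

```python
# Illustrative school-level (cluster) random assignment.
# The 50/50 split mirrors the design described above, but the school
# identifiers are hypothetical placeholders.
import random

random.seed(7)  # fixed seed so the assignment can be reproduced
schools = [f"JHS_{i:03d}" for i in range(1, 101)]  # 100 hypothetical schools

random.shuffle(schools)
treatment_schools = sorted(schools[:50])  # schools that receive the program
control_schools = sorted(schools[50:])    # schools that serve as comparison

print("Treatment schools:", treatment_schools[:5], "...")
print("Control schools:  ", control_schools[:5], "...")
```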

Results from cognitive interviews (N=20) and pretests (N=51) of the YouthSave Questionnaire with randomly selected youth ages 12 to 14 from four Junior High Schools (JHS) in Mampong and Koforidua, together with feedback from project partners, including Save the Children, revealed that:

  • Youth found certain questions difficult to answer, such as estimating the distance from their home to the nearest financial institution.
  • Certain terms like “saving”, “a class about money”, “financial institution” and “basic needs” had to be defined and explained by interviewers. We could not assume that youth knew what we meant.
  • Some questions were phrased in a way that assumed youth save money.
  • A distinction needed to be made between having a plan for spending money and actually following this plan.
  • Youth had difficulty remembering what they had done with money they had in the last 30 days.
  • It was important to ask youth not just whether they had received financial education and how many hours of it, but what they actually learned.
  • Responses to some questions were heavily skewed, so those questions were discarded; for example, almost no youth said they owed money.
  • Ten-point response scales were problematic; most youth gave responses at either extreme.
  • All choices on five-point response scales needed anchors, not just the extremes and middle choice. Youth wanted to know what each point represented beyond just a number.

The Population Council provides another example of an organization that is utilizing Randomized Control Trials to understand a program’s impact.

9.7.3 Noteworthy Results: The Population Council Uses Randomized Control Trials and Mixed Methods to Understand HIV/AIDS and Financial Outcomes

In semi-rural KwaZulu-Natal, South Africa, the Population Council implements a program called Siyakha Nentsha, or “Building together with young people,” to combat the poverty, unemployment, early pregnancy, and high rates of HIV/AIDS that affect young people. The program’s goal is to build assets with young people through financial skills, social grants, and future planning. It works with 10th and 11th grade students ranging in age from 16 to 25.

The Population Council used a randomized control trial to determine which program mix would have the most impact. Participants were randomized into three groups: the first group received HIV education and social support; the second group received HIV education, social support, and financial capabilities education; and the third group received a delayed intervention.
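As a rough illustration of how such a three-arm assignment can be carried out, the sketch below splits a hypothetical participant list into the three groups described above; the identifiers and group sizes are invented for the example.

```python
# Illustrative three-arm random assignment with a delayed-intervention group.
# Participant identifiers are hypothetical.
import random

random.seed(11)
participants = [f"student_{i:04d}" for i in range(1, 601)]
random.shuffle(participants)

arm_size = len(participants) // 3
arms = {
    "hiv_education_social_support": participants[:arm_size],
    "hiv_education_social_support_financial": participants[arm_size:2 * arm_size],
    "delayed_intervention": participants[2 * arm_size:],
}

for name, members in arms.items():
    print(f"{name}: {len(members)} participants")
```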

Evaluation methods included attendance rosters and a longitudinal study with focus group discussions involving participants, parents, and mentors, as well as GPS coordinates. Interim data on location, cell phone ownership, status diaries, video, and school quality assessments were also collected.

Program results included the following: delayed sexual debut, secondary abstinence, fewer partners, greater confidence in using condoms, improved budgeting and planning skills, pursuit of income-generating activities, having savings, increased social capital, higher self-esteem, and obtaining a birth certificate.

For more information, see the following resources:

Hallman, K., and E. Roca. 2011. “Reducing the social exclusion of girls,” www.popcouncil.org/pdfs/TABriefs/PGY_Brief27_SocialExclusion.pdf.

Hallman, K. 2005. “Gendered socioeconomic conditions and HIV risk behaviours among young people in South Africa,” African Journal of AIDS Research 4(1): 37–50. Abstract: www.popcouncil.org/projects/abstracts/AJAR_4_1.html.

 

  • 2. For additional information, see: Hempel, Kevin, and Nathan Fiala. Measuring Success of Youth Livelihood Interventions: A Practical Guide to Monitoring and Evaluation. Washington, DC: Global Partnership for Youth Employment, 2011. This report was developed under the World Bank-supported Global Partnership for Youth Employment, under which the International Youth Foundation serves as Secretariat. The guide provides highly practical advice to practitioners about effective approaches for designing and developing impact evaluations for youth employability interventions. www.gpye.org/measuring-success-youth-livelihood-interventions
  • 3. Supported by The MasterCard Foundation, YouthSave investigates the potential of savings accounts as a tool for youth development and financial inclusion in developing countries, by co-creating tailored, sustainable savings products with local financial institutions and assessing their performance and development outcomes with local researchers. The project is an initiative of the YouthSave Consortium, led by Save the Children in partnership with the Center for Social Development at Washington University in St. Louis, the New America Foundation, and the Consultative Group to Assist the Poor (CGAP).