5.2.1 Differentiate between process and impact evaluations

Process and impact evaluation work toward similar, though not identical, goals. Process evaluation is more likely to employ qualitative techniques to answer questions about several issues, including the fidelity of a program: whether it is being implemented as intended. It may focus on how to improve performance by identifying which components of the program work and which do not, whether because of design or implementation challenges. Process evaluation also contributes to major evaluation questions such as relevance, efficiency, effectiveness, and sustainability. When done well, process evaluation can be used not only to “detect and correct” implementation problems but also to uncover bigger, underlying problems. In tandem with impact evaluation, process evaluation notes contextual information that may have affected program impact, looks more deeply at how resources were used, and can bring evaluation closer to participants’ experience.1

Impact evaluations measure the outcomes of a program or intervention. They seek to establish a causal link, identifying whether the program led to changes in young people (or families, institutions, policies, etc.). Impact evaluation relies primarily on quantitative research and can employ varying degrees of scientific rigor.

Practical Tips: Genesis Analytics on Selecting Quantitative Impact Evaluation Methods

The quantitative methods below can be used for impact evaluation. These tips explain each type of impact evaluation and when it might be appropriate.

  • A cross-sectional study uses a different sample (from the same population) for pre- and post-intervention measurement, but large samples and rigorous sampling are needed to ensure that the pre- and post-training samples are random and representative.
  • A longitudinal study is a powerful way to detect direct impact on individuals; however, staying in touch with participants over time can be difficult.
  • Non-randomized, quasi-experimental studies compare similar treatment and comparison groups through pre- and post-surveys, but various external factors may interfere with the variable being measured.
  • Randomized control trials (RCTs) can establish causality between two very similar groups—a treatment and a control group—but are not always practical to implement. RCTs assume that a program was of high quality, relevant, and in demand by the recipients. For more information on RCTs, see Randomized Control Trials (RCTs).
1. Excerpted from the presentation of Ms. Alyna Wyatt, Senior Associate in the Business and Development Practice at Genesis Analytics and Team Leader, Financial Education Fund, at the 2012 GYEOC.
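To make the RCT logic concrete, the sketch below simulates random assignment and estimates a program's average effect as the difference in mean outcomes between treatment and control groups. All numbers here are invented for illustration (a hypothetical skill score with an assumed true effect of 5 points); this is not drawn from any program described above.

```python
import random

# Hypothetical illustration of an RCT effect estimate.
# Assumption: outcomes are a baseline skill score (mean 50, sd 10) plus,
# for treated participants, a true program effect of 5 points.
random.seed(0)  # fixed seed so the simulation is reproducible

def simulate_outcome(treated):
    baseline = random.gauss(50, 10)
    return baseline + (5 if treated else 0)

# Random assignment: 1,000 participants in each arm.
treatment = [simulate_outcome(True) for _ in range(1000)]
control = [simulate_outcome(False) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

# Because assignment was random, the difference in group means is an
# unbiased estimate of the program's average treatment effect.
estimated_effect = mean(treatment) - mean(control)
print(f"Estimated average treatment effect: {estimated_effect:.2f}")
```

With large, randomly assigned groups the estimate lands close to the assumed 5-point effect; the same comparison on self-selected groups would be biased, which is why the tips above stress that quasi-experimental designs need careful matching of treatment and comparison groups.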