
Types of evaluation


Many types of evaluation exist; consequently, evaluation methods need to be customised according to what is being evaluated and the purpose of the evaluation.[1,2] It is important to understand the different types of evaluation that can be conducted over a program’s life-cycle and when each should be used. A comprehensive program evaluation is likely to include three different types of evaluation.

These are:

• Process evaluation (measures program strategies)

• Impact evaluation (measures program objectives)

• Outcome evaluation (measures program goals/aims)

Think about evaluation in the early planning stages.

Although some evaluation will not be conducted until after a program concludes, evaluation strategies should be considered early in the planning process to ensure that opportunities to collect data are not missed.

Each type of evaluation is described in more detail below.


Process evaluation

Process evaluation is used to measure program development, delivery and progress. It assesses factors such as the program’s quality, availability, appropriateness and target-group reach, and how well the strategies were received by the intended audience. Questions could include:

• Did we reach the intended participants? 
• Was the program liked/accepted? 
• Was the program cost-effective?
• What worked well? 
• What would we do differently next time? 
• Were participants satisfied?

Process evaluation describes what happens once the program has started, and may involve recording and counting inputs and outputs such as the number of sessions held, attendance and response rates, the number of resources developed and distributed, program costs, and the time taken to develop resources. Process evaluation is important for identifying the strengths and weaknesses of program delivery and areas for improvement.


Impact evaluation

Impact evaluation is used to measure the immediate effects of the program and is aligned with the program’s objectives. It measures how well the program’s objectives (and sub-objectives) have been achieved.[3,4]

Impact evaluation will help answer questions such as:

• How well has the program achieved its objectives (and sub-objectives)?

• How well have the desired short-term changes been achieved?

For example, an objective may be to provide a safe space and learning environment for young people, free from fear of judgment, misunderstanding, harassment or abuse. Impact evaluation would assess young people’s attitudes towards the learning environment and how they perceived it. It may also assess changes in participants’ self-esteem, confidence and social connectedness. Impact evaluation measures program effectiveness from immediately after the program’s completion up to six months afterwards.


Outcome evaluation

Outcome evaluation is concerned with the long-term effects of the program and is generally used to measure the program goal; that is, it measures how well the program goal has been achieved.[1,3]

Outcome evaluation will help answer questions such as:

• Has the overall program goal been achieved? 

• What factors outside the program, if any, have contributed to or hindered the desired change?

• What unintended changes, if any, have occurred as a result of the program?

Your outcome evaluation may measure changes in mental and physical wellbeing, in the prevalence of a sexually transmissible infection (STI) or blood-borne virus (BBV), or in sexual or drug-related behaviours.

Outcome evaluation measures changes at least six months after the implementation of the program (the longer term). Although outcome evaluation measures the main goal of the program, it can also be used to assess program objectives over time. Note that outcome evaluation is not always possible or appropriate for every program.


Summative evaluation

At the completion of the program, it may also be valuable to conduct a summative evaluation. This considers the entire program cycle and assists in decisions such as:

• Do you continue the program? 

• If so, do you continue it in its current form, or modify it?

• Is it possible to implement the program in other settings? 

• How sustainable is the program?  

• What elements helped or hindered the program?

• What recommendations have emerged from the program?


Resources

SHBBV Program Planning Toolkit


References
  1. Hawe, P., Degeling, D. and Hall, J. 1990. Evaluating health promotion: A health worker’s guide. Sydney: MacLennan & Petty.
  2. Taket, A. 2010. In Liamputtong, P. (ed.), Research methods in health: Foundations for evidence-based practice. South Melbourne: Oxford University Press.
  3. Hanson, W., Creswell, J., Plano Clark, V., Petska, K. and Creswell, D. 2005. Mixed methods research designs in counseling psychology. Journal of Counseling Psychology 52(2): 224–235.
  4. Leech, N. and Onwuegbuzie, A. 2009. A typology of mixed methods research designs. Quality & Quantity 43: 265–275.
  5. Greene, J. and Caracelli, V. 2003. Making paradigmatic sense of mixed methods practice. In Tashakkori, A. and Teddlie, C. (eds.), Handbook of mixed methods in social and behavioral research (pp. 91–110). Thousand Oaks, CA: Sage.
  6. Bowling, A. 1997. Research methods in health: Investigating health and health services. Buckingham: Open University Press.
  7. Nastasi, B. and Schensul, S. 2005. Contributions of qualitative research to the validity of intervention research. Journal of School Psychology 43(3): 177–195.
  8. Barbour, R. 2001. Checklists for improving rigour in qualitative research: A case of the tail wagging the dog? British Medical Journal 322(7294): 1115–1117.
  9. Golafshani, N. 2003. Understanding reliability and validity in qualitative research. The Qualitative Report 8(4): 597–607.
  10. Ovretveit, J. 1998. Evaluating health interventions. Berkshire: Open University Press.
  11. Nutbeam, D. and Bauman, A. 2006. Evaluation in a nutshell. North Ryde: McGraw-Hill.