
Ten things to know about program evaluation and the Calgary Homeless Foundation

Posted May 2nd, 2017

By Nick Falvo, Ph.D.

Nick Falvo is the Director of Research and Data at the Calgary Homeless Foundation.


The Canadian Evaluation Society (CES) recently invited me to speak on a panel discussion. I was asked to speak to how my organization, the Calgary Homeless Foundation (CHF), views program evaluation.

The CES defines evaluation as: “the systematic assessment of the design, implementation or results of an initiative for the purposes of learning or decision-making.”

With this in mind, here are 10 things to know about program evaluation in general and specifically its role at CHF:

  1. Formal program evaluation typically has a logic model.[1] To build the logic model, evaluators interview staff with direct knowledge of the program being evaluated. They also review key documents, including funding agreements and any reports that led to the genesis of the program. The process of producing the logic model should be iterative, with multiple drafts and a feedback process involving program staff. Eventually, both the evaluators and program staff should agree on the logic model’s content.
  1. Some program evaluators learn on the job, while others have formal training. There are at least three ways to obtain program evaluation credentials in Canada: 1) there are several diploma and certificate programs offered by members of the Consortium of Universities for Evaluation Education; 2) the CES offers non-credit courses through its Essential Skills Series; and 3) evaluators can receive credentials via the CES Credentialed Evaluator program.
  1. How ‘arm’s length’ program evaluators should be is often the subject of debate. In Canada’s federal government, most evaluation is done internally. Having said that, when the federal government does evaluation internally, it often outsources many of the specific tasks (e.g., survey administration, the facilitation of focus groups, etc.).
  1. There are advantages to having program evaluation done externally. Those advantages include: 1) capacity (in-house staff may lack comparable levels of expertise, including expertise pertaining to data analysis); 2) perspective (an external evaluator can guide the conversation in ways that insiders, who may have designed the program, might not consider); and 3) independence (i.e., the avoidance of bias).
  1. There are advantages to having program evaluation done internally. An external evaluator won’t understand the program when they begin the evaluation and will take time to learn it; by contrast, when a program evaluation is done internally, evaluators know the basics of the program up front. Put differently, when you pay an external person to come in and evaluate your program, your staff have to explain the program’s basics to that person, and while this is happening, you’re paying both your staff and the external evaluator. Another advantage of doing program evaluation in-house is that it builds capacity: staff get to know their work better.
  1. Like many non-profit organizations, CHF typically chooses not to use external program evaluators. One major reason for this is cost.
  1. CHF uses a logic model for the housing programs that it funds. This logic model was built ‘in house’ via a process led by a CHF staff person (namely, my colleague Janice Chan). Our logic model begins by outlining the major contributing factors to homelessness. It then lists the major inputs required to successfully house persons who are experiencing homelessness. It then discusses outputs (e.g., number of clients housed within a given time frame), outcomes (i.e., housing stability) and impact (namely, more client independence). This logic model is presented below.


[Figure: CHF logic model flowchart]
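The logic model described above (contributing factors, then inputs, outputs, outcomes and impact) can be sketched as a simple data structure. The field names follow the blog post, but the class and example entries are illustrative assumptions, not CHF’s actual model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """A minimal sketch of a logic model's components (illustrative only)."""
    contributing_factors: List[str] = field(default_factory=list)
    inputs: List[str] = field(default_factory=list)    # resources required
    outputs: List[str] = field(default_factory=list)   # e.g., clients housed in a time frame
    outcomes: List[str] = field(default_factory=list)  # e.g., housing stability
    impact: List[str] = field(default_factory=list)    # e.g., more client independence

# Hypothetical example, loosely following the components named in the post:
housing_model = LogicModel(
    contributing_factors=["poverty", "shortage of affordable housing"],
    inputs=["funding", "case managers", "housing units"],
    outputs=["number of clients housed within a given time frame"],
    outcomes=["housing stability"],
    impact=["more client independence"],
)
```

Performance indicators can then be attached to specific outputs and outcomes, which is essentially what ongoing monitoring against a logic model amounts to.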


  1. CHF’s performance indicators are linked to its logic model. CHF’s performance indicators,[2] and its quarterly monitoring of the performance of programs it funds, are linked to the logic model presented above (put differently, there’s ongoing monitoring against the logic model). That said, performance indicators are only one part of CHF’s overall performance measurement. As part of the overall evaluative process, we also ask the following questions about programs we fund: Has the program been providing us with good financial reporting on time? How collaborative have program staff been?
  1. CHF contracts (for the programs we fund) contain explicit information about our performance indicators. Indeed, contracts spell out expected performance by that funded program on those indicators. Funded programs are expected to achieve outcomes that are 10% greater than their cohort’s average the previous year. (Click on this link to see the template version of the program outline section of one of our typical contracts; see pp. 10-11 of the template for information on performance indicators.)
  1. Some programs funded by CHF have their own logic models. One reason for this is that CHF isn’t always that program’s sole funder. Likewise, CHF funds some non-housing programs (e.g., outreach, prevention) for which this logic model isn’t a neat fit.[3] In such cases, we develop arrangements (and funding contracts) that seek to reconcile this tension.
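The contractual performance expectation described above (outcomes 10% greater than the cohort’s average the previous year) reduces to simple arithmetic. A hypothetical sketch, with illustrative names and numbers that are not drawn from an actual CHF contract:

```python
def performance_target(cohort_avg_last_year: float) -> float:
    """Expected outcome level for a funded program: 10% above the
    previous year's cohort average, per CHF's contract expectation."""
    return cohort_avg_last_year * 1.10

def meets_target(program_outcome: float, cohort_avg_last_year: float) -> bool:
    """True if the program's outcome meets or exceeds its target."""
    return program_outcome >= performance_target(cohort_avg_last_year)

# Hypothetical example: if the cohort averaged an outcome score of 80
# last year, a funded program's target this year is 88.
```

The comparison here is against the prior-year cohort average rather than the program’s own prior result, so a program in a strong cohort faces a higher bar.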


In sum, program evaluation is an important tool for demonstrating performance and efficiency. Whether an internal or external approach is adopted, the results can provide important information to guide funding and programming decisions.

A PDF version of the present blog post is available to download here: Ten things to know about program evaluation and the Calgary Homeless Foundation


The author wishes to thank the following individuals for invaluable assistance with this blog post: Carla Babiuk, John Burrett, Janice Chan, John Ecker, Louise Gallagher, Penny Hawkins, Kara Layher, Lindsay Lenny, Kevin McNichol, Natalie Noble, Rob Shepherd, Tim Veitch and Jeannette Waegemakers Schiff. Any errors lie with the author.


[1] Behind the logic model is a theory of change.

[2] CHF’s performance indicators (for programs it funds) will be the subject of a future blog post.

[3] See this previous blog post for an overview of the various program types funded by CHF.
