Project Managers' Guide to Performance Measurement and Evaluation

9. IMPLEMENTING THE PERFORMANCE MEASUREMENT AND EVALUATION APPROACH

9.1 Laying the Groundwork

Notwithstanding the challenges, realistic, thorough and timely planning helps to ensure that the benefits of performance measurement and evaluation are realized. Developing a Logic Model and Performance Indicators are key tasks that help to focus the Family Violence Initiative and related activities. These tools set the boundaries of your accountability and serve as a ‘touchstone’ for mapping your progress. Determining how and when you will collect and report information – as well as the resources that you will need to do this – is the next part of the performance measurement and evaluation strategy.

Laying the Groundwork – First Steps
  • Agree on goals and objectives.
  • Create a Logic Model.
  • Develop Performance Indicators.
  • Develop a performance measurement and evaluation strategy.
  • Plan & coordinate your performance measurement and evaluation activities as you develop your workplans.

Partners in the Family Violence Initiative have a long history of working together. When you prepare your annual workplan, take the opportunity to plan and coordinate performance measurement and evaluation activities with your colleagues in the Initiative. Coordinated planning will increase the likelihood of success – and is particularly critical when partners are involved.

Collaborative Approaches – A Unique Challenge

In many cases, the Department of Justice Canada is one of several contributing partners in a project. In such cases, it is important to ensure that all partners are clear on the expectations of the project. Ensure that all partners agree on the evaluation plan and on who will be responsible for the plan and its implementation.

Some performance measurement and evaluation questions to consider during the FVI workplan development include:

  • What will decision-makers need to know about the Family Violence Initiative?
  • Is evaluation of certain projects and activities appropriate, and if so, when?
  • Given the nature of activities and projects that you will undertake, and how far along they are in the implementation cycle, what kind of performance measurement information will be required?
  • Is there sufficient internal and external capacity – including financial resources – to gather information? If not, how can that capacity be developed?
  • What kinds of approaches or methods are best suited to the context and to the kind of information you will need?
  • What are the most cost-effective ways of gathering information? Are there any possibilities of linking performance measurement or evaluation activities? Think creatively: if you were funding similar types of projects, would a cluster evaluation[2] be appropriate? (A brief illustration follows this list.)
  • How can you and your partners and other key stakeholders get the most benefit out of the exercise?
  • Who will be responsible for the evaluation, and who will be involved in the process?
  • Can information be gathered and analyzed in a timely way for reporting?
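To make the cluster-evaluation idea concrete, here is a minimal sketch that tallies recurring themes across a set of similar funded projects. The project names, themes and counts are invented for illustration only and are not drawn from the Initiative.

```python
from collections import Counter

# Hypothetical findings from the project-level evaluations of similar projects.
projects = [
    {"name": "Project A", "themes": ["increased awareness", "staff training"]},
    {"name": "Project B", "themes": ["increased awareness", "community outreach"]},
    {"name": "Project C", "themes": ["staff training", "community outreach"]},
]

# A cluster evaluation looks across projects for common threads and impacts.
theme_counts = Counter(theme for p in projects for theme in p["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: reported by {count} of {len(projects)} projects")
```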

Considerations – Policy Activities

Policy activities pose challenges for performance measurement and evaluation. Policies may be tied to multiple, high-level and distant outcomes. The policy development process itself is subjective and iterative in nature. Time frames are uncertain and ever changing. Problems with measurement, causation and attribution also exist. Yet policies influence and even drive change processes. For example, changes to the Criminal Code related to stalking and criminal harassment and victim protection legislation have had a significant impact on improving the criminal justice response to family violence issues. Knowing what’s working well and what requires improvement is critical.

Sound Familiar?

“The causal gap creates uncertainty on two levels. First, no matter how good your planning and analysis is, you can rarely be certain that today’s policy inputs and outputs will in fact lead to the desired social and economic outcomes. Most of the high-level outcomes targeted by government policies are affected by many factors outside of the government’s control….even if the desired outcomes occur, we can rarely be certain that the government’s intervention (as opposed to other unrelated factors) was the primary cause.”[3]

Tools such as the logic model and performance indicators help to develop a picture of how policies are expected to influence change. Monitoring the policy development process through policy reviews, scans and opinion research is another potential strategy. Policy research and evaluation – along with program development and testing – are ways that, cumulatively, help to measure the effectiveness of policies.

Considerations – Research

Collecting performance measurement and evaluation information on the Family Violence Initiative research function is also challenging. Some ideas to consider include:

  • peer reviews,
  • client satisfaction, and
  • user feedback surveys.

Considerations – Projects: Justice Partnership and Innovation Fund – Programs and Public Legal Education

Project-level evaluations are one of the key building blocks of the performance measurement and evaluation strategy. The Justice Partnership and Innovation Fund requires proposers to prepare an evaluation plan as part of proposal development. Typically, proposals should include an overview of the project; its target audiences; expected outcomes; indicators of success/impact; data sources and methods; who will conduct the evaluation; how stakeholders and partners will be involved; the time frame; and how the evaluation information will be used. Logic Models – or project “roadmaps” – are increasingly important tools.

What is a Logic Model?

A logic model is a picture of how a program or project works – including the theory and assumptions behind it. Where feasible, proposers should be encouraged to learn about and apply this tool[4]. Logic models can take many forms. They may focus on outcomes; activities; concepts or theories; or a combination of the above. According to the Kellogg Foundation, a logic model is a learning tool that helps you to learn about your program or project and make continuous improvements: “A logic model’s value is in the process of creating, validating, and then modifying the model.”[5]
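To make the concept concrete, here is a minimal, purely hypothetical sketch of an outcomes-focused logic model expressed as a simple data structure. The field names and example entries are assumptions for illustration only, not an FVI or JPIF template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """An outcomes-focused form: each level should link logically to the next."""
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # what the project does
    outputs: List[str] = field(default_factory=list)     # direct products
    outcomes: List[str] = field(default_factory=list)    # changes expected

# Hypothetical example for a public legal education project.
model = LogicModel(
    inputs=["project funding", "staff time", "community partners"],
    activities=["develop plain-language materials", "deliver workshops"],
    outputs=["brochures distributed", "workshops held in four communities"],
    outcomes=["greater awareness of legal protections against family violence"],
)
print(model.outcomes)
```

Revisiting and revising such a structure as the project evolves is the continuous-improvement process the Kellogg Foundation describes.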

Some questions to consider when you review proposed evaluation plans include:

  • Project description: Is the project description adequate? Is there a logical and realistic link between the project objectives, activities, outputs and outcomes?
  • Indicators of success: Determine up front what success would look like. Are the indicators of success appropriately matched to the project outputs and outcomes?
  • Are the data sources and data collection methods appropriate to the situation? Are different population perspectives being considered (culture, language, gender, age, literacy level, etc.)? Are the data collection instruments valid and reliable?
  • Have any ethical issues or considerations been appropriately addressed?
  • Are the confidentiality provisions appropriate?
  • Who is involved in the evaluation (i.e. the party conducting the evaluation, partners, participants) and how will they be involved?
  • Is the time frame realistic?
  • How important is the information that will be collected – and to whom?
  • How will you, your partners, the project and other stakeholders use the information?
  • Are the resources appropriate and sufficient to get the job done?
  • How realistic is the evaluation plan?
  • What improvements could be made?

Plan – and allow sufficient time for – evaluations! For example, if a training project takes one year to implement, consider doing a follow-up survey six months after completion to find out how the training has affected participants.
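As a small illustration of building that lead time into a workplan, the sketch below computes a follow-up date a fixed number of months after a hypothetical completion date. The dates and the six-month interval are examples only.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` months after `d`, clamped to the month's end."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(d.day, last_day))

completion = date(2005, 3, 31)          # hypothetical project end date
follow_up = add_months(completion, 6)   # survey participants six months later
print(f"Schedule the follow-up survey for {follow_up}")  # 2005-09-30
```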

Quantitative versus Qualitative Data … Not an Either/Or Situation

Quantitative information consists of things that can be counted and measured, such as the number of participants in a project, the number of brochures distributed, or the number of locations at which a workshop was given. Qualitative information provides in-depth, descriptive information that can give context or meaning to the experience, such as people’s perceptions or feelings about what happened and how the project affected them. Ideally, you will want to gather both quantitative and qualitative data on family violence issues.
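Here is a hypothetical sketch of how the two kinds of data can sit side by side in a single project record; every field name and figure below is invented for illustration.

```python
# Hypothetical project record mixing countable measures with descriptive feedback.
project_record = {
    "quantitative": {
        "participants": 85,
        "brochures_distributed": 1200,
        "workshop_locations": 4,
    },
    "qualitative": [
        "Participants said the workshop helped them recognize warning signs.",
        "A partner agency felt rural communities need translated materials.",
    ],
}

quant = project_record["quantitative"]
print(f"Reach: {quant['participants']} participants at "
      f"{quant['workshop_locations']} locations")
for comment in project_record["qualitative"]:  # the context behind the numbers
    print(f"  - {comment}")
```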

9.2 Implementation & Monitoring

Planning is only the first step. It is important that evaluation plans be activated as soon as a project commences. As a project moves from the proposal to the implementation stage, it is likely that the project objectives and activities – and potentially its outputs and outcomes – will be fine-tuned. Keeping track of this evolution will help the project stay on course and ensure that the evaluation has the correct focus and asks the right questions in the right way. When monitoring the implementation of the evaluation plan, consider the following questions:

  • Are the evaluation questions still relevant?
  • Are different participant perspectives (culture, language, gender, age, literacy level, etc.) being taken into consideration?
  • Are any problems with data collection emerging and if so, are solutions being identified and implemented?
  • What is being learned along the way? Is feedback being used to improve the project on an ongoing basis?

Encourage projects to provide evaluation updates, along with any modifications to their evaluation plans, when they submit their formal progress and/or interim reports. In addition, ongoing communication, informal “check-ins” and on-site visits are ways to determine how a project is doing.


  • [2] Cluster evaluations look at how well a collection of similar projects meet a particular objective of change. Cluster evaluations are a potential way for the Family Violence Initiative to look across projects to identify common threads, themes and impacts and to identify lessons learned. They are not a substitute for project-level evaluations. W.K. Kellogg Foundation (1998) Evaluation Handbook. Battle Creek, MI, W.K. Kellogg Foundation. http://www.wkkf.org.

  • [3] Schacter, Mark (2002) “What will Be, Will Be”: The Challenge of Applying Results-based Thinking to Policy. Ottawa: Institute on Governance, p. 15. www.iog.ca.

  • [4] Logic Models are not currently a requirement of FVI projects funded under the JPIF, however the practice should be encouraged. This may require the provision of technical assistance to funding recipients. Examples are included in Appendix 3.

  • [5] W.K. Kellogg Foundation (1998) Evaluation Handbook. Battle Creek, MI, W.K. Kellogg Foundation. http://www.wkkf.org.
