Multi-Year Evaluation Plan
2008-09 to 2010-11

Appendix A:
Department of Justice Program Evaluation Policy


The Department of Justice Program Evaluation Policy is built upon the principles of the Government Evaluation Policy (April 1, 2001). This policy, in keeping with the new management framework for the Government of Canada, Results for Canadians, reflects the view that public service managers are expected to define anticipated results, continually focus attention on results achievement, measure performance regularly and objectively, and learn and adjust to improve efficiency and effectiveness.

What is Program Evaluation and how is it used?

Program evaluation[1] employs a set of applied research instruments that provides a systematic, objective assessment of elements of a policy’s or program’s[2] performance. Program evaluation contributes to strategic/corporate decision-making, innovation and accountability practices at all levels. Its purpose is to provide managers and other stakeholders with timely, relevant, credible and objective information on the continued relevance of government and departmental policies and programs, the impacts they are producing and opportunities for using alternative and more cost-effective policy and programming instruments.

Program evaluation acts as a feedback loop within the policy development process. It serves as a test of the ultimate success of policies by determining whether they accomplished what they set out to do and, if not, why not. Program evaluation provides support to policy makers and line managers on matters such as the identification of expected policy and program outcomes, the development of performance frameworks, the monitoring of program and policy implementation, accountability reporting and the establishment of client-oriented service standards.

Program evaluation also provides information mid-way through a program (while the program activities are forming or happening) by examining various processes, including the delivery of the program, the quality of its implementation, the organizational context, and program inputs.

Program evaluation assists in promoting organizational learning within government, for example by communicating benchmarks for the use and management of policy instruments and program delivery mechanisms.

Finally, program evaluation, as one element of the departmental comptrollership function, is conducted in co-operation and co-ordination with other review processes, specifically audit and management-led reviews.

The Glossary of Terms at the conclusion of this document provides more detailed information on the components and concepts involved in the evaluation process.


The objective of the Department of Justice Program Evaluation Policy is to ensure that the Department has credible, timely, strategically focussed, objective and evidence-based information on the performance of its policies and programs.


It is the Department of Justice policy that key departmental policies and programs are:

Key departmental policies and programs are those that involve large expenditures or a high level of risk, those for which the government or the Department requires strategic information, or those in which the central agencies, Parliament or the public has expressed a particular interest.


The Deputy Minister

The Deputy Minister[3] is responsible for:

Audit and Evaluation Committee

The Audit and Evaluation Committee meets periodically to assist the Deputy Minister in discharging his or her responsibilities with respect to audit and program evaluation. The Chairperson, as a member of the Executive Council, periodically informs the Executive Council of the activities of the Committee.

In its role with respect to evaluation activities, the Audit and Evaluation Committee is responsible for:

Direct Reports and Policy and Program Managers

Direct Reports and Policy and Program Managers are responsible for:

Evaluation Division Director and Staff

The Evaluation Division is responsible for:


Once completed and approved, all reports are posted on the Department’s Internet and intranet sites in both official languages within 60 working days of the Audit and Evaluation Committee’s approval. The reports are also accessible to the public in accordance with the Treasury Board Review Policy, the Access to Information Act and the Privacy Act.


When designing the evaluation approach and especially in the preparation of evaluation questions for the evaluation of any departmental program or policy, special consideration will be given to the relevance and inclusion of questions that examine the differential impacts of programs and policies on employment equity groups, linguistic groups, gender and other relevant diversity groups.


Policies are found on the Treasury Board Internet site:


Enquiries about this policy should be directed to:

Director, Evaluation Division
Office of Strategic Planning and Performance Management


Activity
An operation or work process internal to an organization, intended to produce specific outputs (e.g. products or services). Activities are the primary link in the chain through which outcomes are achieved.
Goal
A broad, high-level statement of a desired outcome, in general terms, to be achieved over an unspecified period of time. A goal should reflect an organization’s “Mission”.
Logic Model
A graphic representation of the program “theory” or “action”. It consists of a logical chain of if-then relationships (if x occurs, then y will occur) that shows the linkage from the activities through the sequence of outcomes.
Mission
A statement identifying an organization’s business, purpose and reason for existence – critical areas within which goals, objectives and standards should be set.
Objective
A statement of specific results to be achieved over a specified period of time. Objectives are generally lower-level and shorter-term than goals.
Outcome/Result
The effect of the outputs of a program on client or target groups. In other words, outcomes/results are the changes a program or policy hopes to achieve. Outcomes/results focus on what the program or policy makes happen rather than what it does (i.e. the intended results of the project, not the process of achieving them). They may be described as immediate, intermediate or final, direct or indirect, intended or unintended.
Output
A unit of service provided, product provided, or people served by a program or policy, or a count of goods and services produced.
Performance Measurement
Consists of tracking program performance against goals over time to provide an assessment of a program’s performance, including measures of productivity, effectiveness, quality, and timeliness. Performance Measurement can help provide objective perspectives for defending or expanding a program, rather than allowing it to suffer from relatively arbitrary or habitual decisions. Ongoing monitoring systems, which emphasize indicators and analysis linked to improvement, can help track and improve results over time and can also prove to be a valuable source of information in the formal evaluation process.
Program evaluation
Employs a set of applied research instruments to provide a systematic, objective assessment of elements of a program’s performance. This information provides managers and other stakeholders with timely, relevant, credible and objective information on the continued relevance of government and departmental policies and programs, the impacts they are producing and opportunities for using alternative and more cost-effective policy and programming instruments. Depending on the timing of the evaluation, it can consist of:
  • a formative, implementation or mid-term evaluation which provides information mid-way through a program by examining the delivery of the program, the quality of its implementation and the assessment of the organizational context, personnel procedures and inputs; or
  • a summative or impact evaluation which determines the overall impact a program has had by examining the effects or outcomes of programs.

Summative Evaluations focus on three primary concerns:

  • issues of relevance, or more aptly whether or not program or policy instruments continue to address strategic priorities and/or actual needs, i.e. the extent to which the objectives and mandate of the program or policy are still relevant and the extent to which the activities and outputs of a program or policy are consistent with the mandate and plausibly linked to the attainment of stated objectives and intended impacts;
  • issues of success, including the degree to which program or policy instruments are meeting stated objectives (i.e. impacts), and without unwarranted, undesirable impacts; and
  • issues of cost-effectiveness such as whether the most efficient means are used to achieve objectives relative to alternative approaches including whether another level of government could assume responsibility for the policy or program instrument.
Program Evaluation Process
Consists of four stages: planning and design; data gathering and analysis; reporting; and follow-up.
  • Stage 1: Planning Stage

    The planning stage consists of developing plans for the approach to the evaluation of existing, new or substantially altered programs or policies. It involves intensive consultations with program managers, clients and other interested stakeholders. This work should be done at the beginning of a new program or policy, or as early as possible in its development, to ensure that objectives are stated in a manner that allows for the ready identification of performance indicators and the systematic collection of the performance information required for organizational learning and management decision-making. As part of the planning stage, the evaluation team undertakes an analysis of available data to determine the degree to which a range of issues can be addressed using existing data, as well as the need to collect new data elements. The planning stage culminates in the production of an RMAF document (or an evaluation framework, assessment framework or evaluation workplan).

  • Stage 2: Data Gathering and Analysis

    The data gathering and analysis stage involves the actual fieldwork for the completion of the evaluation project as well as the analysis of the findings from the various sources, including the monitoring of ongoing performance measures. For more complex projects, the data gathering and analysis stage may extend over more than one fiscal year.

  • Stage 3: Reporting Stage

    The reporting stage consists of reporting evaluation findings to the Deputy Minister, departmental managers, central agencies, Parliament and ultimately the public.

  • Stage 4: Follow-up

    Follow-up activities involve the formulation of recommendations for changes where warranted in terms of any of the areas listed above. The Program area being evaluated is required to prepare a management response. The Evaluation Division is available to assist program managers to formulate action plans as part of their management response to ameliorate any outstanding issues based on evaluation findings. This follow-up evaluation service can also include assistance in monitoring the implementation of the action plan.

Results-based Management and Accountability Framework (RMAF)
A blueprint for managers to help them focus on measuring and reporting on outcomes throughout the lifecycle of a policy or program. RMAFs are a requirement of the Treasury Board Policy on Transfer Payments and are commonly required by Treasury Board in the approval of new or renewed programs. RMAFs are also called for under the Treasury Board Evaluation Policy whenever they make sense for the purpose of measuring and reporting on results. RMAFs generally include:
  • a clear statement of the roles and responsibilities of the main partners involved in delivering the policy or program;
  • a clear articulation of the resources to be applied and the objectives, activities, outputs and key results/outcomes to be achieved, along with their linkages (see Glossary of Terms for a description of each of these terms);
  • an outline of the performance measurement strategy, including costs and performance information (key indicators) that will be tracked;
  • the schedule of major evaluation work expected to be done; and
  • an outline of the reporting provisions as appropriate for funding recipients and those for the Department.

RMAFs are a useful management tool for significant policies or programs, regardless of whether they are produced in compliance with an "official" government requirement. When an RMAF is not specifically required by Treasury Board but a manager nonetheless wishes to have a framework to assist in the evaluation of a program or policy, the resulting document is sometimes called an evaluation framework, assessment framework or evaluation workplan. Essentially, these terms are equivalent to an RMAF but allow more flexibility in their components (because they are not required by Treasury Board).