Crimes Against Humanity and War Crimes Program Evaluation

3. Methodology

The evaluation used multiple lines of evidence to support robust findings. The methodology included six main lines of evidence: a review of performance information, documents, files, and databases; key informant interviews; a survey of departmental staff; case studies; country studies; and cost comparisons for the remedies under the Program.

The evaluation matrix (which links the evaluation questions, indicators, and lines of evidence) and the data collection instruments were developed with input from the CAHWC Evaluation Working Group. The evaluation matrix is included in Appendix B, and the data collection instruments are in Appendix C.

Each of the evaluation methods is described more fully below. This section also includes a brief discussion of methodological challenges.

3.1. Review of Performance Information, Documents, Files, and Databases

The review of performance information, documents, files, and databases was conducted both to inform the development of data collection instruments and to address the majority of the evaluation questions. The review included documents from internal and publicly available sources, including the following:

  • the 2008 CAHWC Evaluation
  • the Program Performance Measurement Strategy
  • updated process flow charts for remedies
  • statistics for the Program from annual reports (covering 1998 to 2011) and updated statistics
  • PCOC meeting minutes, reports, and policy decisions
  • Steering Committee minutes
  • relevant legislation and international conventions and protocols, and international reports
  • field guides and manuals
  • training materials
  • Departmental Performance Reports and Reports on Plans and Priorities
  • budget speeches and Speeches from the Throne

As part of the review of performance information, the evaluation included a trends analysis based on the Program’s annual reports and updated information provided by Program partners. The analysis involved establishing a baseline using data for 2003–04 to 2008–09 and then determining trends up to and including 2014–15, where data were available.
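As a rough sketch of this baseline-and-trend approach (the fiscal years mirror those above, but the counts are invented for illustration and are not Program data):

```python
# Illustrative baseline-and-trend calculation: average the baseline
# fiscal years (2003-04 to 2008-09), then express later years as a
# change relative to that baseline.
annual_counts = {  # hypothetical counts for one remedy per fiscal year
    "2003-04": 40, "2004-05": 44, "2005-06": 38,
    "2006-07": 45, "2007-08": 41, "2008-09": 42,
    "2013-14": 55, "2014-15": 58,
}

baseline_years = [y for y in annual_counts if y <= "2008-09"]
baseline = sum(annual_counts[y] for y in baseline_years) / len(baseline_years)

for year in ("2013-14", "2014-15"):
    change = (annual_counts[year] - baseline) / baseline
    print(f"{year}: {annual_counts[year]} ({change:+.0%} vs. baseline)")
```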

3.2. Key Informant Interviews

The key informant interviews conducted for this evaluation addressed the majority of evaluation questions and were a key line of evidence on the need for the CAHWC Program and the effectiveness of its activities. A list of potential key informants was prepared, and interview guides tailored to each key informant group were developed in consultation with the Evaluation Working Group. Forty-four interviews were conducted with a total of 49 key informants. The categories of key informants are shown in Table 2.

The following scale was applied when reporting on the interview results:

  • A few
  • Some
  • Many
  • Most
  • Almost all
Table 2: Key informant interviews

| Position | Number of key informants |
| --- | --- |
| Program staff | |
| Justice | 4 |
| RCMP | 7 |
| CBSA | 5 |
| IRCC | 4 |
| Total Program staff | 20 |
| External Interviewees | |
| Other Government of Canada stakeholders | 4 |
| International peer community (representatives of agencies in other countries working in immigration, security, and humanitarian law) | 13 |
| International NGOs and academics | 12 |
| Total External Interviewees | 29 |
| Total | 49 |

3.3. Survey of Departmental Staff

To assess the opinions of departmental staff involved in the Program, the evaluation included an anonymous and confidential bilingual web-based survey. The survey was online for over four weeks, from June 8 to July 10, 2015.Footnote 12 During this period, two reminders were sent to potential participants to increase the response rate. Of a potential 122 respondents, 68 Program staff responded to the survey, for a response rate of 56%. After the survey closed, responses to open-ended questions were coded and the survey data were analyzed using SPSS, a statistical software package.
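While the analysis itself was done in SPSS, the basic tabulation involved can be illustrated with a short sketch (the file and column names below are hypothetical, not the Program's actual survey export):

```python
import pandas as pd

# Hypothetical export of the closed survey: one row per respondent.
responses = pd.read_csv("survey_responses.csv")

# Response rate: 68 completed surveys out of 122 potential respondents.
potential_respondents = 122
print(f"Response rate: {len(responses) / potential_respondents:.0%}")  # 56%

# Frequency and percentage table for a single-response question,
# e.g. Q1: "For what department/agency do/did you work?"
counts = responses["q1_department"].value_counts()
share = (counts / len(responses)).map("{:.0%}".format)
print(pd.DataFrame({"Number of respondents": counts, "% of total": share}))
```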

Table 3 provides a profile of survey respondents.

Table 3: Survey respondent profiles (n=68)

| | Number of respondents | % of total respondents |
| --- | --- | --- |
| Q1: For what department/agency do/did you work? (Table note i) | | |
| Justice | 19 | 28% |
| RCMP | 19 | 28% |
| IRCC | 8 | 12% |
| CBSA | 22 | 32% |
| Q2: Where do/did you work? Check all that apply. (Table note ii) | | |
| Ottawa/Gatineau | 34 | 50% |
| Regional units | 22 | 32% |
| Overseas missions | 17 | 25% |
| Other | 2 | 3% |
| Q3: How long have you worked in areas related to addressing war crimes, crimes against humanity, or genocide? (Table note i) | | |
| Less than one year | 5 | 7% |
| One to five years | 26 | 38% |
| Six to 10 years | 17 | 25% |
| Over 10 years | 20 | 29% |

Source: Survey of departmental staff

Table note i: Totals may not sum to 100% due to rounding.

Table note ii: Multiple response question.

3.4. Case Studies

The case studies focussed on five of the remedies available under the Program:Footnote 13

  • denial of visas/entry to persons outside of Canada
  • admissibility/eligibility/exclusion to Canada’s refugee claim determination system
  • criminal investigationFootnote 14 and prosecution
  • revocation of citizenship
  • inquiry and removal under IRPA

Each case study included a review of available relevant documents and files, and key informant interviews. A total of 12 interviews were conducted for the case studies.

3.5. Country Studies

The evaluation included a review of three countries that have programs for addressing individuals involved in crimes against humanity, war crimes, or genocide: France, the United Kingdom, and New Zealand. These countries were chosen to provide some regional distribution (New Zealand and the two European countries), to include a mixture of civil law (France) and common law (the United Kingdom and New Zealand) jurisdictions, and to avoid overlap with the countries examined in the 2008 evaluation of the CAHWC Program (Australia, the Netherlands, and the United States). Each country study included a review of relevant publicly available documents and literature, as well as interviews or written submissions from officials within each country. A total of nine individuals participated in the country studies.

3.6. Cost Comparisons

The cost comparison analyzed the costs of the six remedies: denial of visas; admissibility/eligibility/exclusion from the refugee claim determination system; extradition on request; criminal investigation and prosecution; revocation of citizenship; and inquiry and removal under IRPA.Footnote 15 A mapping of the processes with associated costs was conducted for the 2008 evaluation.Footnote 16 The process mapping was updated in 2014 and again in 2015. For the current evaluation, the partner departments and agencies were consulted, and each developed a methodology to provide updated costing information based on available data. The process maps are in Appendix D.
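As a minimal sketch of how per-remedy costs can be rolled up from a process map (the stages and dollar figures below are invented for illustration and do not come from the Program's costing exercise):

```python
# Each remedy is mapped as a sequence of process stages, each with an
# estimated unit cost; the per-case cost is the sum over its stages.
remedy_stages = {
    "denial of visas": {
        "screening and file review": 2_000,
        "decision and notification": 500,
    },
    "revocation of citizenship": {
        "investigation and file preparation": 15_000,
        "court proceedings": 120_000,
        "final decision": 5_000,
    },
}

for remedy, stages in remedy_stages.items():
    print(f"{remedy}: ${sum(stages.values()):,} per case")
```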

3.7. Limitations

The evaluation faced a few methodological limitations. These are listed below by line of evidence.

Review of performance information, documents, files, and databases

The method of performance reporting provides a snapshot for each fiscal year for each remedy. For example, the performance reports for the Program include the number of individuals who were denied refugee status or received removal orders under IRPA on the basis of complicity in or commission of crimes against humanity. While illustrative of the Program’s annual activities and outputs, reporting on aggregate numbers prevents the Program from linking these activities to its intermediate and ultimate outcomes, and precludes a more accurate assessment of the success rate and efficiency of remedies. The Program lacks a central, program-wide database to track individual-level progress through the remedy or remedies applied. Such a database could capture key information, such as dates and the number of remedies involved, and systematically record the factors that affected progress through the key stages of each remedy.
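To make the gap concrete, an individual-level tracking store of the kind described might look like the following minimal sketch (the schema, table names, and example values are hypothetical, not an existing or planned Program system):

```python
import sqlite3

# Sketch of an individual-level tracking schema: one row per case, one
# row per remedy applied to a case, and one row per stage transition,
# so dates, remedy counts, and progress factors can be linked per person.
conn = sqlite3.connect("cahwc_tracking.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS cases (
    case_id    INTEGER PRIMARY KEY,
    opened_on  TEXT NOT NULL              -- ISO date the case was opened
);
CREATE TABLE IF NOT EXISTS remedies (
    remedy_id   INTEGER PRIMARY KEY,
    case_id     INTEGER NOT NULL REFERENCES cases(case_id),
    remedy_type TEXT NOT NULL,            -- e.g. 'visa denial', 'removal under IRPA'
    outcome     TEXT                      -- filled in when the remedy concludes
);
CREATE TABLE IF NOT EXISTS stages (
    stage_id   INTEGER PRIMARY KEY,
    remedy_id  INTEGER NOT NULL REFERENCES remedies(remedy_id),
    stage_name TEXT NOT NULL,             -- key stage within the remedy
    entered_on TEXT NOT NULL,
    exited_on  TEXT,                      -- NULL while the stage is in progress
    notes      TEXT                       -- factors affecting progress
);
""")
conn.commit()
```

With stage entry and exit dates recorded per remedy, success rates and durations could be computed directly rather than inferred from annual aggregates.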

Interviews, case studies, and the survey

The interviews with key informants and case study participants, as well as the survey of departmental staff, are subject to possible self-reported response bias and strategic response bias. Self-reported response bias occurs when individuals report on their own activities and so may want to portray themselves in the best light. Strategic response bias occurs when participants answer questions with the desire to affect outcomes.

For each of these three lines of evidence, the participation rate was also less than desired:

  • For the key informant interviews, several external key informants (NGOs, academics, and members of the international peer community) declined to participate because they lacked sufficient knowledge of the CAHWC Program, or simply did not respond to multiple requests.
  • The initial case study design was based on using a specific case and interviewing staff from the relevant partners involved in processing it. In most instances, a hypothetical scenario was used instead of a specific case, which resulted in few interviewees and, for most remedies, no interviewees from all of the potential partners involved. As a result, hand-off points and how partners work together could not be discussed for most remedies. The case studies were a missed opportunity for this evaluation and should be rethought for the next one.
  • For the staff survey, the response rate was substantially lower than in 2008, even though the questionnaire was intentionally designed to be similar in content and length to the one used in 2008.

The intent going into the survey had been to compare the 2008 and 2015 results. This was not possible: the 2008 raw data were not available, and the 2008 evaluation reported survey results excluding respondents who answered “don’t know” or “not applicable”, without providing the resulting number of respondents. Without those base numbers, the reported percentages cannot be placed on a common footing (for example, a reported 50% could reflect 5 of 10 valid responses or 30 of 60). Given the small sample sizes and this lack of information about the 2008 responses, it was decided not to compare the 2008 and 2015 survey results.

Mitigation strategy

The mitigation strategy for these methodological limitations was to use multiple lines of evidence from different stakeholder groups, as well as different types of evidence. For example, the evaluation gathered information from Program partners as well as from external key informants. In addition, the evaluation used both quantitative and qualitative data collection methods to answer the evaluation questions. By triangulating findings from these different sources, the evaluation was able to strengthen its conclusions despite the limitations.
