Crimes Against Humanity and War Crimes Program

Appendix E: Research Methods and their Limitations


This Appendix briefly examines each of the key research methods used during the evaluation and assesses the coverage and response rate of each. It also considers the limitations of the methods and their impact on the validity of the evaluation findings taken as a whole.

Interviews

As Table 1 below indicates, members of the Evaluation Advisory Committee (EAC) identified a total of 108 potential interviewees for the evaluation. The evaluation team was able to contact, schedule and interview 75 of them, a participation rate of approximately 70 percent.

Many of those who were unavailable or declined indicated that another person who was available for the interview had the necessary experience to provide relevant answers. Most importantly, representatives of all of the identified organizational units taking part in or cooperating with the Program were interviewed.

Table 1: Comparison of Interviewees Identified vs. those Interviewed

Group                            Identified   Interviewed
Participating Departments            37            26
Other Government Departments         12            10
External Canadian Stakeholders       25            20
International Stakeholders           34            19
Total                               108            75

Survey of Participating Departments

The response rate for the survey of program staff was 82 percent. A total of 93 staff responded from all four departments, including a cross-section ranging from those managing headquarters Crimes Against Humanity and War Crimes units to those staffing regional offices or working in Canadian Embassies and High Commissions abroad.

Table 2: Responses to the Survey of Participating Departments

Group                                Invited   Respondents
Canada Border Services Agency           54          45
Citizenship and Immigration Canada      14          12
Department of Justice                   27          22
Royal Canadian Mounted Police           18          14
Total                                  113          93
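The participation and response rates cited above follow directly from the table totals. A minimal sketch of the arithmetic, using the figures from Tables 1 and 2:

```python
# Participation/response rates from Tables 1 and 2: (identified or invited, completed).
table1 = {
    "Participating Departments": (37, 26),
    "Other Government Departments": (12, 10),
    "External Canadian Stakeholders": (25, 20),
    "International Stakeholders": (34, 19),
}
table2 = {"CBSA": (54, 45), "CIC": (14, 12), "Justice": (27, 22), "RCMP": (18, 14)}

def rate(groups):
    """Return (identified, completed, completion rate in percent)."""
    identified = sum(i for i, _ in groups.values())
    completed = sum(c for _, c in groups.values())
    return identified, completed, 100 * completed / identified

for name, groups in (("Interviews", table1), ("Survey", table2)):
    n, c, r = rate(groups)
    print(f"{name}: {c}/{n} = {r:.1f}%")
# Interviews: 75/108 = 69.4%
# Survey: 93/113 = 82.3%
```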

Case Studies

The evaluation team undertook analyses of five different cases, each involving the use of a different program remedy, through file and document reviews and interviews with key departmental personnel. The five cases studied covered:

  • Case 1: Criminal prosecution. This case focused only on the processes involved in carrying out a generic criminal prosecution under the Crimes Against Humanity and War Crimes Act;
  • Case 2: Revocation of citizenship and deportation;
  • Case 3: Inquiry and removal under the Immigration and Refugee Protection Act;
  • Case 4: Denial of visa; and
  • Case 5: Denial of status under paragraph 35(1)(b) of the IRPA.

The case studies achieved their intended purpose of illustrating, in particular, the operation of the Program's cross-departmental coordination and information-sharing mechanisms. The process flow diagrams developed for the cost comparison also allowed the evaluation to cross-check the results of the case study interviews against the process steps illustrated in the diagrams.

Country Studies

The country studies of national programs and institutions for dealing with crimes against humanity and war crimes in Australia, the Netherlands and the United States allowed the evaluation to examine the emphasis given to different remedies in different jurisdictions. They also allowed for a comparison of the results achieved and of the level of integration across departments. Unfortunately, they did not allow for a comparison of costs on either a whole-program or unit-of-output basis, mainly because none of the countries chosen for comparison deals with war crimes through an integrated program with its own budget. In general, cost data on programs in other jurisdictions are not available since the costs are subsumed in the operating budgets of the departments involved.

Comparison of Remedy Costs

The cost comparison began with a series of consultations with participating departments to develop the process flow diagrams presented in Appendix D. This was followed by the development of a costing table for each of the remedies subsequently circulated to the relevant staff in each of the participating departments. Finally, the evaluation was able to compile estimated costs for each separate step in the remedies available under the Program and to arrive at an estimated average total cost for each remedy (on a per-case basis). This could then be used in an assessment of program and remedy cost effectiveness.
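The costing logic described above (per-step estimates rolled up into an average total cost per remedy, on a per-case basis) can be sketched as follows. The step names and dollar figures below are hypothetical placeholders for illustration, not actual Program figures:

```python
# Sketch of the per-remedy costing approach: each remedy is broken into process
# steps (per the flow diagrams), each step gets an estimated average cost per
# case, and the per-case remedy cost is the sum of its step costs.
# All names and amounts are hypothetical, not Program data.
remedy_steps = {
    "denial of visa": {"file review": 1000.0, "decision and notification": 500.0},
    "criminal prosecution": {"investigation": 50000.0, "trial": 80000.0},
}

def estimated_cost_per_case(step_costs):
    """Average total cost per case: sum of the estimated per-step costs."""
    return sum(step_costs.values())

for remedy, steps in remedy_steps.items():
    print(f"{remedy}: ${estimated_cost_per_case(steps):,.0f} per case")
```

Totals computed this way can feed a cost-effectiveness assessment, subject to the qualitative weighting of outcomes noted under Limitations below.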

Media Analysis

The media analysis focused on a review of how specific crimes against humanity and war crimes cases were covered in Canadian media. It was not designed to assess the impact of media coverage on program awareness and opinions regarding war crimes issues among the Canadian public. The media analysis demonstrates, however, that high profile cases before the courts for criminal prosecution under the Act, or as part of the process of removing a suspect from Canada, generate virtually all media coverage of the Program and the issues.

Limitations

In overall terms, the evaluation was carried out in accordance with the methods originally proposed in the study design and was able to access all, or very nearly all, of the data expected. There were some limitations in each of the methods, but none undermined the overall validity of the evaluation findings.

  • Although only approximately 70% of identified interviewees could be interviewed, all relevant organizations and organizational units were able to participate in the interviews;
  • The response rate to the departmental survey was very high at over 82%. Nonetheless, it is important to note that the number of answers to any given question can be very small. Had the respondents been chosen through a statistically valid random sampling process, such small numbers would severely limit the validity of the results. The concern is smaller for a directed sample, as used here, in which program staff identified the persons they felt were most knowledgeable about the issues.
  • For the interviews, the staff survey and the case studies, there is an obvious risk of selection bias since the samples in all three were selected based on the advice of program staff. This was done to maximize the content available to the evaluation team. Since the interviewees and survey respondents represent very nearly a census of knowledgeable persons, this is less of an issue than for the case studies.
  • The case studies were able to examine the changing nature of interdepartmental coordination and communications during the evaluation period, since most of the cases were multi-year in duration. With only five case studies carried out, there is a real limit to the generalizations that can be drawn from them. They were not, however, intended to be representative of the Program as a whole, but to provide illustrations of some of the phenomena (such as the role of coordination) found through the broader-based methods such as the interviews and staff survey.
  • The country studies were able to illustrate very different national organizational and policy responses to the problem of war crimes, even though they did not provide cost data to allow for cost effectiveness comparisons across countries. Of course, they suffer from the same problem of limited sample size as the case studies.
  • The comparison of estimated remedy costs did provide a very clear pattern of the differential in estimated remedy costs on a per case basis. Cost effectiveness estimates can be derived from this data but it requires qualitative weighting of the value of a given outcome. Higher costs do not automatically indicate lower cost effectiveness.