The Effectiveness of Substance Abuse Treatment with Young Offenders

4. Defining Treatment Success

A review of the adolescent substance abuse treatment literature, in both offender and non-offender samples, reveals very little agreement on how programmatic success should be defined (McNeece et al., 2001). The problem is compounded by the fact that substance abuse affects many different physical, mental, and behavioural outcomes, making universal agreement on which set of outcomes defines program success difficult to obtain. A further obstacle to solid scientific progress is that, even if a universal set of indicators of program success were unanimously selected, considerable disagreement remains over how these outcomes should be measured (Catalano et al., 1990).

Despite these concerns, some researchers have offered promising recommendations on which to build. More specifically, Webster-Stratton and Taylor (1998) proposed four standards that an intervention should meet in order to be classified as empirically supported:

  • a detailed scientific report on the outcomes is available - the authors considered publication of the report in a peer-reviewed journal sufficient to meet this standard;
  • short- and long-term effects are demonstrated in a randomized controlled trial against no treatment or an alternative treatment approach - the authors proposed that this type of experimental design, in which subjects are randomly assigned to either a treatment or a comparison condition, is essential to determine whether the intervention is truly effective;
  • effects are demonstrated on a primary predictor of adolescent substance abuse, violence, and delinquency - as persuasively argued by the authors, unless evaluators assess the program's impact on one of the primary predictors of these negative outcomes, it is impossible to know whether the intervention will be effective and whether program participation brought about the changes in this area; and,
  • a manual describing the intervention is available - this information is deemed critical to facilitate replication by other interested researchers or program administrators.

Recent evidence supplied by Dunford (2000) emphasizes the importance of adhering to the second standard mentioned above. In an evaluation of a treatment program for spousal abusers, he demonstrated that markedly different conclusions were reached depending on whether the comparison group data were included in the analyses of program effectiveness. More specifically, when the comparison group data were excluded, the results indicated that the program was a success (as measured by pre-post improvements). However, when the comparison group data were included, no treatment effect was found (i.e., the treatment group performed no better than the comparison group). This example clearly illustrates the pitfalls that arise when classical experimental design methodology is not incorporated into a program evaluation.

The Treatment Outcome Working Group, sponsored by the Office of National Drug Control Policy (ONDCP) in the United States, has also tackled the issue of defining effective substance abuse treatment. This working group consisted of a panel of treatment and evaluation experts, and the cumulative result of their efforts was a set of standards and protocols for defining substance abuse program effectiveness that encompasses a wide range of physical, mental, and behavioural variables. These included:

  • reduction in primary drug use;
  • improved employment and educational situation;
  • improved interpersonal relationships;
  • improved medical status and general improvements in health;
  • improved legal status;
  • improved mental health status; and,
  • improved non-criminal public safety (ONDCP, 1996, as cited in McNeece et al., 2001).

Evaluating substance abuse treatment across a global constellation of factors has also been endorsed by correctional investigators working with young offenders. More specifically, some researchers have argued that, because of the multiple influences that substance abuse has on an individual, program effectiveness must be reviewed across each of these separate outcome domains rather than on a single outcome such as recidivism (Mears et al., 2001).

Based on these recommendations, it is clear that program evaluators should examine a program's impact on multiple outcome measures rather than limit these indices to reductions in re-offending and/or substance abuse. Furthermore, and arguably most important, as discussed by Andrews and Bonta (1998), evaluators must verify that changes observed on intermediate outcome measures (e.g., dynamic factors) targeted during treatment are linked to the outcome variables of interest, so that any observed post-treatment effects can be attributed to the program.
