nextgenforensic

CoSA: An inconvenient truth

Ian A. Elliott

This week I read the new (July) edition of the International Journal of Offender Therapy and Comparative Criminology. Inside was a new outcome study of the Circles of Support and Accountability (CoSA) re-entry program in the South East region of the United Kingdom, by Andrew Bates and colleagues. I consider myself to be a supporter of CoSA and its mission, and I think it’s an excellent program implemented by motivated, diligent, and benevolent individuals. Ian McPhail and I have written positively about CoSA on this very blog. Nonetheless, I have constant lingering concerns about the inconvenient truth that, as yet, there is simply not enough evidence to suggest that CoSA programs are effective in their aim to significantly reduce sexual reoffending by Core Members (the individuals to whom support and accountability are provided).

My colleague Gary Zajac and I recently outlined the reasons why in detail in a report from our 2013 CoSA evaluability study for the National Institute of Justice. Our criticisms focused on the varying quality of previous outcome studies and the lack of statistically significant results in the one randomized-controlled experimental study. Thus, I hoped that Bates and colleagues’ new analysis would represent a further opportunity for those with furrowed brows to shake off our concerns about the lack of quality evidence for CoSA. And Bates and colleagues’ abstract delivers: “[The] incidence of violent and contact sexual reconviction in the comparison group was significantly higher than for the Circles cohort.” (p. 861). Unfortunately, the paper does not make good on the promises made in the abstract. This post is not intended to be a personal attack on the work of fellow researchers. Below, I simply aim to highlight what I believe are shortcomings in the paper, because I feel that they replicate many of the problems that persist throughout the existing research on CoSA and its effectiveness.

Firstly, the study persists in an ongoing trend of inaccurately reporting previous findings related to the efficacy of CoSA which, whether intentionally or not, presents the CoSA approach in a positively-biased manner. For example, contrary to testimony in Bates et al., the original studies, and others that have cited it, Wilson et al.’s 2009 replication study does not represent evidence of significant reductions in sexual reoffending attributable to CoSA [Edit: here I’m referring to the main analysis. Wilson et al. also provide a 3-year fixed analysis of a smaller subset (19 CoSA vs. 18 comparison) that did find significant reductions in sexual recidivism]. Given the small numbers in the contingency table for that analysis, the authors should have reported the Fisher’s Exact Test (which was not statistically significant) rather than the significant Chi-Square statistic. We have explained why this is an error in two published works (Elliott & Beech, 2012; Elliott et al., 2013). Bates et al. fall short of referring to Wilson et al.’s findings as evidence of significant reductions, but do go so far as to describe them as “dramatic reductions”.
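As a quick illustration of the statistical point (a minimal sketch using hypothetical counts chosen for clarity, not Wilson et al.’s actual data), a Chi-Square test on a 2x2 table with small expected cell counts can appear significant while Fisher’s Exact Test, the more appropriate choice, does not:

```python
# Hypothetical 2x2 reoffense table with small expected cell counts
# (illustrative numbers only; these are NOT Wilson et al.'s data).
from scipy.stats import chi2_contingency, fisher_exact

# Rows: group (e.g., CoSA vs. comparison); columns: reoffended vs. did not.
table = [[0, 22],
         [4, 18]]

chi2, chi2_p, dof, expected = chi2_contingency(table, correction=False)
odds_ratio, fet_p = fisher_exact(table, alternative="two-sided")

print("Expected cell counts:\n", expected)  # two cells have expected counts of 2
print(f"Chi-Square p = {chi2_p:.3f}")       # approx. .036 -- looks 'significant'
print(f"Fisher's Exact p = {fet_p:.3f}")    # approx. .108 -- not significant
```

With expected counts below 5 in half the cells, the Chi-Square approximation breaks down and overstates the evidence; the exact test is the safer statistic to report.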

Another example is that they report the RCT findings of Duwe (2012) as follows:

“MnCOSA significantly reduced three of the five recidivism measures examined (e.g., rearrest, reconviction, reincarceration for a new offense, reincarceration for a technical violation revocation, and reincarceration for either a new offense and/or a technical violation revocation).” (p. 864)

Bates et al. use this statement as part of a claim that Duwe’s data “further supported the use of Circles as a way of reducing the risk of sex offenders released into the community”. They fail to mention, however, that reconviction and new-offense re-incarcerations (the variables measured in Bates et al.’s own study) were not reduced to a statistically significant level. To wit, the authors fail to inform the reader that the only high-quality randomized-controlled experimental evaluation of the effectiveness of CoSA (and, methodologically and statistically speaking, it really was an excellent study) did not find significant reductions in either reconviction or re-incarceration for a new sexual offense. I believe that this omission blurs Duwe’s findings in a way that presents CoSA in an undeservedly favorable light.

Secondly, similar to previous CoSA cohort studies, questions also arise regarding both the findings and the methods reported in the paper. Unlike prior studies, Bates and colleagues present information on what appear to be adequate techniques for matching – they used individuals who were deemed suitable for a Circle, but for whom a Circle was not available, as their comparison group. However, they go on to explain that individuals who “withdrew from the process after being assessed as suitable” were seemingly also added to the comparison group. This is contrary to recommended practice, on the basis that doing so stacks the deck towards finding a treatment effect.

Instead, the authors should have included these non-completers in the treatment group, as is recommended by a panel of experts. There is also a further issue with including non-completers as comparisons. In their introduction, Bates and colleagues cite desistance theory to suggest that, by deciding to engage with CoSA, a Core Member is actively “making a statement to himself and others that he wishes to desist” and that the “role of the Circle is to maintain his motivation and commitment” (p. 866) – the intimation being that Core Members share characteristics distinctive to desisters. Consequently, there is likely to be a relative difference in motivation to stop offending between those who choose to engage in CoSA and those who withdraw or decline, potentially making enrollers and non-enrollers fundamentally different.

The authors’ description of the ‘90 day’ rule – that to be included in the analysis participants must have been engaged in CoSA for at least 90 days, because otherwise they would not have “significantly benefited from the process” – is also poorly rationalized. The authors explain that this decision was made “in accordance with previous international Circles research (see R. J. Wilson et al., 2007b; R. J. Wilson et al., 2009)”. It is of concern that I was not able to find reference to any ‘90 day’ rule in either of the cited papers, suggesting either that the original authors failed to report a major inclusion/exclusion criterion for their analysis or that the rule has been created or misattributed by Bates et al.

Regardless of its source, though, CoSA is advertised as specializing in supporting individuals at the highest levels of risk – those for whom recidivism is expected to occur within days or weeks rather than months or years – and in providing support from the point of re-entry. Ergo, any ‘90 day’ rule whiffs a little of cherry-picking: excluding data from a period during which program users are at the highest likelihood of program failure. In Bates et al.’s study, the ‘90 day’ rule led to the exclusion of ten Core Members from the CoSA group: five who were recalled to prison for breach of release conditions (one of whom is described in the paper as having been recalled after concerns relating to a “relationship with a vulnerable woman who had young children”: p. 867) and a further four who decided to withdraw from the program. Given CoSA’s high-risk specialization, it also seems contrary to CoSA’s stated aims to find, as Bates et al. note, a meaningful number of low-risk Core Members in this sample. If CoSA is targeted at the highest-risk sex offenders, questions remain as to why low-risk offenders are receiving Circles at all.

Thirdly, and perhaps most importantly, despite what is anticipated after reading the study’s abstract, Bates et al. do not find evidence of (statistically) significant differences between Core Members and the comparison group on new sexual offenses. A quick analysis of a contingency table using Bates et al.’s data – 3 Core Members with new sexual offenses vs. 5 in the comparison group – is not statistically significant (Fisher’s Exact Test (FET) = 0.719). In terms of actual vs. expected failures (as calculated from the recidivism rates associated with each risk level on the RM2000 risk assessment tool), neither group committed new sexual offenses with significantly different frequency than would be expected from the RM2000 data presented. Of the 25 Core Members in Bates et al. for whom data were available and who were at risk for at least five years, 6 were expected to fail; 3 did (non-significant, FET (2-sided) = .463). Of the 21 comparison group members for whom data were available and who were at risk for at least five years, 5 were expected to fail; 5 did (non-significant, FET (2-sided) = 1.0). Individual-level data on the risk levels of those who committed new offenses were only presented for the CoSA group, not for the matched group. Consequently, it is not possible to examine the expected vs. actual failures within each risk category or to know whether one group had relatively disproportionate numbers in one or more risk categories.
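For readers who want to check these figures, the sketch below reproduces the Fisher’s Exact Tests with scipy. The expected-vs-actual tables use the counts quoted above; the group sizes in the first table are my own assumption (71 per group, which is consistent with the reported p = .719), not a figure taken from this paragraph:

```python
# Rough reproduction of the contingency-table analyses described above.
from scipy.stats import fisher_exact

# New sexual offenses: 3 Core Members vs. 5 comparison group members.
# NOTE: the group sizes are assumed (71 per group), not quoted above.
n_cosa, n_comp = 71, 71
_, p = fisher_exact([[3, n_cosa - 3], [5, n_comp - 5]], alternative="two-sided")
print(f"New sexual offenses, CoSA vs. comparison: p = {p:.3f}")  # ~.719

# Actual vs. RM2000-expected failures, CoSA group (25 at risk for 5+ years):
# 3 actual failures vs. 6 expected.
_, p = fisher_exact([[3, 22], [6, 19]], alternative="two-sided")
print(f"CoSA actual vs. expected failures: p = {p:.3f}")         # ~.463

# Actual vs. RM2000-expected failures, comparison group (21 at risk for 5+ years):
# 5 actual failures vs. 5 expected.
_, p = fisher_exact([[5, 16], [5, 16]], alternative="two-sided")
print(f"Comparison actual vs. expected failures: p = {p:.3f}")   # 1.000
```

None of these comparisons approaches conventional significance, which is the point of the paragraph above.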

Finally, in the analysis presented in the paper, the authors report statistically significant findings only when sexual offenses and violent offenses (which included a conviction for property damage) are grouped together. This appears to be a post-hoc analysis/reporting decision, since previous outcome studies of CoSA typically present separate statistics for new sexual offenses, new violent offenses, and sexual and violent offenses combined. To subsequently conclude that “the Circles participants reoffended sexually or violently at a rate one quarter that of the comparison group of persons referred to, but not placed, in a Circle” (p. 878) seems misleading. The use of the word ‘or’ insinuates that sexual and violent crimes were analyzed separately, especially for interested readers who, having read the previous studies, may have expected separate analyses to be reported.

All told, the takeaway is that a study that finds no significant reduction in sexual reconvictions between Core Members and a non-CoSA comparison group is still touted as evidence of the effectiveness of a program whose stated aim is to reduce sexual reoffending.

Many of us think CoSA is an excellent program and want to see it thrive. It is perhaps the case that wanting CoSA to be successful so intensely is clouding our ability to remain objective about its effectiveness. Still, it damages the credibility of the program (particularly in the eyes of policy makers) to construct an evidence base that is so vulnerable to a great many valid and grave criticisms. These criticisms, however, are clearly fixable, and CoSA remains a program of genuine promise. I’m also aware that CoSA has benefits beyond those measured merely by reconviction rates (something Gary Zajac and I also discuss in the NIJ report).

But, far from being an exemplar of evidence-based practice in sex offender management, I would argue that the evidence base for CoSA is built on extremely shaky foundations. What is required is a rigorous experimental evaluation. Supporters of the program need to unite to demand improved standards of practice (especially in Core Member selection) and to lobby sponsors and policy wonks for a large-scale, preferably cross-cultural, rigorous experimental evaluation of CoSA, in order to provide the opportunity to make an evidence-based case for its much-vaunted benefits.

Author’s note: I owe a debt of gratitude to fellow editors Kelly Babchishin and Ian McPhail for their clever and beneficial suggestions on earlier drafts of this post.

Suggested citation:
Elliott, I. A. (2014, June 6). CoSA: An inconvenient truth. [Weblog post]. Retrieved from http://wp.me/p2RS15-4A.



Email us: nextgenforensicblog@gmail.com
Follow us: @nextgenforensic


One thought on “CoSA: An inconvenient truth”

  1. I thank everyone for their comments and feedback on my recent post related to Circles of Support and Accountability (CoSA), whether positive, negative, or neutral. The posts on nextgenforensic are intended to generate debate – it is frustrating that much of that debate has occurred in membership-only arenas. Thus I want to make sure the following is [hopefully] read by anyone who reads the above post.

    If anyone reading the post has been given the impression that I sought to imply that Robin Wilson, Andrew Bates, Andrew McWhinnie, or any of the authors of the papers I cited are, or were, engaging in any unethical conduct of any kind, nothing could be further from the truth. It has been pointed out that my piece, or comments that I made on the ATSA listserve or on LinkedIn, have the potential to be interpreted that way. For this, I sincerely apologize. I, and I’m sure all of those reading, know of and respect each of them for their passion and dedication in developing one of the most influential and innovative programs in our field. If they feel I have unfairly targeted them personally, I also sincerely apologize. It is a very small field of study.

    I do not, however, in any way apologize for writing the piece – it is my interpretation of the evidence base and the context that surrounds it. I believe that there has been very little genuine critical analysis in reviews of the CoSA literature and of the interpretations of results in published CoSA-related research, and I believe that this lack is to its detriment. If I write strongly, it’s because I believe strongly and I have limited space in which to make my point. To have hedged my bets would only have served to feed into the very point I was making.

    Dr. Wilson has rightfully reiterated that CoSA is different. It is a dynamic social movement that became a ‘program’ – it is about communities of many forms and sizes nobly taking responsibility for community safety – and therefore it is difficult to treat it in the same way we do more traditional approaches. I wholeheartedly agree. What the post asks is that, if we choose to do assessments of its effectiveness (as has been done), then we should incorporate as much methodological rigor as is possible.

    I am not a practitioner. I am not a policy-maker. I am not a director of a non-profit organization. I am a researcher. In this instance, given a task and a meaty subject, I read, I synthesized, I considered, I critically appraised, and I wrote – in plain sight, with my name attached. If publishing critiques of recently-published research papers and the context in which they are rooted is not my role as an academic and as a Research Member of ATSA, then I am at pains to understand what my role is.

    I genuinely wish everyone involved in CoSA and ATSA the very best. For what it’s worth, I think the CoSA program is truly promising and I hope to see it succeed.

    Regards,
    Ian

    “[Benjamin] would say that he was given a tail to keep the flies off, but that he would sooner have no tail and no flies.” – George Orwell, Animal Farm.
