Chambless & Hollon, 1998: Defining ESTs
The Chambless and Hollon article provides a firm foundation for considering the issue of Empirically Supported Therapies (ESTs). In it, they summarize their (fairly common-sense and relatively liberal) recommendations for how a therapy might achieve the much-vaunted status of an EST. Though I largely agree with many of the points addressed in the article, the following are some issues that raised my eyebrows a bit:
-Several times, Chambless and Hollon refer to the fact that they “have more confidence in inferences derived from controlled experimentation than those derived from purely correlational analyses” (p. 8), and that such controlled inferences merit the greater balance of subsequent study. Though this is a very sensible – and, indeed, necessary – foundation for building the empirical base for ESTs, it seems to overlook an important element in the process of treatment design: that treatments may (and, perhaps, should) be developed, initially, “in the field” by practicing clinicians. For the reasons mentioned by Chambless and Hollon (e.g., lack of clinical sample representativeness, lack of replicability, cost-effectiveness), many lab-derived treatments have come under fire from practitioners. Even those that are thoroughly vetted for their effectiveness (as opposed to efficacy) are often not easily disseminated. However, a large number of practitioners of clinical psychology in various settings quite frequently attempt to create novel, targeted (read: “specific,” in the Chambless and Hollon sense) treatments. It seems both good and proper for clinical researchers to survey these treatments and use them as the basis for future study (that they do not more readily do so already could be construed as a sort of chicken-and-egg “starting point error” in the pursuit of treatment uniformity). Though such treatments are not derived from controlled experimentation, they do have the advantage – upon rigorous examination, of course – of a distinct likelihood of achieving the lofty aim of getting ESTs into clinical practice more readily.
-Chambless and Hollon note that “any given therapy tends to do better in comparison with other interventions when it is conducted by people who are expert in its use than when it is not” (p. 12). I found this interesting, as this could be the result of competence/“mastery,” of bias, or of some combination of the two. This, then, has implications for both research design and staffing. Perhaps another bit of research methodology that psychology should borrow from medicine is some variation on the “double-blind” study. For clinical psychology, this iteration might involve bringing in “non-expert treaters” (culled, perhaps, from first-year students in terminal Master’s programs in Counseling Psychology, Clinical Psychology, or Social Work) to be trained as “blinded” therapists, administering more controlled treatment in psychotherapy research studies. Another option, of course, would be to have “researchers with differing orientations collaborate on comparative outcome research,” as suggested by Hunsley & DiGiulio.
Hunsley & DiGiulio, 2002: The Dodo
I was very heartened to read the Hunsley & DiGiulio article. I find myself in strong agreement (both clinically and scientifically) with their exposure of the living Dodo and his absurd verdict as the hoax that it is, and I certainly hope the field of psychotherapy research has since taken note of the obsolescence of the prevailing notion of psychotherapeutic equivalence. That said, there is a small fly I’d like to throw in the CBT-flavored ointment that serves as the implicitly prescribed salve throughout the article. As I see it, there are two substantial problems precluding the trumpeting of behaviorism’s final triumph in the kingdom of psychotherapy:
1) The issue of therapy “classes” (e.g., the problematic ones that Hunsley & DiGiulio cite in the Smith et al. (1980) article) remains complex for behavioral interventions. It could be argued (and, indeed, often is argued) that any intervention is ultimately trying to change behavior, and so there is a strong magnetic pull toward calling almost anything behavioral. Indeed, in our post-Beck and post-Linehan era, it sometimes looks as though almost any prefix could be affixed to “BT” to create a new treatment. This, then, could further confound any attempt at future meta-analysis, leading to the same types of categorization errors that seem to have cropped up in the past, and thus causing the Dodo to rise again, Phoenix-like, from its own ashes – yet more resilient than before, thanks to our own best intentions.
2) Behavioral intervention, at its core, is an inherently “quantitative-analysis-friendly” sort of treatment. With its explicit data points, observable change, and largely manualized approaches, it is an easy fit for the science of a field that relies heavily on statistical analysis as the coin of the realm for legitimacy. However, this fit may, in fact, beg the question: just because behaviorism lends itself to a quantitative analytic paradigm does not mean that a) it is the best potential option within the paradigm, b) the paradigm cannot accommodate other alternatives, or c) the paradigm is indeed optimal for the phenomena under examination. The first of these points is largely addressed by the meta-analyses cited by Hunsley & DiGiulio, but the second two are more problematic.
In terms of accommodating other alternatives: it may be that we have yet to develop effective tools for measuring change according to the mechanisms proposed to be involved in such interventions. Self-report surveys are often a crude measure of internal change processes, but they are (at present) the best method we have. That they will be less capable of capturing their target processes than direct behavioral observation or report is evident; that this poses a real problem for obtaining reliably comparable treatment effects in comparative psychotherapy research is apparently less so. It is the responsibility of psychotherapy researchers, therefore, to remain abreast of current assessment tools, and to re-assess potentially “debunked” therapeutic methods if superior tools for their assessment emerge.
In terms of the question of quantitative analysis as the best method for analyzing psychological phenomena: this is clearly a larger issue for another time. However, it remains important to remember that such analysis cannot be taken, a priori, to be the best and only determinant of effectiveness and success in a field that is, by definition, fraught with qualitative assessment and subjective response.
1 comment:
Two thoughts, really. The first comes from the beginning of this blog--your point about experiments versus correlational studies. I agree that the putative superiority of experimental designs is overstated, but we might disagree about how much. As for your point about qualitative research, I'll be interested to hear more during class. It is a dilemma--it will be great to see your reaction to the Meehl piece next week.