A few weeks back I received an email from Dr. Kevin Carroll, a marriage and family therapist in Iowa. Attached were the findings from his doctoral dissertation. The subject was near and dear to my heart: the measurement of outcome in routine clinical practice. The findings were inspiring. Although few graduate-level programs include training on using outcome measures to inform clinical practice, Dr. Carroll found that 64% of those surveyed reported using such scales with about 70% of their clients! It was particularly rewarding for me to learn that the most common measures employed were the…Outcome and Session Rating Scales (ORS & SRS).
As readers of this blog know, there are multiple randomized clinical trials documenting the impact that routine use of the ORS and SRS has on the retention, quality, and outcome of behavioral health services. Such scales also provide direct evidence of effectiveness. Last week, I posted a tongue-in-cheek response to Alan Kazdin’s broadside against individual psychotherapy practitioners. He was bemoaning the fact that he could not find clinicians who utilized “empirically supported treatments.” Such treatments, when utilized, are assumed to lead to better outcomes. However, as all beginning psychology students know, there is a difference between “efficacy” and “effectiveness” studies. The former tells us whether a treatment has an effect; the latter looks at how much benefit actual people gain from “real life” therapy. If you were a client, which kind of study would you prefer? Unfortunately, most of the guidelines regarding treatment models are based on efficacy rather than effectiveness research. The sine qua non of effectiveness research is measuring the quality and outcome of psychotherapy locally. After all, what client, having sought out but ultimately gained nothing from psychotherapy, would say, “Well, at least the treatment I got was empirically supported”? Ludicrous.
Dr. Carroll’s research clearly indicates that clinicians are not afraid of measurement, research, and even statistics. In fact, just last week I was in Denmark teaching a specialty course in research design and statistics for practitioners. That’s right. Not a course on research in psychotherapy or treatment. Rather, measurement, research design, and statistics. Pure and simple. Their response convinces me even more that the much-talked-about “clinician-researcher” gap is not due to a lack of interest on practitioners’ part but rather, and most often, a result of different agendas. Clinicians want to know “what will work” for this client. Research rarely addresses this question, and the aims and goals of some in the field remain hopelessly far removed from day-to-day clinical practice. Anyway, watch the video yourself: