As anyone who reads this blog or has been to one of my workshops knows, I am a fan of feedback. Back in the mid-1990s, I began using Lynn Johnson's 10-item Session Rating Scale in my clinical work. His book, Psychotherapy in the Age of Accountability, and our long relationship convinced me that I needed to check in regularly with my clients. At the same time, I started using the Outcome Questionnaire (OQ-45). The developer, Michael Lambert, a professor and mentor, was finding that routinely measuring outcome helped clinicians catch and prevent deterioration in treatment. In time, I worked with colleagues to develop a set of tools whose brevity made it feasible to ask for and receive feedback about the relationship and outcome of care.
Initial research on the measures and feedback process was promising. Formally and routinely asking for feedback was associated with improved outcomes, decreased dropout rates, and cost savings in service delivery! As I warned in my blog post last February, however, such results, while important, were merely "first steps" in a scientific journey. Most importantly, the research to date said nothing about why the use of the measures improved outcomes. Given the history of our field, it would be easy to begin thinking of the measures as an "intervention" that, if faithfully adopted and used, would result in better outcomes. Not surprisingly, this is exactly what has happened, with some claiming that the measures improve outcomes more than anything since the beginning of psychotherapy. Sadly, such claims rarely live up to their initial promise. For decades, the quest for the holy grail has locked the field into a vicious cycle of hope and despair, one that ultimately eclipses the opportunity to conduct the very research needed to understand the complex processes at work in any intervention.
In February, I wrote about several indirect, but empirically robust, avenues of evidence indicating that another variable might be responsible for the effect found in the initial feedback research. Now, before I go on, let me remind you that I'm a fan of feedback, a big fan. At the same time, it's important to understand why it works and, specifically, what factors are responsible for the effect. Doing otherwise risks mistaking method for cause, what we believe for reality. Yes, it could be the measures. But the type of research conducted at the time did not make it possible to reach that conclusion. Plus, it seemed to me, other data pointed elsewhere; namely, to the therapist. Consider, for example, the following findings: (1) therapists did not appear to learn from the feedback provided by measures of the alliance and outcome; and (2) therapists did not become more effective over time as a result of being exposed to feedback. In other words, as with every other "intervention" in the history of psychotherapy, the effect of routinely monitoring the alliance and outcome seems to vary by therapist.
Such results, if true, would have significant implications for the feedback movement (and the field of behavioral health in general). Instead of focusing on methods and interventions, efforts to improve the outcome of behavioral health practice should focus on those providing the service. And guess what? This is precisely what the latest research on routine outcome measurement (ROM) has now found. Hot off the press, in the latest issue of the journal Psychotherapy Research, Dutch investigators de Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012) found that feedback was not effective under all circumstances. What variable was responsible for the difference? You guessed it: the therapist, in particular, their interest in receiving feedback, sense of self-efficacy, commitment to using the tools, and their gender (with women being more willing to use the measures). Consistent with ICCE's emphasis on supporting organizations with implementation, other research points to the significant role that setting and structure play in success. Simon, Simon, Harris, and Lambert (2011), Riemer and Bickman (2012), and de Jong (2012) have all found that organizational and administrative issues loom large in mediating the use and impact of feedback in care.
Together with colleagues, I am currently investigating both the individual therapist and contextual variables that enable clinicians to benefit from feedback. The results are enticing. The first of these will be presented at the upcoming Achieving Clinical Excellence conference in Holland, May 16-18th. Other results will be reported in the 50th anniversary issue of the journal Psychotherapy, to which we've been asked to contribute. Stay tuned.