“What works” in therapy? Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate. For decades, the field has been divided. On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) and remedial to the problem being treated (e.g., depression). On the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).
While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years! Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services. Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome. At the same time, as I’ve pointed out in prior blog posts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work. After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and all models work equally well, what would a therapist gain by studying the common factors? More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.
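As an aside, the “better off than 80%” figure is the standard translation of the average effect size of roughly d ≈ 0.8 reported in the classic psychotherapy meta-analyses. Assuming normally distributed outcomes in both groups (an assumption of that conversion, not a claim from this post), the arithmetic can be sketched as:

```python
from math import erf, sqrt

def superiority_percentage(d: float) -> float:
    """Percentage of untreated people the average treated person is
    better off than, given a standardized mean difference (Cohen's d)
    and assuming normally distributed outcomes in both groups."""
    # Phi(d): area under the standard normal curve below d
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

print(round(superiority_percentage(0.8)))  # → 79, i.e., roughly 80%
```

In other words, an effect size of 0.8 places the average treated client at about the 79th percentile of the untreated distribution, which is where the rounded “80%” claim comes from.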
The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness. Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations. Thus, specific factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved (it’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples). The common factors camp, on the other hand, in place of techniques, proffers an invariant set of, well, generic factors. Little wonder that outcomes have stagnated. It’s a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or studying the parts of speech–in the case of the common factors.
What to do? For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and longtime mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing. Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific nor a common factors informed approach could.
Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery. One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback. Another, more recent study of couples therapy found that divorce/separation rates in the feedback condition were half (50%) those of the no-feedback condition!
Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.” At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.–a longtime proponent of “specific treatments for specific disorders” (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work. Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy. And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.
My fear? Focusing on measurement and feedback misses the point. Simply put: it’s not seeking feedback that is important. Rather, it’s what feedback potentially engenders in the user that is critical. Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service. “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? On that same question, we found that there is little improvement from year to year…” (quoted in Miller et al. ).
Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer’s “domain specific knowledge.” Feedback must result in the performer knowing more about his or her area–and how and when to apply that knowledge to specific situations–than others do. Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players. Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board. In other words, their immense knowledge is context specific.
A mere handful of studies document similar findings among superior performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge. I noted these and highlighted a few others in the research pipeline during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference. I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place. Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).
Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.