In 1997, Wampold and colleagues published a study that revolutionized psychotherapy outcome research. It addressed a question that had long divided the field: were some therapeutic approaches more effective than others?
Each side in the debate claimed the data supported their position — and, like a Rorschach ink blot, the available evidence could be interpreted in sharply different, but seemingly valid ways.
(Do you see two researchers sitting opposite one another but looking at the same computer screen?)
In the debate between contrasting positions, it’s easy to miss the study’s main contribution. True, the authors found no difference in outcome between competing treatment approaches. However, the important question, given years of conflicting results, was “why?”
Turns out, many previous studies, particularly those reporting significant differences between methods, included comparisons to approaches not intended to be therapeutic. If one really wanted to know whether a treatment (e.g., CBT, IPT, ACT, EMDR) was more effective, it had to be directly compared to an approach intended to help, what Wampold and colleagues termed a “bona fide” treatment. Doing otherwise, they argued, conflated artifacts of research design with actual research results.
“Well, of course,” you say? And yet, a more recent debate among researchers makes clear that the findings from two-and-a-half decades ago apparently aren’t that obvious. Consider, for example, deliberate practice. Despite rising interest in the topic in psychotherapy, and the publication of several studies, its true impact on performance remains a matter of debate (1, 2, 3, 4, 5).
Following a meta-analysis of the available research, Miller, Chow, Wampold, Hubble, del Re, Maeschalck, and Bargmann (2018) argued that an accurate understanding of deliberate practice would only be “likely when: (1) research included in any analysis is an actual study of DP; and (2) the criteria for what constitutes DP are standardized, made explicit, accepted by researchers, and applied consistently across studies” (p. 7). They proposed that “bona fide” deliberate practice includes four research-based criteria: (1) individualized learning objectives; (2) ongoing feedback regarding performance and learning; (3) involvement of a coach; and (4) successive refinement through repetition, most often conducted alone.
Last week, the first study of deliberate practice to meet all four criteria was published in Training and Education in Professional Psychology. It deals with improving therapists’ ability to handle conversations typically considered difficult in treatment. Bottom line: DP was superior to self-reflection and generalized to novel challenges. I will post the study here as soon as an online or print version becomes available. Until then, my colleague, Daryl Chow, and I discuss the results and practical implications in the video below.
Until next time,
Scott
Scott D. Miller, Ph.D.
International Center for Clinical Excellence
P.S.: The November FIT Intensive is now SOLD OUT. Click here for more information or secure your spot for the upcoming FIT Supervision/Consultation Intensive.