OK, this post may not be for everyone. I’m hoping to “go beyond the headlines,” “dig deep,” and cover a subject essential to research on the effectiveness of psychotherapy. So, if you fit point #2 in the definition above, read on.
It’s easy to forget the revolution that took place in the field of psychotherapy a mere 40 years ago. At that time, the efficacy of psychotherapy was in serious question. As I posted last week, psychologist Hans Eysenck (1952, 1961, 1966) had published reviews of studies purporting to show that psychotherapy was not only ineffective, but potentially harmful. Proponents of psychotherapy responded with their own reviews (cf. Bergin, 1971). Back and forth each side went, arguing their respective positions. That is, until Mary Lee Smith and Gene Glass (1977) published the first meta-analysis of psychotherapy outcome studies.
Their original analysis of 375 studies showed psychotherapy to be remarkably beneficial. As I’ve said here, and frequently on my blog, they found that the average treated client was better off than 80% of people with similar problems who went untreated.
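For readers curious where a figure like “better off than 80%” comes from: an effect size of d simply means the average treated client scores d standard deviations above the mean of the untreated group. Assuming roughly normal outcome distributions, that client therefore sits at the normal-curve percentile corresponding to d. Here is a minimal sketch of that conversion in Python (the d values below are illustrative, not quotes from any particular meta-analysis; an effect size of about 0.85 is what maps onto the 80th percentile):

```python
# Convert a standardized effect size into the
# "average treated client is better off than X% of untreated people" figure.
# Assumes roughly normal outcome distributions with similar spread in both groups.
from statistics import NormalDist

def percentile_of_average_treated_client(d: float) -> float:
    """Percentile of the untreated distribution at which the average treated client falls."""
    return NormalDist().cdf(d) * 100

for d in (0.50, 0.68, 0.85, 1.00):
    print(f"effect size {d:.2f} -> better off than ~{percentile_of_average_treated_client(d):.0f}% of untreated")
```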
Eysenck (1978, 1984) and other critics (Rachman and Wilson, 1980) immediately complained about the use of meta-analysis, using an argument still popular today; namely, that by including studies of varying (read: poor) quality, Smith and Glass OVERESTIMATED the effectiveness of psychotherapy. Were such studies excluded, they contended, the results would most certainly be different, and behavior therapy, Eysenck’s preferred method, would once again prove superior.
For Smith and Glass, such claims were not a matter of polemics, but rather empirical questions serious scientists could test—with meta-analysis, of course.
So, what did they do? Smith and Glass rated the quality of all outcome studies using specific criteria and multiple raters. And what did they find? The better and more tightly controlled the studies were, the more effective psychotherapy proved to be. Studies of low, medium, and high internal validity, for example, had effect sizes of .78, .78, and .88, respectively. Other meta-analyses followed, using slightly different samples, with similar results: the tighter the study, the larger the effect.
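To make the bookkeeping behind these numbers concrete: a meta-analysis of this kind computes a standardized effect size for each study and then averages those effect sizes within whatever grouping is of interest (here, ratings of internal validity). Smith and Glass standardized each study’s treatment-versus-control difference by the control group’s standard deviation, essentially what is now called Glass’s delta. The sketch below uses made-up numbers purely to show the procedure:

```python
# Minimal sketch of the meta-analytic bookkeeping described above.
# The data are invented; the point is the procedure, not the numbers.
from statistics import mean, stdev
from collections import defaultdict

def glass_delta(treated_scores, control_scores):
    """Glass's delta: standardize the group difference by the control group's SD."""
    return (mean(treated_scores) - mean(control_scores)) / stdev(control_scores)

# Each entry: (internal-validity rating, treated outcome scores, control outcome scores)
studies = [
    ("low",    [22, 25, 27, 30], [18, 20, 22, 24]),
    ("medium", [30, 34, 36, 40], [26, 28, 31, 33]),
    ("high",   [15, 18, 20, 23], [11, 13, 15, 17]),
    ("high",   [40, 44, 47, 50], [35, 38, 40, 43]),
]

by_quality = defaultdict(list)
for quality, treated, control in studies:
    by_quality[quality].append(glass_delta(treated, control))

for quality, deltas in by_quality.items():
    print(f"{quality:>6}: mean effect size = {mean(deltas):.2f} across {len(deltas)} studies")
```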
Importantly, the figures reported by Smith and Glass have stood the test of time. Indeed, the most recent meta-analyses provide estimates of the effectiveness of psychotherapy that are nearly identical to those generated in Smith and Glass’s original study. More, use of their pioneering approach has spread widely, becoming THE standard method for aggregating and understanding results from studies in education, psychology, and medicine.

As psychologist Sheldon Kopp (1973) was fond of saying, “All solutions breed new problems.” Over the last two decades the number of meta-analyses of psychotherapy research has exploded. In fact, there are now more meta-analyses than there were studies of psychotherapy at the time of Smith and Glass’s original research. The result is that it’s become exceedingly challenging to understand and integrate information generated by such studies into a larger gestalt about the effectiveness of psychotherapy.
Last week, for example, I posted results from the original Smith and Glass study on Facebook and Twitter—in particular, their finding that better controlled studies resulted in higher effect sizes. Immediately, a colleague responded, citing a new meta-analysis, “Usually, it’s the other way around…” and “More contemporary studies find that better methodology is associated with lower effect sizes.”
It’s a good idea to read this study closely. If you just read the “headline” (“The Effects of Psychotherapy for Adult Depression Are Overestimated”), or skip the methods section and read only the authors’ conclusions, you might be tempted to conclude that better designed studies produce smaller effects (in this particular study, in the case of depression). In fact, what the study actually says is that better designed studies will find smaller differences when a manualized therapy is compared to a credible alternative! Said another way, differences between a particular psychotherapy approach and an alternative (e.g., counseling, usual care, or placebo) are likely to be greater when the study is of poor quality.
What can we conclude? Just because a study is more recent does not mean it’s better or more informative. The important question one must consider is, “What is being compared?” For the most part, Smith and Glass analyzed studies in which psychotherapy was compared to no treatment. The study cited by my colleague demonstrates what I and others (e.g., Wampold, Imel, Lambert, Norcross) have long argued: few, if any, differences will be found between approaches.
The implications for research and practice are clear. For therapists, find an approach that fits you and benefits your clients. Make sure it works by routinely seeking feedback from those you serve. For researchers, stop wasting time and precious resources on clinical trials comparing one approach to another. Such studies, as Wampold and Imel so eloquently put it, “seemed not to have added much clinically or scientifically (other than to further reinforce the conclusion that there are no differences between treatments), [and come] at a cost…” (p. 268).

with his story. Then, he played—doing with one hand what many would think impossible with two. When asked what drove him to continue in the face of so many challenges, he said, in a quiet yet confident voice, “Because there is so much to learn!”