“Everyone understands how a toilet works, don’t you?” ask cognitive scientists Sloman and Fernbach.
The answer, according to their research, is likely no. It turns out people’s confidence in their knowledge far outstrips their ability to explain how any number of simple, everyday items work — a coffeemaker, a zipper, a bicycle, and, yes, a toilet. More troubling, as complexity increases, the problem only worsens. Thus, if you struggle to explain how glue holds two pieces of paper together — and most people, despite being certain they can, cannot — good luck accounting for how an activity as complicated as psychotherapy works.
So pronounced is our inability to recognize the depth of our ignorance that the two researchers have given the phenomenon a name: the “Illusion of Explanatory Depth.” To be sure, in most instances, not being able to explain adequately and accurately isn’t a problem. Put simply, knowing how to make something work is more important in everyday life than knowing how it actually works:
- Push the handle on the toilet and the water goes down the drain, replaced by fresh water from the tank;
- Depress the lever on the toaster, threads of bare wire heat up, and the bread begins to toast;
- Replace negative cognitions with positive ones and depression lifts.
Our limited understanding serves us well until we need to build or improve upon any one of the foregoing. In those instances, lacking true understanding, we could believe literally anything — in the case of the toilet, that a little man in a rowboat inside the tank makes it flush — and be just as successful. While such apparent human frailty might, at first pass, arouse feelings of shame or stupidity, the truth is that operating on a “need-to-know” basis makes good sense. It’s both pragmatic and economical. In life, you cannot possibly, and don’t really need to, know everything.
And yet, therein lies the paradox: we passionately believe we do. That is, until we are asked to provide a detailed, step-by-step, scientifically sound accounting — only then do humility and the potential for learning enter the picture.
When research on routine outcome monitoring (ROM) first began to appear, the reported impact on outcomes was astonishing. Some claimed it was the most important development in the field since the invention of psychotherapy! They were also quite certain how it worked: like a blood test, outcome and alliance measures enabled clinicians to check progress and make adjustments when needed. Voilà!
Eight years ago, I drew attention to the assertions being made about ROM, warning that “caution was warranted.” It was not a bold statement, but rather a reasoned one. After all, throughout the waning decades of the last millennium and into the present, proponents of cognitive therapy (CT) and cognitive behavioral therapy (CBT) had similarly overreached, claiming not only that their methods were superior in effect to all others, but that the mechanisms responsible were well understood. Both claims proved false. As I’ve written extensively on my blog, CT and CBT are no more effective than other approaches in head-to-head comparisons. More, studies dating back to 1996 have not found any of the ingredients touted by experts as critical to be necessary for success (1, 2, 3).
That’s why I was excited when researchers Mikeal, Gillaspy, Scoles, and Murphy (2016) published the first dismantling study of the Outcome and Session Rating Scales, showing that using the measures in combination, or just one or the other, resulted in similar outcomes. Some were dismayed by these findings. They wrote to me questioning the value of the tools. For me, however, it confirmed what I’d said back in 2012: “focusing on the measures misses the point.” Figure out why their use improves outcomes, and we stop conflating features with causes, positioning ourselves to build on what matters most.
On this score, what do the data say? When it comes to feedback informed treatment, two key factors count:
- The therapist administering the measures; and
- The quality of the therapeutic relationship.
Specifically, the research shows:
- Therapists with an open attitude toward receiving feedback achieve faster progress with their patients;
- Clinicians able to create an environment in which clients provide critical (e.g., negative) feedback in the form of lower alliance scores early in care have better outcomes (1, 2); and
- The more time therapists spend consulting the data generated by routinely administering outcome and alliance measures, the greater their growth in effectiveness over time.
In terms of how FIT helps, two lines of research are noteworthy:
- In a “first of its kind” study, psychologist Heidi Brattland found that the strength of the therapeutic relationship improved more over the course of care when clinicians used the Outcome and Session Rating Scales (ORS & SRS) compared to when they did not. Critically, such improvements resulted in better outcomes for clients, ultimately accounting for nearly a quarter of the effect of FIT.
- Brattland also found that therapists “significantly differed in the influence of … [FIT] on the alliance, in the influence of the alliance on outcomes, and the residual direct effect of [FIT] … posttreatment” (p. 10). Consistent with other studies, such findings indicate routine measurement can be used to identify a clinician’s “growth edge” — what, where, and with whom they might improve their ability to relate to and help the diverse clients met in daily work. Indeed, the combination of FIT, the use of aggregate data to identify personal learning objectives, and subsequent engagement in deliberate practice has, in the only such study in the history of psychotherapy to date, been shown to improve effectiveness at the level of the individual practitioner.
“Inch by inch, centimeter by centimeter,” I wrote back in 2012, “the results of [new] studies will advance our understanding and effectiveness.” I’m hopeful that the discussion in this and my two prior posts (1, 2) will help those interested in improving their results avoid the vicious cycle of hope and despair that frequently accompanies new ideas in our field, embracing the findings and what they can teach us rather than looking for the next best thing.
Until next time,
Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence