It is a standard maxim of good business practice: Under Promise, Over Deliver (or UPOD). As my father used to say, “Do your best, and then a little better.” Sadly, history shows that the field of behavioral health has followed a different course: Over Promise, Under Deliver. The result? OPUDs.
The most gripping account of the field’s failed promises is Robert Whitaker’s Mad in America: Bad Science, Bad Medicine, and the Enduring Mistreatment of the Mentally Ill. In fact, Whitaker’s book inspired me to write what became my most popular article, downloaded from my website more often than any other: Losing Faith. In it, I document how, each year, new models, methods, and diagnoses appear promising to revolutionize mental health care, only later to be shown ineffective, wrong, and, in some instances, harmful. Remember Multiple Personality Disorder? Satanic Ritual Abuse? Xanax for panic disorder? Johnsonian-style Interventions for Addiction? Co-dependence? Thought Field Therapy? Rebirthing? How about SSRIs? Weren’t they supposed to be much better than those, you know, old-fashioned tricyclics? The list is endless.
“Not to worry,” current leaders and pundits advise, “We’ve made progress. We have a new idea. A much better idea than the old one. We promise!”
However, when it comes to claims about advances in the field of behavioral health, history indicates that caution is warranted. That includes, by the way, claims about the use of feedback tools in therapy. As readers of this blog know, I have, for several years, been championing the use of simple checklists for guiding and improving the quality and outcome of treatment. Several studies, reviewed here on this blog, document improved outcomes and decreased dropout and deterioration rates. These studies are important first steps in the scientific process. I’ve been warning, however, that these studies are only first steps. Why?
Studies to date, while important, suffer from the same allegiance effects and unfair comparisons as other RCTs. With regard to the latter, no study has compared feedback with an active control condition. Rather, all comparisons have been to “treatment as usual.” Such research, as a result, says nothing about why the use of the measures improves outcomes. At the same time, several indirect but empirically robust avenues of evidence indicate that another variable may be responsible for the effect! Consider, for example, the following findings: (1) therapists do not learn from the feedback provided by measures of the alliance and outcome; and (2) therapists do not become more effective over time as a result of being exposed to feedback. Such research indicates that a focus on the measures and outcome may be misguided, or at least a “dead end.”
Such shortcomings are why researchers and clinicians at ICCE are focused on the literature regarding expertise and expert performance. Focusing on measures misses the point. Already, there is talk about methods for ensuring fidelity to a particular way of using feedback tools. Instead, the research on expertise indicates that we need to help clinicians develop practices that enable them to learn from the feedback they receive.
Several studies are in progress. In Trondheim, Norway, the first-ever study to include an active control comparison for feedback is underway. I fully expect the control to be as effective as the simple use of checklists in treatment. In a joint research project being conducted at agencies in the US, UK, Canada, and Australia, research is underway investigating how top-performing therapists use feedback to learn and improve compared to average and below-average clinicians. Such studies are the necessary second step to ensure that we understand the elements responsible for the effective use of feedback.

Inch by inch, centimeter by centimeter, the results of such studies will advance our understanding and effectiveness. The gains, I’m sure, will be modest at best, and that’s just fine. In fact, the latest feedback research using the ORS and SRS found small, largely insignificant effects! (I’m still waiting for permission to publish the entire article on this blog; until then, interested readers can find a summary here.) Such findings can be disturbing to those who have heard others claim that “feedback is the most effective method ever invented in the history of the field!”

OPUD is dangerous. It keeps the field stuck in a vicious cycle of hope and despair, one that ultimately eclipses the opportunity to conduct the very research needed to facilitate understanding of the complex processes at work in any intervention. People lose faith until the “next best thing” comes along.
I’m excited about the research that is in process. Stay tuned for updates. Until then, let’s agree to UPOD.