It is a standard maxim of good business practice: Under Promise, Over Deliver (or UPOD). As my father used to say, “Do your best, and then a little better.” Sadly, history shows that the field of behavioral health has followed a different course: Over Promise, Under Deliver. The result? OPUD.
The most gripping account of the field’s failed promises is Robert Whitaker’s Mad in America: Bad Science, Bad Medicine, and the Enduring Mistreatment of the Mentally Ill. In fact, Whitaker’s book inspired me to write what became my most popular article, downloaded from my website more often than any other: Losing Faith. In it, I document how, each year, new models, methods, and diagnoses appear, promising to revolutionize mental health care, only later to be shown ineffective, wrong, and, in some instances, harmful. Remember Multiple Personality Disorder? Satanic Ritual Abuse? Xanax for panic disorder? Johnsonian-style interventions for addiction? Co-dependence? Thought Field Therapy? Rebirthing? How about SSRIs? Weren’t they supposed to be much better than those, you know, old-fashioned tricyclics? The list is endless.
“Not to worry,” current leaders and pundits advise, “We’ve made progress. We have a new idea. A much better idea than the old one. We promise!”
However, when it comes to claims about advances in the field of behavioral health, history indicates that caution is warranted. That includes, by the way, claims about the use of feedback tools in therapy. As readers of this blog know, I have, for several years, been championing the use of simple checklists for guiding and improving the quality and outcome of treatment. Several studies, as reviewed here on this blog, document improved outcomes and decreased dropout and deterioration rates. These studies are important first steps in the scientific process. I have been warning, however, that they are only first steps. Why?
Studies to date, while important, suffer from the same allegiance effects and unfair comparisons as other RCTs. With regard to the latter, no study has compared feedback with an active control condition. Rather, all comparisons have been to “treatment as usual.” Such research, as a result, says nothing about why the use of the measures improves outcomes. At the same time, several indirect but empirically robust lines of evidence indicate that another variable may be responsible for the effect! Consider, for example, the following findings: (1) therapists do not learn from the feedback provided by measures of the alliance and outcome; and (2) therapists do not become more effective over time as a result of being exposed to feedback. Such research indicates that a focus on the measures and outcome may be misguided, or at least a “dead end.”
Such shortcomings are why researchers and clinicians at ICCE are focused on the literature regarding expertise and expert performance. Focusing on measures misses the point. Already, there is talk about methods for ensuring fidelity to a particular way of using feedback tools. Instead, the research on expertise indicates that we need to help clinicians develop practices that enable them to learn from the feedback they receive.
Several studies are in progress. In Trondheim, Norway, the first-ever study to include an active control comparison for feedback is underway. I fully expect the control to be as effective as the simple use of checklists in treatment. In a joint research project being conducted at agencies in the US, UK, Canada, and Australia, researchers are investigating how top-performing therapists use feedback to learn and improve compared to average and below-average clinicians. Such studies are the necessary second step to ensure that we understand the elements responsible for the effective use of feedback. Inch by inch, centimeter by centimeter, the results of such studies will advance our understanding and effectiveness. The gains, I’m sure, will be modest at best, and that’s just fine. In fact, the latest feedback research using the ORS and SRS found small, largely insignificant effects! (I’m still waiting for permission to publish the entire article on this blog; until then, interested readers can find a summary here.) Such findings can be disturbing to those who have heard others claim that “feedback is the most effective method ever invented in the history of the field!” OPUD is dangerous. It keeps the field stuck in a vicious cycle of hope and despair, one that ultimately eclipses the opportunity to conduct the very research needed to facilitate understanding of the complex processes at work in any intervention. People lose faith until the “next best thing” comes along.
I’m excited about the research that is in process. Stay tuned for updates. Until then, let’s agree to UPOD.
Ron says
Hiya Scott,
That is an interesting post on your blog this morning re: an RCT showing minimal difference in outcome between feedback-informed treatment and non-feedback-informed treatment.
I was wondering where that leaves the findings produced on page 12 of FIT 1:
“Miller (2011) summarized the impact of routinely monitoring and using outcome and alliance data from 13 RCTs involving 12,374 clinically, culturally and economically diverse consumers and found:
Routine outcome monitoring and feedback as much as doubles the “effect size” (reliable and clinically significant change);
Decreases dropout rates by as much as half;
Decreases deterioration by 33%;
Reduces hospitalizations and shortens length of stay by 66%;
Significantly reduces cost of care compared to non-feedback groups (which increased in cost).
Additional evidence indicates that regular, session-by-session feedback (as opposed to less frequent intervals, i.e., every third session, pre- and post-services, etc.; Warren et al., 2010) is more effective in improving outcome and reducing dropouts.”
A further question I have from FIT relates to training, Scott. I work in a psychology dept. in the NHS and, as you know, NHS treatments have to adhere to NICE guidelines: specific treatments for specific symptoms.
Employers are now looking at whether practitioners have the appropriate qualification to treat specific symptoms. Psychologists and CBT practitioners are automatically deemed appropriately trained, even though both can be argued to be integrative in orientation. My own qualification, which is integrative, is in question!
I wonder what your thoughts are on this stance and whether counter-evidence exists?
Thanks for your time Scott. Keep up the good work.
Ron
Steve Andreas says
Scott;
In your post of March 12, 2012 you write (line 9) “Second, treatments in medicine do not function in any way similar to treatments in medicine.” I think there must be a mistake in this sentence, as it appears to be self-contradictory.
A few lines later, you write: “Despite widespread belief, no psychotherapy model has been shown to contain ingredients specific to the assumed cause of the problem.”
The NLP model of phobia/PTSD is quite specific: the client has images of a terrifying memory in which they are inside the memory, so it elicits the same terror as the original event. This diagnosis specifies the treatment precisely: to teach the client to experience the memory as if they were an outside observer, so they see the person in the memory freaking out, but can do so calmly. A 9-minute video of this complete treatment is available on YouTube at:
http://www.youtube.com/watch?v=mss8dndyakQ and a 4-minute, 25-year follow-up video with the client is also available at: http://www.youtube.com/watch?v=mss8dndyakQ
A 14-minute follow-up video with a Vietnam vet treated with the same process is available at:
http://www.youtube.com/watch?v=Ud35xqGc1PQ
scottdm says
Steve:
Thanks for the catch. I’ve now edited that line to read, “Psychological treatments do not function in any way similar to treatments in medicine.” As for your subsequent comments, ALL approaches CLAIM specific processes and ingredients. NONE…I repeat NONE…have any empirical support for their claims. Being able to describe or even demonstrate a specific ingredient is not the same as that ingredient actually being causal. Think Mesmerism and animal magnetism.