SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


Final Making Sense of Negative Research Results about Feedback Informed Treatment

February 19, 2020 By scottdm 21 Comments

“Everyone understands how a toilet works, don’t you?” ask cognitive scientists Sloman and Fernbach.

The answer, according to their research, is likely no.  Turns out, people's confidence in their knowledge far outstrips their ability to explain how any number of simple, everyday items work — a coffeemaker, zipper, bicycle and yes, a toilet.  More troubling, as complexity increases, the problem only worsens.  Thus, if you struggle to explain how glue holds two pieces of paper together — and most, despite being certain they can, cannot — good luck accounting for how an activity as complicated as psychotherapy works.

So pronounced is our inability to recognize the depth of our ignorance that the two researchers have given the phenomenon a name: the “Illusion of Explanatory Depth.”  To be sure, in most instances, not being able to explain adequately and accurately isn’t a problem.  Put simply, knowing how to make something work is more important in everyday life than knowing how it actually works:

  • Push the handle on the toilet and the water goes down the drain, replaced by fresh water from the tank;
  • Depress the lever on the toaster, threads of bare wire heat up, and the bread begins to toast;
  • Replace negative cognitions with positive ones and depression lifts.

Simple, right?

Our limited understanding serves us well until we need to build or improve upon any one of the foregoing.  In those instances, lacking true understanding, we could literally believe anything — in the case of the toilet, that a little man in a rowboat inside the tank makes it flush — and be just as successful.  While such apparent human frailty might, at first pass, arouse feelings of shame or stupidity, the truth is that operating on a “need to know” basis makes good sense.  It’s both pragmatic and economical.  In life, you cannot possibly, and don’t really need to, know everything.

And yet, therein lies the paradox: we passionately believe we do.  That is, until we are asked to provide a detailed, step-by-step, scientifically sound accounting — only then do humility and the potential for learning enter the picture.

When research on routine outcome monitoring (ROM) first began to appear, the reported impact on outcomes was astonishing.  Some claimed it was the most important development in the field since the invention of psychotherapy!  They were also quite certain how it worked: like a blood test, outcome and alliance measures enabled clinicians to check progress and make adjustments when needed.  Voila!

Eight years ago, I drew attention to the assertions being made about ROM, warning “caution was warranted.”  It was not a bold statement, rather a reasoned one.  After all, throughout the waning decades of the last millennium and into the present, proponents of cognitive (CT) and cognitive behavioral therapy (CBT) had similarly overreached, claiming not only that their methods were superior in effect to all others, but that the mechanisms responsible were well understood.  Both claims proved false.  As I’ve written extensively on my blog, CT and CBT are no more effective than other approaches in head-to-head comparisons.  More, studies dating back to 1996 have not found any of the ingredients touted by experts as critical to be necessary for success (1, 2, 3).

That’s why I was excited when researchers Mikeal, Gillaspy, Scoles, and Murphy (2016) published the first dismantling study of the Outcome and Session Rating Scales, showing that using the measures in combination, or just one or the other, resulted in similar outcomes.  Some were dismayed by these findings.  They wrote to me questioning the value of the tools.  For me, however, it proved what I’d said back in 2012: “focusing on the measures misses the point.”  Figure out why their use improves outcomes, and we stop conflating features with causes and are poised to build on what matters most.

On this score, what do the data say?  When it comes to feedback informed treatment, two key factors count:

  1. The therapist administering the measures; and
  2. The quality of the therapeutic relationship.

As is true of psychotherapy-in-general, the evidence indicates that who uses the scales is more important than what measures are used (1, 2).  Here’s what we know:

  • Therapists with an open attitude toward getting feedback see faster progress with their patients;
  • Clinicians able to create an environment in which clients provide critical (e.g., negative) feedback in the form of lower alliance scores early on in care have better outcomes (1, 2); and
  • The more time therapists spend consulting the data generated by routinely administering outcome and alliance measures, the greater their growth in effectiveness over time.

In terms of how FIT helps, two lines of research are noteworthy:

  • In a “first of its kind” study, psychologist Heidi Brattland found that the strength of the therapeutic relationship improved more over the course of care when clinicians used the Outcome and Session Rating Scales (ORS & SRS) compared to when they did not.  Critically, such improvements resulted in better outcomes for clients, ultimately accounting for nearly a quarter of the effect of FIT.
  • Brattland also found therapists “significantly differed in the influence of … [FIT] on the alliance, in the influence of the alliance on outcomes, and the residual direct effect of [FIT] … posttreatment” (p. 10).  Consistent with other studies, such findings indicate routine measurement can be used to identify a clinician’s “growth edge” — what, where, and with whom they might improve their ability to relate to and help the diverse clients met in daily work.  Indeed, the combination of FIT, use of aggregate data to identify personal learning objectives, and subsequent engagement in deliberate practice has, in the only such study in the history of psychotherapy to date, been shown to improve effectiveness at the individual practitioner level.

“Inch by inch, centimeter by centimeter,” I wrote back in 2012, “the results of [new] studies will advance our understanding and effectiveness.”  I’m hopeful that the discussion in this and my two prior posts (1, 2) will help those interested in improving their results avoid the vicious cycle of hope and despair that frequently accompanies new ideas in our field, embracing the findings and what they can teach us rather than looking for the next best thing.

Filed Under: Feedback Informed Treatment - FIT

More Making Sense of Negative Research Results about Feedback Informed Treatment

January 30, 2020 By scottdm 17 Comments

Is it just me or has public discourse gone mad?

A brief perusal of social media largely finds accusation, name calling, and outrage instead of exploration, dialogue and debate.  Not that any of the latter options were ever simple, straightforward, or successful, but somehow, somewhere, taking a stand has replaced extending a hand.

Thus, slightly more than a year ago, I was compared to an ignorant cult leader by a person — a researcher and proponent of CBT — who’d joined an open discussion about a post of mine on Facebook.  From there, the tone of the exchange only worsened.  Ironically, after lecturing participants about their “ethical duties” and suggesting we needed to educate ourselves, he labelled the group “hostile” and left, saying he was going to “unfriend and block” me.

As I wrote about in my last blogpost, I recently received an email from someone accusing me of “hiding” research studies that failed to support feedback informed treatment (FIT).  Calling it “scandalous,” and saying I “should be ashamed,” they demanded I remove them from my mailing list.  I did, of course, but without responding to the email.

And, therein lies the problem: no dialogue. 

For me, no dialogue means no possibility of growth or change — on my part or others’.  To be sure, when you are a public person, you have to choose what and whom you respond to.  Otherwise, you could spend every waking moment either feeling bad or defending yourself.  Still, I always feel a loss when this happens.  I like talking, and am curious about and energized by different points of view.

That’s why, when my Dutch colleague Kai Hjulsted posted a query about the same study I’d been accused of hiding, I decided to devote several blogposts to the subject of “negative research results.”  Last time, I pointed out that some studies were confounded by the stage of implementation clinicians were in at the time the research was conducted.  Brattland et al.’s results indicate, consistent with findings from the larger implementation literature, that it takes between two and four years to begin seeing results.  Why?  Because becoming feedback-informed is not about administering the ORS and SRS — that can be taught in a matter of minutes — rather, FIT is about changing practice and agency culture.

(By the way, today I heard through the grapevine that a published study of a group using FIT that found no effect has, in its fourth and fifth years of implementation, started to experience fairly dramatic improvements in outcome and retention.)

As critical as time and ongoing support are to successful use of FIT, these two variables alone are insufficient for making sense of emerging, apparently unsupportive studies.  Near the end of my original post, I noted needing to look at the type of design used in most research; namely, the randomized controlled trial or RCT.

In the evaluation of health care outcomes, the RCT is widely considered the “gold standard” — the best way of discovering the truth.  Thus, when researcher Annika Davidsen published her carefully designed and executed study showing that adding FIT to the standard treatment of people with eating disorders made no difference in terms of retention or outcome, it was entirely understandable that some concluded the approach did not work with this particular population.  After all, that’s exactly what the last line of the abstract said: “Feedback neither increased attendance nor improved outcomes for outpatients in group psychotherapy for eating disorders.”

But what exactly was “tested” in the study?

Read a bit further, and you learn participating “therapists … did not use feedback as intended, that is, to individualize the treatment by adjusting or altering treatment length or actions according to client feedback” (p. 491).  Indeed, when critical feedback was provided by clients via the measures, the standardization of services took precedence, with therapists routinely responding that the type of treatment, its length and activities, was non-negotiable.  From this, can we really conclude FIT was ineffective?

More, unlike studies in medicine, which test pills containing a single active ingredient against others that are similar in every way except they are missing that key ingredient, RCTs of psychotherapy test whole treatment packages (e.g., CBT, IPT, EMDR, etc.).  Understanding this difference is critical when trying to make sense of psychotherapy research.

When what is widely recognized as the first RCT in medicine was published in 1948, practitioners could be certain streptomycin caused the improvement in pulmonary tuberculosis assessed in the study.  By contrast, an investigation showing one psychotherapeutic approach works better than a no-treatment control does nothing to establish which, if any, of the ingredients in the method are responsible for change.  Consider cognitive therapy (CT).  Many, many RCTs show the approach works.  On this score, there is no doubt.  People who receive it are much better off than those placed on a waiting list or in control groups.  That said, how cognitive therapy works is another question entirely.  Proponents argue its efficacy results from targeting the patterns of “distorted thinking” causally responsible for maladaptive emotions and behaviors.  Unfortunately, RCTs were never designed and are not equipped to test such assumptions.  Other research methods must be used — and when they have been, the results have been surprising, to say the least.

In my next post, I will address those findings, both as they apply to popular treatment models such as CT and CBT and, more importantly, as they apply to FIT.


Filed Under: Feedback Informed Treatment - FIT

Making Sense of Negative Research Results about Feedback Informed Treatment

January 16, 2020 By scottdm 10 Comments

A ship’s captain who successfully sails through a strait at night learns nothing, and adds nothing, to their knowledge of the world.

(Please hang with me.  I promise this post will not be a long, metaphysical rant).

Returning to the example: as paradoxical as it may seem at first blush, a captain whose ship founders on the rocks while sailing through the strait both learns and adds to their knowledge.  As philosopher Ernst von Glasersfeld once opined, “The only aspect of that ‘real’ world that actually enters into the realm of experience is its constraints.”

The principle identified by von Glasersfeld applies not only to life lessons, but also to scientific advancement and, of course, feedback informed treatment (FIT).  Indeed, identifying and learning from “constraints” — that is, when things go wrong — is the very purpose of FIT.

It’s why, for example, when a client refuses to complete the outcome and alliance measures, my first impulse is to “lean in” and explore their reasons, rather than instantly set the scales aside.   It’s also why I’m most intrigued by studies which find that FIT fails to improve outcome (1, 2).  In both instances, my curiosity is piqued.  “Finally,” I think, “a chance to learn something …”.   Doing so, cognitive science has long shown, is not as easy or straightforward as simply adjusting our beliefs in light of new facts.  Quite to the contrary.

We are prone to see what we expect, fitting the “different” into our current way of viewing the world or ignoring it altogether.  One brief example before turning attention to FIT (aka Routine Outcome Monitoring [ROM]).  For most of the history of the field, the failure to engage in and respond to psychological intervention has been attributed to a host of client variables (e.g., degree or type of dysfunction, poor attachment history, IQ, etc.).  Therapists, for their part, have been held accountable for making the correct diagnosis and administering the right treatment.

And yet, despite continuous growth in the size of the DSM and the number of treatment approaches, no improvement in the outcome of psychotherapy has occurred in the last 50 years — a fact I first talked publicly about in 2014 and which über-researchers James Prochaska and John Norcross finally acknowledged in the most recent issue of the American Psychologist.  While some have argued that the field’s flat outcomes indicate the effectiveness of therapy has reached a natural limit, an alternate point of view is that we should look beyond the current ways of thinking about what matters most in successful treatment.

On this score, one possibility has been staring the field in the face for decades: the impact of the individual therapist on outcome.  Research has long shown, for example, that who does the treatment contributes 5 to 9 times more to outcome than the type of therapy, psychiatric diagnosis, or client history.  The same body of evidence documents that some practitioners are consistently more effective than others.  When researcher Scott Baldwin and colleagues looked into why, they found 97% of the difference was attributable to therapist variability in the alliance.  Said another way, more effective therapists are able to establish a strong working relationship with a broader and more diverse group of clients.  I hope you’re seeing new possibilities for improving effectiveness.  If you’re a regular reader of my blog, you already know my colleagues and I published the only study to date documenting that a focus on therapist development via routine outcome measurement, feedback, and deliberate practice improves both agency and individual practitioner outcomes.

Turning to FIT, in a recent post, I talked about the strong sense of “anticipointment” I felt when thinking about the future of our field.  A colleague from the Netherlands, Kai Hjulsted, responded, saying he’d been having the same feeling about FIT!  The source, he said, was a study by a Dutch researcher conducted in a crisis intervention setting which, “contrary to expectations,” found, “Patients with psychiatric problems and severe distress seeking emergency psychiatric help did not benefit from direct feedback.”

I was well aware of this study, having served on the researcher’s dissertation committee.  And over the last decade, multiple studies have been published showing little or no benefit from feedback (e.g., 1, 2, 3).

How to make sense of such findings?  Having spoken with numerous practitioners (and even some researchers), I can tell you the tendency is to fit the results into our current way of viewing the world.  So, seen through a traditional medicopsychiatric lens, the inevitable conclusion is that FIT does not work with people with certain, specific diagnoses (e.g., those in severe distress, in crisis, or with eating disorders).  Such a conclusion makes no sense, however, if the totality of evidence is considered.  Why?  Because the results are decidedly mixed.  Thus, in one study, FIT makes a difference with people in crisis; in another, it does not.  With one group of “severely distressed” clients, feedback appears to make matters worse; with another, chances of improvement increase 2.5 times.

What then can we conclude?

An answer begins to emerge as soon as we get beyond thinking of FIT as just one more in a long list of treatment methods and instead see it as a fundamental, organizing principle of agency and practice culture.  As is hopefully obvious, learning to administer the measurement scales takes little time.

Cultural change, by contrast, is a much longer process.  How long?  Norwegian researcher Heidi Brattland and colleagues found it took three years of ongoing training and support to successfully implement FIT.  Had they stopped to evaluate after an average of 4 hours of instruction, as all other studies to date have done, no impact on outcomes would have been recorded.

While it’s now clear that time and support are critical to successful implementation, these two variables alone are not sufficient to make sense of emerging, apparently unsupportive studies of FIT.  Addressing such findings requires that we look at the type of design used in most research: the randomized controlled trial.  That I’ll do in my next post, in particular addressing two top-notch, well-executed studies many have assumed show FIT is not effective in psychological care for people with eating disorders and severe distress.

And so, as I asked at the outset, please “hang with me.”

Filed Under: Feedback Informed Treatment - FIT


