SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment

  • About
    • About Scott
    • Publications
  • Training and Consultation
  • Workshop Calendar
  • FIT Measures Licensing
  • FIT Software Tools
  • Online Store
  • Top Performance Blog
  • Contact Scott
scottdmiller@talkingcure.com +1.773.454.8511

Some Common Questions (and Answers) about Feedback Informed Treatment

November 6, 2019 By scottdm 7 Comments

Mr. Gomm was my sixth grade teacher.  Tall and angular, with a booming voice and stern demeanor, he remains a forbidding figure from my childhood.

I’ll never forget the day he slammed his open hand on my desk, bellowing “That, Mr. Miller, is an assumption!”  Turning abruptly, he walked to the chalkboard, and began writing, one capital letter at a time: A, S, S, U, M, E.

“And do you know what happens when we assume, Mr. Miller?” he asked.

“Speak up!” I remember him commanding.  But I just sat there, like a deer in the headlights.

I’m sure you know what happened next.  Returning to the board, he quickly drew slash marks between the S and U, and U and M:  ASS/U/ME.

For emphasis, he then tapped each section loudly with his chalk as he spoke, “Let me spell it out for you, Mr. Miller.  To assume is to make an ass out of you and me!”

It’s a moment I recall with absolute clarity.  Only later — much later —  did I come to realize I’d not understood the point he’d been trying to make at the time.  Indeed, at recess, all anyone could talk about was that Mr. Gomm said the word “ass” out loud in class.  Soon, we were applying our new knowledge to anything anyone did on the playground that we didn’t like: butting in line, missing a critical shot in kick ball, or any of the other possible social faux pas among marginally pubescent adolescents.

“You just made an ass out of you and me!” we repeated with glee at the slightest provocation.

Beyond the obvious irony involved, it turns out Mr. Gomm was only half right.  Yes, assuming — supposing without proof — is fraught with risk.  That said, presuming — taking something for granted based on probability — is, as the incident so clearly demonstrates, just as problematic.  In his mind, he’d made a sensible presumption: we would get “it.”  After all, he knew us.  We were his students.  We made the same mistake.  Based on our experience of him, we figured he was teaching us another valuable lesson, and then assumed we’d understood what he’d said.

Prior to last week, I’d not thought of Mr. Gomm for ages.  I was reminded of him after puzzling over a slew of questions about feedback-informed treatment (FIT) posted on our online discussion forums at the International Center for Clinical Excellence.  On the surface, all appeared to be straightforward requests for information, requiring nothing more than a simple and direct response.  The trouble was that any answer one might give ended up confirming assumptions contained in the queries that were fundamentally inaccurate.

While the particulars varied, a theme shared by many of the posts was whether one could or should trust scores on the Outcome and Session Rating Scales (ORS & SRS) with certain clients — in particular, people who were shy, mandated into treatment, cognitively compromised, or emotionally disturbed.

To be sure, it’s not the first time I’d encountered such concerns.  Indeed, they frequently come up at the beginning of introductory workshops on FIT:

“Court ordered clients won’t be truthful.”

“The feedback from clients with (borderline personality disorder, bipolar, psychosis) won’t be reliable or valid.”

“People from this (age group, culture) are not (accustomed to or incapable of) providing feedback to professionals.”

When I have several hours to teach, interact, and illustrate, I usually ask people to set such observations aside, promising an answer will emerge in time.  In the truncated, two-dimensional space of most social media interactions, however, I’ve found a similar evolution of understanding much more challenging.  Hence this post.

Of course, in the best of all worlds, people would get more training.  Answers are available.  Given the simplicity of the scales — you can learn to administer and score the ORS and SRS in less than a minute — the temptation to dive in, presuming our existing clinical knowledge and experience applies to their use, is simply too great for most to resist.  Consider this: several hundred thousand practitioners have downloaded the measures from my website in just the last couple of years!  Of these, fewer than 2 or 3% have had any training!  In the end, the unquestioned assumptions brought to the process cause most to get stuck and eventually give up.

So, what about the concerns noted above?

All make perfect sense IF the ORS and SRS are thought of as assessments, the helpfulness of which depends on the accuracy of the data collected.  By contrast, were the measures primarily seen as tools to help engage clients, an entirely different set of assumptions becomes possible.  For example, rather than interpreting the high ORS scores of a court-ordered client as evidence of dishonesty or denial that must be confronted or overcome, a clinician could treat them as an opportunity to connect with, explore, and understand the client’s experience and world view.

In practical terms, that means taking client scores at face value.  Leaving traditional assumptions aside, the clinician would first acknowledge and then respond logically to what is reported on the scale: “I see from your responses, you are doing quite well,” continuing, “So, why did you decide to come see me today?”  Should the client say, as most readily do, they were sent by the courts (or employer, parents, or partner), the clinician responds by asking them to complete the measure as if they were the person who sent them.  After all, from their perspective, that’s why they are there!  The discussion can then turn to closing the gap between the client’s and referral source’s scores, beginning, for instance, with asking, “What have they missed about you that, once recognized, will lead them to score you higher?”  Along the way, the result of this line of inquiry is greater participation of the client in treatment — the factor long ago established as the number one process-related predictor of outcome (see Orlinsky, Grawe, & Parks, 1994).

And what about the other questions?

As already stated, answers are available — ones that leave most thinking, “Duh, why didn’t I think of that?”  To be blunt, we can’t, so long as we are unaware we are thinking something else!  That’s where attending an in-depth training in FIT can prove helpful.  We’ll not only challenge your thinking, we’ll also provide a thorough grounding in the principles and skills of using feedback to inform and improve the quality of mental health and substance abuse services with a broad and diverse clinical population — training which, research shows, improves therapist effectiveness.

Filed Under: Feedback Informed Treatment - FIT

Is THAT true? Judging Evidence by How Often it’s Repeated

October 22, 2019 By scottdm 11 Comments

I’m sure you’ve heard it repeated many times:

The term, “evidence-based practice” refers to specific treatment approaches which have been tested in research and found to be effective;

CBT is the most effective form of psychotherapy for anxiety and depression;

Neuroscience has added valuable insights to the practice of psychotherapy in addition to establishing the neurological basis for many mental illnesses;

Training in trauma-informed treatments (EMDR, Exposure, CRT) improves effectiveness;

Adding mindfulness-based interventions to psychotherapy improves the outcome of psychotherapy;

Clinical supervision and personal therapy enhance clinicians’ ability to engage and help.

Only one problem: none of the foregoing statements are true.  Taking each in turn:

  • As I related in detail in a blogpost some six years ago, evidence-based practice has nothing to do with specific treatment approaches.  The phrase is better thought of as a verb, not a noun.  According to the American Psychological Association and Institute of Medicine, there are three components: (1) the best evidence; in combination with (2) individual clinical expertise; and consistent with (3) patient values and expectations.  Any presenter who says otherwise is selling something.
  • CBT is certainly the most tested treatment approach — the one employed most often in randomized controlled trials (aka RCTs).  That said, studies which compare the approach with other methods find all therapeutic methods work equally well across a wide range of diagnoses and presenting complaints.
  • When it comes to neuroscience, a picture is apparently worth more than thousands of studies.  On the lecture circuit, mental illness is routinely linked to the volume, structure, and function of the hippocampus and amygdala.  And yet, a recent review compared such claims to 19th-century phrenology.  More to the point, no studies show that so-called “neurologically informed” treatment approaches improve outcome over and above traditional psychotherapy (Thanks to editor Paul Fidalgo for making this normally paywalled article available).
  • When I surveyed clinicians recently about the most popular subjects at continuing education workshops, trauma came in first place.  Despite widespread belief to the contrary, there is no evidence that learning a “trauma-informed” approach improves a clinician’s effectiveness.  More, consistent with the second bullet point about CBT, such approaches have not been shown to produce better results than any other therapeutic method.
  • Next to trauma, the hottest topic on the lecture circuit is mindfulness.  What do the data say?  The latest meta-analysis found such interventions offer no advantage over other approaches.
  • The evidence clearly shows clinicians value supervision.  In large, longitudinal studies, it is consistently listed among the top three most influential experiences for learning psychotherapy.  And yet, research fails to provide any evidence that supervision contributes to improved outcomes.

Are you surprised?  If so, you are not alone.

The evidence notwithstanding, the important question is why these beliefs persist.

According to the research, part of the answer is repetition.  Hear something often enough and eventually you adjust your “truth bar” — what you accept as established, settled fact.  Of course, advertisers, propagandists, and politicians have known this for generations — paying big bucks to have their messages repeated over and over.

For a long while, researchers believed the “illusory truth effect,” as it has been termed, was limited to ambiguous statements; that is, items not easily checked or open to more than one interpretation.  A recent study, however, shows repetition increases acceptance of false statements even when they are unambiguous and simple to verify.  Frightening, to say the least.

A perfect example is the first item on the list above: evidence-based practice refers to specific treatment approaches which have been tested in research and found to be effective.  Type the term into Google, and one of the FIRST hits you’ll get makes clear the statement is false.  It, and other links, defines the term as “a way of approaching decision making about clinical issues.”

Said another way, evidence-based practice is a mindset — a way of approaching our work that has nothing to do with adopting particular treatment protocols.

Still, belief persists.

What can a reasonable person do to avoid falling prey to such falsehoods?

It’s difficult, to be sure.  More, as busy as we are, and as much information as we are subjected to on a daily basis, the usual suggestions (e.g., read carefully, verify all facts independently, seek out counter evidence) will leave all but those with massive amounts of free time on their hands feeling overwhelmed.

And therein lies the clue — at least in part — for dealing with the “illusory truth effect.”  Bottom line: if you try to assess each bit of information you encounter on a one-by-one basis, your chances of successfully sorting fact from fiction are low.  Indeed, it will be like trying to quench your thirst by drinking from a fire hydrant.

To increase your chances of success, you must step back from the flood, asking instead, “What must I unquestioningly believe (or take for granted) in order to accept a particular assertion as true?”  Then, once identified, ask yourself whether those assumptions are, in fact, true.

Try it.  Go back to the statements at the beginning of this post with this larger question in mind.

(Hint: they all share a common philosophical and theoretical basis that, once identified, makes verification of the specific statements much easier)

If you guessed the “medical model” (or something close), you are on the right track.  All assume that helping relieve mental and emotional suffering is the same as fixing a broken arm or treating a bacterial infection — that is, to be successful a treatment containing the ingredients specifically remedial to the problem must be applied.

While mountains of research published over the last five decades document the effectiveness of the “talk therapies,” the same evidence conclusively shows “psychotherapy” does not work in the same way as medical treatments.  Unlike medicine, no specific technique in any particular therapeutic approach has ever proven essential for success.  None.  Any claim based on a similar assumptive base should, therefore, be considered suspect.

Voila!

I’ve been applying the same strategy in the work my team and I have done on using measures and feedback — first, to show that therapists needed to do more than ask for feedback if they wanted to improve their effectiveness; and second, to challenge traditional notions about why, when, and with whom, the process does and doesn’t work.   In these, and other instances, the result has been greater understanding and better outcomes.

Filed Under: Brain-based Research, evidence-based practice, Feedback Informed Treatment - FIT, PTSD

The Skill that Heals, or Kills…

October 2, 2019 By scottdm 2 Comments

Imagine a power so great that those who possess it are able to heal the sick, and those without it, cause death.  By definition, it would qualify as a superpower — and, in fact, one Marvel comic character has claimed this one for their own.

More than seven dozen studies have investigated the impact of this “power” on the outcome of psychotherapy, finding that it contributes nine times more to success than whatever treatment method is employed (1 [see table, p. 258]).  And now, a population-based study out of the UK has shown that diabetic patients whose physicians wield this special power have a lower risk of cardiovascular events and mortality.

Strangely, while the evidence shows this ability can be greatly enhanced with proper instruction (1), little time is spent in graduate or medical schools helping students acquire or refine it.  The trend continues after formal training.  For example, search the web for continuing education on the subject and the offerings are few and far between.  And finally, if you think clinical experience contributes to the development of the skill, think again.  Despite widespread belief to the contrary, time is not a good teacher, with studies showing no correlation between the strength of the power and the number of years a practitioner has been in the field.

So, what exactly is the “it” we are talking about?

Notice your reaction when I tell you…

EMPATHY

Skeptical?  Surprised?  Bemused?  Knew it all along?

Whatever your response, the documented power of empathy to heal (or harm) makes clear more attention to the skill is warranted in our professional development efforts.  What steps can clinicians take in this regard?  A recent meta-analysis containing every study on the subject to date concludes, since “clients’ reports of therapist empathy best predict eventual treatment outcome, … regularly assessing … the client’s experience of empathy, instead of trying to intuit whether therapist behavior is empathic or not” is key.

Regularly assessing the client’s experience instead of trying to intuit.

Two decades ago, my colleagues and I developed a brief tool to do just that.  Known as the Session Rating Scale, or SRS, it’s been vetted in numerous clinical trials and shown to be a valid and reliable way for clinicians to solicit feedback from clients regarding the quality of the therapeutic relationship (including empathy).

If you don’t already have a copy, you can get yours free by clicking here.  Several web-based systems exist for administering and interpreting the data you gather, all of which offer free trials.

P.S.: The photo at the outset of this post displays two cards from the Thoth Tarot: (1) The Lovers; and (2) Death.  The first is about the possibilities inherent in uniting through love and acceptance.  The second, about transformation.  Sounds like psychotherapy, eh?

Filed Under: Feedback Informed Treatment - FIT

