Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


Finding Feasible Measures for Practice-Based Evidence

May 4, 2010 By scottdm Leave a Comment

Let’s face it. Clinicians are tired. Tired of paperwork (electronic or otherwise). When I’m out and about training–which is every week, by the way–and encouraging therapists to monitor and measure outcomes in their daily work, few disagree in principle. The pain is readily apparent, however, the minute the paper version of the Outcome Rating Scale flashes on the screen of my PowerPoint presentation.

It’s not uncommon nowadays for clinicians to spend 30-50% of their time completing intake, assessment, treatment planning, insurance, and other regulatory forms. Recently, I was in Buffalo, New York, working with a talented team of children’s mental health professionals. There, I learned, it was routine to spend most of two outpatient visits doing the required paperwork. When one considers that the modal number of sessions consumers attend is 1 and the average is approximately 5, it’s hard not to conclude that something is seriously amiss.

Much of the “fear and loathing” dissipates when I talk about the time it usually takes to complete the Outcome and Session Rating Scales. On average, filling out and scoring the measures takes about a minute apiece. Back in January, I blogged about research on the ORS and SRS, including a summary in PDF format of all studies to date. The studies make clear that the scales are valid and reliable. Most important, however, for day-to-day clinical practice, the ORS and SRS are also the most clinically feasible measures available.

Unfortunately, many of the measures currently in use were never designed for routine clinical practice–certainly few therapists were consulted. In order to increase “compliance” with such time-consuming outcome tools, many agencies advise clinicians to complete the scales occasionally (e.g., at “prime number” sessions [5, 7, 11, and so on]) or only at the beginning and end of treatment. The very silliness of such ideas will be immediately apparent to anyone who has ever actually conducted treatment. Who can predict a consumer’s last session? Can you imagine a similar policy ever flying in medicine? Hey Doc, just measure your patient’s heart rate at the beginning and end of the surgery! In between? Fuhgeddaboudit. Moreover, as I blogged about from behind the Icelandic ash plume, the latest research strongly favors routine measurement and feedback. In real-world clinical settings, feasibility is every bit as important as reliability and validity. Agency managers, regulators, and policy makers ignore it at their own (and their data’s) peril.

How did the ORS and SRS end up so brief and without any numbers? When asked at workshops, I usually respond, “That’s an interesting story.” And then continue, “I was in Israel teaching. I’d just finished a two-day workshop on ‘What Works.'” (At the time, I was using and recommending the 10-item SRS and the 45-item OQ.)

“The audience was filing out of the auditorium and I was shutting down my laptop when the sponsor approached the dais.  ‘Scott,’ she said, ‘one of the participants has a last question…if you don’t mind.'”

“Of course not,” I immediately replied.

“His name is Haim Omer.  Do you know of him?”


Dr. Haim Omer

“Know him?” I responded, “I’m a huge fan!” And then, feeling a bit weak in the knees, I asked, “Has he been here the w h o l e time?”

Haim was as gracious as ever when he finally made it to the front of the room.  “Great workshop, Scott.  I’ve not laughed so hard in a long time!”  But then he asked me a very pointed question.  “Scott,” he said and then paused before continuing, “you complained a bit about the length of the two measures you are using.  Why don’t you use a visual analog scale?”

“That’s simple Haim,” I responded, “It’s because I don’t know what a visual analog measure is!”

Haim described such scales in detail, gave me some examples (e.g., smiley and frowny faces), and even provided references. Reviewing them on the flight home reminded me of a simple neuropsychological assessment I’d used on internship called the “Line Bisection Task”–literally a straight line (a measure developed by my neuropsych supervisor, Dr. Tom Schenkenberg). And the rest is, as they say, history.

Filed Under: deliberate practice, excellence, Feedback Informed Treatment - FIT Tagged With: continuing education, Dr. Haim Omer, Dr. Tom Schenkenberg, evidence based practice, icce, ors, outcome rating scale, session rating scale, srs

Feedback, Friends, and Outcome in Behavioral Health

May 1, 2010 By scottdm Leave a Comment


My first year in college, my declared major was accounting.  What can I say?  My family didn’t have much money and my mother–who chose my major for me–thought that the next best thing to wealth was being close to money.

Much to her disappointment I switched from accounting to psychology in my sophomore year.  That’s when I first met Dr. Michael Lambert.


Michael J. Lambert, Ph.D.

It was 1979, and I was enrolled in a required course he taught on “tests and measures.” He made an impression, to be sure. He was young and hip–the only professor I met while earning my Bachelor’s degree who insisted that students call him by his first name. What’s more, his knowledge and passion made what everyone considered the “deadliest” class in the entire curriculum seem positively exciting. (The text, Cronbach’s classic Essentials of Psychological Testing, 3rd Edition, still sits on my bookshelf–one of the few from my undergraduate days.) Within a year, I was volunteering as a “research assistant,” reading and then writing up short summaries of research articles.

Even then, Michael was concerned about deterioration in psychotherapy. “There is ample evidence,” he wrote in his 1979 book, The Effects of Psychotherapy (Volume 1), “that psychotherapy can and does cause harm to a portion of those it is intended to help” (p. 6). And where the entire field was focused on methods, he was hot on the trail of what later research would firmly establish as the single largest source of variation in outcome: the therapist. “The therapist’s contribution to effective psychotherapy is evident,” he wrote, “…training and selection on dimensions of…empathy, warmth, and genuineness…is advised, although little research supports the efficacy of current training procedures.” In a passage that would greatly influence the arc of my own career, he continued, “Client perception…of the relationship correlate more highly with outcome than objective judges’ ratings” (Lambert, 1979, p. 32).

Fast forward 32 years. Recently, Michael sent me a pre-publication copy of a mega-analysis of his work on using feedback to improve outcome and reduce deterioration in psychotherapy. Mega-analysis combines original, raw data from multiple studies–in this case, six–to create a large, representative data set of the impact of feedback on outcome. In his accompanying email, he said, “our new study shows what the individual studies have shown.” Routine, ongoing feedback from consumers of behavioral health services not only improves overall outcome but also reduces the risk of deterioration by nearly two-thirds! The article will soon appear in the Journal of Consulting and Clinical Psychology.

Such results were not available when I first began using Lambert’s measure–the OQ 45–in my clinical work.  It was late 1996.  My colleagues and I had just put the finishing touches on Escape from Babel, our first book together on the “common factors.”

That’s when I received a letter from my colleague and mentor, Dr. Lynn Johnson.


Lynn D. Johnson, Ph.D.

In the envelope was a copy of an article Lynn had written for the journal Psychotherapy entitled “Improving Quality in Psychotherapy,” in which he argued for the routine measurement of outcome in psychotherapy. He cited three reasons: (1) providing proof of effectiveness to payers; (2) enabling continuous analysis and improvement of service delivery; and (3) giving consumers voice and choice in treatment. (If you’ve never read the article, I highly recommend it–if for no other reason than its historical significance. I’m convinced the field would be in far better shape now had Lynn’s suggestions been heeded then.)

Anyway, I was hooked. I soon had a bootleg copy of the OQ and was using it, in combination with Lynn’s Session Rating Scale, with every person I met.

It wasn’t always easy. The measure took time, and more than a few of my clients had difficulty reading and comprehending the items. I was determined, however, and so persisted, occasionally extending sessions to 90 minutes so the client and I could read and score the 45 items together.

Almost immediately, routinely measuring and talking about the alliance and outcome had an impact on my work. My average number of sessions began slowly “creeping up” as the number of single-session therapies, missed appointments, and no-shows dropped. For the first time in my career, I knew when I was and was not effective. I was also able to determine my overall success rate as a therapist. These early experiences also figured prominently in the development of the Outcome Rating Scale and the revision of the Session Rating Scale.

More later on how the two measures–the OQ 45 and the original 10-item SRS–changed from lengthy Likert scales to short, 4-item visual analog measures. At this point, suffice it to say I’ve been extremely fortunate to have such generous and gifted teachers, mentors, and friends.

Filed Under: Feedback Informed Treatment - FIT Tagged With: behavioral health, cdoi, continuing education, evidence based practice, holland, icce, Michael Lambert, Psychotherapy, public behavioral health

Bringing up Baseline: The Effect of Alliance and Outcome Feedback on Clinical Performance

April 29, 2010 By scottdm 1 Comment

Not long ago, my friend and colleague Dr. Rick Kamins was on vacation in Hawaii. He was walking along the streets of a small village, enjoying the warm weather and tropical breezes, when the sign on a storefront caught his eye. Healing Arts Alliance, it read. The proprietor? None other than “Scott Miller, Master of Oriental Medicine.”

“With all the talking you do about the alliance,” Rick emailed me later, “I wondered, could it be the same guy?!”

I responded, “Ha, the story of my life.  You go to Hawaii and all I get is this photo!”

Seriously though, I do spend a fair bit of time when I’m out and about talking about the therapeutic alliance. As reviewed in the revised edition of The Heart and Soul of Change, there are over 1,100 studies documenting the importance of the alliance in successful psychotherapy. Simply put, it is the most evidence-based concept in the treatment literature.

At the same time, whenever I’m presenting, I go to great lengths to point out that I’m not teaching an “alliance-based approach” to treatment.  Indeed–and this can be confusing–I’m not teaching any treatment approach whatsoever.  Why would I?  The research literature is clear: all approaches work equally well.  So, when it comes to method, I recommend that clinicians choose the one that fits their core values and preferences.  Critically, however, the approach must also fit and work for the person in care–and this is where research on the alliance and feedback can inform and improve retention and outcome.


Lynn D. Johnson, Ph.D.

Back in 1994, my longtime mentor Dr. Lynn Johnson encouraged me to begin using a simple scale he’d developed. It was called…(drum roll here)…”The Session Rating Scale!” The brief, 10-item measure was specifically designed to obtain feedback on a session-by-session basis regarding the quality of the therapeutic alliance. “Regular use of [such] scales,” he argued in his book Psychotherapy in the Age of Accountability, “enables patients to be the judge of the…relationship. The approach is…egalitarian and respectful, supporting and empowering the client” (Johnson, 1995, p. 44). If you look at the current version of the SRS, you will see Lynn is listed on the copyright line–as Paul Harvey would say, “And now you know…the rest of the story.” Soon, I’ll tell you how the measure went from a 10-item Likert scale to a 4-item visual analog scale.

Anyway, some 17 years later, research has now firmly validated Lynn’s idea: formally seeking feedback improves both retention and outcome in behavioral health. How does it work? Unfortunately, science, as Malcolm Gladwell astutely observes, “all too often produces progress in advance of understanding.” That said, recent evidence indicates that routinely monitoring outcome and alliance establishes, and serves to maintain, a higher level of baseline performance. In other words, regularly seeking feedback helps clinicians attend to core therapeutic principles and processes easily lost in the complex give-and-take of the treatment hour.

Such findings are echoed in the research literature on expertise, which shows that superior performers across a variety of domains (physics, computer programming, medicine, etc.) spend more time than average performers reviewing basic, core principles and practices.


At an intensive training in Antwerp, Belgium

The implications for improving practice are clear: before reaching for the stars, we should attend to the ground we stand on. It’s so simple, some might think it stupid. How can a four-item scale given at the end of a session improve anything? And yet, in medicine, construction, and flight training, there is a growing reliance on such “checklists” to ensure that proven steps to success are not overlooked. Atul Gawande reviews this practice in his new and highly readable book, The Checklist Manifesto: How to Get Things Right. Thanks go to Dan Buccino, a member of the International Center for Clinical Excellence, for bringing this work to my attention. (By the way, you can connect with Dan and Lynn in the ICCE community. If you’re not a member, click here to join. It’s free.)

The only question that remains, I suppose, is this: with all the workshops and training on “advanced methods and specialized techniques,” will practitioners be interested in bringing up baseline?

Filed Under: Feedback Informed Treatment - FIT Tagged With: icce, Malcolm Gladwell, ors, outcome rating scale, session rating scale, srs

