SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


Making Sense of Client Feedback

January 4, 2021 By scottdm

I have a guilty confession to make.  I really like Kitchen Nightmares.  Even though the show finished its run six L O N G years ago, I still watch it in re-runs.  The concept was simple: send one of the world’s best-known chefs to save a failing restaurant.

Each week a new disaster establishment was featured.  A fair number were dives — dirty, disorganized messes with all the charm and quality of a gas station lavatory.  It wasn’t hard to figure out why these spots were in trouble.  Others, by contrast, were beautiful, high-end eateries whose difficulties were not immediately obvious.

Of course, I have no idea how much of what we viewers saw was real versus contrived.  Regardless, the answers owners gave whenever Ramsay asked for their assessment of the restaurant never failed to surprise and amuse.  I don’t recall a single episode where the owners readily acknowledged having any problems, other than the lack of customers!  In fact, most often they defended themselves, typically rating their fare “above average” — a 7 or higher on a scale from 1 to 10.

Contrast the attitude of these restaurateurs with pop music icon Billy Joel.  When journalist Steve Kroft asked him why he thought he’d been so successful, Joel at first balked, eventually answering, “Well, I have a theory, and it may sound a little like false humility, but … I actually just feel that I’m competent.”  Whether or not you are a fan of Joel’s sound, you have to admit the statement is remarkable.  He is one of the most successful music artists in modern history, inducted into the Rock and Roll Hall of Fame, winning a Grammy Legend Award, earning four number one albums on the Billboard 200, and consistently filling stadiums of adoring fans despite not having released a new album since 1993!  And yet, unlike those featured on Kitchen Nightmares, he sees himself as merely competent, adding, “when … you live in an age where there’s a lot of incompetence, it makes you appear extraordinary.”

Is humility associated with success?  Well, turns out, it is a quality possessed by highly effective therapists.  Studies not only confirm “professional self-doubt” is a strong predictor of both alliance and outcome in psychotherapy but show it is actually a prerequisite for acquiring therapeutic expertise (1, 2).  To be clear, I’m not talking about debilitating diffidence or, as is popular in some therapeutic circles, knowingly adopting a “not-knowing” stance.  As researchers Hook, Watkins, Davis, and Owen describe, it’s about feedback — specifically, “valuing input from the other (or client) … and [a] willingness to engage in self-scrutiny.”

Low humility, research shows, is associated with compromised openness (3).  Sound familiar?  It is the most common reaction of owners featured on Kitchen Nightmares.  Season 5 contained two back-to-back episodes featuring Galleria 33, an Italian restaurant in Boston, Massachusetts.  As is typical, the show starts out with management expressing bewilderment about their failing business.  According to them, they’ve tried everything — redecorating, changing the menu, lowering prices.  Nothing has worked.  To the viewer, the problem is instantly obvious: they don’t take kindly to feedback.  When one customer complains their meal is “a little cold,” one of the owners becomes enraged.  She first argues with Ramsay, who agrees with the customer’s assessment, and then storms over to the table to confront the diner.  Under the guise of “just being curious and trying to understand,” she berates and humiliates them.  It’s positively cringeworthy.  After numerous similar complaints from other customers — and repeated, uncharacteristically calm, corrective feedback from Ramsay — the owner experiences a moment of uncertainty.  Looking directly into the camera she asks, “Am I in denial?”  The thought is quickly dismissed.  The real problem, she and the co-owner decide, is … (wait for it) …

Ramsay and their customers!  Is anyone surprised the restaurant didn’t survive?

Such dramatic examples aside, few therapists would dispute the importance of feedback in psychotherapy.  How do I know?  I’ve met thousands over the last two decades as I traveled the world teaching about feedback-informed treatment (FIT).  Research on implementation indicates a far bigger challenge is making sense of the feedback one receives (4, 5, 6).  Yes, we can (and should) speak with the client — research shows therapists do that about 60% of the time when they receive negative feedback.  However, like an unhappy diner in an episode of Kitchen Nightmares, they may not know exactly what to do to fix the problem.  That’s where outside support and consultation can be critical.  Distressingly, research shows, even when clients are deteriorating, therapists consult with others (e.g., supervisors, colleagues, expert coaches) only 7% of the time.

Since late summer, my colleagues and I at the International Center for Clinical Excellence have offered a series of intimate, virtual gatherings of mental health professionals.  Known as the FIT Cafe, the small group (10 max) gets together once a week to finesse their FIT-related skills and process client feedback.  It’s a combination of support, sharing, tips, strategizing, and individual consultation.  As psychologist Claire Wilde, a frequent participant, observes, “it has provided critical support for using the ORS and SRS to improve my therapeutic effectiveness with tricky cases, while also learning ways to use collected data to target areas for professional growth.”

The next series is fast approaching, bringing together veterans and newbies from the US, Canada, Europe, Scandinavia, and Australia.  Learn more or register by clicking here or on the icon to the right.

Not ready for such an “up close and personal” experience?  Please join the ICCE online discussion forum.  It’s free.  You can connect with knowledgeable and considerate colleagues working to implement FIT and deliberate practice in their clinical practice in diverse settings around the world.

OK, that’s it for now.  Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

Filed Under: deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT, Therapeutic Relationship

The Expert on Expertise: An Interview with K. Anders Ericsson

June 23, 2020 By scottdm 13 Comments

I can remember exactly where I was when I first “met” Swedish psychologist K. Anders Ericsson.  Several hours into a long, overseas flight, I discovered someone had left a magazine in the seat pocket.  I never would have given the periodical a second thought had I not seen all the movies onboard — many twice.  It wasn’t really aimed at mental health professionals: Fortune.

Bored, I mindlessly thumbed through the pages. Then, between articles about investing and pictures of luxury watches, was an article that addressed a puzzle my colleagues and I had been struggling to solve for some time: why were some therapists more consistently effective than others?

In 1974, psychologist David F. Ricks published the first study documenting the superior outcomes of a select group of practitioners he termed, “supershrinks.”  Strangely, thirty years would pass before another empirical analysis appeared in the literature.

The size and scope of the study by researchers Okiishi, Lambert, Nielsen, and Ogles (2003) dwarfed Ricks’s, examining results from standardized measures administered on an ongoing basis to over 1800 people treated by 91 therapists.  The findings not only confirmed the existence of “supershrinks,” but showed just how big the difference was between them and average clinicians.  Clients of the most effective experienced a rate of improvement 10 times greater than the average.  Meanwhile, those treated by the least effective ended up feeling the same or worse than when they’d started — even after attending 3 times as many sessions!  How did the best work their magic?  The researchers were at a loss to explain, ending their article by calling it a “mystery” (p. 372).

By this point, several years into the worldwide implementation of the outcome and session rating scales, we’d noticed (and, as indicated, were baffled by) the very same phenomenon.  Why were some more effective?  We pursued several lines of inquiry.  Was it their technique?  Didn’t seem to be.  What about their training?  Was it better or different in some way?  Frighteningly, no.  Experience level?  Didn’t matter.  Was it the clients they treated?  No, in fact, their outcomes were superior regardless of who walked through their door.  Could it be that some were simply born to greatness?  On this question, the article in Fortune was clear, “The evidence … does not support the [notion that] excelling is a consequence of possessing innate gifts.”

So what was it?

Enter K. Anders Ericsson.  His life had been spent studying great performers in many fields, including medicine, mathematics, music, computer programming, chess, and sports.  The best, he and his team had discovered, spent more time engaged in an activity they termed, “deliberate practice” (DP).  Far from mindless repetition, it involved: (1) establishing a reliable and valid assessment of performance; (2) the identification of objectives just beyond an individual’s current level of ability; (3) development and engagement in exercises specifically designed to reach new performance milestones; (4) ongoing corrective feedback; and (5) successive refinement over time via repetition.

I can remember how excited I felt on finishing the article.  The ideas made so much intuitive sense.  Trapped in a middle seat, my row-mates on either side fast asleep, I resolved to contact Dr. Ericsson as soon as I got home.

Anders replied almost immediately, giving rise to a decade and a half of correspondence, mentoring, co-presenting, and friendship.  And now he is gone.  To say I am shocked is an understatement.  I’d just spoken with him a few days prior to his death.  He was in great spirits, forever helpful and supportive, full of insights and critical feedback.  I will miss him — his warmth, encouragement, humility, and continuing curiosity.  If you never met him, you can get a good sense of who he was from the interview I did with him two weeks ago.  Let me know your thoughts in the comments below.

Until next time, I wish you health, peace, and progress.

Scott

 

Filed Under: deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT

Please, don’t use my scales…

December 12, 2019 By scottdm 3 Comments

Or, at least that’s what I said in response to his question.  The look on his face made clear my words caused more confusion than clarity.

“But then, how will I find out which of the therapists at my agency are effective?” he asked.

“The purpose of FIT,” I replied, “is not to profile, but rather to help clinicians respond more effectively to their clients.”

And I’ve found myself giving similar advice of late — in particular, actively counseling practitioners and clinic directors against using the ORS and SRS.

Here’s another:

“We need a way to meet the new Joint Commission/SAMHSA requirement to use a standardized outcome measure in all therapeutic work.”

My reply?

FIT is purposefully designed to — and a significant body of evidence indicates it does — help those in treatment achieve the best results possible.  Thus, while integrating measures into care has, in some countries, become a standard of care, using them merely to meet regulatory requirements is de facto unethical.  Please don’t use my scales.

One more?

“I don’t (or won’t) use the scales with all my clients, just those I decide it will be clinically useful with.”

What do I think?

The evidence clearly shows clinicians often believe they are effective or aligned with clients when they are not.  The whole purpose of routinely using outcome and alliance measures is to fill in these gaps in clinical judgment.  Please don’t use my scales.

Last, as I recently blogged about, “The scales are really very simple and self-explanatory so I don’t think we really need much in the way of training or support materials.”

My response?

We have substantial evidence to the contrary.  In sharp contrast to the mere minutes involved in downloading and learning to administer the measures, actual implementation of FIT takes considerable time and support — more than most seem aware of or willing to invest.

PLEASE DON’T USE MY SCALES!

While I could cite many more examples of when not to use routine outcome measures (e.g., “we need a way to identify clients we aren’t helping so we can terminate services with them and free up scarce clinical resources” or “I want to have data to provide evidence of effectiveness to funding sources”) — I will refrain.

As one dedicated FIT practitioner recently wrote, “Using FIT is brutal. Without it, it’s the patients’ fault. With fit, it’s mine. Grit your way through . . . because it’s good and right.”

I could not have said it any better.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Feedback, Feedback Informed Treatment - FIT, FIT

What does losing your keys have in common with the treatment of trauma?

April 24, 2019 By scottdm 9 Comments

Last week, I was preparing to leave the house and could not locate my keys.  Trust me when I say, it’s embarrassing to admit this is not an infrequent occurrence.

Logic and reason are always my first problem-solving choices.  That’s why I paused after looking in the kitchen drawer where I am supposed to keep them, along with my wallet and glasses, and found it empty.  When did I last have them?  Not finding them there, the “search” began.

Upstairs to the bedroom to check my pants pockets.  No.  Downstairs to the front closet to look in my coat.  No.  Back upstairs to the hamper in the laundry room.  No.  Once more, down the stairs to the kitchen hutch.  I sometimes leave them there.  This time, however, no.  I then headed back up the stairs to the master bathroom — my pace now a bit frantic — and rummaged through my clothing.  No.  They’ve gotta be on my office desk.  Down two flights of stairs to the basement.  Not there either.

In a fit of pique, I stormed over to the landing, and yelled at the top of my voice, “DID SOMEONE TAKE MY KEYS?” the accusation barely concealed.  Although my head knew this was nuts, my heart was certain it was true. They’ve hidden them!

“No,” my family members kindly reply, then ask, “Have you lost them again?”

“Arrgh,” I mutter under my breath.  And that’s when I do something that, in hindsight, makes no sense.  I wonder if you do the same?  Namely, I start the entire search over from the beginning — pants, coat, hamper, closet, hutch, office — often completing the exact same cycle several times.  Pants, coat, hamper, closet, hutch, office.  Pants, coat, hamper, closet, hutch, office.  Pants, coat, hamper, closet, hutch, office.

I can’t explain the compulsion, other than, by this point, I’ve generally lost my mind.  More, I can’t think of anything else to do.  My problem: I have somewhere to go!  The solution: Keep looking (and it goes without saying, of course, in the same places).

(I did eventually locate my keys.  More on that in a moment.)

Yesterday, I was reminded of my experience while reading a newly released study on the treatment of trauma.  Bear with me as I explain.  Over a decade ago, I blogged about the U.S. Veterans Administration spending $25,000,000 aimed at “discover[ing] the best treatments for PTSD” despite a virtual mountain of evidence showing no difference in outcome between various therapy approaches.

Since that original post, the evidence documenting equivalence between competing methods has only increased (1, 2).  The data are absolutely clear.  Meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared, consistently find no difference in outcome between methods – importantly, whether the treatments are designated “trauma-focused” or not.   More, other highly specialized studies – known as dismantling research – fail to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis!  And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD; namely, exposure (1, 2).

The new study confirms and extends such findings.  Briefly, using data drawn from 39 V.A. treatment centers, researchers examined the relationship between outcome and the degree of adoption of two so-called “evidence-based,” trauma-informed psychotherapy approaches — prolonged exposure and cognitive processing therapy.  If method mattered, of course, then a greater degree of adoption would be associated with better results.  It was not.  As the authors of the study conclude, “programs that used prolonged exposure and cognitive processing therapy with most or all patients did not see greater reductions in PTSD or depression symptoms or alcohol use, compared with programs that did not use these evidence-based psychotherapies.”


So what happens now?  If history, and my own behavior whenever I lose my keys, is any indication, we’ll start the process of looking all over again.  Instead of accepting the key is not where we’ve been looking, the field will continue its search.  After all, we have somewhere to go — and right back to the search for the next method, model, or treatment approach we go.

It’s worse than that, actually, as looking over and over again in the same place keeps us from looking elsewhere.  That’s how I generally find my keys.  As simple and perhaps dumb as it sounds, I find them someplace I had not looked.

And where is the field not looking?  As Norcross and Wampold point out in an article published this week, “relationships and responsiveness” are the key ingredients in successful psychological care for people who are suffering as a result of traumatic experiences, going on to say that the emphasis on model or method is actually harmful, as it “squanders a vital opportunity to identify what actually heals.”

Improving our ability to connect with and respond effectively to the diverse people we meet in therapy is the focus of the Deliberate Practice Intensive, held this August in Chicago, Illinois.  Unlike training in protocol-driven treatments, studies to date show learning the skills taught at the workshop results in steady improvements in clinicians’ facilitative interpersonal skills and outcomes, commensurate with the rate of improvement seen in elite athletes.  For more information or to register, click here.

Until next time,

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

Filed Under: evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Therapeutic Relationship

It’s Time to Abandon the “Mean” in Psychotherapy Practice and Research

April 8, 2019 By scottdm 7 Comments

Recognize this?  Yours will likely look a bit different.  If you drive an expensive car, it may be motorized, with buttons automatically set to your preferences.  All, however, serve the same purpose.

Got it?

It’s the lever for adjusting your car seat.

I’m betting you’re not impressed.   Believe it or not though, this little device was once considered an amazing innovation — a piece of equipment so disruptive manufacturers balked at producing it, citing “engineering challenges” and fear of cost overruns.

For decades, seats in cars came in a fixed position.  You could not move them forward or back.  For that matter, the same was the case with seats in the cockpits of airplanes.  The result?  Many dead drivers and pilots.

The military actually spent loads of time and money during the 1940s and ’50s looking for the source of the problem.  Why, they wondered, were so many planes crashing?  Investigators were baffled.

Every detail was checked and rechecked.  Electronic and mechanical systems tested out.  Pilot training was reviewed and deemed exceptional.  Systematic review of accidents ruled out human error.  Finally, the equipment was examined.  Nothing, it was determined, could have been more carefully designed — the size and shape of the seat, distance to the controls, even the shape of the helmet were based on measurements of 140 dimensions of 4,000 pilots (e.g., thumb length, hand size, waist circumference, crotch height, distance from eye to ear, etc.).

It was not until a young lieutenant, Gilbert S. Daniels, intervened that the problem was solved.  Turns out, despite the careful measurements, no pilot fit the average of the various dimensions used to design the cockpit and flight equipment.  Indeed, his study found, even when “the average” was defined as the middle 30 percent of the range of values on any given index, no actual pilot fell within the range!

The conclusion was as obvious as it was radical.  Instead of fitting pilots into planes, planes needed to be designed to fit pilots.  Voila!  The adjustable seat was born.

Now, before you scoff — wisecracking, perhaps, about “military intelligence” being the worst kind of oxymoron — beware.  The very same “averagarianism” that gripped leaders and engineers in the armed services is still in full swing today in the field of mental health.

Perhaps the best example is the randomized controlled trial (RCT) — deemed the “gold standard” for identifying “best practices” by professional organizations, research scientists, and governmental regulatory bodies.

However sophisticated the statistical procedures may appear to the non-mathematically inclined, they are nothing more than mean comparisons.

Briefly, participants are recruited and then randomly assigned to one of two groups (e.g., Treatment A or a Control group; Treatment A or Treatment as Usual; and more rarely, Treatment A versus Treatment B).  A measure of some kind is administered to everyone in both groups at the beginning and the end of the study.   Should the mean response of one group prove statistically greater than the other, that particular treatment is deemed “empirically supported” and recommended for all.

The flaw in this logic is hopefully obvious: no individual fits the average.  More, as any researcher will tell you, the variability between individuals within groups is most often greater than the variability between the groups being compared.
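To make the point concrete, here is a minimal sketch in Python, using made-up change scores purely for illustration (nothing here reflects actual trial data).  Even when one group’s average is clearly higher, the spread within each group dwarfs the difference between them, so many individuals in the “superior” treatment fare worse than the average client in the other:

```python
import random
import statistics

random.seed(1)

# Hypothetical change scores (illustrative numbers only).
# Treatment A averages a bit higher than Treatment B, but both vary widely.
treatment_a = [random.gauss(6, 10) for _ in range(200)]   # mean ~6, SD ~10
treatment_b = [random.gauss(4, 10) for _ in range(200)]   # mean ~4, SD ~10

mean_a = statistics.mean(treatment_a)
mean_b = statistics.mean(treatment_b)
sd_a = statistics.stdev(treatment_a)

print(f"Difference between group means: {mean_a - mean_b:.1f}")
print(f"Spread WITHIN Treatment A (SD): {sd_a:.1f}")

# Despite the "superior" average, many Treatment A clients changed
# less than the average client receiving Treatment B.
worse_than_b_mean = sum(1 for score in treatment_a if score < mean_b)
print(f"{worse_than_b_mean} of {len(treatment_a)} Treatment A clients "
      "changed less than the average Treatment B client.")
```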

Bottom line: instead of fitting people into treatments, mental health care should be made to fit the person.  Doing so is referred to, in the psychotherapy outcome literature, as responsiveness — that is, “doing the right thing at the right time with the right person.”  And while the subject receives far less attention in professional discourse and practice than diagnostic-specific treatment packages, evidence indicates it accounts for why “certain therapists are more effective than others…” (p. 71, Stiles & Horvath, 2017).

I’m guessing you’ll agree it’s time for the field to make an “adjustment lever” a core standard of therapeutic practice — I’ll bet it’s what you try to do with the people you care for anyway.

Turns out, a method exists that can aid in our efforts to adjust services to the individual client.  It involves routinely and formally soliciting feedback from the people we treat.  That said, not all feedback is created equal.  With a few notable exceptions, all routine outcome monitoring (ROM) systems in use today suffer from the same problem that dogs the rest of the field.  In particular, all generate feedback by comparing the individual client to an index of change based on an average of a large sample (e.g., the reliable change index, the median response of an entire sample).
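For readers unfamiliar with how such sample-based indices work, here is a minimal sketch of the Jacobson-Truax reliable change index.  The numbers plugged in below are purely illustrative (not actual ORS norms); the point is that the cutoff comes entirely from norm-group statistics (a reference sample’s standard deviation and reliability), not from anything specific to the individual client:

```python
import math

def reliable_change_index(pre_score: float, post_score: float,
                          norm_sd: float, reliability: float) -> float:
    """Jacobson-Truax RCI: observed change divided by the standard error
    of the difference, both derived from a reference (normative) sample."""
    se_measurement = norm_sd * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post_score - pre_score) / se_difference

# Illustrative values only.
rci = reliable_change_index(pre_score=18, post_score=25,
                            norm_sd=8.0, reliability=0.85)
print(f"RCI = {rci:.2f} (values beyond +/-1.96 are conventionally "
      "treated as 'reliable' change)")
```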

By contrast, three computerized outcome monitoring systems use cutting-edge technology to provide feedback about progress and the quality of the therapeutic alliance unique to the individual client.  Together, they represent a small step in providing an evidence-based alternative to the “mean” approaches traditionally used in psychotherapy practice and research.

Interested in your thoughts,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

PS: Want to learn more?  Join me and colleagues from around the world for any or all of the three intensive workshops being offered this August in Chicago, IL (USA).

  1. The FIT Implementation Intensive: the only workshop in the US to provide in-depth training in the evidence-based steps for successful integration of Feedback Informed Treatment (FIT) into your agency or clinical practice.
  2. The Training of Trainers: a 3-day workshop aimed at enhancing your presentation and training skills.
  3. The Deliberate Practice Intensive: a 2-day training on using deliberate practice to improve your clinical effectiveness.

Click on the title of the workshop for more information or to register.

 

 

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, FIT Software Tools

Routine Outcome Monitoring and Deliberate Practice: Fad or Phenomenon?

March 26, 2019 By scottdm 1 Comment

Would you believe me if I told you there was a way you could more than double the chances of helping your clients?  Probably not, eh?  As I’ve documented previously, claims abound regarding new methods for improving the outcome of psychotherapy.  It’s easy to grow cynical.

And yet, findings from a recent study document that when clinicians add this particular practice to their clinical work, clients are actually 2.5 times more likely to improve.  The impact is so significant, a review of research emerging from a task force of the American Psychological Association concluded, “it is among the most effective ways available to services to improve outcomes.”

That said, there’s a catch.

The simple nature of this “highly rated,” transtheoretical method belies a steep learning curve.  In truth, experience shows you can learn  to do it — the mechanics — in a few minutes.

But therein lies the problem.  The empirical evidence makes clear successful implementation often takes several years.  This latter fact explains, in part, why surveys of American, Canadian, and Australian practitioners reveal that, while being aware of the method, they rarely integrate it into their work.

What exactly is the “it” being referred to?

Known by the acronym FIT, feedback-informed treatment involves using standardized measures to formally and routinely solicit feedback from clients regarding progress and the quality of the therapeutic relationship, and then using the resulting information to inform and improve care.

The ORS and SRS are two simple feedback scales used in more than a dozen randomized controlled trials and vetted and deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration.  Together, the forms take less than 3 minutes to administer, score, and interpret (less if one of the web-based scoring systems is used).

So why, you might wonder, would it take so long to put such tools into practice?

As paradoxical as it may sound, because FIT is really not about using measures — any more, say, than making a home is about erecting four walls and a roof.  While the structure is the most visible aspect — a symbol or representation — we all know it’s what’s inside that counts; namely, the people and their relationships.

On this score, it should come as no surprise that a newly released study has found a significant portion of the impact of FIT is brought about by the alliance or relationship between client and therapist.   It’s the first study in history to look at how the process actually works and I’m proud to have been involved.

Of course, all practitioners know relationship skills are not only central to effective psychotherapy, but require lifelong learning.  With time, and the right kind of support, using measurement tools facilitates both responsiveness to individual clients and continuous professional development.

Here’s the rub.  Whenever I respond to inquiries about the tools — in particular, suggesting it takes time for the effects to manifest, and that the biggest benefit lies beyond the measurement of alliance and outcome — interest in FIT almost always disappears.  “We already know how to do therapy,” a manager replied just over a week ago.  “We only want the measures, and we like yours because they are the simplest and fastest to administer.”

Every so often, however, the reply is different.  “What do we have to do to make this work to improve the effectiveness of our clinical work and clinicians?” asked Thomas Haastrup, the Coordinator of Family Services for Odense Municipality in Denmark.  When I advised planning and patience, with an emphasis on helping individual practitioners learn to use feedback to foster professional development versus simply measuring their results, he followed through.  “We adopted the long view,” Thomas recounts, “and it’s paid off.”  Now in its fifth year, the implementation is showing improved outcomes at both the program and provider level across services aimed at helping adults, children, and families.

In addition to Manual 6 in the ICCE Treatment and Training manuals, the ICCE Summer Intensives offer several opportunities for helping you or your agency succeed in implementing FIT.  First, the 2-day FIT Implementation Training — the only workshop offering in-depth, evidence-based training in the steps for integrating FIT into clinical practice at the individual, agency, and system-of-care level.  Second, the Deliberate Practice Intensive — here you not only learn the steps, but begin to set up a professional development plan designed to enhance your effectiveness.

To help out, I’d like to offer a couple of discounts:

  1. Purchase Manual 6 at 70% off the regular price.  Click here to order.  Enter the word IMPLEMENTATION at checkout to receive the discount.  (If you want to purchase the entire set, I’m making them available at 50% off the usual price.  Enter IMPLEMENTATION2 at checkout.)
  2. Register for any or all of the summer intensives by May 1st and receive an additional discount off the early bird price.  Simply enter the code FITPROMOAPRIL at checkout.  Please note, registration MUST occur before May 1st.  Generally, we sell out 6 to 8 weeks in advance.

Feel free to email me with any questions.  In the meantime, as always, I’m interested in your thoughts about FIT and DP.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT

Time for a New Paradigm? Psychotherapy Outcomes Stagnant for 40 years

February 1, 2019 By scottdm 9 Comments

You’ve heard it said before.  Flying is the safest form of transportation.

Facts back up the claim.  In fact, it’s not even close.  In terms of distance traveled, the fatality rate per billion kilometers is 0.003, having improved dramatically over the years.  Cars, by contrast, are almost 1,000 times more dangerous.  Still, since 1923, the fatality rate in motor vehicle accidents has declined an eye-popping 93%.

How about psychotherapy?  Have outcomes improved?  Judging by the size of the Diagnostic and Statistical Manual and growth in the number of treatment approaches, one would expect success rates to have climbed significantly, if not exponentially.  Not so.  As I first presented at the Evolution of Psychotherapy Conference five years ago, and later on this blog, the empirical evidence clearly shows NO improvement.

And now a new study, this time reviewing the evidence regarding treatments for children and adolescents.  Using sophisticated statistical analyses, the researchers examined 453 RCTs spanning 53 years, involving nearly 32,000 kids treated for anxiety, depression, attention deficit/hyperactivity, and conduct problems.  With the rising popularity of “evidence-based practice,” those conducting the study wanted to know whether “… our methods of developing and testing youth psychological therapies [are] producing improvement” (p. 2).

Can you guess what they found?

Outcomes have not changed (much less improved) over the last five decades–that’s 351 in dog years!

Can you imagine the outcry had similar results been published about automobiles or planes?  You would fully expect hearings to be held, and leaders to be called to account.  The lives of children are on the line.

Nope.  Instead, facing the supersized differences between promises made every year about “advances” in psychotherapy, and the results realized and reported in research studies, the authors meekly call for, “new approaches to treatment design and intervention science” (p. 1).

Really?  Is that what’s required?  Researchers going back to the drawing board of “treatment and intervention?”

No, what’s needed is an entirely different view of what clinicians actually do — and it starts by giving up the idea that psychotherapy is a form of treatment similar to antibiotics or angioplasty.  Let’s face it.  Psychotherapy is no more a medical treatment than are the facials, salt glows, and body wraps one receives at the local spa.  Which is not to say it doesn’t work.


Outside the halls of academia, millions of therapists worldwide are helping people on a daily basis to live happier, more meaningful and functional lives.  Dozens of studies of real world practitioners document outcomes that meet or exceed benchmarks established in tightly controlled, model-driven, randomized trials — all without following a particular “evidence-based” protocol (see 1, 2, 3, 4).

So, how best to conceptualize the effective work clinicians do?  And, importantly, what could researchers offer that would be of real help to therapists?

That psychotherapy works says more about humans and our need for connection, meaning, and purpose than it does about the particulars of any given model or approach.  And that our methods focus on thoughts, feelings, behaviors, and brain chemistry says more about our Western values and beliefs than about the ingredients necessary for successful healing.

Simply put, the field does not need to, as the authors of the study argue, “intensify the search for mechanisms of change [and] transdiagnostic … treatments” (p. 1).  Doing so is merely a recipe for “more of the same.”  Rather, to move forward, it should abandon the medical paradigm that has long had a stranglehold on our research and professional discourse, choosing instead to reconnect with the larger, worldwide family of healers, one that has existed since the dawn of history and which, from the outset, has been deeply engaged in the values and beliefs of those they treat, using whatever means necessary, consistent with the culture, to engender change.

What might that look like in practice?

As already documented, practicing clinicians do a pretty darn good job helping their clients.  There’s nothing wrong with our Westernized approaches when they work.  At the same time, we don’t succeed with everyone.  The problem, studies show, is we’re not particularly good at knowing when we’re not being helpful, when clients are at risk for dropping out, or are actually deteriorating while in our care (1, 2).  On this score, research has already provided a solution.  Dozens of studies document, for example, that using simple measures at the beginning and end of each visit not only provides clinicians with an opportunity to intervene more successfully with “at risk” clients, but also helps identify opportunities for their own growth and development (1, 2).  If you’re not routinely and formally measuring the quality and outcome of your work, you can get started by accessing two simple tools here.

With outcome as our guide, all that remains is being willing to look outside the profession for possibilities for healing and change unbound by convention and the medical view.  That’s happening already, by the way, in the world’s two most populous countries, India and China, with professionals learning the ways of indigenous healers and government officials tapping local shamans to meet citizens’ mental health and well-being needs.

So, what about you?  What are you doing to extend your healing reach?

And, in case you haven’t seen it, the video below is from the most recent Evolution of Psychotherapy conference, where I talk about new research documenting psychics achieving the same or better results as psychotherapists.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Want to learn more about using outcome to inform and improve your effectiveness?  Join me and an international group of teachers and researchers in Chicago for our Summer Intensives.  For detailed information and to register, click on the banners below.
[Banners: FIT Implementation Intensive, FIT Training of Trainers, and FIT Deliberate Practice (August 2019, ICCE)]

Filed Under: deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT

Just how good are our theories about the causes and alleviation of mental and emotional suffering?

July 12, 2018 By scottdm 7 Comments


Does the name Barry Marshall ring a bell?

Probably not if you are a mental health professional.

For decades, the Australian physician was persona non grata in the field of medicine — or perhaps stated more accurately, persona sciocca, a fool.

Beginning in the early 1980s, Marshall, together with colleague Robin Warren, advanced the hypothesis that the bacterium Helicobacter pylori was at the root of most stomach ulcers.  That idea proved exceptionally controversial, flying, as it did, in the face of years of accepted practice and wisdom.  Ulcers caused by something as simple and obvious as a bacterial infection?  Bunk, the medical community responded, in the process lampooning the two researchers.  After all, everyone knew stress was the culprit.  They also knew the cure: certainly not antibiotics.  Rather, antacids, sedatives, therapy and, in the more chronic and serious cases, gastrectomy — a surgical procedure involving the removal of the lower third of the stomach.

The textbook used in my Introduction to Psychology course in my first year at University boldly declared, “Emotional stress is now known to relate to … such illnesses as … peptic ulcers” (p. 343, Psychology Today: An Introduction 4th Edition [Braun and Linder, 1979]).  The chapter on the subject was full of stories of people whose busy, emotionally demanding lives were clearly the cause of their stomach problems.  I dutifully overlined all the relevant sections with my orange highlighter.  Later, in my clinical career, whenever I saw a person with an ulcer, I told them it was caused by stress and, not surprisingly, taught them “stress-management” strategies.

The only problem is the field, my textbook, and I were wrong, seriously wrong.  Stress was not responsible for stomach ulcers.  And no, antacids, sedatives, and psychotherapy were not the best treatments.  The problem could be cured much more efficiently and effectively with a standard course of antibiotics, many of which had been available since the 1960s!  In other words, the cure had been within reach all along.  Which raises the question: how could the field have missed it?  Not only that, even after conclusively demonstrating the link between ulcers and the H. pylori bacterium, the medical community continued to reject Marshall and Warren’s papers and evidence for another 10 years (Klein, 2013)!

So what was it?  Money, ignorance, hubris–even the normal process by which new scientific findings are disseminated–have all been offered as explanations.   The truth is, however, the field of medicine, and mental health in particular, has a weakness–to paraphrase Mark Twain–for “knowing with certainty things that just ain’t so.”

How about these?

  • Structural abnormalities in the ovaries cause neurosis in women;
  • Psychopathology results from unconscious dynamics originating in childhood;
  • Optimism, anger control, and the expression of emotion reduce the risk of developing cancer;
  • Negative thinking, “cognitive distortions,” and/or a chemical imbalance cause depression;
  • Some psychotherapeutic approaches are more effective than others.

The list is extensive and dates all the way back to the field’s founding nearly 150 years ago.  All, at one point or another, deeply believed and passionately advocated.  All false.

Looking back, it’s easy to see that we therapists are suckers for a good story — especially those that appear to offer scientific confirmation of strongly held cultural beliefs and values.

Nowadays, for example, it simply sounds better to say that our work targets, “abnormal activation patterns in dlPFC and amygdala that underlie the cognitive control and emotion regulation impairments observed in Major Depressive Disorder” than, “Hey, I listened attentively and offered some advice which seemed to help.”  And while there’s a mountain of evidence confirming the effectiveness of the latter, and virtually none supporting the former, proponents tell us it’s the former that “holds the promise” (Alvarez & Icoviello, 2015).

What to do?  Our present “neuroenchantment” notwithstanding, is there anything we practitioners and the field can learn from more than 150 years of theorizing?

Given our history, it’s easy to become cynical, either coming to doubt the very existence of Truth or assuming that it’s relative to a particular individual, time, or culture.  The other choice, it seems to me, is humility–not the feigned ignorance believed by some to be a demonstration of respect for individual differences–but rather what results when we closely and carefully examine our actual work.

Take empathy, for example.  Not only do most practitioners consider the ability to understand and share the feelings of another an “essential” clinical skill, it is one of the most frequently studied aspects of therapeutic work (Norcross, 2011).  And, research shows, therapists, when asked, generally give themselves high marks in this area (cf. Orlinsky & Howard, 2005).  My colleagues, Daryl Chow, Sharon Lu, Geoffrey Tan, and I encountered the same degree of confidence when working with therapists in our recent Difficult Conversations in Therapy study.  Briefly, therapists were asked to respond empathically to a series of vignettes depicting challenging moments in psychotherapy (e.g., a client expressing anger at them).  Each time, their responses were rated on a standardized scale and individualized feedback for improvement was provided.

Now, here is the absolutely cool part.  The longer therapists participated in the research, the less confident but more demonstrably empathic they became!  The process is known as “The Illusion of Explanatory Depth.”  Simply put, most of us feel we understand the world and our work with far greater detail, coherence, and depth than we really do.  Only when we force ourselves to grapple with the details does this illusion give way to reality, making personal and professional growth possible.

If this makes your head spin, get a cup of coffee and watch the video below in which Dr. Daryl Chow explains these intriguing results.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S. Marshall and Warren were awarded the Nobel Prize for their research in 2005.  Better late than never.


Filed Under: evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT

Better Results through Deliberate Practice

January 16, 2018 By scottdm 1 Comment


The legendary cellist Pablo Casals was once interviewed by comedian George Carlin.  When asked why, at age 93, he continued to practice three hours a day, Casals replied, “I’m beginning to show some improvement!”

Hard not to feel inspired and humbled by such dedication, eh?  And while humorous, Casals was not joking.  Across a wide variety of domains (e.g., sports, computer programming, teaching), deliberate practice leads to better results.  Indeed, our recent study of mental health practitioners documented a growth in effectiveness consistent with performance improvements obtained by elite athletes.

The January issue of the APA Monitor includes a detailed article on the subject.  Staff writer Tori DeAngelis lays out the process of applying deliberate practice strategies to clinical work in clear, step-by-step terms.  Best of all, it’s free — even continuing education credits are available if you need them.

As mentioned in the article, each summer the International Center for Clinical Excellence sponsors a two-day, intensive training on deliberate practice for therapists.  Daryl Chow, Ph.D. and I will be teaching together, presenting the latest scientific and practical information from our forthcoming book, Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness (APA, 2019).

As in prior years, we promise you will be participating in an intimate, cutting-edge, and highly personalized learning experience.  Many practitioners return year after year.  “I’ve attended the Deliberate Practice Intensive for three years in a row,” says therapist Jim Reynolds, “because there is such a warm camaraderie.  We are all trying to do the best we can with our clients, but we go beyond that.  To do that, I need contact with others who are striving to do better.”

Until next time,

Scott

Scott D. Miller, Ph.D.


Filed Under: Behavioral Health, deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance

That’s it. I’m done. It’s time for me to say goodbye.

November 2, 2017 By scottdm 3 Comments

Ending psychotherapy.

Whether formal or informal, planned or unplanned, it’s going to happen every time treatment is initiated.

What do we know about the subject?

Nearly 50% of people who start discontinue without warning.  At the time they end, half have experienced no meaningful improvement in their functioning or well-being.  On the other hand, of those who do continue, between 35 and 40% experience no measurable benefit despite continuous engagement in lengthy episodes of care.

Such findings remind me of the lyrics to the Beatles’ tune, “Hello Goodbye.”

“You say yes, I say no;

You say stop and I say go, go, go, oh no!

Hello, hello?

I don’t know why you say goodbye, I say hello.”

Here’s another key research finding: the most effective therapists have significantly more planned terminations.

In a recent study, Norcross, Zimmerman, Greenberg, and Swift identified eight core, pantheoretical processes associated with successful termination. You can read the article here.  Better yet, download and begin using the “termination checklist”–a simple, yet helpful method for ensuring you are putting these evidence-based principles to work with your clients.  Best of all, listen to my recent interview with John Norcross, Ph.D., the study’s first author, as we discuss how therapists can master this vitally important part of the therapeutic experience.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, Termination

More Deliberate Practice Resources…

May 30, 2017 By scottdm 1 Comment

Last week, I blogged about a free, online resource aimed at helping therapists improve their outcomes via deliberate practice.  As the web-based system was doubling as a randomized controlled trial (RCT), participants would not only be accessing a cutting-edge, evidence-based protocol but also contributing to the field’s growing knowledge in this area.

To say interest was high doesn’t even come close.  Within 45 minutes of the first social media blast, every available spot was filled — including those on the waiting list!  Lead researchers Daryl Chow and Sharon Lu managed to open a few additional spots, and yet demand still far exceeded supply.

I soon started getting emails.  Their content was strikingly similar — like the one I received from Kathy Hardie-Williams, an MFT from Forest Grove, Oregon: “I’m interested in deliberate practice!  Are there other materials, measures, tools that I can access and start using in my practice?”

The answer is, “YES!”  Here they are:


Resource #1.  Written for practicing therapists, supervisors, and supervisees, this volume brings together leading researchers and supervisors to teach practical methods for using deliberate practice to improve the effectiveness of psychotherapy.


Twelve chapters split into four sections covering: (1) the science of expertise and professional development; (2) practical, evidence-based methods for tracking individual performance; (3) step-by-step applications for integrating deliberate practice into clinical practice and supervision; and (4) recommendations for making psychotherapist expertise development routine and expected.

“This book offers a challenge and a roadmap for addressing a fundamental issue in mental health: How can therapists improve and become experts?  Our goal,” the editors of this new volume state, “is to bring the science of expertise to the field of mental health.  We do this by proposing a model for using the ‘Cycle of Excellence’ throughout therapists’ careers, from supervised training to independent practice.”

The book is due out June 1st.  Order today by clicking here: The Cycle of Excellence: Using Deliberate Practice to Improve Supervision and Training

Resource #2: The MyOutcomes E-Learning Platform

The folks at MyOutcomes have just added a new module on deliberate practice to their already extensive e-learning platform.  The information is cutting edge, and the production values simply fantastic.  More, MyOutcomes is offering free access to the system for the first 25 people who email support@myoutcomes.com.  Put the words “Responding to Scott’s Blogpost” in the subject line.  Meanwhile, here’s a taste of the course:

Resource #3:

Last but not least, the FIT Professional Development Intensive.  There simply is no better way to learn about deliberate practice than to attend the upcoming intensive in Chicago.  It’s the only such training available.  Together with my colleague, Tony Rousmaniere — author of the new book, Deliberate Practice for Psychotherapists: A Guide to Improving Clinical Effectiveness — we will help you develop an individualized plan for improving your effectiveness based on the latest scientific evidence on expert performance.

We’ve got a few spaces left.  Those already registered are coming from spots all around the globe, so you’ll be in good company.  Click here to register today!

OK, that’s it for now.  Wishing you all the best for the Summer,

Scott D. Miller, Ph.D.

 

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, Practice Based Evidence

Would you rather . . . be approved or improved?

February 5, 2017 By scottdm 6 Comments

Some time ago, my son had a minor obsession.  Whether at the dinner table, in the car, or out for a walk, he was constantly peppering us with “would you rather” questions.  You know the ones I mean, where you are forced to choose between two equally bizarre or unpleasant alternatives?

“Would you rather always have to say everything that is on your mind or never be able to speak again?”

“Would you rather have the hiccoughs the rest of your life or always feel like you have to sneeze but not be able to?”

And finally:

“Would you rather smell like poop and not know it or know you smell like poop but others can’t smell it?”

Fast forward to today.

I was re-reading some recent research on the use of deliberate practice (DP) for improving individual clinician effectiveness.  As I’ve blogged about previously, one of the four crucial components of DP is feedback.  Not just any kind, mind you, but negative feedback — in particular, immediate, ongoing information regarding one’s errors and mistakes.

Put bluntly, receiving negative feedback is hard on the ego.  Despite what we may say or believe, a mountain of literature documents we all possess a strong need for social approval as well as a bias toward attributing positive traits to ourselves.

The same research shows that, beyond selective recall and well-known biases associated with self-assessment, we actively work to limit information that conflicts with how we prefer to see ourselves (e.g., capable versus incompetent, perceptive versus obtuse, intuitive versus plodding, effective versus ineffective, etc.).

As a brief example of just how insidious our efforts can be, consider an email sent out by the customer service department at a Honda dealership in Richmond, Virginia.

“As you may know,” it began, “we have a wide range of services performed here at our location and strive to do the best we can to accomodate each and everyone of our customers.”   A request for feedback followed, “There may be times we can not meet the needs and we would appreciate any feedback . . . for our company.”

So far so good.  The company was on the way to showing its customers that it cared.  It had sent a follow-up email.  It thanked its customers.  Most importantly, it invited them to provide the type of feedback necessary for improving service in the future.

The correspondence then ended by telling the recipient they would soon receive a survey: “If you enjoyed or were satisfied with your recent visit and provide a 100% score you will receive a FREE oil change.”

Amazing, eh?  Thanks to my long-time colleague and friend, Arnold Woodruff, for noticing the irony in the email and passing it on to me.

For whatever reason, on reading it, one of those “would you rather” questions immediately came to my mind:

“Would you rather be approved or improved?”

No waffling now.  There is no in-between.  I can hear my son saying, “you have to choose!”

Why not join me and colleagues from around the world who are “choosing to improve” for our two-day intensive on deliberate practice?  Together with Dr. Tony Rousmaniere — the author of the new book Deliberate Practice for Psychotherapists — you’ll learn the latest, evidence-based strategies for improving your effectiveness.  Register today by clicking here, or on the image below.

Until next time,

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

Filed Under: deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT

“Mind the Gap”: A Strategy for Ensuring You Get the Feedback You Need to Improve Your Game (whatever that is)

September 23, 2016 By scottdm 3 Comments

Join me in a brief “thought experiment.”  Suppose you were a gifted painter or photographer and had the chance to provide an image of yourself that would endure–and perhaps be the only one people would know you by–for hundreds of years after your death.  How would you proceed?  What criteria would guide your work and be used to deem it a success?

Seriously, take a moment to picture yourself in your mind’s eye…

Now, consider the painting below.

[Rembrandt self-portrait, 1669]

It’s a self-portrait painted by Rembrandt van Rijn.

Why, you might wonder, would a painter widely considered one of, if not, “the greatest . . . in European art” leave the world such an unflattering portrait?   His face is puffy and pale, his hair thin and receding, and his cloak and cap plain and undistinguished.  And lest one assume this particular image is an exception to otherwise beautiful renditions of himself, think again.  The self-portraits he painted throughout his life share the same homely quality.

Clearly, Rembrandt possessed the skill to make himself look any way he wanted, and the world would have been none the wiser.  Why such brutal honesty?  More to the point, given the choice, would you paint yourself as you truly are or as others generally see you?

The answer, according to some very interesting recent research, is “no.”  The gap between how we view ourselves, on the one hand, and how we actually look, think, and act in life, on the other, is often quite wide.  And, it turns out, we fill that space with people who agree with us, who see us as we want to see ourselves.

Actually, according to Paul Green of the University of North Carolina at Chapel Hill, people actively, “move away from those who provide feedback that is more negative than their view of themselves. They do not listen to their advice and prefer to stop interacting with them altogether . . . tend[ing] to strengthen their bonds with people who only see their positive qualities.”

Surrounding ourselves with people who shore up our self-image is both understandable and needed.  Life is hard.  Support is a must.  The problem is that this largely unconscious behavior undermines performance.  In a variety of work contexts, for example, the researchers have documented that, “dropping relationships that provide disconfirming reviews [leads] to decreases in performance in the succeeding year.”

The importance of being able to see ourselves as we are is something Rembrandt appeared to understand quite well.  Indeed, it likely accounted for a significant portion of his artistic mastery.

Bottom line?  Whatever our particular craft, if the goal is to improve, to get better at what we do, it’s essential to “mind the self-assessment gap.”  First, we have to be aware it exists.  Next, we have to actively work to solicit views other than our own.

In the therapy world, our team has pioneered a simple set of tools clinicians can use to solicit feedback about the quality and effectiveness of their work.  Multiple clinical trials document improved results.  Read this recently published free article to learn how to get started.

Of course, not all feedback is useful.  In the upcoming Intensive Trainings in Chicago, we’ll teach you how to sort helpful from unhelpful, guided by the latest and only empirical research published to date documenting what it takes for individual therapists to become more effective.

Join me, an international faculty, and practitioners from around the world for the Advanced FIT and Supervision trainings this coming March!

Until then,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: excellence, Feedback, Feedback Informed Treatment - FIT

What is the essential quality of effective Feedback? New research points the way

February 8, 2016 By scottdm 1 Comment

“We should not try to design a better world,” says Owen Barder, senior fellow at the Center for Global Development, “We should make better feedback loops.”

Feedback has become a bit of a buzzword in mental health.  Therapists are being asked to use formal measures of progress and the quality of the therapeutic relationship, and to use the resulting information to improve effectiveness.

As it turns out, not all feedback is created equal.  The key to success is obtaining information that gives rise to increased consciousness—the type that causes one to pause, reflect, rethink.  In short, negative feedback.

Nearly a decade ago, we noticed a curious relationship between effectiveness and the therapeutic alliance.  Relationships that started off poorly but improved were nearly 50% more effective than those rated as good throughout.

And now, more evidence from a brand-new, real-world study of therapy with adolescents (Owen, Miller, Seidel, & Chow, 2016).  Therapists asked for and received feedback via the Outcome and Session Rating scales at each and every visit.  Once again, relationships that improved over the course of treatment were significantly more effective.

Importantly, obtaining lower scores at the outset of therapy provides clinicians with an opportunity to discuss and address problems early in the working relationship.  But, how best to solicit such information?

The evidence documents that using a formal measure is essential, but not enough.  The most effective clinicians work hard at creating an environment that not only invites, but actively utilizes, feedback.  Additionally, they are particularly skilled at asking questions that go beyond platitudes and generalities, in the process transforming client experience into specific steps for improving treatment.

As statistician and engineer W. Edwards Deming once observed, “If you do not know how to ask the right question, you discover nothing.”

Little useful information is generated when clients are asked, “How did you feel about the session today?” “Did you feel like I (listened to/understood) you?” or “What can I do better?”

The best questions are:

  • Specific rather than general;
  • Descriptive rather than evaluative;
  • Concerned with quantities rather than qualities; and are
  • Task rather than person-oriented.

Over the years, we’ve come to understand that learning to ask the “right” question takes both time and practice.  It’s not part of most training programs, and it comes naturally to only a few.  As a result, many therapists who start using formal measures to track progress and the therapeutic relationship give up, frustrated in their efforts to obtain helpful feedback.

Learning to develop better “feedback loops,” as Barder recommends, is the focus for the upcoming FIT Implementation, Training of Trainers, and Professional Development Intensives scheduled for August in Chicago, Illinois (USA).  Our March courses sold out months in advance so reserve your spot now by clicking the icons to the right.

Until then, get started with these free articles.

Best wishes,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance

The Benefits of Doubt: New Research Sheds Light on Becoming a More Effective Therapist

December 9, 2015 By scottdm 6 Comments

These are exciting times for clinicians.  The pieces of the puzzle are falling into place.  Researchers are finally beginning to understand what it takes to improve the effectiveness of psychotherapy.  Shifting away from the failed, decades-long focus on methods and diagnosis, attention has now turned to the individual practitioner.

Such efforts have already shown a host of factors to be largely ineffective in promoting therapist growth and development, including:

  • Supervision;
  • Continuing education;
  • Therapist personal therapy;
  • Clinical experience; and
  • Access to feedback

In October, I blogged about the largest, longitudinal study of therapists ever conducted.  Despite having access to ongoing, formal feedback from their clients for as long as 17 years, clinicians in the study not only did not improve, their outcomes actually deteriorated, on average, year after year.

Such findings contrast sharply with the beliefs of practitioners who, according to other studies, see themselves as improving with time and experience.  In fact, findings on all the practices noted above run counter to beliefs commonly held in the field:

  • Supervision is at the top of the list of experiences therapists cite as central to their growth and development as practitioners. By contrast, the latest review of the literature concludes, “We do not seem to be any more able to say now (as opposed to 30 years ago) that psychotherapy supervision contributes to patient outcome” (p. 235, Watkins 2011).
  • Although most clinicians value participating in continuing education activities—and licensure requirements mandate attendance—there is no evidence such events engender learning, competence, or improved outcomes. Neither do they appear to decrease disciplinary actions, ethical infractions, or inspire confidence on the part of therapy consumers.
  • Therapist personal therapy is ranked as one of the most important sources of professional development despite there being no evidence it contributes to better performance as a clinician and some studies documenting a negative impact on outcome (see Orlinsky & Ronnestad, 2005);

If any of the research I’ve cited surprises you, or gives you pause, there is hope!  Really. Read on.

Doubt, it turns out, is a good thing–a quality possessed by the field’s most effective practitioners.  Possessing it is one of the clues to continuous professional development.  Indeed, several studies now confirm that “healthy self-criticism,” or professional self-doubt (PSD), is a strong predictor of both alliance and outcome in psychotherapy (2015).

To be sure, I’m not talking about assuming a “not-knowing” stance in therapeutic interactions.  Although much has been written about having a “beginner’s mind,” research by Nissen-Lie and others makes clear that nothing can be gained by either our feigned or willful ignorance.

Rather, the issue is about taking the time to reflect on our work.  Doing so on a routine basis prevents us from falling prey to the “over-claiming error”—a type of confidence that comes from the feeling we’ve seen something before when, in fact, we have not.

The “over-claiming error” is subtle, unconscious, and fantastically easy to succumb to and elicit.  In a very clever series of experiments, for example, researchers asked people a series of questions designed either to engender a feeling of knowledge and expertise or one of ignorance.  Being made to feel more knowledgeable, in turn, led people to act less open-mindedly and feel justified in being dogmatic.  Most importantly, it caused them to falsely claim to know more about the subject than they did, including “knowing” things the researchers simply made up!

In essence, feeling like an expert actually makes it difficult to separate what we do and do not know.  Interestingly, people with the most knowledge in a particular domain (e.g., psychotherapy) are at the greatest risk.  Researchers term the phenomenon, “The ‘Earned Dogmatism’ Effect.”

What to do?  The practices of highly effective therapists provide some clues:

  1. Adopt an “error-centric” mindset. Take time to reflect on your work, looking for and then examining moments that do not go well. One simple way to prevent over-claiming is to routinely measure the outcome of your work.  Don’t rely on your judgment alone; use a simple measure like the ORS to separate fact from fiction (see the brief sketch following this list).
  2. Think like a scientist. Actively seek disconfirmation rather than confirmation of your beliefs and practices.  Therapy can be a vague and ambiguous process—two conditions which dramatically increase the risk of over-claiming.  Seeking out a community of peers and a coach to review your work can be helpful in this regard.  No need to leave your home or office.  Join colleagues in a worldwide virtual community at: iccexcellence.com.
  3. Seek formal feedback from clients. Interestingly, research shows that highly effective therapists are surprised more often by what their clients say than average clinicians who, it seems, “have heard it all before.”  If you haven’t been surprised in a while, ask your clients to provide feedback about your work via a simple tool like the SRS.  You’ll be amazed by what you’ve missed.
  4. Attend the 2016 Professional Development Intensive this summer in Chicago. At this small-group, intensive training, you will learn the latest evidence-based steps for unlocking your potential as a therapist.
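
For the quantitatively curious, here is a minimal Python sketch of what item 1 might look like in practice.  The 5-point reliable-change threshold is an illustrative assumption only; consult the published norms for the version of the measure you actually use.

```python
def ors_change_category(intake_score: float, current_score: float, rci: float = 5.0) -> str:
    """Classify the change in a client's ORS score against a reliable-change
    threshold (rci).  The 5-point default is an assumption for illustration,
    not an official value."""
    delta = current_score - intake_score
    if delta >= rci:
        return "reliably improved"
    if delta <= -rci:
        return "reliably deteriorated"
    return "no reliable change"

# Example: a client who scored 18 at intake and 24 at the most recent session
print(ors_change_category(18, 24))  # -> reliably improved
```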

Best wishes for the Holidays.  As always, please leave a comment.

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Top Performance

Swedish National Audit Office concludes: When all you have is CBT, mental health suffers

November 10, 2015 By scottdm 17 Comments

“The One-Sided Focus on CBT is Damaging Swedish Mental Health”

That’s the headline from one of Sweden’s largest daily newspapers for Monday, November 9th.  Professor Gunnar Bohman and psychotherapist colleagues Eva Mari Eneroth Säll and Marie-Louise Ögren were responding to a report released last week by the Swedish National Audit Office (NAO).

Back in May 2012, I wrote about Sweden’s massive investment in cognitive behavioral therapy (CBT).  The idea was simple: address rising rates of disability due to mental illness by training clinicians in CBT.  At the time, a mere two billion Swedish crowns had been spent.

Now, several years and nearly 7 billion crowns later, the NAO has audited the program.  Briefly, it found:

  •  The widespread adoption of the method had no effect whatsoever on the outcome of people disabled by depression and anxiety;
  • A significant number of people who were not disabled at the time they were treated with CBT became disabled, thereby increasing the amount of time they spent on disability; and
  • Nearly a quarter of people treated with CBT dropped out.

The Swedish NAO concludes, “Steering towards specific treatment methods has been ineffective in achieving the objective.”

How, you might reasonably ask, could anyone think that restricting choice would improve outcomes?  It was 1966 when psychologist Abraham Maslow famously observed, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (p. 15, The Psychology of Science).  Still, many countries and professional organizations are charting a similar path today.

The choice is baffling, given the lack of evidence for differential efficacy among psychotherapeutic approaches. Consider a study I blogged about in April 2013.  It was conducted in Sweden at 13 different public health outpatient clinics over a three-year period.  Consistent with 40 years of evidence, the researchers found that psychotherapy was remarkably effective regardless of the type of treatment offered!

So, what is the key to improving outcome?

As Bohman, Säll and Ögren point out in their article in Svenska Dagbladet, “offering choice…on the basis of patients’ problems, preferences and needs.”

The NAO report makes one additional recommendation: systematic measurement and follow-up.

As readers of this blog know, ensuring that services both fit the consumer and are effective is what Feedback-Informed Treatment (FIT) is all about.  More than 20 randomized clinical trials show that this transtheoretical process improves retention and outcome.  Indeed, in 2013, FIT was deemed evidence-based by the Substance Abuse and Mental Health Services Administration.

Learn more by joining the International Center for Clinical Excellence–a free, web-based community of practitioners dedicated to improving the quality and effectiveness of clinical work.   Better yet, join colleagues from around the world at our upcoming March intensive trainings in Chicago!  Register soon as both the Advanced Intensive and FIT Supervision Courses are already more than half subscribed.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT

Do Psychotherapists Improve with Time and Experience?

October 27, 2015 By scottdm 18 Comments

The practice known as “routine outcome measurement,” or ROM, is resulting in the publication of some of the biggest and most clinically relevant psychotherapy studies in history.  Freed from the limits of the randomized clinical trial, and its accompanying obsession with manuals and methods, researchers are finally able to examine what happens in real-world clinical practice.

A few weeks ago, I blogged about the largest study of psychotherapy ever published.  More than 1,400 therapists participated.  The progress of over 26,000 people (aged 16-95) treated over a 12-year period in primary care settings in the UK was tracked on an ongoing basis via ROM.  The results?  In an average of 8 visits, 60% of those treated by this diverse group of practitioners achieved both reliable and clinically significant change—results on par with tightly controlled RCTs.  The study is a stunning confirmation of the effectiveness of psychotherapy.
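
To make the phrase “reliable and clinically significant change” concrete, here is a rough Python sketch of the kind of tally involved.  The clinical cutoff of 25 and the 5-point reliable-change index are stand-in values for illustration; use the published norms for the measure and population in question.

```python
from typing import List, Tuple

def pct_reliable_and_clinically_significant(
    cases: List[Tuple[float, float]],
    cutoff: float = 25.0,  # assumed clinical cutoff, for illustration only
    rci: float = 5.0,      # assumed reliable-change index, for illustration only
) -> float:
    """Share of (intake, final) score pairs showing reliable improvement
    (gain >= rci) AND clinically significant change (moving from below
    to at-or-above the clinical cutoff)."""
    hits = sum(
        1 for pre, post in cases
        if (post - pre) >= rci and pre < cutoff and post >= cutoff
    )
    return hits / len(cases) if cases else 0.0

# Illustrative caseload of (intake, final) scores
caseload = [(15, 27), (22, 24), (10, 30), (26, 31), (18, 20)]
print(f"{pct_reliable_and_clinically_significant(caseload):.0%}")  # -> 40%
```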

This week, another mega-study was accepted for publication in the Journal of Counseling Psychology.  Once more, ROM was involved.  In this one, researchers Goldberg, Rousmaniere, Miller, Whipple, Nielsen, Hoyt, and Wampold examined a large, naturalistic data set that included the outcomes of 6,500 clients treated by 170 practitioners whose results had been tracked for an average of 5 years.

Their question?

Do therapists become more effective with time and experience?

Their answer?  No.

For readers of this blog, such findings will not be particularly newsworthy.  As I’ve frequently pointed out, experience has never proven to be a significant predictor of effectiveness.

What might be a bit surprising is that the study found clinicians’ outcomes actually worsened with time and experience.  That’s right.  On average, the longer a therapist practiced, the less effective they became!  Importantly, this finding remained even when controlling for several patient-level, caseload-level, and therapist-level characteristics, as well as when excluding several types of outliers.

Such findings are noteworthy for a number of reasons but chiefly because they contrast sharply with results from other, equally-large studies documenting that therapists see themselves as continuously developing in both knowledge and ability over the course of their careers.   To be sure, the drop in performance reported by Goldberg and colleagues wasn’t steep.  Rather, the pattern was a slow, inexorable decline from year to year.

Where, one can wonder, does the disconnect come from?  How can therapists’ assessments of themselves and their work be so at odds with the facts?  Especially considering that, in the study by Goldberg and colleagues, participating clinicians had ongoing access to data regarding their effectiveness (or lack thereof) on a real-time basis!  Even in the study I blogged about previously—the largest in history, in which the outcomes of psychotherapy were shown to be quite positive—a staggering 40% of people treated experienced little or no change whatsoever.  How can such findings be reconciled with others indicating that clinicians routinely overestimate their effectiveness by 65%?

Turns out, the boundary between “belief in the process” and “denial of reality” is remarkably fuzzy.  Hope is a significant contributor to outcome—accounting for as much as 30% of the variance in results.  At the same time, it becomes toxic when actual outcomes are distorted in a manner that causes practitioners to miss important opportunities to grow and develop—not to mention help more clients.  Recall studies documenting that top-performing therapists evince more of what researchers term “professional self-doubt.”  Said another way, they are less likely to see progress where none exists and more likely to value outcomes over therapeutic process.

What’s more, unlike their more average counterparts, highly effective practitioners actually become more effective with time and experience.  In the article below, my colleagues and I at the International Center for Clinical Excellence identify several evidence-based steps any practitioner can follow to achieve similar results.

Let me know your thoughts.

Until next time,

Scott

Scott D. Miller, Ph.D.
Registration is now open for our March Intensives in Chicago.  Join colleagues from around the world for the FIT Advanced and the FIT Supervision workshops.

Do therapists improve (preprint)
The outcome of psychotherapy yesterday, today, and tomorrow (psychotherapy miller, hubble, chow, seidal, 2013)

 

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance Tagged With: excellence, outcome rating scale, psychotherapy

The Verdict is “In”: Feedback is NOT enough to Improve Outcome

September 21, 2015 By scottdm 17 Comments

Nearly three years have passed since I blogged about claims being made about the impact of routine outcome monitoring (ROM) on the quality and outcome of mental health services.  While a small number of studies showed promise, other results indicated that therapists did not learn from, nor become more effective over time as a result of, exposure to ongoing feedback.  Such findings suggested that the focus on measures and monitoring might be misguided–or at least a “dead end.”

Well, the verdict is in: feedback is not enough to improve outcomes.  Indeed, researchers are finding it hard to replicate the medium-to-large effect sizes enthusiastically reported in early studies, a well-known phenomenon called the “decline effect,” observed across a wide range of scientific disciplines.

In a naturalistic, multisite randomized clinical trial (RCT) in Norway, for example, Amble, Gude, Stubdal, Andersen, and Wampold (2014) found the main effect of feedback to be much smaller (d = 0.32) than the meta-analytic estimate reported by Lambert and Shimokawa (2011; d = 0.69).  A more recent study (Rise, Eriksen, Grimstad, and Steinsbeck, 2015) found that routine use of the ORS and SRS had no impact on either patient activation or mental health symptoms among people treated in an outpatient setting.  Importantly, the clinicians in the study were trained by someone with an allegiance to the use of the scales as routine outcome measures.
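
For readers who want to see what those d values represent, here is a minimal Python sketch of a between-groups Cohen’s d computed from summary statistics.  The group means, standard deviations, and sample sizes below are made up purely for illustration.

```python
import math

def cohens_d(mean_a: float, mean_b: float, sd_a: float, sd_b: float,
             n_a: int, n_b: int) -> float:
    """Between-groups Cohen's d: mean difference divided by the pooled SD."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical end-of-treatment scores: feedback group vs. treatment as usual
print(round(cohens_d(26.0, 23.4, 8.0, 8.0, 100, 100), 2))  # ~0.32
```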

Fortunately, a large and growing body of literature points in a more productive direction.  Consider the recent study by De Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012), which found that a variety of therapist factors moderated the effect ROM had on outcome. Said another way, in order to realize the potential of feedback for improving the quality and outcome of psychotherapy, emphasis must shift away from measurement and monitoring and toward the development of more effective therapists.

What’s the best way to enhance the effectiveness of therapists?  Studies on expertise and expert performance document a single, underlying trait shared by top performers across a variety of endeavors: deep domain-specific knowledge.  In short, the best know more, see more and, accordingly, are able to do more.  The same research identifies a universal set of processes that both account for how domain-specific knowledge is acquired and furnish step-by-step directions anyone can follow to improve their performance within a particular discipline.  Miller, Hubble, Chow, & Seidel (2013) identified and provided detailed descriptions of three essential activities giving rise to superior performance.  These include: (1) determining a baseline level of effectiveness; (2) obtaining systematic, ongoing feedback; and (3) engaging in deliberate practice.
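
As a rough illustration of step (1), here is a small Python sketch of one way a clinician might estimate a baseline effect size from their own caseload: the average pre-post change divided by the standard deviation of intake scores.  Both the convention and the scores are illustrative assumptions, not the specific procedure described by Miller and colleagues.

```python
import statistics

def baseline_effect_size(intake_scores, final_scores):
    """Crude pre-post effect size for one therapist's caseload: mean change
    standardized by the spread of intake scores (one common convention)."""
    changes = [post - pre for pre, post in zip(intake_scores, final_scores)]
    return statistics.mean(changes) / statistics.stdev(intake_scores)

# Illustrative intake and final ORS scores for six closed cases
intake = [14, 20, 9, 23, 17, 12]
final = [22, 25, 18, 24, 28, 15]
print(round(baseline_effect_size(intake, final), 2))  # roughly 1.2 here
```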

I discussed these three steps and more, in a recent interview for the IMAGO Relationships Think Tank.  Although intended for their members, the organizers graciously agreed to allow me to make the interview available here on my blog. Be sure and leave a comment after you’ve had a chance to listen!


Until next time,

Scott

Scott D. Miller, Ph.D.
www.whatispcoms.com
www.iccexcellence.com


Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT

Intake: A Mistake

September 4, 2015 By scottdm 1 Comment

Available evidence leaves little doubt.  As I’ve blogged about previously, separating intake from treatment results in:

• Higher dropout rates;
• Poorer outcomes;
• Longer treatment duration; and
• Higher costs

And yet, in many public behavioral health agencies, the practice is commonplace. What else can we expect?

Chronically underfunded, and perpetually overwhelmed by mindless paperwork and regulation, agencies and practitioners are left with few options for meeting the ever-rising number of people in need of help. Between 2009 and 2012, for example, the number of people receiving mental health services increased by 10%. During the same period, funding to state agencies decreased by $4.35 billion. Not long ago, my own home town of Chicago shuttered half of its mental health clinics, forcing the remaining, already burdened, agencies to absorb an additional 5,000 people in need of care.

Simply put, the practice of separating intake from treatment is little more than a form of “crowd management”–and an ineffective one at that.

Adding to the growing body of evidence is a new study investigating the impact of computerized intake on the consumer’s experience of the therapeutic relationship and continuation in care. Not only did researchers find that therapist use of a computer had a negative impact on the quality of the working relationship—one of the best predictors of outcome–but clients were between 62 and 97% less likely to continue in care!

It’s not hard to see how these well-intentioned—some would argue, absolutely necessary—solutions actually end up exacerbating the problem. Money is wasted when the paperwork is completed but people don’t come back; money that would be better spent providing treatment. Those who do not return don’t disappear, they simply access services in other ways (e.g., the E.R., police and social services, etc.)—after all, they need help! The ones who do continue after intake, experience poorer outcomes and stay longer in care, a cost to both the consumer and the system.

What to do?

In addition to pushing back against the mindless regulation and paperwork, there are several steps practitioners and agency managers can take:

  • Stop separating intake from treatment

The practice does not save time and actually increases costs. Consider having consumers complete as much of the paperwork as possible before the session begins. The first visit is critical. It determines whether people continue or drop out. Listen first. At the end of the visit, review the paperwork, filling in missing data and completing any remaining forms.

  • Begin monitoring outcome

Research to date shows that routinely monitoring progress reduces dropout rates and the length of time spent in treatment while simultaneously improving outcome. Combined, such results work to alleviate the bottleneck at the entry point of services.

  • Begin monitoring the quality of the therapeutic relationship:

Engagement and outcomes are improved when problems in the relationship are identified and openly discussed. Even when intake is separated from treatment, feedback should be sought. Data to date indicate that the most effective clinicians seek and more often receive negative feedback, a skill that enables them to better meet the needs of those they serve.

Getting started is not difficult. Indeed, there’s an entire community of professionals just a click away who are working with and learning from one another. The International Center for Clinical Excellence is the largest, web based community of mental health professionals in the world. It’s ad free and costs nothing to join.

Sign up for the ICCE Fall Webinar. You will learn:

  • The Empirical Basis for Feedback Informed Treatment
  • Basics of Outcome and Alliance Measurement
  • Integrating Feedback into Practice & Creating a Culture of Feedback
  • Understanding Outcome and Alliance Data

Register online at: https://www.eventbrite.ie/e/fall-2015-feedback-informed-treatment-webinar-series-tickets-17502143382. CE’s are available.

Finally, join colleagues and friends from around the world for the Advanced and FIT Supervision courses held in March in Chicago. We work and play hard. You will leave with a thorough grounding in feedback-informed principles and practice. Registration is limited, and the courses tend to sell out several months in advance.

Until then,

Scott

Scott D. Miller, Ph.D. Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, ICCE

Room for Improvement: Feedback Informed Treatment and the Therapeutic Relationship

May 10, 2015 By scottdm 8 Comments

My Scandinavian Grandmother Christina was fond of saying, “The room for improvement…is the biggest one in our house.”

Turns out, when it comes to engaging people in physical and mental health services, Grandma was right.  We healthcare professionals can do better—and recent research points the way.

Stanford psychologists Sims and Tsai found that recipients of care both choose, and are more likely to follow the recommendations of, healthcare providers who match how they ideally want to feel.   People who valued feeling excitement, for example, were more likely to choose a professional who promoted excitement and vice versa.

Bottom line?  Making the helping relationship FIT how people want to feel—their goals, values, and preferences—improves engagement and effectiveness.

Tailoring services in the manner suggested by Sims and Tsai is precisely what Feedback-Informed Treatment (FIT) is all about.  Two simple scales—the Outcome and Session Rating Scales—facilitate this process, enabling helping professionals to assess and adjust treatment in real time to improve the FIT.

Overwhelmed by paperwork?  No worries.  As I have written about in previous blogposts (1, 2), several web-based and electronic solutions exist that make integration a snap.  The pioneer–the very first to come online–is MyOutcomes.

Since coming on the scene, the owners have doggedly sought feedback from users, working steadily to provide a system that maximizes practitioners’ effectiveness.  The latest version is packed full of goodies, including a mobile app and the ability to have clients provide feedback remotely (e.g., home, between visits, etc.).  Watch the video below to get a more comprehensive overview of its many features.

I’m also proud to say that the parent company of MyOutcomes has partnered with the International Center for Clinical Excellence to create the first online training on Feedback-Informed Treatment.   Importantly, the FIT E-learning program is not another webinar.  It is a true online classroom, complete with video instruction and an intuitive software interface that tailors learning and mastery to the individual user.

Together, the ORS and SRS, FIT E-learning, and MyOutcomes make “the room for improvement” a much less daunting, even enjoyable, undertaking.

I can almost see my Grandma Stina smiling!

Until next time,

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

P.S.:  We still have a few spots open for our FIT Implementation and FIT Ethics courses coming up in August. Don’t wait.  The number of participants is limited and both courses fill about two months in advance!

Filed Under: Feedback, Feedback Informed Treatment - FIT, FIT Software Tools, ICCE

Implementing Feedback Informed Treatment

April 1, 2015 By scottdm 1 Comment

What do the Sydney Opera House, Boston’s Central Artery Tunnel, and the Eurofighter Typhoon defense project all have in common?   In each case, their developers suffered from “The Planning Fallacy” (PF). First described in 1979 by Nobel laureate Daniel Kahneman and his collaborator Amos Tversky, the planning fallacy is the all-too-human tendency to underestimate the amount of time and money projects will require for completion. The impact is far from trivial, especially if you are footing the bill. The Sydney Opera House, for example, took ten years longer and cost nearly 15 times more money than originally planned (102 versus 7 million). The tunnel project in Boston ran over budget to the tune of 12 billion dollars—a figure equivalent to 19,000 dollars for every man, woman, and child living in the city!

Research documents that the same shortsightedness plagues implementation of new best practices in mental healthcare. As I blogged about previously, available data indicate that between 70 and 95 percent of such efforts fail. The same body of evidence shows that prior experience with similar projects offers no protection. Indeed, regardless of experience, when planners are asked to provide a “best” and “worst” case estimate, the vast majority fail to meet even their most dire predictions.

The impact of a failed implementation on staff morale can be devastating—not to mention the waste of precious time and resources, and the missed opportunity to provide more effective services to consumers. I’ve seen it firsthand, had it whispered to me on breaks at workshops, as I crisscross the globe teaching about Feedback-Informed Treatment (FIT). At a workshop in Ohio, a woman approached me saying, “So, you are the guy that developed the Outcome and Session Rating Scales?” When I answered yes, she leaned in, and in a quiet voice, asked, “Will you be telling us how to use them? ‘Cause we’ve been using them at my agency for about a year, but no one knows what they’re for.” More recently, at a training on the west coast, a participant told me he and his colleagues started using the scales following a two-day workshop at his agency, but eventually stopped. “Initially, there was a lot of excitement,” he said, “We really bought in. All of us were doing it, or at least trying. Then, it just kind of fizzled.” I nodded in recognition. The planning fallacy strikes again!

Since first being reviewed and listed on SAMHSA’s National Registry of Evidence-Based Programs and Practices, interest in this proven approach to improving the outcome and retention of mental health services has exploded.  More than 100,000 practitioners have downloaded the ORS and SRS.   Given the brevity and simplicity of the scales, it is easy to assume that implementation will be quick and relatively easy. Ample evidence, as well as experience in diverse settings around the world, strongly suggests otherwise.

It goes without saying that getting started is not the problem.   Fully implementing FIT is another story. It takes time and careful planning—for most, between five and seven years. Along the way, there’s plenty of support to aid in success:

  • Managers, supervisors, and clinicians can join a free, online, international community of nearly 10,000 like-minded professionals using FIT in diverse settings (www.iccexcellence.com). Every day, members connect and share their knowledge and experience with each other;
  • A series of “how to” manuals and a free gap-assessment tool (FRIFM) are available to aid in planning, guiding progress, and identifying common blind spots in implementation;
  • The 2-day FIT Implementation workshop provides an in-depth, evidence-based training based on the latest findings from the field of implementation science. Over the last few years, we’ve learned a great deal about what leads to success. Immunize yourself against the planning fallacy by joining colleagues from around the world for this event.
  • Finally, there’s technology.  In my last blogpost, I introduced PragmaticTracker.com, a system for administering the ORS and SRS.  The video below introduces www.fit-outcomes.com, a simple, easy-to-use website with a clean, Apple-like interface that makes gathering and interpreting outcome and alliance data a snap.

That’s it for now.

Until next time, best wishes,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Conferences and Training, Feedback, Feedback Informed Treatment - FIT, FIT Software Tools Tagged With: feedback informed treatment

Is Documentation Helping or Hindering Mental Health Care? Please Let me know.

November 23, 2014 By scottdm 61 Comments

So, how much time do you spend doing paperwork?  Assessments, progress notes, treatment plans, billing, updates, etc.–the lot?

When I asked the director of the agency I was working at last week, it took him no time to respond. “Fifty percent,” he said, then added without the slightest bit of irony, “It’s a clinic-wide goal, keeping it to 50% of work time.”

Truth is, it’s not the first time I’ve heard this figure.  Wherever I travel–whether in the U.S. or abroad–practitioners are spending more and more time “feeding the bureaucratic beast.”  Each state or federal agency, regulatory body, and payer wants a form of some kind.  Unchecked, regulation has lost touch with reality.

Just a few short years ago, the figure commonly cited was 30%.  In the last edition of The Heart and Soul of Change, published in 2009, we pointed out that in one state, “The forms needed to obtain a marriage certificate, buy a new home, lease an automobile, apply for a passport, open a bank account, and die of natural causes were assembled … altogether weighed 1.4 ounces.  By contrast, the paperwork required for enrolling a single mother in counseling to talk about difficulties her child was experiencing at school came in at 1.25 pounds” (p. 300).

Research shows that a high ratio of documentation to clinical service leads to:

  • Higher rates of burnout and job dissatisfaction among clinical staff;
  • Fewer scheduled treatment appointments; and
  • More no-shows, cancellations, and disengagement among consumers.

Some potential solutions have emerged, among them “concurrent,” a.k.a. “collaborative,” documentation.  It’s a great idea: completing assessments, treatment plans, and progress notes together with clients during, rather than after, the session.  We started doing this to improve transparency and engagement at the Brief Family Therapy Center in Milwaukee, Wisconsin back in the late 1980s.  At the same time, its chief benefit to date seems to be that it saves time on documentation–as though filling out paperwork were an end in and of itself!

Ostensibly, the goal of paperwork and oversight procedures is to improve accountability.  In these evidence-based times, that leads me to say, “show me the data.”  Consider the widespread practice–mandate, in most instances–of treatment planning. Simply put, it is less science than science fiction.  Perhaps this practice improves outcomes in a galaxy far, far away, but on planet Earth, supporting evidence is sparse to non-existent.  Where is the evidence that any of the other documentation improves accountability, benefits consumers, or results in better outcomes?

Put bluntly, the field needs an alternative.  What practice not only ensures accountability but simultaneously improves the quality and outcome of behavioral health services?  Routinely and formally seeking feedback from consumers about how they are treated and their progress.

Soliciting feedback need be neither time-consuming nor difficult.  Last year, two brief, easy-to-use scales were deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration (SAMHSA).  The International Center for Clinical Excellence received perfect scores for the materials, training, and quality assurance procedures it makes available for implementing the measures into routine clinical practice.

Then again, these two forms add to the paperwork already burdening clinicians.  The main difference?  Unlike everything else, numerous RCTs document that using these forms increases effectiveness and efficiency while decreasing both cost and risk of deterioration.

Learn more at the official website: www.whatispcoms.com.  Better yet, join us in Chicago for our upcoming intensives in Feedback Informed Treatment and Supervision:

Advanced FIT Training (2015) | FIT Supervision Training (2015)

In the meantime, would you please let me know your thoughts?  To paraphrase Goldilocks, is the amount of documentation you are required to complete “Too much,” “Too little,” or “Just about right”?  Type in your reply below!

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Practice Based Evidence

What articles have 140,000 of your colleagues read to improve their practice?

November 21, 2014 By scottdm 1 Comment

Each week, I upload articles to the web about how to improve effectiveness. There are a lot to choose from, but here are the top ones read by behavioral health professionals around the world:

  • Measures and Feedback 2014

This is the latest version of the most widely-read upload on the site. It summarizes all of the available research about using feedback to improve retention in and outcome of care, including studies using the ORS and SRS.

  • How to Improve your Effectiveness

A short, fun article that highlights the evidence-based steps for improving one’s effectiveness as a behavioral health provider. Feedback, it turns out, is not enough. This article reviews the crucial step that makes all the difference.

Finally, here’s a link to a simple-to-use tool for interpreting scores on the ORS:

  • ORS Reliable Change Chart

That’s it for now. Best wishes in your work. Stay in touch.

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence
info@scottdmiller.com

Registration is open for the Advanced Training in Feedback-Informed Treatment (FIT). Learn how to integrate this SAMHSA certified evidence-based practice into your work or agency. We promise you three comprehensive, yet fun-filled days of learning together with colleagues from around the world.

 

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance

What can therapists learn from the CIA? Experts versus the "Wisdom of the Crowd"

May 6, 2014 By scottdm Leave a Comment

What can we therapists learn from the CIA?  In a phrase, “When it comes to making predictions about important future events, don’t rely on experts!”

After a spate of embarrassing, high-profile intelligence failures, a recent story showed how a relatively small group of average people made better predictions about critical world events than highly trained analysts with access to classified information.  The four-year study, known as the Good Judgment Project, adds to mounting evidence regarding the power of aggregating the independent guesses of regular folks–or what is known as “the wisdom of the crowd.”

When it comes to therapy, multiple scientific studies show that inviting the “wisdom of the crowd” into treatment as much as doubles effectiveness, while simultaneously cutting dropout and deterioration rates.

Whatever your profession, work setting, or preferred therapeutic approach, the process involves formally soliciting feedback from clients and then comparing the results to empirically established benchmarks.   Getting started is easy:

  • Download and begin using two free, easy-to-use tools–one that charts progress, the other the quality of the therapeutic relationship–both of which are listed on SAMHSA’s National Registry of Evidence Based Programs and Practices.
  • Next, access cutting-edge technology available on the web, smartphones, and tablets that makes it easy to anonymously compare the progress of your clients to effective patterns of practice worldwide.

You can learn more at: www.whatispcoms.com.  Plus, the ICCE–the world’s largest online community of professionals using feedback to enhance clinical judgment–is available at no cost to support you in your efforts.

While you’re at it, be sure and join fellow practitioners from the US, Canada, Europe, and Australia for the “Training of Trainers” or two-day FIT Implementation Intensive coming up this August in Chicago.  You’ll not only learn how to use the measures, but also tap into the collective wisdom of clients and practitioners around the globe.   Space is limited, and we are filling up quickly, so don’t wait to register.

Filed Under: Feedback, Feedback Informed Treatment - FIT Tagged With: feedback, feedback informed treatment, icce, international center for cliniclal excellence, National Registry of Evidence Based Programs and Practices, NREPP, PCOMS, SAMHSA, therapy, Training

Did you know your clients can tell if you are happy?

January 19, 2014 By scottdm 2 Comments

It’s true.  Adding to a growing literature showing that the person of the therapist is more important than theoretical orientation, years of experience, or discipline, a new study documents that clients are sensitive to the quality of their therapist’s life outside of treatment.  In short, they can tell when you are happy or not.  Despite our best efforts to conceal it, they see it in how we interact with them in therapy.  By contrast, therapists’ judgements regarding the quality of the therapy are biased by their own sense of personal well-being.

The solution?  Short of being happy, it means we need to check in with our clients on a regular basis regarding the quality of the therapeutic relationship.  Multiple randomized clinical trials show that formally soliciting feedback regarding progress and the alliance improves outcome and continued engagement in treatment.  One approach, “Feedback-Informed Treatment,” is now listed on SAMHSA’s National Registry of Evidence-Based Programs and Practices.  Step-by-step instructions and videos for getting started are available on a new website: www.pcomsinternational.com.

Seeking feedback from clients not only helps to identify and correct potential problems in therapy, but is also the first step in pushing one’s effectiveness to the next level.  In case you didn’t see it, I review the research and steps for improving performance as a therapist in an article/interview on the Psychotherapy.net website.  It’s sure to make you happy!

Filed Under: CDOI, Feedback, Feedback Informed Treatment - FIT, PCOMS Tagged With: behavioral health, common factors, evidence based practice, excellence, healthcare, productivity, Therapist Effects

Cutting Edge Feedback

November 22, 2011 By scottdm Leave a Comment

[Video: Earth | Time Lapse View from Space, Fly Over | NASA, ISS]

Using feedback to guide and improve the quality and outcome of behavioral health services is growing in popularity.  The number of systems available for measuring, aggregating, and interpreting the feedback provided by consumers is increasing.  The question, of course, is, “which is best?”  And the answer is, “it depends on the algorithms being used.”

Over a decade ago, my colleagues and I developed a set of mathematical equations that enabled us to plot the “expected treatment response,” or ETR, of a client based on their first-session Outcome Rating Scale (ORS) score.  Although the math was complicated, the idea was not: therapists and clients could compare outcomes from session to session to the benchmark provided by the ETR.  If too much or too little progress was being made, client and therapist could discuss what changes might be made to the services being offered in order to ensure more effective or durable progress.  It was a bold idea and definitely “cutting edge” at the time–after all, 10 years ago, few people were even measuring outcomes, let alone trying to provide benchmarks for guiding clinical practice.  The formulas developed at that time for plotting change in treatment are still being used by many around the world with great effect.  At the same time, it was merely a first attempt.
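
To give a feel for the idea (and only that), here is a toy Python sketch of an expected-treatment-response curve: a score that rises quickly at first and then levels off toward a target.  The functional form, target, and rate below are illustrative assumptions; they are not the actual equations my colleagues and I developed.

```python
import math

def toy_etr(intake_ors: float, session: int,
            target: float = 30.0, rate: float = 0.35) -> float:
    """Toy expected-treatment-response curve: scores approach an assumed
    target in a negatively accelerating fashion.  Illustration only; not
    the published ETR algorithm."""
    return target - (target - intake_ors) * math.exp(-rate * (session - 1))

# Expected trajectory for a client scoring 15 on the ORS at intake, sessions 1-6
print([round(toy_etr(15, s), 1) for s in range(1, 7)])
```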

I am proud and excited to announce the development and launch of a new set of algorithms–the largest and most sophisticated to date–based on a sample of 427,744 administrations of the ORS, in 95,478 unique episodes of care, provided by 2,354 different clinicians.  Unlike the prior formulas–which plotted the average progress of all consumers, successful and not–the new equations provide benchmarks for comparing individual consumer progress to both successful and unsuccessful treatment episodes.  Consider an analogy to the field of medicine.  No one would be interested in a test for the effectiveness of a particular cancer treatment that compared an individual’s progress to the average of all patients, whether they lived or died.  People want to know, “will I live?”  And in order to answer that question, the ETR of both successful and ultimately unsuccessful treatments must be determined and the individual client’s progress compared to both benchmarks.  Adjustments can be made to the services offered when the client’s session-by-session outcomes fit the ETR of treatments that ended unsuccessfully.

An example of the type of feedback provided by the new algorithms is found below.  The graph displays three zones of potential progress (or ETRs) for a client scoring 15 on the ORS at intake.  Scores falling in the “green” area from session to session are similar to treatments that ended successfully.  As might be expected, those in the “red” zone ended unsuccessfully.  Finally, scores in the “yellow” zone had mixed results.  In each instance, both the client and therapist are provided with instant feedback: green = on track, red = off track, yellow = concern.
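
In code, the session-to-session check amounts to comparing an observed score against zone boundaries.  The sketch below assumes the boundaries have already been derived from the actual algorithms; the numbers used in the example are invented simply to show the mechanics.

```python
def feedback_zone(observed_ors: float, green_floor: float, yellow_floor: float) -> str:
    """Classify one session's ORS score against zone boundaries supplied
    by the trajectory algorithms (derived elsewhere, from real norms)."""
    if observed_ors >= green_floor:
        return "green: on track"
    if observed_ors >= yellow_floor:
        return "yellow: concern"
    return "red: off track"

# Invented boundaries for session 3 of a client who scored 15 at intake
print(feedback_zone(observed_ors=17.0, green_floor=19.0, yellow_floor=16.0))
# -> yellow: concern
```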


The new algorithms will be a major focus of the upcoming “Advanced Intensive in Feedback-Informed Treatment (FIT)” scheduled for March 19th-22nd, 2012.  All those subscribing to the event also receive the newly released series of FIT treatment manuals.  Space is limited, as always, to 35 people and we are filling fast so please don’t wait.  So many exciting developments!

Now, if you haven’t already done so, click on the video at the start of this post.  I was floored by these satellite images.  In some way, I hope that the new algorithms, FIT training manuals, and the ICCE community can inspire a similar sense of perspective!

Filed Under: evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT Software Tools Tagged With: cdoi, Dodo Bird, randomized clinical trial

Are Mental Health Practitioners Afraid of Research and Statistics?

September 30, 2011 By scottdm Leave a Comment

A few weeks back I received an email from Dr. Kevin Carroll, a marriage and family therapist in Iowa.  Attached were the findings from his doctoral dissertation.  The subject was near and dear to my heart: the measurement of outcome in routine clinical practice.  The findings were inspiring.  Although few graduate-level programs include training on using outcome measures to inform clinical practice, Dr. Carroll found that 64% of those surveyed reported utilizing such scales with about 70% of their clients!  It was particularly rewarding for me to learn that the most common measures employed were the … Outcome and Session Rating Scales (ORS & SRS).

As readers of this blog know, there are multiple randomized clinical trials documenting the impact that routine use of the ORS and SRS has on retention, quality, and outcome of behavioral health services.  Such scales also provide direct evidence of effectiveness.  Last week, I posted a tongue-in-cheek response to Alan Kazdin’s broadside against individual psychotherapy practitioners.  He was bemoaning the fact that he could not find clinicians who utilized “empirically supported treatments.”  Such treatments, when utilized, it is assumed, lead to better outcomes.  However, as all beginning psychology students know, there is a difference between “efficacy” and “effectiveness” studies.  The former tell us whether a treatment has an effect; the latter looks at how much benefit actual people gain from “real life” therapy.  If you were a client, which kind of study would you prefer?  Unfortunately, most of the guidelines regarding treatment models are based on efficacy rather than effectiveness research.  The sine qua non of effectiveness research is measuring the quality and outcome of psychotherapy locally.  After all, what client, having sought out but ultimately gained nothing from psychotherapy, would say, “Well, at least the treatment I got was empirically supported.”  Ludicrous.

Dr. Carroll’s research clearly indicates that clinicians are not afraid of measurement, research, or even statistics.  In fact, this last week, I was in Denmark teaching a specialty course in research design and statistics for practitioners.  That’s right.  Not a course on research in psychotherapy or treatment.  Rather, measurement, research design, and statistics.  Pure and simple.  Their response convinces me even more that the much-talked-about “clinician-researcher” gap is not due to a lack of interest on practitioners’ part but rather, and most often, a result of different agendas.  Clinicians want to know “what will work” for this client.  Research rarely addresses this question, and the aims and goals of some in the field remain hopelessly far removed from day-to-day clinical practice.  Anyway, watch the video yourself:

Filed Under: Feedback, Feedback Informed Treatment - FIT Tagged With: continuing education, holland, icce, ors, Outcome, psychotherapy, Session Rating Scales, srs

Changing Home-Based Mental Health Care for Good: Using Feedback Informed Treatment

February 8, 2011 By scottdm Leave a Comment

Some teach.  Some write.  Some publish research.  Arnold Woodruff and Kathy Levenston work for a living!  Kathy Levenston specializes in working with foster and adopted children.

Arnold Woodruff developed the first intensive in-home program run by a community services board in Virginia. He has over 30 years of experience, and has served as the President of the Virginia Association for Marriage and Family Therapy.  And now, these two dedicated professionals, certified trainers and associates of the International Center for Clinical Excellence, have just purchased Home for Good, the first home-based mental health program in the Richmond, VA area to use Feedback-Informed Treatment (FIT).

The program is now a 100% employee-owned company and part of a larger vision the two have for bringing customer-friendly mental health care to people in the Richmond area. Home for Good has been providing Intensive In-home Services (counseling, case management, and crisis support) to children, adolescents, and their families for the past two years. It has achieved superior results compared to other mental health programs, based on an analysis of data generated from routine administration of the Outcome Rating Scale in clinical practice. Its results are continuing to improve with the use of Feedback-Informed Treatment. Home for Good will soon be offering additional services, including outpatient individual, family, and group therapy.

Filed Under: Behavioral Health, Feedback, ICCE Tagged With: case management, cdoi, counseling, evidence based practice, Home for Good, randomized clinical trial

Getting FIT in the New Year: The Latest Evidence

January 18, 2011 By scottdm Leave a Comment

John Norcross, Ph.D., is without a doubt the researcher who has done the most to highlight the evidence base supporting the importance of the relationship between clinician and consumer in successful behavioral healthcare.   The second edition of his book, Psychotherapy Relationships that Work, is about to be released. Like the last edition, this volume is a virtual treasure trove of research findings and empirically supported practices.

Among the many gems in the book is a chapter by Michael J. Lambert, Ph.D., a pioneering researcher on “feedback-informed treatment” (FIT).  As usual, he does a masterful job of summarizing the existing research on the subject.  The data are overwhelmingly positive: seeking and using standardized feedback regarding the progress and outcome of treatment cuts dropout and deterioration rates and significantly improves outcomes.

Lambert also reports the results of two meta-analyses: one performed on studies using his own OQ System family of measures, the other based on research using the ORS and SRS.  Not only did he find ample empirical support for both systems, but in the case of the ORS and SRS, therapies informed by feedback “had 3.5 times higher odds of experiencing reliable change.”  Additionally, and importantly, the brief, 4-item ORS and SRS performed as well as the longer and more detailed OQ-45.2.
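For readers who wonder what a figure like “3.5 times higher odds” actually means, here is a small, hypothetical calculation of an odds ratio for reliable change between a feedback condition and treatment as usual.  The counts are invented solely to show the arithmetic; they are not drawn from Lambert’s meta-analyses.

```python
# Hypothetical illustration of an odds ratio for reliable change.
# The counts below are invented for demonstration only.

def odds_ratio(improved_a, not_improved_a, improved_b, not_improved_b):
    """Odds of reliable change in group A relative to group B."""
    odds_a = improved_a / not_improved_a
    odds_b = improved_b / not_improved_b
    return odds_a / odds_b

# e.g., feedback condition: 70 of 170 clients reliably changed;
#       treatment as usual: 40 of 240 clients reliably changed.
print(round(odds_ratio(70, 100, 40, 200), 1))  # -> 3.5
```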

What can you do?  First, order John’s book.  Second, if you are not yet FIT, now is the time to register to use the measures.  And if you need support, why not join the International Center for Clinical Excellence?  Like the measures, membership is free.  Right now, professionals from different disciplines, working in diverse settings, are connecting with and learning from each other.  Here’s a nudge: John Norcross is one of ICCE’s newest members, so you’ll be able to reach him there.

Filed Under: Behavioral Health, CDOI, Feedback, PCOMS Tagged With: cdoi, continuing education, icce, randomized clinical trial

Hope Transcends: Learning from our Clients

July 30, 2010 By scottdm Leave a Comment

“Hope Transcends” was the theme of the 39th Annual Summer Institute on Substance Abuse and Mental Health held in Newark, Delaware this last week.  I had the honor of presenting a two-day, intensive training on “feedback-informed clinical work” to 60+ clinicians, agency managers, peer supports, and consumers of mental health services.  I met so many talented and dedicated people over the two days and even had a chance to reconnect with a number of folks I’d met at previous trainings, both at the Institute and elsewhere.

One person I knew of but had never had the privilege of meeting was psychologist Ronald Bassman.  A few years back, he’d written a chapter that was included in my book, The Heroic Client.  His topic at the Summer Institute was similar to what he’d written for the book: harmful treatment.  Research dating back decades documents that approximately 10% of people deteriorate while in psychotherapy.  The same body of evidence shows that clinicians are not adept at identifying (a) people who are likely to drop out of care or (b) people who are deteriorating while in care.

Anyway, you can read about Ron on his website or pick up his gripping book, A Fight to Be.  Briefly, at age 22 Ron was committed to a psychiatric hospital.  Over the next several years, he was diagnosed with paranoid schizophrenia and forcibly subjected to a series of humiliating, painful, degrading, and ultimately unhelpful “treatments.”  Eventually, he escaped both his own and the system’s madness and became a passionate advocate for improving mental health services.  His message is simple: “we can and must do better.”  And, he argues persuasively, the process begins with building better partnerships with consumers.

One way to build bridges with consumers is to routinely seek their feedback regarding the status of the therapeutic relationship and the progress of any services offered.  Indeed, the definition of “evidence-based practice” formally adopted by the American Psychological Association mandates that the clinician “monitor…progress…[and] If progress is not proceeding adequately…alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or the implementation of the goals of treatment)” (pp. 276-277, APA, 2006).  Research reviewed in detail on this blog documents significant improvements in both retention and outcome when clinicians use the Outcome and Session Rating Scales to solicit feedback from consumers.  Hope really does transcend.  Thank you, Ron, and thank you to the clinicians and organizers at the Institute.
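As a rough sketch of what “monitoring progress” can look like in practice, the snippet below flags cases that may not be on track, based on session-to-session ORS change and a low SRS score.  The thresholds are placeholders I have chosen for illustration; they are not the published algorithms behind the measures, and any real implementation would be discussed openly with the client.

```python
# Rough sketch of flagging possibly off-track cases from routine ORS/SRS data.
# Thresholds are placeholders chosen for illustration only.

from dataclasses import dataclass

ORS_MIN_EXPECTED_GAIN = 5   # assumed minimum gain expected after a few sessions
SRS_ALERT_THRESHOLD = 36    # assumed score below which the alliance warrants discussion

@dataclass
class Session:
    number: int
    ors: float   # Outcome Rating Scale, 0-40
    srs: float   # Session Rating Scale, 0-40

def flags(sessions):
    """Return human-readable alerts for a single client's session history."""
    alerts = []
    if len(sessions) >= 3 and sessions[-1].ors - sessions[0].ors < ORS_MIN_EXPECTED_GAIN:
        alerts.append("Little or no measured change: review goals and approach with the client.")
    if sessions[-1].srs < SRS_ALERT_THRESHOLD:
        alerts.append("Low alliance rating: invite feedback about the relationship.")
    return alerts

history = [Session(1, 19, 38), Session(2, 20, 35), Session(3, 21, 34)]
print(flags(history))
```

The point is not the particular numbers but the habit: ask, score, notice when things are not proceeding adequately, and then talk about it.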

And now, just for fun, check out these two new videos:


Filed Under: Behavioral Health, excellence, Feedback, Feedback Informed Treatment - FIT Tagged With: American Psychological Society APA, cdoi, feedback informed treatment, meta-analysis, ors, out rating scale, Outcome, psychology, public behavioral health, randomized clinical trial, schizophrenia, session rating scale, srs, the heroic client
