SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) and remedial to the problem being treated (e.g., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome.  At the same time, as I’ve pointed out in prior blog posts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.

The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved (it’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples).  The common factors camp, on the other hand, proffers in place of techniques an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It’s a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or by studying the parts of speech–in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and longtime mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific nor a common factors informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates were half (50%) as high in the feedback condition as in the no-feedback condition!

Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.”  At the recent Evolution of Psychotherapy conference I was on a panel with David Barlow, Ph.D.–a longtime proponent of “specific treatments for specific disorders” (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it’s not seeking feedback that is important.  Rather, it’s what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? About the same question. We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer’s “domain-specific knowledge.”  Feedback must result in the performer knowing more about his or her area–and knowing how and when to apply that knowledge to specific situations–than others.  Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players.  Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these and highlighted a few others in the research pipeline during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.

Filed Under: Behavioral Health, excellence, Feedback, Top Performance Tagged With: brendan madden, cdoi, cognitive behavioral therapy, common factors, continuing education, david barlow, evidence based medicine, evidence based practice, Evolution of Psychotherapy, feedback, icce, michael lambert, ors, outcome rating scale, proms, session rating scale, srs, therapist, therapists, therapy

Holidays and Suicide: ’Tis the Season NOT!

December 21, 2009 By scottdm Leave a Comment

The notion that suicides increase during the holiday season is as traditional as Santa Claus–and, according to statistics dating back at least a decade, just as illusory.  In fact, research actually shows suicide rates to be at their lowest in December!  According to Dan Romer, a researcher at the Annenberg Public Policy Center at the University of Pennsylvania, the holidays are simply not a time for suicide.  If you are trying to peg the rate to a particular month of the year, try May.  Moreover, even suicide attempts decline during the holiday season!  At Cuyahoga County Mental Health, a group I’ve worked closely with over the last three years implementing Feedback Informed Treatment (FIT), the director of crisis services, Rick Oliver, says that reviews done by the agency show that calls from suicidal people actually drop off during this time of year.

The culprit for the lingering misconception?  The media and–hold onto your candy cane–healthcare professionals!  That’s right.  In a study published this month in the British Medical Journal, researchers Vreeman and Carroll found that healthcare professionals believe in the suicide-holiday connection, along with a number of other dubious ideas (including that sugar leads to hyperactivity, that poinsettias are poisonous, and that people lose most of their heat through their heads).

So, the advice to the media and healthcare professionals, given the evidence, can only be: STOP IT!  Stop associating the holiday season with increased risk of suicide.

Clearly, suicide can happen at any time and none of the foregoing implies that people can’t and don’t feel blue.  At the same time, the decrease in suicides during this period suggests a possible course of action: connection and generosity.  If you are feeling down, do your best to reach out.  And if you’re not, then extend your hand.

Filed Under: Behavioral Health, Suicide Tagged With: british medical journal, cdoi, dan romer, healthcare, rick oliver, suicide

The Effects of Feedback on Medication Compliance and Outcome: The University of Pittsburgh Study

December 18, 2009 By scottdm 1 Comment

A number of years ago, I was conducting a workshop in Pittsburgh.  At some point during the training, I met Dr. Jan Pringle, the director of the Program Evaluation Research Unit in the School of Pharmacy at the University of Pittsburgh.

Jan had an idea: use outcome feedback to improve pharmacy practice and outcomes.  Every year, large numbers of prescriptions written by physicians (and other practitioners) are never filled.  What’s more, a surprisingly large number of the scripts that are filled are either: (a) not taken; or (b) not taken properly.  The result?  In addition to the inefficient use of scarce resources, the disconnect between prescribers, pharmacists, and patients puts people at risk for poor healthcare outcomes.

Together with project coordinator and colleague Dr. Michael Melczak, Jan set up a study using the ORS and SRS.  Over the last 3 years, I’ve worked as a consultant to the project–providing training and addressing issues regarding application in this first-ever study of its kind in pharmacy.

Anyway, there were two different conditions in the study.  In the first, pharmacists–the practitioners most likely to interact with patients about prescriptions–engaged in “practice as usual.”  In the second condition, pharmacists used the ORS and the SRS to chart, discuss, and guide patient progress and the pharmacist-patient alliance.  Although the manuscript is still in preparation, I’m pleased to be able to report here that, according to Drs. Pringle and Melczak, the results indicate “that the patients who were seen by the pharmacists who used [the] scales were significantly more likely to take their medications at the levels that would be likely to result in clinical impact than the patients who saw a pharmacist who did not use the scales…for hypertensive and hyperlipidemia drugs especially.”

Stay tuned for more…

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, medication adherence Tagged With: jan pringle, michael melczak, ors, outcome rating scale, pharmacy, session rating scale, srs

