SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


Changing Home-Based Mental Health Care for Good: Using Feedback Informed Treatment

February 8, 2011 By scottdm Leave a Comment

Some teach.  Some write.  Some publish research.  Arnold Woodruff and Kathy Levenston work for a living!  Kathy Levenston specializes in working with foster and adopted children.

Arnold Woodruff developed the first intensive in-home program run by a community services board in Virginia. He has over 30 years of experience, and has served as the President of the Virginia Association for Marriage and Family Therapy.  And now, these two dedicated professionals, certified trainers and associates of the International Center for Clinical Excellence, have just purchased Home for Good, the first home-based mental health program in the Richmond, VA area to use Feedback-Informed Treatment (FIT).

The program is now a 100% employee-owned company and part of a larger vision the two have for bringing customer-friendly mental health care to people in the Richmond area. Home for Good has been providing Intensive In-home Services (counseling, case management, and crisis support) to children, adolescents, and their families for the past two years. Based on an analysis of data generated from the routine administration of the Outcome Rating Scale in clinical practice, the program has achieved superior results compared to other mental health programs, and those results are continuing to improve with the use of Feedback-Informed Treatment. Home for Good will soon be offering additional services, including outpatient individual, family, and group therapy.

Filed Under: Behavioral Health, Feedback, ICCE Tagged With: case management, cdoi, counseling, evidence based practice, Home for Good, randomized clinical trial

The Growing Evidence Base for Feedback-Informed Treatment (FIT)

January 25, 2011 By scottdm Leave a Comment

Dateline: February 2, 2011
Location: Anchorage, AK
Greetings from Anchorage, Alaska, where I’ve been traveling and teaching about feedback-informed treatment (FIT).  On Monday, I worked with dedicated behavioral health professionals living and working in Barrow–the northernmost point in the United States.  FIT has literally reached the “top of the world.”  How incredible is that?

Here I am pictured in front of a sign which locals told me would prove I’d made the long journey to the village of 5,000.  I look forward to returning soon to help the group with the “nuts and bolts” of implementing FIT across various behavioral health services–practitioners were keen to get started.

As I’ve crisscrossed the state, I’ve been proud to share the growing evidence-base for feedback informed work.  Below, the data is summarized in a free, downloadable PDF file, “Measures and Feedback,” which has been updated to include the latest research using the ORS and SRS to improve the quality and outcome of treatment.  If you accessed this file back in 2010, be sure to get this updated version.

Measures and feedback 2016 from Scott Miller

Filed Under: Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based practice, icce

The War on Unhappiness Heats Up

November 24, 2010 By scottdm Leave a Comment

Back in September, I blogged about an article by Gary Greenberg published in the August issue of Harper’s magazine that took aim at the “helping profession.”   He cast a critical eye on the history of the field, its colorful characters, its constantly shifting theoretical landscape, and the claims and counterclaims regarding “best practice.”   Several paragraphs were devoted to my own work; specifically, research documenting the relatively inconsequential role that particular treatment approaches play in successful treatment and the importance of using ongoing feedback to inform and improve mental health services.

Just this last week, while I was overseas teaching in Romania (more on that trip soon), I received an email from Dr. Dave of ShrinkRapRadio who felt the piece by Greenberg was unfair to the field in general and a mischaracterization of the work by many of the clinicians cited in the article, including me.  “I’ve got a blog on the Psychology Today website and I’m planning to take him to task a bit,” he wrote.

If you have not had a chance to read the Greenberg article, you can find it on my original blogpost.  It’s a must read, really.  As I said then, whatever your opinion about the present state of practice, “Greenberg’s review of current and historical trends is sobering to say the least–challenging mental health professionals to look in the mirror and question what we really know for certain–and a must read for any practitioner hoping to survive and thrive in the current practice environment.”  Then, take a moment and read Dr. Dave’s response.  With his permission, I’ve posted it below!

  

Popping The Happiness Bubble: The Backlash Against Positive Psychology

Readers will recall that in Part 1, I suggested that a backlash against the ebullience of the positive psychology movement was probably inevitable. The most visible sign of that rebellion was last year’s best-selling book by Barbara Ehrenreich, Bright-Sided: How The Relentless Promotion of Positive Thinking Has Undermined America. While I found myself in agreement with much of her appraisal of American culture and our historical fascination with “positive thinking,” I thought her critique of positive psychology fell short by equating positive psychology to “positive thinking.” It also seemed to me that she failed to recognize that a huge body of research conducted by an army of independent researchers is emerging on a very diverse range of topics, which have been subsumed under the general heading of positive psychology. And, finally, much of her argument was based on an ad hominem attack on Martin Seligman.

I found further evidence of this backlash in the lead article in the October 2010 issue of Harper’s by psychotherapist Gary Greenberg, “The War on Unhappiness: Goodbye Freud, Hello Positive Thinking.” Greenberg is the author of Manufacturing Depression, a book that came out earlier this year. In addition, he is a prolific writer who has published articles that bridge science, politics, and ethics in a number of leading magazines. So he’s got great credentials both as a psychologist and a writer. Yet, I found this particular article unsatisfying. At least, that was my reaction upon first reading. As I later read it a second time to write about it here, I got a clearer sense of what he was up to and found myself in substantial agreement with his overall thrust.

The stimulus for Greenberg’s piece appears to have been his attendance at the annual Evolution of Psychotherapy Conference in Anaheim earlier this year. He seems to take a pretty dyspeptic view of the whole event: “Wandering the conference, I am acquainted, or reacquainted, with Cognitive Behavioral Therapy, Ericksonian Hypnosis, Emotionally Focused Therapy, Focusing, Buddhist Psychology, Therapist Sculpting, Facilitating Gene Expression, and Meditative methods.” As a forty-year veteran of the California personal-growth/therapy scene myself, I know it’s easy to develop a jaundiced eye over time as a panoply of approaches comes and goes. Yet I have to say my own view, shaped by over 300 podcast interviews with psychologists across a broad spectrum of orientations, is that there is a developing consensus and that the differences between many approaches are relatively minor.

By contrast, Greenberg seems to go into despair.

As I say, it took two readings of Greenberg’s article to really get the overall sweep. On first reading, it seems to be a bit of a meander, beginning with some slighting anecdotes about Freud. Then we’re on to the Anaheim conference and some handwringing about the seeming tower of Babel created by the profusion of therapeutic approaches. This segues into a discussion of Rosenzweig’s 1936 “Dodo Bird Effect,” which asserts that therapeutic orientation doesn’t matter because all orientations work. As the Dodo pronounces in Alice in Wonderland, “Everyone has won and all must have prizes.” According to Greenberg, the Dodo Bird Effect has been borne out in subsequent studies, and the requisite common ingredient for therapeutic success is faith, both the client’s and the therapist’s.

Greenberg goes on to describe several of the presentations, most notably by Otto Kernberg, Scott D. Miller, David Burns, and Martin Seligman. Part of what put me off about this article on my first reading is that I have conducted in-depth interviews with the first three of these gentlemen and I would not have recognized them from Greenberg’s somewhat muddled account.

Otto Kernberg, MD, one of the grand old men of psychoanalysis, is characterized as intoning “the old mumbo jumbo about the Almost Untreatable Narcissistic Patient…” In my opinion, this really slights his lifetime commitment to research, his many contributions to object relations theory, and his role as Director of The Institute for Personality Disorders at the Cornell Medical Center.  In my interview with Dr. Kernberg, I was struck by the flexibility of this octogenarian in incorporating the findings of neuroscience, genetics, and even cognitive behavioral therapy into his thinking.

Greenberg seems to use Dr. Scott D. Miller’s research as supporting the Dodo Bird effect. I attended a daylong workshop with Scott Miller a few years ago and it was one of the best presentations I’ve ever seen. I also interviewed him for one of my podcasts. The key takeaway for me from Scott Miller’s work is that the Dodo Bird effect shows up only when therapeutic effectiveness is averaged across therapists. That is, on average, all psychotherapies are moderately effective. However, Miller reports that not all therapists are equally effective and that, if you look at therapists who are consistently rated as effective by their clients vs. therapists who are consistently rated as ineffective, then therapy emerges as a highly worthwhile enterprise.

As Miller said in my interview with him, “If the consumer is able to feed back information to the system about their progress, whether or not progress is being made, those two things together can improve outcomes by as much as 65%.”

As I say, I had difficulty recognizing Miller in Greenberg’s account. Evidently, Greenberg is critical of Miller for having developed a standardized set of rating scales for clients to provide feedback to their therapists. Greenberg sees these scales as playing into the hands of managed care and the trend towards “manualized” therapies. However, in my interview with Miller, he is very clearly critical of managed care, at least in terms of its emphasis on particular treatments for particular diagnostic categories. As Miller said in his interview with me, “If there were inter-rater reliability that would be one thing; the major problem with the DSM is that it lacks validity, however. That these groupings of symptoms actually mean anything… and that data is completely lacking… We are clustering symptoms together much the way medicine did in the medieval period: this is the way we treated people and thought about people when we talked about them being phlegmatic for example; or the humors that they had. Essentially they were categorizing illnesses based on clusters of symptoms.”

I also had difficulty recognizing Stanford psychiatry professor David Burns from Greenberg’s summary of the session he attended with Burns.  In short, Greenberg portrays Burns, who has developed a Therapist’s Toolkit inventory, as wishing to replace “open-ended conversation with a five-item test… to take an X-ray of our inner lives.” This runs counter to my experience of Burns, who said, for example, in my interview with him about his cognitive therapy approach to couples work: “…cognitive therapy has become probably the most widely practiced and researched form of psychotherapy in the world. But I really don’t consider myself a cognitive therapist or any other school of therapy; I’m in favor of tools, not schools of therapy. I think all the schools of therapy have had important discoveries and important angles, but the problem is they are headed up by gurus who push too hard trying to say cognitive therapy is the answer to everything, or rational emotive therapy is the answer to everything, or psychoanalysis is the answer to everything. And that is reductionism, and kind of foolish thinking to my point of view.” This hardly sounds like someone who thinks he’s invented a paper-and-pencil test that will be the end-all of psychotherapy.

And then Greenberg goes on to skewer positive psychology, which is what drew me to his article in the first place. After all, the title “The War on Unhappiness” seems to promise that. Like Ehrenreich, however, Greenberg’s critique is largely an ad hominem attack on Seligman. For example, referring to his earlier work subjecting dogs to electric shock boxes to study learned helplessness, Greenberg characterizes Seligman as, “More curious about dogs than about the people who tortured them…” He goes on to recount Seligman’s presentation to the CIA on learned helplessness which became the basis for enhanced “interrogation” techniques in Iraq. Now, we are told Seligman is working with the U.S. Army to teach resilience to our troops. In Greenberg’s view, Seligman would have us going his dogs one better by “thriving on the shocks that come our way rather than merely learning to escape them.”

So, it turns out that Greenberg’s attack on positive psychology is rather incidental to his larger concern which turns out to be that clinical psychology has sold its soul to the evidence-based, managed-care lobby in order to feed at the trough of medical reimbursement.

Greenberg’s article is a circular ramble that begins with slighting references to Freud and psychoanalysis and then ends with Freud as the champion of doubt.

It took me two readings to see that Greenberg is essentially using Miller, Burns, and Seligman as foils to attack smug certainty and blind optimism, the enemies of doubt. Of himself, Greenberg concludes, “I’m wondering now why I’ve always put such faith in doubt itself, or, conversely, what it is about certainty that attracts me so much, that I have spent twenty-seven years, thousands of hours, millions of other people’s dollars to repel it.”

Greenberg evidently values the darker side, the questions, the unknown, the mystery. “Even if Freud could not have anticipated the particulars – the therapists-turned-bureaucrats, the gleaming prepackaged stories, the trauma-eating soldiers-he might have deduced that a country dedicated in its infancy to the pursuit of happiness would grow up to make it a compulsion. He might have figured that American ingenuity would soon, maybe within a century, find a way to turn his gloomy appraisal of humanity into a psychology of winners.”

I think I’m in agreement with at least some of Greenberg’s larger argument. My fear, however, is that the general reader will come away with the impression that psychotherapists don’t know what they are doing and that the whole enterprise is a waste of time and money. That would be too bad, both because I don’t think it’s true and because I don’t think Greenberg does either.

I encourage you to find Greenberg’s article and to post your own reactions here in the comments area.

I had planned to stake out my own position on positive psychology in response to the critiques of Ehrenreich and Greenberg. It’s looking like there may need to be a Part 3. Stay tuned!

Filed Under: Practice Based Evidence Tagged With: Barbara Ehrenreich, evidence based practice, gary greenberg, healthcare, Manufacturing Depression, mental health, psychology today

Pushing the Research Envelope: Getting Researchers to Conduct Clinically Meaningful Research

November 5, 2010 By scottdm Leave a Comment


At the recent ACE conference, I had the pleasure of learning from the world’s leading experts on expertise and top performance.  Equally stimulating were the hallway conversations between presentations with clinicians, policy makers, and researchers attending the event.  One of those was with Bill Andrews, the director of the HGI Practice Research Network in the UK, whose work over the last 3+ years has focused on clinicians whose outcomes consistently fall in the top quartile of effectiveness.

In this brief interview, Bill talks about the “new direction” his research on top performing clinicians is taking.  He is truly “pushing the research envelope,” challenging the field to move beyond simplistic randomized clinical trials comparing different treatment packages.  Take a look:

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, continuing education, evidence based practice, icce

Am-ACE-ing Events in Kansas City: The First International Achieving Clinical Excellence Conference

October 27, 2010 By scottdm Leave a Comment

Here’s a riddle for you:

What do therapists, researchers, case managers, magicians, surgeons, award winning musicians, counselors, jugglers, behavioral health agency directors, and balloon twisting artists have in common?

Answer:

They all participated in the first “Achieving Clinical Excellence” conference, held last week in Kansas City, Missouri.

It’s true. The “motley” crew of presenters, entertainers, and attendees came to Kansas City to learn the latest evidence-based strategies for helping clinicians achieve their “personal best” and, in the process, improve the quality and outcome of behavioral health services.  Not only did participants and presenters come from all over the globe–Australia, New Zealand, Norway, Sweden, Denmark, Austria, the UK, Ireland, Scotland, Germany, Canada, Holland, and elsewhere–but ICCE web 2.0 technology was used to stream many of the presentations live to a worldwide audience (click on the link to watch the recordings).

“The atmosphere was positively electric,” one participant remarked to me on break, “and so friendly.   First, I was inspired.  Each presentation contained something new, a take-away.  Then I wanted to sit with other attendees and discuss the content.”

And thanks to “Gillis for Children and Families,” who not only sponsored and ran the event, but provided a full breakfast and lunch each day of the conference, participants had ample opportunity to meet, process, and network with each other.


Rich Simon                       Anders Ericsson                     Michael Ammar

Rich Simon, Ph.D., the editor of the Psychotherapy Networker, kicked off the event, using his time at the podium to place the conference’s emphasis on excellence within the broader history of the field of psychotherapy.  He was followed by K. Anders Ericsson, the editor of the influential Cambridge Handbook of Expertise and Expert Performance, who reviewed research on expert performance gathered over the last three decades.  Scott D. Miller, Ph.D., translated existing research on expert performance into steps for improving outcomes in behavioral health. On day 2, professional magician Michael Ammar delivered a stunning performance of close-up magic while teaching a specific method of deliberate practice that clinicians can use to improve their skills.  Meanwhile, breakout sessions led by psychologists, physicians, counselors, pharmacists, and agency directors addressed “nuts and bolts” applications.

Rachel Hsu                                                  Roger Shen

In between each plenary and breakout session, top performers from a variety of fields entertained and inspired.  Moving performances on the violin and piano by nine-year-old Rachel Hsu and eleven-year-old Roger Shen amazed and challenged everyone in attendance.  “It is not talent,” Rachel told me, “It’s a lot of hard work–4 to 5 hours a day, every day of the week, including weekends.”  The take-home lesson from these exceptional kids was clear: there are no shortcuts when it comes to top performance.  If you want to achieve your personal best you must work hard.  Promises otherwise are so much snake oil.

On Thursday evening, the Australian classical pianist David Helfgott, whose life story was the subject of the award-winning film “Shine,” entertained conference attendees.  His wife, Gillian, introduced him and provided the audience with a brief history of David’s life, his unfortunate treatment in the mental health system, and their long marriage.  The audience rose to their feet in a standing ovation at the conclusion of the performance.  There were few dry eyes in the house.  Afterwards, the two spent nearly an hour meeting and greeting attendees personally.  Once again, portions of the performance were broadcast live via ICCE web 2.0 technology to a worldwide audience.

The inspiration that conference attendees felt continues on the International Center for Clinical Excellence web-based community.  Join us as we work to help each other achieve our personal best.  Still looking for inspiration?  Take a look at the following two videos; first, a montage of events at ACE; and second, Mr. Ah’ Lee Robinson, the director of the Kansas City Boys Choir, whose story and performance brought the conference to a moving conclusion.

Filed Under: Behavioral Health, Conferences and Training, excellence Tagged With: cdoi, evidence based practice, holland, icce

What is "Best Practice?"

October 20, 2010 By scottdm Leave a Comment

You have to admit the phrase “best practice” is the buzzword of late. Graduate school training programs, professional continuing education events, policy and practice guidelines, and funding decisions are tied in some form or another to the concept. So, what exactly is it? At the State and Federal level, lists of so-called “evidence-based” interventions have been assembled and are being disseminated. In lockstep, as I reviewed recently, are groups like NICE. Their message is simple and straightforward: best practice is about applying specific treatments to specific disorders.
Admittedly, the message has a certain “common sense” appeal.  The problem, of course, is that behavioral health interventions are not the psychological equivalent of penicillin. In addition to the numerous studies highlighted on this blog documenting the failure of the “specific treatments for specific disorders” perspective, consider research published in the Spring 2010 edition of the Journal of Counseling and Development by Scott Nyman, Mark Nafziger, and Timothy Smith. Briefly, the authors examined outcome data to “evaluate treatment effectiveness across counselor training level [and found] no significant outcome differences between professional staff and… interns, and practicum students” (p. 204). Although the researchers are careful to include all the customary qualifications, the conclusion–especially when combined with years of similar findings reported in the literature–is difficult to escape: counseling and psychotherapy are highly regulated activities requiring years of expensive professional training that ultimately fails to make the practitioner any better than they were at the outset.
What gives? Truth is, the popular conceptualization of “best practice” as a “specific treatment for a specific disorder” is hopelessly outdated. In a report few have read, the American Psychological Association (following the lead of the Institute of Medicine) redefined evidence-based, or best practice, as, “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.” Regarding the phrase “clinical expertise” in this definition, the Task Force stated, “Clinical expertise…entails the monitoring of patient progress (and of changes in the patient’s circumstances—e.g., job loss, major illness) that may suggest the need to adjust the treatment (Lambert, Bergin, & Garfield, 2004a). If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate” (p. 273; emphasis included in the original text).
Said another way, instead of choosing the “specific treatment for the specific disorder” from a list of approved treatments, best practice is:
  • Integrating the best evidence into ongoing clinical practice;
  • Tailoring services to the consumer’s characteristics, culture, and preferences;
  • Formal, ongoing, real-time monitoring of progress and the therapeutic relationship.
In sum, best practice is Feedback Informed Treatment (FIT)—the vision of the International Center for Clinical Excellence. And right now, clinicians, researchers, and policy makers are learning, sharing, and discussing the implementation of FIT in treatment settings around the globe on the ICCE web-based community.
Word is getting out. As just one example, consider Accreditation Canada, which recently identified FIT as a “leading practice” for use in behavioral health services. According to the website, leading practices are defined as “creative, evidence-based innovations [that] are commendable examples of high quality leadership and service delivery.” The accreditation body identified FIT as a “simple, measurable, effective, and feasible outcome-based accountability process,” stating that the approach is a model for the rest of the country! You can read the entire report here.
How exactly did this happen? Put bluntly, people and hard work. ICCE senior associates and certified trainers Rob Axsen and Cynthia Maeschalck, with the support and backing of Vancouver Coastal Health, worked tirelessly over the last 5 years both implementing FIT and working to gain recognition for it. Similar recognition is taking place in the United States, Denmark, Sweden, England, and Norway.
You can help. The next time someone—be it a colleague, trainer, or researcher—equates “best practice” with using a particular model or a list of “approved treatment approaches,” share the real, official, “approved” definition noted above.  Second, join Rob, Cynthia, and the hundreds of other practitioners, researchers, and policy makers on the ICCE who are helping to reshape behavioral health practice worldwide.

Filed Under: Behavioral Health, evidence-based practice, ICCE, Practice Based Evidence Tagged With: Accreditation Canada, American Psychological Association (APA), cdoi, Cochrane Review, evidence based practice, icce, NICE

So you want to be a better therapist? Take a hike!

July 16, 2010 By scottdm Leave a Comment

How best to improve your performance as a clinician?  Take the continuing education multiple-choice quiz:

a. Attend a two-day training;
b. Have an hour of supervision from a recognized expert in a particular treatment approach;
c. Read a professional book, article, or research study;
d. Take a walk or nap.

If you chose a, b, or c, welcome to the world of average performance!  As reviewed on my blog (March 2010), there is exactly zero evidence that attending a continuing education event improves performance.  Zero.  And supervision?  In the most recent review of the research, researchers Beutler et al. (2005) concluded, “Supervision of psychotherapy cases has been the major method of ensuring that therapists develop proficiency and skill…unfortunately, studies are sparse…and apparently, supervisors tend to rate highly the performance of those who agree with them” (p. 246).  As far as professional books, articles, and studies are concerned–including those for which a continuing education or “professional development” point may be earned–the picture is equally grim.  No evidence.  That leaves taking a walk or nap!

K. Anders Ericsson–the leading researcher in the area of expertise and expert performance–points out that the type and intensity of practice required to improve performance “requires concentration that can be maintained only for limited periods of time.”  As a result, he says, “expert performers from many domains engage in practice without rest for only around an hour…The limit…holds true for a wide range of elite performers in different domains…as does their increased tendency to take recuperative naps” (p. 699, Ericsson, 2006).  By the way, Ericsson will deliver a keynote address at the upcoming “Achieving Clinical Excellence” conference.  Sign up now for this event to reserve your space!


Two recently released studies add to the evidence base on rest and expertise.  The first, conducted at the University of California, Berkeley by psychologist Matthew Walker, found that a midday nap markedly improved the brain’s learning capacity.  The second, published last week in the European Journal of Developmental Psychology, found that simply taking a walk–one where you are free to choose the speed–similarly improved performance on complex cognitive tasks.

So, there you go.  I’d say more but I’m feeling sleepy.

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, excellence Tagged With: cdoi, European Journal of Developmental Psychology, evidence based practice, K. Anders Erickson, professional development, psychotherapy, supervision

Feedback Informed Treatment (FIT): A Worldwide Trend in Behavioral Health

July 14, 2010 By scottdm Leave a Comment

In my prior blogpost, I reviewed exciting developments taking place throughout Canada regarding “feedback-informed treatment” (FIT).  For those following me on Twitter–and if you’re not, please do so by clicking on the link–you already know that last week I was in Tunbridge, England for a two-day training sponsored by the Kent-Medway National Healthcare Trust on “Supershrinks: Learning from the Field’s Most Effective Practitioners.”  Interest in outcomes is growing exponentially, becoming a worldwide phenomenon.

It was a real pleasure being asked to work with the dedicated–and I must say, long-suffering–physicians, psychologists, counselors, social workers, and nurses of the NHS Trust.  I say “long-suffering” because these healthcare professionals, like others around the globe, are laboring to provide effective services while contending with a back breaking amount of paperwork, oversight, mandated treatment protocols, and regulation.

Much of the mess that behavioral health practitioners find themselves in is due to the way “good practice” is and has been conceptualized.  Simply put, the field–its researchers, visionaries, policy makers, and sadly, many clinicians–is still stuck in the penicillin era, promoting specific treatments for specific disorders.  The result has been a growing list of protocols, fidelity and adherence measures, and other documentation requirements.  As Bohanske and Franczak point out in their excellent chapter on transforming behavioral health in the latest edition of The Heart and Soul of Change: Delivering What Works in Therapy, “The forms needed to obtain a marriage certificate, buy a new home, lease an automobile, apply for a passport, open a bank account, and die of natural causes…altogether…weigh 1.4 ounces.  By contrast, the paperwork required for enrolling a single mother in counseling to talk about difficulties her child [is] experiencing [weighs] 1.25 pounds” (p. 300).

Something has to change, and that something is the incessant focus on controlling the process–the “how”–of treatment.  Instead, as the video interview below illustrates, emphasis can be placed on outcome.  Doing so will not only simplify oversight and regulation but, as an increasing number of studies show, also improve the “FIT” and effectiveness of the services offered.

 

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT Tagged With: behavioral health, bohanske, Canada, cdoi, England, evidence based practice, feedback informed treatment, franzcak, icce, Kent-Medway National Healthcare Trust, randomized clinical trial

After the Thrill is Gone: Sustaining a Commitment to Routinely Seeking Feedback

May 8, 2010 By scottdm Leave a Comment


Helsingor Castle (the setting for Shakespeare’s Hamlet)

Dateline: May 8th, 2010, Helsingor, Denmark

This weekend I’m in Denmark doing a two-day workshop on “Supershrinks” sponsored by Danish psychologist and ICCE Senior Associate and Trainer Susanne Bargmann.  Just finished the first day with a group of 30 talented clinicians working diligently to achieve their personal best.  The challenge, I’m increasingly aware, is sustaining a commitment to seeking client feedback over time once the excitement of a workshop is over.  On the surface, the idea seems simple: ask the consumer.  In practice however, it’s not easy.  The result is that many practitioners who are initially enthusiastic lose steam, eventually setting aside the measures.  It’s a serious concern given that available evidence documents the dramatic impact of routine outcome and alliance monitoring on outcome and retention in behavioral health.

Support of like-minded colleagues is one critical key for sustaining commitment “after the thrill is gone.”  Where can you find such people?  As I blogged about last week, over a thousand clinicians are connecting, sharing, and supporting each other on the web-based community of the International Center for Clinical Excellence.  (If you’re not already a member, click here to request your own personal–and free–invitation to join the conversation.)

In the brief interview above, Susanne identifies a few additional steps that practitioners and agencies can take for making the process of seeking feedback successful over the long haul.  By the way, she’ll be covering these principles and practices in detail in an afternoon workshop at the upcoming Achieving Clinical Excellence conference.  Don’t miss it!

Filed Under: Conferences and Training, excellence, Feedback Informed Treatment - FIT Tagged With: addiction, behavioral health, evidence based practice, Therapist Effects

Finding Feasible Measures for Practice-Based Evidence

May 4, 2010 By scottdm Leave a Comment

Let’s face it.  Clinicians are tired.  Tired of paperwork (electronic or otherwise).  When I’m out and about training–which is every week, by the way–encouraging therapists to monitor and measure outcomes in their daily work, few disagree in principle.  The pain is readily apparent, however, the minute the paper version of the Outcome Rating Scale flashes on the screen of my PowerPoint presentation.

It’s not uncommon nowadays for clinicians to spend 30-50% of their time completing intake, assessment, treatment planning, insurance, and other regulatory forms.  Recently, I was in Buffalo, New York working with a talented team of children’s mental health professionals.  It was not uncommon, I learned, to spend most of two outpatient visits doing the required paperwork.  When one considers that the modal number of sessions consumers attend is 1 and the average is approximately 5, it’s hard not to conclude that something is seriously amiss.

Much of the “fear and loathing” dissipates when I talk about the time it usually takes to complete the Outcome and Session Rating Scales.  On average, filling out and scoring the measures takes about a minute apiece.  Back in January, I blogged about research on the ORS and SRS, including a summary in PDF format of all studies to date.  The studies make clear that the scales are valid and reliable.  Most important, however, for day-to-day clinical practice, the ORS and SRS are also the most clinically feasible measures available.

Unfortunately, many of the measures currently in use were never designed for routine clinical practice–certainly few therapists were consulted.  In order to increase “compliance” with such time-consuming outcome tools, many agencies advise clinicians to complete the scales occasionally (e.g., at “prime-numbered” sessions [5, 7, 11, and so on]) or only at the beginning and end of treatment.  The very silliness of such ideas will be immediately apparent to anyone who has ever actually conducted treatment.  Who can predict a consumer’s last session?  Can you imagine a similar policy ever flying in medicine?  Hey Doc, just measure your patient’s heart rate at the beginning and end of the surgery!  In between?  Fahgetaboutit.  Moreover, as I blogged about from behind the Icelandic ash plume, the latest research strongly favors routine measurement and feedback.  In real-world clinical settings, feasibility is every bit as important as reliability and validity.  Agency managers, regulators, and policy makers ignore it at their own (and their data’s) peril.

How did the ORS and SRS end up so brief and without any numbers?  When asked at workshops, I usually respond, “That’s an interesting story.”  And then continue, “I was in Israel teaching.  I’d just finished a two day workshop on ‘What Works.'” (At the time, I was using and recommending the 10-item SRS and 45-item OQ).

“The audience was filing out of the auditorium and I was shutting down my laptop when the sponsor approached the dais.  ‘Scott,’ she said, ‘one of the participants has a last question…if you don’t mind.'”

“Of course not,” I immediately replied.

“His name is Haim Omer.  Do you know of him?”


Dr. Haim Omer

“Know him?” I responded, “I’m a huge fan!”  And then, feeling a bit weak in the knees asked, “Has he been here the w h o l e time?”

Haim was as gracious as ever when he finally made it to the front of the room.  “Great workshop, Scott.  I’ve not laughed so hard in a long time!”  But then he asked me a very pointed question.  “Scott,” he said and then paused before continuing, “you complained a bit about the length of the two measures you are using.  Why don’t you use a visual analog scale?”

“That’s simple Haim,” I responded, “It’s because I don’t know what a visual analog measure is!”

Haim described such scales in detail, gave me some examples (e.g., smiley and frowny faces), and even provided references.  My review on the flight home reminded me of a simple neuropsychological assessment scale I used on internship called “The Line Bisection Task”–literally a straight line (a measure developed by my neuropsych supervisor, Dr. Tom Schenkenberg).   And the rest is, as they say, history.
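For readers curious about the mechanics: a visual analog measure like the ORS is scored simply by measuring where the client marks each line.  The sketch below illustrates the idea in Python.  The four 10 cm items, the 0-40 total, and the commonly published adult clinical cutoff of 25 are drawn from the ORS literature; the function and variable names themselves are illustrative, not part of any official scoring software.

```python
# Minimal sketch of scoring a four-item visual analog measure such as the ORS.
# Each item is a 10 cm line; the client's mark is measured in centimeters from
# the left edge, giving an item score of 0-10 and a total score of 0-40.
# The 25-point adult cutoff is the commonly published value; names are illustrative.

ADULT_CLINICAL_CUTOFF = 25.0

def score_ors(marks_cm):
    """Sum four item marks (each 0-10 cm) into a 0-40 total score."""
    if len(marks_cm) != 4:
        raise ValueError("The ORS has exactly four items.")
    for mark in marks_cm:
        if not 0.0 <= mark <= 10.0:
            raise ValueError("Each mark must fall on the 10 cm line (0-10).")
    return sum(marks_cm)

def in_clinical_range(total, cutoff=ADULT_CLINICAL_CUTOFF):
    """Totals below the cutoff suggest clinically significant distress."""
    return total < cutoff

total = score_ors([4.5, 6.0, 5.5, 7.0])
print(total, in_clinical_range(total))  # 23.0 True
```

The brevity is the point: a ruler, four lines, and one sum replace a 45-item questionnaire.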

Filed Under: deliberate practice, excellence, Feedback Informed Treatment - FIT Tagged With: continuing education, Dr. Haim Omer, Dr. Tom Schenkenberg, evidence based practice, icce, ors, outcome rating scale, session rating scale, srs

Feedback, Friends, and Outcome in Behavioral Health

May 1, 2010 By scottdm Leave a Comment


My first year in college, my declared major was accounting.  What can I say?  My family didn’t have much money and my mother–who chose my major for me–thought that the next best thing to wealth was being close to money.

Much to her disappointment I switched from accounting to psychology in my sophomore year.  That’s when I first met Dr. Michael Lambert.


Michael J. Lambert, Ph.D.

It was 1979 and I was enrolled in a required course taught by him on “tests and measures.”  He made an impression to be sure.  He was young and hip–the only professor I met while earning my Bachelor’s degree who insisted the students call him by his first name.  What’s more, his knowledge and passion made what everyone considered the “deadliest” class in the entire curriculum seem positively exciting.  (The text, Cronbach’s classic Essentials of Psychological Testing, 3rd Edition, still sits on my bookshelf–one of the few from my undergraduate days).  Within a year, I was volunteering as a “research assistant,” reading and then writing up short summaries of research articles.

Even then, Michael was concerned about deterioration in psychotherapy.  “There is ample evidence,” he wrote in his 1979 book, The Effects of Psychotherapy (Volume 1), “that psychotherapy can and does cause harm to a portion of those it is intended to help” (p. 6).  And where the entire field was focused on methods, he was hot on the trail of what later research would firmly establish as the single largest source of variation in outcome: the therapist.  “The therapist’s contribution to effective psychotherapy is evident,” he wrote, “…training and selection on dimensions of…empathy, warmth, and genuineness…is advised, although little research supports the efficacy of current training procedures.”  In a passage that would greatly influence the arc of my own career, he continued, “Client perception…of the relationship correlate more highly with outcome than objective judges’ ratings” (Lambert, 1979, p. 32).

Fast forward 32 years.  Recently, Michael sent me a pre-publication copy of a mega-analysis of his work on using feedback to improve outcome and reduce deterioration in psychotherapy.  Mega-analysis combines original, raw data from multiple studies–in this case 6–to create a large, representative data set of the impact of feedback on outcome.  In his accompanying email, he said, “our new study shows what the individual studies have shown.”  Routine, ongoing feedback from consumers of behavioral health services not only improves overall outcome but reduces risk of deterioration by nearly two thirds!    The article will soon appear in the Journal of Consulting and Clinical Psychology.
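For the curious, the logic of a mega-analysis can be sketched in a few lines: rather than averaging published effect sizes (as meta-analysis does), raw, client-level records from each study are pooled into one data set before any rates are computed.  The numbers below are invented placeholders for illustration only, not Lambert’s actual data.

```python
# Illustrative sketch of the "mega-analysis" idea: pool raw, client-level
# records from several studies, then compare deterioration rates for clients
# whose therapists did vs. did not receive routine outcome feedback.
# All data below are invented placeholders, not Lambert's data.

def pooled_deterioration_rates(studies):
    """studies: list of studies, each a list of (got_feedback, deteriorated) booleans."""
    pooled = [record for study in studies for record in study]  # raw-data pooling
    def rate(feedback_flag):
        group = [deteriorated for fb, deteriorated in pooled if fb is feedback_flag]
        return sum(group) / len(group)
    return rate(True), rate(False)

# Two tiny hypothetical studies; each tuple is one client:
study_a = [(True, False), (True, False), (False, True), (False, False)]
study_b = [(True, True), (True, False), (False, True), (False, False)]

feedback_rate, no_feedback_rate = pooled_deterioration_rates([study_a, study_b])
print(feedback_rate, no_feedback_rate)  # 0.25 0.5
```

Because the pooled data set is larger and more representative than any single study, rarer events like deterioration can be estimated with far more precision.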

Such results were not available when I first began using Lambert’s measure–the OQ 45–in my clinical work.  It was late 1996.  My colleagues and I had just put the finishing touches on Escape from Babel, our first book together on the “common factors.”

That’s when I received a letter from my colleague and mentor, Dr. Lynn Johnson.


Lynn D. Johnson, Ph.D.

In the envelope was a copy of an article Lynn had written for the journal Psychotherapy entitled, “Improving Quality in Psychotherapy,” in which he argued for the routine measurement of outcome in psychotherapy.  He cited three reasons: (1) providing proof of effectiveness to payers; (2) enabling continuous analysis and improvement of service delivery; and (3) giving consumers voice and choice in treatment.  (If you’ve never read the article, I highly recommend it–if for no other reason than its historical significance.  I’m convinced that the field would be in far better shape now had Lynn’s suggestions been heeded then).

Anyway, I was hooked.  I soon had a bootleg copy of the OQ and was using it in combination with Lynn’s Session Rating Scale with every person I met.

It wasn’t always easy.  The measure took time, and more than a few of my clients had difficulty reading and comprehending the items.  I was determined, however, and so persisted, occasionally extending sessions to 90 minutes so the client and I could read and score the 45 items together.

Almost immediately, routinely measuring and talking about the alliance and outcome had an impact on my work.  My average number of sessions began slowly “creeping up” as the number of single-session therapies, missed appointments, and no shows dropped.  For the first time in my career, I knew when I was and was not effective.  I was also able to determine my overall success rate as a therapist.  These early experiences also figured prominently in development of the Outcome Rating Scale and revision of the Session Rating Scale.

More on how the two measures–the OQ 45 and original 10-item SRS–changed from lengthy Likert scales to short, 4-item visual analog measures later.  At this point, suffice it to say I’ve been extremely fortunate to have such generous and gifted teachers, mentors, and friends.

Filed Under: Feedback Informed Treatment - FIT Tagged With: behavioral health, cdoi, continuing education, evidence based practice, holland, icce, Michael Lambert, Paychotherapy, public behavioral health

Where Necessity is the Mother of Invention: Forming Alliances with Consumers on the Margins

April 11, 2010 By scottdm 3 Comments

Spring of last year, I traveled to Gothenburg, Sweden to provide training for GCK–a top-notch organization, led by Ulla Hansson and Ulla Westling-Missios, that offers cutting-edge training on “what works” in psychotherapy.  I’ll be back this week again doing an open workshop and an advanced training for the group.

While I’m always excited to be out and about traveling and training, being in Sweden is special for me.  It’s like my second home.  My family roots are Swedish and Danish and, it just so happens, I speak the language.  Indeed, I lived and worked in the country for two years back in the late seventies.  If you’ve never been, be sure and put it on your short list of places to visit…

AND IMPORTANTLY, go in the Summer!  (Actually, the photos above are from the famous “Ice Hotel”–that’s right, a hotel completely made of ice.  The lobby, bar, chairs, beds.  Everything!  If you find yourself in Sweden during the winter months, it’s a must see.  I promise you’ll never forget the experience).

Anyway, the last time I was in Gothenburg, I met a clinician whose efforts to deliver consumer-driven and outcome-informed services to people on the margins of society were truly inspiring.   During one of the breaks at the training, therapist Jan Larsson introduced himself, told me he had been reading my books and articles, and then showed me how he managed to seek and obtain feedback from the people he worked with on the streets.  “My work does not look like ‘traditional’ therapeutic work since I do not meet clients at an office.  Rather, I meet them where they live: at home, on a bench in the park, or sitting in the library or local activity center.”

Most of Jan’s clients have been involved with the “psychiatric system” for years and yet, he says, continue to struggle and suffer with many of the same problems they entered the system with years earlier.  “Oftentimes,” he observed, “a ‘treatment plan’ has been developed for the person that has little to do with what they think or want.”

So Jan began asking.  And each time they met, they also completed the ORS and SRS–“just to be sure,” he said.  No computer.  No iPhone app.  No sophisticated web-based administration system.  With a pair of scissors, he simply trimmed copies of the measures to fit in his pocket-sized appointment book.

His experience thus far?  In Swedish Jan says, “Det finns en livserfarenhet hos klienterna som bara väntar på att bli upptäckt och bli lyssnad till. Klienterna är så mycket mer än en diagnos. Frågan är om vi är nyfikna på den eftersom diagnosen har stulit deras livberättelse.”  Translated: “There is life experience with clients that is just waiting to be noticed and listened to.  Clients are so much more than their diagnosis.  The question is whether we are curious about them because the diagnosis has stolen their life story.”

I look forward to catching up with Jan and the crew at GCK this coming week.  I’ll also be posting interviews with Ulla and Ulla as well as ICCE certified trainers Gun-Eva Langdahl (who I’ll be working with in Skelleftea) and Gunnar Lindfeldt (who I’ll be meeting in Stockholm).  In the meantime, let me post several articles he sent by Swedish researcher Alain Topor on developing helpful relationships with people on the margins.  Dr. Topor was talking about the “recovery model” among people considered “severely and persistently mentally ill” long before it became popular here in the States.  Together with others, such as psychologist Jan Blomqvist (who I blogged about late last year), Alain’s work is putting the consumer at the center of service delivery.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT Tagged With: evidence based practice, Hypertension, Jan Blomqvist, ors, outcome rating scale, Pharmacology, psychotherapy, randomized clinical trial, recovery model, session rating scale, srs, sweden, Training

Improving Outcomes in the Treatment of Obesity via Practice-Based Evidence: Weight Loss, Nutrition, and Work Productivity

April 9, 2010 By scottdm 4 Comments

Obesity is a large and growing problem in the United States and elsewhere.  Data gathered by the National Center for Health Statistics indicate that 33% of Americans are obese.  When overweight people are added to the mix, the figure climbs to a staggering 66%!   The problem is not likely to go away soon or on its own, as the same figures apply to children.

Researchers estimate that weight problems are responsible for over 300,000 deaths annually and account for 12% of healthcare costs or 100 billion–that’s right, $100,000,000,000–in the United States alone.   The overweight and obese have higher incidences of arthritis, breast cancer, heart disease, colorectal cancer, diabetes, endometrial cancer, gallbladder disease, hypertension, liver disease, back pain, sleeping problems, and stroke–not to mention the tremendous emotional, relational, and social costs.  The data are clear: the overweight are the target of discrimination in education, healthcare, and employment.  A study by Brownell and Puhl (2003), for example, found that: (1) a significant percentage of healthcare professionals admit to feeling “repulsed” by obese persons, even among those who specialize in bariatric treatment; (2) parents provide less college support to their overweight than to their “thin” children; and (3) 87% of obese individuals reported that weight prevented them from being hired for a job.

Sadly, available evidence indicates that while weight problems are “among the easiest conditions to recognize,” they remain one of the “most difficult to treat.”  Weight loss programs abound.  When was the last time you watched television and didn’t see an ad for a diet pill, program, or exercise machine?  Many work.  Few, however, lead to lasting change.

What might help?

More than a decade ago, I met Dr. Paul Faulkner, the founder and then Chief Executive Officer of Resources for Living (RFL), an innovative employee assistance program located in Austin, Texas.  I was teaching a week-long course on outcome-informed work at the Cape Cod Institute in Eastham, Massachusetts.  Paul had long searched for a way of improving outcomes and service delivery that could simultaneously be used to provide evidence of the value of treatment to purchasers–in the case of RFL, the large, multinational companies that were paying him to manage their employee assistance programs.  Thus began a long relationship between me and the management and clinical staff of RFL.  I was in Austin, Texas dozens of times providing training and consultation as well as setting up the original ORS/SRS feedback system known as ALERT, which is still in use at the organization today.  All of the original reliability, validity, norming, and response trajectories were done together with the crew at RFL.

Along the way, RFL expanded services to disease management, including depression, chronic obstructive pulmonary disease, diabetes, and obesity.  The “weight management” program delivered coaching and nutritional consultation via the telephone, informed by ongoing measurement of outcomes and the therapeutic alliance using the SRS and ORS.  The results are impressive.  The study by Ryan Sorrell, a clinician and researcher at RFL, found that the program and feedback not only led to weight loss but also to significant improvements in distress, healthy eating behaviors (70%), exercise (65%), and presenteeism on the job (64%)–the latter being critical to the employers paying for the service.

Such research adds to the growing body of literature documenting the importance of “practice-based” evidence, making clear that finding the “right” or “evidence-based” approach for obesity (or any problem for that matter) is less important than finding out “what works” for each person in need of help.  With challenging, “life-style” problems, this means using ongoing feedback to inform whatever services may be deemed appropriate or necessary.  Doing so not only leads to better outcomes, but also provides real-time, real-world evidence of return on investment for those footing the bill.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, cognitive-behavioral therapy, conferences, continuing education, diabetes, disease management, Dr. Paul Faulkner, evidence based medicine, evidence based practice, Hypertension, medicine, obesity, ors, outcome rating scale, practice-based evidence, public behavioral health, randomized clinical trial, session rating scale, srs, Training

Problems in Evidence-Based Land: Questioning the Wisdom of "Preferred Treatments"

March 29, 2010 By scottdm Leave a Comment

This last week, Jeremy Laurance, Health Editor for the U.K. Independent published an article entitled, “The big question: Does cognitive therapy work? And should the NHS (National Health Service) provide more of it?” Usually such questions are limited to professional journals and trade magazines. Instead, it ran in the “Life and Style” section of one of Britain’s largest daily newspapers. Why?

In 2007, the government earmarked £173,000,000 (approximately 260,000,000 U.S. dollars) to train up an army of new therapists. Briefly, the money was allocated following an earlier report by Professor Richard Layard of the London School of Economics which found that a staggering 38% of illness and disability claims were accounted for by “mental disorders.” The sticking point—and part of the reason for the article by Laurance—is that training was largely limited to a single treatment approach: cognitive-behavioral therapy (CBT).  And research released this week indicates that the efficacy of the method has been seriously overestimated due to “publication bias.”
Researchers Cuijpers, Smith, Bohlmeijer, Hollon, and Andersson (2010) examined the “effect sizes” of 117 trials and found that the tendency of journals to accept trials that showed positive results and reject those with null or negative findings had inflated the apparent effectiveness of CBT; adjusting for this publication bias reduced the effect size by as much as 33 percent!

Combine such findings with evidence from multiple meta-analyses showing no difference in outcome between treatment approaches intended to be therapeutic, and one has to wonder why CBT continues to enjoy a privileged position among policy makers and regulatory bodies.  Despite the evidence, the governmental body in the UK responsible for reviewing research and making policy recommendations–the National Institute for Health and Clinical Excellence (NICE)–continues to advocate for CBT.  It’s not only unscientific, it’s bad policy.  Alas, when it comes to treatment methods, CBT enjoys what British psychologist Richard Wiseman calls the “get out of a null effect free” card.

What would work?  If the issue is truly guaranteeing effective treatment, the answer is measurement and feedback.  The single largest contributor to outcome is who provides the treatment, not what treatment approach is employed.  More than a dozen randomized clinical trials—the design of choice of NICE and SAMHSA—indicate that routine measurement and feedback improve outcomes and retention rates while decreasing costs—in many cases dramatically so.

I respectfully ask, “What is the hold up?”

Filed Under: Practice Based Evidence Tagged With: CBT, cdoi, cognitive-behavioral therapy, conferences, evidence based practice, icce, Jeremy Laurance, National Institute for Health and Clinical Excellence (NICE), randomized clinical trial, Richard Layard, Richard Wiseman

"What Works" in Holland: The Cenzo Experience

March 23, 2010 By scottdm 1 Comment

When it comes to healthcare, it can be said without risk of exaggeration that “revolution is in the air.”  The most sweeping legislation in history has just been passed in the United States.  Elsewhere, as I’ve been documenting in my blogs, countries, states, provinces, and municipalities are struggling to maintain quality while containing costs of the healthcare behemoth.

Back in January, I talked about the approach being taken in Holland where, in contrast to many countries, the government-run healthcare system was being jettisoned in favor of private insurance reimbursement.  Believe me, it is a change no less dramatic in scope and impact than what is taking place in the U.S.  At the time, I noted that Dutch practitioners were, in response, “’thinking ahead’, preparing for the change—in particular, understanding what the research literature indicates works as well as adopting methods for documenting and improving the outcome of treatment.” As a result, I’ve been traveling back and forth—at least twice a quarter–providing trainings to professional groups and agencies across the length and breadth of the country.

Not long ago, I was invited to speak at the 15th year anniversary of Cenzo—a franchise organization with 85 registered psychologist members.  Basically, the organization facilitates—some would say “works to smooth”–the interaction between practitioners and insurance companies.  In addition to helping with contracts, paperwork, administration, and training, Cenzo also has an ongoing “quality improvement” program consisting of routine outcome monitoring and feedback as well as client satisfaction metrics.  Everything about this forward-thinking group is “top notch,” including a brief film they made about the day and the workshop.  Whether you work in Holland or not, I think you’ll find the content interesting!  If you understand the language, click here to download the 15th year Anniversary Cenzo newsletter.

Filed Under: Feedback Informed Treatment - FIT Tagged With: behavioral health, cenzo, common factors, evidence based practice, holland, medicine, Therapist Effects

Outcomes in New Zealand

March 23, 2010 By scottdm Leave a Comment

Made it back to Chicago after a week in New Zealand providing training and consultation.  As I blogged about last Thursday, the last two days of my trip were spent in Christchurch providing a two-day training on “What Works” for Te Pou–New Zealand’s National Centre of Mental Health Research, Information, and Workforce Development.  Last year around this same time, I provided a similar training for Te Pou for managers and policy makers in Auckland.  News spread, and this year my contact at Te Pou, Emma Wood, brought the training to the south island.  It is such a pleasure to be involved with such a forward-thinking organization.

Long before I arrived, leadership at Te Pou were promoting outcome measurement and feedback.  Here’s a direct quote from their website:

Outcomes information can assist:

  • service users to use their own outcomes data to reflect on their wellbeing and circumstances, talk to clinicians about their support needs and inform their recovery plans
  • clinicians to use outcomes information to support their decision-making in day-to-day practice, monitoring change, better understanding the needs of the service user, and also to begin evaluating the effectiveness of different interventions
  • planners and funders to assess population needs for mental health services and assist with allocation of resources, policy, and mental health strategy developments through nationally aggregated data.

Indeed, using outcome to inform mental health service delivery is a key aspect of the Past, Present, and Future: Vision Paper–a review of “what works” in care and a plan for improving treatment in the future.  The site even publishes a quarterly newsletter, Outcomes Matter.  Take a few minutes and explore the Te Pou website.  While you are there, be sure and download the pamphlet entitled, “A Guide to Talking Therapies.”  As the title implies, this brief, easy-to-read text provides a no-nonsense guide to the various “talk therapies” for consumers (I took several copies home with me from the workshop).

Before ending, let me say a brief hello to the Clinical Practice Leaders from the Problem Gambling Foundation of New Zealand who attended the two-day training in Christchurch.  The dedicated staff use an integrated public health and clinical model and are working to implement ongoing measurement of outcome and consumer feedback into service delivery.  The Foundation’s website contains a free online library–including fact sheets, research, and books on problem gambling–that is an incredible resource for professionals and the public alike.  Following the workshop, the group sent a photo that was taken of us together.  From left to right, they are Wenli Zhang, me, Margaret Sloan, and Jude West.

Filed Under: Behavioral Health, Conferences and Training, excellence, Feedback Informed Treatment - FIT Tagged With: books, evidence based practice, medicine, New Zealand, randomized clinical trial, Te Pou, Therapist Effects

Addressing the Financial Crisis in Public Behavioral Healthcare Head On in Chesterfield, Virginia

March 5, 2010 By scottdm Leave a Comment

If you are following me on Twitter (and I hope you are), you know the last month has been extremely busy.  This week I worked with clinicians in Peterborough, Ontario Canada.  Last week, I was in Nashville, Tennessee and Richmond Virginia.  Prior to that, I spent nearly two weeks in Europe, providing training and consultations in the Netherlands and Belgium.

It was, as always, a pleasure meeting and working with clinicians representing a wide range of disciplines (social workers, case managers, psychologists, psychiatrists, professional counselors, alcohol and drug treatment professionals, etc.) and determined to provide the best service possible.  As tiring as “road work” can sometimes be, my spirits are always buoyed by the energy of the individuals, groups, and agencies I meet and work with around the world.

At the same time, I’d be remiss if I didn’t acknowledge the fear and hardship I’m witnessing among providers and treatment agencies each week as I’m out and about.  Frankly, I’ve never seen anything like it in my seventeen years “on the road.”  Being able to say that we predicted the current situation nearly 6 years ago provides little comfort (see The Heroic Client, 2004).

While nearly all are suffering, the economic crisis in the United States is hitting public behavioral health particularly hard.  In late January I blogged about the impact of budget cuts in Ohio.   Sadly, the situations in Virginia and Tennessee are no different.  Simply put, public behavioral health agencies are expected to do more with less, and most often with fewer providers.  What can be done?

Enter Chesterfield Community Service Board.  Several years ago, I met and began working with Larry Barnett, Lynn Hill, and the rest of the talented clinical staff at this forward-thinking public behavioral health agency.  Their goal?  According to the agency mission statement, “to promote improved quality of life…through exceptional and comprehensive mental health, mental retardation, substance abuse, and early intervention services.”  Their approach?  Measure and monitor the process and outcome of service delivery and use the resulting information to improve productivity and performance.

As Larry and Lynn report in the video below, the process was not easy.  Indeed, it was damn difficult–full of long hours, seemingly endless discussions, and tough, tough choices.  But that was then.  Some three years later, the providers at Chesterfield CSB are serving 70% more people than they did in 2007 despite there being no increase in available staff resources in the intervening period.  That’s right, 70%!  And that’s not all.  While productivity rates soared, clinician caseloads were reduced by nearly 30%.  As might be expected, the time consumers in need of services had to wait was also significantly reduced.

In short, everybody won: providers, agency managers, funders, and consumers.  And thanks to the two days of intensive training in Richmond, Virginia organized by Arnold Woodruff, many additional public behavioral health agencies have the information needed to get started.  It won’t be easy.  However, as the experience in Chesterfield demonstrates, it is possible to survive and thrive during these tumultuous times.  But don’t take my word for it, listen to how Larry and Lynn describe the process–warts and all–and the results:

Filed Under: Behavioral Health, CDOI, excellence, Feedback Informed Treatment - FIT Tagged With: behavioral health, brief therapy, cdoi, clinician caseloads, evidence based practice, healthcare, holland, Hyperlipidemia, meta-analysis, public behavioral health, randomized clinical trial

The Turn to Outcomes: A Revolution in Behavioral Health Practice

February 1, 2010 By scottdm Leave a Comment

Get ready.  The revolution is coming (if not already here).  Whether you are a direct service provider (psychologist, counselor, marriage and family therapist), agency, broker, or funder, you will be required to measure and likely report the outcomes of your clinical work.


Jay Lebow, Ph.D.

Just this month, Dr. Jay Lebow, a professor of psychology at the Family Institute at Northwestern University, published an article in the Psychotherapy Networker–the most widely circulated publication for practitioners in the world–where he claimed the field had reached a “tipping point.”  “Once a matter of interest only among a small circle of academics,” Dr. Lebow writes in his piece entitled, The Big Squeeze, “treatment outcome has now become a part of the national debate about healthcare reform.”


David Barlow, Ph.D.

The same sentiments were expressed in a feature article entitled, “Negative Effects from Psychological Treatments,” written by Dr. David Barlow in the January issue of the American Psychologist.  “Therapists,” he argues both eloquently and persuasively, “do not have to wait for the next clinical trial….[rather] clinicians [can act] as local clinical scientists…[using] outcome measures to track progress…rapidly becom[ing] aware of lack of progress or even deterioration” (p. 19).  What can I say, except that any practitioner with more than a few years to work before retirement should read these articles and then forward them to every practitioner they know.

During the Holidays, and just before the turn of the New Year, I blogged about the trend toward outcome measurement.  As readers will recall, I talked about my experience on a panel at the Evolution of Psychotherapy conference where Dr. Barlow–who, in response to my brief remarks about the benefits of feedback–surprised me by stating unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  And even though my work has focused almost exclusively on measuring and using outcomes to improve both retention in and the results of behavioral health for the last 15 years, I said his pronouncement frightened me–which, by the way, reminds me of a joke.

A sheep farmer is out in the pasture tending his flock–I promise this is clean, so read on–when from over a small hill comes a man in a custom-tailored, three-piece business suit.  In one hand, the businessman holds a calculator; in the other, an expensive leather briefcase.  “I have a proposition for you,” the well-clad man says as he approaches the farmer, and then continues, “if I can tell you how many sheep are in your flock, to the exact number, may I have one of your sheep?”  Though initially startled by the stranger’s abrupt appearance and offer, the farmer quickly gathers his wits.  Knowing there is no way the man could know the actual number of sheep (since many in his flock were out of sight in other pastures and several were born just that morning and still in the barn), the farmer quickly responds, “I’ll take that bet!”

Without a moment’s hesitation, the man calls out the correct number, “one thousand, three hundred and forty six,” then quickly adds, “…with the last three born this morning and still resting in the barn!”  Dumbfounded, the farmer merely motions toward his flock.  In response, the visitor stows his calculator, slings one of the animals up and across his shoulders and then, after retrieving his briefcase, begins making his way back up the hill.  Just as he nears the top of the embankment, the farmer finds his voice and calls out, “Sir, I have a counter proposal for you.”

“And what might that be?” the man replies, turning to face the farmer, who then asks, “If I can tell you, sir, what you do for a living, can I have my animal back?”

Always in the mood for a wager, the stranger replies, “I’ll take that bet!”  And then without a moment’s hesitation, the sheep farmer says, “You’re an accountant, a bureaucrat, a ‘bean-counter.'”  Now, it’s the businessman’s turn to be surprised.  “That’s right!” he says, and then asks, “How did you know?”

“Well,” the farmer answers, “because that’s my dog you have around your neck.”

The moral of the story?  Bureaucrats can count, but they can’t tell the difference between what is and is not important.  In my blogpost on December 24th, I expressed concern about the explosion of “official interest” in measuring outcomes.  As the two articles mentioned above make clear, the revolution has started.  There’s no turning back now.  The only question that remains is whether behavioral health providers will be present to steer measurement toward what matters.  Here, our track record is less than impressive (remember the ’80s and ’90s and the whole managed care revolution).  We had ample warning and did, well, nothing.  (If you don’t believe me, click here and read this article from 1986 by Dr. Nick Cummings.)

As my colleague and friend Peter Albert is fond of saying, “If you’re not at the table, you’re likely to be on the menu.”  So, what can the average clinician do?  First of all, if you haven’t already done so, begin tracking your outcomes.  Right here, on my website, you can download free, simple-to-use, valid, and reliable measures.  Second, advocate for measures that are feasible, client-friendly, and have an empirical track record of improving retention and outcome.  Third, and lastly, join the International Center for Clinical Excellence.  Here, clinicians from all over the globe are connecting, learning, and sharing their experiences about how to use ongoing measures of progress and alliance.  Most importantly, all are determined to lead the revolution.

Filed Under: Behavioral Health, CDOI, excellence, Feedback Informed Treatment - FIT Tagged With: brief therapy, evidence based practice, icce, Jay Lebow, medicine, post traumatic stress, psychotherapy networker, public behavioral health

Accountability in Behavioral Health: Steps for Dealing with Cutbacks, Shortfalls, and Tough Economic Conditions

January 25, 2010 By scottdm 3 Comments

As anyone who follows me on Facebook knows, I get around.  In the past few months, I visited Australia, Norway, Sweden, and Denmark (to name but a few countries), and criss-crossed the United States.  If I were asked to sum up the state of public behavioral health agencies in a single word, the word–with very few exceptions–would be: desperate.  Between the unfunded mandates and funding cutbacks, agencies are struggling.

Not long ago, I blogged about the challenges facing agencies and providers in Ohio.  In addition to reductions in staffing, those in public behavioral health are dealing with increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  The one bright spot in this otherwise frightening climate is outcomes.  Several counties in Ohio have adopted the ORS and SRS and have been using them to improve the effectiveness and efficiency of behavioral health services.

I’ve been working with the managers and providers in both Marion and Crawford counties for a little over two years.  Last year, the agencies endured significant cuts in funding.  As a result, they were forced to eliminate a substantial number of positions.  Needless to say, it was a painful process with no upsides–except that, as a result of using the measures, the dedicated providers had so improved the effectiveness and efficiency of treatment that they were able to absorb the loss of staff without having to cut services to clients.

The agencies cite four main findings resulting from the work we’ve done together over the last two years.  In their own words:

  1.  Use of FIT has enabled us to be more efficient, which is particularly important given Ohio’s economic picture and the impact of State budget cuts. Specifically, FIT is enabling service providers and supervisors to identify consumers who are not progressing in the treatment process much earlier. This allows us to change course sooner when treatment is not working, to know if changes work, to identify consumers in need of a different level of care, etc.  FIT also provides data on which the provider and consumer can base decisions about the intensity of treatment and treatment continuation (i.e., when to extend time between services or when the episode of service should end). In short, our staff and consumers are spending much less time “spinning their wheels” in unproductive activities.  As a result, we have noticed more “planned discharges” versus clients just dropping out of treatment.
  2. FIT provides aggregate effect size data for individual service providers, for programs, and for services, based on data from a valid and reliable outcome scale. Effect sizes are calculated by comparing our outcome data to a large national database. Progress achieved by individual consumers is also compared to this national database. For the first time, we can “prove” to referral sources and funding sources that our treatment works, using data from a valid and reliable scale. Effect size data also has numerous implications for supervision, and supervision sessions are more focused and productive.
  3.  Use of the SRS (session rating scale) is helping providers attend to the therapeutic alliance in a much more deliberate manner. As a result, we have noticed increased collaboration between consumer and provider, less resistance and more partnership, and greater openness from consumers about their treatment experience. Consumer satisfaction surveying has revealed increased satisfaction by consumers. The implications for consumers keeping appointments and actually implementing what is learned in treatment are clear. The Session Rating Scale is also yielding some unexpected feedback from clients and has caused us to rethink what we assume about clients and their treatment experience.
  4. Service providers, especially those who are less experienced, appear to be more confident and purposeful when providing services. The data provides a basis for clinical work and there is much less ‘flying by the seat of their pants.’”

Inspiring, eh?  And now, listen to Community Counseling Services Director Bob Moneysmith and Crawford-Marion ADAMH Board Associate Director Shirley Galdys describe the implementation:
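For readers curious about the arithmetic behind point 2 above, here is a minimal sketch of that kind of effect-size calculation: a program's average outcome change expressed in standard-deviation units relative to a normative benchmark. All scores and benchmark values below are made up purely for illustration; real FIT implementations compare against published ORS norms, not these numbers.

```python
def cohens_d(agency_scores, benchmark_mean, benchmark_sd):
    """Standardized mean difference (Cohen's d) between a program's
    average outcome change and a normative benchmark."""
    mean = sum(agency_scores) / len(agency_scores)
    return (mean - benchmark_mean) / benchmark_sd

# Hypothetical end-of-treatment ORS change scores for one program
changes = [8.0, 5.5, 10.2, 3.1, 7.4, 6.8]

# Hypothetical benchmark: average change and SD from a national sample
d = cohens_d(changes, benchmark_mean=5.0, benchmark_sd=8.0)
print(round(d, 3))  # positive d means the program outperformed the benchmark
```

A positive d indicates the program's consumers improved more, on average, than the normative sample; aggregating d per provider or per program is what makes the supervision comparisons described above possible.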

Filed Under: Behavioral Health Tagged With: cdoi, evidence based practice, icce, ors, outcome rating scale, public behavioral health, research, session rating scale, srs

Outcomes in the Arctic: An Interview with Norwegian Practitioner Konrad Kummernes

January 21, 2010 By scottdm Leave a Comment

Dateline: Mosjoen, Norway

The last stop on my training tour around northern Norway was Mosjoen.  The large group of psychologists, social workers, psychiatrists, case managers, and physicians laughed uproariously when I talked about the bumpy, “white-knuckler” ride aboard the small twin-engine airplane that delivered me to the snowy, mountain-rimmed town. They were all too familiar with the peculiar path pilots must follow to navigate safely between the sharp, angular peaks populating the region.

Anyway, I’d been invited nearly two years earlier to conduct the day-long training on “what works in treatment.” The event was sponsored by Helgelandssykehuset-Mosjoen and organized by Norwegian practitioner Konrad Kummernes.  I first met Konrad at a conference held in another beautiful location in Norway (is there any other type in this country?!), Stavanger–best known for its breathtaking fjords.  The goal for the day in Mosjoen?  Facilitate collaboration between the many different service providers and settings, thereby enabling the delivery of the most effective and comprehensive clinical services.  Meeting Konrad again and working with the many dedicated professionals in Mosjoen was an inspiration. Here’s Konrad:

Filed Under: Behavioral Health, Conferences and Training, Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based practice, icce, Norway, psychotherapy

Practice-Based Evidence in Norway: An Interview with Psychologist Mikael Aagard

January 19, 2010 By scottdm Leave a Comment

For those of you following me on Facebook–and if you’re not, click here to start–you know that I was traveling above the Arctic Circle in Norway last week.  I always enjoy visiting the Scandinavian countries.  My grandparents immigrated from nearby Sweden.  I lived there myself for a number of years (and speak the language).  And I am married to a Norwegian!  So, I consider Scandinavia to be my second home.

In a prior post, I talked a bit about the group I worked with during my three day stay in Tromso.  Here, I briefly interview psychologist Mikael Aagard, the organizer of the conference.  Mikael works at KORUS Nord, an addiction technology transfer center, which sponsored the training.  His mission?  To help clinicians working in the trenches stay up-to-date with the research on “what works” in behavioral health.  Judging by the tremendous response–people came from all over the disparate regions of far northern Norway to attend the conference–he is succeeding.

Listen as he describes the challenges facing practitioners in Norway and the need to balance the “evidence-based practice” movement with “practice-based evidence.”  If you’d like any additional information regarding KORUS, feel free to connect with Mikael and his colleagues by visiting their website.  Information about the activities of the International Center for Clinical Excellence in Scandinavia can be found at: www.centerforclinicalexcellence.org.

Filed Under: Behavioral Health, Drug and Alcohol, evidence-based practice, Practice Based Evidence Tagged With: cdoi, evidence based practice, Hyperlipidemia, icce, meta-analysis, psychotherapy

Evidence-based practice or practice-based evidence? Article in the Los Angeles Times addresses the debate in behavioral health

January 18, 2010 By scottdm Leave a Comment


January 11th, 2010

“Debate over Cognitive & Traditional Mental Health Therapy” by Eric Jaffe

The debate between different factions, interest groups, and scholars within the field of mental health hit the pages of the Los Angeles Times this last week. At issue?  Supposedly, whether the field will become “scientific” in practice or remain mired in traditions of the past.  On the one side are the enthusiastic supporters of cognitive-behavioral therapy (CBT), who claim that existing research provides overwhelming support for the use of CBT for the treatment of specific mental disorders.  On the other side are traditional, humanistic, “feel-your-way-as-you-go” practitioners who emphasize quality over the quantitative.

My response?  Spuds or potatoes.  Said another way, I can’t see any difference between the two warring factions.  Yes, research indicates that CBT works.  That exact same body of literature shows overwhelmingly, however, that any and all approaches intended to be therapeutic are effective.  And yes, certainly, quality is important.  The question is, however, “what counts as quality?” and, more importantly, “who gets to decide?”

In the Los Angeles Times article, I offer a third way; what has loosely been termed, “practice-based evidence.”  The bottom line?  Practitioners must seek and obtain valid, reliable, and ongoing feedback from consumers regarding the quality and effectiveness of the services they offer.  After all, what person following unsuccessful treatment would say, “well, at least I got CBT!” or, “I’m sure glad I got the quality treatment.”

Filed Under: Behavioral Health, Dodo Verdict, Practice Based Evidence Tagged With: behavioral health, cognitive-behavioral therapy (CBT), evidence based practice, icce, Los Angeles Times, mental health, meta-analysis, public behavioral health

"What Works" in Norway

January 13, 2010 By scottdm 1 Comment

Dateline: Tromso, Norway
Place: Rica Ishavshotel

For the last two days, I’ve had the privilege of working with 125+ clinicians (psychotherapists, psychologists, social workers, psychiatrists, and addiction treatment professionals) in far northern Norway.  The focus of the two-day training was on “What Works” in treatment, in particular examining what constitutes “evidence-based practice” and how to seek and utilize feedback from consumers on an ongoing basis.  The crowd was enthusiastic, the food fantastic, and the location, well, simply inspiring.  Tomorrow, I’ll be working with a smaller group of practitioners, doing an advanced training.  More to come.

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice Tagged With: behavioral health, evidence based practice, icce, Norway, psychotherapy, public behavioral health, Therapist Effects

Research on the Outcome Rating Scale, Session Rating Scale & Feedback

January 7, 2010 By scottdm Leave a Comment

“How valid and reliable are the ORS and SRS?”  “What do the data say about the impact of routine measurement and feedback on outcome and retention in behavioral health?”  “Are the ORS and SRS ‘evidence-based?’”

These and other questions regarding the evidence supporting the ORS, SRS, and feedback are becoming increasingly common in the workshops I’m teaching in the U.S. and abroad.

As indicated in my December 24th blogpost, routine outcome monitoring (PROMS) has even been endorsed by “specific treatments for specific disorders” proponent David Barlow, Ph.D., who stated unequivocally that “all therapists would soon be required to measure and monitor the outcome of their clinical work.”  Clearly, the time has come for all behavioral health practitioners to be aware of the research regarding measurement and feedback.

Over the holidays, I updated a summary of the data to date that has long been available to trainers and associates of the International Center for Clinical Excellence.  The PDF reviews all of the research on the psychometric properties of the Outcome and Session Rating Scales as well as the studies using these and other formal measures of progress and the therapeutic relationship to improve outcome and retention in behavioral health services.  The topic is so important that I’ve decided to make the document available to everyone.  Feel free to distribute the file to any and all colleagues interested in staying up to date on this emerging mega-trend in clinical practice.

Measures And Feedback from Scott Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, continuing education, david barlow, evidence based medicine, evidence based practice, feedback, Hypertension, icce, medicine, ors, outcome measurement, outcome rating scale, post traumatic stress, practice-based evidence, proms, randomized clinical trial, session rating scale, srs, Training

The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) and remedial to the problem being treated (e.g., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome.  At the same time, as I’ve pointed out in prior blogposts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and yet all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.

The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved (it’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples).  On the other hand, the common factors camp, in place of techniques, proffers an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It’s a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or studying the parts of speech–in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and long-time mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific- nor a common-factors-informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates were half (50%) as high for the feedback versus no-feedback conditions!

Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.” At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.–a long-time proponent of “specific treatments for specific disorders” (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it’s not seeking feedback that is important.  Rather, it’s what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? About the same question. We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer’s “domain-specific knowledge.”  Feedback must result in the performer knowing more about his or her area, and how and when to apply that knowledge to specific situations, than others.  Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players.  Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior-performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these and highlighted a few others in the research pipeline during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.

Filed Under: Behavioral Health, excellence, Feedback, Top Performance Tagged With: brendan madden, cdoi, cognitive behavioral therapy, common factors, continuing education, david barlow, evidence based medicine, evidence based practice, Evolution of Psychotherapy, feedback, icce, micheal lambert, ors, outcome rating scale, proms, session rating scale, srs, therapist, therapists, therapy

Five Incredible Days in Anaheim

December 15, 2009 By scottdm 2 Comments

From December 9-13th, eight thousand five hundred mental health practitioners, from countries around the globe, gathered in Anaheim, California to attend the “Evolution of Psychotherapy” conference.  Held every five years since 1985, the conference started big and has grown only larger.  “Only a few places in the US can accommodate such a large gathering,” says Jeffrey K. Zeig, Ph.D., who has organized the conference since the first.

The event brings together 40 of the field’s leading researchers, practitioners, trend setters, and educators to deliver keynote addresses and workshops, host discussion panels, and offer clinical demonstrations on every conceivable subject related to clinical practice.  Naturally, I spoke about my current work on “Achieving Clinical Excellence” as well as served on several topical panels, including “Evidence-Based Practice” (with Don Meichenbaum), “Research on Psychotherapy” (with Steven Hayes and David Barlow), and “Severe and Persistent Mental Illness” (with Marsha Linehan and Jeff Zeig).

Most exciting of all, the Evolution of Psychotherapy conference also served as the official launching point for the International Center for Clinical Excellence.  Here I am pictured with long-time colleague and friend, Jeff Zeig, and psychologist and ICCE CEO, Brendan Madden, in front of the ICCE display in the convention center hall.

Over the five days, literally hundreds of visitors stopped by booth #128 to chat with me, Brendan, and Senior ICCE Associates and Trainers Rob Axsen, Jim Walt, Cynthia Maeschalck, Jason Seidel, Bill Andrews, Gunnar Lindfeldt, and Wendy Amey.  Among other things, a cool M&M dispenser passed out goodies to folks who pressed the right combination of buttons, we talked about and handed out leaflets advertising the upcoming “Achieving Clinical Excellence” conference, and people watched a brief video introducing the ICCE community.  Take a look yourself:


More to come from the week in Anaheim….

Filed Under: Behavioral Health, Conferences and Training, excellence, ICCE Tagged With: Acheiving Clinical Excellence, brendan madden, david barlow, Don Meichenbaum, evidence based practice, Evolution of Psychotherapy, icce, Jeff Zeig, jeffrey K. zeig, Marsha Linnehan, mental health, psychotherapy, Steve Hayes

Outcomes in Oz II

November 25, 2009 By scottdm 4 Comments

Sitting in my hotel room in Brisbane, Australia.  It’s beautiful here: white, sandy beaches and temperatures hovering around 80 degrees.  Can’t say that I’ll be enjoying the sunny weather much.  Tomorrow I’ll be speaking to a group of 135+ practitioners about “Supershrinks.”  I leave for home on Saturday.  While it’s cold and overcast in Chicago, I’m really looking forward to seeing my family after nearly two weeks on the road.

I spent the morning talking to practitioners in New Zealand via satellite for a conference sponsored by Te Pou.  It was a completely new and exciting experience for me, seated in an empty television studio and talking to a camera.  Anyway, organizers of the conference are determined to avoid mistakes made in the U.S., Europe, and elsewhere with the adoption of “evidence-based practice.”  As a result, they organized the event around the therapeutic alliance–the most neglected, yet evidence-based concept in the treatment literature!  More later, including a link to the hour-long presentation.

On Friday and Saturday of this last week, I was in the classic Victorian city of Melbourne, Australia, doing two days’ worth of training at the request of WorkSafe and the Traffic Accident Commission.  The mission of WorkSafe is “working with the community to deliver outstanding workplace safety, together with quality care and insurance protection to workers and employers.”  100+ clinicians dedicated to helping Australians recover from work- and traffic-related injuries were present for the first day of training, which focused on using formal client feedback to improve retention and outcome of psychological services.  On day 2, a smaller group met for an intensive day of training and consultation.  Thanks go to the sponsors and attendees for an exciting two days.  Learn more about how outcomes are being used to inform service delivery by watching the video below with Daniel Claire and Claire Amies from the Health Services Group.

 

Filed Under: Behavioral Health, Top Performance Tagged With: australia, evidence based medicine, evidence based practice, New Zealand, supershrinks

Common versus Specific Factors and the Future of Psychotherapy: A Response to Siev and Chambless

October 31, 2009 By scottdm 4 Comments

Early last summer, I received an email from my long-time friend and colleague Don Meichenbaum alerting me to an article published in the April 2009 edition of the Behavior Therapist–the official “newsletter” of the Association for Behavioral and Cognitive Therapies–critical of the work that I and others have done on the common factors.

Briefly, the article, written by two proponents of the “specific treatments for specific disorders” approach to “evidence-based practice” in psychology, argued that the common factors position–the idea that the efficacy of psychotherapy is largely due to shared rather than unique or model-specific factors–was growing in popularity despite being based on “fallacious reasoning” and a misinterpretation of the research.

Although the article claimed to provide an update on research bearing directly on the validity of the "dodo verdict"--the idea that all treatment approaches work equally well--it simply repeated old criticisms and ignored a contradictory and, at times, vast body of evidence.  Said another way, rather than seizing the opportunity they were given to educate clinicians and address the complex issues involved in questions surrounding evidence-based practice, Siev and Chambless instead wrote to "shore up the faithful."  "Do not doubt," the authors were counseling their adherents, "science is on our side."

That differences and tensions exist in the interpretation of the evidence is clear and important.  At the same time, more should be expected from those who lead the field.  Read the articles and decide for yourself.  The issues at stake are critical to the future of psychotherapy.  As I will blog about next week, there are forces at work in the United States and abroad seeking to limit the types of approaches clinicians can employ when working with clients.  While well-intentioned, available evidence indicates they are horribly misguided.  Once again, the question clinicians and consumers face is not "which treatment is best for that problem," but rather "which approach fits with, engages, and helps this particular consumer at this moment in time?"

Behavior Therapist (April 2009) from Scott Miller

Dissemination of EST’s (November 2009) from Scott Miller

Filed Under: Dodo Verdict, evidence-based practice, Practice Based Evidence Tagged With: Association for Behavioral and Cognitive Therapies, behavior therapist, Don Meichenbaum, evidence based medicine, evidence based practice, psychology, psychotherapy

Whoa Nellie! A 25 Million Dollar Study of Treatments for PTSD

October 27, 2009 By scottdm 1 Comment

I have in my hand a frayed and yellowed copy of observations once made by a well known trainer of horses. The trainer’s simple message for leading a productive and successful professional life was, “If the horse you’re riding dies, get off.”

You would think the advice straightforward enough for all to understand and benefit from.  And yet, the trainer pointed out, "many professionals don't always follow it."  Instead, they choose from an array of alternatives, including:

  1. Buying a strong whip
  2. Switching riders
  3. Moving the dead horse to a new location
  4. Riding the dead horse for longer periods of time
  5. Saying things like, “This is the way we’ve always ridden the horse.”
  6. Appointing a committee to study the horse
  7. Arranging to visit other sites where they ride dead horses more efficiently
  8. Increasing the standards for riding dead horses
  9. Creating a test for measuring our riding ability
  10. Complaining about the state of the horse these days
  11. Coming up with new styles of riding
  12. Blaming the horse’s parents, as the problem is often in the breeding
When it comes to the treatment of post-traumatic stress disorder, it appears the Department of Defense is applying all of the above.  Recently, the DoD awarded its largest grant ever to "discover the best treatments for combat-related post-traumatic stress disorder" (APA Monitor).  Beneficiaries of the award were naturally ecstatic, stating, "The DoD has never put this amount of money to this before."
Missing from the announcements was any mention of research which clearly shows no difference in outcome between approaches intended to be therapeutic—including the two approaches chosen for comparison in the DoD study!  In June 2008, researchers Benish, Imel, and Wampold conducted a meta-analysis of all studies in which two or more treatment approaches were directly compared.  The authors conclude, "Given the lack of differential efficacy between treatments, it seems scientifically questionable to recommend one particular treatment over others that appear to be of comparable effectiveness. . . . keeping patients in treatment would appear to be more important in achieving desired outcomes than would prescribing a particular type of psychotherapy" (p. 755).
Ah yes, the horse is dead, but proponents of "specific treatments for specific disorders" ride on.  You can hear their rallying cry: "we will find a more efficient and effective way to ride this dead horse!"  My advice?  Simple: let's get off this dead horse.  There are any number of effective treatments for PTSD.  The challenge is decidedly not figuring out which one is best for all, but rather "what works" for the individual.  In these recessionary times, I can think of far better ways to spend $25 million than on another "horse race" between competing therapeutic approaches.  Evidence-based methods exist for assessing and adjusting both the "fit and effect" of clinical services—the methods described, for instance, in the scholarly publications section of my website.  Such methods have been found to improve both outcome and retention by as much as 65%.  What will happen?  Though I'm hopeful, I must say that the temptation to stay on the horse you chose at the outset of the race is a strong one.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence, PTSD Tagged With: behavioral health, continuing education, evidence based medicine, evidence based practice, icce, meta-analysis, ptsd, reimbursement

The Crown Jewel of Research on CDOI: Professor Jan Blomqvist receives 2.9 million crown grant for RCT on feedback in Sweden

October 20, 2009 By scottdm 2 Comments

If you’ve been following me on Twitter, then you know that last week I was touring and teaching in different spots around Europe.  First, I presented two days in Copenhagen.  Then I keynoted the British Association of Counseling and Psychotherapy Conference in Newcastle, England.  Early Saturday morning, I flew from London to Stockholm.  My longtime friend and associate, Gunnar Lindfeldt, picked me up at Arlanda airport and drove me back to his lovely home in the city.  There, we gorged on smoked salmon, “svensk godis” (small candies, my favorite of which is “skumbananer”–dark-chocolate-covered marshmallow in the shape of a banana) and cider–a non-alcoholic fizzy apple drink that is an old-time Swedish favorite.

It was Gunnar Lindfeldt, a gifted clinician and expert in the treatment of drug and alcohol problems, who first introduced me to the work of Swedish psychologist Jan Blomqvist.  In 1998, Blomqvist published a book entitled, “Beyond Treatment? Widening the Approach to Alcohol Problems and Solutions“ in which he made the provocative argument that common rather than specific factors held the key to effective care.  Since writing the book, Jan Blomqvist has continued his research and is currently a full professor at SORAD, the Centre for Social Research on Alcohol and Drugs at Stockholm University.

Anyway, I had the pleasure of meeting with Professor Blomqvist at his home in Uppsala, Sweden this last week.  Over homemade spinach soup, freshly baked bread, and cheese, we chatted about the state of the field.  The pièce de résistance, however, was hearing about the 2.9 million Swedish crown grant he had just been awarded for a four-year study of outcome-informed treatment of alcohol problems, called “Putting the Client in the Driver’s Seat.”

The study to be conducted by Professor Blomqvist will be the largest, most comprehensive randomized clinical trial to date on client-directed, outcome-informed clinical work.  A centerpiece of the study will be the routine use of the ORS and SRS and the provision of feedback in the delivery of treatment services.  Importantly, unlike all other studies to date, this project completely avoids claims of “allegiance effects,” as no developers of the measures or supporters of CDOI are participating.  Stay tuned to the “Top Performance” blog for additional updates!  While you are waiting, take a moment and read Professor Blomqvist’s provocative take on “addiction” in the slide viewer below.

J Blomqvist 3 from Scott Miller

Filed Under: Drug and Alcohol, evidence-based practice, Feedback, Feedback Informed Treatment - FIT Tagged With: addiction, behavioral health, brief therapy, cdoi, continuing education, evidence based practice, icce, Jan Blomqvist, ors, post traumatic stress, randomized clinical trial, SORAD, srs, sweden

