SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment

  • About
    • About Scott
    • Publications
  • Training and Consultation
  • Workshop Calendar
  • FIT Measures Licensing
  • FIT Software Tools
  • Online Store
  • Top Performance Blog
  • Contact Scott
scottdmiller@talkingcure.com +1.773.454.8511

Excellence in Amsterdam: The 2013 ACE Conference

June 6, 2013 By scottdm Leave a Comment

My how time flies!  Nearly three weeks have passed since hundreds of clinicians, researchers, and educators met in Amsterdam, Holland for the 2013 “Achieving Clinical Excellence” conference.  Participants came from around the globe–Holland, the US, Germany, Denmark, Italy, Russia, Norway, Sweden, New Zealand, Romania, Australia, France–for three days of presentations on improving the quality and outcome of behavioral healthcare.  Suffice it to say, we had a blast!

The conference organizers, Dr. Liz Pluut and Danish psychologist Susanne Bargmann, did a fantastic job planning the event: organizing a beautiful venue (the same building where the plans for New York City were drafted back in the 17th century), coordinating speakers (36 from around the globe), and arranging meals, hotel rooms, and handouts.

Dr. Pluut opened the conference and introduced the opening plenary speaker, Dr. K. Anders Ericsson, the world’s leading researcher and “expert on expertise.”  Virtually all of the work being done by me and my colleagues at the ICCE on the study of excellence and expertise among therapists is based on the three decades of pioneering work done by Dr. Ericsson.  You can read about our work, of course, in several recent articles: Supershrinks, The Road to Mastery, or the latest The Outcome of Psychotherapy: Past, Present and Future (which appeared in the 50th anniversary edition of the journal, Psychotherapy).

Over the next several weeks, I’ll be posting summaries and videos of many of the presentations, including Dr. Ericsson’s.  One key aspect of his work is the idea of “Deliberate Practice.”  Each of the afternoon sessions on the first day focused on this important topic, describing how clinicians, agency managers, and systems of care can apply it to improve their skills and outcomes.

The first of these presentations was by psychologist Birgit Valla–the leader of Family Help, a mental health agency in Stange, Norway–entitled, “Unreflectingly Bad or Deliberately Good: Deciding the Future of Mental Health Services.”  Grab a cup of coffee and listen in…

Oh, yeah…while on the subject of excellence, here’s an interview that just appeared in the latest issue of the UK’s Therapy Today magazine:

Excellence in therapy: An Interview with Scott D. Miller, Ph.D. by Colin Feltham. 
It starts on page 32.

Filed Under: Conferences and Training, ICCE Tagged With: accountability, behavioral health, conference, conferences, continuing education, evidence based practice, excellence, feedback

How Cool is Kuhl? A Man with Vision on a Mission

April 19, 2013 By scottdm Leave a Comment

This week, my colleague and friend, Dr. David Mee-Lee, sent me a link to a blogpost written by Don Kuhl.  Actually, I was already a subscriber to Don’s Mindful MIDweek blog (you should be too), but my travel this week had prevented me from reading his latest installment.  His posts always leave me inspired and give me something to think about.  This week was no different.  More on that in a moment.

In the meantime, let me tell you about Don.  He is the founder and CEO of The Change Companies, a company whose mission is to create tailored materials and programs to support behavioral change for special populations.  And create they do: hundreds of bright, attractive, highly readable publications and guided workbooks for use by professionals and the people they serve.  Their material is exhaustive and comprehensive, covering adult behavioral health, criminal justice, education and prevention, clinical assessment, and faith-based programs.  As a side note, it was Don and his skillful team at The Change Companies that produced the ICCE Feedback Informed Treatment and Training Manuals.  If you’ve not seen them, you should.  They are the cutting edge of information about FIT.

What is most striking about Don, however, is his passion.  I met him at a conference in San Francisco nearly a decade ago.  On several occasions, he flew to Chicago from his home base in Carson City, Nevada just to meet, talk, and share ideas.  The photo above is from one of the meetings he arranged.  Don is devoted to improving the quality and experience of behavioral health services for professionals and clients alike.  Simply said, Don Kuhl is cool.

In his blogpost this week, Don wrote about that meeting with Jim Prochaska, David Mee-Lee, me, and Bill Miller.  He referred to it as a “highlight” of his recent professional life, a lucky event resulting from his mindful pursuit of relationships with “people who have smiles on their faces and goodness in their hearts.”

My thought?  I was and am the lucky one.  Thanks Don.  Thanks Change Companies.  Keep up the good work.

Filed Under: Top Performance Tagged With: addiction, behavioral health, books, Change Companies, continuing education, Don Kuhl, evidence based practice, excellence, icce

The Importance of "Whoops" in Improving Treatment Outcome

December 2, 2012 By scottdm Leave a Comment

“Ring the bells that still can ring,
Forget your perfect offering
There is a crack in everything,
That’s how the light gets in.”

Leonard Cohen, Anthem

Making mistakes.  We all do it, in both our personal and professional lives.  “To err is human…,” the old saying goes.  And most of us say, if asked, that we agree wholeheartedly with the adage–especially when it refers to someone else!  When the principle becomes personal, however, it is much more difficult to be so broad-minded.

Think about it for a minute: can you name five things you are wrong about?  Three?  How about the last mistake you made in your clinical work?  What was it?  Did you share it with the person you were working with?  With your colleagues?

Research shows there are surprising benefits to being wrong, especially when the maker views such errors differently.  As author Alina Tugend points out in her fabulous book, Better by Mistake, custom wrongly defines a mistake as “the failure of a planned sequence of mental or physical activities to achieve its intended outcome.”  When you forget a client’s name during a session or push a door instead of pull, that counts as a slip or lapse.  A mistake, by contrast, is when “the plan itself is inadequate to achieve its objectives” (p. 11).  Knowing the difference, she continues, “can be very helpful in avoiding mistakes in the future” because it leads exploration away from assigning blame and toward the systems, processes, and conditions that either cause mistakes or thwart their detection.

Last week, I was working with a talented and energetic group of helping professionals in New Bedford, Massachusetts.  The topic was, “Achieving Excellence: Pushing One’s Clinical Performance to the Next Level of Effectiveness.”  As part of my presentation, I talked about becoming more “error-centric” in our work; specifically, using ongoing measurement of the alliance to identify opportunities for improving our connection with consumers of behavioral health services.  As an example of the benefits of making mistakes the focus of professional development efforts, I showed a brief video of Rachel Hsu and Roger Chen, two talented musicians who performed at the last Achieving Clinical Excellence (ACE) conference.  Rachel plays a piece by Liszt, Roger one by Mozart.  Both compositions are extremely challenging to play.  You tell me how they did (by the way, Rachel is 8 years old, Roger, 9):

Following her performance, I asked Rachel if she’d made any mistakes.  She laughed, and then said, “Yes, a lot!”  When I asked her what she did about that, she replied, “Well, it’s impossible to learn from my mistakes while I’m playing.  So I note them and then later practice those small bits, over and over, slowly at first, then speeding up, until I get them right.”

After showing the video in New Bedford, a member of the audience raised his hand: “I get it, but that whole idea makes me a bit nervous.”  I knew exactly what he was thinking.  Highlighting one’s mistakes in public is risky business.  Studies documenting that the most effective clinicians experience more self-doubt and are more willing to admit making mistakes are simply not convincing when one’s professional self-esteem or job may be on the line.  Neither is research showing that health care professionals who admit making mistakes and apologize to consumers are significantly less likely to be sued.  Becoming error-centric requires a change in culture, one that not only invites disclosure but connects it with the kind of support and structure that leads to superior results.

Creating a “whoops-friendly” culture will be a focus of the next Achieving Clinical Excellence conference, scheduled for May 16-18th, 2013 in Amsterdam, Holland.  Researchers and clinicians from around the world will gather to share their data and experience at this unique event.  I promise you don’t want to miss it.  Here’s a short clip of highlights from the last one:

My colleague, Susanne Bargmann and I will also be teaching the latest research and evidence based methods for transforming mistakes into improved clinical performance at the upcoming FIT Advanced Intensive training in Chicago, Illinois.   I look forward to meeting you at one of these upcoming events.  In the meantime, here’s a fun, brief but informative video from the TED talks series on mistakes:

By the way, the house pictured above is real.  My family and I visited it while vacationing in Niagara Falls, Canada in October.  It’s a tourist attraction actually.  Mistakes, it seems, can be profitable.

Filed Under: Feedback Informed Treatment - FIT Tagged With: accountability, Alliance, behavioral health, cdoi, conferences, continuing education, deliberate practice, evidence based practice, feedback, mental health, Therapist Effects, top performance

Looking for Results in All the Wrong Places: What Makes Feedback Work?

September 16, 2012 By scottdm Leave a Comment

As anyone who reads this blog or has been to one of my workshops knows, I am a fan of feedback.  Back in the mid-1990s, I began using Lynn Johnson’s 10-item Session Rating Scale in my clinical work.  His book, Psychotherapy in the Age of Accountability, and our long relationship convinced me that I needed to check in regularly with my clients.  At the same time, I started using the Outcome Questionnaire (OQ-45).  The developer, Michael Lambert, a professor and mentor, was finding that routinely measuring outcome helped clinicians catch and prevent deterioration in treatment.  In time, I worked with colleagues to develop a set of tools brief enough to make the process of asking for and receiving feedback about the relationship and outcome of care feasible.

Initial research on the measures and feedback process was promising.   Formally and routinely asking for feedback was associated with improved outcomes, decreased drop-out rates, and cost savings in service delivery!  As I warned in my blogpost last February, however, such results, while important, were merely “first steps” in a scientific journey.  Most importantly, the research to date said nothing about why the use of the measures improved outcomes.  Given the history of our field, it would be easy to begin thinking of the measures as an “intervention” that, if faithfully adopted and used, would result in better outcomes.  Not surprisingly, this is exactly what has happened, with some claiming that the measures improve outcomes more than anything since the beginning of psychotherapy.  Sadly, such claims rarely live up to their initial promise.  For decades the quest for the holy grail has locked the field into a vicious cycle of hope and despair, one that ultimately eclipses the opportunity to conduct the very research needed to facilitate understanding of the complex processes at work in any intervention.

In February, I wrote about several indirect, but empirically robust, avenues of evidence indicating that another variable might be responsible for the effect found in the initial feedback research.  Now, before I go on, let me remind you that I’m a fan of feedback, a big fan.  At the same time, it’s important to understand why it works and, specifically, what factors are responsible for the effect.  Doing otherwise risks mistaking method for cause, what we believe for reality.  Yes, it could be the measures.  But the type of research conducted at the time did not make it possible to reach that conclusion.  Plus, it seemed to me, other data pointed elsewhere; namely, to the therapist.  Consider, for example, the following findings: (1) therapists did not appear to learn from the feedback provided by measures of the alliance and outcome; (2) therapists did not become more effective over time as a result of being exposed to feedback.  In other words, as with every other “intervention” in the history of psychotherapy, the effect of routinely monitoring the alliance and outcome seems to vary by therapist.

Such results, if true, would have significant implications for the feedback movement (and the field of behavioral health in general).  Instead of focusing on methods and interventions, efforts to improve the outcome of behavioral health practice should focus on those providing the service.  And guess what?  This is precisely what the latest research on routine outcome measurement (ROM) has now found.  Hot off the press, in the latest issue of the journal, Psychotherapy Research, Dutch investigators de Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012) found that feedback was not effective under all circumstances.  What variable was responsible for the difference?  You guessed it: the therapist–in particular, their interest in receiving feedback, sense of self-efficacy, commitment to use the tools to receive feedback, and…their gender (with women being more willing to use the measures).  Consistent with ICCE’s emphasis on supporting organizations with implementation, other research points to the significant role setting and structure play in success.  Simon, Simon, Harris and Lambert (2011), Reimer and Bickman (2012), and de Jong (2012) have all found that organizational and administrative issues loom large in mediating the use and impact of feedback in care.

My colleagues and I are currently investigating both the individual therapist and contextual variables that enable clinicians to benefit from feedback.  The results are enticing.  The first will be presented at the upcoming Achieving Clinical Excellence conference in Holland, May 16-18th.  Other results will be reported in the 50th anniversary issue of the journal, Psychotherapy, to which we’ve been asked to contribute.  Stay tuned.

Filed Under: Feedback Informed Treatment - FIT Tagged With: cdoi, continuing education, holland, icce, Michael Lambert, post traumatic stress

A Lotta Help from One’s Friends: The Role of Community in the Pursuit of Excellence

August 3, 2012 By scottdm Leave a Comment

Dateline: Chicago, IL USA

Hard not to be impressed with the USA Women’s Gymnastics team.  What skill, precision, expertise, and excellence.

By now, I’m sure you’ve seen the interviews.  Each and every one has focused on the team.  Despite some in the media attempting to make stars out of the individual members, the athletes have continually highlighted “The Team.”  When asked to account for their success or the source of their ambition, the reason cited has been: THE TEAM.

Sixteen-year-old McKayla Maroney said, “I think we’re as close as we can be.  We’ve all been working and training together for a long time…I’ve known (fellow team member) Kyla since I was 6 years old.  We are all best of friends.  They did so great today and I just love this team so much.”

As highlighted in our recent article, “The Road to Mastery,” excellence does not occur in a vacuum.  Surrounding every great performer is a community (teachers, coaches, mentors, and peers).  In the busy world that is modern clinical practice, where can practitioners find a trustworthy and supportive community of peers?  A group of colleagues that will challenge them to keep growing as professionals and people?

In a word, the ICCE.  In December 2009, the International Center for Clinical Excellence was launched, and it has since become the largest global, web-based community of clinicians, researchers, administrators, and policy makers dedicated to excellence in behavioral health.  The ICCE has its own gold-medal winning team: practitioners working together in locations around the globe.

Practitioners like Jason Seidel, Psy.D., who represented ICCE at last week’s meeting of the American Psychological Association.  Jason presented on Feedback Informed Treatment (FIT) and then participated in a panel discussion on Practice Based Evidence together with Paul Clement, Michael Lambert, Bill Stiles, Carol Goodheart, and David Barlow.  Jason rocked the packed house with his tight summary of the empirical support for FIT and argument in favor of practice-based evidence!

Then there’s Daryl Chow, a psychologist from Singapore, who is currently finishing a quantitative study of “Supershrinks.”  His research is the first to employ a sophisticated statistical analysis of therapists’ practices related to superior outcomes.  Suffice it to say, his results are mind-blowing.  Daryl’s work won him a scholarship to this year’s “Training of Trainers” course.  If you’re not signed up for that event, you can meet him today by joining the ICCE and looking him up!

There are many, many other dedicated and supportive members.  Join and share your expertise with the community today!

Filed Under: excellence, Top Performance Tagged With: cdoi, continuing education, feedback informed treatment, icce

The DSM 5: Mental Health’s "Disappointingly Sorry Manual" (Fifth Edition)

June 11, 2012 By scottdm 2 Comments

Have you seen the results from the field trials for the fifth edition of the Diagnostic and Statistical Manual?  The purpose of the research was to test the reliability of the diagnoses contained in the new edition.  Reliable (ri-lahy–uh-buhl), meaning “trustworthy, dependable, consistent.”

Before looking at the data, consider the following question: what are the two most common mental health problems in the United States (and, for that matter, most of the Western world)?  If you answered depression and anxiety, you are right.  The problem is that the degree of agreement between experts trained to use the criteria is unacceptably low.

Briefly, reliability is estimated using what statisticians call the Kappa (k) coefficient, a measure of inter-rater agreement.  Kappa is thought to be a more robust measure than simple percent agreement as it takes into account the likelihood of raters agreeing by chance.
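To make the chance correction concrete, here is a minimal sketch of the kappa computation in Python.  The two rating lists are hypothetical, purely for illustration; they are not data from the field trials:

```python
# Cohen's kappa: inter-rater agreement corrected for chance.
# Ratings from two hypothetical clinicians assessing the same 10 clients
# (1 = disorder present, 0 = absent).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    # Observed agreement: proportion of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: for each label, the product of the two raters'
    # marginal frequencies, summed over all labels.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # prints 0.4
```

Notice that these two raters agree on 7 of 10 cases (70% raw agreement), yet kappa is only .40, because half of that agreement is what chance alone would produce given each rater's base rates.  That gap is exactly why kappa, not percent agreement, is the standard for diagnostic reliability.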

The results?  The agreement between two clinicians applying the same criteria to the same person was poor for both depression and anxiety.  Although there is no set standard, experts generally agree that kappa coefficients lower than .40 can be considered poor; .41-.60, fair; .61-.75, good; and .76 and above, excellent.  Look at the numbers below and judge for yourself:

Diagnosis                      DSM-5   DSM-IV   ICD-10   DSM-III
Major Depressive Disorder       .32     .59      .53      .80
Generalized Anxiety Disorder    .20     .65      .30      .72

Now, is it me or do you notice a trend?  The reliability for the two most commonly diagnosed and treated “mental health disorders” has actually worsened over time!  The same was found for a number of the disorders, including schizophrenia (.46, .76, .81), alcohol use disorder (.40, .71, .80), and oppositional defiant disorder (.46, .51, .66).  Antisocial and Obsessive-Compulsive Personality Disorders were so variable as to be deemed unreliable.

Creating a manual of “all known mental health problems” is a monumental (and difficult) task, to be sure.  Plus, not all the news was bad.  A number of diagnoses demonstrated good reliability: autism spectrum disorder, posttraumatic stress disorder (PTSD), and attention-deficit/hyperactivity disorder (ADHD) in children (.69, .67, and .61, respectively).  Still, the overall picture is more than a bit disconcerting–especially when one considers that the question of the manual’s validity has never been addressed.  Validity (vuh–lid-i-tee), meaning literally, “having some foundation; based on truth.”  Given the lack of any understanding of or agreement on the pathogenesis or etiology of the 350+ diagnoses contained in the manual, the volume ends up being, at best, a list of symptom clusters–not unlike categorizing people according to the four humours (e.g., phlegmatic, choleric, melancholic, sanguine).

Personally, I’ve always been puzzled by the emphasis placed on psychiatric diagnoses, given the lack of evidence for diagnosis-specific treatment effects in psychotherapy outcome research.  Additionally, an increasing number of randomized clinical trials have provided solid evidence that simply monitoring the alliance and progress during care significantly improves both the quality and outcome of the services delivered.  Here’s the latest summary of feedback-related research.

Filed Under: Feedback Informed Treatment - FIT Tagged With: continuing education, DSM

The International Center for Clinical Excellence: Using Social Networks for "Real Time" Research

June 6, 2012 By scottdm 1 Comment

The International Center for Clinical Excellence was officially launched at the Evolution of Psychotherapy Conference in December 2009.  Since that time, membership has grown steadily.  With over 3800 members, the ICCE is the largest web-based community of behavioral health professionals dedicated to improving the quality and outcome of service delivery.  The site features nearly a hundred discussion forums, taking place in a number of languages, on topics specific to treatment and research.

Many agencies and systems of care are using the site to coordinate implementation of feedback-informed treatment.  Of course, those attending ICCE training events (e.g., the Advanced Intensive, Training of Trainers, and Achieving Clinical Excellence conference), use the site for both pre and post training support and continuing education.

And now, the site is being used for a new purpose: research.  ICCE member and associate Wendy Amey was the first.  She used the site successfully for her dissertation, surveying members about how they work with trauma.  I am pleased to announce two new research projects that will access the ICCE community.

The first is being conducted by McGill University counseling psychology doctoral candidate Ionita Gabriela.  Her study focuses on clinicians’ experiences with using measures to monitor client progress in the services they offer.  Implementation is the challenge most clinicians and agencies face when incorporating routine outcome monitoring into practice.  All participants will be entered into a drawing for a $100 Amazon gift certificate.  More importantly, participants will contribute to the expanding knowledge base on feedback informed treatment.  Whether or not you are a member of ICCE, you can contribute by taking part in the study.  Click here to send an email to Ionita to complete the interview (it only takes about 15 minutes).

The second study is being conducted by me and ICCE Associate Daryl Chow as part of ICCE’s continuing emphasis on clinical excellence.  The study builds on groundbreaking research by Ronnestad and Orlinsky on the subject of therapist development.  Participants are asked to complete a brief (8-12 minute) online survey with questions pertaining to their development as clinicians.  All participants will be entered into a drawing, the winner receiving all 6 of the newly released FIT Treatment and Training Manuals (valued at $100).  Again, you can participate whether or not you are currently a member of the ICCE.  In fact, please ask your colleagues to participate as well!  Click here to complete the secure, online survey (no identifying information will be sought).

Filed Under: Conferences and Training, ICCE Tagged With: continuing education, icce

More from Sweden

June 4, 2012 By scottdm Leave a Comment

Three short weeks ago, I was in Stockholm, Sweden talking about “what works” in clinical practice.  As I announced at the time, my visit coincided with an announcement by the organization governing mental health practice in the country.  For the better part of a decade, CBT enjoyed near exclusive status as “evidence-based.”  Indeed, payment for training of clinicians and treatment of clients in other approaches disappeared as over two billion Swedish crowns were spent on CBT.

The result? The widespread adoption of the method had no effect whatsoever on the outcome of people disabled by depression and anxiety.  The conclusion?  Guidelines for clinical practice were reviewed and expanded.  Research on feedback is in full swing in the largest randomized clinical trial on FIT in history.

More news…

Today, I received notice from Swedish clinician and publisher, Bengt Weine, that my article, “The Road to Mastery” (written together with my longtime friend and collaborator, Mark A. Hubble, Ph.D.), had been translated into Swedish and accepted for publication in SFT, the Swedish Family Therapy journal.  If you understand the language, click here to access a copy.

Helping clinicians and agencies along the “road to mastery” is what the upcoming Advanced Intensive and Training of Trainers events are all about.  Join colleagues from around the globe for these fun, intense days of training in Chicago.

Filed Under: Conferences and Training Tagged With: CBT, continuing education, FIT, holland, mark hubble, sweden

Revolution in Swedish Mental Health Care: Brief Update

May 14, 2012 By scottdm 1 Comment

In April 2010, I blogged about Jan Larsson, a Swedish clinician who works with people on the margins of the mental health system.  Jan was dedicated to seeking feedback, using the ORS and SRS to tailor services to the individuals he met.  It wasn’t easy.  Unlike most, he did not meet his clients in an office or agency setting.  Rather, he met them where they were: in the park, on the streets, and in their one-room apartments.  Critically, wherever they met, Jan had them complete the two measures–“just to be sure,” he said.  No computer.  No iPhone app.  No sophisticated web-based administration system.  With a pair of scissors, he simply trimmed copies of the measures to fit in his pocket-sized appointment book!  I’ve been following his creative application of the scales ever since.

Not surprisingly, Jan was on top of the story I blogged about yesterday regarding changes in the guidelines governing Swedish mental health care practice.  He emailed me as I was writing my post, including the link to the Swedish Radio program about the changes.  Today, he emailed again, sending along links to stories appearing in two Swedish newspapers: Dagens Nyheter and Goteborg Posten.

Thanks Jan!

And to everyone else, please continue to send any new links, videos, and comments.

Filed Under: behavioral health, excellence, Feedback Informed Treatment - FIT, Top Performance Tagged With: continuing education, Dagens Nyheter, evidence based practice, Goteborg Posten, icce, ors, outcome rating scale, session rating scale, srs, sweden

Revolution in Swedish Mental Health Practice: The Cognitive Behavioral Therapy Monopoly Gives Way

May 13, 2012 By scottdm 34 Comments

Sunday, May 13th, 2012
Arlanda Airport, Sweden

Over the last decade, Sweden, like most Western countries, embraced the call for “evidence-based practice.”  Socialstyrelsen, the country’s National Board of Health and Welfare, developed and disseminated a set of guidelines (“riktlinjer”) for mental health practice.  Topping the list of methods was, not surprisingly, cognitive-behavioral therapy.

The Swedish State took the list seriously, restricting payment for training of clinicians and treatment of clients to cognitive behavioral methods.  In the last three years, a billion Swedish crowns were spent on training clinicians in CBT.  Another billion was spent on providing CBT to people with diagnoses of depression and anxiety.  No funding was provided for training or treatment in other methods. 

The State’s motives were pure: use the best methods to decrease the number of people who become disabled as a result of depression and anxiety.  As in other countries, the percentage of people in Sweden who exit the work force and draw disability pensions has increased dramatically.  As a result, costs skyrocketed.  Even more troubling, far too many became permanently disabled.

The solution?  Identify methods which have scientific support, or what some called, “evidence-based practice.” The result?  Despite substantial evidence that all methods work equally well, CBT became the treatment of choice throughout the country.  In point of fact, CBT became the only choice.

As noted above, Sweden is not alone in embracing practice guidelines.  The U.K. and U.S. have charted similar paths, as have many professional organizations.  Indeed, the American Psychological Association has now resurrected its plan to develop and disseminate a series of guidelines advocating specific treatments for specific disorders.  Earlier efforts by Division 12 (“Clinical Psychology”) met with resistance from the general membership as well as scientists who pointed to the lack of evidence for differential effectiveness among treatment approaches. 

Perhaps APA and other countries can learn from Sweden’s experience.  The latest issue of Socionomen, the official journal for Swedish social workers, reported the results of the government’s two billion Swedish crown investment in CBT.  The widespread adoption of the method has had no effect whatsoever on the outcome of people disabled by depression and anxiety.  Moreover, a significant number of people who were not disabled at the time they were treated with CBT became disabled, costing the government an additional one billion Swedish crowns.  Finally, nearly a quarter of those who started treatment, dropped out, costing an additional 340 million!

In sum, billions were spent training therapists in and treating clients with CBT, to little or no effect.

Since the publication of Escape from Babel in 1995, my colleagues and I at the International Center for Clinical Excellence have gathered, summarized, published, and taught about research documenting little or no difference in outcome between treatment approaches.  All approaches worked about equally well, we argued, suggesting that efforts to identify specific approaches for specific psychiatric diagnoses were a waste of precious time and resources.  We made the same argument, citing volumes of research in two editions of The Heart and Soul of Change.

Yesterday, I presented at Psykoterapi Mässan, the country’s largest free-standing mental health conference.  As I have on previous visits, I talked about “what works” in behavioral health, highlighting data documenting that the focus of care should shift away from treatment model and technique, focusing instead on tailoring services to the individual client via ongoing measurement and feedback.  My colleague and co-author, Bruce Wampold had been in the country a month or so before singing the same tune.

One thing about Sweden:  the country takes data seriously.  As I sat down this morning to eat breakfast at the home of my long-time Swedish friend, Gunnar Lindfeldt, the newscaster announced on the radio that Socialstyrelsen had officially decided to end the CBT monopoly (listen here).  The experiment had failed.  To be helped, people must have a choice. 

“What have we learned?” Rolf Holmqvist asks in Socionomen, “Treatment works…at the same time, we have the possibility of exploring…new perspectives.  First, getting feedback during treatment…taking direction from the patient at every session while also tracking progress and the development of the therapeutic relationship!”

“Precis,” (exactly) my friend Gunnar said. 

And, as readers of my blog know, using the best evidence, informed by clients’ preferences and ongoing monitoring of progress and alliance is evidence-based practice.  How the concept ever got translated into creating lists of preferred treatments is anyone’s guess and, now, unimportant.  Time to move forward.  The challenge ahead is helping practitioners learn to integrate client feedback into care—and here, Sweden is leading the way.

“Skål Sverige!”

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: CBT, continuing education, evidence based practice, icce, Socialstyrelsen, sweden

Is the "Summer of Love" Over? Positive Publication Bias Plagues Pharmaceutical Research

March 27, 2012 By scottdm Leave a Comment


Evidence-based practice is only as good as the available “evidence”–and on this subject, research points to a continuing problem with both the methodology and type of studies that make it into the professional literature.  Last week, PLoS Medicine, a peer-reviewed, open access journal of the Public Library of Science, published a study showing a positive publication bias in research on so-called atypical antipsychotic drugs.  In comparing articles appearing in journals to the FDA database, researchers found that almost all positive studies were published while clinical trials with negative or questionable results were not or–and get this–were published as having positive results!

Not long ago, similar yet stronger results appeared in the same journal on anti-depressants.  Again, in a comparison with the FDA registry, researchers found all positive studies were published while clinical trials with negative or questionable results were not or–and get this–were published as having positive results!  The problem is far from insignificant.  Indeed, a staggering 46% of studies with negative results were not published, or were published but reported as positive.

Maybe the “summer of love” is finally over for the field and broader American public.  Today’s Chicago Tribune has a story by Kate Kelland and Ben Hirschler reporting data about sagging sales of anti-depressants and multiple failures to bring new, “more effective” drug therapies to market.  Taken together, robust placebo effects, the FDA mandate to list all trials (positive and negative), and an emphasis in research on conducting fair comparisons (e.g., comparing any new “products” to existing ones) make claims about “new and improved” effectiveness challenging.

Still one sees ads on TV making claims about the biological basis of depression–the so-called “biochemical imbalance.”  Perhaps this explains why a recent study of Medicaid clients found that costs of treating depression rose by 30% over the last decade while the outcomes did not improve at all during the same period.  The cause for the rise in costs? Increased use of psychiatric drugs–in particular, anti-psychotics in cases of depression.

“It’s a great time for brain science, but at the same time a poor time for drug discovery for brain disorders,” says David Nutt, professor of neuropsychopharmacology, cited in the Chicago Tribune, “That’s an amazing paradox which we need to do something about.”

Here’s an idea: how about not assuming that problems in living are reducible to brain chemistry?  That the direction of causality for much of what ails people is not brain to behavior but perhaps behavior to brain?  On this point, it is sad to note that while the percentage of clients prescribed drugs rose from 81 to 87%–with no improvement in effect–the number of those receiving psychotherapy dropped from 57 to 38%.

Here’s what we know about psychotherapy: it works, and it has a far less troublesome side effect profile than psychotropic drugs.  No warnings needed for dry mouth, dizziness, blood and liver problems, or sexual dysfunction.  The time has come to get over the collective 1960s delusion of better living through chemistry.

Filed Under: Practice Based Evidence Tagged With: behavioral health, continuing education, depression, evidence based practice, icce, Medicaid, mental health, psychotherapy

A Progress Report on the Science (and Art) of Psychotherapy: The Psychotherapy Networker 30th Anniversary Edition

March 18, 2012 By scottdm Leave a Comment

The 30th Anniversary Edition of the Psychotherapy Networker has hit newsstands.  In it is an article by Diane Cole taking the measure of psychotherapy.  Her question? Has the field gotten any better over the last three decades?  The entire issue is a “must read,” starting with editor Rich Simon’s lengthy and thought-provoking editorial, “Still Crazy After All These Years.”

Even if you are not a subscriber, much of the current edition is available FOR FREE online at the Networker website.  It is an honor that the work that I have been doing on excellence and expert performance, together with many Senior Associates at ICCE (Susanne Bargmann, Cynthia Maeschalck, Julie Tilsen, Rob Axsen, Jason Seidel, and Bob Bertolino), is featured prominently in this special issue.

Don’t miss it!  And don’t miss the Networker conference scheduled this week in Washington, D.C.   I’ll be there on Friday delivering the luncheon keynote address and a workshop on pushing your clinical performance to the next level of effectiveness!

Filed Under: Top Performance Tagged With: cdoi, continuing education, icce, psychotherapy networker

Feedback-Informed Treatment as Evidence-based Practice: APA, SAMHSA, and NREPP

November 1, 2011 By scottdm 1 Comment

What is evidence-based practice?  Visit the UK-based NICE website, or talk to proponents of particular theoretical schools or therapeutic models, and they will tell you that being “evidence-based” means using the approach research has deemed effective for a particular diagnosis (e.g., CBT for depression, EMDR for trauma).  Over the last two decades, numerous organizations and interest groups have promoted lists of “approved” treatment approaches–guidelines that clinicians and funding bodies should follow when making practice decisions.  Throughout the 1990s, for example, Division 12 within the American Psychological Association (APA) promoted the idea of “empirically supported treatments.”

However, when one considers the official definition of evidence-based practice offered by the Institute of Medicine and the APA, it is hard to fathom how anyone could come to such a conclusion.  According to the APA, evidence-based practice is, “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.” (see American Psychologist, May 2006).  Nothing here about “empirically supported treatments” or the mindless application of specific treatment protocols.  Rather, according to the APA and IOM, clinicians must FIT the treatment to the client, their preferences, culture, and circumstances.  And how can one do that?  Well, conspicuously absent from the definition is, “consult a set of treatment guidelines.”  Rather, to be evidence-based, clinicians must monitor “patient progress (and of changes in the patient’s circumstances—e.g., job loss, major illness) that may suggest the need to adjust the treatment. If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate.”

The principles and practices of feedback-informed treatment (FIT) are not only consistent with but operationalize the American Psychological Association’s (APA) definition of evidence-based practice.  To wit: routinely and formally soliciting feedback from consumers regarding the therapeutic alliance and outcome of care, and using the resulting information to inform and tailor service delivery.  And indeed, over the last 9 months, together with Senior Associates, I completed and submitted an application for FIT to be reviewed by NREPP–SAMHSA’s National Registry of Evidence-based Programs and Practices!  As part of that application and ICCE’s commitment to improving the quality and outcome of behavioral health, we developed a list of “core competencies” for FIT practice, a series of six detailed treatment and implementation manuals, a gap assessment tool that organizations can use to quickly and expertly assess implementation and fidelity problems, and supportive documentation and paperwork.  Finally, we developed and rigorously tested training curricula and administered the first standardized exam for certifying FIT practitioners and trainers.  That review process is now in its final stages, and I’m sure I’ll be making a major announcement right here on this blog shortly.  So, stay tuned.

In the meantime, this last Saturday, clinicians located around the globe–Canada, New Zealand, Australia, the US, and Romania–sat for the first administration of the ICCE “Core Competency” Exam.  Taking the test is the last step in becoming an ICCE “Certified Trainer.”   The other requirements include: (1) attending the “Advanced Intensive” and “Training of Trainers” workshops; and (2) submitting a training video on FIT for review.  The exam was administered online using the latest technology.


The members, directors, and senior associates of ICCE want to congratulate (from top left):

  • Eeuwe Schuckard, Psychologist, Wellington, New Zealand;
  • Aaron Frost, Psychologist, Brisbane, Australia;
  • Cindy Hansen, BA-Psych, HHP, Manager Myoutcomes;
  • David Prescott, Director of Professional Development, Becket Family of Services, Portland, Maine;
  • Arnold Woodruff, LMFT, Clinical Director, Home for Good, Richmond, Virginia;
  • Bogdan Ion, Ph.D., Bucharest University, Bucharest, Romania;
  • Daniel Buccino, Clinical Supervisor, Community Psychiatry Program, Johns Hopkins;
  • Dwayne Cameron, Outreach Counselor, Prince Albert, Saskatchewan, Canada;
  • Mark Goheen, the Clinical Practice Lead at Fraser Health, British Columbia.

If you are not yet a member of the ICCE community, please join the largest, fastest growing, and friendliest group of behavioral health professionals today at: www.centerforclinicalexcellence.com.

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, ICCE Tagged With: APA, cdoi, continuing education, evidence based practice, HHS, icce, NREPP, SAMHSA

Are Mental Health Practitioners Afraid of Research and Statistics?

September 30, 2011 By scottdm Leave a Comment

A few weeks back I received an email from Dr. Kevin Carroll, a marriage and family therapist in Iowa.  Attached were the findings from his doctoral dissertation.  The subject was near and dear to my heart: the measurement of outcome in routine clinical practice.  The findings were inspiring.  Although few graduate-level programs include training on using outcome measures to inform clinical practice, Dr. Carroll found that 64% of those surveyed reported utilizing such scales with about 70% of their clients!  It was particularly rewarding for me to learn that the most common measures employed were the…Outcome and Session Rating Scales (ORS & SRS).

As readers of this blog know, there are multiple randomized clinical trials documenting the impact that routine use of the ORS and SRS has on retention, quality, and outcome of behavioral health services.  Such scales also provide direct evidence of effectiveness.  Last week, I posted a tongue-in-cheek response to Alan Kazdin’s broadside against individual psychotherapy practitioners.  He was bemoaning the fact that he could not find clinicians who utilized “empirically supported treatments.”  Such treatments, when utilized, are assumed to lead to better outcomes.  However, as all beginning psychology students know, there is a difference between “efficacy” and “effectiveness” studies.  The former tells us whether a treatment has an effect under controlled conditions; the latter, how much benefit actual people gain from “real life” therapy.  If you were a client, which kind of study would you prefer?  Unfortunately, most of the guidelines regarding treatment models are based on efficacy rather than effectiveness research.  The sine qua non of effectiveness research is measuring the quality and outcome of psychotherapy locally.  After all, what client, having sought out but ultimately gained nothing from psychotherapy, would say, “Well, at least the treatment I got was empirically supported.”  Ludicrous.

Dr. Carroll’s research clearly indicates that clinicians are not afraid of measurement, research, and even statistics.  In fact, this last week, I was in Denmark teaching a specialty course in research design and statistics for practitioners.  That’s right.  Not a course on research in psychotherapy or treatment.  Rather, measurement, research design, and statistics.  Pure and simple.  Their response convinces me even more that the much talked-about “clinician-researcher” gap is not due to a lack of interest on practitioners’ parts but rather, and most often, a result of different agendas.  Clinicians want to know “what will work” for this client.  Research rarely addresses this question, and the aims and goals of some in the field remain hopelessly far removed from day-to-day clinical practice.  Anyway, watch the video yourself:

Filed Under: Feedback, Feedback Informed Treatment - FIT Tagged With: continuing education, holland, icce, ors, Outcome, psychotherapy, Session Rating Scales, srs

Psychologist Alan Kazdin Needs Help: Please Give

September 25, 2011 By scottdm Leave a Comment

Look at this picture.  This man needs help.  He is psychologist Alan Kazdin, former president of the American Psychological Association and current Professor of Psychology at Yale University.  A little over a week ago, to the surprise and shock of many in the field, he disclosed a problem in his professional life.  In an interview that appeared online at Time’s Healthland, Dr. Kazdin reported being unable to find a therapist or treatment program to which he could refer clients–even in Manhattan, New York, the nation’s largest city!

After traveling the length and breadth of the United States for the last decade, and meeting and working with hundreds of agencies and tens of thousands of therapists, I know there are many clinicians who can help Dr. Kazdin with his problem.  Our group has been tracking the outcomes of numerous practitioners over the last decade and found average outcomes to be on par with those obtained in tightly controlled randomized clinical trials!  That’s good news for Dr. Kazdin.

Now, just to be sure, it should be pointed out that Dr. Kazdin is asking for practitioners who adhere to the Cochrane Review’s and the American Psychological Association’s definition of evidence-based practice (EBP)–or, I should say, I believe that is what he is asking for, as the interview is not entirely clear on this point and appears to imply that EBP is about using specific treatment methods (the most popular, of course, being CBT).  The actual definition contains three main points and clearly states that EBP is the integration of:

  1. The best available research;
  2. Clinical expertise; and
  3. The client’s culture, values, and preferences.

Interestingly, the official APA policy on evidence-based practice further defines clinical expertise as the “monitoring of patient progress (and of changes in the patient’s circumstances)…that may suggest the need to adjust the treatment.  If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate.”

I say “interestingly” for two reasons.  First, the definition of EBP clearly indicates that clinicians must tailor psychotherapy to the individual client.  And yet, the interview with Dr. Kazdin specifically quotes him as saying, “That’s a red herring. The research shows that no one knows how to do that. [And they don’t know how to monitor your progress].”   Now, admittedly, the research is new and, as Dr. Kazdin says, “Most people practicing who are 50 years or older”–like himself–may not know about it, but there are over a dozen randomized clinical trials documenting how routinely monitoring progress and the relationship and adjusting accordingly improves outcome.  The interview also reports him saying that “there is no real evidence” that the relationship (aka alliance) between the therapist and client matters when, in fact, the APA Interdivisional Task Force on Evidence-Based Therapy Relationships concluded that there is abundant evidence that “the therapy relationship accounts for substantial and consistent contributions to…outcome….at least as much as the particular method.”  (Incidentally, the complete APA policy statement on EBP can be found in the May-June 2006 issue of the American Psychologist).

Who knows how these two major bloopers managed to slip through the editing process?  I know I’d be embarrassed and would immediately issue a clarification if I’d been misquoted making statements so clearly at odds with the facts.  Perhaps Dr. Kazdin is still busy looking for someone to whom he can refer clients.  If you are a professional who uses your clinical expertise to tailor the application of scientifically sound psychotherapy practices to client preferences, values, and culture, then you can help.

Filed Under: evidence-based practice, Top Performance Tagged With: Alan Kazdin, American Psychological Association, brief therapy, Carl Rogers, CBT, continuing education, evidence based practice, icce, medicine, therapy

Getting FIT in the New Year: The Latest Evidence

January 18, 2011 By scottdm Leave a Comment

John Norcross, Ph.D. is without a doubt the researcher who has done the most to highlight the evidence base supporting the importance of the relationship between clinician and consumer in successful behavioral healthcare.  The second edition of his book, Psychotherapy Relationships that Work, is about to be released. Like the last edition, this volume is a virtual treasure trove of research findings and empirically supported practices.

Among the many gems in the book is a chapter by Michael J. Lambert, Ph.D–pioneering researcher on “feedback-informed treatment” (FIT).  As usual, he does a masterful job summarizing the existing research on the subject. The data are overwhelmingly positive: seeking and using standardized feedback regarding the progress and outcome of treatment cuts drop out and deterioration rates and significantly improves outcome.

Lambert also reports the results of two meta-analyses: one performed on studies using his own OQ System family of measures, the other based on research using the ORS and SRS. Not only did he find ample empirical support for the two systems, but in the case of the ORS and SRS, those therapies informed by feedback “had 3.5 times higher odds of experiencing reliable change.”  Additionally, and importantly, the brief, 4-item ORS and SRS performed as well as the longer and more detailed OQ 45.2.
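For readers less familiar with the statistic, an “odds ratio” simply divides the odds of an event in one condition by the odds in another. The sketch below, in Python, uses entirely hypothetical counts (not Lambert’s actual data) chosen only to show how a figure like 3.5 arises:

```python
# Illustrative only: hypothetical 2x2 counts, NOT data from the meta-analysis.
# Odds ratio of reliable change, feedback-informed care vs. treatment as usual.
reliable_fb, not_reliable_fb = 70, 30    # feedback condition (hypothetical)
reliable_tau, not_reliable_tau = 40, 60  # treatment as usual (hypothetical)

odds_fb = reliable_fb / not_reliable_fb      # odds of reliable change with feedback
odds_tau = reliable_tau / not_reliable_tau   # odds of reliable change without
odds_ratio = odds_fb / odds_tau

print(f"odds ratio = {odds_ratio:.1f}")  # 3.5 with these invented counts
```

With these made-up numbers, clients in the feedback condition have 3.5 times the odds of reliable change; the real figure comes from pooling many studies, not a single table like this.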

What can you do? First, order John’s book. Second, if you are not FIT, now is the time to register to use the measures.  And if you need support, why not join the International Center for Clinical Excellence? Like the measures, there is no cost. Right now, professionals from different disciplines, working in diverse settings are connecting with and learning from each other. Here’s a nudge: you’ll be able to reach John Norcross there—he’s one of ICCE’s newest members.

Filed Under: Behavioral Health, CDOI, Feedback, PCOMS Tagged With: cdoi, continuing education, icce, randomized clinical trial

Pushing the Research Envelope: Getting Researchers to Conduct Clinically Meaningful Research

November 5, 2010 By scottdm Leave a Comment


At the recent ACE conference, I had the pleasure of learning from the world’s leading experts on expertise and top performance.  Equally stimulating were the conversations in the hallways between presentations with clinicians, policy makers, and researchers attending the event.  One of those was Bill Andrews, the director of the HGI Practice Research Network in the UK, whose work over the last three-plus years has focused on clinicians whose outcomes consistently fall in the top quartile of effectiveness.

In this brief interview, Bill talks about the “new direction” his research on top performing clinicians is taking.  He is truly “pushing the research envelope,” challenging the field to move beyond simplistic randomized clinical trials comparing different treatment packages.  Take a look:

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, continuing education, evidence based practice, icce

Clinician Beware: Ignoring Research Can be Hazardous to Your Professional (and Economic) Health

September 25, 2010 By scottdm Leave a Comment

“Studies show…”
“Available data indicate…”
“This method is evidence-based…”
My how things have changed. Twenty years ago when I entered the field, professional training, continuing education events, and books rarely referred to research or evidence. Now, everyone refers to the “data.”  The equation is simple: no research = no money.  Having “an evidence-base” increasingly determines book sales, attendance at continuing education events, and myriad other funding and reimbursement decisions.

So what do the data actually say? Sadly, the answer is often, “it depends on who you ask.”  If you read the latest summary and treatment recommendations for post-traumatic stress disorder (PTSD) posted by the Cochrane Collaboration, you are told that TFCBT and EMDR are the most effective, “state of the art” treatments on offer.  Other summaries, as I recently blogged about, arrive at very different—even opposite—conclusions; namely, all psychotherapies (trauma-focused and otherwise) work equally well in the treatment of PTSD.  For the practicing clinician (as well as other consumers of research), the end result is confusion and, dare I say, despair.

Unable to resolve the discrepant findings, the research is either rejected out of hand (“it’s all crap anyway”) or cherry-picked (“your research is crap, mine is good”).  In a world where experts disagree–and vehemently–what is the average Joe or Jane therapist to do?

Fortunately, there is another way, beyond agnosticism and instead of fundamentalism.  In a word, it is engagement. This last week, I spent 5 days teaching an intensive workshop on “Statistics and Research Design” with ICCE Senior Associate Susanne Bargmann to a group of Danish psychologists.  That’s right.  Five days, 6 hours a day spent away from work and clients learning how to understand, read, and conduct research.

The goal of the training was simple and straightforward: help practitioners learn to evaluate the methods and meanings, strengths and weaknesses, and political and paradigmatic influences associated with research and evidentiary claims. At the conclusion of the five days, none of those assembled had difficulty engaging with and understanding the reasons for the seemingly discrepant findings noted above. As a result, they could state with confidence “what works” with PTSD, helping clarify this not only to colleagues, payers, and policy makers but also to consumers of behavioral health services.

The “Statistics and Research Design” course will be held again in Denmark in 2011.  If the experience of this year’s participants proves anything, it is that, “The only thing therapists have to fear about statistics and research design, is fear itself.”  Please contact Vinther and Mosgaard directly for more information.

Finally, as part of the International Center for Clinical Excellence (ICCE) efforts to improve the quality and outcome of behavioral health services worldwide, two additional intensive trainings will be offered in Chicago, Illinois (USA). First, the “Advanced Training in Feedback-Informed Treatment (FIT).”  And second, the annual “Training of Trainers.”   In the Advanced Training, participants learn:

  • The empirical foundations of feedback-informed clinical work (i.e., empirically supported factors underlying successful clinical work, the impact of feedback on performance)
  • Clinical skills for enhancing client engagement that cut across different therapeutic orientations and diverse treatment populations
  • How to integrate outcome management tools (including one or more of the following: ORS, SRS, CORE, and OQ 45) into clinical practice
  • How to use the outcome management tools to inform and improve service delivery
  • How to significantly improve your clinical skills and outcomes via feedback and deliberate practice
  • How to use data generated from outcome measures to inform management, supervision, and training decisions
  • Strategies for successful implementation of CDOI and FIT in your organization or practice
Need more information about the course?  Email us or click on the video below to hear more about the course.  In the meantime, space is limited so register early at: http://www.eventbrite.ie/o/the-international-centre-for-clinical-excellence-298540255.

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice Tagged With: cdoi, continuing education, denmark, icce, reimbursement

What Works in the Treatment of Post Traumatic Stress Disorder? The Definitive Study

September 15, 2010 By scottdm 1 Comment

What works in the treatment of people with post-traumatic stress?  The influential Cochrane Collaboration–an “independent network of people” whose self-professed mission is to help “healthcare providers, policy makers, patients, their advocates and carers, make well-informed decisions”–concludes that “non trauma-focused psychological treatments [do] not reduce PTSD symptoms as significantly…as individual trauma focused cognitive-behavioral therapy (TFCBT), eye movement desensitization and reprocessing, stress management and group TFCBT.”  The same conclusion was reached by the National Institute for Health and Clinical Excellence (or NICE) in the United Kingdom, which has developed and disseminated practice guidelines that unequivocally state that “all people with PTSD should be offered a course of trauma focused psychological treatment (TFCBT) or eye movement desensitization and reprocessing (EMDR).”  And they mean all: adults and kids, young and old.  Little room left for interpretation here.  No thinking is required.  Like the old Nike ad, you should: “Just do it.”

Wait a minute though…what do the data say? Apparently, the NICE and Cochrane recommendations are not based on, well…the evidence–at least, that is, the latest meta-analytic research!  Meta-analysis, you will recall, is a procedure for aggregating results from similar studies in order to test a hypothesis, such as, “are certain approaches for the treatment of post traumatic stress more effective than others?”  A year ago, I blogged about the publication of a meta-analysis by Benish, Imel, & Wampold which clearly showed that there was no difference in outcome between treatments for PTSD and that the designation of some therapies as “trauma-focused” was devoid of empirical support, a fiction.
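For readers who want to see the mechanics, a basic fixed-effect meta-analysis is just a precision-weighted average of each study’s effect size. The following Python sketch illustrates the idea with invented effect sizes and sample sizes; the numbers are hypothetical and bear no relation to the actual PTSD studies discussed here:

```python
import math

# Hypothetical studies: (Cohen's d, n_treatment, n_control). Illustrative only.
studies = [(0.45, 30, 30), (0.10, 50, 48), (0.30, 25, 27)]

def d_variance(d, n1, n2):
    # Approximate sampling variance of Cohen's d for two independent groups.
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# Each study is weighted by the inverse of its variance (more precise = more weight).
weights = [1 / d_variance(d, n1, n2) for d, n1, n2 in studies]
pooled_d = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled_d - 1.96 * se, pooled_d + 1.96 * se

print(f"pooled d = {pooled_d:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

A real meta-analysis like Benish, Imel, & Wampold’s involves far more (study selection, random-effects modeling, heterogeneity tests), but the core logic is this weighted pooling.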

So, how to account for the differences?  In a word, allegiance.  Although written by scientists, so-called “scholarly” reviews of the literature and “consensus panel” opinions inevitably reflect the values, beliefs, and theoretical predilections of the authors.  NICE guidelines, for example, read like a well-planned advertising campaign for a single psychotherapeutic modality: CBT.  Indeed, the organization is quite explicit in its objective: “provide support for the local implementation of…appropriate levels of cognitive behavioral therapy.”   Astonishingly, no other approach is accorded the same level of support or endorsement despite robust evidence of the equivalence of outcomes among treatment approaches.  Meanwhile, the review of the PTSD literature and treatment recommendations published by the Cochrane Collaboration has not been updated since 2007–it has sat untouched in the two years since the Benish et al. (2008) meta-analysis appeared–and it was penned by a prominent advocate of…CBT…Trauma-focused CBT.

As I blogged about back in January, researchers and prominent CBT proponents published a critique of the Benish et al. (2008) meta-analysis in the March 2010 issue of Clinical Psychology Review (Vol. 30, No. 2, pages 269-76).  Curiously, the authors chose not to replicate the Benish et al. study, but rather claimed that bias, arbitrariness, lack of transparency, and poor judgment accounted for the findings.   As I promised at the time, I’m making the response we wrote–which appeared in the most recent issue of Clinical Psychology Review–available here.

Of course, the most important finding of the Benish et al. (2008) meta-analysis and our later response (Wampold et al. 2010) is that mental health treatments work for people with post traumatic stress.  Such a conclusion is unequivocal.  At the same time, as we state in our response to the critique of Benish et al. (2008), “there is little evidence to support the conclusion…that one particular treatment for PTSD is superior to others or that some well defined ingredient is crucial to successful treatments of PTSD.”  Saying otherwise belies the evidence and diverts attention and scarce resources away from efforts likely to improve the quality and outcome of behavioral health services.


Filed Under: Behavioral Health, Practice Based Evidence Tagged With: Carl Rogers, continuing education, icce, post traumatic stress, PTSD, reimbursement

Finding Feasible Measures for Practice-Based Evidence

May 4, 2010 By scottdm Leave a Comment

Let’s face it.  Clinicians are tired.  Tired of paperwork (electronic or otherwise).  When I’m out and about training–which is every week, by the way–and encouraging therapists to monitor and measure outcomes in their daily work, few disagree in principle.  The pain is readily apparent, however, the minute the paper version of the Outcome Rating Scale flashes on the screen of my PowerPoint presentation.

It’s not uncommon nowadays for clinicians to spend 30-50% of their time completing intake, assessment, treatment planning, insurance, and other regulatory forms.  Recently, I was in Buffalo, New York working with a talented team of children’s mental health professionals.  It was not uncommon, I learned, to spend most of two outpatient visits doing the required paperwork.  When one considers that the modal number of sessions consumers attend is 1, and the average approximately 5, it’s hard not to conclude that something is seriously amiss.

Much of the “fear and loathing” dissipates when I talk about the time it usually takes to complete the Outcome and Session Rating Scales.  On average, filling out and scoring the measures takes about a minute apiece.  Back in January, I blogged about research on the ORS and SRS, including a summary in PDF format of all studies to date.  The studies make clear that the scales are valid and reliable.  Most important, however, for day-to-day clinical practice, the ORS and SRS are also the most clinically feasible measures available.
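Part of what makes the ORS so quick is that scoring is simple arithmetic. The sketch below assumes the standard paper format of the measure: four 10-cm visual analog lines (individual, interpersonal, social, and overall well-being), with each client mark measured in centimeters from the left edge and the four distances summed for a 0-40 total. The example marks are invented:

```python
# Hypothetical client marks on the four ORS lines, in cm from the left edge.
marks_cm = {
    "individual": 6.2,     # personal well-being
    "interpersonal": 5.8,  # family, close relationships
    "social": 7.1,         # work, school, friendships
    "overall": 6.5,        # general sense of well-being
}

# Each item ranges 0-10; the total therefore ranges 0-40.
total = round(sum(marks_cm.values()), 1)
print(f"ORS total = {total} / 40")
```

A clinician with a ruler does exactly this in seconds, which is the feasibility point: the scoring burden is trivial compared with multi-page outcome batteries.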

Unfortunately, many of the measures currently in use were never designed for routine clinical practice–certainly few therapists were consulted.  In order to increase “compliance” with such time-consuming outcome tools, many agencies advise clinicians to complete the scales occasionally (e.g., at “prime number” sessions [5, 7, 11, and so on]) or only at the beginning and end of treatment.  The very silliness of such ideas will be immediately apparent to anyone who has ever actually conducted treatment.  Who can predict a consumer’s last session?  Can you imagine a similar policy ever flying in medicine?  Hey Doc, just measure your patient’s heart rate at the beginning and end of the surgery!  In between? Fahgetaboutit.  Moreover, as I blogged about from behind the Icelandic ash plume, the latest research strongly favors routine measurement and feedback.  In real-world clinical settings, feasibility is every bit as important as reliability and validity.  Agency managers, regulators, and policy makers ignore it at their own (and their data’s) peril.

How did the ORS and SRS end up so brief and without any numbers?  When asked at workshops, I usually respond, “That’s an interesting story.”  And then continue, “I was in Israel teaching.  I’d just finished a two-day workshop on ‘What Works.'” (At the time, I was using and recommending the 10-item SRS and 45-item OQ).

“The audience was filing out of the auditorium and I was shutting down my laptop when the sponsor approached the dais.  ‘Scott,’ she said, ‘one of the participants has a last question…if you don’t mind.'”

“Of course not,” I immediately replied.

“His name is Haim Omer.  Do you know of him?”


Dr. Haim Omer

“Know him?” I responded, “I’m a huge fan!”  And then, feeling a bit weak in the knees asked, “Has he been here the w h o l e time?”

Haim was as gracious as ever when he finally made it to the front of the room.  “Great workshop, Scott.  I’ve not laughed so hard in a long time!”  But then he asked me a very pointed question.  “Scott,” he said and then paused before continuing, “you complained a bit about the length of the two measures you are using.  Why don’t you use a visual analog scale?”

“That’s simple Haim,” I responded, “It’s because I don’t know what a visual analog measure is!”

Haim described such scales in detail, gave me some examples (e.g., smiley and frowny faces), and even provided references.  Reviewing them on the flight home reminded me of a simple neuropsychological assessment I’d used on internship called “The Line Bisection Task”–literally a straight line (a measure developed by my neuropsych supervisor, Dr. Tom Schenkenberg).  And the rest is, as they say, history.
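For readers who have never seen one, the format Haim described is easy to reproduce: the ORS, for example, ended up as four 10 cm lines, and scoring amounts to measuring each mark in centimeters from the left end and summing, yielding a total from 0 to 40.  A minimal sketch of that arithmetic (the function name is mine, and the adult clinical cutoff of 25 noted in the comment is a commonly cited figure, not an official constant of the scale):

```python
# Illustrative scoring for a 4-item visual analog measure such as the ORS.
# Each item is a 10 cm line; the item score is the distance (in cm) of the
# client's mark from the left end, so totals run from 0 to 40. The "25"
# below is the commonly cited adult clinical cutoff -- an assumption here,
# not part of any official scoring manual.

def score_vas(marks_cm, items=4, line_length_cm=10.0):
    """Validate and sum per-item marks (in cm) into a total score."""
    if len(marks_cm) != items:
        raise ValueError(f"expected {items} marks, got {len(marks_cm)}")
    for mark in marks_cm:
        if not 0.0 <= mark <= line_length_cm:
            raise ValueError(f"mark {mark} outside 0-{line_length_cm} cm")
    return sum(marks_cm)

total = score_vas([6.5, 5.0, 7.0, 6.5])
print(total)  # 25.0 -- right at the assumed clinical cutoff
```

The whole operation, as noted above, takes about a minute with a ruler; no lookup tables or reverse-scored items required.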

Filed Under: deliberate practice, excellence, Feedback Informed Treatment - FIT Tagged With: continuing education, Dr. Haim Omer, Dr. Tom Schenkenberg, evidence based practice, icce, ors, outcome rating scale, session rating scale, srs

Feedback, Friends, and Outcome in Behavioral Health

May 1, 2010 By scottdm Leave a Comment


My first year in college, my declared major was accounting.  What can I say?  My family didn’t have much money and my mother–who chose my major for me–thought that the next best thing to wealth was being close to money.

Much to her disappointment I switched from accounting to psychology in my sophomore year.  That’s when I first met Dr. Michael Lambert.


Michael J. Lambert, Ph.D.

It was 1979 and I was enrolled in a required course taught by him on “tests and measures.”  He made an impression to be sure.  He was young and hip–the only professor I met while earning my Bachelor’s degree who insisted the students call him by his first name.  What’s more, his knowledge and passion made what everyone considered the “deadliest” class in the entire curriculum seem positively exciting.  (The text, Cronbach’s classic Essentials of Psychological Testing, 3rd Edition, still sits on my bookshelf–one of the few from my undergraduate days).  Within a year, I was volunteering as a “research assistant,” reading and then writing up short summaries of research articles.

Even then, Michael was concerned about deterioration in psychotherapy.  “There is ample evidence,” he wrote in his 1979 book, The Effects of Psychotherapy (Volume 1), “that psychotherapy can and does cause harm to a portion of those it is intended to help” (p. 6).  And where the entire field was focused on methods, he was hot on the trail of what later research would firmly establish as the single largest source of variation in outcome: the therapist.  “The therapist’s contribution to effective psychotherapy is evident,” he wrote, “…training and selection on dimensions of…empathy, warmth, and genuineness…is advised, although little research supports the efficacy of current training procedures.”  In a passage that would greatly influence the arc of my own career, he continued, “Client perception…of the relationship correlate more highly with outcome than objective judges’ ratings” (Lambert, 1979, p. 32).

Fast forward 32 years.  Recently, Michael sent me a pre-publication copy of a mega-analysis of his work on using feedback to improve outcome and reduce deterioration in psychotherapy.  Mega-analysis combines original, raw data from multiple studies–in this case 6–to create a large, representative data set of the impact of feedback on outcome.  In his accompanying email, he said, “our new study shows what the individual studies have shown.”  Routine, ongoing feedback from consumers of behavioral health services not only improves overall outcome but reduces risk of deterioration by nearly two thirds!    The article will soon appear in the Journal of Consulting and Clinical Psychology.
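The distinction matters: in a conventional meta-analysis, each study contributes only a summary effect size; in a mega-analysis, the raw client-level records are pooled into a single data set before anything is computed.  A toy sketch of the pooling step (the data and the `pool_and_compare` helper are invented for illustration; they are not Lambert’s actual method or numbers):

```python
# Toy illustration of mega-analysis: client-level records from several
# studies are pooled into one data set before analysis, rather than
# averaging each study's summary statistics. All numbers are invented.

def pool_and_compare(studies):
    """Pool per-study client records, then compare deterioration rates."""
    pooled = [record for study in studies for record in study]

    def rate(condition):
        group = [r for r in pooled if r["condition"] == condition]
        return sum(r["deteriorated"] for r in group) / len(group)

    return rate("feedback"), rate("no_feedback")

study_a = [{"condition": "feedback", "deteriorated": 0},
           {"condition": "no_feedback", "deteriorated": 1},
           {"condition": "feedback", "deteriorated": 0},
           {"condition": "no_feedback", "deteriorated": 0}]
study_b = [{"condition": "feedback", "deteriorated": 1},
           {"condition": "no_feedback", "deteriorated": 1}]

feedback_rate, no_feedback_rate = pool_and_compare([study_a, study_b])
print(feedback_rate, no_feedback_rate)
```

Pooling raw data this way preserves individual variation that per-study averages throw away, which is what makes the combined data set large and representative.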

Such results were not available when I first began using Lambert’s measure–the OQ 45–in my clinical work.  It was late 1996.  My colleagues and I had just put the finishing touches on Escape from Babel, our first book together on the “common factors.”

That’s when I received a letter from my colleague and mentor, Dr. Lynn Johnson.


Lynn D. Johnson, Ph.D.

In the envelope was a copy of an article Lynn had written for the journal, Psychotherapy entitled, “Improving Quality in Psychotherapy” in which he argued for the routine measurement of outcome in psychotherapy.  He cited three reasons: (1) providing proof of effectiveness to payers; (2) enabling continuous analysis and improvement of service delivery; and (3) giving consumers voice and choice in treatment.  (If you’ve never read the article, I highly recommend it–if for no other reason than its historical significance.  I’m convinced that the field would be in far better shape now had Lynn’s suggestions been heeded then).

Anyway, I was hooked.  I soon had a bootleg copy of the OQ and was using it in combination with Lynn’s Session Rating Scale with every person I met.

It wasn’t always easy.  The measure took time, and more than a few of my clients had difficulty reading and comprehending the items.  I was determined, however, and so persisted, occasionally extending sessions to 90 minutes so the client and I could read and score the 45 items together.

Almost immediately, routinely measuring and talking about the alliance and outcome had an impact on my work.  My average number of sessions began slowly “creeping up” as the number of single-session therapies, missed appointments, and no shows dropped.  For the first time in my career, I knew when I was and was not effective.  I was also able to determine my overall success rate as a therapist.  These early experiences also figured prominently in development of the Outcome Rating Scale and revision of the Session Rating Scale.

More on how the two measures–the OQ 45 and original 10-item SRS–changed from lengthy Likert scales to short, 4-item visual analog measures later.  At this point, suffice it to say I’ve been extremely fortunate to have such generous and gifted teachers, mentors, and friends.

Filed Under: Feedback Informed Treatment - FIT Tagged With: behavioral health, cdoi, continuing education, evidence based practice, holland, icce, Michael Lambert, Psychotherapy, public behavioral health

More Eruptions (in Europe and in Research)

April 20, 2010 By scottdm Leave a Comment

Dateline: Tuesday, 8:21pm, April 20th, 2010, Skellefteå, Sweden

What an incredible week.  Spent the day today working with 250 social workers, case managers, psychologists, psychiatrists, and agency directors in the far northern town of Skellefteå, Sweden.  Many practitioners here are already measuring outcomes on an ongoing basis and using the information to improve the results of their work with consumers of behavioral health services.  Today, I presented the latest findings from ICCE’s ongoing research on “Achieving Clinical Excellence.”

I’ve been coming to the area to teach and consult since the early 1990s, when I was first invited to work with Gun-Eva Langdahl and the rest of the talented crew at Rådgivningen Oden (RO).  As in previous years, I spent my first day (Monday) in Skellefteå watching sessions and working with clients at the RO clinic.  Frankly, getting to Skellefteå from Goteborg had been a bit of an ordeal.  What usually takes a little over an hour by plane ended up being a 12-hour combination of cars, trains, and buses–all due to volcanic eruptions on Iceland.  (I shudder to think of how I will get from Skellefteå to Amsterdam on Wednesday evening if air travel doesn’t resume).

Anyway, the very first visit of the day at Rådgivningen Oden was with an adolescent and her parents.  Per usual, the session started with everyone completing and discussing the Outcome Rating Scale.  The latest research reported in the April 2010 edition of the Journal of Consulting and Clinical Psychology (JCCP) confirms the wisdom of this practice: measuring and discussing progress with consumers at every visit results in better outcomes.

It turns out that adolescents are at greater risk for deteriorating in treatment than adults (20% versus 10%).  Importantly, the study in JCCP by Warren, Nelson, Mondragon, Baldwin, and Burlingame found that the more frequently measures are used the less likely adolescents are to worsen in care.  Indeed, as ICCE Senior Associate Susanne Bargmann pointed out in a series of recent emails about this important study, “routinely tracking and discussing progress led to 37% higher recovery rates and 38% lower rates of deterioration!”

Skellefteå is a hotbed of feedback-informed practice in Sweden.  Accompanying the family at Rådgivningen Oden, for example, were professionals from a number of other agencies involved in the treatment and wanting to learn more about outcome-informed practice.  As already noted, 250 clinicians took time away from their busy schedules to hear the latest information and finesse their use of the measures.  And tomorrow, Wednesday, I meet with managers and directors of behavioral health agencies to discuss steps for successfully implementing routine measurement of progress and feedback in their settings.  You can download a video discussing the work being done by the team at Oden in Northern Sweden by clicking here.

Stay tuned for more.  If all goes well, I’ll be in Amsterdam by Wednesday evening.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: behavioral health, continuing education, Journal of Consulting and Clinical Psychology, medicine, meta-analysis, public behavioral health

Eruptions in Europe and in Research

April 18, 2010 By scottdm 3 Comments

Dateline: 11:20 am, April 18th, 2010

Today I was supposed to fly from Stockholm, Sweden to the far northern town of Skelleftea–a flight that takes a little over an hour.  Instead, I’m sitting on a train headed for Sundsvall, the first leg of a 12 hour trip that will include a 6 hour bus ride and then a short stint in a taxi.

If you’ve been following the news coming out of Europe, you know that all flights into, out of, and around Europe have been stopped.  Eyjafjallajokull–an Icelandic volcano–erupted the day after I landed in Goteborg, spewing an ash cloud that now covers most of Europe and has disrupted millions of travellers.  People are making do, sleeping on cots in airline, train, and bus terminals and using Facebook and Twitter to connect and arrange travel alternatives.

In the meantime, another eruption has taken place with the publication of the latest issue of the Journal of Consulting and Clinical Psychology that threatens to be equally disruptive to the field of psychotherapy–and to proponents of the narrow, specific-treatments-for-specific-disorders or “evidence-based treatments” movement.   Researchers Webb, DeRubeis, and Barber conducted a meta-analysis of studies examining the relationship between adherence to and competence in delivering a particular approach and outcome.  The authors report finding that, “neither adherence nor competence was…related to patient (sic) outcome and indeed that the aggregate estimates of their effects were very close to zero.”

Zero!  I’m not sure what zero means to everyone else, but where I come from it’s pretty close to nothing.  And yet, the romance with the EBT movement continues among politicians, policy makers, and proponents of specific treatment models.  Each year, millions and millions of dollars of scarce resources are poured into an approach to behavioral health that accounts for exactly 0% of the results.

Although it was not a planned part of their investigation, the must-read study by Webb, DeRubeis, and Barber also points to the “magma” at the heart of effective psychotherapy: the alliance, or quality of the relationship between consumer and provider.  The authors report, for example, finding “larger competence-outcome effect size estimates [in studies that]…did not control for the influence of the alliance.”

The alliance will take center stage at the upcoming, “Achieving Clinical Excellence” and “Training of Trainers” events.  Whatever you thought you knew about effective therapeutic relationships will be challenged by the latest research from our study of top performing clinicians worldwide.  I hope you’ll join our international group of trainers, researchers, and presenters by clicking on either of the links above.  And, if you’ve not already done so, be sure and visit the International Center for Clinical Excellence home page and request an invitation to join the community of practitioners and researchers who are learning and sharing their expertise.

Filed Under: Behavioral Health, Practice Based Evidence Tagged With: behavioral health, brief therapy, continuing education, icce, Journal of Consulting and Clinical Psychology, Outcome, public behavioral health

Improving Outcomes in the Treatment of Obesity via Practice-Based Evidence: Weight Loss, Nutrition, and Work Productivity

April 9, 2010 By scottdm 4 Comments

Obesity is a large and growing problem in the United States and elsewhere.  Data gathered by the National Center for Health Statistics indicate that 33% of Americans are obese.  When overweight people are added to the mix, the figure climbs to a staggering 66%!   The problem is not likely to go away soon or on its own, as the same figures apply to children.

Researchers estimate that weight problems are responsible for over 300,000 deaths annually and account for 12% of healthcare costs, or $100 billion–that’s right, $100,000,000,000–in the United States alone.   The overweight and obese have higher incidences of arthritis, breast cancer, heart disease, colorectal cancer, diabetes, endometrial cancer, gallbladder disease, hypertension, liver disease, back pain, sleeping problems, and stroke–not to mention the tremendous emotional, relational, and social costs.  The data are clear: the overweight are the target of discrimination in education, healthcare, and employment.  A study by Brownell and Puhl (2003), for example, found that: (1) a significant percentage of healthcare professionals admit to feeling “repulsed” by obese persons, even among those who specialize in bariatric treatment; (2) parents provide less college support to their overweight children than to their “thin” ones; and (3) 87% of obese individuals reported that weight prevented them from being hired for a job.

Sadly, available evidence indicates that while weight problems are “among the easiest conditions to recognize,” they remain one of the “most difficult to treat.”  Weight loss programs abound.  When was the last time you watched television and didn’t see an ad for a diet pill, program, or exercise machine?  Many work.  Few, however, lead to lasting change.

What might help?

More than a decade ago, I met Dr. Paul Faulkner, the founder and then Chief Executive Officer of Resources for Living (RFL), an innovative employee assistance program located in Austin, Texas.  I was teaching a week-long course on outcome-informed work at the Cape Cod Institute in Eastham, Massachusetts.  Paul had long searched for a way of improving outcomes and service delivery that could simultaneously be used to provide evidence of the value of treatment to purchasers–in the case of RFL, the large, multinational companies that were paying him to manage their employee assistance programs.  Thus began a long relationship between me and the management and clinical staff of RFL.  I was in Austin, Texas dozens of times providing training and consultation as well as setting up the original ORS/SRS feedback system known as ALERT, which is still in use at the organization today.  All of the original reliability, validity, norming, and response trajectories were done together with the crew at RFL.

Along the way, RFL expanded services to disease management, including depression, chronic obstructive pulmonary disease, diabetes, and obesity.  The “weight management” program delivered coaching and nutritional consultation via the telephone, informed by ongoing measurement of outcomes and the therapeutic alliance using the SRS and ORS.  The results are impressive.  The study by Ryan Sorrell, a clinician and researcher at RFL, not only found that the program and feedback led to weight loss, but also to significant improvements in distress, healthy eating behaviors (70%), exercise (65%), and presenteeism on the job (64%)–the latter being critical to the employers paying for the service.

Such research adds to the growing body of literature documenting the importance of “practice-based” evidence, making clear that finding the “right” or “evidence-based” approach for obesity (or any problem for that matter) is less important than finding out “what works” for each person in need of help.  With challenging, “life-style” problems, this means using ongoing feedback to inform whatever services may be deemed appropriate or necessary.  Doing so not only leads to better outcomes, but also provides real-time, real-world evidence of return on investment for those footing the bill.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, cognitive-behavioral therapy, conferences, continuing education, diabetes, disease management, Dr. Paul Faulkner, evidence based medicine, evidence based practice, Hypertension, medicine, obesity, ors, outcome rating scale, practice-based evidence, public behavioral health, randomized clinical trial, session rating scale, srs, Training

Neurobabble: Comments from Dr. Mark Hubble on the Latest Fad in the World of Therapy

March 24, 2010 By scottdm Leave a Comment


Rarely does a day go by without hearing about another “advance” in the neurobiology of human behavior.  Suddenly, it seems, the world of psychotherapy has discovered that people have brains!  And now, where the unconscious, childhood, emotions, behaviors, and cognitions once were…neurons, plasticity, and magnetic resonance imaging now are.  Alas, we are a field forever in search of legitimacy.  My long time colleague and friend, Mark Hubble, Ph.D., sent me the following review of recent developments.  I think you’ll enjoy it, along with a video by comedian John Cleese on the same subject.

Mark Hubble, Ph.D.

Today, while contemplating the numerous chemical imbalances that are unhinging the minds of Americans — notwithstanding the longstanding failure of the left brain to coach the right with reason, and the right to enlighten the left with intuition — I unleashed the hidden power of my higher cortical functioning to the more pressing question of how to increase the market share for practicing therapists. As research has dismantled once and for all the belief that specific treatments exist for specific disorders, the field is left, one might say, in an altered state of consciousness. If we cannot hawk empirically supported therapies or claim any specialization that makes any real difference in treatment outcome, we are truly in a pickle. All we have is ourselves, the relationships we can offer to our clients, and the quality of their participation to make it all work. This, of course, hardly represents a propitious proposition for a business already overrun with too many therapists, receiving too few dollars.

Fortunately, the more energetic and enterprising among us, undeterred by the demise of psychotherapy as we know it, are ushering in the age of neuro-mythology and the new language of neuro-babble.   Seemingly accepting wholesale the belief that the brain is the final frontier, some are determined to sell us the map thereto and make more than a buck while they are at it. Thus, we see terms such as “Somatic/sensorimotor Psychotherapy,” “Interpersonal Neurobiology,” “Neurogenesis and Neuroplasticity,”  “Unlocking the Emotional Brain,” “NeuroTherapy,” “Neuro Reorganization,” and so on.  A moment’s look into this burgeoning literature quickly reveals the existence of an inverse relationship between the number of scientific-sounding assertions and actual studies proving the claims made. Naturally, this finding is beside the point, because the purpose is to offer the public sensitive, nuanced brain-based solutions for timeless problems. Traditional theories and models are out; psychotherapies-informed-by-neuroscience, with the aura of greater credibility, are in.

Neurology and neuroscience are worthy pursuits. To suggest, however, that the data emerging from these disciplines have reached the stage of offering explanatory mechanisms for psychotherapy, including the introduction of “new” technical interventions, is beyond the pale. Metaphor and rhetoric, though persuasive, are not the same as evidence emerging from rigorous investigations establishing and validating cause and effect, independently verified, and subject to peer review.

Even without resorting to obfuscation and pseudoscience, we already have a pretty good idea of how psychotherapy works and what can be done now to make it more effective for each and every client. From one brain to another: applying that knowledge is a good case of using the old noggin.

Filed Under: Brain-based Research, Practice Based Evidence Tagged With: behavioral health, brief therapy, continuing education, mark hubble, meta-analysis, neuro-mythology, Norway, psychotherapy, public behavioral health

Are all treatments approaches equally effective?

January 9, 2010 By scottdm Leave a Comment

Bruce Wampold, Ph.D.

Late yesterday, I blogged about a soon-to-be published article in Clinical Psychology Review in which the authors argue that the finding by Benish, Imel, & Wampold (2008) of equivalence in outcomes among treatments for PTSD was due to, “bias, over-generalization, lack of transparency, and poor judgement.”  Which interpretation of the evidence is correct?  Are there “specific approaches for specific disorders” that are demonstrably more effective than others?  Or does the available evidence show all approaches intended to be therapeutic to be equally effective?

History makes clear that science produces results in advance of understanding.  Until the response to Ehlers, Bisson, Clark, Creamer, Pilling, Richards, Schnurr, Turner, and Yule becomes available, I wanted to remind people of three prior blog posts that review the evidence regarding differential efficacy of competing therapeutic approaches.  The first (and I think most illuminating)–“The Debate of the Century“–appeared back in August.  The post featured a link to a debate between Bruce Wampold and an enthusiastic proponent of “empirically supported treatments,” Steve Hollon.  Listen and then see if you agree with the large group of scientists and practitioners in attendance who thought–by a margin of 15:1–that Bruce carried the day.

The second post–Whoa Nellie!–commented on a $25 million research grant awarded by the US Department of Defense to study treatments for PTSD.  Why does this make me think of Deep Throat’s admonition to “follow the money”?  Here you can read the study that is causing the uproar within the “specific treatments for specific disorders” gang.

Third, and finally, if you haven’t already read the post “Common versus Specific Factors and the Future of Psychotherapy,” I believe you’ll find the thorough review of the research done in response to an article by Siev and Chambless critical of the “dodo verdict” helpful.

Filed Under: Behavioral Health, evidence-based practice, Practice Based Evidence, PTSD Tagged With: behavioral health, bruce wampold, Children, continuing education, icce, post traumatic stress, PTSD, public behavioral health

DODO BIRD HYPOTHESIS PROVEN FALSE! Study of PTSD finally proves Wampold, Miller, and other "common factor" proponents wrong

January 8, 2010 By scottdm 3 Comments

The Dodo Bird

Researchers Anke Ehlers, Jonathon Bisson, David Clark, Mark Creamer, Steven Pilling, David Richards, Paula Schnurr, Stuart Turner, and William Yule have finally done it!  They slayed the “dodo.”  Not the real bird, of course–that beast has been extinct since the mid-to-late 17th century–but rather the “dodo bird” conjecture first articulated by Saul Rosenzweig, Ph.D. in 1936.  The idea that all treatment approaches work about equally well has dogged the field–and driven proponents of “specific treatments for specific disorders” positively mad.  In a soon-to-be-published article in Clinical Psychology Review, the authors claim that bias, overgeneralization, lack of transparency, and poor judgement account for the finding that “all therapeutic approaches work equally well for people with a diagnosis of PTSD” reported in a meta-analysis by Benish, Imel, & Wampold (2008).

I guess this means that a public admission by me, Wampold, and other common factors researchers is in order…or maybe not!  Right now, we are writing a response to the article.  All I can say at this point is, “unbelievable!”  As soon as it becomes available, you’ll find it right here on this blog.  I’ll be drawing inspiration from Saul Rosenzweig, who passed away in 2004–still working at 96 years of age.  It was such an honor to meet him.

Filed Under: Behavioral Health, Dodo Verdict Tagged With: behavioral health, Children, continuing education, icce, medicine, meta-analysis, post traumatic stress, public behavioral health, reimbursement

Research on the Outcome Rating Scale, Session Rating Scale & Feedback

January 7, 2010 By scottdm Leave a Comment

“How valid and reliable are the ORS and SRS?”  “What do the data say about the impact of routine measurement and feedback on outcome and retention in behavioral health?”  “Are the ORS and SRS ‘evidence-based?'”

These and other questions regarding the evidence supporting the ORS, SRS, and feedback are becoming increasingly common in the workshops I’m teaching in the U.S. and abroad.

As indicated in my December 24th blogpost, routine outcome monitoring (PROMS) has even been endorsed by “specific treatments for specific disorders” proponent David Barlow, Ph.D., who stated unequivocally that “all therapists would soon be required to measure and monitor the outcome of their clinical work.”  Clearly, the time has come for all behavioral health practitioners to be aware of the research regarding measurement and feedback.

Over the holidays, I updated a summary of the data to date that has long been available to trainers and associates of the International Center for Clinical Excellence.  The PDF reviews all of the research on the psychometric properties of the outcome and session rating scales as well as the studies using these and other formal measures of progress and the therapeutic relationship to improve outcome and retention in behavioral health services.  The topic is so important that I’ve decided to make the document available to everyone.  Feel free to distribute the file to any and all colleagues interested in staying up to date on this emerging mega-trend in clinical practice.

Measures And Feedback from Scott Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, continuing education, david barlow, evidence based medicine, evidence based practice, feedback, Hypertension, icce, medicine, ors, outcome measurement, outcome rating scale, post traumatic stress, practice-based evidence, proms, randomized clinical trial, session rating scale, srs, Training

The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) remedial to the problem being treated (i.e., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome.  At the same time, as I’ve pointed out in prior blogposts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and yet all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.
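Incidentally, the “80%” figure follows from the classic aggregate effect size of roughly d = 0.8 reported in the psychotherapy outcome literature: assuming normally distributed outcomes, the average treated person scores above about 79% of untreated controls.  The conversion is a one-liner (the `superiority` helper is mine, for illustration):

```python
# Convert a standardized effect size d into the probability that a
# randomly chosen untreated control scores below the average treated
# person, assuming normal distributions with equal variance.
import math

def superiority(d):
    """Standard normal CDF at d: P(control < treated mean)."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

print(round(superiority(0.8), 2))  # 0.79 -- the "better off than ~80%" claim
```

Note that the same conversion works in reverse: an approach would need a dramatically larger d to move that percentage much, which is part of why 30 years of model development has budged it so little.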

The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved.  (It’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples.)  On the other hand, the common factors camp, in place of techniques, proffers an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It’s a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or studying the parts of speech–in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and long time mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific or common factors informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates in the feedback condition were half (50%) those in the no-feedback condition!

Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.”  At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.–a longtime proponent of “specific treatments for specific disorders” (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it’s not seeking feedback that is important.  Rather, it’s what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? About the same question. We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that in order to have a lasting effect, feedback must increase a performer’s “domain-specific knowledge.”  Feedback must result in the performer knowing more about his or her area–and more about how and when to apply that knowledge to specific situations–than others.  Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players.  Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior-performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these and highlighted a few others in the research pipeline during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.


Whoa Nellie! A 25 Million Dollar Study of Treatments for PTSD

October 27, 2009 By scottdm 1 Comment

I have in my hand a frayed and yellowed copy of observations once made by a well known trainer of horses. The trainer’s simple message for leading a productive and successful professional life was, “If the horse you’re riding dies, get off.”

You would think the advice straightforward enough for all to understand and benefit from.  And yet, the trainer pointed out, “many professionals don’t always follow it.”  Instead, they choose from an array of alternatives, including:

  1. Buying a strong whip
  2. Switching riders
  3. Moving the dead horse to a new location
  4. Riding the dead horse for longer periods of time
  5. Saying things like, “This is the way we’ve always ridden the horse.”
  6. Appointing a committee to study the horse
  7. Arranging to visit other sites where they ride dead horses more efficiently
  8. Increasing the standards for riding dead horses
  9. Creating a test for measuring our riding ability
  10. Complaining about the state of the horse these days
  11. Coming up with new styles of riding
  12. Blaming the horse’s parents, as the problem is often in the breeding.
When it comes to the treatment of post-traumatic stress disorder, it appears the Department of Defense is applying all of the above.  Recently, the DoD awarded its largest grant ever to “discover the best treatments for combat-related post-traumatic stress disorder” (APA Monitor).  Beneficiaries of the award were naturally ecstatic, stating, “The DoD has never put this amount of money to this before.”
Missing from the announcements was any mention of research which clearly shows no difference in outcome between approaches intended to be therapeutic—including the two approaches chosen for comparison in the DoD study!  In June 2008, researchers Benish, Imel, and Wampold conducted a meta-analysis of all studies in which two or more treatment approaches were directly compared.  The authors conclude, “Given the lack of differential efficacy between treatments, it seems scientifically questionable to recommend one particular treatment over others that appear to be of comparable effectiveness. . . . keeping patients in treatment would appear to be more important in achieving desired outcomes than would prescribing a particular type of psychotherapy” (p. 755).
Ah yes, the horse is dead, but proponents of “specific treatments for specific disorders” ride on.  You can hear their rallying cry: “We will find a more efficient and effective way to ride this dead horse!”  My advice?  Simple: let’s get off this dead horse.  There are any number of effective treatments for PTSD.  The challenge is decidedly not figuring out which one is best for all, but rather “what works” for the individual.  In these recessionary times, I can think of far better ways to spend $25 million than on another “horse race” between competing therapeutic approaches.  Evidence-based methods exist for assessing and adjusting both the “fit and effect” of clinical services—the methods described, for instance, in the scholarly publications section of my website.  Such methods have been found to improve both outcome and retention by as much as 65%.  What will happen?  Though I’m hopeful, I must say that the temptation to stay on the horse you chose at the outset of the race is a strong one.

