SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


Public Attitudes Toward Mental Health Services: A Change for the Worse

July 3, 2014 By scottdm 1 Comment

Here it is

The results are not encouraging.  A recent meta-analysis found that public attitudes toward psychotherapy have become progressively more negative over the last 40 years.  The impact on practitioners is staggering.  Between 1997 and 2007, use of psychotherapy declined by 35%.  Not surprisingly, clinicians’ incomes also suffered, dropping 15-20% over the last decade.

So, if not psychotherapy, what do consumers of mental health services really want?

Well, if you trust the study I’ve cited, the answer seems clear: drugs.  During the same time period that talking fell out of favor, use of pharmaceuticals increased a whopping 75%!  Some blame society’s short attention span and desire for a “quick fix.”  Such an argument hardly seems credible, however, given that psychotherapy works to alleviate distress as fast or faster than most psychotropics.

Others, including the authors of the meta-analysis, blame public education campaigns and pharmacological marketing aimed at “convincing the public that mental disorders have a neurobiological etiology that require biological treatments” (p. 103).  At first glance, this idea is compelling.  After all, every year, the pharmaceutical industry spends $5 billion on direct-to-consumer advertising.

And yet, what is it the drug companies are really selling in those ads?  In one of the most well-known TV commercials for a popular antidepressant, less than 7 seconds is spent on the supposed neurobiological cause.  Instead, the majority of the time is spent depicting the positive results one can expect from the product.  It’s marketing 101: focus on the benefits, not the features, of whatever you’re selling.

What do consumers want?  The answer is: results.  Your training, degree, certification, and treatment approach are irrelevant, mere features most consumers couldn’t care less about.  Your rate of effectiveness is another matter entirely–it’s the benefit people are looking for from working with you.

So, how effective are you?  Do you know?  Not a guess or a hunch, but the actual number of people you treat who are measurably improved?  If not, it’s easy to get started.  Begin by downloading two simple, free, SAMHSA-approved scales for measuring the progress and quality of mental health services.  Next, visit www.whatispcoms.com to learn how individual practitioners and agencies can use these tools to monitor and improve outcome and retention in treatment, as well as communicate results effectively to consumers.

To see how outcomes attract consumers, just take a look at the Colorado Center for Clinical Excellence website.   This Denver-based group of practitioners is a model for the future of clinical practice.

Filed Under: Behavioral Health Tagged With: antidepressants, Colorado Center for Clinical Excellence, drugs, meta-analysis, ors, outcome rating scale, pharmacological, psychotherapy, SAMHSA, session rating scale, srs

Is Supervision Important to you?

June 20, 2014 By scottdm 1 Comment

How valuable is clinical supervision to you?  In their massive, long-term international study of therapist development, researchers Orlinsky and Rønnestad (2005) found that “practitioners at all experience levels, theoretical orientations, professions, and nationalities report that supervised client experience is highly important for their current and career development” (p. 188).

Despite the value most of us place on the process, the latest review of the literature found no empirical evidence “that psychotherapy supervision contributes to patient outcome” (Watkins, 2011).  Said another way, supervision does not produce more effective clinicians.  The result?  In the US, at least, opportunities for clinical supervision are on the decline, replaced by growing documentation requirements and administrative oversight–a trend destined to continue if the dearth of evidence persists.

What can be done?  Simply put, solicit formal feedback from clients regarding their experience of progress and the therapeutic relationship.  Such information, in turn, can be used to guide supervision, providing both a focus for the consultation and data supporting its effectiveness.  After all, multiple studies already document that the process improves outcomes while simultaneously decreasing dropout and deterioration rates (Miller, 2013).

Getting started is not difficult.  First, access two free, easy-to-use scales for monitoring client progress and the relationship.  Second, join colleagues in the largest online community of behavioral health professionals in the world.  It’s free–no hidden costs or secret levels of premium content.  On the ICCE, you can connect and consult with practitioners who are using feedback to improve the quality and outcome of treatment and supervision.  If you are new to feedback-informed treatment (FIT)–a SAMHSA-certified evidence-based practice–you can get a thorough introduction at: www.whatispcoms.com.

Finally, get the Feedback-Informed Supervision manual and the newly released, two-hour DVD.  Both provide step-by-step instructions and examples of integrating feedback into supervision.  While you are at it, join us for our Feedback-Informed Supervision Intensive.  Last time around, it sold out months in advance.  Registration is now open for our next training in March 2015.

Filed Under: Feedback Informed Treatment - FIT Tagged With: clinical supervision, feedback informed treatment, icce, international center for clinical excellence, Orlinsky, ors, outcome rating scale, PCOMS, psychotherapy supervision, Rønnestad, SAMHSA, session rating scale, srs

What’s in an Acronym? CDOI, FIT, PCOMS, ORS, SRS … all BS?

June 7, 2014 By scottdm Leave a Comment

“What’s in a name?”

–William Shakespeare

A little over a week ago, I received an email from Anna Graham Anderson, a graduate student in psychology at Aarhus University in Denmark.  “I’m writing,” she said, “in hopes of receiving some clarifications.”

Anna Graham Anderson

Without reading any further, I knew exactly where Anna was going.  I’d fielded the same question before.  As interest in measurement and feedback has expanded, it comes up more and more frequently.

Anna continued,  “I cannot find any literature on the difference between CDOI, FIT, PCOMS, ORS, and SRS.  No matter where I search, I cannot find any satisfying clues.  Is it safe to say they are the same?”  Or, as another asked more pointedly, “Are all these acronyms just a bunch of branding B.S.?”

I answered, “B.S.?  No.  Confusing?  Absolutely.  So, what is the difference?”

As spelled out in each of the six treatment and training manuals, FIT, or feedback-informed treatment, is “a pantheoretical approach for evaluating and improving the quality and effectiveness of behavioral health services.  It involves routinely and formally soliciting feedback from consumers regarding the therapeutic relationship and outcome of care and using the resulting information to inform and tailor service delivery.”

Importantly, FIT is agnostic regarding both the method of treatment and the particular measures a practitioner may employ.  Some practitioners use the ORS and SRS, two brief, simple-to-use, and free measures of progress and the therapeutic relationship–but any other valid and reliable scales could be used.

Of all the acronyms associated with my work, CDOI is the one I no longer use.  For me, it had always been problematic, as it came precariously close to being a treatment model, a way of doing therapy.  I wasn’t interested in creating a new therapeutic approach.  My work and writing on the common factors had long ago convinced me the field needed no more therapeutic schools.  The phrase “client-directed, outcome-informed” described the team’s position at the time, with one foot in the past (how to do therapy), the other in the future (feedback).

And PCOMS?  A long time ago, my colleagues and I had a dream of launching a web-based “system for both monitoring and improving the effectiveness of treatment” (Miller et al., 2005).  We did some testing at an employee assistance program located in Texas, formed a corporation called PCOMS (Partners for Change Outcome Management System), and even hired a developer to build the site.  In the end, nothing happened.  Over time, the acronym PCOMS began to be used as an overall term referring to the ORS, SRS, and norms for interpreting the scores.  In February 2013, the Substance Abuse and Mental Health Services Administration (SAMHSA) formally recognized PCOMS as an evidence-based practice.  You can read more about PCOMS at: www.whatispcoms.com.

I expect there will be new names and acronyms as the work evolves.  While some remain, others, like fossils, are left behind; evidence of what has come before, their sum total a record of development over time.

Filed Under: Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based medicine, evidence based practice, feedback informed treatment, FIT, ors, outcome measurement, outcome rating scale, PCOMS, SAMHSA, session rating scale, srs, Substance Abuse and Mental Health Services Administration

Are you any good as a therapist? The Legacy of Paul W. Clement

March 26, 2014 By scottdm 4 Comments

Paul Clement

Twenty years ago, I came across an article published in the journal Professional Psychology.  It was written by a psychologist in private practice, Paul Clement.  The piece caught my eye for a number of reasons.  First, although we’d never met, Paul lived and worked in a town near my childhood home: Pasadena, California.  Second, the question he opened his article with was provocative, to say the least: “Are you any good?”  In other words, how effective are YOU as a psychotherapist?  Third, and most important, he had compiled and was reporting a quantitative analysis of his results over his 26 years as a practicing clinician.  It was both riveting and stunning.  No one I knew had ever published anything similar before.

In graduate school, I’d learned to administer a variety of tests (achievement, vocational, personality, projective, IQ, etc.).  Not once, however, did I attend a course or sit in a lecture about how to measure my results.   I was forced to wonder, “How could that be?”  Six years in graduate school and not a word about evaluating one’s outcomes.  After all, if we don’t know how effective we are, how are any of us supposed to improve?

What was the reason for the absence of measurement, evaluation, and analysis?  It certainly wasn’t that psychotherapy was ineffective.  A massive amount of research existed documenting the effectiveness of treatment.  Paul’s research confirmed these results.  Of those he’d worked with, 75% were improved at termination.  Moreover, such results were obtained in a relatively brief period of time–a median of 12 sessions.

Other results he reported were not so easy to accept.  In short, Paul’s analysis showed that his outcomes had not improved over the course of his career.   At the conclusion of the piece, he observed, “I had expected to find that I had gotten better and better over the years, but my data failed to suggest any systematic change in my therapeutic effectiveness across the 26 years in question…it was a bad surprise for me.” (p. 175).

For years, I carried the article with me in my briefcase, hoping that one day, I might better understand his findings.   Maybe, I thought, Clement was simply an outlier?  Surely, we get better with experience.  It was hard for me to believe I hadn’t improved since my first, ham-handed sessions with clients.  Then again, I didn’t really know.  I wasn’t measuring my results in any meaningful way.

The rest is history.  Within a few short years, I was routinely monitoring the outcome and alliance at every session I did with clients.  Thanks to my undergraduate professor, Michael Lambert, Ph.D., I began using the OQ-45 to assess outcomes.  Another mentor, Dr. Lynn Johnson, had developed a 10-item scale for evaluating the quality of the therapeutic relationship, known as the Session Rating Scale.  Both tools became an integral part of the way I worked.  Eventually, a suggestion by Haim Omer, Ph.D., led me to consider creating shorter, less time-consuming visual analogue versions of both measures.  In time, my colleagues and I developed and tested the ORS and SRS.  Throughout this process, Paul Clement and his original study remained an important, motivating force.

Just over a year ago, Paul sent me an article evaluating 40 years of his work as a psychotherapist.   Once again, I was inspired by his bold, brave, and utterly transparent example.  Not only had his outcomes not improved, he reported, they’d actually deteriorated!  Leave it to him to point the way!   As readers of this blog know, our group is busy at work researching what it takes to forestall such deterioration and improve effectiveness.  Last year, we summarized our findings in the 50th Anniversary issue of Psychotherapy.  As I write, we are preparing a more detailed report for publication in the same journal.

Yesterday, I was drafting an email, responding to one I’d recently received from him, when I learned Paul had died.  I will miss him.  In this, I know I’m not alone.

Filed Under: Top Performance Tagged With: clinician, Haim Omer, Lynn Johnson, Michael Lambert, OQ45, ors, outcome rating scale, Paul Clement, popular psychology, practice-based evidence, psychotherapy, session rating scale, srs, top performance

The Revolution in Swedish Mental Health Services: UPDATE on the CBT Monopoly

April 5, 2013 By scottdm Leave a Comment

No blogpost I’ve ever published has received as much attention as the one on May 13th, 2012, detailing changes to Swedish mental health practice.  At the time, I reported on research showing that the massive investment of resources in training therapists in CBT had not translated into improved outcomes or efficiency in the treatment of people with depression and anxiety.  In short, the public experiment of limiting training and treatment to so-called “evidence-based methods” had failed to produce tangible results.  The findings generated publications in Swedish journals, as well as commentary in Swedish newspapers and on the radio.

I promised to keep people updated if and when research became available in languages other than Swedish.  This week, the journal Psychotherapy published an article comparing outcomes of three different treatment approaches: CBT, psychodynamic, and integrative-eclectic psychotherapy.  Spanning a three-year period and gathered at 13 outpatient clinics, the results showed that psychotherapy was remarkably effective regardless of the type of treatment offered!  Read the study yourself and then ask: when will a simpler, less expensive, and more client-centered approach to ensuring effective and efficient behavioral health services be adopted?  Routinely seeking feedback from consumers regarding the process and outcome of care provides such an alternative.  The failure to find evidence that adopting specific models for specific disorders improves outcomes indicates the time has come.  You can learn more about feedback-informed treatment (FIT), a practice recently designated “evidence-based” by the Substance Abuse and Mental Health Services Administration (SAMHSA), by visiting the International Center for Clinical Excellence web-based community or attending an upcoming training with me in Chicago or on the road.

  • Learn more about what is going on in Sweden by reading:

Everyday Evidence: Outcomes of Psychotherapies in Swedish Public Health Services (Psychotherapy, Werbart et al., 2013)

  • Here’s one additional reference for those of you who read Swedish.  It’s the official summary of the results from the study that started this entire thread:
Delrapport II slutversion (Swedish: “Interim report II, final version”)

Filed Under: Practice Based Evidence Tagged With: CBT, evidence based practice, ors, outcome rating scale, psychotherapy, session rating scale, srs, sweden

Believing is Seeing: How Wishing Makes Things So

January 3, 2013 By scottdm Leave a Comment

Yesterday evening, my family and I were watching a bit of T.V.  My son, Michael, commented about all the ads for nutritional supplements, juicing machines, weight loss programs and devices.  “Oh yeah,” I thought, then explained to him, “It’s the start of a new year.”  Following “spending more time with family,” available evidence shows exercise and weight loss top the bill of resolutions.  Other research shows that a whopping 80% eventually break these well-intentioned commitments.  Fully a third won’t even make it to the end of the month!  Most attribute the failure to being too busy, others to a lack of motivation.  Whatever the cause, it’s clear that, when it comes to change, hope and belief will only take you so far.

What can help?  More on that in a moment.

In the meantime, consider a recent study on the role of hope and belief in research on psychotherapy.  Beginning in the 1970s, study after study–and studies of studies–has found a substantial association between the effectiveness of particular treatment models and the beliefs of the researchers who conduct the specific investigations.  In the literature, the findings are referred to under the generic label “researcher allegiance,” or R.A.  Basically, psychotherapy outcome researchers tend to find in favor of the approach they champion, believe in, and have an affinity for.  Unlike New Year’s resolutions, it seems, the impact of hope and belief in psychotherapy outcome research is not limited; indeed, it carries investigators all the way to success–albeit a result that is completely “in the eye of the beholder.”  That is, if one believes the research.  Some don’t.

Hang with me now as I review the controversy about this finding.  As robust as the results on researcher allegiance appear, an argument can be made that the phenomenon is a reflection rather than a cause of differences in treatment effectiveness.  The argument goes: researcher allegiance is produced by the very thing it is said to explain–real differences in outcome between approaches.  In short, researchers’ beliefs do not cause the effects so much as the superior effects of the methods cause researchers to believe.  Makes sense, right?  And there the matter has largely languished, unresolved for decades.

That is, until recently.  Turns out, believing is seeing.  Using a sample of studies in which treatments with equivalent efficacy were directly compared within the same study, researchers Munder, Flückiger, Gerger, Wampold, and Barth (2012) found that a researcher’s allegiance to a particular method systematically biases the results in favor of the chosen approach.  The specific methods included in this study were all treatments designated as “trauma-focused” and deemed “equally effective” by panels of experts such as the U.K.’s National Institute for Clinical Excellence.  Since the trauma-focused approaches are equivalent in outcome, researcher allegiance should not have been predictive of outcome.  Yet it was–accounting for an incredible 12% of the variance.  When it comes to psychotherapy outcome research, wishing makes it so.

What’s the “take-away” for practitioners?  Belief is powerful stuff: it can either help you see possibilities or blind you to important realities.  Moreover, you cannot check your beliefs at the door of the consulting room, nor would you want to.  Every day, therapists encourage people to take the first steps toward a happier, more meaningful life by rekindling hope.  However, if researchers, bound by adherence to protocol and subject to peer review, can be fooled, so can therapists.  The potentially significant consequences of unchecked belief become apparent when one considers a recently published study by Walfish et al. (2012), which found that therapists, on average, overestimate their effectiveness by 65%.

When it comes to keeping New Year’s resolutions, experts recommend avoiding broad promises and grand commitments and instead advise setting small, concrete, measurable objectives.  Belief, it seems, is most helpful when its aims are clear and its effects routinely verified.  One simple way to implement this sage counsel in psychotherapy is to routinely solicit feedback from consumers about the process and outcome of the services offered.  Doing so, research clearly shows, improves both retention and effectiveness.

You can get two simple, easy-to-use scales for free by registering at: http://scottdmiller.com/srs-ors-license/  A worldwide community of behavioral health professionals is available to support your efforts at: www.centerforclinicalexcellence.com.

You can also join us in Chicago for four days of intensive training.  We promise to challenge your beliefs and provide you with the skills and tools necessary for pushing your clinical performance to the next level of effectiveness.

Filed Under: Feedback Informed Treatment - FIT Tagged With: NICE, ors, outcome rating scale, psychotherapy, session rating scale, srs, wampold

Feedback in Groups: New Tools, New Evidence

December 29, 2012 By scottdm Leave a Comment

 

Groups are an increasingly popular mode for delivering behavioral health services.  Few would deny that using the same hour to treat multiple people is more cost-effective.  A large body of research shows group treatment to be, in general, as effective as individually delivered treatments.

Now clinicians can incorporate feedback into group therapy using a brief, scientifically validated measurement scale: the Group Session Rating Scale.  The measure is part of the packet of FIT tools available in 20+ languages on both my personal and the International Center for Clinical Excellence websites.  Since the alliance is one of the most robust predictors of outcome, the GSRS provides yet another method for helping therapists obtain feedback from consumers of behavioral health services.  As readers of this blog know, over a dozen randomized clinical trials document the positive impact of routinely assessing consumers’ experience of progress and the alliance on both retention and outcome of treatment.

The most up-to-date information about incorporating the GSRS into group therapy is covered in Manual 5: Feedback Informed Clinical Work: Specific Populations and Service Settings written together with ICCE Senior Associates Julie Tilsen, Cynthia Maeschalck, Jason Seidel, and Bill Robinson.

Manual 5 is one of six state-of-the-art, how-to volumes on Feedback-Informed Treatment.  The series covers every aspect of FIT, from supporting research to implementation in agencies and larger systems of care.  They were developed and submitted in partial support of ICCE’s application to SAMHSA for designation as an evidence-based practice.

These popular e-books are being used in agencies and by practitioners around the world.  Right now, they are also available on a limited edition, searchable CD at 50% off the regular price.  As always, individual clinicians can download the GSRS and begin using it in their work for free.  

Advanced FIT Training - March 2013

Using the GSRS to inform and improve the effectiveness of group therapy will also be a focus on the ICCE Advanced Intensive training scheduled for March 18th-21st in Chicago, Illinois (USA).  Registration is simple and easy.  Click here to get started.  Participants from all over the United States, Canada, Europe and elsewhere are already registered to attend.

Click on the link below to read the validation article on the GSRS:

The Group Session Rating Scale (Quirk, Miller, Duncan, Owen, 2013)

Filed Under: Feedback Informed Treatment - FIT Tagged With: behavioral health, feedback informed treatment, ors, outcome rating scale, session rating scale, srs

Psychotherapy Training: Is it Worth the Bother?

October 29, 2012 By scottdm 2 Comments

Big bucks.  That’s what training in psychotherapy costs.  Take graduate school in psychology as an example.  According to the US Department of Education’s National Center for Education Statistics (NCES), a typical doctoral program takes five years to complete and costs between US$240,000 and $300,000.

Who has that kind of money lying around after completing four years of college?  The solution?  Why, borrow the money, of course!  And students do.  In 2009, the average debt of those doctoral students in psychology who borrowed was a whopping US$88,000–an amount nearly double that of the prior decade.  Well, the training must be pretty darn good to warrant such expenditures–especially when one considers that entry-level salaries are on the decline and not terribly high to start!

Oh well, so much for high hopes.

Here are the facts, as recounted in a recent, concisely written summary of the evidence by John Malouff:

1. Studies comparing treatments delivered by professionals and paraprofessionals either show that paraprofessionals have better outcomes or that there is no difference between the two groups;

2. There is virtually no evidence that supervision of students by professionals leads to better client outcomes (you should have guessed this after reading the first point);

3. There is no evidence that required coursework in graduate programs leads to better client outcomes.

If you are hoping that post doctoral experience will make up for the shortcomings of professional training, well, keep hoping.  In truth, professional experience does not correlate often or significantly with client therapy outcomes.

What can you do?  As Malouff points out, “For accrediting agencies to operate in the realm of principles of evidence-based practice, they must produce evidence…and this evidence needs to show that…training…contribute(s) to psychotherapy outcomes…[and] has positive benefits for future clients of the students” (p. 31).

In my workshops, I often advise therapists to forgo additional training until they determine just how effective they are right now.  Doing otherwise risks perceiving progress where, in fact, none exists.  What golfer would buy new clubs or pursue expensive lessons without first knowing their current handicap?  How will you know whether the training you attend is “worth the bother” if you can’t accurately measure its impact on your performance?

Determining one’s baseline rate of effectiveness is not as hard as it might seem.  Simply download the Outcome Rating Scale and begin using it with your clients.  It’s free.  You can then aggregate and analyze the data yourself or use one of the existing web-based systems (www.fit-outcomes.com or www.myoutcomes.com) to get data regarding your effectiveness in real time.
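For those who choose to aggregate and analyze the data themselves, the arithmetic is straightforward: a baseline effectiveness rate is simply the percentage of closed cases whose first-to-last change meets a reliable-change criterion.  Here is a minimal sketch in Python; the 5-point cutoff and the example scores are hypothetical placeholders for illustration, not published ORS norms.

```python
def effectiveness_rate(cases, cutoff=5.0):
    """Percentage of cases whose last score exceeds the first by at
    least `cutoff` points (a hypothetical reliable-change criterion --
    substitute the published value for whatever measure you use)."""
    if not cases:
        return 0.0
    improved = sum(1 for first, last in cases if last - first >= cutoff)
    return 100.0 * improved / len(cases)

# Hypothetical (first-session, last-session) scores for closed cases
closed_cases = [(15, 28), (22, 25), (10, 19), (30, 29), (12, 20)]
print(effectiveness_rate(closed_cases))  # → 60.0
```

A spreadsheet accomplishes the same thing, of course; the point is only that the computation behind the web-based systems is nothing a practitioner couldn’t do by hand.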

After that, join your colleagues at the upcoming Advanced Intensive Training in Feedback Informed Treatment.  This is an “evidence-based” training event.  You’ll learn:

• How to use outcome management tools (e.g., the ORS) to inform and improve the treatment services you provide;

• Specific skills for determining your overall clinical success rate;

• How to develop an individualized, evidence-based professional development plan for improving your outcome and retention rate.

There’s a special “early bird” rate available for a few more weeks.  Last year, the event filled up several months ahead of time, so don’t wait.

On another note, I just received the schedule for the 2013 Evolution of Psychotherapy conference.  I’m very excited to have been invited once again to the prestigious event and will be bringing the latest information and research on achieving excellence as a behavioral health practitioner.  On that note, the German artist and psychologist Andreas Steiner has created a really cool poster and card game for the event, featuring all of the various presenters.  Here’s the poster.  Next to it is the “Three of Hearts.”  I’m pictured there with two of my colleagues, mentors, and friends, Michael Yapko and Stephen Gilligan:

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, Top Performance Tagged With: Andreas Steiner, evidence based medicine, evidence based practice, Evolution of Psychotherapy conference, john malouff, Michael Yapko, ors, outcome management, outcome measurement, outcome rating scale, paraprofessionals, psychology, psychotherapy, session rating scale, srs, Stephen Gilligan, therapy, Training, National Center for Education Statistics (NCES)

Revolution in Swedish Mental Health Care: Brief Update

May 14, 2012 By scottdm 1 Comment

In April 2010, I blogged about Jan Larsson, a Swedish clinician who works with people on the margins of the mental health system.  Jan was dedicated to seeking feedback, using the ORS and SRS to tailor services to the individuals he met.  It wasn’t easy.  Unlike most, he did not meet his clients in an office or agency setting.  Rather, he met them where they were: in the park, on the streets, and in their one-room apartments.  Critically, wherever they met, Jan had them complete the two measures–“just to be sure,” he said.  No computer.  No iPhone app.  No sophisticated web-based administration system.  With a pair of scissors, he simply trimmed copies of the measures to fit in his pocket-sized appointment book!  I’ve been following his creative application of the scales ever since.

Not surprisingly, Jan was on top of the story I blogged about yesterday regarding changes in the guidelines governing Swedish mental health care practice.  He emailed me as I was writing my post, including the link to the Swedish Radio program about the changes.  Today, he emailed again, sending along links to stories appearing in two Swedish newspapers: Dagens Nyheter and Goteborg Posten.

Thanks Jan!

And to everyone else, please continue to send any new links, videos, and comments.

Filed Under: behavioral health, excellence, Feedback Informed Treatment - FIT, Top Performance Tagged With: continuing education, Dagens Nyheter, evidence based practice, Goteborg Posten, icce, ors, outcome rating scale, session rating scale, srs, sweden

A Handy "Little Helper" for the Outcome Rating Scale: A Freebie from the ACE Conference Committee

April 24, 2012 By scottdm Leave a Comment

This last week, the planning committee for the upcoming Achieving Clinical Excellence (ACE) conference met once again in Horsholm, Denmark.  In the picture, from left to right: Liz Plutt, Bill Andrews, myself, Rick Plutt (Conference Chair), and Bogdan Ion.  Taking the photo was Susanne Bargmann.

The agenda for the three-day event is now set: (1) a one-day pre-conference on feedback-informed treatment (FIT); (2) two days of plenaries and presentations by an international group of clinicians, researchers, and educators.

On day one, the conference kicks off with a keynote address by the world’s “expert on expertise,” Dr. K. Anders Ericsson.  Throughout the day, other speakers will translate Dr. Ericsson’s research into practical steps for enhancing the performance of mental health professionals, agencies, and systems of care.

Day two kicks off with a keynote address by Dr. Robbie Wagner addressing the question, “what barriers stand in the way of improving our effectiveness?”  Once again, the rest of the day will be spent identifying solutions for the problems standing in the way of expertise and expert performance.

We still have several openings for presentations at the conference.  If you have experiences or data related to: (1) measuring outcomes; (2) implementing feedback-informed treatment; or (3) the qualities of super-effective clinicians or treatment approaches, then PLEASE go to the ICCE website and submit a description for consideration.

It’ll be a fun, inspiring, and rewarding three days in Amsterdam.  Don’t miss it!  Register today and get the early bird special, saving you hundreds of dollars!

In the meantime, click on the link below to download a handy little tool for scoring the Outcome and Session Rating Scales.  It’s a combination bookmark and 10-centimeter ruler.

Ace Ruler (PDF Format)
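For those scoring by hand, the arithmetic could hardly be simpler.  The sketch below assumes the standard format of the measures — four 10-centimeter visual analog lines, with the total score being the sum of the four marks measured in centimeters (0–40); the function name and example values are illustrative only.

```python
# Hypothetical sketch of scoring the ORS with a 10 cm ruler. The client marks
# each of the four lines; the clinician measures each mark's distance from the
# left edge in centimeters and sums the four measurements (total range 0-40).

def score_ors(marks_cm):
    """Sum the four line measurements, clamping each to the 0-10 cm range."""
    if len(marks_cm) != 4:
        raise ValueError("The ORS has exactly four scales")
    return sum(min(max(m, 0.0), 10.0) for m in marks_cm)

# Example: marks measured at 6.2, 5.5, 7.0, and 6.3 cm
print(score_ors([6.2, 5.5, 7.0, 6.3]))  # ≈ 25.0
```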

Filed Under: Conferences and Training, excellence Tagged With: cdoi, denmark, feedback informed treatment, icce, ors, outcome rating scale, session rating scale, srs, Therapist Effects

The Outcome and Session Rating Scales: Support Tools

March 30, 2012 By scottdm 6 Comments

Japan, Sweden, Norway, Denmark, Germany, France, Israel, Poland, Chile, Guam, Finland, Hungary, Mexico, Australia, China, the United States…and many, many more.  What do all these countries have in common?  In each, clinicians and agencies are using the ORS and SRS to inform and improve behavioral health services.  Some are using web-based systems for administration, scoring, interpretation, and data aggregation (e.g., myoutcomes.com and fit-outcomes), while many others access paper-and-pencil versions of the measures for free, administering and scoring by hand.

Even if one is not using a web-based system to compare individual client progress to cutting-edge norms, practitioners can still determine simply and easily whether reliable change is being made by using the “Reliable Change Chart” below.  Recall that a change on the ORS is considered reliable when the difference in scores exceeds the contribution attributable to chance, maturation, and measurement error. Feel free to print out the graph and use it in your practice.
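The phrase “exceeds the contribution attributable to chance…and measurement error” is usually formalized with Jacobson and Truax’s reliable change index (RCI).  A minimal sketch of the calculation, using hypothetical values rather than the published ORS norms:

```python
import math

def reliable_change_threshold(sd, reliability, z=1.96):
    """Smallest pre/post difference unlikely (p < .05) to reflect
    measurement error alone (Jacobson-Truax reliable change index)."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2) * sem             # standard error of the difference
    return z * s_diff

def is_reliable_change(pre, post, sd, reliability):
    return abs(post - pre) >= reliable_change_threshold(pre=None, sd=sd, reliability=reliability) if False else \
        abs(post - pre) >= reliable_change_threshold(sd, reliability)

# Hypothetical instrument values: sd = 7.0 points, reliability = .90
print(round(reliable_change_threshold(7.0, 0.90), 2))                      # → 6.14
print(is_reliable_change(pre=18.0, post=26.0, sd=7.0, reliability=0.90))   # → True
```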

To learn how to get the most out of the measures, be sure and download the six FIT Treatment and Training Manuals.  The six manuals cover every aspect of feedback-informed practice including: empirical foundations, basic and advanced applications (including FIT in groups, couples, and with special populations), supervision, data analysis, and agency implementation. Each manual is written in clear, step-by-step, non-technical language, and is specifically designed to help practitioners and agencies integrate FIT into routine clinical practice. Indeed, the manuals were submitted as part of ICCE’s application for consideration of FIT as an “evidence-based practice” by the National Registry of Evidence-Based Programs and Practices.

ORS Reliable Change Chart

Filed Under: Behavioral Health, excellence, Feedback Informed Treatment - FIT Tagged With: cdoi, Hypertension, icce, NREPP, ors, outcome rating scale, SAMHSA, session rating scale, srs

Looking Back, Looking Forward

January 6, 2012 By scottdm Leave a Comment

Bidding goodbye to last year and welcoming the new always puts me in a reflective frame of mind.  How did my life, work, and relationships go?  What are my hopes for the future?

Just two short years ago, together with colleagues from around the world, the International Center for Clinical Excellence (ICCE) was launched.  Today, the ICCE is the largest, global, web-based community of providers, educators, researchers, and policy makers dedicated to improving the quality and outcome of behavioral health services.  Clinicians can choose to participate in any of the 100-plus forums, create their own discussion group, immerse themselves in a library of documents and how-to videos, and consult directly with peers. Membership costs nothing and the site is free of advertising.  With just a few clicks, practitioners are able to plug into a group of like-minded clinicians whose sole reason for being on the site is to raise everyone’s performance level.  I have many people to thank for the success of ICCE: senior associates and trainers, our community manager Susanne Bargmann, director of training Julie Tilsen, and our tech wizard Enda Madden.

As membership in ICCE has grown from a few hundred to well over 3000, many in the community have worked together to translate research on excellence into standards for improving clinical practice.  Routine outcome monitoring (ROM) has grown in popularity around the world.  As a result, new measures and trainings have proliferated.  In order to ensure quality and consistency, a task force was convened within ICCE in 2010 to develop a list of “Core Competencies”—a document establishing the empirical and practice foundations for outcome-informed clinical work.  In 2011, the ICCE Core Competencies were used to develop and standardize the curricula for the “Advanced Intensive” and “Training-of-Trainers” workshops as well as the exam all attendees must pass to achieve certification as an ICCE Trainer.   As if these accomplishments were not enough, a small cadre of ICCE associates banded together to compose the Feedback Informed Treatment and Training Manuals—six practical, “how-to” volumes covering everything from empirical foundations to implementation.  None of this would have been possible without the tireless contributions of Bob Bertolino, Jason Seidel, Cynthia Maeschalck, Rob Axsen, Susanne Bargmann, Bill Robinson, Robbie Wagner, and Julie Tilsen.

Looking back, I feel tremendous gratitude–both for the members, associates, and trainers of ICCE as well as the many people who have supported my professional journey.  This year, two of those mentors passed away: Dick Fisch and James Hillman.   During my graduate school years, I read James Hillman’s book, Suicide and the Soul.  Many years later, I had the opportunity to present alongside him at the “Evolution of Psychotherapy” conference.  Dick, together with his colleagues from MRI, had a great influence on my work, especially during the early years when I was in Milwaukee with Insoo Berg and Steve de Shazer doing research and writing about brief therapy.  Thinking about Dick reminded me of two other teachers and mentors from that period in my life; namely, John Weakland and Jay Haley.


Looking forward, I am filled with hope and high expectations.  The “Advanced Intensive” training scheduled for March 19-22nd is booked to capacity—not a single spot left.  Registrations for this summer’s “Training of Trainers” course are coming in at a record pace (don’t wait if you are thinking about joining me, Cynthia and Rob).  Currently, I am awaiting word from the National Registry of Evidence Based Programs and Practices (NREPP) formally recognizing “Feedback Informed Treatment” (FIT) as an evidence-based approach.  The application process has been both rigorous and time-consuming.  It’s worth it though.  Approval by this department within the federal government would instantly raise awareness of FIT as well as increase access to funding for implementation.  Keep your fingers crossed!

There’s so much more:

  • Professor Jan Blomqvist, a researcher at the Center for Alcohol and Drug Research at Stockholm University (SoRAD), launched what will be the largest, independent evaluation of feedback informed treatment to date, involving 80+ clinicians and hundreds of clients located throughout Sweden.   I provided the initial training to clinicians in October of last year.  ICCE Certified Trainers Gunnar Lindfeldt and Magnus Johansson are providing ongoing logistical and supervisory support.
  • The most sophisticated and empirically robust interpretive algorithms for the Outcome Rating Scale (based on a sample of 427,744 administrations of the ORS, in 95,478 unique episodes of care, provided by 2,354 different clinicians) have been developed and are now available for integration into software and web-based applications.  Unlike the prior formulas–which plotted the average progress of all consumers, successful and not–the new equations provide benchmarks for comparing individual consumer progress to both successful and unsuccessful treatment episodes.
  • The keynote speakers and venue for the Second Achieving Clinical Excellence Conference have been secured.  We’ll be meeting at one of the nicest hotels in Amsterdam, Holland, May 16-18, 2013.  Thanks go to the planning committee: Bill Andrews, Susanne Bargmann, Liz Plutt, Rick Plutt, Tony Jordan, and Bogdan Ion.  Please visit the conference website and submit a proposal for a workshop or presentation.
  • Finally, I’ve been asked to deliver the lunchtime keynote at the upcoming Psychotherapy Networker Conference scheduled on March 23, 2012.  The topic?  Achieving excellence as a behavioral health practitioner.  Last year, my colleague Mark Hubble and I published the lead article in the May-June issue of the magazine, describing the latest research on top performing clinicians.  I’m deeply honored by the opportunity to speak at this prestigious event.
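The difference between the old averaged trajectories and the new success/failure benchmarks mentioned above can be illustrated with a toy sketch.  All trajectory values and names below are invented placeholders, not the actual algorithms:

```python
# Toy illustration of trajectory benchmarking: instead of comparing a client
# to a single averaged trajectory, compare to separate expected trajectories
# for successful and unsuccessful treatment episodes. The values are
# hypothetical placeholders, not the published ORS equations.

SUCCESSFUL = {1: 19.0, 2: 22.5, 3: 25.0, 4: 27.0, 5: 28.5}    # hypothetical mean ORS by session
UNSUCCESSFUL = {1: 19.0, 2: 19.5, 3: 20.0, 4: 20.0, 5: 20.5}  # hypothetical

def closer_benchmark(session, score):
    """Label which hypothetical trajectory the client's score tracks more closely."""
    d_success = abs(score - SUCCESSFUL[session])
    d_failure = abs(score - UNSUCCESSFUL[session])
    return "on track" if d_success <= d_failure else "at risk"

print(closer_benchmark(session=3, score=24.0))  # → "on track"
print(closer_benchmark(session=4, score=21.0))  # → "at risk"
```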

More coming in the weeks ahead.  Until then, look forward to connecting on ICCE.

Filed Under: Behavioral Health, Conferences and Training, excellence, Feedback Informed Treatment - FIT, ICCE, PCOMS Tagged With: cdoi, feedback informed treatment, HHS, Insoo Berg, NREPP, ors, outcome rating scale, session rating scale, srs, Steve de Shazer

Yes, More Evidence: Spanish version of the ORS Validated by Chilean Researchers

June 16, 2011 By scottdm Leave a Comment

Last week, Chile.  This week, Perth, Australia.  Yesterday, I landed in Sydney following a 30 hour flight from the United States.  I managed to catch the last flight out to Perth before all air travel was grounded due to another ash cloud–this time coming from Chile!  I say “another” as just over a year ago, I was trapped behind the cloud of ash from the Icelandic eruption!  So far so good.  Today, I’ll spend the day talking about “excellence” in behavioral healthcare.

Before heading out to teach for the day, I wanted to upload a report from a recent research project conducted in Chile investigating the statistical properties of the ORS.  I’ve attached the report here so you can read for yourself.  That said, let me present the highlights:

  • The Spanish version of the ORS is reliable (alpha coefficients .90-.95).
  • The Spanish version of the ORS shows good construct and convergent validity (correlations with the OQ-45 of .50 and .58).
  • The Spanish version of the ORS is sensitive to change in a treated population.

The authors of the report, which was presented at the Society for Psychotherapy Research meeting, conclude, “The ORS is a valid instrument to be used with the Chilean population.”
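For readers curious about what an “alpha coefficient” like the .90-.95 figures above actually measures, here is a minimal sketch of Cronbach’s alpha in plain Python.  The response data are invented placeholders, not the Chilean sample:

```python
# Cronbach's alpha: internal consistency of a multi-item scale, computed from
# item variances and the variance of respondents' total scores.

def cronbach_alpha(items):
    """`items` is a list of per-item score lists
    (one inner list per scale item, one entry per respondent)."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Four hypothetical items scored by four respondents
toy = [[4, 5, 6, 7], [5, 5, 7, 8], [4, 6, 6, 8], [5, 6, 7, 8]]
print(round(cronbach_alpha(toy), 3))  # high internal consistency (≈ .98)
```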

As asked in my blogpost last week, “how much more evidence is needed?”  Now, more than ever, clinicians need simple, valid, reliable, and feasible tools for evaluating the process and outcome of behavioral healthcare.  The ORS and SRS FIT the bill!

Filed Under: FIT, PCOMS, Practice Based Evidence Tagged With: behavioral health, cdoi, Chile, evidence based practice, mental health, ors, outcome rating scale, session rating scale, srs

How Much More Evidence Is Needed? A New Meta-Analysis on Feedback-Informed Treatment

June 9, 2011 By scottdm 1 Comment

Received an email from friend and colleague John Norcross, Ph.D.  Attached were the results of a meta-analysis completed by Michael Lambert and Kenichi Shimokawa on Feedback-Informed Treatment (FIT) which will appear in the second edition of his book, Psychotherapy Relationships that Work (Oxford University Press).  For those who cannot wait, you can access the same results in the latest issue of the APA journal Psychotherapy (Volume 48, Number 1, March 2011, pages 72-79).

Briefly, the chapter begins with a review of the literature on feedback–a body of evidence that, by the way, dates back to the 1930s and has always shown small to moderate effects on the outcome of treatment.  In reviewing studies specific to the ORS and SRS, the authors conclude, “the results indicated that those in the feedback group ha[ve] 3.5 times higher odds of experiencing reliable change while having less than half the odds of experiencing deterioration.”  Additionally, Lambert and Shimokawa report few if any meaningful differences between therapies informed by the ORS and SRS and those using the well-established and widely used Outcome Questionnaire (OQ).   Finally, and importantly, the authors note that in “busy practices…the brevity of the [ORS and SRS]…expedite and ease practical difficulties,” thereby decreasing barriers to implementation.

How much more evidence will it take before feedback informed treatment becomes standard practice?  All of the available data is summarized in the materials below.

Measures and Feedback January 2011

View more documents from Scott Miller

Be sure and join other clinicians and researchers who are discussing FIT at the International Center for Clinical Excellence–the largest, free, web-based community dedicated to improving the quality and outcome of behavioral health.

Finally, if you are thinking about or in the process of becoming FIT in your agency or practice, please join us at the upcoming “Training of Trainers” workshop held the first week of August.  Registration is limited to 35 participants and we have only a few spots left!  Here’s what attendees from last year had to say about the event…

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based practice, icce, ors, outcome rating scale, session rating scale, srs

Getting FIT: The Advanced Intensive Training

January 19, 2011 By scottdm Leave a Comment

Dateline: January 19, 2011
Buffalo, New York

The New Year is here and travel/training season is in full swing.  Last week, I was in Ohio and Virginia.  This week New York and Idaho (keep your weather fingers crossed, it’s going to be dicey getting from here to there and home again).

Interest in “Feedback Informed Treatment” continues to grow.  Agencies across the United States and abroad–as my travel schedule attests–are implementing the ORS and SRS in routine clinical practice.  Clinicians are finding the support they need on the International Center for Clinical Excellence web-based community.  As I blogged about a while back, the ICCE is the largest and most diverse group of practitioners working to improve the quality and outcome of behavioral health services.  Many will soon be joining me in Chicago for the 2011 “Advanced Intensive” training.  Once again, clinicians from all over the world will be in attendance–Sweden, Holland, England, Australia and so on.  Interest is high as participants receive a thorough, state-of-the-art grounding in the principles and practice of FIT.  I look forward to meeting everyone soon.

Last summer, I videoblogged about the event.  Ah, summer!   With everything my co-teacher, psychologist Susanne Bargmann, and I have planned, we promise a warm and rewarding event.

Filed Under: Behavioral Health, Conferences and Training, Feedback Informed Treatment - FIT Tagged With: feedback informed treatment, icce, ors, outcome rating scale, session rating scale, srs, Training

Hope Transcends: Learning from our Clients

July 30, 2010 By scottdm Leave a Comment

“Hope Transcends” was the theme of the 39th Annual Summer Institute on Substance Abuse and Mental Health held in Newark, Delaware this last week.  I had the honor of working with 60+ clinicians, agency managers, peer supports, and consumers of mental health services presenting a two-day, intensive training on “feedback-informed clinical work.”  I met so many talented and dedicated people over the two days and even had a chance to reconnect with a number of folks I’d met at previous trainings– both at the Institute and elsewhere.

One person I knew but never had the privilege of meeting before was psychologist Ronald Bassman.  A few years back, he’d written a chapter that was included in my book, The Heroic Client.  His topic at the Summer Institute was similar to what he’d written for the book: harmful treatment.  Research dating back decades documents that approximately 10% of people deteriorate while in psychotherapy.  The same body of evidence shows that clinicians are not adept at identifying: (a) people who are likely to drop out of care; or (b) people who are deteriorating while in care.

Anyway, you can read about Ron on his website or pick up his gripping book A Fight to Be.  Briefly, at age 22 Ron was committed to a psychiatric hospital.  Over the next several years, he was diagnosed with paranoid schizophrenia and forcefully subjected to a series of humiliating, painful, degrading and ultimately unhelpful “treatments.”  Eventually, he escaped his own and the system’s madness and became a passionate advocate for improving mental health services.  His message is simple: “we can and must do better.”  And, he argues persuasively, the process begins with building better partnerships with consumers.

One way to build bridges with consumers is to routinely seek their feedback regarding the status of the therapeutic relationship and the progress of any services offered.  Indeed, the definition of “evidence-based practice” formally adopted by the American Psychological Association mandates that the clinician “monitor…progress…[and] If progress is not proceeding adequately…alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or the implementation of the goals of treatment)” (pp. 276-277, APA, 2006).  Research reviewed in detail on this blog documents significant improvement in both retention and outcome when clinicians use the Outcome and Session Rating Scales to solicit feedback from consumers.  Hope really does transcend.  Thank you Ron and thank you clinicians and organizers at the Institute.

And now, just for fun.  Check out these two new videos:


Filed Under: Behavioral Health, excellence, Feedback, Feedback Informed Treatment - FIT Tagged With: American Psychological Society APA, cdoi, feedback informed treatment, meta-analysis, ors, out rating scale, Outcome, psychology, public behavioral health, randomized clinical trial, schizophrenia, session rating scale, srs, the heroic client

Finding Feasible Measures for Practice-Based Evidence

May 4, 2010 By scottdm Leave a Comment

Let’s face it.  Clinicians are tired.  Tired of paperwork (electronic or otherwise).  When I’m out and about training–which is every week, by the way–and encouraging therapists to monitor and measure outcomes in their daily work, few disagree in principle.  The pain is readily apparent, however, the minute the paper version of the Outcome Rating Scale flashes on the screen of my PowerPoint presentation.

It’s not uncommon nowadays for clinicians to spend 30-50% of their time completing intake, assessment, treatment planning, insurance, and other regulatory forms.  Recently, I was in Buffalo, New York working with a talented team of children’s mental health professionals.  It was not uncommon, I learned, to spend most of two outpatient visits doing the required paperwork.  When one considers that the modal number of sessions consumers attend is 1 and the average is approximately 5, it’s hard not to conclude that something is seriously amiss.

Much of the “fear and loathing” dissipates when I talk about the time it usually takes to complete the Outcome and Session Rating Scales.  On average, filling out and scoring the measures takes about a minute apiece.  Back in January, I blogged about research on the ORS and SRS, including a summary in PDF format of all studies to date.  The studies make clear that the scales are valid and reliable.  Most important, however, for day-to-day clinical practice, the ORS and SRS are also the most clinically feasible measures available.

Unfortunately, many of the measures currently in use were never designed for routine clinical practice–certainly few therapists were consulted.  In order to increase “compliance” with such time-consuming outcome tools, many agencies advise clinicians to complete the scales only occasionally (e.g., at “prime number” sessions [5, 7, 11, and so on]) or only at the beginning and end of treatment.  The very silliness of such ideas will be immediately apparent to anyone who has ever actually conducted treatment.  Who can predict a consumer’s last session?  Can you imagine a similar policy ever flying in medicine?  Hey Doc, just measure your patient’s heart rate at the beginning and end of the surgery!  In between? Fahgetaboutit.  Moreover, as I blogged about from behind the Icelandic ash plume, the latest research strongly favors routine measurement and feedback.  In real-world clinical settings, feasibility is every bit as important as reliability and validity.  Agency managers, regulators, and policy makers ignore it at their own (and their data’s) peril.

How did the ORS and SRS end up so brief and without any numbers?  When asked at workshops, I usually respond, “That’s an interesting story.”  And then continue, “I was in Israel teaching.  I’d just finished a two day workshop on ‘What Works.'” (At the time, I was using and recommending the 10-item SRS and 45-item OQ).

“The audience was filing out of the auditorium and I was shutting down my laptop when the sponsor approached the dais.  ‘Scott,’ she said, ‘one of the participants has a last question…if you don’t mind.'”

“Of course not,” I immediately replied.

“His name is Haim Omer.  Do you know of him?”


Dr. Haim Omer

“Know him?” I responded, “I’m a huge fan!”  And then, feeling a bit weak in the knees asked, “Has he been here the w h o l e time?”

Haim was as gracious as ever when he finally made it to the front of the room.  “Great workshop, Scott.  I’ve not laughed so hard in a long time!”  But then he asked me a very pointed question.  “Scott,” he said and then paused before continuing, “you complained a bit about the length of the two measures you are using.  Why don’t you use a visual analog scale?”

“That’s simple Haim,” I responded, “It’s because I don’t know what a visual analog measure is!”

Haim described such scales in detail, gave me some examples (e.g., smiley and frowny faces), and even provided references.  My review on the flight home reminded me of a simple neuropsychological assessment scale I used on internship called “The Line Bisection Task”–literally a straight line (a measure developed by my neuropsych supervisor, Dr. Tom Schenkenberg).   And the rest is, as they say, history.

Filed Under: deliberate practice, excellence, Feedback Informed Treatment - FIT Tagged With: continuing education, Dr. Haim Omer, Dr. Tom Schenkenberg, evidence based practice, icce, ors, outcome rating scale, session rating scale, srs

Bringing up Baseline: The Effect of Alliance and Outcome Feedback on Clinical Performance

April 29, 2010 By scottdm 1 Comment

Not long ago, my friend and colleague Dr. Rick Kamins was on vacation in Hawaii.  He was walking along the streets of a small village, enjoying the warm weather and tropical breezes, when the sign on a storefront caught his eye.  Healing Arts Alliance, it read.  The proprietor?  None other than, “Scott Miller, Master of Oriental Medicine.”

“With all the talking you do about the alliance,” Rick emailed me later, “I wondered, could it be the same guy?!”

I responded, “Ha, the story of my life.  You go to Hawaii and all I get is this photo!”

Seriously though, I do spend a fair bit of time when I’m out and about talking about the therapeutic alliance.  As reviewed in the revised edition of The Heart and Soul of Change there are over 1100 studies documenting the importance of the alliance in successful psychotherapy.  Simply put, it is the most evidence-based concept in the treatment literature.

At the same time, whenever I’m presenting, I go to great lengths to point out that I’m not teaching an “alliance-based approach” to treatment.  Indeed–and this can be confusing–I’m not teaching any treatment approach whatsoever.  Why would I?  The research literature is clear: all approaches work equally well.  So, when it comes to method, I recommend that clinicians choose the one that fits their core values and preferences.  Critically, however, the approach must also fit and work for the person in care–and this is where research on the alliance and feedback can inform and improve retention and outcome.


Lynn D. Johnson, Ph.D.

Back in 1994, my long time mentor Dr. Lynn Johnson encouraged me to begin using a simple scale he’d developed.  It was called…(drum roll here)…”The Session Rating Scale!”  The brief, 10-item measure was specifically designed to obtain feedback on a session by session basis regarding the quality of the therapeutic alliance.  “Regular use of [such] scales,” he argued in his book Psychotherapy in the Age of Accountability, “enables patients to be the judge of the…relationship.  The approach is…egalitarian and respectful, supporting and empowering the client” (Johnson, 1995, p. 44).  If you look at the current version of the SRS, you will see Lynn is listed on the copyright line–as Paul Harvey would say, “And now you know…the rest of the story.”  Soon, I’ll tell you how the measure went from a 10-item, Likert scale to a 4-item visual analog scale.

Anyway, some 17 years later, research has now firmly validated Lynn’s idea: formally seeking feedback improves both retention and outcome in behavioral health.  How does it work?  Unfortunately, science, as Malcolm Gladwell astutely observes, “all too often produces progress in advance of understanding.”  That said, recent evidence indicates that routinely monitoring outcome and alliance establishes and serves to maintain a higher level of baseline performance.   In other words, regularly seeking feedback helps clinicians attend to core therapeutic principles and processes easily lost in the complex give-and-take of the treatment hour.

Such findings are echoed in the research literature on expertise which shows that superior performers across a variety of domains (physics, computer programming, medicine, etc.) spend more time than average performers reviewing basic core principles and practice.


At an intensive training in Antwerp, Belgium

The implications for improving practice are clear: before reaching for the stars, we should attend to the ground we stand on.  It’s so simple, some might think it stupid.  How can a four-item scale given at the end of a session improve anything?  And yet, in medicine, construction, and flight training, there is a growing reliance on such “checklists” to ensure that proven steps to success are not overlooked.  Atul Gawande reviews this practice in his new and highly readable book, The Checklist Manifesto: How to Get Things Right.  Thanks go to Dan Buccino, member of the International Center for Clinical Excellence, for bringing this work to my attention.  (By the way, you can connect with Dan and Lynn in the ICCE community.  If you’re not a member, click here to join.  It’s free).

The only question that remains is, I suppose, with all the workshops and training on “advanced methods and specialized techniques,” will practitioners be interested in bringing up baseline?

Filed Under: Feedback Informed Treatment - FIT Tagged With: icce, Malcolm Gladwell, ors, outcome rating scale, session rating scale, srs

Where Necessity is the Mother of Invention: Forming Alliances with Consumers on the Margins

April 11, 2010 By scottdm 3 Comments

Spring of last year, I traveled to Gothenburg, Sweden to provide training for GCK–a top-notch organization, led by Ulla Hansson and Ulla Westling-Missios, providing cutting-edge training on “what works” in psychotherapy.  I’ll be back this week doing an open workshop and an advanced training for the group.

While I’m always excited to be out and about traveling and training, being in Sweden is special for me.  It’s like my second home.  My family roots are Swedish and Danish and, it just so happens, I speak the language.  Indeed, I lived and worked in the country for two years back in the late seventies.  If you’ve never been, be sure and put it on your short list of places to visit…

AND IMPORTANTLY, go in the Summer!  (Actually, the photos above are from the famous “Ice Hotel”–that’s right, a hotel made completely of ice.  The lobby, bar, chairs, beds.  Everything!  If you find yourself in Sweden during the winter months, it’s a must see.  I promise you’ll never forget the experience).

Anyway, the last time I was in Gothenburg, I met a clinician whose efforts to deliver consumer-driven and outcome-informed services to people on the margins of society were truly inspiring.   During one of the breaks at the training, therapist Jan Larsson introduced himself, told me he had been reading my books and articles, and then showed me how he managed to seek and obtain feedback from the people he worked with on the streets.  “My work does not look like ‘traditional’ therapeutic work since I do not meet clients at an office.  Rather, I meet them where they live: at home, on a bench in the park, or sitting in the library or local activity center.”

Most of Jan’s clients have been involved with the “psychiatric system” for years and yet, he says, continue to struggle and suffer with many of the same problems they entered the system with years earlier.  “Oftentimes,” he observed, “a ‘treatment plan’ has been developed for the person that has little to do with what they think or want.”

So Jan began asking.  And each time they met, they also completed the ORS and SRS–“just to be sure,” he said.  No computer.  No iPhone app.  No sophisticated web-based administration system.  With a pair of scissors, he simply trimmed copies of the measures to fit in his pocket-sized appointment book.

His experience thus far?  In Swedish, Jan says, “Det finns en livserfarenhet hos klienterna som bara väntar på att bli upptäckt och bli lyssnad till. Klienterna är så mycket mer än en diagnos. Frågan är om vi är nyfikna på den eftersom diagnosen har stulit deras livberättelse.”  Translated: “There is a wealth of life experience in clients that is just waiting to be discovered and listened to.  Clients are so much more than a diagnosis.  The question is whether we are curious about it, since the diagnosis has stolen their life story.”

I look forward to catching up with Jan and the crew at GCK this coming week.  I’ll also be posting interviews with Ulla and Ulla as well as ICCE certified trainers Gun-Eva Langdahl (who I’ll be working with in Skelleftea) and Gunnar Lindfeldt (who I’ll be meeting in Stockholm).  In the meantime, let me post several articles he sent by Swedish researcher Alain Topor on developing helpful relationships with people on the margins.  Dr. Topor was talking about the “recovery model” among people considered “severely and persistently mentally ill” long before it became popular here in the States. Together with others, such as psychologist Jan Blomqvist (who I blogged about late last year), Alain’s work is putting the consumer at the center of service delivery.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT Tagged With: evidence based practice, Hypertension, Jan Blomqvist, ors, outcome rating scale, Pharmacology, psychotherapy, randomized clinical trial, recovery model, session rating scale, srs, sweden, Training

Improving Outcomes in the Treatment of Obesity via Practice-Based Evidence: Weight Loss, Nutrition, and Work Productivity

April 9, 2010 By scottdm 4 Comments

Obesity is a large and growing problem in the United States and elsewhere.  Data gathered by the National Center for Health Statistics indicate that 33% of Americans are obese.  When overweight people are added to the mix, the figure climbs to a staggering 66%!   The problem is not likely to go away soon or on its own, as the same figures apply to children.

Researchers estimate that weight problems are responsible for over 300,000 deaths annually and account for 12% of healthcare costs, or $100 billion–that’s right, $100,000,000,000–in the United States alone.   The overweight and obese have higher incidences of arthritis, breast cancer, heart disease, colorectal cancer, diabetes, endometrial cancer, gallbladder disease, hypertension, liver disease, back pain, sleeping problems, and stroke–not to mention the tremendous emotional, relational, and social costs.  The data are clear: the overweight are the target of discrimination in education, healthcare, and employment.  A study by Brownell and Puhl (2003), for example, found that: (1) a significant percentage of healthcare professionals admit to feeling “repulsed” by obese persons, even among those who specialize in bariatric treatment; (2) parents provide less college support to their overweight children compared to their “thin” children; and (3) 87% of obese individuals reported that weight prevented them from being hired for a job.

Sadly, available evidence indicates that while weight problems are “among the easiest conditions to recognize,” they remain one of the “most difficult to treat.”  Weight loss programs abound.  When was the last time you watched television and didn’t see an ad for a diet pill, program, or exercise machine?  Many work.  Few, however, lead to lasting change.

What might help?

More than a decade ago, I met Dr. Paul Faulkner, the founder and then Chief Executive Officer of Resources for Living (RFL), an innovative employee assistance program located in Austin, Texas.  I was teaching a week-long course on outcome-informed work at the Cape Cod Institute in Eastham, Massachusetts.  Paul had long searched for a way of improving outcomes and service delivery that could simultaneously be used to provide evidence of the value of treatment to purchasers–in the case of RFL, the large, multinational companies that were paying him to manage their employee assistance programs.  Thus began a long relationship between me and the management and clinical staff of RFL.  I traveled to Austin dozens of times, providing training and consultation as well as setting up the original ORS/SRS feedback system, known as ALERT, which is still in use at the organization today.  All of the original reliability, validity, norming, and response-trajectory research was done together with the crew at RFL.

Along the way, RFL expanded services to disease management, including depression, chronic obstructive pulmonary disease, diabetes, and obesity.  The "weight management" program delivered coaching and nutritional consultation via the telephone, informed by ongoing measurement of outcomes and the therapeutic alliance using the ORS and SRS.  The results are impressive.  A study by Ryan Sorrell, a clinician and researcher at RFL, found that the program and feedback led not only to weight loss, but also to significant improvements in distress, healthy eating behaviors (70%), exercise (65%), and presenteeism on the job (64%)–the latter being critical to the employers paying for the service.

Such research adds to the growing body of literature documenting the importance of “practice-based” evidence, making clear that finding the “right” or “evidence-based” approach for obesity (or any problem for that matter) is less important than finding out “what works” for each person in need of help.  With challenging, “life-style” problems, this means using ongoing feedback to inform whatever services may be deemed appropriate or necessary.  Doing so not only leads to better outcomes, but also provides real-time, real-world evidence of return on investment for those footing the bill.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, cognitive-behavioral therapy, conferences, continuing education, diabetes, disease management, Dr. Paul Faulkner, evidence based medicine, evidence based practice, Hypertension, medicine, obesity, ors, outcome rating scale, practice-based evidence, public behavioral health, randomized clinical trial, session rating scale, srs, Training

Behavioral Healthcare in Holland: The Turn Away from the Single-payer, Government-Based Reimbursement System

January 26, 2010 By scottdm Leave a Comment

Several years ago I was contacted by a group of practitioners located in the largest city in the north of the Netherlands–actually the capital of the province known as Groningen.  The "Platform," as the group is known, wondered if I'd be willing to come and speak at one of their upcoming conferences.  The practice environment was undergoing dramatic change, the group's leadership (Dorti Been & Pico Tuene) informed me: Holland would soon be switching from a government-based to a private insurance reimbursement system.  Dutch practitioners were "thinking ahead," preparing for the change–in particular, seeking to understand what the research literature indicates works in clinical practice as well as learning methods for documenting and improving the outcome of treatment.

I was then, and remain now, deeply impressed with the abilities and dedication of Dutch practitioners.  During that visit to Groningen, and the many that have followed (to Amsterdam, Rotterdam, Beilen, etc.), it's become clear that clinicians in the Netherlands are determined to lead rather than be led.  I've been asked to meet with university professors, practitioner organizations, training coordinators, and insurance company executives.  In a very short period of time, two Dutch therapists–physician Flip Van Oenen and psychologist Mark Crouzen–have completed the "Training of Trainers" course and become recognized trainers and associates for the International Center for Clinical Excellence.  And finally, a study will soon be published showing sound psychometric properties of the Dutch translations of the ORS and SRS.

I’ve also been working closely with the Dutch company Reflectum–a group dedicated to supporting outcome-informed healthcare and clinical excellence.  Briefly, Reflectum has organized several conferences and expert meetings between me and clinicians, agency managers, and insurance companies.  One thing is for sure: we will be working closely together to train a network of trainers and consultants to promote, support, and train agencies and practitioners in outcome-informed methods in order to meet the demands of the changing practice climate.

Check out the video below, filmed at Schiphol airport during one of my recent trips to Holland:

Filed Under: Behavioral Health, CDOI, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: brief therapy, cdoi, common factors, holland, meta-analysis, ors, outcome rating scale, public behavioral health, reflectum, session rating scale, srs

Accountability in Behavioral Health: Steps for Dealing with Cutbacks, Shortfalls, and Tough Economic Conditions

January 25, 2010 By scottdm 3 Comments

As anyone who follows me on Facebook knows, I get around.  In the past few months, I've visited Australia, Norway, Sweden, and Denmark (to name but a few countries), as well as criss-crossing the United States.  If I were asked to sum up the state of public behavioral health agencies in a single word, that word–with very few exceptions–would be: desperate.  Between unfunded mandates and funding cutbacks, agencies are struggling.

Not long ago, I blogged about the challenges facing agencies and providers in Ohio.  In addition to reductions in staffing, those in public behavioral health are dealing with increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  The one bright spot in this otherwise frightening climate is outcomes.  Several counties in Ohio have adopted the ORS and SRS and have been using them to improve the effectiveness and efficiency of behavioral health services.

I’ve been working with the managers and providers in both Marion and Crawford counties for a little over two years.  Last year, the agencies endured significant cuts in funding.  As a result, they were forced to eliminate a substantial number of positions.  Needless to say, it was a painful process with no upsides–except that, as a result of using the measures, the dedicated providers had so improved the effectiveness and efficiency of treatment that they were able to absorb the loss of staff without having to cut services to clients.

The agencies cite four main findings resulting from the work we’ve done together over the last two years.  In their own words:

  1.  Use of FIT has enabled us to be more efficient, which is particularly important given Ohio's economic picture and the impact of State budget cuts. Specifically, FIT is enabling service providers and supervisors to identify consumers much earlier who are not progressing in the treatment process. This allows us to change course sooner when treatment is not working, to know if changes work, to identify consumers in need of a different level of care, etc.  FIT also provides data on which the provider and consumer can base decisions about the intensity of treatment and treatment continuation (i.e., when to extend time between services or when the episode of service should end). In short, our staff and consumers are spending much less time "spinning their wheels" in unproductive activities.  As a result, we have noticed more "planned discharges" versus clients just dropping out of treatment.
  2. FIT provides aggregate effect size data for individual service providers, for programs, and for services, based on data from a valid and reliable outcome scale. Effect sizes are calculated by comparing our outcome data to a large national database. Progress achieved by individual consumers is also compared to this national database. For the first time, we can "prove" to referral sources and funding sources that our treatment works, using data from a valid and reliable scale. Effect size data also have numerous implications for supervision, and supervision sessions are more focused and productive.
  3.  Use of the SRS (session rating scale) is helping providers attend to the therapeutic alliance in a much more deliberate manner. As a result, we have noticed increased collaboration between consumer and provider, less resistance and more partnership, and greater openness from consumers about their treatment experience. Consumer satisfaction surveying has revealed increased satisfaction by consumers. The implications for consumers keeping appointments and actually implementing what is learned in treatment are clear. The Session Rating Scale is also yielding some unexpected feedback from clients and has caused us to rethink what we assume about clients and their treatment experience.
  4. Service providers, especially those who are less experienced, appear to be more confident and purposeful when providing services. The data provide a basis for clinical work and there is much less "flying by the seat of their pants."

Inspiring, eh?  And now, listen to Community Counseling Services Director Bob Moneysmith and Crawford-Marion ADAMH Board Associate Director Shirley Galdys describe the implementation:
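For readers curious about the arithmetic behind the effect size comparisons mentioned in point 2, here is a minimal sketch of how a pre/post effect size can be computed and compared against a benchmark. All scores, names, and the benchmark value below are made up for illustration; the actual computations and national norms used in FIT systems may differ:

```python
# Illustrative only: a pre/post effect size (Cohen's d) for a set of ORS
# scores, compared to a hypothetical national benchmark value.
from statistics import mean, stdev

def effect_size(pre, post):
    """Cohen's d: mean pre-to-post change divided by the SD of intake scores."""
    change = [b - a for a, b in zip(pre, post)]
    return mean(change) / stdev(pre)

# Hypothetical intake and final ORS scores (0-40 scale) for eight clients.
pre_ors  = [15.2, 18.0, 20.4, 12.9, 22.1, 17.5, 19.8, 14.3]
post_ors = [24.6, 25.1, 26.0, 19.4, 27.3, 23.0, 25.8, 21.2]

d_local = effect_size(pre_ors, post_ors)
NATIONAL_BENCHMARK_D = 0.8  # made-up reference value, not a published norm

print(f"local effect size d = {d_local:.2f}")
print("at or above benchmark" if d_local >= NATIONAL_BENCHMARK_D
      else "below benchmark")
```

The same comparison can be run per provider or per program by grouping the score pairs accordingly; the agencies quoted above describe doing exactly that at the provider, program, and service levels.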

Filed Under: Behavioral Health Tagged With: cdoi, evidence based practice, icce, ors, outcome rating scale, public behavioral health, research, session rating scale, srs

Research on the Outcome Rating Scale, Session Rating Scale & Feedback

January 7, 2010 By scottdm Leave a Comment

“How valid and reliable are the ORS and SRS?”  “What do the data say about the impact of routine measurement and feedback on outcome and retention in behavioral health?”  “Are the ORS and SRS ‘evidence-based?'”

These and other questions regarding the evidence supporting the ORS, SRS, and feedback are becoming increasingly common in the workshops I’m teaching in the U.S. and abroad.

As indicated in my December 24th blogpost, routine outcome monitoring (PROMS) has even been endorsed by “specific treatments for specific disorders” proponent David Barlow, Ph.D., who stated unequivocally that “all therapists would soon be required to measure and monitor the outcome of their clinical work.”  Clearly, the time has come for all behavioral health practitioners to be aware of the research regarding measurement and feedback.

Over the holidays, I updated a summary of the data to date that has long been available to trainers and associates of the International Center for Clinical Excellence.  The PDF reviews all of the research on the psychometric properties of the Outcome and Session Rating Scales, as well as the studies using these and other formal measures of progress and the therapeutic relationship to improve outcome and retention in behavioral health services.  The topic is so important that I've decided to make the document available to everyone.  Feel free to distribute the file to any and all colleagues interested in staying up to date on this emerging mega-trend in clinical practice.

Measures And Feedback from Scott Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, continuing education, david barlow, evidence based medicine, evidence based practice, feedback, Hypertension, icce, medicine, ors, outcome measurement, outcome rating scale, post traumatic stress, practice-based evidence, proms, randomized clinical trial, session rating scale, srs, Training

The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) and remedial to the problem being treated (e.g., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of "evidence-based" practice, there is no evidence that adopting "specific methods for specific disorders" improves outcome.  At the same time, as I've pointed out in prior blogposts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a "common factors" approach leads to better performance.

The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific factor proponents argue that particular "evidence-based" (EBP) approaches are applicable to a given problem regardless of the people or places involved (it's amazing, really, when you consider that various approaches are being marketed to different countries and cultures as "evidence-based" when there is no evidence that these methods work beyond their very limited and unrepresentative samples).  On the other hand, the common factors camp, in place of techniques, proffers an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It's a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or by studying the parts of speech–in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear in 1994 when, following the advice of my friend and long-time mentor, Lynn Johnson, I began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific nor a common factors informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates were half (50%) those of the no-feedback condition!

Such results have, not surprisingly, led the practice of "routine outcome monitoring" (PROMS) to be deemed "evidence-based."  At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.–a long-time proponent of "specific treatments for specific disorders" (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I'd be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it's not seeking feedback that is important.  Rather, it's what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? About the same question. We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer's "domain-specific knowledge."  Feedback must result in the performer knowing more about his or her area–and how and when to apply that knowledge to specific situations–than others.  Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than "club-level" players.  Not surprisingly, master players' vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these, and highlighted a few others in the research pipeline, during my workshop on "Achieving Clinical Excellence" at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.

Filed Under: Behavioral Health, excellence, Feedback, Top Performance Tagged With: brendan madden, cdoi, cognitive behavioral therapy, common factors, continuing education, david barlow, evidence based medicine, evidence based practice, Evolution of Psychotherapy, feedback, icce, micheal lambert, ors, outcome rating scale, proms, session rating scale, srs, therapist, therapists, therapy

The Effects of Feedback on Medication Compliance and Outcome: The University of Pittsburgh Study

December 18, 2009 By scottdm 1 Comment

A number of years ago, I was conducting a workshop in Pittsburgh.  At some point during the training, I met Dr. Jan Pringle, the director of the Program Evaluation Research Unit in the School of Pharmacy at the University of Pittsburgh.

Jan had an idea: use outcome feedback to improve pharmacy practice and outcomes.  Every year, large numbers of prescriptions written by physicians (and other practitioners) are never filled.  What's more, a surprisingly large number of the scripts that are filled are either: (a) not taken; or (b) not taken properly.  The result?  In addition to the inefficient use of scarce resources, the disconnect between prescribers, pharmacists, and patients puts people at risk for poor healthcare outcomes.

Together with project coordinator and colleague Dr. Michael Melczak, Jan set up a study using the ORS and SRS.  Over the last 3 years, I've worked as a consultant to the project–providing training and addressing issues regarding application in this first-ever study of its kind in pharmacy.

Anyway, there were two different conditions in the study.  In the first, pharmacists–the practitioners most likely to interact with patients about prescriptions–engaged in "practice as usual."  In the second condition, pharmacists used the ORS and the SRS to chart, discuss, and guide patient progress and the pharmacist-patient alliance.  Although the manuscript is still in preparation, I'm pleased to be able to report here that, according to Drs. Pringle and Melczak, the results indicate "that the patients who were seen by the pharmacists who used [the] scales were significantly more likely to take their medications at the levels that would be likely to result in clinical impact than the patients who saw a pharmacist who did not use the scales…for hypertensive and hyperlipidemia drugs especially."

Stay tuned for more…

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, medication adherence Tagged With: jan pringle, michael melczak, ors, outcome rating scale, pharmacy, session rating scale, srs

Climate Change in Denmark

December 5, 2009 By scottdm Leave a Comment

Hans Christian Andersen, the author of such classic stories as The Ugly Duckling and The Emperor's New Clothes, once wrote, "Life itself is the most wonderful fairy tale of all."  That sentiment is certainly true of my own life.  For the last 16 years, I've been privileged to travel around the world conducting training and providing consultation.  Each year, I meet literally thousands of therapists, and I'm consistently impressed and inspired by their dedication and persistence.  Truth be told, that "spirit"–for lack of a better word–is actually what keeps me in the field.

This last year, I've spent a considerable amount of time working with practitioners in Denmark.  Interest in Feedback-Informed Treatment has taken off–and I have the frequent flyer miles to prove it!  While I've been traveling to the homeland of Hans Christian Andersen for many years (actually, my maternal grandfather and his family emigrated to the United States from a small town just outside Copenhagen), momentum really began building following several years of workshops arranged by Henrik and Mette Petersen, who run Solution–a top-notch organization providing both workshops and year-long certification courses in short-term, solution-focused, and systemic therapies.

In October, I worked with 100+ staff at Psykoterapeutisk Center Stolpegård–a large outpatient center just outside of Copenhagen.  For two days, we talked about research and practice in psychotherapy, focusing specifically on using outcomes to inform and improve clinical services.  Peter Koefoed, chief psychologist and head of training, organized the event.  I was back in Denmark not quite one month later for two days with Henrik and Mette Petersen, and then a third day for a small, intensive training with Toftemosegaard–a center for growth and change–smack dab in the middle of Copenhagen.

At each event, I was honored to be accompanied by Danish psychologist Susanne Bargmann, an Associate and Certified Trainer for the International Center for Clinical Excellence (ICCE).  I first met Susanne at a two-day workshop sponsored by Solution a number of years ago.  Her attitude and drive are infectious.  She attended the Training of Trainers event in Chicago and now runs a listserv for Danish practitioners interested in feedback-informed treatment (FIT) (by the way, if you are interested in joining the group, simply click on her name above to send an email).

Recently, she published an important article in Psycholog Nyt–the official magazine for the Danish Psychological Association. The article is really the first written in Danish by a Danish practitioner to suggest “practice-based evidence” as a scientifically credible alternative to the narrow “specific treatments for specific problems” paradigm that has come to dominate professional discourse and practice the world over.

Anyway, I’ll be back in Denmark several times in 2010.  In May, I'll be teaching "Supershrinks: Learning from the Field's Most Effective Practitioners."  The course, as I understand it, is already sold out.  No worries, though, as the workshop is being offered again in November–so sign up early (click here to access my workshop calendar).  Also, in September, Susanne and I will jointly teach a course for psychologists on research entitled "Forskning og Formidling"–a required training for those seeking specialist approval from the Danish Psychological Association.  Finally, as I've done for the last several years, I'm scheduled to do two days for Solution as well.  If you live and work in Denmark, I truly hope to see you at one of these events.

Bargman Nye Veje For Evidensbegrebet from Scott Miller


Filed Under: Behavioral Health, excellence, Feedback Informed Treatment - FIT Tagged With: cdoi, Danish Psychological Association, denmark, icce, international center for cliniclal excellence, ors, outcome rating scale, practice-based evidence, session rating scale, srs, supershrinks

Where is Scott Miller going? The Continuing Evolution

November 16, 2009 By scottdm 2 Comments

I’ve just returned from a week in Denmark providing training for two important groups.  On Wednesday and Thursday, I worked with close to 100 mental health professionals presenting the latest information on “What Works” in Therapy at the Kulturkuset in downtown Copenhagen.  On Friday, I worked with a small group of select clinicians working on implementing feedback-informed treatment (FIT) in agencies around Denmark.  The day was organized by Toftemosegaard and held at the beautiful and comfortable Imperial Hotel.

In any event, while I was away, I received a letter from my colleague and friend, M. Duncan Stanton.  For many years, “Duke,” as he’s known, has been sending me press clippings and articles both helping me stay “up to date” and, on occasion, giving me a good laugh.  Enclosed in the envelope was the picture posted above, along with a post-it note asking me, “Are you going into a new business?!”

As readers of my blog know, while I’m not going into the hair-styling and spa business, there’s a grain of truth in Duke’s question.  My work is indeed evolving.  For most of the last decade, my writing, research, and training focused on factors common to all therapeutic approaches. The logic guiding these efforts was simple and straightforward. The proven effectiveness of psychotherapy, combined with the failure to find differences between competing approaches, meant that elements shared by all approaches accounted for the success of therapy (e.g., the therapeutic alliance, placebo/hope/expectancy, structure and techniques, extratherapeutic factors).  As first spelled out in Escape from Babel: Toward a Unifying Language for Psychotherapy Practice, the idea was that effectiveness could be enhanced by practitioners purposefully working to enhance the contribution of these pantheoretical ingredients.  Ultimately though, I realized the ideas my colleagues and I were proposing came dangerously close to a new model of therapy.  More importantly, there was (and is) no evidence that teaching clinicians a “common factors” perspective led to improved outcomes–which, by the way, had been my goal from the outset.

The measurable improvements in outcome and retention–following my introduction of the Outcome and Session Rating Scales to the work being done by me and my colleagues at the Institute for the Study of Therapeutic Change–provided the first clues to the coming evolution.  Something happened when formal feedback from consumers was provided to clinicians on an ongoing basis–something beyond either the common or specific factors–a process I believed held the potential for clarifying how therapists could improve their clinical knowledge and skills.  As I began exploring, I discovered an entire literature of which I’d previously been unaware; that is, the extensive research on experts and expert performance.  I wrote about our preliminary thoughts and findings together with my colleagues Mark Hubble and Barry Duncan in an article entitled, “Supershrinks” that appeared in the Psychotherapy Networker.

Since then, I’ve been fortunate to be joined by an internationally renowned group of researchers, educators, and clinicians in the formation of the International Center for Clinical Excellence (ICCE).  Briefly, the ICCE is a web-based community where participants can connect, learn from, and share with each other.  It has been specifically designed using the latest web 2.0 technology to help behavioral health practitioners reach their personal best.  If you haven't already done so, please visit the website at www.iccexcellence.com to register to become a member (it's free, and you'll be notified the minute the entire site is live)!

As I’ve said before, I am very excited by this opportunity to interact with behavioral health professionals all over the world in this way.  Stay tuned, after months of hard work and testing by the dedicated trainers, associates, and “top performers” of ICCE, the site is nearly ready to launch.

Filed Under: excellence, Feedback Informed Treatment - FIT, Top Performance Tagged With: denmark, icce, Institute for the Study of Therapeutic Change, international center for cliniclal excellence, istc, mental health, ors, outcome rating scale, psychotherapy, psychotherapy networker, session rating scale, srs, supershrinks, therapy

Leading Outcomes in Vermont: The Brattleboro Retreat and Primarilink Project

November 8, 2009 By scottdm 4 Comments

For the last 7 years, I’ve been traveling to the small, picturesque village of Brattleboro, Vermont to work with clinicians, agency managers, and various state officials on integrating outcomes into behavioral health services.  Peter Albert, the director of Governmental Affairs and PrimariLink at the Brattleboro Retreat, has tirelessly crisscrossed the state, promoting outcome-informed clinical work and organizing the trainings and ongoing consultations.   Over time, I’ve done workshops on the common factors, “what works” in therapy, using outcome to inform treatment, working with challenging clinical problems and situations and, most recently, the qualities and practices of super effective therapists.  In truth, outcome-informed clinical work both grew up and “came of age” in Vermont.  Indeed, Peter Albert was the first to bulk-purchase the ASIST program and distribute it for free to any provider interested in tracking and improving the effectiveness of their clinical work.

If you’ve never been to the Brattleboro area, I can state without reservation that it is one of the most beautiful areas I’ve visited in the U.S.–particularly during the Fall, when the leaves are changing color.  If you are looking for a place to stay for a few days, the Crosy House is my first and only choice.  The campus of the Retreat is also worth visiting.  It’s no accident that the trainings are held there as it has been a place for cutting edge services since being founded in 1874.  The radical idea at that time?  Treat people with respect and dignity.  The short film below gives a brief history of the Retreat and a glimpse of the serene setting.

Anyway, this last week, I spent an entire day together with a select group of therapists dedicated to improving outcomes and delivering superior service to their clients.  Briefly, these clinicians have been volunteering their time to participate in a project to implement outcome-informed work in their clinical settings.  We met in the boardroom at the Retreat, discussing the principles and practices of outcome-informed work as well as reviewing graphs of their individual and aggregate ORS and SRS data.

It has been and continues to be an honor to work with each and every one in the PrimariLink project.  Together, they are making a real difference in the lives of those they work with and in the field of behavioral health in Vermont.  If you are a clinician located in Vermont or provide services to people covered by MVP or PrimariLink and would like to participate in the project, please email Peter Albert.  At the same time, if you are a person in need of behavioral health services and looking for a referral, you could do no better than contacting one of the providers in the project!

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, FIT Software Tools, Practice Based Evidence Tagged With: behavioral health, common factors, consultation, ors, outcome rating scale, session rating scale, srs, supershrinks, therapy, Training

Outcomes in Ohio: The Ohio Council of Behavioral Health & Family Service Providers

October 30, 2009 By scottdm Leave a Comment

Ohio is experiencing the same challenges faced by other states when it comes to behavioral health services: staff and financial cutbacks, increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  Into the breach stepped the Ohio Council of Behavioral Health & Family Service Providers, which organized its 30th annual conference, focused entirely on helping members meet these challenges and provide the most effective services possible.

On Tuesday, I presented a plenary address summarizing 40 years of research on “what works” in clinical practice, as well as strategies for documenting and improving the retention and outcome of behavioral health services.  What can I say?  It was a real pleasure working with the 200+ clinicians, administrators, payers, and business executives in attendance.  Members of OCBHFSP truly live up to their stated mission of “improving the health of Ohio’s communities and the well-being of Ohio’s families by promoting effective, efficient, and sufficient behavioral health and family services through member excellence and family advocacy.”

For a variety of reasons, the State of Ohio has recently abandoned the outcome measure that had been in use for a number of years.  In my opinion, this is a “good news/bad news” situation.  The good news is that the scale being used was neither feasible nor clinically useful.  The bad news, at least at this point in time, is that state officials opted for no measure rather than another valid, reliable, and feasible outcome tool.  This does not mean that agencies and providers are not interested in outcomes.  Indeed, as I will soon blog about, a number of clinics and therapists in Ohio are using the Outcome and Session Rating Scales to inform and improve service delivery.  At the conference, John Blair and Jonathon Glassman from Myoutcomes.com demonstrated the web-based system for administering, scoring, and interpreting the scales to many attendees.  I caught up with them both in the hall outside the exhibit room.

Anyway, thanks go to the members and directors of OCBHFSP for inviting me to present at the conference.  I look forward to working with you in the future.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: behavioral health, medicine, outcome measurement, outcome measures, outcome rating scale, research, session rating scale, therapy

Achieving Clinical Excellence: The Conference

October 26, 2009 By scottdm Leave a Comment

A few weeks ago, I announced the first International “Achieving Clinical Excellence” (ACE) conference, to be held at the Westin Hotel in Kansas City, Missouri on October 20-22nd, 2010.  You can now register for this and all other ICCE events by clicking here.  Through a variety of keynote addresses and workshops, participants will learn the “science and steps” of excellence in clinical practice.  Attendees will also meet and learn directly from internationally ranked performers from a variety of professions, including medicine, science, music, entertainment, and sports.  I do hope you’ll join us in Kansas City for three days of science, skill building, and inspiration.

In the meantime, I wanted to tell you a bit about one of the conference’s keynote speakers, K. Anders Ericsson, Ph.D. As anyone who has been following my blog knows, Dr. Ericsson is the editor of the massive and influential “Cambridge Handbook of Expertise and Expert Performance.”  He is an internationally known writer, researcher, and speaker who is commonly referred to as “the expert on experts.”

At the ACE conference, Dr. Ericsson will bring his knowledge and experience to bear on the subject of expertise in behavioral health.  I promise you won’t want to miss it.  For a flavor, give his recent article from the Harvard Business Review a read.

Filed Under: Behavioral Health, excellence Tagged With: addiction, cdoi, conferences and training, icce, ors, outcome rating scale, session rating scale, srs, Therapist Effects, training and consultation
