Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


Is your therapy making your clients worse? The Guardian Strikes Again

June 12, 2014 By scottdm 1 Comment


Last week, an article appeared in The Guardian, one of the U.K.’s largest daily newspapers.  “Counselling and Therapy can be Harmful,” the headline boldly asserted, citing results of a study yet to be published.  It certainly got my attention.

Do some people in therapy get worse?  The answer is, most assuredly, “Yes.”  Research dating back several decades puts the figure at about 10% (Lambert, 2010).  Said another way, at termination, roughly one out of ten people is functioning more poorly than at the beginning of treatment.

The cause?  Here’s what we know.  Despite claims to the contrary (e.g., Lilienfeld, 2007), no psychotherapy approach tested in a clinical trial has ever been shown to reliably lead to or increase the chances of deterioration.  NONE.  Scary stories about dangerous psychological treatments are limited to a handful of fringe therapies–approaches that have never been vetted scientifically and which all but a few practitioners avoid.

So, if it’s not about the method, then how to account for deterioration?  As the article points out, “some therapists had a lot more clients [who] deteriorated than others.”  And yet, while that statement is true–lots of prior research shows that some do more harm than others–there are too few such clinicians to account for the total number of clients who worsen.  Moreover, beyond that 10%, between 30 and 50% of people in treatment experience no benefit whatsoever!

Here is where the old adage, “an ounce of prevention is worth a pound of cure,” applies.  Whatever the cause, lack of progress and risk of deterioration are issues for all clinicians.  A growing body of research makes clear that the key to addressing the problem is tracking the progress of clients from visit to visit so that those not improving, or getting worse, can be identified and offered alternatives.
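
In practice, “tracking progress from visit to visit” boils down to a little arithmetic.  Here is a minimal sketch, assuming a 0-40 outcome scale like the ORS and the commonly cited 5-point reliable-change threshold (both assumptions on my part, not details from this post):

    # Flag clients who are not improving, or getting worse, session to session.
    # Assumes a 0-40 scale (ORS-style); the 5-point reliable-change threshold
    # is the commonly cited convention, not a figure from this post.
    def flag_at_risk(scores, reliable_change=5):
        """Classify a client from their session-by-session outcome scores."""
        if len(scores) < 2:
            return "insufficient data"
        change = scores[-1] - scores[0]
        if change <= -reliable_change:
            return "deteriorating"   # reliably worse than at intake
        if change < reliable_change:
            return "not improving"   # no reliable change yet
        return "on track"

    # Example: intake 19, latest 13 -- reliably worse; time to offer alternatives.
    print(flag_at_risk([19, 17, 15, 13]))   # -> "deteriorating"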

It’s not hard to get started.  You can learn a simple, evidence-based method for tracking progress and the quality of the relationship at: www.whatispcoms.com.  Best of all, practitioners can access the tools for free!

After that, join fellow practitioners from the US, Canada, Europe, and Australia for one of our intensive trainings coming up this August in Chicago.  I promise you’ll leave prepared to address the issue of deterioration directly and successfully.

Filed Under: Feedback Informed Treatment - FIT Tagged With: clinical trial, counselling, lilienfeld, michael lambert, psychotherapy, the guardian, therapy, Training, whatispcoms

What can therapists learn from the CIA? Experts versus the "Wisdom of the Crowd"

May 6, 2014 By scottdm Leave a Comment


What can we therapists learn from the CIA?  In a phrase, “When it comes to making predictions about important future events, don’t rely on experts!”

After a spate of embarrassing, high-profile intelligence failures, a recent story showed how a relatively small group of average people made better predictions about critical world events than highly-trained analysts with access to classified information.  The four-year study, known as the Good Judgment Project, adds to mounting evidence regarding the power of aggregating the independent guesses of regular folks–or what is known as “the wisdom of the crowd.”
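
If you want to see the statistical kernel of “the wisdom of the crowd” for yourself, a toy simulation makes the point: the average of many independent, noisy guesses usually lands closer to the truth than a typical individual guess.  The numbers below are invented purely for illustration:

    # Toy illustration of "the wisdom of the crowd" (invented numbers).
    import random

    random.seed(1)
    truth = 70.0                                             # quantity being estimated
    guesses = [random.gauss(truth, 15) for _ in range(200)]  # independent guesses

    crowd_error = abs(sum(guesses) / len(guesses) - truth)
    typical_error = sum(abs(g - truth) for g in guesses) / len(guesses)

    print(f"crowd average off by:      {crowd_error:.1f}")   # ~1 point
    print(f"typical individual off by: {typical_error:.1f}") # ~12 points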

When it comes to therapy, multiple scientific studies show that inviting the “wisdom of the crowd” into treatment as much as doubles effectiveness, while simultaneously cutting dropout and deterioration rates.

Whatever your profession, work setting, or preferred therapeutic approach, the process involves formally soliciting feedback from clients and then comparing the results to empirically established benchmarks.   Getting started is easy:

  • Download and begin using two free, easy-to-use tools–one that charts progress, the other the quality of the therapeutic relationship–both of which are listed on SAMHSA’s National Registry of Evidence Based Programs and Practices.
  • Next, access cutting-edge technology available on the web, smartphones, and tablets, that makes it easy to anonymously compare the progress of your clients to effective patterns of practice worldwide.

You can learn more at: www.whatispcoms.com.  Plus, the ICCE–the world’s largest online community of professionals using feedback to enhance clinical judgment–is available at no cost to support you in your efforts.

While you’re at it, be sure to join fellow practitioners from the US, Canada, Europe, and Australia for the “Training of Trainers” or two-day FIT Implementation Intensive coming up this August in Chicago.  You’ll not only learn how to use the measures, but also tap into the collective wisdom of clients and practitioners around the globe.  Space is limited, and we are filling up quickly, so don’t wait to register.

Filed Under: Feedback, Feedback Informed Treatment - FIT Tagged With: feedback, feedback informed treatment, icce, international center for clinical excellence, National Registry of Evidence Based Programs and Practices, NREPP, PCOMS, SAMHSA, therapy, Training

Do you know who said, "Sometimes the magic works, sometimes it doesn’t"?

April 30, 2014 By scottdm Leave a Comment


Chief Dan George playing the role of Old Lodge Skins in the 1970 movie, “Little Big Man.”  Whether or not you’ve seen or remember the film, if you’re a practicing therapist, you know the wisdom contained in that quote.  No matter how skilled the clinician or devoted the client, “sometimes therapy works, sometimes it doesn’t.”

Evidence from randomized clinical trials indicates that, on average, clinicians achieve a reliable change–that is, a difference not attributable to chance, maturation, or measurement error–with approximately 50% of people treated.  For the most effective therapists, it’s about 70%.  Said another way, all of us fail between 30 and 50% of the time.
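
“Reliable change,” by the way, has a precise statistical meaning.  A minimal sketch of the standard Jacobson-Truax computation follows; the standard deviation and reliability figures are placeholders, not values from the trials summarized here:

    # Jacobson & Truax reliable change index (RCI): is a pre-post difference
    # bigger than measurement error alone could plausibly produce?
    import math

    def rci(pre, post, sd, reliability):
        """|RCI| > 1.96 suggests change beyond measurement error (p < .05)."""
        se_measurement = sd * math.sqrt(1 - reliability)   # standard error of measurement
        se_difference = math.sqrt(2) * se_measurement      # SE of a difference score
        return (post - pre) / se_difference

    # Placeholder psychometrics: sd = 8, test-retest reliability = .80.
    print(round(rci(pre=18, post=28, sd=8, reliability=0.80), 2))  # 1.98 -> reliable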

Of greater concern, however, is the finding that we don’t see the failure coming.  Hannan and colleagues (2005) found, for example, that therapists correctly predicted deterioration in only 1 of 550 people treated, despite having been told beforehand the likely percentage of their clients that would worsen and knowing they were participating in a study on the subject!

It’s one thing when “the magic doesn’t work”–nothing is 100%–but it’s an entirely different matter when we go on believing that something is working when it’s not.  Put bluntly, we are a terminally, and forever, hopeful group of professionals!

What to do?  Hannan et al. (2005) found that simple measures of progress in therapy correctly identified 90% of clients “at risk” for a negative outcome or dropout.  Other studies have found that routinely soliciting feedback from people in treatment regarding progress and their experience of the therapeutic relationship as much as doubles effectiveness while simultaneously reducing dropout and deterioration rates.

You can get two simple, evidence-based measures for free here.  Get started by connecting with and learning from colleagues on the world’s largest online network of clinicians: The International Center for Clinical Excellence.  It’s free, and signing up takes only a minute or two.


Finally, take advantage of a special offer for the 6 Feedback Informed Treatment and Training Manuals, containing step-by-step instructions for using the scales to guide and improve the services you offer.  These manuals are the reason the ICCE received perfect scores when SAMHSA reviewed and approved our application for evidence-based status.

Here’s to knowing when our “magic” is working, and when it’s not!

Filed Under: Feedback Informed Treatment - FIT Tagged With: icce, international center for clinical excellence, magic, outcome measurement, randomized clinical trial, therapy

Good News and Bad News about Psychotherapy

March 25, 2014 By scottdm 3 Comments


Have you seen this month’s issue of “The National Psychologist”?  If you do counseling or psychotherapy, you should read it.  The headline screams, “Therapy: No Improvement for 40 Years.”  And while I did not know the article would be published, I was not surprised by the title nor its contents.  The author and associate editor, John Thomas, was summarizing the invited address I gave at the recent Evolution of Psychotherapy conference.

Fortunately, it’s not all bad news.  True, the outcomes of psychotherapy have not been improving.  Neither is there much evidence that clinicians become more effective with age and experience.  That said, we can get better.  Results from studies of top performing clinicians point the way.  I also reviewed this exciting research in my presentation.  Even if you didn’t attend the conference, you can see it here thanks to the generosity of the Milton H. Erickson Foundation.  Take a look at the article and video, then drop me a line and let me know what you think.  To learn more, you can access a variety of articles for free in the scholarly publications section of the website.

Click here to access the article from the National Psychologist about Scott Miller’s speech at the Evolution of Psychotherapy Conference in Anaheim, California (US) 

Filed Under: Top Performance Tagged With: accountability, Alliance, counselling, deliberate practice, erickson, evidence based practice, Evolution of Psychotherapy, feedback, healthcare, john thomas, psychotherapy, The National Psychologist, therapy

Psychotherapy Training: Is it Worth the Bother?

October 29, 2012 By scottdm 2 Comments

Big bucks.  That’s what training in psychotherapy costs.  Take graduate school in psychology as an example.  According to the US Department of Education’s National Center for Education Statistics (NCES), a typical doctoral program takes five years to complete and costs between US$240,000 and US$300,000.

Who has that kind of money lying around after completing four years of college?  The solution?  Why, borrow the money, of course!  And students do.  In 2009, the average amount of debt of those doctoral students in psychology who borrowed was a whopping US$88,000–an amount nearly double that of the prior decade.  Well, the training must be pretty darn good to warrant such expenditures–especially when one considers that entry level salaries are on the decline and not terribly high to start!

Oh well, so much for high hopes.

Here are the facts, as recounted in a recent, concisely written summary of the evidence by John Malouff:

1. Studies comparing treatments delivered by professionals and paraprofessionals either show that paraprofessionals have better outcomes or that there is no difference between the two groups;

2. There is virtually no evidence that supervision of students by professionals leads to better client outcomes (you should have guessed this after reading the first point);

3. There is no evidence that required coursework in graduate programs leads to better client outcomes.

If you are hoping that post doctoral experience will make up for the shortcomings of professional training, well, keep hoping.  In truth, professional experience rarely correlates significantly with client therapy outcomes.

What can you do?  As Malouff points out, “For accrediting agencies to operate in the realm of principles of evidence-based practice, they must produce evidence…and this evidence needs to show that…training…contribute(s) to psychotherapy outcomes…[and] has positive benefits for future clients of the students” (p. 31).

In my workshops, I often advise therapists to forgo additional training until they determine just how effective they are right now.  Doing otherwise risks perceiving progress where, in fact, none exists.  What golfer would buy new clubs or pursue expensive lessons without first knowing their current handicap?  How will you know if the training you attend is “worth the bother” if you can’t accurately measure its impact on your performance?

Determining one’s baseline rate of effectiveness is not as hard as it might seem.  Simply download the Outcome Rating Scale and begin using it with your clients.  It’s free.  You can then aggregate and analyze the data yourself or use one of the existing web-based systems (www.fit-outcomes.com or www.myoutcomes.com) to get data regarding your effectiveness in real time.
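
For the do-it-yourself route, the core of the aggregation is small enough to fit in a few lines.  Here is a sketch, assuming the commonly cited ORS conventions (0-40 scale, 5-point reliable-change threshold) and invented case data; web systems like www.fit-outcomes.com automate this kind of summary for you:

    # Estimate your baseline effectiveness from (first, last) ORS scores per
    # closed case.  The 5-point reliable-change threshold is the commonly
    # cited ORS convention; the sample data below are invented.
    cases = [(19, 27), (24, 23), (15, 31), (28, 26), (12, 20)]

    improved = sum(1 for first, last in cases if last - first >= 5)
    worse = sum(1 for first, last in cases if first - last >= 5)

    print(f"reliably improved: {improved}/{len(cases)}")  # 3/5
    print(f"reliably worse:    {worse}/{len(cases)}")     # 0/5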

After that, join your colleagues at the upcoming Advanced Intensive Training in Feedback Informed Treatment.  This is an “evidence-based” training event.  You’ll learn:

• How to use outcome management tools (e.g., the ORS) to inform and improve the treatment services you provide;

• Specific skills for determining your overall clinical success rate;

• How to develop an individualized, evidence-based professional development plan for improving your outcome and retention rate.

There’s a special “early bird” rate available for a few more weeks.  Last year, the event filled up several months ahead of time, so don’t wait.

On another note, I just received the schedule for the 2013 Evolution of Psychotherapy conference.  I’m very excited to have been invited once again to the prestigious event and will be bringing the latest information and research on achieving excellence as a behavioral health practitioner.  On that note, the German artist and psychologist Andreas Steiner has created a really cool poster and card game for the event, featuring all of the various presenters.  Here’s the poster.  Next to it is the “Three of Hearts.”  I’m pictured there with two of my colleagues, mentors, and friends, Michael Yapko and Stephen Gilligan:

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, Top Performance Tagged With: Andreas Steiner, evidence based medicine, evidence based practice, Evolution of Psychotherapy conference, john malouff, Michael Yapko, ors, outcome management, outcome measurement, outcome rating scale, paraprofessionals, psychology, psychotherapy, session rating scale, srs, Stephen Gilligan, therapy, Training, US Department of Education's National Center for Education Statistics (NCES)

Psychologist Alan Kazdin Needs Help: Please Give

September 25, 2011 By scottdm Leave a Comment

Look at this picture.  This man needs help.  He is psychologist Alan Kazdin, former president of the American Psychological Association and current Professor of Psychology at Yale University.  A little over a week ago, to the surprise and shock of many in the field, he disclosed a problem in his professional life.  In an interview that appeared online at Time Healthland, Dr. Kazdin reported being unable to find a therapist or treatment program to which he could refer clients–even in Manhattan, New York, the nation’s largest city!

After traveling the length and breadth of the United States for the last decade, and meeting and working with hundreds of agencies and tens of thousands of therapists, I know there are many clinicians who can help Dr. Kazdin with his problem.  Our group has been tracking the outcomes of numerous practitioners over the last decade and found average outcomes to be on par with those obtained in tightly controlled randomized clinical trials!  That’s good news for Dr. Kazdin.

Now, just to be sure, it should be pointed out that Dr. Kazdin is asking for practitioners who adhere to the Cochrane Review’s and the American Psychological Association’s definition of evidence-based practice (EBP)–or, I should say, I believe that is what he is asking for, as the interview is not entirely clear on this point and appears to imply that EBP is about using specific treatment methods (the most popular, of course, being CBT).  The actual definition contains three main points, and clearly states that EBP is the integration of:

  1. The best available research;
  2. Clinical expertise; and
  3. The client’s culture, values, and preferences.

Interestingly, the official APA policy on evidence-based practice further defines clinical expertise as the “monitoring of patient progress (and of changes in the patient’s circumstances)…that may suggest the need to adjust the treatment.  If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate.”

I say “interestingly” for two reasons.  First, the definition of EBP clearly indicates that clinicians must tailor psychotherapy to the individual client.  And yet, the interview with Dr. Kazdin specifically quotes him as saying, “That’s a red herring. The research shows that no one knows how to do that. [And they don’t know how to monitor your progress].”  Now, admittedly, the research is new and, as Dr. Kazdin says, “Most people practicing who are 50 years or older”–like himself–may not know about it, but there are over a dozen randomized clinical trials documenting how routinely monitoring progress and the relationship, and adjusting accordingly, improves outcome.  The interview also reports him saying that “there is no real evidence” that the relationship (aka alliance) between the therapist and client matters when, in fact, the APA Interdivisional Task Force on Evidence-Based Therapy Relationships concluded that there is abundant evidence that “the therapy relationship accounts for substantial and consistent contributions to…outcome….at least as much as the particular method.”  (Incidentally, the complete APA policy statement on EBP can be found in the May-June 2006 issue of the American Psychologist.)

Who knows how these two major bloopers managed to slip through the editing process?  I sure know I’d be embarrassed and would immediately issue a clarification if I’d been misquoted making statements so clearly at odds with the facts.  Perhaps Dr. Kazdin is still busy looking for someone to whom he can refer clients.  If you are a professional who uses your clinical expertise to tailor the application of scientifically sound psychotherapy practices to client preferences, values, and culture, then you can help.

Filed Under: evidence-based practice, Top Performance Tagged With: Alan Kazdin, American Psychological Association, brief therapy, Carl Rogers, CBT, continuing education, evidence based practice, icce, medicine, therapy

The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question–as simple as it is–has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) remedial to the problem being treated (i.e., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established–studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated–one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common versus specific factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome.  At the same time, as I’ve pointed out in prior blogposts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and yet all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.

The problem with the specific and common factor positions is that both–and hang onto your seat here–have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved (it’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples).  On the other hand, the common factors camp, in place of techniques, proffers an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It’s a bit like trying to learn a language either by memorizing a phrase book–in the case of EBP–or studying the parts of speech–in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and long time mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services–to fit the work to the people and places involved–that neither a specific nor a common factors informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates in the feedback condition were half (50%) those in the no-feedback condition!

Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.”  At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.–a long time proponent of “specific treatments for specific disorders” (EBP)–who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it’s not seeking feedback that is important.  Rather, it’s what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? About the same question. We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer’s “domain specific knowledge.”  Feedback must result in the performer knowing more about his or her area, and how and when to apply that knowledge to specific situations, than others.  Master level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players.  Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these and highlighted a few others in the research pipeline during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.

Filed Under: Behavioral Health, excellence, Feedback, Top Performance Tagged With: brendan madden, cdoi, cognitive behavioral therapy, common factors, continuing education, david barlow, evidence based medicine, evidence based practice, Evolution of Psychotherapy, feedback, icce, michael lambert, ors, outcome rating scale, proms, session rating scale, srs, therapist, therapists, therapy

Where is Scott Miller going? The Continuing Evolution

November 16, 2009 By scottdm 2 Comments

I’ve just returned from a week in Denmark providing training for two important groups.  On Wednesday and Thursday, I worked with close to 100 mental health professionals presenting the latest information on “What Works” in Therapy at the Kulturkuset in downtown Copenhagen.  On Friday, I worked with a small group of select clinicians working on implementing feedback-informed treatment (FIT) in agencies around Denmark.  The day was organized by Toftemosegaard and held at the beautiful and comfortable Imperial Hotel.

In any event, while I was away, I received a letter from my colleague and friend, M. Duncan Stanton.  For many years, “Duke,” as he’s known, has been sending me press clippings and articles both helping me stay “up to date” and, on occasion, giving me a good laugh.  Enclosed in the envelope was the picture posted above, along with a post-it note asking me, “Are you going into a new business?!”

As readers of my blog know, while I’m not going into the hair-styling and spa business, there’s a grain of truth in Duke’s question.  My work is indeed evolving.  For most of the last decade, my writing, research, and training focused on factors common to all therapeutic approaches. The logic guiding these efforts was simple and straightforward. The proven effectiveness of psychotherapy, combined with the failure to find differences between competing approaches, meant that elements shared by all approaches accounted for the success of therapy (e.g., the therapeutic alliance, placebo/hope/expectancy, structure and techniques, extratherapeutic factors).  As first spelled out in Escape from Babel: Toward a Unifying Language for Psychotherapy Practice, the idea was that effectiveness could be enhanced by practitioners purposefully working to enhance the contribution of these pantheoretical ingredients.  Ultimately though, I realized the ideas my colleagues and I were proposing came dangerously close to a new model of therapy.  More importantly, there was (and is) no evidence that teaching clinicians a “common factors” perspective led to improved outcomes–which, by the way, had been my goal from the outset.

The measurable improvements in outcome and retention–following my introduction of the Outcome and Session Rating Scales to the work being done by me and my colleagues at the Institute for the Study of Therapeutic Change–provided the first clues to the coming evolution.  Something happened when formal feedback from consumers was provided to clinicians on an ongoing basis–something beyond either the common or specific factors–a process I believed held the potential for clarifying how therapists could improve their clinical knowledge and skills.  As I began exploring, I discovered an entire literature of which I’d previously been unaware; that is, the extensive research on experts and expert performance.  I wrote about our preliminary thoughts and findings together with my colleagues Mark Hubble and Barry Duncan in an article entitled, “Supershrinks” that appeared in the Psychotherapy Networker.

Since then, I’ve been fortunate to be joined by an internationally renowned group of researchers, educators, and clinicians in the formation of the International Center for Clinical Excellence (ICCE).  Briefly, the ICCE is a web-based community where participants can connect, learn from, and share with each other.  It has been specifically designed using the latest web 2.0 technology to help behavioral health practitioners reach their personal best.  If you haven’t already done so, please visit the website at www.iccexcellence.com to register to become a member (it’s free, and you’ll be notified the minute the entire site is live)!

As I’ve said before, I am very excited by this opportunity to interact with behavioral health professionals all over the world in this way.  Stay tuned, after months of hard work and testing by the dedicated trainers, associates, and “top performers” of ICCE, the site is nearly ready to launch.

Filed Under: excellence, Feedback Informed Treatment - FIT, Top Performance Tagged With: denmark, icce, Institute for the Study of Therapeutic Change, international center for clinical excellence, istc, mental health, ors, outcome rating scale, psychotherapy, psychotherapy networker, session rating scale, srs, supershrinks, therapy

Leading Outcomes in Vermont: The Brattleboro Retreat and Primarilink Project

November 8, 2009 By scottdm 4 Comments

For the last 7 years, I’ve been traveling to the small, picturesque village of Brattleboro, Vermont to work with clinicians, agency managers, and various state officials on integrating outcomes into behavioral health services.  Peter Albert, the director of Governmental Affairs and PrimariLink at the Brattleboro Retreat, has tirelessly crisscrossed the state, promoting outcome-informed clinical work and organizing the trainings and ongoing consultations.   Over time, I’ve done workshops on the common factors, “what works” in therapy, using outcome to inform treatment, working with challenging clinical problems and situations and, most recently, the qualities and practices of super effective therapists.  In truth, outcome-informed clinical work both grew up and “came of age” in Vermont.  Indeed, Peter Albert was the first to bulk-purchase the ASIST program and distribute it for free to any provider interested in tracking and improving the effectiveness of their clinical work.

If you’ve never been to the Brattleboro area, I can state without reservation that it is one of the most beautiful areas I’ve visited in the U.S.–particularly during the Fall, when the leaves are changing color.  If you are looking for a place to stay for a few days, the Crosby House is my first and only choice.  The campus of the Retreat is also worth visiting.  It’s no accident that the trainings are held there, as it has been a place for cutting-edge services since its founding.  The radical idea at that time?  Treat people with respect and dignity.  The short film below gives a brief history of the Retreat and a glimpse of the serene setting.

Anyway, this last week, I spent an entire day together with a select group of therapists dedicated to improving outcomes and delivering superior service to their clients.  Briefly, these clinicians have been volunteering their time to participate in a project to implement outcome-informed work in their clinical settings.  We met in the boardroom at the Retreat, discussing the principles and practices of outcome-informed work as well as reviewing graphs of their individual and aggregate ORS and SRS data.

It has been and continues to be an honor to work with each and every one in the PrimariLink project.  Together, they are making a real difference in the lives of those they work with and to the field of behavioral health in Vermont.  If you are a clinician located in Vermont or provide services to people covered by MVP or PrimariLink and would like to participate in the project, please email Peter Albert.  At the same time, if you are a person in need of behavioral health services and looking for a referral, you could do no better than contacting one of the providers in the project!

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, FIT Software Tools, Practice Based Evidence Tagged With: behavioral health, common factors, consultation, ors, outcome rating scale, session rating scale, srs, supershrinks, therapy, Training

Outcomes in Ohio: The Ohio Council of Behavioral Health & Family Service Providers

October 30, 2009 By scottdm Leave a Comment

Ohio is experiencing the same challenges faced by other states when it comes to behavioral health services: staff and financial cutbacks, increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  Into the breach, the Ohio Council of Behavioral Health & Family Service Providers organized their 30th annual conference, focused entirely on helping their members meet the challenges and provide the most effective services possible.

On Tuesday, I presented a plenary address summarizing 40 years of research on “What Works” in clinical practice as well as strategies for documenting and improving the retention and outcome of behavioral health services.  What can I say?  It was a real pleasure working with the 200+ clinicians, administrators, payers, and business executives in attendance.  Members of OCBHFSP truly live up to their stated mission of “improving the health of Ohio’s communities and the well-being of Ohio’s families by promoting effective, efficient, and sufficient behavioral health and family services through member excellence and family advocacy.”

For a variety of reasons, the State of Ohio has recently abandoned the outcome measure that had been in use for a number of years.  In my opinion, this is a “good news/bad news” situation.  The good news is that the scale that was being used was neither feasible nor clinically useful.  The bad news, at least at this point in time, is that state officials opted for no measure rather than another valid, reliable, and feasible outcome tool.  This does not mean that agencies and providers are not interested in outcome.  Indeed, as I will soon blog about, a number of clinics and therapists in Ohio are using the Outcome and Session Rating Scales to inform and improve service delivery.  At the conference, John Blair and Jonathon Glassman from Myoutcomes.com demonstrated the web-based system for administering, scoring, and interpreting the scales to many attendees.  I caught up with them both in the hall outside the exhibit room.

Anyway, thanks go to the members and directors of OCBHFSP for inviting me to present at the conference.  I look forward to working with you in the future.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: behavioral health, medicine, outcome measurement, outcome measures, outcome rating scale, research, session rating scale, therapy

My New Year’s Resolution: The Study of Expertise

January 2, 2009 By scottdm Leave a Comment

Most of my career has been spent providing and studying psychotherapy.  Together with my colleagues at the Institute for the Study of Therapeutic Change, I’ve now published 8 books and many, many articles and scholarly papers.  If you are interested you can read more about and even download many of my publications here.

Like most clinicians, I spent the early part of my career focused on how to do therapy.  To me, the process was confusing and the prospect of sitting opposite a real, suffering client daunting.  I was determined to understand and be helpful, so I went to graduate school, read books, and attended literally hundreds of seminars.

Unfortunately, as detailed in my article Losing Faith, written with Mark Hubble, the “secret” to effective clinical practice always seemed to elude me.  Oh, I had ideas, and many of the people I worked with claimed our work together helped.  At the same time, doing the work never seemed as simple or effortless as professional books and trainings made it appear.

Each book and paper I’ve authored and co-authored over the last 20 years has been an attempt to mine the “mystery” of how psychotherapy actually works.  Along the way, my colleagues and I have paradoxically uncovered a great deal about what contributes little or nothing to treatment outcome! Topping the list, of course, are treatment models.  In spite of the current emphasis on “evidence-based” practice, there is no evidence that using particular treatment models for specific diagnostic groups improves outcome.  It’s also hugely expensive!  Other factors that occupy a great deal of professional attention but ultimately make little or no difference include: client age, gender, DSM diagnosis, prior treatment history; additionally, therapist age, gender, years of experience, professional discipline, degree, training, amount of supervision, personal therapy, licensure, or certification.

In short, we spend a great deal of time, effort, and money on matters that matter very little.

For the last 10 years, my work has focused on factors common to all therapeutic approaches. The logic guiding these efforts was simple and straightforward. The proven effectiveness of psychotherapy, combined with the failure to find differences between competing approaches, meant that elements shared by all approaches accounted for the success of therapy. And make no mistake, treatment works. The average person in treatment is better off than 80% of those with similar problems that do not get professional help.

In the Heart and Soul of Change, my colleagues and I, joined by some of the field’s leading researchers, summarized what was known about the effective ingredients shared by all therapeutic approaches. The factors included the therapeutic alliance, placebo/hope/expectancy, structure and techniques in combination with a huge, hairy amount of unexplained “stuff” known as “extratherapeutic factors.”

Our argument, at the time, was that effectiveness could be enhanced by practitioners purposefully working to enhance the contribution of these pantheoretical ingredients.  At a minimum, we believed that working in this manner would help move professional practice beyond the schoolism that had long dominated the field.

Ultimately though, we were coming dangerously close to simply proposing a new model of therapy–this one based on the common factors.  In any event, practitioners following the work treated our suggestions as such.  Instead of, say, “confronting dysfunctional thinking,” they understood us to be advocating for a “client-directed” or strength-based approach.  Discussion of particular “strategies” and “skills” for accomplishing these objectives did not lag far behind.  Additionally, while the common factors enjoyed overwhelming empirical support (especially as compared to so-called specific factors), their adoption as a guiding framework was de facto illogical.  Think about it.  If the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and yet all models work equally well, why would anyone need to learn about the common factors?

Since the publication of the first edition of the Heart and Soul of Change in 1999, I’ve struggled to move beyond this point.  I’m excited to report that in the last year our understanding of effective clinical practice has taken a dramatic leap forward.  All hype aside, we discovered the reason why our previous efforts had long failed: our research had been too narrow.  Simply put, we’d been focusing on therapy rather than on expertise and expert performance.  The path to excellence, we have learned, will never be found by limiting explorations to the world of psychotherapy, with its attendant theories, tools, and techniques.  Instead, attention needs to be directed to superior performance, regardless of calling or career.

A significant body of research shows that the strategies used by top performers to achieve superior success are the same across a wide array of fields, including chess, medicine, sales, sports, computer programming, teaching, music, and therapy!  Not long ago, we published our initial findings from a study of thousands of top performing clinicians in an article titled “Supershrinks.”  I must say, however, that we have just “scratched the surface.”  Using outcome measures to identify and track top performing clinicians over time is enabling us, for the first time in the history of the profession, to “reverse engineer” expertise.  Instead of assuming that popular trainers (and the methods they promote) are effective, we are studying clinicians who have a proven track record.  The results are provocative and revolutionary, and will be reported first here on the Top Performance Blog!  So, stay tuned.  Indeed, why not subscribe?  That way, you’ll be among the first to know.

Filed Under: Behavioral Health, excellence, Top Performance Tagged With: behavioral health, cdoi, DSM, feedback informed treatment, mental health, ors, outcome measurement, psychotherapy, routine outcome measurement, srs, supervision, therapeutic alliance, therapy
