SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


What’s in an Acronym? CDOI, FIT, PCOMS, ORS, SRS … all BS?

June 7, 2014 By scottdm Leave a Comment

“What’s in a name?”

–William Shakespeare

A little over a week ago, I received an email from Anna Graham Anderson, a graduate student in psychology at Aarhus University in Denmark.  “I’m writing,” she said, “in hopes of receiving some clarifications.”

Anna Graham Anderson

Without reading any further, I knew exactly where Anna was going.  I’d fielded the same question before.  As interest in measurement and feedback has expanded, it comes up more and more frequently.

Anna continued,  “I cannot find any literature on the difference between CDOI, FIT, PCOMS, ORS, and SRS.  No matter where I search, I cannot find any satisfying clues.  Is it safe to say they are the same?”  Or, as another asked more pointedly, “Are all these acronyms just a bunch of branding B.S.?”

I answered, “B.S.?  No.  Confusing?  Absolutely.  So, what is the difference?”

As spelled out in each of the six treatment and training manuals, FIT, or feedback-informed treatment, is “a pantheoretical approach for evaluating and improving the quality and effectiveness of behavioral health services.  It involves routinely and formally soliciting feedback from consumers regarding the therapeutic relationship and outcome of care and using the resulting information to inform and tailor service delivery.”

Importantly, FIT is agnostic regarding both the method of treatment and the particular measures a practitioner may employ.  Some practitioners use the ORS and SRS, two brief, simple-to-use, and free measures of progress and the therapeutic relationship–but any other valid and reliable scales could be used.
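The mechanics of the two measures are simple enough to sketch in a few lines. Below is a minimal Python sketch, assuming the standard paper administration (four 10 cm visual-analogue lines summed to a 0–40 total); the cutoff values are commonly cited figures used here as assumptions, not official scoring code:

```python
# A minimal sketch of ORS/SRS scoring, assuming the standard paper
# administration: four 10 cm visual-analogue lines, each measured in cm
# (0-10) and summed to a 0-40 total. The cutoffs below are commonly
# cited figures, used here as assumptions -- check the manuals/norms.

def score_measure(marks_cm):
    """Sum four visual-analogue marks (each 0-10 cm) into a 0-40 total."""
    if len(marks_cm) != 4:
        raise ValueError("expected four subscale marks")
    if any(not 0 <= m <= 10 for m in marks_cm):
        raise ValueError("each mark must fall between 0 and 10 cm")
    return sum(marks_cm)

ORS_CLINICAL_CUTOFF = 25   # assumed adult clinical cutoff on the ORS
SRS_ALLIANCE_CUTOFF = 36   # assumed "discuss the alliance" threshold on the SRS

ors_total = score_measure([5.5, 6.0, 7.0, 6.5])
srs_total = score_measure([9.0, 8.5, 9.0, 8.0])

print(ors_total, ors_total < ORS_CLINICAL_CUTOFF)   # 25.0 False
print(srs_total, srs_total < SRS_ALLIANCE_CUTOFF)   # 34.5 True
```

Because FIT is measure-agnostic, any other valid and reliable scales could be scored and flagged the same way.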

Of all the acronyms associated with my work, CDOI is the one I no longer use.  For me, it had always been problematic, as it came precariously close to being a treatment model, a way of doing therapy.  I wasn’t interested in creating a new therapeutic approach.  My work and writing on the common factors had long ago convinced me the field needed no more therapeutic schools.  The phrase “client-directed, outcome-informed” described the team’s position at the time, with one foot in the past (how to do therapy) and the other in the future (feedback).

And PCOMS?  A long time ago, my colleagues and I had a dream of launching a web-based “system for both monitoring and improving the effectiveness of treatment” (Miller et al., 2005).  We did some testing at an employee assistance program located in Texas, formed a corporation called PCOMS (Partners for Change Outcome Management System), and even hired a developer to build the site.  In the end, nothing happened.  Over time, the acronym PCOMS began to be used as an umbrella term referring to the ORS, SRS, and the norms for interpreting the scores.  In February 2013, the Substance Abuse and Mental Health Services Administration (SAMHSA) formally recognized PCOMS as an evidence-based practice.  You can read more about PCOMS at www.whatispcoms.com.

I expect there will be new names and acronyms as the work evolves.  While some remain, others, like fossils, are left behind; evidence of what has come before, their sum total a record of development over time.

Filed Under: Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based medicine, evidence based practice, feedback informed treatment, FIT, ors, outcome measurement, outcome rating scale, PCOMS, SAMHSA, session rating scale, srs, Substance Abuse and Mental Health Service Adminstration

How not to be among the 70-95% of practitioners and agencies that fail

April 20, 2014 By scottdm Leave a Comment


Our field is full of good ideas, strategies that work.  Each year, practitioners and agencies devote considerable time and resources to staying current with new developments.  What does the research say about such efforts?  When it comes to the implementation of new, evidence-based practices, traditional training strategies routinely produce only 5% to 30% success rates.  Said another way, 70-95% of training fails (Fixsen, Blase, Van Dyke, & Metz, 2013).  

In 2013, Feedback Informed Treatment (FIT)–that is, formally using measures of progress and the therapeutic alliance to guide care–was deemed an evidence-based practice by SAMHSA, and listed on the official NREPP website.  It’s one of those good ideas.  Research to date shows that FIT as much as doubles the effectiveness of behavioral health services, while decreasing costs, deterioration and dropout rates. 

As effective as FIT has proven to be in scientific studies, the bigger challenge is helping clinicians and agencies implement the approach in real-world clinical settings.  Simply put, it’s not enough to know “what works.”  You have to be able to put “what works” to work.  On this subject, researchers have identified five evidence-based steps associated with the successful implementation of any evidence-based practice.  The evidence is summarized in a free manual available online.  You can avoid the 70-95% failure rate by reading it before attending another training, buying that new software, or hiring the latest consultant.

At the International Center for Clinical Excellence, we’ve integrated the research on implementation into all training events, including a special two-day intensive workshop on implementing Feedback-Informed Treatment (FIT).  Based on the five scientifically established steps, clinicians, supervisors, and agency directors will learn how to both plan and execute a successful implementation of this potent evidence-based practice.

You can register today by clicking on the link above or the “FIT for Management” icon below.  Feel free to e-mail me with any questions.  In the meantime, hope to see you this summer in Chicago!


Filed Under: Conferences and Training Tagged With: behavioral health, dropout rates, evidence based medicine, evidence based practice, feedback informed treatment, FIT, icce, implementation, international center for cliniclal excellence, NREPP, SAMHSA, Training

Psychotherapy Training: Is it Worth the Bother?

October 29, 2012 By scottdm 2 Comments

Big bucks.  That’s what training in psychotherapy costs.  Take graduate school in psychology as an example.  According to the US Department of Education’s National Center for Education Statistics (NCES), a typical doctoral program takes five years to complete and costs between US$240,000 and US$300,000.

Who has that kind of money lying around after completing four years of college?  The solution?  Why, borrow the money, of course!  And students do.  In 2009, the average debt among doctoral students in psychology who borrowed was a whopping US$88,000--an amount nearly double that of the prior decade.  Well, the training must be pretty darn good to warrant such expenditures--especially when one considers that entry-level salaries are on the decline and not terribly high to start with!

Oh well, so much for high hopes.

Here are the facts, as recounted in a recent, concisely written summary of the evidence by John Malouff:

1. Studies comparing treatments delivered by professionals and paraprofessionals either show that paraprofessionals have better outcomes or that there is no difference between the two groups;

2. There is virtually no evidence that supervision of students by professionals leads to better client outcomes (you should have guessed this after reading the first point);

3. There is no evidence that required coursework in graduate programs leads to better client outcomes.

If you are hoping that post doctoral experience will make up for the shortcomings of professional training, well, keep hoping.  In truth, professional experience does not correlate often or significantly with client therapy outcomes.

What can you do?  As Malouff points out, “For accrediting agencies to operate in the realm of principles of evidence-based practice, they must produce evidence…and this evidence needs to show that…training…contribute(s) to psychotherapy outcomes…[and] has positive benefits for future clients of the students” (p. 31).

In my workshops, I often advise therapists to forgo additional training until they determine just how effective they are right now.  Doing otherwise risks perceiving progress where, in fact, none exists.  What golfer would buy new clubs or pursue expensive lessons without first knowing their current handicap?  How will you know whether a training is “worth the bother” if you can’t accurately measure its impact on your performance?

Determining one’s baseline rate of effectiveness is not as hard as it might seem.  Simply download the Outcome Rating Scale and begin using it with your clients.  It’s free.  You can then aggregate and analyze the data yourself or use one of the existing web-based systems (www.fit-outcomes.com or www.myoutcomes.com) to get data regarding your effectiveness in real time.
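For those who want to do the arithmetic themselves, here is a sketch of the kind of aggregation involved, assuming one (first, last) ORS total per closed case; the 5-point reliable-change threshold is a commonly cited figure, used here as an assumption to verify against current norms:

```python
# A sketch of estimating a baseline effectiveness rate from ORS data.
# The 5-point reliable-change threshold is a commonly cited figure,
# used here as an assumption -- verify it against current norms (or let
# a web-based system do this bookkeeping for you).

RELIABLE_CHANGE = 5   # assumed minimum ORS gain counted as real change

def effectiveness_rate(cases):
    """cases: (first_ors, last_ors) pairs for closed cases.
    Returns the fraction showing reliable improvement."""
    improved = sum(1 for first, last in cases if last - first >= RELIABLE_CHANGE)
    return improved / len(cases)

closed_cases = [(18, 29), (22, 24), (15, 21), (27, 33), (20, 19)]
print(f"{effectiveness_rate(closed_cases):.0%} of cases showed reliable improvement")
```

The web-based systems mentioned above compute exactly this sort of figure, case by case and in aggregate, without the spreadsheet work.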

After that, join your colleagues at the upcoming Advanced Intensive Training in Feedback Informed Treatment.   This is an “evidence-based” training event.  You learn:

• How to use outcome management tools (e.g., the ORS) to inform and improve the treatment services you provide;

• Specific skills for determining your overall clinical success rate;

• How to develop an individualized, evidence-based professional development plan for improving your outcome and retention rate.

There’s a special “early bird” rate available for a few more weeks.  Last year, the event filled up several months ahead of time, so don’t wait.

On another note, I just received the schedule for the 2013 Evolution of Psychotherapy conference.  I’m very excited to have been invited once again to this prestigious event and will be bringing the latest information and research on achieving excellence as a behavioral health practitioner.  On that note, the German artist and psychologist Andreas Steiner has created a really cool poster and card game for the event, featuring all of the various presenters.  Here’s the poster.  Next to it is the “Three of Hearts.”  I’m pictured there with two of my colleagues, mentors, and friends, Michael Yapko and Stephen Gilligan.

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, Top Performance Tagged With: Andreas Steiner, evidence based medicine, evidence based practice, Evolution of Psychotherapy conference, john malouff, Michael Yapko, ors, outcome management, outcome measurement, outcome rating scale, paraprofessionals, psychology, psychotherapy, session rating scale, srs, Stephen Gilligan, therapy, Training, US Department of Education's National Center (NCES)

Feedback Informed Treatment as Evidence-Based Practice

May 23, 2012 By scottdm Leave a Comment

Back in November, I blogged about the ICCE application to SAMHSA’s National Registry for consideration of FIT as an official evidence-based practice (EBP).  Given the definition of EBP offered by the Institute of Medicine (IOM) and the American Psychological Association (APA), Feedback Informed Treatment seems a perfect, well, FIT.  According to the IOM and APA, evidence-based practice means using the best evidence and tailoring services to the client, their preferences, culture, and circumstances.  Additionally, to be evidence-based, clinicians must monitor “patient progress (and of changes in the patient’s circumstances—e.g., job loss, major illness) that may suggest the need to adjust the treatment. If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate.”

In late summer 2011, ICCE submitted thousands of pages of supporting documents and research studies, as well as video, in support of the application.  This week, we heard that FIT passed the “Quality of Research” phase of the review.  Now the committee is looking at the “Readiness for Dissemination” materials, including the six detailed treatment and implementation manuals on Feedback Informed Treatment.  Keep your fingers crossed.  We’ve been told the entire process should be completed sometime in late fall.

In the meantime, we are preparing for this summer’s Advanced Intensive and Training of Trainer workshops.  Once again, clinicians, educators, and researchers from around the world will be coming together for cutting edge training.  Only a few spots remain, so register now.

Filed Under: Feedback Informed Treatment - FIT Tagged With: American Psychological Association, evidence based medicine, evidence based practice, feedback informed treatment, FIT, icce, Institute of Medicine, NREPP, practice-based evidence, SAMHSA, Training

Improving Outcomes in the Treatment of Obesity via Practice-Based Evidence: Weight Loss, Nutrition, and Work Productivity

April 9, 2010 By scottdm 4 Comments

Obesity is a large and growing problem in the United States and elsewhere.  Data gathered by the National Center for Health Statistics indicate that 33% of Americans are obese.  When overweight people are added to the mix, the figure climbs to a staggering 66%!  The problem is not likely to go away soon or on its own, as the same figures apply to children.

Researchers estimate that weight problems are responsible for over 300,000 deaths annually and account for 12% of healthcare costs, or $100 billion--that’s right, $100,000,000,000--in the United States alone.  The overweight and obese have higher incidences of arthritis, breast cancer, heart disease, colorectal cancer, diabetes, endometrial cancer, gallbladder disease, hypertension, liver disease, back pain, sleeping problems, and stroke--not to mention the tremendous emotional, relational, and social costs.  The data are clear: the overweight are the target of discrimination in education, healthcare, and employment.  A study by Brownell and Puhl (2003), for example, found that: (1) a significant percentage of healthcare professionals admit to feeling “repulsed” by obese persons, even among those who specialize in bariatric treatment; (2) parents provide less college support to their overweight children than to their “thin” ones; and (3) 87% of obese individuals reported that weight prevented them from being hired for a job.

Sadly, available evidence indicates that while weight problems are “among the easiest conditions to recognize,” they remain one of the “most difficult to treat.”  Weight loss programs abound.  When was the last time you watched television and didn’t see an ad for a diet pill, program, or exercise machine?  Many work.  Few, however, lead to lasting change.

What might help?

More than a decade ago, I met Dr. Paul Faulkner, the founder and then Chief Executive Officer of Resources for Living (RFL), an innovative employee assistance program located in Austin, Texas.  I was teaching a week-long course on outcome-informed work at the Cape Cod Institute in Eastham, Massachusetts.  Paul had long searched for a way of improving outcomes and service delivery that could simultaneously be used to provide evidence of the value of treatment to purchasers--in the case of RFL, the large, multinational companies that were paying him to manage their employee assistance programs.  Thus began a long relationship between me and the management and clinical staff of RFL.  I was in Austin, Texas dozens of times providing training and consultation, as well as setting up the original ORS/SRS feedback system known as ALERT, which is still in use at the organization today.  All of the original reliability, validity, norming, and response-trajectory work was done together with the crew at RFL.

Along the way, RFL expanded services to disease management, including depression, chronic obstructive pulmonary disease, diabetes, and obesity.  The “weight management” program delivered coaching and nutritional consultation via telephone, informed by ongoing measurement of outcomes and the therapeutic alliance using the ORS and SRS.  The results are impressive.  A study by Ryan Sorrell, a clinician and researcher at RFL, found that the program and feedback led not only to weight loss, but also to significant improvements in distress, healthy eating behaviors (70%), exercise (65%), and presenteeism on the job (64%)--the latter being critical to the employers paying for the service.

Such research adds to the growing body of literature documenting the importance of “practice-based” evidence, making clear that finding the “right” or “evidence-based” approach for obesity (or any problem for that matter) is less important than finding out “what works” for each person in need of help.  With challenging, “life-style” problems, this means using ongoing feedback to inform whatever services may be deemed appropriate or necessary.  Doing so not only leads to better outcomes, but also provides real-time, real-world evidence of return on investment for those footing the bill.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, cognitive-behavioral therapy, conferences, continuing education, diabetes, disease management, Dr. Paul Faulkner, evidence based medicine, evidence based practice, Hypertension, medicine, obesity, ors, outcome rating scale, practice-based evidence, public behavioral health, randomized clinical trial, session rating scale, srs, Training

Research on the Outcome Rating Scale, Session Rating Scale & Feedback

January 7, 2010 By scottdm Leave a Comment

“How valid and reliable are the ORS and SRS?”  “What do the data say about the impact of routine measurement and feedback on outcome and retention in behavioral health?”  “Are the ORS and SRS ‘evidence-based?’”

These and other questions regarding the evidence supporting the ORS, SRS, and feedback are becoming increasingly common in the workshops I’m teaching in the U.S. and abroad.

As indicated in my December 24th blogpost, routine outcome monitoring (PROMS) has even been endorsed by “specific treatments for specific disorders” proponent David Barlow, Ph.D., who stated unequivocally that “all therapists would soon be required to measure and monitor the outcome of their clinical work.”  Clearly, the time has come for all behavioral health practitioners to be aware of the research regarding measurement and feedback.

Over the holidays, I updated a summary of the data to date that has long been available to trainers and associates of the International Center for Clinical Excellence.  The PDF reviews all of the research on the psychometric properties of the Outcome and Session Rating Scales, as well as the studies using these and other formal measures of progress and the therapeutic relationship to improve outcome and retention in behavioral health services.  The topic is so important that I’ve decided to make the document available to everyone.  Feel free to distribute the file to any and all colleagues interested in staying up to date on this emerging mega-trend in clinical practice.

Measures And Feedback from Scott Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, continuing education, david barlow, evidence based medicine, evidence based practice, feedback, Hypertension, icce, medicine, ors, outcome measurement, outcome rating scale, post traumatic stress, practice-based evidence, proms, randomized clinical trial, session rating scale, srs, Training

The Study of Excellence: A Radically New Approach to Understanding "What Works" in Behavioral Health

December 24, 2009 By scottdm 2 Comments

“What works” in therapy?  Believe it or not, that question--as simple as it is--has sparked, and continues to spark, considerable debate.  For decades, the field has been divided.  On one side are those who argue that the efficacy of psychological treatments is due to specific factors (e.g., changing negative thinking patterns) inherent in the model of treatment (e.g., cognitive behavioral therapy) remedial to the problem being treated (e.g., depression); on the other is a smaller but no less committed group of researchers and writers who posit that the general efficacy of behavioral treatments is due to a group of factors common to all approaches (e.g., relationship, hope, expectancy, client factors).

While the overall effectiveness of psychological treatment is now well established--studies show that people who receive care are better off than 80% of those who do not, regardless of the approach or the problem treated--one fact cannot be avoided: outcomes have not improved appreciably over the last 30 years!  Said another way, the common-versus-specific-factor battle, while generating a great deal of heat, has not shed much light on how to improve the outcome of behavioral health services.  Despite the incessant talk about and promotion of “evidence-based” practice, there is no evidence that adopting “specific methods for specific disorders” improves outcome.  At the same time, as I’ve pointed out in prior blogposts, the common factors, while accounting for why psychological therapies work, do not and cannot tell us how to work.  After all, if the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and all models work equally well, why learn about the common factors?  More to the point, there simply is no evidence that adopting a “common factors” approach leads to better performance.
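For the curious, the “better off than 80%” figure is a straightforward consequence of the average effect size reported in meta-analyses of psychotherapy (roughly d = 0.8): under a normality assumption, the average treated person lands at the Φ(d) percentile of the untreated distribution. A quick check:

```python
# Converting a standardized effect size (Cohen's d) into the "better off
# than X%" statistic, assuming normally distributed outcomes: the average
# treated person exceeds the fraction Phi(d) of the untreated population.
from statistics import NormalDist

def percentile_better_than(d):
    """Fraction of the untreated population the average treated person exceeds."""
    return NormalDist().cdf(d)

print(round(percentile_better_than(0.8) * 100))  # 79
```

With d = 0.8, Φ(d) ≈ 0.79, which is the source of the roughly-80% claim.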

The problem with the specific and common factor positions is that both--and hang onto your seat here--have the same objective at heart; namely, contextlessness.  Each hopes to identify a set of principles and/or practices that are applicable across people, places, and situations.  Thus, specific-factor proponents argue that particular “evidence-based” (EBP) approaches are applicable for a given problem regardless of the people or places involved (it’s amazing, really, when you consider that various approaches are being marketed to different countries and cultures as “evidence-based” when there is no evidence that these methods work beyond their very limited and unrepresentative samples).  On the other hand, the common factors camp, in place of techniques, proffers an invariant set of, well, generic factors.  Little wonder that outcomes have stagnated.  It’s a bit like trying to learn a language either by memorizing a phrase book--in the case of EBP--or by studying the parts of speech--in the case of the common factors.

What to do?  For me, clues for resolving the impasse began to appear when, in 1994, I followed the advice of my friend and long-time mentor, Lynn Johnson, and began formally and routinely monitoring the outcome and alliance of the clinical work I was doing.  Crucially, feedback provided a way to contextualize therapeutic services--to fit the work to the people and places involved--that neither a specific- nor a common-factors-informed approach could.

Numerous studies (21 RCTs, including 4 studies using the ORS and SRS) now document the impact of using outcome and alliance feedback to inform service delivery.  One study, for example, showed a 65% improvement over baseline performance rates with the addition of routine alliance and outcome feedback.  Another, more recent study of couples therapy found that divorce/separation rates were half (50%) those of the no-feedback condition!

Such results have, not surprisingly, led the practice of “routine outcome monitoring” (PROMS) to be deemed “evidence-based.”  At the recent Evolution of Psychotherapy conference, I was on a panel with David Barlow, Ph.D.--a long-time proponent of “specific treatments for specific disorders” (EBP)--who, in response to my brief remarks about the benefits of feedback, stated unequivocally that all therapists would soon be required to measure and monitor the outcome of their clinical work.  Given that my work has focused almost exclusively on seeking and using feedback for the last 15 years, you would think I’d be happy.  And while gratifying on some level, I must admit to being both surprised and frightened by his pronouncement.

My fear?  Focusing on measurement and feedback misses the point.  Simply put: it’s not seeking feedback that is important.  Rather, it’s what feedback potentially engenders in the user that is critical.  Consider the following: while the results of trials to date clearly document the benefit of PROMS to those seeking therapy, there is currently no evidence that the practice has a lasting impact on those providing the service.  “The question is,” as researcher Michael Lambert notes, “have therapists learned anything from having gotten feedback? Or, do the gains disappear when feedback disappears? … We found that there is little improvement from year to year…” (quoted in Miller et al. [2004]).

Research on expertise in a wide range of domains (including chess, medicine, physics, computer programming, and psychotherapy) indicates that, in order to have a lasting effect, feedback must increase a performer’s “domain-specific knowledge.”  Feedback must result in the performer knowing more about his or her area, and how and when to apply that knowledge to specific situations, than others.  Master-level chess players, for example, have been shown to possess 10 to 100 times more chess knowledge than “club-level” players.  Not surprisingly, master players’ vast information about the game is consolidated and organized differently than that of their less successful peers; namely, in a way that allows them to access, sort, and apply potential moves to the specific situation on the board.  In other words, their immense knowledge is context specific.

A mere handful of studies document similar findings among superior-performing therapists: not only do they know more, they know how, when, and with whom to apply that knowledge.  I noted these, and highlighted a few others in the research pipeline, during my workshop on “Achieving Clinical Excellence” at the Evolution of Psychotherapy conference.  I also reviewed what 30 years of research on expertise and expert performance has taught us about how feedback must be used in order to ensure that learning actually takes place.  Many of those in attendance stopped by the ICCE booth following the presentation to talk with our CEO, Brendan Madden, or one of our Associates and Trainers (see the video below).

Such research, I believe, holds the key to moving beyond the common versus specific factor stalemate that has long held the field in check–providing therapists with the means for developing, organizing, and contextualizing clinical knowledge in a manner that leads to real and lasting improvements in performance.

Filed Under: Behavioral Health, excellence, Feedback, Top Performance Tagged With: brendan madden, cdoi, cognitive behavioral therapy, common factors, continuing education, david barlow, evidence based medicine, evidence based practice, Evolution of Psychotherapy, feedback, icce, micheal lambert, ors, outcome rating scale, proms, session rating scale, srs, therapist, therapists, therapy

Outcomes in Oz II

November 25, 2009 By scottdm 4 Comments

Sitting in my hotel room in Brisbane, Australia.  It’s beautiful here: white, sandy beaches and temperatures hovering around 80 degrees.  Can’t say that I’ll be enjoying the sunny weather much.  Tomorrow I’ll be speaking to a group of 135+ practitioners about “Supershrinks.”  I leave for home on Saturday.  While it’s cold and overcast in Chicago, I’m really looking forward to seeing my family after nearly two weeks on the road.

I spent the morning talking to practitioners in New Zealand via satellite for a conference sponsored by Te Pou.  It was a completely new and exciting experience for me, seated in an empty television studio and talking to a camera.  Anyway, organizers of the conference are determined to avoid mistakes made in the U.S., Europe, and elsewhere with the adoption of “evidence-based practice.”  As a result, they organized the event around the therapeutic alliance–the most neglected, yet evidence-based concept in the treatment literature!  More later, including a link to the hour-long presentation.

On Friday and Saturday of this last week, I was in the classic Victorian city of Melbourne, Australia, doing two days’ worth of training at the request of WorkSafe and the Transport Accident Commission.  The mission of WorkSafe is “Working with the community to deliver outstanding workplace safety, together with quality care and insurance protection to workers and employers.”  More than 100 clinicians dedicated to helping Australians recover from work- and traffic-related injuries were present for the first day of training, which focused on using formal client feedback to improve the retention and outcome of psychological services.  On day 2, a smaller group met for an intensive day of training and consultation.  Thanks go to the sponsors and attendees for an exciting two days.  Learn more about how outcomes are being used to inform service delivery by watching the video below with Daniel Claire and Claire Amies from the Health Services Group.

 

Filed Under: Behavioral Health, Top Performance Tagged With: australia, evidence based medicine, evidence based practice, New Zealand, supershrinks

Common versus Specific Factors and the Future of Psychotherapy: A Response to Siev and Chambless

October 31, 2009 By scottdm 4 Comments

Early last summer, I received an email from my long time friend and colleague Don Meichenbaum alerting me to an article published in the April 2009 edition of the Behavior Therapist–the official “newsletter” of the Association for Behavioral and Cognitive Therapies–critical of the work that I and others have done on the common factors.

Briefly, the article, written by two proponents of the “specific treatments for specific disorders” approach to “evidence-based practice” in psychology, argued that the common factors position–the idea that the efficacy of psychotherapy is largely due to shared rather than unique or model-specific factors–was growing in popularity despite being based on “fallacious reasoning” and a misinterpretation of the research.

Although the article claimed to provide an update on research bearing directly on the validity of the “dodo verdict”--the idea that all treatment approaches work equally well--it simply repeated old criticisms and ignored contradictory and, at times, vast evidence.  Said another way, rather than seizing the opportunity they were given to educate clinicians and address the complex issues involved in questions surrounding evidence-based practice, Siev and Chambless instead wrote to “shore up the faithful.”  “Do not doubt,” the authors were counseling their adherents; “science is on our side.”

That differences and tensions exist in the interpretation of the evidence is clear and important.  At the same time, more should be expected from those who lead the field.  Read the articles and decide for yourself.  The issues at stake are critical to the future of psychotherapy.  As I will blog about next week, there are forces at work in the United States and abroad currently working to limit the types of approaches clinicians can employ when working with clients.  While well-intentioned, available evidence indicates they are horribly misguided.  Once again, the question clinicians and consumers face is not “which treatment is best for that problem,” but rather “which approach ‘fits with, engages, and helps’ the particular consumer at this moment in time?”

Behavior Therapist (April 2009) from Scott Miller

Dissemination of EST’s (November 2009) from Scott Miller

Filed Under: Dodo Verdict, evidence-based practice, Practice Based Evidence Tagged With: Association for Behavioral and Cognitive Therapies, behavior therapist, Don Meichenbaum, evidence based medicine, evidence based practice, psychology, psychotherapy

Whoa Nellie! A 25 Million Dollar Study of Treatments for PTSD

October 27, 2009 By scottdm 1 Comment

I have in my hand a frayed and yellowed copy of observations once made by a well-known trainer of horses. The trainer’s simple message for leading a productive and successful professional life was, “If the horse you’re riding dies, get off.”

You would think the advice straightforward enough for everyone to understand and benefit from.  And yet, the trainer pointed out, “many professionals don’t always follow it.”  Instead, they choose from an array of alternatives, including:

  1. Buying a strong whip
  2. Switching riders
  3. Moving the dead horse to a new location
  4. Riding the dead horse for longer periods of time
  5. Saying things like, “This is the way we’ve always ridden the horse.”
  6. Appointing a committee to study the horse
  7. Arranging to visit other sites where they ride dead horses more efficiently
  8. Increasing the standards for riding dead horses
  9. Creating a test for measuring our riding ability
  10. Complaining about the state of the horse these days
  11. Coming up with new styles of riding
  12. Blaming the horse’s parents, as the problem is often in the breeding
When it comes to the treatment of post-traumatic stress disorder, it appears the Department of Defense is applying all of the above.  Recently, the DoD issued its largest grant ever to “discover the best treatments for combat-related post-traumatic stress disorder” (APA Monitor).  Beneficiaries of the award were naturally ecstatic, stating, “The DoD has never put this amount of money to this before.”
Missing from the announcements was any mention of research clearly showing no difference in outcome between approaches intended to be therapeutic—including the two approaches chosen for comparison in the DoD study!  In June 2008, researchers Benish, Imel, and Wampold conducted a meta-analysis of all studies in which two or more treatment approaches were directly compared.  The authors conclude, “Given the lack of differential efficacy between treatments, it seems scientifically questionable to recommend one particular treatment over others that appear to be of comparable effectiveness . . . keeping patients in treatment would appear to be more important in achieving desired outcomes than would prescribing a particular type of psychotherapy” (p. 755).
Ah yes, the horse is dead, but proponents of “specific treatments for specific disorders” ride on.  You can almost hear their rallying cry: “We will find a more efficient and effective way to ride this dead horse!”  My advice?  Simple: let’s get off this dead horse.  There are any number of effective treatments for PTSD.  The challenge is decidedly not figuring out which one is best for all, but rather “what works” for the individual.  In these recessionary times, I can think of far better ways to spend 25 million dollars than on another “horse race” between competing therapeutic approaches.  Evidence-based methods exist for assessing and adjusting both the “fit and effect” of clinical services—the methods described, for instance, in the scholarly publications section of my website.  Such methods have been found to improve both outcome and retention by as much as 65%.  What will happen?  Though I’m hopeful, I must say that the temptation to stay on the horse you chose at the outset of the race is a strong one.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence, PTSD Tagged With: behavioral health, continuing education, evidence based medicine, evidence based practice, icce, meta-analysis, ptsd, reimbursement

History doesn’t repeat itself, but it does rhyme

September 20, 2009 By scottdm 2 Comments


“History doesn’t repeat itself,” the celebrated American author Mark Twain once observed, “but it does rhyme.”  There is no better example of Twain’s wry observation than recurring claims about specific therapeutic approaches.  As any clinician knows, every year witnesses the introduction of new treatment models.  Invariably, the developers and proponents claim superior effectiveness for the approach over existing treatments.  In the last decade or so, such claims, together with the publication of randomized clinical trials, have enabled some to assume the designation of an “evidence-based practice” or “empirically supported treatment.”  Training, continuing education, funding, and policy changes follow.

Without exception, within a few short years, other research appears showing the once widely heralded “advance” to be no more effective than what existed at the time.  Few notice, however, as professional attention is once again captured by a “newer” and “more improved” treatment model.  Studies conducted by my colleagues and me (downloadable from the “scholarly publications” area of my website) document this pattern with treatments for kids, alcohol abuse and dependence, and PTSD over the last 30-plus years.

As folks who’ve attended my recent workshops know, I’ve been using DBT as an example of approaches that have garnered significant professional attention (and funding) despite a relatively small number of studies (and participants) and no evidence of differential effectiveness.  In any event, the American Journal of Psychiatry will soon publish, “A Randomized Trial of Dialectical Behavior Therapy versus General Psychiatric Management for Borderline Personality Disorder.”

As described by the authors, this study is “the largest clinical trial comparing dialectical behavior therapy and an active high-standard, coherent, and principled approach derived from APA guidelines and delivered by clinicians with expertise in treating borderline personality disorder.”

And what did these researchers find?

“Dialectical behavior therapy was not superior to general psychiatric management with both intent-to-treat and per-protocol analyses; the two were equally effective across a range of outcomes.”  Interested readers can request a copy of the paper from the lead investigator, Shelley McMain at: Shelley_McMain@camh.net.

Below, readers can also find a set of slides summarizing and critiquing the current research on DBT. In reviewing the slides, ask yourself, “How could an approach based on such a limited and narrow sample of clients, and no evidence of differential effectiveness, have achieved worldwide prominence?”

Of course, the results summarized here do not mean that there is nothing of value in the ideas and skills associated with DBT.  Rather, they suggest that the field, including clinicians, researchers, and policy makers, needs to adopt a different approach when attempting to improve the process and outcome of behavioral health practices.  Rather than continuously searching for the “specific treatment” for a “specific diagnosis,” research showing the general equivalence of competing therapeutic approaches indicates that emphasis needs to be placed on: (1) studying the factors shared by all approaches that account for success; and (2) developing methods for helping clinicians identify what works for individual clients. This is, in fact, the mission of the International Center for Clinical Excellence: identifying the empirical evidence most likely to lead to superior outcomes in behavioral health.

Dbt Handouts 2009 from Scott Miller

Filed Under: Behavioral Health, Dodo Verdict, Practice Based Evidence Tagged With: alcohol abuse, American Psychological Association, American Journal of Psychiatry, APA, behavioral health, CEU, continuing education, CPD, evidence based medicine, evidence based practice, mental health, psychiatry, PTSD, randomized control trial, Training

Practice-Based Evidence Goes Mainstream

September 5, 2009 By scottdm 4 Comments

For years, my colleagues and I have been using the phrase “practice-based evidence” to refer to clinicians’ use of real-time feedback to develop, guide, and evaluate behavioral health services. Against a tidal wave of support from professional and regulatory bodies, we argued that “evidence-based practice”–the notion that certain treatments work best for certain diagnoses–was not, in fact, supported by the evidence.

Along the way, my colleagues and I published several meta-analytic studies showing that all therapies worked about equally well (click here to access recent studies on children, alcohol abuse and dependence, and post-traumatic stress disorder). The challenge, it seemed to me, was not finding what worked for a particular disorder or diagnosis, but rather what worked for a particular individual–and that required ongoing monitoring and feedback.  In 2006, following years of controversy and wrangling, the American Psychological Association finally revised its official definition of evidence-based practice to be consistent with “practice-based evidence.” You can read the definition in the May-June issue of the American Psychologist, volume 61, pages 271-285.

Now, a recent report on the Medscape Journal of Medicine channel provides further evidence that practice-based evidence is going mainstream. I think you’ll find the commentary interesting, as it provides compelling evidence that an alternative to the dominant paradigm currently guiding professional discourse is taking hold.  Watch it here.

Filed Under: Behavioral Health, evidence-based practice, Practice Based Evidence Tagged With: behavioral health, conference, deliberate practice, evidence based medicine, evidence based practice, mental health, Therapist Effects

The Debate of the Century

August 27, 2009 By scottdm

What causes change in psychotherapy?  Specific treatments applied to specific disorders?  Those in the “evidence-based” camp say so, and they have had a huge influence on behavioral healthcare policy and reimbursement.  Over the last 10 years, my colleagues and I have written extensively and traveled the world offering a different perspective: by and large, the effectiveness of care is due to a group of factors common to all treatment approaches.

In place of “evidence-based” practice, we’ve argued for “practice-based” evidence.  Said another way, what really matters in the debate is whether clients benefit–not the particular treatment approach.  Here on my website, clinicians can download, absolutely free, measures that can be used to monitor and improve outcome and retention (click Performance Metrics).

Anyway, the message is finally getting through.  Recently, uber-statistician and all-around good guy Bruce Wampold, Ph.D., debated prominent EBP proponent Steve Hollon.  Following the exchange, a vote was taken.  Bruce won handily: more than 15 to 1.

Scroll down to “Closing Debate” (Thursday)

Filed Under: Behavioral Health, Practice Based Evidence Tagged With: bruce wampold, cdoi, evidence based medicine, evidence based practice, ors, outcome rating scale, PCOMS, performance metrics, practice-based evidence, psychotherapy, session rating scale, srs, steve hollon
