SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment

  • About
    • About Scott
    • Publications
  • Training and Consultation
  • Workshop Calendar
  • FIT Measures Licensing
  • FIT Software Tools
  • Online Store
  • Top Performance Blog
  • Contact Scott
scottdmiller@talkingcure.com +1.773.454.8511

What’s in an Acronym? CDOI, FIT, PCOMS, ORS, SRS … all BS?

June 7, 2014 By scottdm Leave a Comment

“What’s in a name?”

–William Shakespeare

A little over a week ago, I received an email from Anna Graham Anderson, a graduate student in psychology at Aarhus University in Denmark.  “I’m writing,” she said, “in hopes of receiving some clarifications.”

Anna Graham Anderson

Without reading any further, I knew exactly where Anna was going.  I’d fielded the same question before.  As interest in measurement and feedback has expanded, it comes up more and more frequently.

Anna continued,  “I cannot find any literature on the difference between CDOI, FIT, PCOMS, ORS, and SRS.  No matter where I search, I cannot find any satisfying clues.  Is it safe to say they are the same?”  Or, as another asked more pointedly, “Are all these acronyms just a bunch of branding B.S.?”

I answered, “B.S.?  No.  Confusing?  Absolutely.  So, what is the difference?”

As spelled out in each of the six treatment and training manuals, FIT, or feedback-informed treatment, is "a pantheoretical approach for evaluating and improving the quality and effectiveness of behavioral health services.  It involves routinely and formally soliciting feedback from consumers regarding the therapeutic relationship and outcome of care and using the resulting information to inform and tailor service delivery."

Importantly, FIT is agnostic regarding both the method of treatment and the particular measures a practitioner may employ.  Some practitioners use the ORS and SRS, two brief, simple-to-use, and free measures of progress and the therapeutic relationship–but any other valid and reliable scales could be used.

Of all the acronyms associated with my work, CDOI is the one I no longer use.  For me, it had always been problematic, as it came precariously close to being a treatment model, a way of doing therapy.  I wasn't interested in creating a new therapeutic approach.  My work and writing on the common factors had long ago convinced me the field needed no more therapeutic schools.  The phrase "client-directed, outcome-informed" described the team's position at the time, with one foot in the past (how to do therapy) and the other in the future (feedback).

And PCOMS?  A long time ago, my colleagues and I had a dream of launching a web-based "system for both monitoring and improving the effectiveness of treatment" (Miller et al., 2005).  We did some testing at an employee assistance program located in Texas, formed a corporation called PCOMS (Partners for Change Outcome Management System), and even hired a developer to build the site.  In the end, nothing happened.  Over time, the acronym PCOMS began to be used as an overall term referring to the ORS, SRS, and the norms for interpreting the scores.  In February 2013, the Substance Abuse and Mental Health Services Administration (SAMHSA) formally recognized PCOMS as an evidence-based practice.  You can read more about PCOMS at: www.whatispcoms.com.

I expect there will be new names and acronyms as the work evolves.  While some remain, others, like fossils, are left behind; evidence of what has come before, their sum total a record of development over time.

Filed Under: Feedback Informed Treatment - FIT Tagged With: cdoi, evidence based medicine, evidence based practice, feedback informed treatment, FIT, ors, outcome measurement, outcome rating scale, PCOMS, SAMHSA, session rating scale, srs, Substance Abuse and Mental Health Services Administration

Do you know who said, "Sometimes the magic works, sometimes it doesn’t"?

April 30, 2014 By scottdm Leave a Comment

Dan George

That's Chief Dan George, playing the role of Old Lodge Skins in the 1970 movie "Little Big Man."  Whether or not you've seen or remember the film, if you're a practicing therapist, you know the wisdom contained in that quote.  No matter how skilled the clinician or devoted the client, "sometimes therapy works, sometimes it doesn't."

Evidence from randomized clinical trials indicates that, on average, clinicians achieve a reliable change–that is, a difference not attributable to chance, maturation, or measurement error–with approximately 50% of the people they treat.  For the most effective therapists, it's about 70%.  Said another way, all of us fail between 30 and 50% of the time.
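For readers curious about how "reliable change" is actually determined, the standard approach is the Jacobson–Truax Reliable Change Index: the pre–post difference divided by the standard error of the difference between two measurements.  Here is a minimal sketch; the standard deviation and test–retest reliability values are illustrative placeholders, not the published norms for any particular scale:

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """Jacobson-Truax RCI: the observed change divided by the
    standard error of the difference between two measurements."""
    se_measurement = sd * math.sqrt(1 - reliability)   # standard error of measurement
    se_difference = math.sqrt(2) * se_measurement      # SE of a pre-post difference
    return (post - pre) / se_difference

# Illustrative values only: sd=7.0, test-retest reliability=0.80
rci = reliable_change_index(pre=18, post=28, sd=7.0, reliability=0.80)
print(round(rci, 2))   # an RCI above 1.96 indicates change beyond measurement error
print(rci > 1.96)
```

In this hypothetical case the 10-point gain exceeds the 1.96 threshold, so the improvement would count as reliable rather than as measurement noise.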

Of greater concern, however, is the finding that we don’t see the failure coming.  Hannan and colleagues (2005) found, for example, that therapists correctly predicted deterioration in only 1 of 550 people treated, despite having been told beforehand the likely percentage of their clients that would worsen and knowing they were participating in a study on the subject!

It's one thing when "the magic doesn't work"–nothing is 100%–but it's an entirely different matter when we go on believing that something is working when it's not.  Put bluntly, we are a terminally and forever hopeful group of professionals!

What to do?  Hannan et al. (2005) found that simple measures of progress in therapy correctly identified 90% of clients “at risk” for a negative outcome or dropout.  Other studies have found that routinely soliciting feedback from people in treatment regarding progress and their experience of the therapeutic relationship as much as doubles effectiveness while simultaneously reducing dropout and deterioration rates.

You can get two simple, evidence-based measures for free here.  Get started by connecting with and learning from colleagues on the world's largest online network of clinicians: the International Center for Clinical Excellence.  It's free, and signing up takes only a minute or two.


Finally, take advantage of a special offer on the six Feedback Informed Treatment and Training Manuals, containing step-by-step instructions for using the scales to guide and improve the services you offer.  These manuals are the reason the ICCE received perfect scores when SAMHSA reviewed and approved our application for evidence-based status.

Here’s to knowing when our “magic” is working, and when it’s not!

Filed Under: Feedback Informed Treatment - FIT Tagged With: icce, international center for clinical excellence, magic, outcome measurement, randomized clinical trial, therapy

Psychotherapy Training: Is it Worth the Bother?

October 29, 2012 By scottdm 2 Comments

Big bucks.  That's what training in psychotherapy costs.  Take graduate school in psychology as an example.  According to the US Department of Education's National Center for Education Statistics (NCES), a typical doctoral program takes five years to complete and costs between US$240,000 and $300,000.

Who has that kind of money lying around after completing four years of college?  The solution?  Why, borrow the money, of course!  And students do.  In 2009, the average debt among doctoral students in psychology who borrowed was a whopping US$88,000–an amount nearly double that of the prior decade.  Well, the training must be pretty darn good to warrant such expenditures–especially when one considers that entry-level salaries are on the decline and not terribly high to start with!

Oh well, so much for high hopes.

Here are the facts, as recounted in a recent, concisely written summary of the evidence by John Malouff:

1. Studies comparing treatments delivered by professionals and paraprofessionals either show that paraprofessionals have better outcomes or that there is no difference between the two groups;

2. There is virtually no evidence that supervision of students by professionals leads to better client outcomes (you should have guessed this after reading the first point);

3. There is no evidence that required coursework in graduate programs leads to better client outcomes.

If you are hoping that postdoctoral experience will make up for the shortcomings of professional training, well, keep hoping.  In truth, professional experience does not correlate consistently or significantly with client therapy outcomes.

What can you do?  As Malouff points out, "For accrediting agencies to operate in the realm of principles of evidence-based practice, they must produce evidence…and this evidence needs to show that…training…contribute(s) to psychotherapy outcomes…[and] has positive benefits for future clients of the students" (p. 31).

In my workshops, I often advise therapists to forgo additional training until they determine just how effective they are right now.  Doing otherwise risks perceiving progress where, in fact, none exists.  What golfer would buy new clubs or pursue expensive lessons without first knowing their current handicap?  How will you know if the training you attend is "worth the bother" if you can't accurately measure its impact on your performance?

Determining one’s baseline rate of effectiveness is not as hard as it might seem.  Simply download the Outcome Rating Scale and begin using it with your clients.  It’s free.  You can then aggregate and analyze the data yourself or use one of the existing web-based systems (www.fit-outcomes.com or www.myoutcomes.com) to get data regarding your effectiveness in real time.
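If you do aggregate the data yourself, the core computation is straightforward: the proportion of cases whose last score improved on the first by at least the measure's reliable-change threshold.  A minimal sketch, using a hypothetical 5-point threshold on a 0-40 scale (consult the published ORS norms for the actual cutoffs):

```python
# Each case: (first-session score, last-session score) on a 0-40 scale like the ORS.
# These cases and the 5-point threshold are illustrative, not real data or norms.
cases = [(15, 27), (22, 24), (30, 18), (12, 19), (25, 31)]

RELIABLE_CHANGE = 5  # hypothetical threshold; check the published norms

improved = sum(1 for first, last in cases if last - first >= RELIABLE_CHANGE)
effectiveness_rate = improved / len(cases)
print(f"{effectiveness_rate:.0%} of cases showed reliable improvement")  # 60% here
```

With a spreadsheet or a few lines like these, a clinician can establish a baseline rate before investing in further training; the web-based systems mentioned above automate the same bookkeeping.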

After that, join your colleagues at the upcoming Advanced Intensive Training in Feedback Informed Treatment.  This is an "evidence-based" training event.  You'll learn:

• How to use outcome management tools (e.g., the ORS) to inform and improve the treatment services you provide;

• Specific skills for determining your overall clinical success rate;

• How to develop an individualized, evidence-based professional development plan for improving your outcome and retention rate.

There’s a special “early bird” rate available for a few more weeks.  Last year, the event filled up several months ahead of time, so don’t wait.

On another note, I just received the schedule for the 2013 Evolution of Psychotherapy conference.  I'm very excited to have been invited once again to this prestigious event and will be bringing the latest information and research on achieving excellence as a behavioral health practitioner.  On that note, the German artist and psychologist Andreas Steiner has created a really cool poster and card game for the event, featuring all of the various presenters.  Here's the poster.  Next to it is the "Three of Hearts."  I'm pictured there with two of my colleagues, mentors, and friends, Michael Yapko and Stephen Gilligan:

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, Top Performance Tagged With: Andreas Steiner, evidence based medicine, evidence based practice, Evolution of Psychotherapy conference, john malouff, Michael Yapko, ors, outcome management, outcome measurement, outcome rating scale, paraprofessionals, psychology, psychotherapy, session rating scale, srs, Stephen Gilligan, therapy, Training, US Department of Education's National Center (NCES)

Research on the Outcome Rating Scale, Session Rating Scale & Feedback

January 7, 2010 By scottdm Leave a Comment

"How valid and reliable are the ORS and SRS?"  "What do the data say about the impact of routine measurement and feedback on outcome and retention in behavioral health?"  "Are the ORS and SRS 'evidence-based'?"

These and other questions regarding the evidence supporting the ORS, SRS, and feedback are becoming increasingly common in the workshops I’m teaching in the U.S. and abroad.

As indicated in my December 24th blogpost, routine outcome monitoring (PROMS) has even been endorsed by “specific treatments for specific disorders” proponent David Barlow, Ph.D., who stated unequivocally that “all therapists would soon be required to measure and monitor the outcome of their clinical work.”  Clearly, the time has come for all behavioral health practitioners to be aware of the research regarding measurement and feedback.

Over the holidays, I updated a summary of the data to date that has long been available to trainers and associates of the International Center for Clinical Excellence.  The PDF reviews all of the research on the psychometric properties of the outcome and session rating scales, as well as the studies using these and other formal measures of progress and the therapeutic relationship to improve outcome and retention in behavioral health services.  The topic is so important that I've decided to make the document available to everyone.  Feel free to distribute the file to any and all colleagues interested in staying up to date on this emerging mega-trend in clinical practice.

Measures And Feedback from Scott Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, continuing education, david barlow, evidence based medicine, evidence based practice, feedback, Hypertension, icce, medicine, ors, outcome measurement, outcome rating scale, post traumatic stress, practice-based evidence, proms, randomized clinical trial, session rating scale, srs, Training

Outcomes in Ohio: The Ohio Council of Behavioral Health & Family Service Providers

October 30, 2009 By scottdm Leave a Comment

Ohio is experiencing the same challenges faced by other states when it comes to behavioral health services: staff and financial cutbacks, increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  Into the breach, the Ohio Council of Behavioral Health & Family Service Providers organized their 30th annual conference, focused entirely on helping their members meet the challenges and provide the most effective services possible.

On Tuesday, I presented a plenary address summarizing 40 years of research on "What Works" in clinical practice, as well as strategies for documenting and improving the retention and outcome of behavioral health services.  What can I say?  It was a real pleasure working with the 200+ clinicians, administrators, payers, and business executives in attendance.  Members of OCBHFSP truly live up to their stated mission of "improving the health of Ohio's communities and the well-being of Ohio's families by promoting effective, efficient, and sufficient behavioral health and family services through member excellence and family advocacy."

For a variety of reasons, the State of Ohio has recently abandoned the outcome measure that had been in use for a number of years.  In my opinion, this is a "good news/bad news" situation.  The good news is that the scale being used was neither feasible nor clinically useful.  The bad news, at least at this point in time, is that state officials opted for no measure rather than another valid, reliable, and feasible outcome tool.  This does not mean that agencies and providers are not interested in outcomes.  Indeed, as I will soon blog about, a number of clinics and therapists in Ohio are using the Outcome and Session Rating Scales to inform and improve service delivery.  At the conference, John Blair and Jonathon Glassman from Myoutcomes.com demonstrated the web-based system for administering, scoring, and interpreting the scales to many attendees.  I caught up with them both in the hall outside the exhibit room.

Anyway, thanks go to the members and directors of OCBHFSP for inviting me to present at the conference.  I look forward to working with you in the future.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: behavioral health, medicine, outcome measurement, outcome measures, outcome rating scale, research, session rating scale, therapy

My New Year’s Resolution: The Study of Expertise

January 2, 2009 By scottdm Leave a Comment

Most of my career has been spent providing and studying psychotherapy.  Together with my colleagues at the Institute for the Study of Therapeutic Change, I’ve now published 8 books and many, many articles and scholarly papers.  If you are interested you can read more about and even download many of my publications here.

Like most clinicians, I spent the early part of my career focused on how to do therapy.  To me, the process was confusing, and the prospect of sitting opposite a real, suffering client, daunting.  I was determined to understand and be helpful, so I went to graduate school, read books, and attended literally hundreds of seminars.

Unfortunately, as detailed in my article Losing Faith, written with Mark Hubble, the "secret" to effective clinical practice always seemed to elude me.  Oh, I had ideas, and many of the people I worked with claimed our work together helped.  At the same time, doing the work never seemed as simple or effortless as professional books and trainings made it appear.

Each book and paper I’ve authored and co-authored over the last 20 years has been an attempt to mine the “mystery” of how psychotherapy actually works.  Along the way, my colleagues and I have paradoxically uncovered a great deal about what contributes little or nothing to treatment outcome! Topping the list, of course, are treatment models.  In spite of the current emphasis on “evidence-based” practice, there is no evidence that using particular treatment models for specific diagnostic groups improves outcome.  It’s also hugely expensive!  Other factors that occupy a great deal of professional attention but ultimately make little or no difference include: client age, gender, DSM diagnosis, prior treatment history; additionally, therapist age, gender, years of experience, professional discipline, degree, training, amount of supervision, personal therapy, licensure, or certification.

In short, we spend a great deal of time, effort, and money on matters that matter very little.

For the last 10 years, my work has focused on factors common to all therapeutic approaches.  The logic guiding these efforts was simple and straightforward.  The proven effectiveness of psychotherapy, combined with the failure to find differences between competing approaches, meant that elements shared by all approaches accounted for the success of therapy.  And make no mistake, treatment works.  The average person in treatment is better off than 80% of those with similar problems who do not get professional help.

In the Heart and Soul of Change, my colleagues and I, joined by some of the field’s leading researchers, summarized what was known about the effective ingredients shared by all therapeutic approaches. The factors included the therapeutic alliance, placebo/hope/expectancy, structure and techniques in combination with a huge, hairy amount of unexplained “stuff” known as “extratherapeutic factors.”

Our argument, at the time, was that effectiveness could be enhanced by practitioners purposefully working to enhance the contribution of these pantheoretical ingredients.  At a minimum, we believed that working in this manner would help move professional practice beyond the schoolism that had long dominated the field.

Ultimately though, we were coming dangerously close to simply proposing a new model of therapy–this one based on the common factors.  In any event, practitioners following the work treated our suggestions as such.  Instead of, say, "confronting dysfunctional thinking," they understood us to be advocating a "client-directed" or strengths-based approach.  Discussion of particular "strategies" and "skills" for accomplishing these objectives did not lag far behind.  Additionally, while the common factors enjoyed overwhelming empirical support (especially as compared to so-called specific factors), their adoption as a guiding framework was de facto illogical.  Think about it.  If the effectiveness of the various and competing treatment approaches is due to a shared set of common factors, and all models work equally well, why would anyone need to learn about the common factors?

Since the publication of the first edition of the Heart and Soul of Change in 1999, I've struggled to move beyond this point.  I'm excited to report that in the last year our understanding of effective clinical practice has taken a dramatic leap forward.  All hype aside, we discovered the reason why our previous efforts had long failed: our research had been too narrow.  Simply put, we'd been focusing on therapy rather than on expertise and expert performance.  The path to excellence, we have learned, will never be found by limiting explorations to the world of psychotherapy, with its attendant theories, tools, and techniques.  Instead, attention needs to be directed to superior performance, regardless of calling or career.

A significant body of research shows that the strategies used by top performers to achieve superior success are the same across a wide array of fields, including chess, medicine, sales, sports, computer programming, teaching, music, and therapy!  Not long ago, we published our initial findings from a study of thousands of top-performing clinicians in an article titled "Supershrinks."  I must say, however, that we have just scratched the surface.  Using outcome measures to identify and track top-performing clinicians over time is enabling us, for the first time in the history of the profession, to "reverse engineer" expertise.  Instead of assuming that popular trainers (and the methods they promote) are effective, we are studying clinicians who have a proven track record.  The results are provocative and revolutionary, and will be reported first here on the Top Performance Blog!  So, stay tuned.  Indeed, why not subscribe?  That way, you'll be among the first to know.

Filed Under: Behavioral Health, excellence, Top Performance Tagged With: behavioral health, cdoi, DSM, feedback informed treatment, mental health, ors, outcome measurement, psychotherapy, routine outcome measurement, srs, supervision, therapeutic alliance, therapy


Upcoming Training

Jun
03

Feedback Informed Treatment (FIT) Intensive ONLINE


Oct
01

Training of Trainers 2025


Nov
20

FIT Implementation Intensive 2025

FIT Software tools

LinkedIn

Topics of Interest:

  • Behavioral Health (112)
  • behavioral health (5)
  • Brain-based Research (2)
  • CDOI (14)
  • Conferences and Training (67)
  • deliberate practice (31)
  • Dodo Verdict (9)
  • Drug and Alcohol (3)
  • evidence-based practice (67)
  • excellence (63)
  • Feedback (40)
  • Feedback Informed Treatment – FIT (246)
  • FIT (29)
  • FIT Software Tools (12)
  • ICCE (26)
  • Implementation (7)
  • medication adherence (3)
  • obesity (1)
  • PCOMS (11)
  • Practice Based Evidence (39)
  • PTSD (4)
  • Suicide (1)
  • supervision (1)
  • Termination (1)
  • Therapeutic Relationship (9)
  • Top Performance (40)

Recent Posts

  • Agape
  • Snippets
  • Results from the first bona fide study of deliberate practice
  • Fasten your seatbelt
  • A not so helpful, helping hand

Recent Comments

  • Bea Lopez on The Cryptonite of Behavioral Health: Making Mistakes
  • Anshuman Rawat on Integrity versus Despair
  • Transparency In Therapy and In Life - Mindfully Alive on How Does Feedback Informed Treatment Work? I’m Not Surprised
  • scottdm on Simple, not Easy: Using the ORS and SRS Effectively
  • arthur goulooze on Simple, not Easy: Using the ORS and SRS Effectively

Tags

addiction Alliance behavioral health brief therapy Carl Rogers CBT cdoi common factors conferences continuing education denmark evidence based medicine evidence based practice Evolution of Psychotherapy excellence feedback feedback informed treatment healthcare holland icce international center for clinical excellence medicine mental health meta-analysis Norway NREPP ors outcome measurement outcome rating scale post traumatic stress practice-based evidence psychology psychometrics psychotherapy psychotherapy networker public behavioral health randomized clinical trial SAMHSA session rating scale srs supershrinks sweden Therapist Effects therapy Training