Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


How not to be among the 70-95% of practitioners and agencies that fail

April 20, 2014 By scottdm Leave a Comment

Our field is full of good ideas: strategies that work. Each year, practitioners and agencies devote considerable time and resources to staying current with new developments. What does the research say about such efforts? When it comes to implementing new, evidence-based practices, traditional training strategies routinely produce success rates of only 5% to 30%. Said another way, 70-95% of training fails (Fixsen, Blase, Van Dyke, & Metz, 2013).

In 2013, Feedback Informed Treatment (FIT), that is, formally using measures of progress and the therapeutic alliance to guide care, was deemed an evidence-based practice by SAMHSA and listed on the official NREPP website. It’s one of those good ideas. Research to date shows that FIT as much as doubles the effectiveness of behavioral health services while decreasing costs, deterioration, and dropout rates.
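
What does “formally using measures” look like in day-to-day practice? Here is a minimal sketch. It assumes the two brief FIT measures discussed later on this page (the ORS for outcome, the SRS for alliance) and the commonly cited adult thresholds of 25 and 36 on their 0-40 totals; the code is purely illustrative, not an official FIT or ICCE tool.

ORS_CLINICAL_CUTOFF = 25  # commonly cited adult cutoff on the 0-40 ORS total
SRS_ALLIANCE_CUTOFF = 36  # commonly cited alert threshold on the 0-40 SRS total

def review_session(ors_score: float, srs_score: float) -> list[str]:
    """Return feedback flags for one session's ORS (outcome) and SRS (alliance) totals."""
    if not (0 <= ors_score <= 40) or not (0 <= srs_score <= 40):
        raise ValueError("ORS and SRS totals must fall between 0 and 40")
    flags = []
    if ors_score < ORS_CLINICAL_CUTOFF:
        flags.append("outcome: score in the clinical range -- monitor progress closely")
    if srs_score < SRS_ALLIANCE_CUTOFF:
        flags.append("alliance: possible strain -- invite feedback before the session ends")
    return flags

# Example: a client scores 19 on the ORS and 33 on the SRS.
for flag in review_session(ors_score=19, srs_score=33):
    print(flag)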

As effective as FIT has proven to be in scientific studies, the bigger challenge is helping clinicians and agencies implement the approach in real-world clinical settings. Simply put, it’s not enough to know “what works.” You have to be able to put “what works” to work. On this subject, researchers have identified five evidence-based steps associated with the successful implementation of any evidence-based practice. The evidence is summarized in a free manual available online. You can avoid the 70-95% failure rate by reading it before attending another training, buying that new software, or hiring the latest consultant.

At the International Center for Clinical Excellence, the research on implementation is integrated into all training events as well as The Feedback Informed Treatment and Training Manual. Based on these scientifically established steps, clinicians, supervisors, and agency directors learn how to both plan and execute a successful implementation of this potent evidence-based practice.

Filed Under: Conferences and Training Tagged With: behavioral health, dropout rates, evidence based medicine, evidence based practice, feedback informed treatment, FIT, icce, implementation, international center for clinical excellence, NREPP, SAMHSA, Training

Dumb and Dumber: Research and the Media

April 2, 2014 By scottdm 1 Comment

“Just when I thought you couldn’t get any dumber, you go and do something like this… and totally redeem yourself!”
– Harry in Dumb & Dumber

A while back, my inbox suddenly began filling with emails from friends and fellow researchers around the globe.  “Have you seen the article in the Guardian?” they asked.  “What do you make of it?” Others inquired, “Have you read the study the authors are talking about?  Is it true?!”  A few of the messages were snarkier, even gloating,  “Scott, research has finally proven the Dodo verdict is wrong!”

The article the emails referred to was titled, Are all psychological therapies equally effective?  Don’t ask the dodo.  The subtitle boldly announced, “The claim that all forms of psychotherapy are winners has been dealt a blow.”

Honestly, my first thought on reading the headline was, “Why is an obscure topic like the ‘Dodo verdict’ the subject of an article in a major newspaper?” Who in their right mind, outside of researchers and a small cadre of psychotherapists, would care? What possible interest would a lengthy dissertation on the subject, including references to psychologist Saul Rosenzweig (who first coined the expression in the 1930s) and researcher allegiance effects, hold for the average Joe or Jane reader of The Guardian? At a minimum, it struck me as odd.

And odd it stayed, until I glanced down to see who had written the piece. The authors were psychologist Daniel Freeman, a strong proponent of the empirically supported treatments (ESTs) movement, and his journalist brother, Jason.

Briefly, advocates of ESTs hold that certain therapies are better than others in the treatment of specific disorders. Lists of such treatments are created (for example, the NICE Guidelines), dictating which therapies are deemed “best.” Far from innocuous, such lists are, in turn, used to direct public policy, including both the types of treatment offered and the reimbursement given.

Interestingly, in the article, Freeman and Freeman base their conclusion that “the dodo was wrong” on a single study. Sure enough, that one study, comparing CBT to psychoanalysis, found that CBT resulted in superior effects in the treatment of bulimia. No other studies were mentioned to bolster this bold claim, an assertion that would effectively overturn nearly 50 years of robust research findings documenting no difference in outcome among competing treatment approaches.

In contrast to what is popularly believed, extraordinary findings from single studies are fairly common in science. As a result, scientists have learned to require replication by multiple investigators working in different settings.

The media? They’re another story. They love such studies. The controversy generates interest, capturing readers’ attention. Remember cold fusion? In 1989, researchers Stanley Pons and Martin Fleischmann, then two of the world’s leading electrochemists, claimed they had produced a nuclear reaction at room temperature, a finding that would, if true, not only overturn decades of prior research and theory but, more importantly, revolutionize energy production.

The media went nuts.  TV and print couldn’t get enough of it.  The hope for a cheap, clean, and abundant source of energy was simply too much to ignore.  The only problem was that, in the time that followed, no one could replicate Pons and Fleischmann’s results.  No one.  While the media ran off in search of other, more tantalizing findings to report, cold fusion quietly disappeared, becoming a footnote in history.

Back to The Guardian. Curiously, Freeman and Freeman did not mention another, truly massive study published in Clinical Psychology Review, one already in print at the time their article appeared. In it, the researchers used the statistically rigorous method of meta-analysis to review results from 53 studies of psychological treatments for eating disorders. Fifty-three! Their finding? Confirming mountains of prior evidence: no difference in effect between competing therapeutic approaches. NONE!
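
For readers unfamiliar with the method, a meta-analysis pools the effect sizes of many studies, weighting each by its precision, so that no single result can dominate. The fixed-effect sketch below uses invented numbers purely to illustrate the arithmetic; it does not reproduce the data from the Clinical Psychology Review paper.

import math

# Invented effect sizes (Cohen's d) and variances for a handful of
# hypothetical studies -- NOT the actual data from the Clinical
# Psychology Review meta-analysis discussed above.
studies = [(0.10, 0.04), (-0.05, 0.02), (0.30, 0.09), (0.02, 0.01)]

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so more precise studies count for more and no single
# result can dominate the pooled estimate.
weights = [1.0 / variance for _, variance in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

print(f"pooled d = {pooled:.2f}, 95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
# A pooled effect near zero with a confidence interval spanning zero,
# the pattern the 53-study review found, indicates no difference
# between the competing treatments.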

Obviously, however, such results are not likely to attract much attention.

Sadly, the same day the article appeared in The Guardian, John R. Huizenga passed away. Huizenga is perhaps best known as one of the physicists who helped build the atomic bomb. Importantly, however, he was also among the first to debunk the claims about cold fusion made by Pons and Fleischmann. His real-world experience, and decades of research, made clear that the reports were a case of dumb (cold fusion) being followed by dumber (media reports about cold fusion).

“How ironic this stalwart of science died on this day,” I thought, “and how inspiring his example is of ‘good science.’”

I spent the rest of the day replying to my emails, including a link to the study in Clinical Psychology Review (Smart). “Don’t believe the hype,” I advised, “stick to the data” (and Smarter)!

Filed Under: Practice Based Evidence Tagged With: CBT, Clinical Psychology Review, Daniel Freeman, dodo verdict, eating disorder, Jason Freeman, Martin Fleischmann, meta-analysis, NICE, psychoanalysis, psychotherapist, psychotherapy, research, Saul Rosenzweig, Stanley Pons, the guardian

Are you any good as a therapist? The Legacy of Paul W. Clement

March 26, 2014 By scottdm 4 Comments

Paul Clement

Twenty years ago, I came across an article published in the journal Professional Psychology. It was written by a psychologist in private practice, Paul Clement. The piece caught my eye for a number of reasons. First, although we’d never met, Paul lived and worked in Pasadena, California, a short ride from my childhood home. Second, the question he opened his article with was provocative, to say the least: “Are you any good?” In other words, how effective are YOU as a psychotherapist? Third, and most important, he had compiled and was reporting a quantitative analysis of his results over his previous 26 years as a practicing clinician. It was both riveting and stunning. No one I knew had ever published anything similar before.

In graduate school, I’d learned to administer a variety of tests (achievement, vocational, personality, projective, IQ, etc.).  Not once, however, did I attend a course or sit in a lecture about how to measure my results.  I was forced to wonder, “How could that be?”  Six years in graduate school and not a word about evaluating one’s outcomes.  After all, if we don’t know how effective we are, how are any of us supposed to improve?

What was the reason for the absence of measurement, evaluation, and analysis? It certainly wasn’t because psychotherapy wasn’t effective. A massive amount of research existed documenting the effectiveness of treatment. Paul’s research confirmed these results. Of those he’d worked with, 75% were improved at termination. Moreover, such results were obtained in a relatively brief period of time: a median of 12 sessions.

Other results he reported were not so easy to accept.  In short, Paul’s analysis showed that his outcomes had not improved over the course of his career.   At the conclusion of the piece, he observed, “I had expected to find that I had gotten better and better over the years, but my data failed to suggest any systematic change in my therapeutic effectiveness across the 26 years in question…it was a bad surprise for me.” (p. 175).

For years, I carried the article with me in my briefcase, hoping that one day, I might better understand his findings.  Maybe, I thought, Clement was simply an outlier?  Surely, we get better with experience.  It was hard for me to believe I hadn’t improved since my first, ham-handed sessions with clients.  Then again, I didn’t really know.  I wasn’t measuring my results in any meaningful way.

The rest is history. Within a few short years, I was routinely monitoring the outcome and alliance at every session I did with clients. Thanks to my undergraduate professor, Michael Lambert, Ph.D., I began using the OQ-45 to assess outcomes. Another mentor, Dr. Lynn Johnson, had developed a 10-item scale for evaluating the quality of the therapeutic relationship, known as the Session Rating Scale. Both tools became an integral part of the way I worked. Eventually, a suggestion by Haim Omer, Ph.D., led me to consider creating shorter, less time-consuming visual analogue versions of both measures. In time, the ORS and SRS were developed and tested. Throughout this process, Paul Clement and his original study remained an important motivating force.

Just over a year ago, Paul sent me an article evaluating 40 years of his work as a psychotherapist. Once again, I was inspired by his bold, brave, and utterly transparent example. Not only had his outcomes not improved, he reported, they’d actually deteriorated! Leave it to him to point the way! Ever since, our group has been hard at work researching what it takes to forestall such deterioration and improve effectiveness. One place to find a summary is our article in the 50th Anniversary issue of Psychotherapy.

Yesterday, I was drafting an email, responding to one I’d recently received from him, when I learned Paul had died.  I will miss him.  In this, I know I’m not alone.

Filed Under: Top Performance Tagged With: clinician, Haim Omer, Lynn Johnson, Michael Lambert, OQ45, ors, outcome rating scale, Paul Clement, popular psychology, practice-based evidence, psychotherapy, session rating scale, srs, top performance
