Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


Dumb and Dumber: Research and the Media

April 2, 2014 By scottdm 1 Comment


“Just when I thought you couldn’t get any dumber, you go and do something like this… and totally redeem yourself!”
– Harry in Dumb & Dumber

On January 25th, my inbox began filling with emails from friends and fellow researchers around the globe.  “Have you seen the article in The Guardian?” they asked.  “What do you make of it?” others inquired.  “Have you read the study the authors are talking about?  Is it true?!”  A few of the messages were snarkier, even gloating: “Scott, research has finally proven the Dodo verdict is wrong!”

The article the emails referred to was titled, Are all psychological therapies equally effective?  Don’t ask the dodo.  The subtitle boldly announced, “The claim that all forms of psychotherapy are winners has been dealt a blow.”

Honestly, my first thought on reading the headline was, “Why is an obscure topic like the ‘Dodo verdict’ the subject of an article in a major newspaper?”  Who in their right mind–outside of researchers and a small cadre of psychotherapists–would care?  What possible interest would a lengthy dissertation on the subject–including references to psychologist Saul Rosenzweig (who first coined the expression in the 1930s) and researcher allegiance effects–hold for the average Joe or Jane reader of The Guardian?  At a minimum, it struck me as odd.

And odd it stayed, until I glanced down to see who had written the piece.  The authors were psychologist Daniel Freeman–a strong proponent of empirically supported treatments–and his journalist brother, Jason.


Briefly, advocates of ESTs (empirically supported treatments) hold that certain therapies are better than others in the treatment of specific disorders.  Lists of such treatments are created–for example, the NICE Guidelines–dictating which of the therapies are deemed “best.”  Far from innocuous, such lists are, in turn, used to direct public policy, including both the types of treatment offered and the reimbursement given.

Interestingly, in the article, Freeman and Freeman base their conclusion that “the dodo was wrong” on a single study.  Sure enough, that one study, comparing CBT to psychoanalysis, found that CBT resulted in superior effects in the treatment of bulimia.  No other studies were mentioned to bolster this bold claim–an assertion that would effectively overturn nearly 50 years of robust research findings documenting no difference in outcome among competing treatment approaches.

In contrast to what is popularly believed, extraordinary findings from single studies are fairly common in science.  As a result, scientists have learned to require replication, by multiple investigators, working in different settings.

The media, however, are another story.  They love such studies.  The controversy generates interest, capturing readers’ attention.  Remember cold fusion?  In 1989, researchers Stanley Pons and Martin Fleischmann–then two of the world’s leading electrochemists–claimed they had produced a nuclear reaction at room temperature–a finding that would, if true, not only overturn decades of prior research and theory but, more importantly, revolutionize energy production.

The media went nuts.  TV and print couldn’t get enough of it.  The hope for a cheap, clean, and abundant source of energy was simply too much to ignore.  The only problem was that, in the time that followed, no one could replicate Pons and Fleischmann’s results.  No one.  While the media ran off in search of other, more tantalizing findings to report, cold fusion quietly disappeared, becoming a footnote in history.

Back to The Guardian.  Curiously, Freeman and Freeman did not mention another, truly massive study published in Clinical Psychology Review—a study available in print at the time their article appeared.  In it, the researchers used the statistically rigorous method of meta-analysis to review results from 53 studies of psychological treatments for eating disorders.  Fifty-three!  Their finding?  Confirming mountains of prior evidence: no difference in effect between competing therapeutic approaches.  NONE!
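For readers unfamiliar with the method, the pooling at the heart of a meta-analysis can be sketched in a few lines.  The fixed-effect version below weights each study's effect size by the inverse of its variance, so larger, more precise studies count for more.  The numbers are invented for illustration only; they are not taken from the Clinical Psychology Review study.

```python
# Minimal sketch of a fixed-effect meta-analysis: each study's effect size
# (e.g., a standardized mean difference between two therapies) is weighted
# by the inverse of its variance. All numbers below are hypothetical.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Three hypothetical head-to-head comparisons of competing treatments
effects = [0.10, -0.05, 0.02]    # made-up standardized mean differences
variances = [0.04, 0.09, 0.02]   # made-up sampling variances

d, se = pooled_effect(effects, variances)
print(f"pooled d = {d:.2f} (SE = {se:.2f})")
```

The point of pooling is exactly the lesson of this post: a near-zero pooled effect across dozens of studies is far stronger evidence than any single dramatic result.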

Obviously, however, such results are not likely to attract much attention.


Sadly, the same day the article appeared in The Guardian, John R. Huizenga passed away.  Huizenga is perhaps best known as one of the physicists who helped build the atomic bomb.  Importantly, however, he was also among the first to debunk the claims about cold fusion made by Pons and Fleischmann.  His real-world experience, and decades of research, made clear that the reports were a case of dumb (cold fusion) being followed by dumber (media reports about cold fusion).

“How ironic this stalwart of science died on this day,” I thought, “and how inspiring his example is of ‘good science.'”

I spent the rest of the day replying to my emails, including a link to the study in Clinical Psychology Review (smart).  “Don’t believe the hype,” I advised.  “Stick to the data” (and smarter)!

Filed Under: Practice Based Evidence Tagged With: CBT, Clinical Psychology Review, Daniel Freeman, dodo verdict, eating disorder, Jason Freeman, Martin Fleischmann, meta-analysis, NICE, psychoanalysis, psychotherapist, psychotherapy, research, Saul Rosenzweig, Stanley Pons, the guardian

Dealing with Scientific Objections to the Outcome and Session Rating Scales: Real and Bogus

December 15, 2012 By scottdm Leave a Comment

The available evidence is clear: seeking formal feedback from consumers of behavioral health services decreases dropout and deterioration while simultaneously improving effectiveness.  When I teach practitioners how to use the ORS and SRS to elicit feedback regarding progress and the therapeutic relationship, three common and important concerns are raised:

  1. How can such simple and brief scales provide meaningful information?
  2. Are consumers going to be honest?
  3. Aren’t these measures merely assessing satisfaction rather than anything meaningful?

Recently, I was discussing these concerns with ICCE Associate and Certified Trainer, Dan Buccino.

Briefly, Dan is a clinical supervisor and student coordinator in the Adult Outpatient Community Psychiatry program at Johns Hopkins.  He’d encountered not only the concerns noted above but several additional objections.  As he said in his email, “they were at once baffling and yet exciting, because they were so unusual and rigorous.”

“It’s a sign of the times,” I replied, “As FIT (feedback informed treatment) becomes more widespread, the supporting evidence will be scrutinized more carefully.  It’s a good sign.”

Together with psychologist and ICCE Senior Associate and Trainer Jason Seidel, Dan crafted a detailed response.  When I told them that I believed the ICCE community would value having access to the document they created, both agreed to let me publish it on the Top Performance blog.  So…here it is.  Please read it and feel free to pass it along to others.


Filed Under: Feedback Informed Treatment - FIT Tagged With: accountability, behavioral health, Certified Trainers, evidence based practice, feedback, interviews, mental health, ors, practice-based evidence, psychometrics, research, srs

Accountability in Behavioral Health: Steps for Dealing with Cutbacks, Shortfalls, and Tough Economic Conditions

January 25, 2010 By scottdm 3 Comments

As anyone who follows me on Facebook knows, I get around.  In the past few months, I have visited Australia, Norway, Sweden, and Denmark (to name but a few countries), as well as criss-crossed the United States.  If I were asked to sum up the state of public behavioral health agencies in a single word, that word–with very few exceptions–would be: desperate.  Between unfunded mandates and funding cutbacks, agencies are struggling.

Not long ago, I blogged about the challenges facing agencies and providers in Ohio.  In addition to reductions in staffing, those in public behavioral health are dealing with increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  The one bright spot in this otherwise frightening climate is: outcomes.  Several counties in Ohio have adopted the ORS and SRS and been using them to improve the effectiveness and efficiency of behavioral health services.

I’ve been working with the managers and providers in both Marion and Crawford counties for a little over two years.  Last year, the agencies endured significant cuts in funding.  As a result, they were forced to eliminate a substantial number of positions.  Needless to say, it was a painful process with no upsides–except that, as a result of using the measures, the dedicated providers had so improved the effectiveness and efficiency of treatment that they were able to absorb the loss of staff without cutting services to clients.

The agencies cite four main findings resulting from the work we’ve done together over the last two years.  In their own words:

  1. Use of FIT has enabled us to be more efficient, which is particularly important given Ohio’s economic picture and the impact of State budget cuts. Specifically, FIT is enabling service providers and supervisors to identify, much earlier, consumers who are not progressing in the treatment process. This allows us to change course sooner when treatment is not working, to know if changes work, to identify consumers in need of a different level of care, etc.  FIT also provides data on which the provider and consumer can base decisions about the intensity of treatment and treatment continuation (i.e., when to extend time between services or when the episode of service should end). In short, our staff and consumers are spending much less time “spinning their wheels” in unproductive activities.  As a result, we have noticed more planned discharges and fewer clients simply dropping out of treatment.
  2. FIT provides aggregate effect size data for individual service providers, for programs, and for services, based on data from a valid and reliable outcome scale. Effect sizes are calculated by comparing our outcome data to a large national database. Progress achieved by individual consumers is also compared to this national database. For the first time, we can “prove” to referral sources and funding sources that our treatment works, using data from a valid and reliable scale. Effect size data also have numerous implications for supervision, and supervision sessions are more focused and productive.
  3.  Use of the SRS (session rating scale) is helping providers attend to the therapeutic alliance in a much more deliberate manner. As a result, we have noticed increased collaboration between consumer and provider, less resistance and more partnership, and greater openness from consumers about their treatment experience. Consumer satisfaction surveying has revealed increased satisfaction by consumers. The implications for consumers keeping appointments and actually implementing what is learned in treatment are clear. The Session Rating Scale is also yielding some unexpected feedback from clients and has caused us to rethink what we assume about clients and their treatment experience.
  4. Service providers, especially those who are less experienced, appear to be more confident and purposeful when providing services. The data provide a basis for clinical work, and there is much less “flying by the seat of their pants.”

Inspiring, eh?  And now, listen to Community Counseling Services Director Bob Moneysmith and Crawford-Marion ADAMH Board Associate Director Shirley Galdys describe the implementation:
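The effect-size comparison described in point 2 can be sketched in a few lines.  This is only a minimal illustration of the general idea (mean pre-to-post change expressed in standard-deviation units, in the spirit of Cohen's d); the scores and the `benchmark_sd` value below are hypothetical, not the agencies' actual method or the national ORS norms.

```python
# Illustrative sketch: an agency's average change on an outcome scale,
# expressed relative to a reference standard deviation from a benchmark
# sample. All numbers are hypothetical, for illustration only.

def effect_size(pre_scores, post_scores, benchmark_sd):
    """Mean pre-to-post change divided by a reference standard deviation."""
    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_change = sum(changes) / len(changes)
    return mean_change / benchmark_sd

pre = [18.0, 20.5, 15.0, 22.0]    # hypothetical intake scores
post = [26.0, 25.5, 24.0, 27.0]   # hypothetical scores at last session

print(f"agency effect size: {effect_size(pre, post, benchmark_sd=8.0):.2f}")
```

Expressing outcomes this way is what lets an agency compare its results against a national sample rather than against nothing at all.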

Filed Under: Behavioral Health Tagged With: cdoi, evidence based practice, icce, ors, outcome rating scale, public behavioral health, research, session rating scale, srs

Outcomes in Ohio: The Ohio Council of Behavioral Health & Family Service Providers

October 30, 2009 By scottdm Leave a Comment

Ohio is experiencing the same challenges faced by other states when it comes to behavioral health services: staff and financial cutbacks, increasing oversight and regulation, rising caseloads, unrelenting paperwork, and demands for accountability.  Into the breach stepped the Ohio Council of Behavioral Health & Family Service Providers, which organized its 30th annual conference around helping members meet these challenges and provide the most effective services possible.

On Tuesday, I presented a plenary address summarizing 40 years of research on “what works” in clinical practice, as well as strategies for documenting and improving retention and outcome of behavioral health services.  What can I say?  It was a real pleasure working with the 200+ clinicians, administrators, payers, and business executives in attendance.  Members of OCBHFSP truly live up to their stated mission of “improving the health of Ohio’s communities and the well-being of Ohio’s families by promoting effective, efficient, and sufficient behavioral health and family services through member excellence and family advocacy.”

For a variety of reasons, the State of Ohio has recently abandoned the outcome measure that had been in use for a number of years.  In my opinion, this is a “good news/bad news” situation.  The good news is that the scale being used was neither feasible nor clinically useful.  The bad news, at least at this point in time, is that state officials opted for no measure rather than another valid, reliable, and feasible outcome tool.  This does not mean that agencies and providers are not interested in outcome.  Indeed, as I will soon blog about, a number of clinics and therapists in Ohio are using the Outcome and Session Rating Scales to inform and improve service delivery.  At the conference, John Blair and Jonathon Glassman from Myoutcomes.com demonstrated the web-based system for administering, scoring, and interpreting the scales to many attendees.  I caught up with them both in the hall outside the exhibit room.

Anyway, thanks go to the members and directors of OCBHFSP for inviting me to present at the conference.  I look forward to working with you in the future.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT Tagged With: behavioral health, medicine, outcome measurement, outcome measures, outcome rating scale, research, session rating scale, therapy
