Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


Dumb and Dumber: Research and the Media

April 2, 2014 By scottdm 1 Comment


“Just when I thought you couldn’t get any dumber, you go and do something like this… and totally redeem yourself!”
– Harry in Dumb & Dumber

On January 25th, my inbox began filling with emails from friends and fellow researchers around the globe.  “Have you seen the article in the Guardian?” they asked.  “What do you make of it?” others inquired, “Have you read the study the authors are talking about?  Is it true?!”  A few of the messages were snarkier, even gloating,  “Scott, research has finally proven the Dodo verdict is wrong!”

The article the emails referred to was titled, Are all psychological therapies equally effective?  Don’t ask the dodo.  The subtitle boldly announced, “The claim that all forms of psychotherapy are winners has been dealt a blow.”

Honestly, my first thought on reading the headline was, “Why is an obscure topic like the ‘Dodo verdict’ the subject of an article in a major newspaper?”  Who in their right mind–outside of researchers and a small cadre of psychotherapists–would care?  What possible interest would a lengthy dissertation on the subject–including references to psychologist Saul Rosenzweig (who first coined the expression in the 1930s) and researcher allegiance effects–hold for the average Joe or Jane reader of The Guardian?  At a minimum, it struck me as odd.

And odd it stayed, until I glanced down to see who had written the piece.  The authors were psychologist Daniel Freeman–a strong proponent of empirically supported treatments (ESTs)–and his journalist brother, Jason.

[Photo: Jason and Daniel Freeman]

Briefly, advocates of ESTs hold that certain therapies are better than others in the treatment of specific disorders.  Lists of such treatments are created–for example, the NICE Guidelines–dictating which therapies are deemed “best.”  Far from innocuous, such lists are, in turn, used to direct public policy, including both the types of treatment offered and the reimbursement given.

Interestingly, in the article, Freeman and Freeman base their conclusion that “the dodo was wrong” on a single study.  Sure enough, that one study, comparing CBT to psychoanalysis, found that CBT resulted in superior effects in the treatment of bulimia.  No other studies were mentioned to bolster this bold claim–an assertion that would effectively overturn nearly 50 years of robust research findings documenting no difference in outcome among competing treatment approaches.

In contrast to what is popularly believed, extraordinary findings from single studies are fairly common in science.  As a result, scientists have learned to require replication by multiple investigators working in different settings.

The media, they’re another story.  They love such studies.  The controversy generates interest, capturing readers’ attention.  Remember cold fusion?  In 1989, researchers Stanley Pons and Martin Fleischmann–then two of the world’s leading electrochemists–claimed that they had produced a nuclear reaction at room temperature–a finding that would, if true, not only overturn decades and decades of prior research and theory but, more importantly, revolutionize energy production.

The media went nuts.  TV and print couldn’t get enough of it.  The hope for a cheap, clean, and abundant source of energy was simply too much to ignore.  The only problem was that, in the time that followed, no one could replicate Pons and Fleischmann’s results.  No one.  While the media ran off in search of other, more tantalizing findings to report, cold fusion quietly disappeared, becoming a footnote in history.

Back to The Guardian.  Curiously, Freeman and Freeman did not mention another, truly massive study published in Clinical Psychology Review—a study available in print at the time their article appeared.  In it, the researchers used the statistically rigorous method of meta-analysis to review results from 53 studies of psychological treatments for eating disorders.  Fifty-three!  Their finding?  Confirming mountains of prior evidence: no difference in effect between competing therapeutic approaches.  NONE!
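For readers unfamiliar with the method, meta-analysis pools the effect size from every study into a single weighted average, so no one investigator’s result can dominate.  In its common random-effects form (a textbook formula, not anything specific to this study):

\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \tau^2}

Here, d_i is the effect size from study i, v_i is its sampling variance, and \tau^2 is the between-study variance.  With 53 studies in the pool, a spurious single-study “win” simply gets averaged down to size.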

Obviously, however, such results are not likely to attract much attention.

[Photo: John R. Huizenga]

Sadly, the same day that the article appeared in The Guardian, John R. Huizenga passed away.  Huizenga is perhaps best known as one of the physicists who helped build the atomic bomb.  Importantly, however, he was also among the first to debunk the claims about cold fusion made by Pons and Fleischmann.  His real-world experience, and decades of research, made clear that the reports were a case of dumb (cold fusion) being followed by dumber (media reports about cold fusion).

“How ironic this stalwart of science died on this day,” I thought, “and how inspiring his example is of ‘good science.'”

I spent the rest of the day replying to my emails, including a link to the study in Clinical Psychology Review (smart).  “Don’t believe the hype,” I advised, “stick to the data” (and smarter)!

Filed Under: Practice Based Evidence Tagged With: CBT, Clinical Psychology Review, Daniel Freeman, dodo verdict, eating disorder, Jason Freeman, Martin Fleischmann, meta-analysis, NICE, psychoanalysis, psychotherapist, psychotherapy, research, Saul Rosenzweig, Stanley Pons, The Guardian

Evidence-based Practice is a Verb, not a Noun

April 8, 2013 By scottdm 1 Comment

Evidence-based practice (EBP).  What is it?  According to the American Psychological Association and the Institute of Medicine, it has three components: (1) the best evidence; in combination with (2) individual clinical expertise; and consistent with (3) patient values and expectations.  Said another way, EBP is a verb.  Why then do so many treat it as a noun, continually linking the expression to the use of specific treatment approaches?  As just one example, check out the guidelines published for the treatment of people with PTSD by the National Institute for Clinical Excellence (NICE)–the U.K.’s equivalent of the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA).  Despite the above-noted definition, and the lack of evidence favoring one treatment over another, NICE equates EBP with the use of specific treatment approaches and boldly recommends certain methods over others.

Not long ago, ICCE Senior Associate and U.K.-based researcher and clinician Bill Andrews addressed the problems with the guidelines in a presentation to an audience of British practitioners.  He takes up not only the inconsistent use of the term “evidence-based practice” in the development of guidelines by governing bodies but also the actual research on PTSD.  After watching the clip, take some time to review the articles assembled below, which Bill cites during his presentation.  The main point here is that clinicians need not be afraid of EBP.  Instead, they need to insist that leaders and officials stick to the stated definition–a definition I’m perfectly content to live with, as are most practitioners I meet.  To wit: know what the evidence says “works,” then use my expertise to translate those findings into practices that fit the values, preferences, and expectations of the individual consumers I treat.

Click here to read the meta-analysis that started it all.  Don’t stop there, however: make sure to read the response to that study written by proponents of the NICE guidelines.  You’ll be completely up to date if you finish with our response to that critique.

Filed Under: Practice Based Evidence Tagged With: American Psychological Association, evidence based practice, Institute of Medicine, NICE, NREPP, ptsd, SAMHSA

Believing is Seeing: How Wishing Makes Things So

January 3, 2013 By scottdm Leave a Comment

Yesterday evening, my family and I were watching a bit of T.V.  My son, Michael, commented on all the ads for nutritional supplements, juicing machines, and weight-loss programs and devices.  “Oh yeah,” I thought, then explained to him, “It’s the start of a new year.”  Following “spending more time with family,” available evidence shows exercise and weight loss top the bill of resolutions.  Other research shows that a whopping 80% eventually break these well-intentioned commitments.  Fully a third won’t even make it to the end of the month!  Most attribute the failure to being too busy, others to a lack of motivation.  Whatever the cause, it’s clear that, when it comes to change, hope and belief will only take you so far.

What can help?  More on that in a moment.

In the meantime, consider a recent study on the role of hope and belief in research on psychotherapy.  Beginning in the 1970s, study after study–and studies of studies–has found a substantial association between the effectiveness of particular treatment models and the beliefs of the researchers who conduct the specific investigations.  In the literature, the findings are referred to under the generic label “researcher allegiance,” or R.A.  Basically, psychotherapy outcome researchers tend to find in favor of the approach they champion, believe in, and have an affinity toward.  Unlike New Year’s resolutions, it seems, the impact of hope and belief in psychotherapy outcome research is not limited; indeed, it carries investigators all the way to success–albeit a result that is completely “in the eye of the beholder.”  That is, if one believes the research.  Some don’t.

Hang with me now as I review the controversy surrounding this finding.  As robust as the results on researcher allegiance appear, an argument can be made that the phenomenon is a reflection rather than a cause of differences in treatment effectiveness.  The argument goes: researcher allegiance springs from the very thing that produces differences in outcome between approaches–real differences in effectiveness.  In short, researchers’ beliefs do not cause the effects so much as the superior effects of the methods cause researchers to believe.  Makes sense, right?  And there the matter largely languished, unresolved, for decades.

That is, until recently.  Turns out, believing is seeing.  Using a sample of studies in which treatments with equivalent efficacy were directly compared within the same study, researchers Munder, Fluckiger, Gerger, Wampold, and Barth (2012) found that a researcher’s allegiance to a particular method systematically biases their results in favor of their chosen approach.  The specific methods included in the study were all treatments designated as “trauma-focused” and deemed “equally effective” by panels of experts such as the U.K.’s National Institute for Clinical Excellence.  Since these trauma-focused approaches are equivalent in outcome, researcher allegiance should not have been predictive of outcome.  Yet it was–accounting for an incredible 12% of the variance.  When it comes to psychotherapy outcome research, wishing makes it so.
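How big is 12%, really?  A back-of-the-envelope conversion (my gloss, not a calculation reported in the study) translates variance explained into a correlation:

r = \sqrt{R^2} = \sqrt{0.12} \approx 0.35

By conventional benchmarks, that is a medium-to-large effect for a variable that, in principle, should have predicted nothing at all.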

What’s the “take-away” for practitioners?  Belief is powerful stuff: it can either help you see possibilities or blind you to important realities.  Moreover, you cannot check your beliefs at the door of the consulting room, nor would you want to.  Every day, therapists encourage people to take the first steps toward a happier, more meaningful life by rekindling hope.  However, if researchers–bound by adherence to protocol and subject to peer review–can be fooled, so can therapists.  The potentially significant consequences of unchecked belief become apparent when one considers a recently published study by Walfish et al. (2012), which found that therapists, on average, overestimate their effectiveness by 65%.

When it comes to keeping New Year’s resolutions, experts recommend avoiding broad promises and grand commitments and instead advise setting small, concrete, measurable objectives.  Belief, it seems, is most helpful when its aims are clear and its effects routinely verified.  One simple way to implement this sage counsel in psychotherapy is to routinely solicit feedback from consumers about the process and outcome of the services offered.  Doing so, research clearly shows, improves both retention and effectiveness.

You can get two simple, easy-to-use scales for free by registering at http://scottdmiller.com/srs-ors-license/.  A worldwide community of behavioral health professionals is available to support your efforts at www.centerforclinicalexcellence.com.

You can also join us in Chicago for four days of intensive training.  We promise to challenge your beliefs and provide you with the skills and tools necessary for pushing your clinical performance to the next level of effectiveness.

Filed Under: Feedback Informed Treatment - FIT Tagged With: NICE, ors, outcome rating scale, psychotherapy, session rating scale, srs, wampold

What is "Best Practice"?

October 20, 2010 By scottdm Leave a Comment

You have to admit the phrase “best practice” is the buzzword of late. Graduate school training programs, professional continuing education events, policy and practice guidelines, and funding decisions are tied in some form or another to the concept. So, what exactly is it? At the State and Federal level, lists of so-called “evidence-based” interventions have been assembled and are being disseminated. In lockstep, as I reviewed recently, are groups like NICE. Their message is simple and straightforward: best practice is about applying specific treatments to specific disorders.
Admittedly, the message has a certain “common sense” appeal.  The problem, of course, is that behavioral health interventions are not the psychological equivalent of penicillin.  In addition to the numerous studies highlighted on this blog documenting the failure of the “specific treatments for specific disorders” perspective, consider research published in the Spring 2010 edition of the Journal of Counseling and Development by Scott Nyman, Mark Nafziger, and Timothy Smith.  Briefly, the authors examined outcome data to “evaluate treatment effectiveness across counselor training level [and found] no significant outcome differences between professional staff and … interns, and practicum students” (p. 204).  Although the researchers are careful to make all the customary qualifications, the conclusion–especially when combined with years of similar findings reported in the literature–is difficult to escape: counseling and psychotherapy are highly regulated activities requiring years of expensive professional training that ultimately fails to make the practitioner any better than they were at the outset.
What gives?  Truth is, the popular conceptualization of “best practice” as a “specific treatment for a specific disorder” is hopelessly outdated.  In a report few have read, the American Psychological Association (following the lead of the Institute of Medicine) redefined evidence-based, or best, practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.”  Regarding the phrase “clinical expertise” in this definition, the Task Force stated, “Clinical expertise…entails the monitoring of patient progress (and of changes in the patient’s circumstances—e.g., job loss, major illness) that may suggest the need to adjust the treatment (Lambert, Bergin, & Garfield, 2004a). If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate” (p. 273; emphasis in the original).
Said another way, instead of choosing the “specific treatment for the specific disorder” from a list of approved treatments, best practice is:
  • Integrating the best evidence into ongoing clinical practice;
  • Tailoring services to the consumer’s characteristics, culture, and preferences;
  • Formal, ongoing, real-time monitoring of progress and the therapeutic relationship (see the sketch below).
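For readers who like to see the moving parts, here is a minimal sketch in Python of what that third element can look like in day-to-day use.  It is an illustration only, not an official FIT or ICCE tool: the 0-40 score range, clinical cutoff of 25, and 5-point reliable-change threshold mirror values published for the Outcome Rating Scale, while the class and method names are simply my own.

from dataclasses import dataclass, field

# Illustrative defaults mirroring published Outcome Rating Scale (ORS) values;
# substitute the parameters of whatever measure you actually use.
RELIABLE_CHANGE = 5    # change smaller than this may be measurement noise
CLINICAL_CUTOFF = 25   # scores below this resemble a clinical population

@dataclass
class ProgressMonitor:
    """Tracks session-by-session outcome scores for a single client."""
    scores: list = field(default_factory=list)

    def record(self, score: float) -> None:
        if not 0 <= score <= 40:
            raise ValueError("ORS-style scores fall in the 0-40 range")
        self.scores.append(score)

    def status(self) -> str:
        """Flags whether progress 'is proceeding adequately,' in the Task Force's phrase."""
        if len(self.scores) < 2:
            return "baseline: too early to judge"
        change = self.scores[-1] - self.scores[0]
        if change >= RELIABLE_CHANGE:
            return "reliable improvement"
        if change <= -RELIABLE_CHANGE:
            return "reliable deterioration: revisit the approach"
        if self.scores[-1] < CLINICAL_CUTOFF:
            return "no reliable change yet: check the alliance, adjust as needed"
        return "stable and above the clinical cutoff"

# Three sessions of essentially flat scores should prompt a conversation.
monitor = ProgressMonitor()
for score in (18, 19, 17):
    monitor.record(score)
print(monitor.status())  # no reliable change yet: check the alliance, adjust as needed

The point, of course, is not the code but the discipline it encodes: measure every session, compare against an explicit threshold, and let the answer, rather than your allegiance, determine what happens next.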
In sum, best practice is Feedback Informed Treatment (FIT)—the vision of the International Center for Clinical Excellence.  And right now, clinicians, researchers, and policy makers are learning, sharing, and discussing how to implement FIT in treatment settings around the globe on the ICCE web-based community.
Word is getting out. As just one example, consider Accreditation Canada, which recently identified FIT as a “leading practice” for use in behavioral health services. According to the website, leading practices are defined as “creative, evidence-based innovations [that] are commendable examples of high quality leadership and service delivery.” The accreditation body identified FIT as a “simple, measurable, effective, and feasible outcome-based accountability process,” stating that the approach is a model for the rest of the country! You can read the entire report here.
How exactly did this happen?  Put bluntly: people and hard work.  ICCE senior associates and certified trainers Rob Axsen and Cynthia Maeschalck, with the support and backing of Vancouver Coastal Health, worked tirelessly over the last five years to implement FIT and gain recognition for it.  Similar recognition is taking place in the United States, Denmark, Sweden, England, and Norway.
You can help.  First, the next time someone—be it a colleague, trainer, or researcher—equates “best practice” with using a particular model or list of “approved treatment approaches,” share the real, official, “approved” definition noted above.  Second, join Rob, Cynthia, and the hundreds of other practitioners, researchers, and policy makers on the ICCE community helping to reshape behavioral health practice worldwide.

Filed Under: Behavioral Health, evidence-based practice, ICCE, Practice Based Evidence Tagged With: Accreditation Canada, American Psychological Association (APA), cdoi, Cochrane Review, evidence based practice, icce, NICE
