
Is THAT true? Judging Evidence by How Often It’s Repeated

October 22, 2019 By scottdm 11 Comments

I’m sure you’ve heard it repeated many times:

The term “evidence-based practice” refers to specific treatment approaches which have been tested in research and found to be effective;

CBT is the most effective form of psychotherapy for anxiety and depression;

Neuroscience has added valuable insights to the practice of psychotherapy in addition to establishing the neurological basis for many mental illnesses;

Training in trauma-informed treatments (EMDR, Exposure, CRT) improves effectiveness;

Adding mindfulness-based interventions to psychotherapy improves the outcome of psychotherapy;

Clinical supervision and personal therapy enhance clinicians’ ability to engage and help.

Only one problem: none of the foregoing statements are true.  Taking each in turn:

  • As I related in detail in a blogpost some six years ago, evidence-based practice has nothing to do with specific treatment approaches.  The phrase is better thought of as a verb, not a noun.  According to the American Psychological Association and Institute of Medicine, there are three components: (1) the best evidence; in combination with (2) individual clinical expertise; and consistent with (3) patient values and expectations.  Any presenter who says otherwise is selling something.
  • CBT is certainly the most tested treatment approach — the one employed most often in randomized controlled trials (aka RCTs).  That said, studies that compare the approach with other methods find all therapeutic methods work equally well across a wide range of diagnoses and presenting complaints.
  • When it comes to neuroscience, a picture is apparently worth more than thousands of studies.  On the lecture circuit, mental illness is routinely linked to the volume, structure, and function of the hippocampus and amygdala.  And yet, a recent review compared such claims to 19th-century phrenology.  More to the point, no studies show that so-called “neurologically informed” treatment approaches improve outcome over and above traditional psychotherapy (Thanks to editor Paul Fidalgo for making this normally paywalled article available.)
  • When I surveyed clinicians recently about the most popular subjects at continuing education workshops, trauma came in first place.  Despite widespread belief to the contrary, there is no evidence that learning a “trauma-informed” approach improves a clinician’s effectiveness.  More, consistent with the second bullet point about CBT, such approaches have not been shown to produce better results than any other therapeutic method.
  • Next to trauma, the hottest topic on the lecture circuit is mindfulness.  What do the data say?  The latest meta-analysis found such interventions offer no advantage over other approaches.
  • The evidence clearly shows clinicians value supervision.  In large, longitudinal studies, it is consistently listed among the top three most influential experiences for learning psychotherapy.  And yet, research fails to provide any evidence that supervision contributes to improved outcomes.

Are you surprised?  If so, you are not alone.

The evidence notwithstanding, the important question is why these beliefs persist.

According to the research, part of the answer is repetition.  Hear something often enough and eventually you adjust your “truth bar” — what you accept as established, settled fact.  Of course, advertisers, propagandists, and politicians have known this for generations — paying big bucks to have their messages repeated over and over.

For a long while, researchers believed the “illusory truth effect,” as it has been termed, was limited to ambiguous statements; that is, items not easily checked or open to more than one interpretation.  A recent study, however, shows repetition increases acceptance of false statements even when they are unambiguous and simple to verify.  Frightening to say the least.

A perfect example is the first item on the list above: evidence-based practice refers to specific treatment approaches which have been tested in research and found to be effective.  Type the term into Google, and one of the FIRST hits you’ll get makes clear the statement is false.  It, like other links, defines the term as “a way of approaching decision making about clinical issues.”

Said another way, evidence-based practice is a mindset — a way of approaching our work that has nothing to do with adopting particular treatment protocols.

Still, belief persists.

What can a reasonable person do to avoid falling prey to such falsehoods?

It’s difficult, to be sure.  More, as busy as we are, and given how much information we are subjected to on a daily basis, the usual suggestions (e.g., read carefully, verify all facts independently, seek out counterevidence) will leave all but those with massive amounts of free time feeling overwhelmed.

And therein lies the clue — at least in part — for dealing with the “illusory truth effect.”  Bottom line: if you try to assess each bit of information you encounter on a one-by-one basis, your chances of successfully sorting fact from fiction are low.  Indeed, it will be like trying to quench your thirst by drinking from a fire hydrant.

To increase your chances of success, you must step back from the flood, asking instead, “What must I unquestioningly believe (or take for granted) in order to accept a particular assertion as true?”  Then, once identified, ask yourself whether those assumptions are, in fact, true.

Try it.  Go back to the statements at the beginning of this post with this larger question in mind.

(Hint: they all share a common philosophical and theoretical basis that, once identified, makes verification of the specific statements much easier.)

If you guessed the “medical model” (or something close), you are on the right track.  All assume that helping relieve mental and emotional suffering is the same as fixing a broken arm or treating a bacterial infection — that is, to be successful a treatment containing the ingredients specifically remedial to the problem must be applied.

While mountains of research published over the last five decades document the effectiveness of the “talk therapies,” the same evidence conclusively shows “psychotherapy” does not work in the same way as medical treatments.  Unlike medicine, no specific technique in any particular therapeutic approach has ever proven essential for success.  None.  Any claim based on a similar assumptive base should, therefore, be considered suspect.

Voila!

I’ve been applying the same strategy in the work my team and I have done on using measures and feedback — first, to show that therapists needed to do more than ask for feedback if they wanted to improve their effectiveness; and second, to challenge traditional notions about why, when, and with whom the process does and doesn’t work.  In these, and other instances, the result has been greater understanding and better outcomes.

Filed Under: Brain-based Research, evidence-based practice, Feedback Informed Treatment - FIT, PTSD

Clinical Practice Guidelines: Beneficial Development or Bad Therapy?

December 4, 2017 By scottdm 13 Comments

A couple of weeks ago, the American Psychological Association (APA) released clinical practice guidelines for the treatment of people diagnosed with post-traumatic stress disorder (PTSD).  “Developed over four years using a rigorous process,” according to an article in the APA Monitor, these are the first in a series of recommendations of specific treatment methods for particular psychiatric diagnoses to be published by the organization.

Almost immediately, controversy broke out.  On the Psychology Today blog, Clinical Associate Professor Jonathan Shedler advised practitioners and patients to ignore the new guidelines, labeling them “bad therapy.”  Within a week, Professors Dean McKay and Scott Lilienfeld responded, lauding the guidelines as a “significant advance for psychotherapy practice,” while repeatedly accusing Shedler of committing logical fallacies and misrepresenting the evidence.

One thing I know for sure: at just over 700 pages, the complete guideline and supportive appendices will be read by few, if any, practitioners.  Beyond length, the way the information is presented–especially the lack of hypertext for cross-referencing the studies cited–seriously compromises any straightforward effort to review and verify evidentiary claims.

If, as the old saying goes, “the devil is in the details,” the level of mind-numbing minutiae contained in the official documents ensures he’ll remain well-hidden, tempting all but the most compulsive to accept the headlines on faith.

Consider the question of whether certain treatment approaches are more effective than others.  Page 1 of the Executive Summary identifies differential efficacy as a “key question” to be addressed by the Guideline.  Ultimately, four specific approaches are strongly recommended, being deemed more effective than…wait for it…“relaxation.”

My first thought was, “OK, curious comparison.”  Nevertheless, I read on.

Only by digging deep into the report, tracing the claim to the specific citations, and then using PsycNET and another subscription service to access the actual studies, is it possible to discover that in the vast majority of published trials reviewed, the four “strongly recommended” approaches were actually compared to nothing.  That’s right, nothing.

In the few studies that did include relaxation, the structure of that particular “treatment” precluded sufferers from talking directly about their traumatic experiences.  At this point, my curiosity gave way to chagrin.  Is it any wonder the four other approaches proved more helpful?  What real-world practitioner would limit their work with someone suffering from PTSD to recording “a relaxation script” and telling their client to “listen to it for an hour each day”?

(By the way, it took me several hours to distill the information noted above from the official documentation–and I’m someone with a background in research, access to several online databases, a certain facility with search engines, and connections to a community of fellow researchers with whom I can consult.)

On the subject of what research shows works best in the treatment of PTSD, meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared consistently find no difference in outcome between methods–importantly, whether the treatments are designated “trauma-focused” or not.  Meanwhile, another highly specialized type of research–known as dismantling studies–fails to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis!  And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD: exposure (1, 2).

So, if the data I cite above are accurate–and freely available–how could the committee that created the Guideline come to such dramatically different conclusions?  In particular, why go to such great lengths to recommend particular approaches to the exclusion of others?

Be forewarned, you may find my next statement confusing.  The summary of studies contained in the Guideline and supportive appendices is absolutely accurate.  It is the interpretation of that body of research, however, that is in question.

More than anything else, the difference between the recommendations contained in the Guideline and the evidence I cite above is attributable to a deep and longstanding rift in the body politic of the APA.  How else is one to reconcile advocating the use of particular approaches with APA’s official policy on psychotherapy, which recognizes that “different forms . . . typically produce relatively similar outcomes”?

Seeking to place the profession “on a comparable plane” with medicine, some within the organization–in particular, the leaders and membership of Division 12 (Clinical Psychology)–have long sought to create a psychological formulary.  In part, their argument goes, “Since medicine creates lists of recommended treatments and procedures, why not psychology?”

Here, the answer is simple and straightforward: because psychotherapy does not work like medicine.  As Jerome Frank observed long before the weight of evidence supported his view, effective psychological care comprises:

  • An emotionally-charged, confiding relationship with a helping person (e.g., a therapist);
  • A healing context or setting (e.g., clinic);
  • A rational, conceptual scheme, or myth that is congruent with the sufferer’s worldview and provides a plausible explanation for their difficulties (e.g., psychotherapy theories); and
  • Rituals and/or procedures consistent with the explanation (e.g., techniques).

The four attributes not only fit the evidence but explain why virtually all psychological approaches tested over the last 40 years work–even those labelled pseudoscience (e.g., EMDR) by Lilienfeld and other advocates of guidelines comprised of “approved therapies.”

That the profession could benefit from good guidelines goes without saying.  Healing the division within APA would be a good place to start.  Until then, encouraging practitioners to follow the organization’s own definition of evidence-based practice would suffice.  To wit, “Evidence based practice is the integration of the best available research with clinical expertise in the context of patient (sic) characteristics, culture, and preferences.”  Note the absence of any mention of specific treatment approaches.  Instead, consistent with Frank’s observations, and the preponderance of research findings, emphasis is placed on fitting care to the person.

How to do this?  The official statement continues, encouraging the “monitoring of patient (sic) progress . . . that may suggest the need to adjust the treatment.”  Over the last decade, multiple systems have been developed for tracking engagement and progress in real time.  Our own system, known as Feedback Informed Treatment (FIT), is being applied by thousands of therapists around the world with literally millions of clients.  It is listed on the National Registry of Evidence-based Programs and Practices.  More, when engagement and progress are tracked together with clients in real time, data to date document improvements in the retention and outcome of mental health services, regardless of the treatment method being used.

Until  next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

Filed Under: evidence-based practice, Practice Based Evidence, PTSD

Are all treatment approaches equally effective?

January 9, 2010 By scottdm Leave a Comment

Bruce Wampold, Ph.D.

Late yesterday, I blogged about a soon-to-be-published article in Clinical Psychology Review in which the authors argue that the finding by Benish, Imel, and Wampold (2008) of equivalence in outcomes among treatments for PTSD was due to “bias, over-generalization, lack of transparency, and poor judgement.”  Which interpretation of the evidence is correct?  Are there “specific approaches for specific disorders” that are demonstrably more effective than others?  Or does the available evidence show all approaches intended to be therapeutic to be equally effective?

History makes clear that science produces results in advance of understanding.  Until the response to Ehlers, Bisson, Clark, Creamer, Pilling, Richards, Schnurr, Turner, and Yule becomes available, I wanted to remind people of three prior blog posts that review the evidence regarding the differential efficacy of competing therapeutic approaches.  The first (and I think most illuminating)–“The Debate of the Century“–appeared back in August.  The post featured a link to a debate between Bruce Wampold and an enthusiastic proponent of “empirically supported treatments,” Steve Hollon.  Listen and then see if you agree with the large group of scientists and practitioners in attendance who thought–by a margin of 15:1–that Bruce carried the day.

The second post–“Whoa Nellie!”–commented on a $25 million research grant awarded by the US Department of Defense to study treatments for PTSD.  Why does this make me think of Deep Throat’s admonition to “follow the money”?  Here you can read the study that is causing the uproar within the “specific treatments for specific disorders” gang.

Third, and finally, if you haven’t already read the post “Common versus Specific Factors and the Future of Psychotherapy,” I believe you’ll find helpful its thorough review of the research, written in response to an article by Siev and Chambless critical of the “dodo verdict.”

Filed Under: Behavioral Health, evidence-based practice, Practice Based Evidence, PTSD Tagged With: behavioral health, bruce wampold, Children, continuing education, icce, post traumatic stress, PTSD, public behavioral health

Whoa Nellie! A 25 Million Dollar Study of Treatments for PTSD

October 27, 2009 By scottdm 1 Comment

I have in my hand a frayed and yellowed copy of observations once made by a well-known trainer of horses. The trainer’s simple message for leading a productive and successful professional life was, “If the horse you’re riding dies, get off.”

You would think the advice straightforward enough for all to understand and benefit from.  And yet, the trainer pointed out, “many professionals don’t always follow it.”  Instead, they choose from an array of alternatives, including:

  1. Buying a strong whip
  2. Switching riders
  3. Moving the dead horse to a new location
  4. Riding the dead horse for longer periods of time
  5. Saying things like, “This is the way we’ve always ridden the horse.”
  6. Appointing a committee to study the horse
  7. Arranging to visit other sites where they ride dead horses more efficiently
  8. Increasing the standards for riding dead horses
  9. Creating a test for measuring our riding ability
  10. Complaining about the state of the horse these days
  11. Coming up with new styles of riding
  12. Blaming the horse’s parents, since the problem is often in the breeding

When it comes to the treatment of post-traumatic stress disorder, it appears the Department of Defense is applying all of the above.  Recently, the DoD awarded its largest grant ever to “discover the best treatments for combat-related post-traumatic stress disorder” (APA Monitor).  Beneficiaries of the award were naturally ecstatic, stating, “The DoD has never put this amount of money to this before.”

Missing from the announcements was any mention of research which clearly shows no difference in outcome between approaches intended to be therapeutic—including the two approaches chosen for comparison in the DoD study!  In June 2008, researchers Benish, Imel, and Wampold conducted a meta-analysis of all studies in which two or more treatment approaches were directly compared.  The authors conclude, “Given the lack of differential efficacy between treatments, it seems scientifically questionable to recommend one particular treatment over others that appear to be of comparable effectiveness. . . . keeping patients in treatment would appear to be more important in achieving desired outcomes than would prescribing a particular type of psychotherapy” (p. 755).

Ah yes, the horse is dead, but proponents of “specific treatments for specific disorders” ride on.  You can hear their rallying cry: “We will find a more efficient and effective way to ride this dead horse!”  My advice?  Simple: let’s get off this dead horse.  There are any number of effective treatments for PTSD.  The challenge is decidedly not figuring out which one is best for all, but rather “what works” for the individual.  In these recessionary times, I can think of far better ways to spend $25 million than on another “horse race” between competing therapeutic approaches.  Evidence-based methods exist for assessing and adjusting both the “fit and effect” of clinical services—the methods described, for instance, in the scholarly publications section of my website.  Such methods have been found to improve both outcome and retention by as much as 65%.  What will happen?  Though I’m hopeful, I must say the temptation to stay on the horse you chose at the outset of the race is a strong one.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence, PTSD Tagged With: behavioral health, continuing education, evidence based medicine, evidence based practice, icce, meta-analysis, ptst, reimbursement
