SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


The Growing Inaccessibility of Science

July 23, 2024 By scottdm

It’s a complaint I’ve heard from the earliest days of my career.  Therapists do not read the research.  I often mentioned it when teaching workshops around the globe.

“How do we know?”  I would jokingly ask, and then quickly answer, “Research, of course!”

Like people living before the development of the printing press who were dependent on priests and “The Church” to read and interpret the Bible, I’ve long expressed concern about practitioners being dependent on researchers to tell them how to work. 

  • I advised reading the research, encouraging therapists who were skittish to skip the methodology and statistics and cut straight to the discussion section.
  • I taught courses/workshops specifically aimed at helping therapists understand and digest research findings.
  • I’ve published research on my own work despite not being employed by a university or receiving grant funding.
  • I’ve been careful to read available studies and cite the appropriate research in my presentations and writing.

I was naïve.

To begin, the “research-industrial complex” – to paraphrase American president Dwight D. Eisenhower – had tremendous power and influence despite often being unreflective of, and disconnected from, the realities of actual clinical practice.  The dominance of CBT (and its many offshoots) in practice, policy, and reimbursement is a good example.  In some parts of the world, government and other payers restrict training and reimbursement in any other modality – this despite no evidence that CBT has led to improved results and, as documented previously on my blog, data showing such restrictions lead to poorer outcomes.

More to the point, since I first entered the field, research has become much harder to read and understand. 

How do we know?  Research!

Sociologist David Hayes wrote about this trend in Nature more than 30 years ago, arguing it constituted “a threat to an essential characteristic of the endeavor – its openness to outside examination and appraisal” (p. 746).

I’ve been on the receiving end of what Hayes warned about long ago.  Good scientists can disagree.  Indeed, I welcome and have benefited from the critical feedback provided when my work is peer-reviewed.  At the same time, to be helpful, the person reviewing the work must know the relevant literature and the methods employed.  And yet, the ever-growing complexity of research severely limits the pool of “peers” able to understand and comment usefully – or, as I’ve also experienced, narrows it to those whose work directly competes with one’s own.

Still, as Hayes notes, the far greater threat is the lack of openness and transparency resulting from scientists’ inability to communicate their findings in a way that others can understand and independently appraise.  Popular internet memes like, “I believe in science,” “stay in your lane,” and “if you disagree with a scientist, you are wrong,” are examples of the problem, not the solution.  Beliefs are the province of religion, politics and policy.  The challenge is to understand the strengths and limitations of the methodology and results of the process called science — especially given the growing inaccessibility of science, even to scientists. 

Continuing with “business as usual” — approaching science as a “faith” versus evidence-based activity — is a vanity we can ill afford.

Until next time,

Scott
Director, International Center for Clinical Excellence

Filed Under: behavioral health, evidence-based practice, Feedback Informed Treatment - FIT

Do We Learn from Our Clients? Yes, No, Maybe So …

March 2, 2021 By scottdm

When it comes to professional development, we therapists are remarkably consistent in opinion about what matters.  Regardless of experience level, theoretical preference, professional discipline, or gender identity, large, longitudinal studies show “learning from clients” is considered the most important and influential contributor (1, 2).  Said another way, we believe clinical experience leads to better, increasingly effective performance in the consulting room.

As difficult as it may be to accept, the evidence shows we are wrong.  Confidence, proficiency, even knowledge about clinical practice, may improve with time and experience, but not our outcomes.  Indeed, the largest study ever published on the topic — 6,500 clients treated by 170 practitioners whose results were tracked for up to 17 years — found the longer therapists were “in practice,” the less effective they became (3)!  Importantly, this result remained unchanged even after researchers controlled for several patient, caseload, and therapist-level characteristics known to have an impact on effectiveness.

Only two interpretations are possible, neither of them particularly reassuring.  Either we are not learning from our clients, or what we claim to be learning doesn’t improve our ability to help them.  Just to be clear, the problem is not a lack of will.   Therapists, research shows, devote considerable time, effort, and resources to professional development efforts (4).  Rather, it appears the way we’ve approached the subject is suspect.

Consider the following provocative, but evidence-based idea.  Most of the time, there simply is nothing to learn from a particular client about how to improve our craft.  Why?  Because so much of what affects the outcome of individual clients at any given moment in care is random — that is, either outside of our direct control or not part of a recurring pattern of therapist errors.  Extratherapeutic factors, as these influences are termed, contribute a whopping 87% to the outcome of treatment (5, 6).  Let that sink in.

The temptation to draw connections between our actions and particular therapeutic results is both strong and understandable.  We want to improve.  To that end, the first step we take — just as we counsel clients — is to examine our own thoughts and actions in an attempt to extract lessons for the future.  That’s fine, unless no causal connection exists between what we think and do, and the outcomes that follow … then, we might as well add “rubbing a rabbit’s foot” to our professional development plans.

So, what can we do?  Once more, the answer is as provocative as it is evidence-based.  Recognizing the large role randomness plays in the outcome of clinical work, therapists can achieve better results by improving their ability to respond in-the-moment to the individual and their unique and unpredictable set of circumstances.  Indeed, uber-researchers Stiles and Horvath note, research indicates, “Certain therapists are more effective than others … because [they are] appropriately responsive … providing each client with a different, individually tailored treatment” (7, p. 71).

What does improving responsiveness look like in real world clinical practice?  In a word, “feedback.”  A clever study by Jeb Brown and Chris Cazauvielh found, for example, average therapists who were more engaged with the feedback their clients provided — as measured by the number of times they logged into a computerized data gathering program to view their results — in time became more effective than their less engaged peers (8).  How much more effective you ask?  Close to 30% — not a bad “return on investment” for asking clients to answer a handful of simple questions and then responding to the information they provide!

If you haven’t already done so, click here to access and begin using two, free, standardized tools for gathering feedback from clients.  Next, join our free, online community to get the support and inspiration you need to act effectively and creatively on the feedback your clients provide — hundreds and hundreds of dedicated therapists working in diverse settings around the world support each other daily on the forum and are available regardless of time zone.

And here’s a bonus.  Collecting feedback, in time, provides the very data therapists need to be able to sort random from non-random in their clinical work, to reliably identify when they need to respond and when a true opportunity for learning exists.  Have you heard or read anything about “deliberate practice?”  Since first introducing the term to the field in our 2007 article, Supershrinks, it’s become a hot topic among researchers and trainers.  If you haven’t yet, chances are you will soon be seeing books and videos offering to teach how to use deliberate practice for mastering any number of treatment methods.  The promise, of course, is better outcomes.  Critically, however, if training is not targeted directly to patterns of action or inaction that reliably impact the effectiveness of your individual clinical performance in negative ways, such efforts will, like clinical experience in general, make little difference.

If you are already using standardized tools to gather feedback from clients, you might be interested in joining me and my colleague Dr. Daryl Chow for an upcoming, web-based workshop.  Delivered weekly in bite-sized bits, we’ll not only help you use your data to identify your specific learning edge, but also work with you to develop an individualized deliberate practice plan.  You go at your own pace, as access to the course and all training materials is available to you forever.  Interested?  Click here to read more or sign up.

OK, that’s it for now.  Until next time, wishes of health and safety, to you, your colleagues, and family.

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, Feedback Informed Treatment - FIT, FIT

Getting Beyond the “Good Idea” Phase in Evidence-based Practice

July 9, 2020 By scottdm


The year is 1846.  Hungarian-born physician Ignaz Semmelweis is in his first month of employment at Vienna General Hospital when he notices a troublingly high death rate among women giving birth in the obstetrics ward.  Medical science at the time attributes the problem to “miasma,” an invisible, poison gas believed responsible for a variety of illnesses.

Semmelweis has a different idea.  Having noticed midwives at the hospital have a death rate six times lower than physicians, he concludes the prevailing theory cannot possibly be correct.  The final breakthrough comes when a male colleague dies after puncturing his finger while performing an autopsy.  Reasoning that contact with corpses is somehow implicated in the higher death rate among physicians, he orders all to wash their hands prior to interacting with patients.   The rest is, as they say, history.  In no time, the mortality rate on the maternity ward plummets, dropping to the same level as that of midwives.

Nowadays, of course, handwashing is considered a “best practice.”  Decades of research show it to be the single most effective way to prevent the spread of infections.  And yet, nearly 200 years after Semmelweis’s life-saving discovery, compliance with hand hygiene among healthcare professionals remains shockingly low, with figures consistently ranging between 40 and 60% (1, 2).  Bottom line: a vast gulf exists between sound scientific practices and their implementation in real world settings.  Indeed, the evidence shows 70 to 95% of attempts to implement evidence-based strategies fail.

To the surprise of many, successful implementation depends less on disseminating “how to” information to practitioners than on establishing a culture supportive of new practices.  In one study of hand washing, for example, when Johns Hopkins Hospital administrators put policies and structures in place facilitating an open, collaborative, and transparent culture among healthcare staff (e.g., nurses, physicians, assistants), compliance rates soared and infections dropped to zero!

Feedback Informed Treatment (FIT) — soliciting and using formal client feedback to guide mental health service delivery — is another sound scientific practice.  Scores of randomized clinical trials and naturalistic studies show it improves outcomes while simultaneously reducing drop out and deterioration rates.  And while literally hundreds of thousands of practitioners and agencies have downloaded the Outcome and Session Rating Scales — my two, brief, feedback tools — since they were developed nearly 20 years ago, I know most will struggle to put them into practice in a consistent and effective way.

To be clear, the problem has nothing to do with motivation or training.  Most are enthusiastic to start.  Many invest significant time and money in training.  Rather, just as with hand washing, the real challenge is creating the open, collaborative, and transparent workplace culture necessary to sustain FIT in daily practice.  What exactly does such a culture look like and what actions can practitioners, supervisors, and managers take to facilitate its development?  That’s the subject of our latest “how to” video by ICCE Certified Trainer, Stacy Bancroft.  It’s packed with practical strategies tested in real world clinical settings.

We’ll cover the subject in even greater detail in the upcoming FIT Implementation Intensive — the only evidence-based training on implementing routine outcome monitoring available.

For the first time ever, the training will be held ONLINE, so you can learn from the comfort and safety of your home.  As with all ICCE events, we limit the number of participants to 40 to ensure each gets personalized attention.  For more information or to register, click here.

OK, that’s it for now.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Want to interact with FIT Practitioners around the world?  Join the conversation here.

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, FIT, Implementation

“My Mother Made Me Do It”: An Interview with Don Meichenbaum on the Origins of CBT (Plus: Tips for Surviving COVID-19)

May 26, 2020 By scottdm

Imagine having the distinction of being voted one of the top 10 most influential psychotherapists of the 20th Century.

Psychologist Don Meichenbaum is that person.  In his spare time, together with Aaron Beck and Marvin Goldfried, he created the most popular and researched method of psychotherapy in use today: cognitive-behavior therapy (CBT).

I got to know Don years ago as we shared a car ride, traveling to and from a training venue while teaching separate, week-long workshops in New England.  We laughed.  We talked.  We debated.  Fiercely.

We’ve been friends and colleagues ever since, recreating our car ride discussions in front of large audiences of therapists at each Evolution of Psychotherapy conference since 2005.

As Don approaches his 80th birthday, we look back on the development of CBT — what he thinks he got right and how his thinking has evolved over time.  Most trace the roots of CBT to various theorists in the field — Freud, Wolpe, and others.  Don is clear: his mother made him do it.  That’s right.  According to him, CBT got its start with Mrs. Meichenbaum.   I know you’ll be amused, but I also believe you’ll be surprised by why and how she contributed.

That said, my interview with Don isn’t merely a retrospective.  Still actively involved in the field, he shares important, evidence-based tips about trauma and resilience, applying the latest findings to the psychological and economic impacts of the coronavirus.  You’ll find the interview below.

All done for now,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, deliberate practice, Dodo Verdict, evidence-based practice, Feedback Informed Treatment - FIT, Therapeutic Relationship

Is THAT true? Judging Evidence by How Often it’s Repeated

October 22, 2019 By scottdm

I’m sure you’ve heard it repeated many times:

The term, “evidence-based practice” refers to specific treatment approaches which have been tested in research and found to be effective;

CBT is the most effective form of psychotherapy for anxiety and depression;

Neuroscience has added valuable insights to the practice of psychotherapy in addition to establishing the neurological basis for many mental illnesses;

Training in trauma-informed treatments (EMDR, Exposure, CRT) improves effectiveness;

Adding mindfulness-based interventions to psychotherapy improves the outcome of psychotherapy;

Clinical supervision and personal therapy enhance clinicians’ ability to engage and help.

Only one problem: none of the foregoing statements are true.  Taking each in turn:

  • As I related in detail in a blogpost some six years ago, evidence-based practice has nothing to do with specific treatment approaches.  The phrase is better thought of as a verb, not a noun.  According to the American Psychological Association and Institute of Medicine, there are three components: (1) the best evidence; in combination with (2) individual clinical expertise; and consistent with (3) patient values and expectations.  Any presenter who says otherwise is selling something.
  • CBT is certainly the most tested treatment approach — the one employed most often in randomized controlled trials (aka, RCT’s).  That said, studies which compare the approach with other methods find all therapeutic methods work equally well across a wide range of diagnoses and presenting complaints.
  • When it comes to neuroscience, a picture is apparently worth more than 1,000 studies.  On the lecture circuit, mental illness is routinely linked to the volume, structure, and function of the hippocampus and amygdala.  And yet, a recent review compared such claims to 19th-century phrenology.  More to the point, no studies show that so-called “neurologically-informed” treatment approaches improve outcome over and above traditional psychotherapy (thanks to editor Paul Fidalgo for making this normally paywalled article available).
  • When I surveyed clinicians recently about the most popular subjects at continuing education workshops, trauma came in first place.  Despite widespread belief to the contrary, there is no evidence that learning a “trauma-informed” approach improves a clinician’s effectiveness.  More, consistent with the second bullet point about CBT, such approaches have not been shown to produce better results than any other therapeutic method.
  • Next to trauma, the hottest topic on the lecture circuit is mindfulness.  What do the data say?  The latest meta-analysis found such interventions offer no advantage over other approaches.
  • The evidence clearly shows clinicians value supervision.  In large, longitudinal studies, it is consistently listed in the top three, most influential experiences for learning psychotherapy.   And yet, research fails to provide any evidence that supervision contributes to improved outcomes.

Are you surprised?  If so, you are not alone.

The evidence notwithstanding, the important question is why these beliefs persist.

According to the research, part of the answer is repetition.  Hear something often enough and eventually you adjust your “truth bar” — what you accept as established, settled fact.  Of course, advertisers, propagandists and politicians have known this for generations — paying big bucks to have their message repeated over and over.

For a long while, researchers believed the “illusory truth effect,” as it has been termed, was limited to ambiguous statements; that is, items not easily checked or open to more than one interpretation.  A recent study, however, shows repetition increases acceptance of false statements even when they are unambiguous and simple-to-verify.  Frightening to say the least.

A perfect example is the first item on the list above: evidence-based practice refers to specific treatment approaches which have been tested in research and found to be effective.  Type the term into Google, and one of the FIRST hits you’ll get makes clear the statement is false.  It, and other links, define the term as “a way of approaching decision making about clinical issues.”

Said another way, evidence-based practice is a mindset — a way of approaching our work that has nothing to do with adopting particular treatment protocols.

Still, belief persists.

What can a reasonable person do to avoid falling prey to such falsehoods?

It’s difficult, to be sure.  More, as busy as we are, and as much information as we are subjected to on a daily basis, the usual suggestions (e.g., read carefully, verify all facts independently, seek out counter evidence) will leave all but those with massive amounts of free time on their hands feeling overwhelmed.

And therein lies the clue — at least in part — for dealing with the “illusory truth effect.”  Bottom line: if  you try to assess each bit of information you encounter on a one-by-one basis, your chances of successfully sorting fact from fiction are low.  Indeed, it will be like trying to quench your thirst by drinking from a fire hydrant.

To increase your chances of success, you must step back from the flood, asking instead, “What must I unquestioningly believe (or take for granted) in order to accept a particular assertion as true?”  Then, once identified, ask yourself whether those assumptions are true.

Try it.  Go back to the statements at the beginning of this post with this larger question in mind.

(Hint: they all share a common philosophical and theoretical basis that, once identified, makes verification of the specific statements much easier)

If you guessed the “medical model” (or something close), you are on the right track.  All assume that helping relieve mental and emotional suffering is the same as fixing a broken arm or treating a bacterial infection — that is, to be successful a treatment containing the ingredients specifically remedial to the problem must be applied.

While mountains of research published over the last five decades document the effectiveness of the “talk therapies,” the same evidence conclusively shows “psychotherapy” does not work in the same way as medical treatments.  Unlike medicine, no specific technique in any particular therapeutic approach has ever proven essential for success.  None.  Any claim based on a similar assumptive base should, therefore, be considered suspect.

Voila!

I’ve been applying the same strategy in the work my team and I have done on using measures and feedback — first, to show that therapists needed to do more than ask for feedback if they wanted to improve their effectiveness; and second, to challenge traditional notions about why, when, and with whom, the process does and doesn’t work.   In these, and other instances, the result has been greater understanding and better outcomes.

So there you have it.  Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Registration for the Spring Feedback Informed Treatment intensives is now open.  In prior years, these two events have sold out several months in advance.  For more information or to register, click here.


Filed Under: Brain-based Research, evidence-based practice, Feedback Informed Treatment - FIT, PTSD

The Baader-Meinhof Effect in Trauma and Psychotherapy

August 28, 2019 By scottdm

Have you heard of the “Baader-Meinhof” effect?  If not, I’m positive you’ll soon be seeing evidence of it everywhere.

That’s what “it” is, by the way — that curious experience of seeing something you’ve just noticed, been told of, or thought about, cropping up all around you.  So …

You buy a car and suddenly it’s everywhere.  That outfit you thought was so unique?  Boom!  Everyone is sporting it.  How about the conversation you just had with your friend?  You know, the one that was so stimulating and interesting?  Now the subject is on everyone’s lips.

Depending on your level of self-esteem or degree of narcissism, Baader-Meinhof either leaves you feeling on the “cutting edge” of cultural trends or constantly lagging behind others.  For me, it’s generally the latter.  And recently, it’s a feeling that has been dogging me a fair bit.

The subject?  Trauma.

Whether simple or complex, ongoing or one-off, experienced as a child or adult, trauma is the topic du jour — a cause célèbre linked to an ever-growing list of problems, including depression, anxiety, dissociation, insomnia, headaches, stomachaches, asthma, stroke, diabetes, and most recently, ADHD.

Then, of course, there are the offers for training.  Is it just me or is trauma the subject of every other email solicitation, podcast announcement, and printed flyer?

The truth is our field has been here many times before.  Over the last 25 years, depression, multiple personality disorder, rapid cycling bipolar disorder II, attention deficit disorder, and borderline personality disorder have all burst on the scene, enjoyed a period of intense professional interest, and then receded into the background.

Available evidence makes clear this pattern — aha, whoa, and hmm what’s next? — is far from benign.  While identifying who is suffering and why is an important and noble endeavor, outcomes of mental healthcare have not improved over the last 40 years.  What’s more, no evidence exists that training in treatment modalities specific to any particular diagnosis — the popularly-termed “evidence-based” practices — improves effectiveness.  Problematically, studies do show undergoing such training increases practitioner perception of enhanced competence (Neimeyer, Taylor, & Cox, 2012).

which wayOn more than one occasion, I’ve witnessed advocates of particular treatment methods claim it’s unethical for a therapist to work with people who’ve experienced a trauma if they haven’t been trained in a specific “trauma-focused” approach.  It’s a curious statement — one which, given the evidence, can only be meant to bully and shame practitioners into going along with the crowd.  Data on the subject are clear and date back over a decade (1, 2, 3).  In case of any doubt, a brand new review of the research, published in the journal Psychotherapy, concludes, “There are no clinically meaningful differences between … treatment methods for trauma … [including those] designed intentionally to omit components [believed essential to] effective treatments (viz., exposure, cognitive restructuring, and focus on trauma)” (p. 393).

If you find the results reported in the preceding paragraph confusing or unbelievable, recalling the “Baader-Meinhof” effect can help.  It reminds us that despite its current popularity in professional discourse, trauma and its treatment is nothing new.  Truth is, therapists have always been helping those who’ve suffered its effects.  More, while the field’s outcomes have not improved over time, studies of real world practitioners show they generally achieve results on par with those obtained in studies of so-called evidence-based treatments (1, 2, 3).

Of course, none of the foregoing means nothing can be done to improve our effectiveness.  As my Swedish grandmother Stena used to say, “The room for improvement is the biggest one in our house!”

To get started, or fine tune your professional development efforts, listen in to an interview I did recently with Elizabeth Irias from Clearly Clinical (an approved provider of CEU’s for APA, NBCC, NAADAC, CCAPP, and CAMFT).  Available here: What Every Therapist Needs To Know: Lessons From The Research, Ep. 61.  

In it, I lay out several concrete, evidence-based steps practitioners can take to improve their therapeutic effectiveness.  It’s FREE, plus you can earn a FREE hour of CE credit.  Additionally, if you follow them on Instagram and leave a comment on this post, you’ll be automatically entered into a contest for one year of free, unlimited continuing education — the winner to be announced on October 31st, 2019.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Responsiveness is “Job One” in Becoming a More Effective Therapist

June 28, 2019 By scottdm

Look at the picture to the left.  What do you see?

In no time at all, most report a large face with deep set eyes and slight frown.  

Actually, once seen, it’s difficult, if not impossible to unsee.  Try it.  Look away momentarily then back again.

Once set in motion, the process tends to take on a life of its own, with many other items coming into focus. 

Do you see the ghostly hand?  Skeletonized spine and rib cage?  Other eyes and faces?  A clown hat?

From an evolutionary perspective, the tendency to find patterns — be it in clouds, polished marble surfaces, burn marks on toast, or tea leaves in a cup — is easy to understand.  For our earliest ancestors, seeing eyes in the underbrush, whether real or illusory, had obvious survival value.   Whether or not the perceptions or predictions were accurate mattered less than the consequences of being wrong.   

In short, we are hardwired to look for and find patterns.  And, as researchers Foster and Kokko (2008) point out, “natural selection … favour[s] strategies that make many incorrect causal associations in order to establish those that are essential for survival …” (p. 36).   

As proof of the tendency to draw incorrect causal associations, one need only look at the field’s most popular beliefs and practices, many of which, the evidence shows, have little or no relationship to outcome.  These include:

  • Training in or use of evidence-based treatment approaches;
  • Participation in clinical supervision;
  • Attending continuing education workshops;
  • Professional degree, licensure, or amount of clinical experience.

Alas, all of the above, and more, are mere “faces in the clouds” — compelling to be sure, but more accurately seen as indicators of our desire to improve than reliable pathways to better results.  They are not.

So, what, if anything, can we do to improve our effectiveness?

According to researchers Stiles and Horvath (2017), “Certain therapists are more effective than others … because [they are] appropriately responsive … providing each client with a different, individually tailored treatment” (p. 71).   

Sounds good, right?  The recommendation that one should “fit the therapy to the person” is as old as the profession.   The challenge, of course, is knowing when to respond as well as whether any of the myriad “in-the-moment” adjustments we make in a given therapy hour actually help. 

That is until now.

Consider a new study involving hundreds of real world therapists and more than 10,000 of their clients (Brown and Cazauvielh, 2019).  Intriguingly, the researchers found therapists who were more “engaged” in formally seeking and utilizing feedback from their clients regarding progress and quality of care — as measured by the frequency with which they logged in to a computerized outcome management system to check their results — were significantly more effective.

How much, you ask? 

Look at the graph above.  With an effect size difference of .4 σ, the feedback-informed practitioners (green curve) were on average more effective than 70% of their less engaged, less responsive peers (red curve).
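For readers who wonder how an effect size gets translated into a “more effective than X%” statement: the standard conversion is Cohen’s U3, the proportion of the comparison group falling below the mean of the higher-scoring group, assuming roughly normal score distributions.  A minimal sketch (no study data involved, just the conversion formula):

```python
from math import erf, sqrt

def cohens_u3(d):
    """Cohen's U3: the share of the comparison group scoring below the
    mean of the higher-scoring group, assuming normal distributions
    with equal variance (U3 = Phi(d), the standard normal CDF at d)."""
    return 0.5 * (1 + erf(d / sqrt(2)))

# Effect sizes in the .4-.5 sigma range place the average member of
# one group above roughly two-thirds to 70% of the other.
print(f"{cohens_u3(0.4):.0%}")  # 66%
print(f"{cohens_u3(0.5):.0%}")  # 69%
```

In other words, a standardized mean difference between .4 and .5 σ corresponds to the average feedback-informed therapist outperforming roughly two-thirds to 70% of the comparison group.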

Such findings confirm and extend results from another study I blogged about back in May documenting that feedback-informed treatment, or FIT, led to significant improvements in the quality and strength of the therapeutic alliance.

Why some choose to actively utilize feedback to inform and improve the quality and outcome of care, while others dutifully administer measurement scales but ignore the results, is currently unknown — that is, scientifically.  Could it really be that mysterious, however?  Many of us have exercise equipment stuffed into closets, bought in the moment but never used.  I suspect research will eventually point to the same factors responsible for implementation failures in other areas of life, both personal and professional (e.g., habit, lack of support, contextual barriers).

Until then, one thing we know helps is community.  Having like-minded colleagues to interact with and share experiences makes a difference when it comes to staying on track.  The International Center for Clinical Excellence is a free social network with thousands of members around the world.  Every day, practitioners, managers, and supervisors meet to address questions and provide support to one another in their efforts to implement feedback-informed treatment.  Click on the link to connect today.

Still wanting more?  Listen to my interview with Gay Barfield, Ph.D., a colleague of Carl Rogers, with whom she co-directed the Carl Rogers Institute for Peace – an organization that applied person-centered principles to real and potential international crisis situations, and for which Dr. Rogers was nominated for the Nobel Peace Prize in 1987.  I know her words and being will inspire you to seek and use client feedback on a more regular basis…

OK, done for now,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Registration for the Spring 2020 Advanced and Supervision Intensives is open!  Both events sold out months in advance this year.  Click on the icons below for more information or to register.
ICCE Advanced FIT Intensive 2020 Scott D MillerICCE Fit Supervision Intensive 2020 Scott D Miller

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

What does losing your keys have in common with the treatment of trauma?

April 24, 2019 By scottdm 9 Comments

Last week, I was preparing to leave the house and could not locate my keys.  Trust me when I say, it’s embarrassing to admit this is not an infrequent occurrence.

Logic and reason are always my first problem-solving choices.  That’s why I paused after looking in the kitchen drawer where I am supposed to keep them, along with my wallet and glasses, and finding it empty.  When did I last have them?  I drew a blank, and the “search” began.

Upstairs to the bedroom to check my pants pockets.  No.  Downstairs to the front closet to look in my coat.  No.  Back upstairs to the hamper in the laundry room.  No.  Once more, down the stairs to the kitchen hutch.  I sometimes leave them there.  This time, however, no.  I then headed back up the stairs to the master bathroom — my pace now a bit frantic — and rummaged through my clothing.  No.  They’ve gotta be on my office desk.  Down two flights of stairs to the basement.  Not there either.

In a fit of pique, I stormed over to the landing, and yelled at the top of my voice, “DID SOMEONE TAKE MY KEYS?” the accusation barely concealed.  Although my head knew this was nuts, my heart was certain it was true. They’ve hidden them!

“No,” my family members kindly reply, then ask, “Have you lost them again?”

“Arrgh,” I mutter under my breath.  And that’s when I do something that, in hindsight, makes no sense.  I wonder if you do the same?  Namely, I start the entire search over from the beginning — pants, coat, hamper, closet, hutch, office — often completing the exact same cycle several times.  Pants, coat, hamper, closet, hutch, office.  Pants, coat, hamper, closet, hutch, office.  Pants, coat, hamper, closet, hutch, office.

I can’t explain the compulsion, other than, by this point, I’ve generally lost my mind.  More, I can’t think of anything else to do.  My problem: I have somewhere to go!  The solution: keep looking (and it goes without saying, of course, in the same places).

(I did eventually locate my keys.  More on that in a moment.)

Yesterday, I was reminded of my experience while reading a newly released study on the treatment of trauma.  Bear with me as I explain.  Over a decade ago, I blogged about the U.S. Veterans Administration spending $25,000,000 aimed at “discover[ing] the best treatments for PTSD” despite a virtual mountain of evidence showing no difference in outcome between various therapy approaches.

Since that original post, the evidence documenting equivalence between competing methods has only increased (1, 2).  The data are absolutely clear.  Meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared, consistently find no difference in outcome between methods – importantly, whether the treatments are designated “trauma-focused” or not.   More, other highly specialized studies – known as dismantling research – fail to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis!  And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD; namely, exposure (1, 2).

The new study confirms and extends such findings.  Briefly, using data drawn from 39 V.A. treatment centers, researchers examined the relationship between outcome and the degree of adoption of two so-called “evidence-based,” trauma-informed psychotherapy approaches — prolonged exposure and cognitive processing therapy.  If method mattered, of course, then a greater degree of adoption would be associated with better results.  It was not.  As the authors of the study conclude, “programs that used prolonged exposure and cognitive processing therapy with most or all patients did not see greater reductions in PTSD or depression symptoms or alcohol use, compared with programs that did not use these evidence-based psychotherapies.”

So what happens now?  If history, and my own behavior whenever I lose my keys, is any indication, we’ll start the process of looking all over again.  Instead of accepting that the key is not where we’ve been looking, the field will continue its search.  After all, we have somewhere to go — and so, right back to the search for the next method, model, or treatment approach we go.

It’s worse than that, actually, as looking over and over in the same places keeps us from looking elsewhere.  That’s how I generally find my keys.  As simple and perhaps dumb as it sounds, I find them someplace I had not looked.

And where is the field not looking?  As Norcross and Wampold point out in an article published this week, “relationships and responsiveness” are the key ingredients in successful psychological care for people who are suffering as a result of traumatic experiences, going on to say that the emphasis on model or method is actually harmful, as it “squanders a vital opportunity to identify what actually heals.”

Improving our ability to connect with and respond effectively to the diverse people we meet in therapy is the focus of the Deliberate Practice Intensive, held this August in Chicago, Illinois.  Unlike training in protocol-driven treatments, studies to date show learning the skills taught at the workshop results in steady improvements in clinicians’ facilitative interpersonal skills and outcomes, commensurate with the rate of improvement seen in elite athletes.  For more information or to register, click here.

Until next time,

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence
FIT Deliberate Practice Aug 2019 - ICCEFIT Training of Trainers Aug 2019 - ICCEFIT Implementation Intensive Aug 2019 - ICCE

Filed Under: evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Therapeutic Relationship

Mountains and Molehills, or What the JFK Assassination and the Therapeutic Relationship Have in Common?

April 14, 2019 By scottdm 5 Comments

Over the last 10 days or so, I’ve been digesting a recently published article on the therapeutic alliance — reading, highlighting, tracking down references, rereading, and then discussing the reported findings with colleagues and a peer group of fellow researchers.  It’s what I do.

The particular study has been on my “to be read” pile for the better part of a year, maybe more.  Provocatively titled, “Is the Alliance Really Therapeutic?” it promises to answer the question in light of  “recent methodological advances.”

I know this will sound strange — at least at first — but throughout, I kept finding myself thinking of the assassination of the 35th President of the United States, John F. Kennedy.  Bear with me as I explain.

I personally remember the shock and grief of this event.  Although I was only six years old at the time, I have vivid memories of watching televised segments of the funeral procession down Pennsylvania Avenue under a grey, overcast, and rainy sky.  “Why?” my family and the Nation asked, and “How?”

You likely know the rest of the story.  Within hours, a suspect was arrested.  Two days later, he was murdered on live TV by a Dallas nightclub owner.  Ever since, events surrounding the assassination have been the subject of heated debate.  More than 2,000 books have been published, each offering a different theory of the event — a veritable “Who’s who” of suspects, including but not limited to the Soviet Union, the CIA, the Mafia, the Cuban government, and the Vice President of the United States.

Whatever you might believe, it’s hard to fault the majority of Americans — 61% in the most recent polls — who seriously doubt that the slight, unemployed, thrice court-martialed former marine acted alone.  To many, in fact, it’s simply inconceivable.  And that’s the point.  As investigative reporter Gerald Posner observed in his book Case Closed, “The notion that a misguided sociopath … wreaked such havoc [makes] the crime seem senseless” (p. xviii).  By contrast, concluding there was an elaborate plot involving important and powerful people imbues Kennedy’s death with meaning equal to his stature and significance in the mind of the public.

Said another way, maybe, just maybe, in our attempts to reconcile the facts with our feelings, we made a molehill into a mountain … which brings me back to the article about the therapeutic relationship.  The empirical evidence is clear: the quality of the alliance between client and clinician is one of the most potent and reliable predictors of successful psychotherapy.

According to the most recent and thorough review of the empirical literature:

  • Better alliances result in better outcomes when working with individuals, groups, couples and families, children and adolescents, and mandated/involuntary clients;
  • With regard to specific qualities, better outcomes result the more therapists:
    • Like, value, and care for the client (known as the “real” relationship, it contributes more to outcome than relational elements associated with the doing of therapy; Effect Size [E.S.] ~ .80);
    • Communicate their understanding of and compassion for the client (E.S. ~ .58);
    • Collaborate with the client regarding the focus (e.g., problem) and goals for treatment (E.S. ~ .49);
    • Present as accessible, approachable, and sincere (i.e., congruent and genuine; E.S. ~ .46);
    • Demonstrate respect, warmth, and positive regard (E.S. ~ .36);
    • Seek and utilize formal feedback regarding the client’s experience of progress and the therapeutic alliance (E.S. ~ .33 – .49);
    • Express emotions and generate hope and expectancy of positive results (E.S. = .56 & .36, respectively).

Sounds pretty straightforward and simple to me.  In a relatively efficient fashion (worldwide, the average number of visits is around five), we establish relationships with people that result in significant improvements in their well-being.  With regard to the latter, as reviewed many times on my blog, the average recipient of psychotherapy is better off than 80% of those with similar problems who do not receive treatment.

That said, is the relationship we offer people so astounding that it forever changes them?  Judging by the article’s dense language and near impenetrable statistical procedures, you’d assume so.  Yet ultimately, it fails to show as much, focusing instead on defining characteristics and qualities of clients amenable to a particular theoretical orientation rather than on the relationship.

Now, before you object, please note, I did not say relationships — in life or in therapy — were easy.  But therein lies the risk.  Challenging or difficult (e.g., a lone gunman taking out a beloved and powerful figure) is equated with complicated (i.e., must have been a conspiracy).  Add to that the tendency of professionals to imbue their interactions with clients with life-changing significance and voila! we are poised, as a field, to make mountains out of molehills.  Nowhere is this easier to see than in the language we use to describe our work.  We “treat,” have “countertransference reactions,” “repair ruptures,” and form “therapeutic alliances” rather than connect, experience frustration (or other feelings), and develop relationships.

It’s time to embrace what 50 years of evidence plainly shows: yes, we offer an important service, an opportunity for someone to feel understood, get support while going through a difficult period, solve problems, learn new and different ways of approaching life’s challenges, and, every once in a while (maybe one in a hundred), something more.  To do that, what’s needed is humility and a relentless focus on the fundamentals.  Given the history of our field, that alone will prove hard enough.

Embracing the evidence and focusing on fundamentals is precisely what we’ll be doing, by the way, at the Deliberate Practice Intensive this coming August in Chicago.  Join colleagues from around the world to learn how to use this simple (not easy) way of improving your effectiveness!  For more info, click here or on the banner below.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence
FIT Deliberate Practice Aug 2019 - ICCE

Filed Under: evidence-based practice, excellence, Therapeutic Relationship

It’s Time to Abandon the “Mean” in Psychotherapy Practice and Research

April 8, 2019 By scottdm 6 Comments

Recognize this?  Yours will likely look a bit different.  If you drive an expensive car, it may be motorized, with buttons automatically set to your preferences.  All, however, serve the same purpose.

Got it?

It’s the lever for adjusting your car seat.

I’m betting you’re not impressed.   Believe it or not though, this little device was once considered an amazing innovation — a piece of equipment so disruptive manufacturers balked at producing it, citing “engineering challenges” and fear of cost overruns.

For decades, seats in cars came in a fixed position.  You could not move them forward or back.  For that matter, the same was the case with seats in the cockpits of airplanes.  The result?  Many dead drivers and pilots.

The military actually spent loads of time and money during the 1940s and 50s looking for the source of the problem.  Why, they wondered, were so many planes crashing?  Investigators were baffled.

Every detail was checked and rechecked.  Electronic and mechanical systems tested out.  Pilot training was reviewed and deemed exceptional.  Systematic review of accidents ruled out human error.  Finally, the equipment was examined.  Nothing, it was determined, could have been more carefully designed — the size and shape of the seat, distance to the controls, even the shape of the helmet were based on measurements of 140 dimensions of 4,000 pilots (e.g., thumb length, hand size, waist circumference, crotch height, distance from eye to ear, etc.).

It was not until a young lieutenant, Gilbert S. Daniels, intervened that the problem was solved.  Turns out, despite the careful measurements, no pilot fit the average of the various dimensions used to design the cockpit and flight equipment.  Indeed, his study found, even when “the average” was defined as the middle 30 percent of the range of values on any given dimension, no actual pilot fell within that range across the board!

The conclusion was as obvious as it was radical.  Instead of fitting pilots into planes, planes needed to be designed to fit pilots.  Voila!  The adjustable seat was born.

Now, before you scoff — wisecracking, perhaps, about “military intelligence” being the worst kind of oxymoron — beware.  The very same “averagarianism” that gripped leaders and engineers in the armed services is still in full swing today in the field of mental health.

Perhaps the best example is the randomized controlled trial (RCT) — deemed the “gold standard” for identifying “best practices” by professional organizations, research scientists, and government regulators.

However sophisticated the statistical procedures may appear to the non-mathematically inclined, they are nothing more than mean comparisons.

Briefly, participants are recruited and then randomly assigned to one of two groups (e.g., Treatment A or a Control group; Treatment A or Treatment as Usual; and more rarely, Treatment A versus Treatment B).  A measure of some kind is administered to everyone in both groups at the beginning and the end of the study.   Should the mean response of one group prove statistically greater than the other, that particular treatment is deemed “empirically supported” and recommended for all.

The flaw in this logic is hopefully obvious: no individual fits the average.  More, as any researcher will tell you, the variability between individuals within groups is most often greater than variability between groups being compared.
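That within-group variability point is easy to demonstrate with a few lines of simulation.  The numbers below are illustrative only, not drawn from any actual trial: two “treatments” whose true means differ by .3 σ, with ordinary person-to-person spread of 1 σ within each group:

```python
import random
import statistics

random.seed(42)

# Hypothetical outcome scores for two treatment groups whose true
# means differ by 0.3 standard deviations -- illustrative only.
treatment_a = [random.gauss(0.0, 1.0) for _ in range(200)]
treatment_b = [random.gauss(0.3, 1.0) for _ in range(200)]

between = abs(statistics.mean(treatment_b) - statistics.mean(treatment_a))
within = statistics.stdev(treatment_a + treatment_b)

# The person-to-person spread within the groups dwarfs the
# difference between the group averages.
print(f"between-group mean difference: {between:.2f}")
print(f"within-group spread (SD):      {within:.2f}")
```

An RCT’s verdict rests entirely on `between`; the far larger `within` (how individual clients differ from their group’s mean) is treated as noise.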

Bottom line: instead of fitting people into treatments, mental health care should be made to fit the person.  Doing so is referred to, in the psychotherapy outcome literature, as responsiveness — that is, “doing the right thing at the right time with the right person.”  And while the subject receives far less attention in professional discourse and practice than diagnostic-specific treatment packages, evidence indicates it accounts for why “certain therapists are more effective than others…” (p. 71, Stiles & Horvath, 2017).

I’m guessing you’ll agree it’s time for the field to make an “adjustment lever” a core standard of therapeutic practice — I’ll bet it’s what you try to do with the people you care for anyway.

Turns out, a method exists that can aid in our efforts to adjust services to the individual client.  It involves routinely and formally soliciting feedback from the people we treat.  That said, not all feedback is created equal.  With a few notable exceptions, all routine outcome monitoring systems (ROM) in use today suffer from the same problem that dogs the rest of the field.  In particular, all generate feedback by comparing the individual client to an index of change based on an average of a large sample (e.g., reliable change index, median response of an entire sample).
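For the curious, the reliable change index (RCI) mentioned above illustrates the point: it is built entirely from group-level statistics, namely a measure’s normative standard deviation and test-retest reliability (Jacobson & Truax, 1991).  A sketch with hypothetical scale values, not those of any particular instrument:

```python
from math import sqrt

def reliable_change_index(pre, post, sd, reliability):
    """Jacobson & Truax (1991) reliable change index: observed change
    divided by the standard error of the difference, both derived from
    the measure's normative SD and its test-retest reliability."""
    se_measurement = sd * sqrt(1 - reliability)
    s_diff = sqrt(2 * se_measurement ** 2)
    return (post - pre) / s_diff

# Hypothetical scale: normative SD of 10, reliability of .80.
# A 13-point improvement clears the conventional 1.96 cutoff.
rci = reliable_change_index(pre=20, post=33, sd=10, reliability=0.80)
print(f"RCI = {rci:.2f}")  # RCI = 2.06
```

Note that the client’s own data enter only as a difference score; every other term is an average drawn from a large normative sample, which is exactly the “mean” logic this post questions.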

By contrast, three computerized outcome monitoring systems use cutting edge technology to provide feedback about progress and the quality of the therapeutic alliance unique to the individual client.  Together, they represent a small step in providing an evidence-based alternative to the “mean” approaches traditionally used in psychotherapy practice and research.

Interested in your thoughts,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

PS: Want to learn more?  Join me and colleagues from around the world for any or all of three intensive workshops being offered this August in Chicago, IL (USA).

  1. The FIT Implementation Intensive: the only workshop in the US to provide an in depth training in the evidence-based steps for successful integration of Feedback Informed Treatment (FIT) into your agency or clinical practice.
  2. The Training of Trainers: a 3-day workshop aimed at enhancing your presentation and training skills.
  3. The Deliberate Practice Intensive: a 2-day training on using deliberate practice to improve your clinical effectiveness.

Click on the title of the workshop for more information or to register.

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, FIT Software Tools

Routine Outcome Monitoring and Deliberate Practice: Fad or Phenomenon?

March 26, 2019 By scottdm 1 Comment

Would you believe me if I told you there was a way you could more than double the chances of helping your clients?  Probably not, eh?  As I’ve documented previously, claims abound regarding new methods for improving the outcome of psychotherapy.  It’s easy to grow cynical.

And yet, findings from a recent study document that when clinicians add this particular practice to their clinical work, clients are actually 2.5 times more likely to improve.  The impact is so significant, a review of research emerging from a task force of the American Psychological Association concluded, “it is among the most effective ways available to services to improve outcomes.”

That said, there’s a catch.

The simple nature of this “highly rated,” transtheoretical method belies a steep learning curve.  In truth, experience shows you can learn to do it — the mechanics — in a few minutes.

But therein lies the problem.  The empirical evidence makes clear successful implementation often takes several years.  This latter fact explains, in part, why surveys of American, Canadian, and Australian practitioners reveal that, while being aware of the method, they rarely integrate it into their work.

What exactly is the “it” being referred to?

Known by the acronym FIT, feedback-informed treatment involves using standardized measures to formally and routinely solicit feedback from clients regarding progress and the quality of the therapeutic relationship, and then using the resulting information to inform and improve care.

The ORS and SRS are examples of two simple feedback scales used in more than a dozen randomized controlled trials as well as vetted and deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration.  Together, the forms take less than 3 minutes to administer, score and interpret (less if one of the web-based scoring systems is used).

So why, you might wonder, would it take so long to put such tools into practice?

As paradoxical as it may sound, because FIT is really not about using measures — any more, say, than making a home is about erecting four walls and a roof.  While the structure is the most visible aspect — a symbol or representation — we all know it’s what’s inside that counts; namely, the people and their relationships.

On this score, it should come as no surprise that a newly released study has found a significant portion of the impact of FIT is brought about by the alliance or relationship between client and therapist.   It’s the first study in history to look at how the process actually works and I’m proud to have been involved.

Of course, all practitioners know relationship skills are not only central to effective psychotherapy, but require lifelong learning.  With time, and the right kind of support, using measurement tools facilitates both responsiveness to individual clients and continuous professional development.

Here’s the rub.  Whenever I respond to inquiries about the tools — in particular, suggesting it takes time for the effects to manifest, and that the biggest benefit lies beyond the measurement of alliance and outcome — interest in FIT almost always disappears.  “We already know how to do therapy,” a manager replied just over a week ago, “We only want the measures, and we like yours because they are the simplest and fastest to administer.”

Every so often, however, the reply is different.  “What do we have to do to make this work to improve the effectiveness of our clinical work and clinicians?” asked Thomas Haastrup, the Coordinator of Family Services for Odense Municipality in Denmark.  When I advised planning and patience, with an emphasis on helping individual practitioners learn to use feedback to foster professional development versus simply measuring their results, he followed through.  “We adopted the long view,” Thomas recounts, “and it’s paid off.”  Now in their 5th year, outcomes are improving at both the program and provider level across services aimed at helping adults, children, and families.

In addition to Manual 6 of the ICCE Treatment and Training Manuals, the ICCE Summer Intensives offer several opportunities to help you or your agency succeed in implementing FIT.  First, the 2-day FIT Implementation Intensive — the only workshop offering in-depth, evidence-based training in the steps for integrating FIT into clinical practice at the individual, agency, and system-of-care level.  Second, the Deliberate Practice Intensive — here you not only learn the steps, but begin to set up a professional development plan designed to enhance your effectiveness.

To help out, I’d like to offer a couple of discounts:

  1. Purchase Manual 6 at 70% off the regular price.  Click here to order.  Enter the word IMPLEMENTATION at checkout to receive the discount.  (If you want to purchase the entire set, I’m making them available at 50% off the usual price.  Enter IMPLEMENTATION2 at checkout.)
  2. Register for any or all of the summer intensives by May 1st and receive an additional discount off the early bird price.  Simply enter the code FITPROMOAPRIL at checkout.  Please note, registration MUST occur before May 1st.  Generally, we sell out 6 to 8 weeks in advance.

Feel free to email me with any questions.  In the meantime, as always, I’m interested in your thoughts about FIT and DP.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence
FIT Implementation Intensive Aug 2019 - ICCEFIT Training of Trainers Aug 2019 - ICCEFIT Deliberate Practice Aug 2019 - ICCE

Filed Under: evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT

Just how good are our theories about the causes and alleviation of mental and emotional suffering?

July 12, 2018 By scottdm 7 Comments

Does the name Barry Marshall ring a bell?

Probably not if you are a mental health professional.

For decades, the Australian physician was persona non grata in the field of medicine — or perhaps stated more accurately, persona sciocca, a fool.

Beginning in the early 1980s, Marshall, together with colleague Robin Warren, advanced the hypothesis that the bacterium Helicobacter pylori was at the root of most stomach ulcers.  That idea proved exceptionally controversial, flying, as it did, in the face of years of accepted practice and wisdom.  Ulcers caused by something as simple and obvious as a bacterial infection?  Bunk, the medical community responded, in the process lampooning the two researchers.  After all, everyone knew stress was the culprit.  They also knew the cure: certainly not antibiotics.  Rather, antacids, sedatives, therapy and, in the more chronic and serious cases, gastrectomy – a surgical procedure involving the removal of the lower third of the stomach.

The textbook used in my Introduction to Psychology course in my first year at University boldly declared, “Emotional stress is now known to relate to … such illnesses as … peptic ulcers” (p. 343, Psychology Today: An Introduction 4th Edition [Braun and Linder, 1979]).  The chapter on the subject was full of stories of people whose busy, emotionally demanding lives were clearly the cause of their stomach problems.  I dutifully marked all the relevant sections with my orange highlighter.  Later, in my clinical career, whenever I saw a person with an ulcer, I told them it was caused by stress and, not surprisingly, taught them “stress-management” strategies.

The only problem is the field, my textbook, and I were wrong, seriously wrong.  Stress was not responsible for stomach ulcers.  And no, antacids, sedatives, and psychotherapy were not the best treatments.  The problem could be cured much more efficiently and effectively with a standard course of antibiotics, many of which had been available since the 1960s!  In other words, the cure had been within reach all along.  Which raises the question: how could the field have missed it?  Not only that, even after conclusively demonstrating the link between ulcers and the H. pylori bacterium, the medical community continued to reject Marshall and Warren’s papers and evidence for another 10 years (Klein, 2013)!

So what was it?  Money, ignorance, hubris – even the normal process by which new scientific findings are disseminated – have all been offered as explanations.  The truth is, however, the field of medicine, and mental health in particular, has a weakness – to paraphrase Mark Twain – for “knowing with certainty things that just ain’t so.”

How about these?

  • Structural abnormalities in the ovaries cause neurosis in women;
  • Psychopathology results from unconscious dynamics originating in childhood;
  • Optimism, anger control, and the expression of emotion reduces the risk of developing cancer;
  • Negative thinking, “cognitive distortions,” and/or a chemical imbalance cause depression;
  • Some psychotherapeutic approaches are more effective than others.

The list is extensive and dates all the way back to the field’s founding nearly 150 years ago.  All, at one point or another, deeply believed and passionately advocated.  All false.

Looking back, it’s easy to see that we therapists are suckers for a good story – especially those that appear to offer scientific confirmation of strongly held cultural beliefs and values.

Nowadays, for example, it simply sounds better to say that our work targets, “abnormal activation patterns in dlPFC and amygdala that underlie the cognitive control and emotion regulation impairments observed in Major Depressive Disorder” than, “Hey, I listened attentively and offered some advice which seemed to help.”  And while there’s a mountain of evidence confirming the effectiveness of the latter, and virtually none supporting the former, proponents tell us it’s the former that “holds the promise” (Alvarez & Icoviello, 2015).

What to do?  Our present “neuroenchantment” notwithstanding, is there anything we practitioners and the field can learn from more than 150 years of theorizing?

Given our history, it’s easy to become cynical, either coming to doubt the very existence of Truth or assuming that it’s relative to a particular individual, time, or culture.  The other choice, it seems to me, is humility–not the feigned ignorance believed by some to be a demonstration of respect for individual differences–but rather what results when we closely and carefully examine our actual work.

Take empathy, for example.  Not only do most practitioners consider the ability to understand and share the feelings of another an “essential” clinical skill, it is one of the most frequently studied aspects of therapeutic work (Norcross, 2011).  And, research shows, therapists, when asked, generally give themselves high marks in this area (c.f., Orlinsky & Howard, 2005).  My colleagues, Daryl Chow, Sharon Lu, Geoffrey Tan, and I encountered the same degree of confidence when working with therapists in our recent Difficult Conversations in Therapy study.  Briefly, therapists were asked to respond empathically to a series of vignettes depicting challenging moments in psychotherapy (e.g., a client expressing anger at them).  Each time, their responses were rated on a standardized scale and individualized feedback for improvement was provided.

Now, here is the absolutely cool part.  The longer therapists participated in the research, the less confident but more demonstrably empathic they became!  The process is known as "The Illusion of Explanatory Depth."  Simply put, most of us feel we understand the world and our work with far greater detail, coherence, and depth than we really do.  Only when we force ourselves to grapple with the details does this illusion give way to reality, making personal and professional growth possible.

If this makes your head spin, get a cup of coffee and watch the video below in which Dr. Daryl Chow explains these intriguing results.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S. Marshall and Warren were awarded the Nobel Prize for their research in 2005.  Better late than never.


Filed Under: evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT

Finding Meaning in Psychotherapy Amidst the Trivia and Trivial

April 1, 2018 By scottdm 11 Comments

drowningI don’t know if you feel the same way I do.  Looking back, I’m pretty sure its been going on for a while, but somehow I didn’t notice.

Professional books and journals fill my bookshelves and are stacked around my desk.  I am, and always have been, a voracious–even compulsive–reader.  In the last couple of years, the volume of material has only increased–exponentially so, if I include digital items saved to my desktop.

Now, I’ll be the first to admit: it’s hard keeping up.  But that’s really not my problem.

The issue is: I feel like I’m drowning in trivia and the trivial.

How about you?  When was the last time you read something truly meaningful?

Increasingly, research journals are filled with studies that are either so narrow in focus as to defy any real-world application, or that simply revisit the same questions over and over.  Just how many more studies does the field need, for example, on cognitive-behavioral therapy?  A Google Scholar search on the subject, crossed with the term "randomized controlled trial," returns over a million hits!

In terms of translating research into practice, here’s a sample of articles sure to appeal to almost every clinician (and I didn’t have to “dig deep” to find these, by the way, as all were in journals neatly stacked on my desk):

  1. Psychodynamizing and Existentializing Cognitive-Behavioral Interventions
  2. How extraverted is honey.bunny77@hotmail.de? Inferring personality from e-mail addresses
  3. Satisfaction with life moderates the indirect effect of pain intensity on pain interference through pain catastrophizing

I didn’t make these up.  All are real articles in real research journals.  If you don’t believe me, click on the links to see for yourself.

Neologisms (#1) and cuteness (#2) aside, their titles often belie a mind-numbing banality in both scope and findings.  Take the last study.  Can you guess what it's about?  Satisfaction with life moderates the indirect effect of pain intensity on pain interference through pain catastrophizing.  And what findings do you think the authors spent 10 double-column, 10-point font pages relating in one of psychology's most prestigious journals?


“Satisfaction with life appears to buffer the effect of pain.”

 

Hmm.  Not particularly earth-shattering.  And, based on these results, what do the authors recommend?  Of course: “Further evaluation in longitudinal and interventional studies”  (I foresee another study on cognitive-behavioral therapy in the near future).

Purpose, belonging, sense-making, transcendence, and growth are the foundations of meaning.  Most of what shows up in my inbox, is taught at professional workshops, or appears in scholarly publications has, or engenders, none of those qualities.  The cost to our field and the people we serve is staggering.  Worldwide, rates of depression, anxiety, and suicide continue to rise.  At the same time, fewer and fewer people are seeking psychotherapy–34% fewer according to the latest findings.  It is important to note that even when extensive efforts are made, and significant financial support is provided, 85% of those who could benefit choose not to go.  I just can't believe it's because therapists haven't attended the latest "amygdala retraining" workshop, or do not know how to "psychodynamize" their cognitive-behavioral interventions.

This last week, I had the pleasure of interviewing Dr. Ben Caldwell.  His book, Saving Psychotherapy: Bringing the Talking Cure Back from the Brink, speaks directly to the challenges facing the field as well as steps every clinician can take to restore meaning to both research and practice.  Take a listen, and then be sure to leave a comment.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT

Ho, Ho, Oh No! Science, politics, and the demise of the National Registry of Evidence-based Programs and Practices

February 7, 2018 By scottdm 13 Comments

While you were celebrating the Holidays–shopping and spending time with family–government officials were busy at work.  On December 28th, the Substance Abuse and Mental Health Services Administration (SAMHSA) sent a formal termination notice to the National Registry of Evidence-based Programs and Practices (NREPP).

Ho, ho, oh no…!

Briefly, NREPP is "an evidence-based repository and review system designed to provide the public with reliable information on mental health and substance use interventions."  In plain English, it's a government website listing treatment approaches that have scientific support.  SAMHSA is the Federal agency overseeing behavioral health policy.

Back in November, I'd responded to a request from NREPP to update research on the Outcome and Session Rating Scales, two routine outcome measures currently listed on the registry website.  All was well until January 4th, when I received a short email stating that "no further review activities will occur" because the program was being ended "for the convenience of the government."

Like much that comes from our Nation's capital, the reason given for the actions taken depends entirely on whom you ask.  Democrats are blaming Trump.  Republicans, and the new SAMHSA director, blame the system, calling the registry not only flawed, but potentially dangerous.  As is typical nowadays, everyone is outraged!

As someone whose work was vetted by NREPP, I can personally vouch for the thoroughness of the process and the integrity of the reviewers.  No favors were sought and none were given.  More, while no one knows exactly what will happen in the future, I sincerely believe officials leading the change have the best of intentions.  What I am much less certain of is whether science will finally prevail in communicating “what works” in mental health and substance abuse to the public.

Bottom line: psychological approaches for alleviating human suffering are remarkably effective–on par or better than most medical treatments.  That said, NONE work like a medicine.

You have a bacterial infection?  Antibiotics are the solution.  A virus?  Well, you are just going to have to tough it out.  Take an aspirin and get some rest–and no, the brand you choose doesn't really matter.  Ask a friend or relative, and they likely have a favorite.  The truth is, however, it doesn't matter which one you take: Bayer, Ecotrin, Bufferin, Alka-Seltzer, Anacin, a hundred other names–they're all the same!

Four decades of research shows psychotherapy works much more like aspirin than an antibiotic.  Despite claims, its effects are neither targeted nor specific to particular diagnoses.  Ask a friend, relative, your therapist, or a workshop presenter, and they all have their favorite: CBT, IPT, DBT, PD, TFT, CRT, EMDR, four hundred additional names.  And yet, meta-analytic studies of head-to-head comparisons find no meaningful difference in outcome between approaches.

What does all this mean for the future of NREPP and SAMHSA?  The evidence makes clear that, when it comes to psychotherapy, any “list” of socially sanctioned approaches is not only unscientific, but seriously misleading.  Would it be too much to hope that future governmental efforts stop offering a marketplace for manufacturers of different brands of aspirin and focus instead on fostering evidence-based practice (EBP)?

Really, it’s not a bridge too far.  bridge too farIt merely means putting policies in place that help practitioners and agencies live up to the values inherent in the definition of EBP accepted by all professional organizations and regulatory bodies; namely, “the integration of the best available research with with clinical expertise in the context of patient characteristics, culture, and preferences” (pp. 273, APA, 2006).

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence 

P.S.: Every other year, the ICCE sponsors the “Training of Trainers” intensive.  Over three days, we focus on helping you become a world class presenter and trainer.  Join me, and colleagues from around the world for this transformational event.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, PCOMS

Clinical Practice Guidelines: Beneficial Development or Bad Therapy?

December 4, 2017 By scottdm 13 Comments

A couple of weeks ago, the American Psychological Association (APA) released clinical practice guidelines for the treatment of people diagnosed with post-traumatic stress disorder (PTSD).  "Developed over four years using a rigorous process," according to an article in the APA Monitor, these are the first of many additional recommendations of specific treatment methods for particular psychiatric diagnoses to be published by the organization.

Almost immediately, controversy broke out.  On the Psychology Today blog, Clinical Associate Professor Jonathan Shedler advised practitioners and patients to ignore the new guidelines, labeling them "bad therapy."  Within a week, Professors Dean McKay and Scott Lilienfeld responded, lauding the guidelines as a "significant advance for psychotherapy practice," while repeatedly accusing Shedler of committing logical fallacies and misrepresenting the evidence.

One thing I know for sure: coming in at just over 700 pages, few if any practitioners will ever read the complete guideline and supporting appendices.  Beyond length, the way the information is presented–especially the lack of hypertext for cross-referencing the studies cited–seriously compromises any straightforward effort to review and verify evidentiary claims.

If, as the old saying goes, "the devil is in the details," the level of mind-numbing minutiae contained in the official documents ensures he'll remain well-hidden, tempting all but the most compulsive to accept the headlines on faith.

Consider the question of whether certain treatment approaches are more effective than others.  Page 1 of the Executive Summary identifies differential efficacy as a "key question" to be addressed by the Guideline.  Ultimately, four specific approaches are strongly recommended, being deemed more effective than…wait for it…"relaxation."

My first thought is, “OK, curious comparison.”   Nevertheless, I read on.

Only by digging deep into the report, tracing the claim to the specific citations, and then using PsycNET, and another subscription service, to access the actual studies, is it possible to discover that in the vast majority of published trials reviewed, the four "strongly recommended" approaches were actually compared to nothing.  That's right, nothing.

In the few studies that did include relaxation, the structure of that particular “treatment” precluded sufferers from talking directly about their traumatic experiences.   At this point, my curiosity gave way to chagrin.  Is it any wonder the four other approaches proved more helpful?  What real-world practitioner would limit their work with someone suffering from PTSD to recording “a relaxation script” and telling their client to “listen to it for an hour each day?”

(By the way, it took me several hours to distill the information noted above from the official documentation–and I'm someone with a background in research, access to several online databases, a certain facility with search engines, and connections with a community of fellow researchers with whom I can consult.)

On the subject of what research shows works best in the treatment of PTSD, meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared consistently find no difference in outcome between methods–importantly, whether the treatments are designated "trauma-focused" or not.  Meanwhile, another highly specialized type of research–known as dismantling studies–fails to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis!  And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD: exposure (1, 2).

So, if the data I cite above are accurate–and freely available–how could the committee that created the Guideline come to such dramatically different conclusions?  In particular, why go to such great lengths to recommend particular approaches to the exclusion of others?

Be forewarned, you may find my next statement confusing.  The summary of studies contained in the Guideline and supportive appendices is absolutely accurate.  It is the interpretation of that body of research, however, that is in question.

More than anything else, the difference between the recommendations contained in the Guideline and the evidence I cite above, is attributable to a deep and longstanding rift in the body politic of the APA.  How otherwise is one able to reconcile advocating the use of particular approaches with APA’s official policy on psychotherapy recognizing, “different forms . . . typically produce relatively similar outcomes”?

Seeking to place the profession "on a comparable plane" with medicine, some within the organization–in particular, the leaders and membership of Division 12 (Clinical Psychology)–have long sought to create a psychological formulary.  In part, their argument goes, "Since medicine creates lists of recommended treatments and procedures, why not psychology?"

Here, the answer is simple and straightforward: because psychotherapy does not work like medicine.  As Jerome Frank observed long before the weight of evidence supported his view, effective psychological care is comprised of:

  • An emotionally-charged, confiding relationship with a helping person (e.g., a therapist);
  • A healing context or setting (e.g., clinic);
  • A rational, conceptual scheme, or myth that is congruent with the sufferer’s worldview and provides a plausible explanation for their difficulties (e.g., psychotherapy theories); and
  • Rituals and/or procedures consistent with the explanation (e.g., techniques).

The four attributes not only fit the evidence but explain why virtually all psychological approaches tested over the last 40 years work–even those labelled pseudoscience (e.g., EMDR) by Lilienfeld and other advocates of guidelines composed of "approved therapies."

That the profession could benefit from good guidelines goes without saying.  Healing the division within APA would be a good place to start.  Until then, encouraging practitioners to follow the organization’s own definition of evidence-based practice would suffice.  To wit, “Evidence based practice is the integration of the best available research with clinical expertise in the context of patient (sic) characteristics, culture, and preferences.”  Note the absence of any mention of specific treatment approaches.  Instead, consistent with Frank’s observations, and the preponderance of research findings, emphasis is placed on fitting care to the person.

How to do this?   The official statement continues, encouraging the “monitoring of patient (sic) progress . . . that may suggest the need to adjust the treatment.” Over the last decade, multiple systems have been developed for tracking engagement and progress in real time.  Our own system, known as Feedback Informed Treatment (FIT), is being applied by thousands of therapists around the world, with literally millions of clients. It is listed on the National Registry of Evidence based Programs and Practices.  More, when engagement and progress are tracked together with clients in real time, data to date document improvements in retention and outcome of mental health services regardless of the treatment method being used.

Until  next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

Filed Under: evidence-based practice, Practice Based Evidence, PTSD

That’s it. I’m done. It’s time for me to say goodbye.

November 2, 2017 By scottdm 3 Comments

Ending psychotherapy.

Whether formal or informal, planned or unplanned, it’s going to happen every time treatment is initiated.

What do we know about the subject?

Nearly 50% of people who start psychotherapy discontinue without warning.  At the time they end, half have experienced no meaningful improvement in their functioning or well-being.  On the other hand, of those who do continue, between 35 and 40% experience no measurable benefit despite continuous engagement in lengthy episodes of care.

Such findings remind me of the lyrics to the Beatles’ tune, “Hello Goodbye.”

“You say yes, I say no;

You say stop and I say go, go, go, oh no!

Hello, hello?

I don’t know why you say goodbye, I say hello.”

Here’s another key research finding: the most effective therapists have significantly more planned terminations.

In a recent study, Norcross, Zimmerman, Greenberg, and Swift identified eight core, pantheoretical processes associated with successful termination. You can read the article here.  Better yet, download and begin using the “termination checklist”–a simple, yet helpful method for ensuring you are putting these evidence-based principles to work with your clients.  Best of all, listen to my recent interview with John Norcross, Ph.D., the study’s first author, as we discuss how therapists can master this vitally important part of the therapeutic experience.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, Termination

Something BIG is Happening: The Demand for Routine Outcome Measurement from Funders

October 16, 2017 By scottdm 2 Comments


Something is happening.  Something big.

Downloads of the Outcome and Session Rating Scales have skyrocketed.

The number of emails I receive has been steadily increasing.

The subject?  Routine outcome measurement.  The questions:

  • Where can I get copies of your measures?

Paper and pencil versions are available on my website.

  • What is the cost?

Individual practitioners can access the tools for free.  Group licenses are available for agencies and healthcare systems.

  • Can we incorporate the tools into our electronic healthcare record (E.H.R.)?

Three companies are licensed and authorized to provide an "Application Programming Interface" (or API) for integrating the ORS, SRS, data aggregation formulas, and feedback signals directly into your E.H.R.  Detailed information and contact forms are available on a special page on my website.

  • What evidence is available for the validity, reliability, and effectiveness of the measures?

Always a good question!  Since the tools were published seventeen years ago, studies have multiplied.  Keeping up with the data can be challenging as the tools are being used in different settings and with diverse clinical populations around the world.

Each year, my colleague, New Zealand psychologist Eeuwe Schuckard, and I add the latest research to a comprehensive document available for free online, titled "Measures and Feedback."

Additionally, the tools have been vetted by an independent group of research scientists and are listed on the Substance Abuse and Mental Health Services Administration's National Registry of Evidence-based Programs and Practices.

  • How can I (or my agency) get started?

Although it may sound simple and straightforward, this is the hardest question to answer.  There is often a tone of urgency in the emails I receive: "We need to measure outcomes now," they say.

I nearly always respond with the same advice: the fastest way to succeed is to go slow.

We’ve learned a great deal about implementation over the last 10 years.  Getting practitioners to administer outcome measures is easy.  I can teach them how in less than three minutes.  Making the process more than just another, dreary “administrative task” takes time, patience, and persistence.

I caution against purchasing licenses, software, or onsite training.  Instead, I recommend taking time to explore.  It’s why the reviewers at SAMHSA gave our application for evidence-based status the highest ratings on “implementation support.”

To succeed, start with:

  1. Access a set of the ICCE Feedback Informed Treatment Manuals–the single most comprehensive resource available on using the ORS and SRS–and read and discuss them with colleagues.
  2. Connect with practitioners and agencies around the world who have already implemented.  It's easy: join the International Center for Clinical Excellence–the world's largest online community dedicated to routine outcome measurement.
  3. Send a few key staff–managers, supervisors, implementation team leaders–to the Feedback-Informed Treatment Intensives.  The Advanced and Supervision workshops are held back-to-back each March in Chicago.  Participants not only leave with a thorough understanding of the ORS and SRS, but are ready to kick off a successful implementation at home.  Sign up early, as the courses are limited to 35 participants and always sell out a couple of months in advance.

Feel free to email me with any questions.

Until next time,

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

 

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, FIT, FIT Software Tools, Implementation, PCOMS

More Deliberate Practice Resources…

May 30, 2017 By scottdm 1 Comment

Last week, I blogged about a free, online resource aimed at helping therapists improve their outcomes via deliberate practice.  As the web-based system was doubling as a randomized controlled trial (RCT), participants would not only be accessing a cutting-edge, evidence-based protocol but also contributing to the field's growing knowledge in this area.

To say interest was high doesn't even come close.  Within 45 minutes of the first social media blast, every available spot was filled–including those on the waiting list!  Lead researchers Daryl Chow and Sharon Lu managed to open a few additional spots, and yet demand still far exceeded supply.

I soon started getting emails.  Their content was strikingly similar–like the one I received from Kathy Hardie-Williams, an MFT from Forest Grove, Oregon, “I’m interested in deliberate practice!  Are there other materials, measures, tools that I can access and start using in my practice?”

The answer is, “YES!”  Here they are:


Resource #1.  Written for practicing therapists, supervisors, and supervisees, this volume brings together leading researchers and supervisors to teach practical methods for using deliberate practice to improve the effectiveness of psychotherapy.


Its twelve chapters, split into four sections, cover: (1) the science of expertise and professional development; (2) practical, evidence-based methods for tracking individual performance; (3) step-by-step applications for integrating deliberate practice into clinical practice and supervision; and (4) recommendations for making psychotherapist expertise development routine and expected.

“This book offers a challenge and a roadmap for addressing a fundamental issue in mental health: How can therapists improve and become experts?  Our goal,” the editors of this new volume state, ” is to bring the science of expertise to the field of mental health.  We do this by proposing a model for using the ‘Cycle of Excellence’ throughout therapists’ careers, from supervised training to independent practice.”

The book is due out June 1st.  Order today by clicking here: The Cycle of Excellence: Using Deliberate Practice to Improve Supervision and Training

Resource #2: The MyOutcomes E-Learning Platform

The folks at MyOutcomes have just added a new module on deliberate practice to their already extensive e-learning platform.  The information is cutting edge, and the production values simply fantastic.  More, MyOutcomes is offering free access to the system for the first 25 people who email to support@myoutcomes.com.  Put the words, “Responding to Scott’s Blogpost” in the subject line.  Meanwhile, here’s a taste of the course:

Resource #3:

Last but not least, the FIT Professional Development Intensive.  There simply is no better way to learn about deliberate practice than to attend the upcoming intensive in Chicago.  It's the only such training available.  Together with my colleague, Tony Rousmaniere–author of the new book, Deliberate Practice for Psychotherapists: A Guide to Improving Clinical Effectiveness–we will help you develop an individualized plan for improving your effectiveness based on the latest scientific evidence on expert performance.

We’ve got a few spaces left.  Those already registered are coming from spots all around globe, so you’ll be in good company.  Click here to register today!

OK, that’s it for now.  Wishing you all the best for the Summer,

Scott D. Miller, Ph.D.

 

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, Practice Based Evidence

Can you tell me what I’m supposed to do? A free deliberate practice resource

May 17, 2017 By scottdm 4 Comments

what can i doYou’ve read the studies.  Maybe you’ve even attended a training.

Deliberate practice is the key to improving your effectiveness as a psychotherapist.  Top performing therapists devote twice as much time to the process.  More, when it is employed purposefully and mindfully, the outcomes of average practitioners steadily rise over time.

But what exactly is a therapist supposed to practice in order to improve?  It’s a question that comes up within minutes of introducing the subject at my workshops–one my colleagues, Daryl Chow, Sharon Lu, Geoffrey Tan, and I have been working on answering.

Just over three years ago, we published preliminary results of a study documenting the impact of individualized feedback and rehearsal on mastering difficult conversations in psychotherapy. Therapists not only improved their ability to respond empathically under especially challenging circumstances, but were able to generalize what they learned to new and different situations.

Now, the entire deliberate practice program has gone online.  In light of the research, it's been both expanded and refined.  There's no need to leave the comfort of your home or office and, best of all, it's free.

Sign up to participate and you will learn what to practice as well as receive feedback specifically tailored to your professional development.  You will also be helping the field as the program is part of a research study on deliberate practice.

****UPDATE! UPDATE! UPDATE! UPDATE!****

Response to the above post has been overwhelming!  Despite the size of the study, all of the available spots filled within 45 minutes.  I've been corresponding with the chief researcher, Daryl Chow, Ph.D.  He tells me 15 more spots have just been added.  If you want to participate, click here.  The password is: DCT.  If all of the spots are taken, please add your name to the wait list.

One more opportunity: join me in Chicago for the upcoming two-day intensive on deliberate practice. For more information or to register, click on the icon below my name.  As with the online program, we are nearly full, so register today.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: deliberate practice, evidence-based practice, excellence, Feedback Informed Treatment - FIT

The Missing Link: Why 80% of People who could benefit will never see a Therapist

March 17, 2017 By scottdm 22 Comments


The facts are startling.  Despite being on the scene for close to 150 years, the field of mental health–and psychotherapy in particular–does not, and never has had, mass appeal.  Epidemiological studies consistently show, for example, that the majority of people who could benefit from seeing a therapist do not go.  And nowadays, fewer and fewer are turning to psychotherapy—33% less than did 20 years ago—and a staggering 56% either don't follow through after making contact or drop out after a single visit with a therapist (Gaudiano & Miller, 2012; Marshall, Quinn, & Child, 2016; Swift & Greenberg, 2014).

For those on the front line, conventional wisdom holds, the real problems lie outside the profession.  Insurance companies, in the best of circumstances, make access to and payment for psychotherapy an ordeal.  Another common refrain is that, nowadays, people are looking for a quick fix.  Big Pharma has obliged, using their deep pockets to market "progress in a pill."  No work required beyond opening wide and swallowing.  And finally, beyond instant gratification or corporate greed, many point to social disapproval or stigma as a continuing barrier to people getting the help they need.

For all that, were psychotherapy held in high regard, widely respected as the way to a better life, people would overcome their hesitancy, put up with any inconvenience, and choose it over any alternative.  They don’t.

WHY?  Mountains of research published over the last four decades document the effectiveness of the “talk therapies.”  With truly stunning results, and a minimal side effect profile compared to drugs, why do most never make it into a therapist’s office?

For the last two years, my longtime colleague, Mark Hubble and I, have explored this question.  We reviewed the research, consulted experts, and interviewed scores of potential consumers.

Our conclusion?  The secular constructions, reductionistic explanations, and pedestrian techniques that so characterize modern clinical practice fall flat, failing to offer people the kinds of experiences, depth of meaning, and sense of connection they want in their lives.

In sum, most choose not to go to psychotherapy because they are busy doing something else–consulting psychics, mediums, and other spiritual advisers–forms of healing that are a better fit with their beliefs, that "sing to their souls."

Actually, reports show more people attend and pay out of pocket for such services than see mental health practitioners!

More, as I noted in my plenary address at the last Evolution of Psychotherapy conference, our own large, Consumer Reports-style survey found people actually rated psychics and other “spiritual advisers” more helpful than therapists, physicians, and friends.  While certain to cause controversy, I strongly suggested the field could learn from and gain by joining the larger community of healers outside it.

Below — thanks to the Erickson Foundation — you can see that speech, as well as learn exactly what people felt these alternative healers provided that made a difference.  An even deeper dive is available in our article, “How Psychotherapy Lost its Magic.”  Thanks to the gracious folks at the Psychotherapy Networker for making it available for all to read, regardless of whether they subscribe to the magazine or not.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, Dodo Verdict, evidence-based practice, excellence, Feedback Informed Treatment - FIT, Therapeutic Relationship

The Asch Effect: The Impact of Conformity, Rebelliousness, and Ignorance in Research on Psychology and Psychotherapy

December 3, 2016 By scottdm 5 Comments

asch-1
Consider the photo above.  If you ever took Psych 101, it should be familiar.  The year is 1951.  The balding man on the right is psychologist Solomon Asch.  Gathered around the table are a group of undergraduates at Swarthmore College participating in a vision test.

Briefly, the procedure began with a cardboard printout displaying three lines of varying length.  A second card containing a single line was then produced, and participants were asked to state out loud which of the three it best matched.  Try it for yourself:
asch-2
Well, if you guessed “C,” you would have been the only one to do so, as all the other participants taking the test on that day chose “B.”  As you may recall, Asch was not really assessing vision.  He was investigating conformity.  All the participants save one were in on the experiment, instructed to choose an obviously incorrect answer in twelve out of eighteen total trials.

The results?

On average, a third of the people in the experiment went along with the majority, with seventy-five percent conforming in at least one trial.

Today, practitioners face similar pressures—to go along with the assertion that some treatment approaches are more effective than others.

Regulatory bodies, including the Substance Abuse and Mental Health Services Administration in the United States and the National Institute for Health and Care Excellence in the United Kingdom, are actually restricting services and limiting funding to approaches deemed “evidence based.”  The impact on publicly funded mental health and substance abuse treatment is massive.

So, in the spirit of Solomon Asch, consider the lines below: which treatment is most effective?

asch-3
If your eyes tell you that the outcomes between competing therapeutic approaches appear similar, you are right.  Indeed, one of the most robust findings in the research literature over the last 40 years is the lack of difference in outcome between psychotherapeutic approaches.

The key to changing matters is speaking up!  In the original Asch experiments, for example, the addition of even one dissenting voice reduced conformity by 80%!  And no, you don’t have to be a researcher to have an impact.  On this score, when, in a later study, a single dissenter wearing thick glasses—strongly suggestive of poor visual acuity—was added to the group, the likelihood of going along with the crowd was cut in half.
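The reductions reported in the Asch studies are simple proportions, and it can help to see the arithmetic spelled out.  A minimal sketch: the one-third baseline and the 80% and 50% reductions come from the studies cited above, but the derived rates are back-of-the-envelope illustrations, not figures Asch reported.

```python
# Back-of-the-envelope arithmetic for the Asch conformity figures.
# Baseline and reduction percentages are from the studies cited above;
# the derived rates are illustrative, not values reported by Asch.
baseline = 1 / 3                          # average conformity rate in the original trials
with_dissenter = baseline * (1 - 0.80)    # one dissenting voice cuts conformity by 80%
with_myopic_ally = baseline * (1 - 0.50)  # a thick-glasses dissenter cuts it in half

print(f"{baseline:.1%}, {with_dissenter:.1%}, {with_myopic_ally:.1%}")
```

In other words, even an obviously unqualified ally drops conformity from roughly one in three to roughly one in six.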

That said, knowing and understanding science does help.  In the 1980s, two researchers found that engineering, mathematics, and chemistry students conformed with the errant majority in only 1 out of 396 trials!

What does the research actually say about the effectiveness of competing treatment approaches?

You can find the most current review of the research in the latest issue of Psychotherapy Research–the premier outlet for studies about psychotherapy.  It’s just out, and I’m pleased and honored to have been part of a dedicated and esteemed group of scientists who are speaking up.  In it, we review and redo several recent meta-analyses purporting to show that one particular method is more effective than all others.  Can you guess which one?

The stakes are high, the consequences serious.  Existing guidelines and lists of approved therapies do not accurately represent the research about “what works” in treatment.  More, as I’ve blogged about before, they limit choice and effectiveness without improving outcomes–and in certain cases, lead to poorer results.  As official definitions make clear, “evidence-based practice” is NOT about applying particular approaches to specific diagnoses, but rather “integrating the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 273, APA, 2006).

Read it and speak up!

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Dodo Verdict, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Does practice make perfect?

August 30, 2016 By scottdm 1 Comment

“Practice does not make perfect,” my friend, award-winning magician Michael Ammar, is fond of saying.  “Rather,” he observes, “practice makes permanent.”

Thus, if we are not getting better as we work, our work will simply ensure that our current performance stays the same.

Now, before reading any further, watch a bit of the video below.  It features Diana Damrau singing one of the most recognizable arias from Mozart’s “The Magic Flute.”  Trust me, even if you don’t like opera, this performance will make the hair on your neck stand on end.

All right, now click on the video below (and listen for as long as you can stand it).

No, the latter recording is not a joke.  Neither is it a reject from one of the “GOT TALENT” shows so popular on TV at present.  It’s none other than Florence Foster Jenkins—an American socialite and heiress who was, according to Wikipedia, “a prominent musical cult figure…during the 1920s, ’30s, and ’40s.”

Florence Jenkins

How could that be, you may well wonder?  Her pitch is off, and there are so many mistakes in terms of rhythm, tempo, and phrasing in the first 30 seconds, one quickly loses count.

The problem?  In a word, feedback—more specifically, the lack of critical feedback extending over many years.

For most of her career, Lady Florence, as she liked to be called, performed to “select audiences” in her home or small clubs.  Attendance was invitation-only–and Jenkins controlled the list.  Her guests did their best not to let on what they thought of her abilities.  Instead, they smiled approvingly and applauded–loudly, as it turns out, in an attempt to cover the laughter that invariably accompanied her singing!

Everything changed in 1944 when Jenkins booked Carnegie Hall for a public performance.  This time, the applause was not sufficient to cover the laughter.  If anything, the laughter grew, as the audience treated the performance as a comedy act and encouraged the singer to continue the frivolity.

The reviews were scathing.  The next morning, the critic for the New York Sun wrote that Lady Florence “…can sing everything…except notes…”

The moral of the story?  Practice is not enough.  To improve, feedback is required.  Honest feedback–and the earlier in the process, the better.  Research indicates the keys to success are: (1) identifying performance objectives that lie just beyond an individual’s current level of reliable achievement; (2) immediate feedback; and (3) continuous effort aimed at gradually refining and improving one’s performance.

Here’s the parallel with psychotherapy: the evidence shows therapists’ self-appraisals are not a reliable measure of either the quality or the effectiveness of their work.  Indeed, a number of studies have found that, when asked, the least effective clinicians rate themselves on par with the most effective–a finding that could well be labelled the “Jenkins Paradox.”

Evidence-based measures exist which can help therapists avoid the bias inherent in self-assessment, as well as aid in the identification of small, achievable performance-improvement objectives.  A recent study documented, for example, how therapists can use such tools, in combination with immediate feedback and practice, to gradually yet significantly improve the quality and effectiveness of their therapeutic relationships–arguably the most important contributor to treatment outcome.  Using the tools to improve outcome and engagement in psychotherapy will be the focus of the upcoming ICCE webinar.  It’s a simple way to get started, or to refine your existing knowledge.  Learn more or register online by clicking here.

Let me leave you with one last video.  It’s an interview I did with Danish psychologist Susanne Bargmann.  Over the last 5 years, she’s applied the principles described here in an attempt to improve her effectiveness not only as a clinician, but also in music.  Recently, her efforts came to the attention of the folks at Freakonomics Radio.  As was the case when you listened to Diana Damrau, you’ll come away inspired!

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: CDOI, evidence-based practice, Feedback Informed Treatment - FIT, FIT, Top Performance

Making the Impossible, Possible: The Fragile Balance

July 25, 2016 By scottdm 1 Comment

TripAdvisor scores it #11 out of 45 things to do in Sausalito, California.  No, it’s not the iconic Golden Gate Bridge or Point Bonita Lighthouse.  Neither is it one of the fantastic local restaurants or bars.  What’s more, in what can be a fairly pricey area, this attraction won’t cost you a penny.  It’s the gravity-defying rock sculptures of local performance artist Bill Dan.

bill dan

So impossible does his work seem, most initially assume there’s a trick: magnets, hooks, cement, or prefabricated construction materials.

Dan 1

Watch for a while, get up close, and you’ll see there are no tricks or shortcuts.  Rather, Bill Dan has vision, a deep understanding of the materials he works with, and perseverance.  Three qualities that, it turns out, are essential in any implementation.

Over the last decade, I’ve had the pleasure of working with agencies and healthcare systems around the world as they work to implement Feedback-Informed Treatment (FIT).  Not long ago, FIT–that is, formally using measures of progress and the therapeutic alliance to guide care–was deemed an evidence-based practice by SAMHSA, and listed on the official NREPP website.  Research to date shows that FIT makes the impossible, possible, improving the effectiveness of behavioral health services, while simultaneously decreasing costs, deterioration and dropout rates.

Dan 2

 

Over the last decade, a number of treatment settings and healthcare systems have beaten the odds.  Together with insights gleaned from the field of Implementation Science, they are helping us understand what it takes to be successful.

One such group is Prairie Ridge, an integrated behavioral healthcare agency located in Mason City, Iowa.  Recently, I had the privilege of speaking with the clinical leadership and management team at this cutting-edge agency.

Click on the video below to listen in as they share the steps for successfully implementing FIT that have led to improved outcomes and satisfaction across an array of treatment programs, including residential, outpatient, mental health, and addictions.

Until next time,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Looking for a way to learn the principles and practice of Feedback Informed Treatment?  No need to leave home.  You can learn and earn CE’s at the ICCE Fall FIT Webinar.  Register today at: https://www.eventbrite.ie/e/fall-2016-feedback-informed-treatment-webinar-series-tickets-26431099129.


Filed Under: behavioral health, CDOI, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Why aren’t therapists talking about this?

June 20, 2016 By scottdm 8 Comments

Turns out, every year for the last several years, right around this time, I’ve done a post on the subject of deterioration in psychotherapy.  In June 2014, I was responding to yet another attention-grabbing story published in The Guardian, one of the U.K.’s largest daily newspapers.  “Misjudged counselling and therapy can be harmful,” the headline boldly asserted, citing results from “a major new analysis of outcomes.”  The article was long on warnings to the public, but short on details about the study.  In fact, there wasn’t anything about its size, scope, or design.  Emails to the researchers were never answered.  As of today, no results have appeared in print.

One year later, I was at it again—this time after seeing the biopic Love & Mercy, a film about the relationship between psychologist Eugene Landy and his famous client, Beach Boy Brian Wilson.  In a word, it was disturbing.  The psychologist did “24-hour-a-day” therapy, as he termed it, living full time with the singer-songwriter, keeping Wilson isolated from family and friends and on a steady dose of psychotropic drugs, while simultaneously taking ownership of Wilson’s songs and charging $430,000 in fees annually.  Eventually, the State of California intervened, forcing the psychologist to surrender his license to practice.  As egregious as this practitioner’s behavior was, the problem of deterioration in psychotherapy goes beyond the field’s “bad apples.”

Do some people in therapy get worse?  The answer is, most assuredly, “Yes.”  Research dating back several decades puts the figure at about 10% (Lambert, 2010).  Said another way, at termination, roughly one out of ten people are functioning more poorly than they were at the beginning of treatment.  Despite claims to the contrary (e.g., Lilienfeld, 2007), no psychotherapy approach tested in a clinical trial has ever been shown to reliably lead to or increase the chances of deterioration.  NONE.  Scary stories about dangerous psychological treatments are limited to a handful of fringe therapies–approaches that have never been vetted scientifically and which all but a few practitioners avoid.

So, what is the chief cause of deterioration in treatment?  Norwegian psychologist Jørgen A. Flor just completed a study on the subject.  We’ve been corresponding for a number of years as he worked on the project.  Given the limited information available, I was interested.

What he found may surprise you. Watch the video or click here to read his entire report (in Norwegian).  Be sure and leave a comment!

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence


Filed Under: Behavioral Health, CDOI, Conferences and Training, evidence-based practice

NERD ALERT: Determining IF, WHAT, and HOW Psychotherapy Works

May 5, 2016 By scottdm 12 Comments

Nerd

OK, this post may not be for everyone.  I’m hoping to “go beyond the headlines,” “dig deep,” and cover a subject essential to research on the effectiveness of psychotherapy. So, if you fit point #2 in the definition above, read on.

eysenck

It’s easy to forget the revolution that took place in the field of psychotherapy a mere 40 years ago.  At that time, the efficacy of psychotherapy was in serious question.  As I posted last week, psychologist Hans Eysenck (1952, 1961, 1966) had published a review of studies purporting to show that psychotherapy was not only ineffective, but potentially harmful.  Proponents of psychotherapy responded with their own reviews (c.f., Bergin, 1971).  Back and forth each side went, arguing their respective positions–that is, until Mary Lee Smith and Gene Glass (1977) published the first meta-analysis of psychotherapy outcome studies.

Their original analysis of 375 studies showed psychotherapy to be remarkably beneficial.  As I’ve said here, and frequently on my blog, they found that the average treated client was better off than 80% of people with similar problems who went untreated.
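That “80%” claim is just the percentile on the normal curve corresponding to the aggregate effect size.  A quick check, assuming a standard normal distribution of outcomes and an effect size of roughly 0.85 (the value commonly cited alongside the 80% figure; an assumption here, not a number taken from the post):

```python
from statistics import NormalDist

# An effect size d places the average treated client at the d-th z-score
# of the untreated distribution; the normal CDF converts that to a percentile.
effect_size = 0.85  # assumed aggregate effect size
percentile = NormalDist().cdf(effect_size)
print(f"{percentile:.0%}")  # prints 80%
```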

Eysenck and other critics (1978, 1984; Rachman and Wilson, 1980) immediately complained about the use of meta-analysis, using an argument still popular today; namely, that by including studies of varying (read: poor) quality, Smith and Glass OVERESTIMATED the effectiveness of psychotherapy.  Were such studies excluded, they contended, the results would most certainly be different, and behavior therapy—Eysenck’s preferred method—would once again prove superior.

For Smith and Glass, such claims were not a matter of polemics, but rather empirical questions serious scientists could test—with meta-analysis, of course.

So, what did they do?  Smith and Glass rated the quality of all the outcome studies using specific criteria and multiple raters.  And what did they find?  The better and more tightly controlled the studies were, the more effective psychotherapy proved to be.  Studies of low, medium, and high internal validity, for example, had effect sizes of .78, .78, and .88, respectively.  Other meta-analyses followed, using slightly different samples, with similar results: the tighter the study, the more effective psychotherapy proved to be.

Importantly, the figures reported by Smith and Glass have stood the test of time.  Indeed, the most recent meta-analyses provide estimates of the effectiveness of psychotherapy that are nearly identical to those generated in Smith and Glass’s original study.  More, use of their pioneering method has exploded, becoming THE standard method for aggregating and understanding results from studies in education, psychology, and medicine.
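The aggregation step at the heart of meta-analysis can be sketched in a few lines.  Below is a generic inverse-variance (fixed-effect) pooling with made-up effect sizes and variances; it illustrates the technique only and is not Smith and Glass’s actual data or computation:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic mean: weight each study by 1/variance,
    so more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical per-study effect sizes and sampling variances, for illustration only
effects = [0.78, 0.78, 0.88]
variances = [0.04, 0.09, 0.02]
print(round(pooled_effect(effects, variances), 2))
```

The pooled estimate lands between the individual study values, pulled toward the most precise study.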

sheldon kopp

As psychologist Sheldon Kopp (1973) was fond of saying, “All solutions breed new problems.”  Over the last two decades the number of meta-analyses of psychotherapy research has exploded.  In fact, there are now more meta-analyses than there were studies of psychotherapy at the time of Smith and Glass’s original research.  The result is that it’s become exceedingly challenging to understand and integrate information generated by such studies into a larger gestalt about the effectiveness of psychotherapy.

Last week, for example, I posted results from the original Smith and Glass study on Facebook and Twitter—in particular, their finding that better controlled studies resulted in higher effect sizes.   Immediately, a colleague responded, citing a new meta-analysis, “Usually, it’s the other way around…” and “More contemporary studies find that better methodology is associated with lower effect sizes.”

It’s a good idea to read this study closely.  If you just read the “headline”–“The Effects of Psychotherapy for Adult Depression Are Overestimated”–or skip the methods section and read the authors’ conclusions, you might be tempted to conclude that better-designed studies produce smaller effects (in this particular study, in the case of depression).  In fact, what the study actually says is that better-designed studies will find smaller differences when a manualized therapy is compared to a credible alternative!  Said another way, differences between a particular psychotherapy approach and an alternative (e.g., counseling, usual care, or placebo) are likely to be greater when the study is of poor quality.

What can we conclude?  Just because a study is more recent does not mean it’s better, or more informative.  The important question one must consider is, “What is being compared?”  For the most part, Smith and Glass analyzed studies in which psychotherapy was compared to no treatment.  The study cited by my colleague demonstrates what I, and others (e.g., Wampold, Imel, Lambert, Norcross, etc.), have long argued: few if any differences will be found between approaches.

The implications for research and practice are clear.  For therapists, find an approach that fits you and benefits your clients.  Make sure it works by routinely seeking feedback from those you serve.  For researchers, stop wasting time and precious resources on clinical trials.  Such studies, as Wampold and Imel so eloquently put it, “seemed not to have added much clinically or scientifically (other than to further reinforce the conclusion that there are no differences between treatments), [and come] at a cost…” (p. 268).

Until next time,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice

Improving the Odds: Implementing FIT in Care for Problem Gamblers and their Families

April 17, 2016 By scottdm 1 Comment

spiraling roulette

Quick Healthcare Quiz

What problem in the U.S. costs the government approximately $274 per adult annually?

If you guessed gambling, give yourself one point.  According to the latest research, nearly 6 million Americans have a serious gambling problem—a number that is on the rise.  One-third of the Nation’s adults visit a casino every year, losing, according to the latest figures, an estimated $100 billion.

Which problem is more common?  Substance abuse or problem gambling?

If you guessed the former, give yourself another point.  Problems related to alcohol and drug use are about 3.5 times more common than gambling problems.  At the same time, 281 times more funding is devoted to treating drug and alcohol problems.  In March 2014, the National Council on Problem Gambling reported that government-funded treatment was provided to less than one-quarter of one percent of those in need.

Does psychotherapy work for problem gambling?

If you answered “yes,” add one to your score.  Research not only indicates that psychological treatment approaches are effective, but that changes are maintained at follow up.  As with other presenting problems (e.g., anxiety and depression), more therapy is associated with better outcomes than less.

What is the key to successful treatment of problem gambling?

If you answered, “funding and getting people into treatment,” or some variation thereof, take away three points!

So, how many points do you have left?  If you are at or near zero, join the club.

Healthcare is obsessed with treatment.  A staggering 99% of resources are invested in interventions.  Said another way, practitioners and healthcare systems love solutions.  The problem is that research shows this investment, “does not result in positive implementation outcomes (changes in practitioner behavior) or intervention outcomes (benefits to consumers).”  Simply put, it’s not enough to know “what works.”  You have to be able to put “what works” to work.

BCRPGP

Enter the BC Responsible and Problem Gambling Program—an agency that provides free support and treatment services aimed at reducing and preventing the harmful impacts of excessive or uncontrolled gaming.  Clinicians working for the program not only sought to provide cutting-edge services, they wanted to know if they were effective and what they could do to continuously improve.

Five years ago, the organization adopted feedback-informed treatment (FIT)—routinely and formally seeking feedback from clients regarding the quality and outcome of services offered.    A host of studies documents that FIT improves retention in and outcome of psychotherapy.  Like all good ideas, however, the challenge of FIT is implementation.

Last week, I interviewed Michael Koo, the clinical coordinator of the BCRPGP.  Listen in as he discusses the principles and challenges of their successful implementation.  Learn also how the talented and devoted crew achieve outcomes on par with randomized controlled trials in an average of 7 visits while working with a culturally and clinically diverse clientele.

As you’ll hear, implementation is difficult, but doable.  More, you don’t have to reinvent the wheel or do it alone.  When FIT was reviewed and deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration in 2013, it received perfect scores for “implementation, training, support, and quality assurance” resources.  Regardless of the population you serve, you can:

  • Join a free, online, international community of nearly 10,000 like-minded professionals using FIT in diverse settings (www.iccexcellence.com).  Every day, members connect and share their knowledge and experience with each other;
  • Access a series of “how to” manuals and free, gap assessment tool (FRIFM) to aid in planning, guiding progress, and identifying common blind spots in implementation.
  • Attend the upcoming, 2-day FIT Implementation workshop.  Held once a year in August, this event provides an in-depth, evidence-based training based on the latest findings from the field of implementation science.

Come meet managers, supervisors, practitioners, and team leaders from around the world.  You will leave with the tools necessary to “put ‘what works’ to work.”

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, FIT, ICCE

Are you Better? Improving Effectiveness One Therapist at a Time

January 24, 2016 By scottdm 3 Comments

Greetings from snowy Sweden.  I’m in the beautiful city of Gothenburg this week, working with therapists and administrators on implementing Feedback-Informed Treatment (FIT).

I’m always impressed by the dedication of those who attend the intensive workshops.  More, I feel responsible for providing a training that not only results in mastery of the material, but also leads to better outcomes.

As commonsensical as it may seem to expect that training should foster better results, it’s not.  Consider a recent study out of the United Kingdom.  There, massive amounts of money have been spent over the last five years training clinicians to use cognitive behavioral therapy (CBT).  The expenditure is part of a well-intentioned government program aimed at improving access to effective mental health services.

Anyway, in the study, clinicians participated in a year-long “high-intensity” course that included more than 300 hours of training, supervision, and practice—a tremendous investment of time, money, and resources.  Competency in delivering CBT was assessed at regular intervals and shown to improve significantly throughout the training.

The only problem?  Training therapists in CBT did not result in better outcomes.

While one might hope such findings would cause the researchers to rethink the training program, they chose instead to question whether “patient outcome should … be used as a metric of competence…” (p. 27).  Said another way, doing treatment the right way was more important than whether it actually worked!  One is left to wonder whether the researchers would have reached a similar conclusion had the study gone the other way.  Most certainly, the headline would then have been, “Empirical Research Establishes Connection between Competence in CBT and Treatment Outcome!”

Attempts to improve the effectiveness of treatment via the creation of a psychological formulary—official lists of specific treatments for specific disorders—have persisted, and even intensified, despite consistent evidence that the methods clinicians use contribute little to outcome.  Indeed, neither clinicians’ competence in conducting specific types of therapy nor adherence to evidence-based protocols have been “found to be related to patient outcome and indeed . . . estimates of their effects [are] very close to zero” (p. 207, Webb, DeRubeis, & Barber, 2010).

So, what gives?

There are two reasons why such efforts have failed:

  • First, they do not focus on helping therapists develop the skills that account for the lion’s share of variability in treatment outcome.

Empathy, for example, has a greater impact on outcome than the combined effects of therapist competence, adherence to protocol, the specific ingredients within, and the differences between various treatment approaches.  Still, most efforts, like the present study, continue to focus on method.

  • Second, they ignore the extensive scientific literature on expertise and expert performance.

Here, research has identified a universal set of processes, and step-by-step directions, anyone can follow to improve performance within a particular discipline.  To improve, training must be highly individualized, focused on helping performers reach for objectives just beyond their current ability.

“Deliberate Practice,” as it has been termed, requires grit and determination.  “Nobody is allowed to stagnate,” said one clinician when asked to describe what it was like to work at a clinic that had implemented the steps, adding, “Nobody is allowed to stay put in their comfort zone.”  The therapist works at Stangehjelpa, a community mental health service located an hour north of Oslo, Norway.

The director of the agency is psychologist Birgit Valla, author of the visionary book, Further: How Mental Services Can Be Better.  Birgit is on a mission to improve outcomes—not by dictating the methods staff are allowed to use, but by focusing on their individual development.

It starts with measuring outcomes.  All therapists at Stangehjelpa know exactly how effective they are and, more importantly, when they are not helpful.  “It’s not about the measures,” Birgit is quick to point out.  “It’s about the therapist, and how the service can support that therapist getting better.”  She continues, “It’s like if you want to improve your time in the 100-meter race, you need a stopwatch.  It would be absurd to think, however, that the stopwatch is responsible for running faster.  Rather, it’s how one chooses to practice in relation to the results.”

Recently, researcher Siri Vikrem Austdal interviewed staff members at the clinic about their experience applying deliberate practice in their work.  Says one, “It is strenuous.  You are expected to deliver all the time.  But being part of a team that dares to have new thoughts, and that wants something, is really exciting.  I need it, or I would grow tired.  It is demanding, but then there is that feeling we experience when we have climbed a mountain top.  Then it is all worthwhile.  It is incredibly fun to make new discoveries and experience mastery.”

So, what exactly are they doing at Stangehjelpa?

You can read the entire report here (Norwegian), or the abbreviated version here (English).  Why not join Birgit this summer at the FIT Professional Development training in Chicago, Illinois?  Together with Dr. Daryl Chow, we will teach participants how to incorporate deliberate practice into an individualized, evidence-based plan for continuous professional development.  Click on the icon below to reserve your spot now.

FitProfessionalDevelopmentIntensiveAug8th2016

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

 

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, FIT, ICCE, Top Performance

The Benefits of Doubt: New Research Sheds Light on Becoming a More Effective Therapist

December 9, 2015 By scottdm 6 Comments

puzzle

These are exciting times for clinicians.  The pieces of the puzzle are falling into place.  Researchers are finally beginning to understand what it takes to improve the effectiveness of psychotherapy.  Shifting away from the failed, decades-long focus on methods and diagnosis, attention has now turned to the individual practitioner.

Such efforts have already shown a host of factors to be largely ineffective in promoting therapist growth and development, including:

  • Supervision;
  • Continuing education;
  • Therapist personal therapy;
  • Clinical experience; and
  • Access to feedback

In October, I blogged about the largest longitudinal study of therapists ever conducted.  Despite having access to ongoing, formal feedback from their clients for as long as 17 years, the clinicians in the study not only failed to improve; on average, their outcomes actually deteriorated year after year.

Such findings contrast sharply with the beliefs of practitioners who, according to other studies, see themselves as improving with time and experience.  In fact, findings on all the practices noted above contrast sharply with beliefs commonly held in the field:

  • Supervision is at the top of the list of experiences therapists cite as central to their growth and development as practitioners. By contrast, the latest review of the literature concludes, “We do not seem to be any more able to say now (as opposed to 30 years ago) that psychotherapy supervision contributes to patient outcome” (p. 235, Watkins 2011).
  • Although most clinicians value participating in continuing education activities—and licensure requirements mandate attendance—there is no evidence such events engender learning, competence, or improved outcomes. Neither do they appear to decrease disciplinary actions, ethical infractions, or inspire confidence on the part of therapy consumers.
  • Therapist personal therapy is ranked as one of the most important sources of professional development despite there being no evidence it contributes to better performance as a clinician and some studies documenting a negative impact on outcome (see Orlinsky & Ronnestad, 2005);

If any of the research I’ve cited surprises you, or gives you pause, there is hope!  Really. Read on.


Doubt, it turns out, is a good thing: a quality possessed by the field's most effective practitioners.  Possessing it is one of the clues to continuous professional development.  Indeed, several studies now confirm that "healthy self-criticism," or professional self-doubt (PSD), is a strong predictor of both alliance and outcome in psychotherapy (Nissen-Lie et al., 2015).

To be sure, I'm not talking about assuming a "not-knowing" stance in therapeutic interactions.  Although much has been written about having a "beginner's mind," research by Nissen-Lie and others makes clear that nothing is gained by feigned or willful ignorance.

Rather, the issue is about taking the time to reflect on our work.  Doing so on a routine basis prevents us from falling prey to the "over-claiming error"—a type of confidence that comes from the feeling we've seen something before when, in fact, we have not.

The "over-claiming error" is subtle, unconscious, and fantastically easy to succumb to and elicit.  In a very clever series of experiments, for example, researchers asked people a series of questions designed to engender a feeling of either knowledge and expertise or ignorance.  Being made to feel more knowledgeable, in turn, led people to act less open-mindedly and feel justified in being dogmatic.  Most importantly, it caused them to falsely claim to know more about the subject than they did, including "knowing" things the researchers simply made up!

In essence, feeling like an expert actually makes it difficult to separate what we do and do not know.  Interestingly, people with the most knowledge in a particular domain (e.g., psychotherapy) are at the greatest risk.  Researchers term the phenomenon, “The ‘Earned Dogmatism’ Effect.”

What to do?  The practices of highly effective therapists provide some clues:

  1. Adopt an "error-centric" mindset. Take time to reflect on your work, looking for and then examining moments that do not go well. One simple way to prevent over-claiming is to routinely measure the outcome of your work.  Don't rely on your judgment alone; use a simple measure like the ORS to separate your facts from your fictions.
  2. Think like a scientist. Actively seek disconfirmation rather than confirmation of your beliefs and practices.  Therapy can be a vague and ambiguous process—two conditions which dramatically increase the risk of over-claiming.  Seeking out a community of peers and a coach to review your work can be helpful in this regard.  No need to leave your home or office.  Join colleagues in a worldwide virtual community at: iccexcellence.com.
  3. Seek formal feedback from clients. Interestingly, research shows that highly effective therapists are surprised more often by what their clients say than average clinicians who, it seems, "have heard it all before."  If you haven't been surprised in a while, ask your clients to provide feedback about your work via a simple tool like the SRS.  You'll be amazed by what you've missed.
  4. Attend the 2016 Professional Development Intensive this summer in Chicago. At this small-group, intensive training, you will learn the latest evidence-based steps for unlocking your potential as a therapist.

Best wishes for the Holidays.  As always, please leave a comment.

Scott

Scott D. Miller, Ph.D.
International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Top Performance

Swedish National Audit Office concludes: When all you have is CBT, mental health suffers

November 10, 2015 By scottdm 15 Comments


“The One-Sided Focus on CBT is Damaging Swedish Mental Health”

That's the headline from one of Sweden's largest daily newspapers for Monday, November 9th.  Professor Gunnar Bohman, together with psychotherapist colleagues Eva Mari Eneroth Säll and Marie-Louise Ögren, was responding to a report released last week by the Swedish National Audit Office (NAO).

Back in May 2012, I wrote about Sweden’s massive investment in cognitive behavioral therapy (CBT).  The idea was simple: address rising rates of disability due to mental illness by training clinicians in CBT.  At the time, a mere two billion Swedish crowns had been spent.

Now, several years and nearly 7 billion crowns later, the NAO audited the program.  Briefly, it found:

  • The widespread adoption of the method had no effect whatsoever on the outcomes of people disabled by depression and anxiety;
  • A significant number of people who were not disabled at the time they were treated with CBT became disabled thereafter, increasing the amount of time they spent on disability; and
  • Nearly a quarter of people treated with CBT dropped out.

The Swedish NAO concludes, “Steering towards specific treatment methods has been ineffective in achieving the objective.”


How, you might reasonably ask, could anyone think that restricting choice would improve outcomes?  It was 1966 when psychologist Abraham Maslow famously observed, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail" (The Psychology of Science, p. 15).  Still, many countries and professional organizations are charting a similar path today.

The choice is baffling, given the lack of evidence for differential efficacy among psychotherapeutic approaches. Consider a study I blogged about in April 2013.  It was conducted in Sweden at 13 different public health outpatient clinics over a three year period.  Consistent with 40 years of evidence, the researchers found that psychotherapy was remarkably effective regardless of the type of treatment offered!

So, what is the key to improving outcome?

As Bohman, Säll and Ögren point out in their article in Svenska Dagbladet, “offering choice…on the basis of patients’ problems, preferences and needs.”

The NAO report makes one additional recommendation: systematic measurement and follow-up.

As readers of this blog know, ensuring that services both fit the consumer and are effective is what Feedback-Informed Treatment (FIT) is all about.  More than 20 randomized clinical trials show that this transtheoretical process improves retention and outcome.  Indeed, in 2013, FIT was deemed evidence-based by the Substance Abuse and Mental Health Services Administration.

Learn more by joining the International Center for Clinical Excellence–a free, web-based community of practitioners dedicated to improving the quality and effectiveness of clinical work.   Better yet, join colleagues from around the world at our upcoming March intensive trainings in Chicago!  Register soon as both the Advanced Intensive and FIT Supervision Courses are already more than half subscribed.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT

Do Psychotherapists Improve with Time and Experience?

October 27, 2015 By scottdm 14 Comments

The practice known as "routine outcome measurement," or ROM, is resulting in the publication of some of the biggest and most clinically relevant psychotherapy studies in history.  Freed from the limits of the randomized clinical trial, and the accompanying obsession with manuals and methods, researchers are finally able to examine what happens in real-world clinical practice.

A few weeks ago, I blogged about the largest study of psychotherapy ever published.  More than 1,400 therapists participated.  The progress of over 26,000 people (aged 16-95) treated over a 12-year period in primary care settings in the UK was tracked on an ongoing basis via ROM.  The results?  In an average of 8 visits, 60% of those treated by this diverse group of practitioners achieved both reliable and clinically significant change—results on par with tightly controlled RCTs.  The study is a stunning confirmation of the effectiveness of psychotherapy.
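For readers curious what "reliable and clinically significant change" actually means, it is typically operationalized using the standard Jacobson-Truax criteria.  Here is a minimal Python sketch of those two tests; the numbers are illustrative, made up for the example rather than drawn from the study itself:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax Reliable Change Index (RCI).

    |RCI| > 1.96 means the pre-to-post difference exceeds what
    measurement error alone would produce 95% of the time."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * se_measurement  # standard error of a difference score
    return (post - pre) / s_diff

def clinical_cutoff(mean_clin, sd_clin, mean_func, sd_func):
    """Cutoff c: the score a client must cross to be statistically
    closer to the 'functional' population than the 'clinical' one."""
    return (sd_clin * mean_func + sd_func * mean_clin) / (sd_clin + sd_func)

# Illustrative (made-up) numbers on a 0-40 outcome scale:
rci = reliable_change_index(pre=18.0, post=28.0, sd_pre=7.0, reliability=0.85)
c = clinical_cutoff(mean_clin=19.0, sd_clin=7.0, mean_func=28.0, sd_func=6.0)

reliable = abs(rci) > 1.96                       # change beyond measurement error
clinically_significant = reliable and 28.0 > c   # and the cutoff was crossed
```

In this framework, a client counts as "recovered" only when both conditions hold: the change is larger than measurement error, and the post-treatment score crosses into the functional range.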

This week, another mega-study was accepted for publication in the Journal of Counseling Psychology.  Once more, ROM was involved.  In this one, researchers Goldberg, Rousmaniere, Miller, Whipple, Nielsen, Hoyt, and Wampold examined a large, naturalistic data set that included the outcomes of 6,500 clients treated by 170 practitioners whose results had been tracked for an average of 5 years.

Their question?

Do therapists become more effective with time and experience?

Their answer?  No.

For readers of this blog, such findings will not be particularly newsworthy.  As I've frequently pointed out, experience has never proven to be a significant predictor of effectiveness.

What might be a bit surprising is that the study found clinicians’ outcomes actually worsened with time and experience.  That’s right.  On average, the longer a therapist practiced, the less effective they became!  Importantly, this finding remained even when controlling for several patient-level, caseload-level, and therapist-level characteristics, as well as when excluding several types of outliers.

Such findings are noteworthy for a number of reasons, but chiefly because they contrast sharply with results from other, equally large studies documenting that therapists see themselves as continuously developing in both knowledge and ability over the course of their careers.   To be sure, the drop in performance reported by Goldberg and colleagues wasn't steep.  Rather, the pattern was a slow, inexorable decline from year to year.

Where, one can wonder, does the disconnect come from?  How can therapists' assessments of themselves and their work be so at odds with the facts?  Especially considering that, in the study by Goldberg and colleagues, participating clinicians had ongoing access to data regarding their effectiveness (or lack thereof) on a real-time basis!  Even in the study I blogged about previously—the largest in history, in which the outcomes of psychotherapy were shown to be quite positive—a staggering 40% of people treated experienced little or no change whatsoever.  How can such findings be reconciled with others indicating that clinicians routinely overestimate their effectiveness by 65%?

Turns out, the boundary between "belief in the process" and "denial of reality" is remarkably fuzzy.  Hope is a significant contributor to outcome—accounting for as much as 30% of the variance in results.  At the same time, it becomes toxic when actual outcomes are distorted in a manner that causes practitioners to miss important opportunities to grow and develop—not to mention help more clients.  Recall the studies documenting that top-performing therapists evince more of what researchers term "professional self-doubt."  Said another way, they are less likely to see progress where none exists and more likely to value outcomes over therapeutic process.

What's more, unlike their more average counterparts, highly effective practitioners actually become more effective with time and experience.  In the article below, my colleagues and I at the International Center for Clinical Excellence identify several evidence-based steps any practitioner can follow to match such results.

Let me know your thoughts.

Until next time,

Scott

Scott D. Miller, Ph.D.
Registration is now open for our March Intensives in Chicago.  Join colleagues from around the world for the FIT Advanced and the FIT Supervision workshops.

Do therapists improve (preprint)
The outcome of psychotherapy: yesterday, today, and tomorrow (Psychotherapy; Miller, Hubble, Chow, & Seidel, 2013)

 

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance Tagged With: excellence, outcome rating scale, psychotherapy

